Updates from: 01/07/2021 04:06:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/javascript-and-page-layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/javascript-and-page-layout.md
@@ -20,8 +20,6 @@ zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-[!INCLUDE [active-directory-b2c-public-preview](../../includes/active-directory-b2c-public-preview.md)]
- ::: zone pivot="b2c-custom-policy" [!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
@@ -239,4 +237,4 @@ In the code, replace `termsOfUseUrl` with the link to your terms of use agreemen
## Next steps
-Find more information about how you can customize the user interface of your applications in [Customize the user interface of your application in Azure Active Directory B2C](customize-ui-with-html.md).
\ No newline at end of file
+Find more information about how you can customize the user interface of your applications in [Customize the user interface of your application in Azure Active Directory B2C](customize-ui-with-html.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-resilient-controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-resilient-controls.md
@@ -32,8 +32,8 @@ This document provides guidance on strategies an organization should adopt to pr
There are four key takeaways in this document: * Avoid administrator lockout by using emergency access accounts.
-* Implement MFA using Conditional Access (CA) rather than per-user MFA.
-* Mitigate user lockout by using multiple Conditional Access (CA) controls.
+* Implement MFA using Conditional Access rather than per-user MFA.
+* Mitigate user lockout by using multiple Conditional Access controls.
* Mitigate user lockout by provisioning multiple authentication methods or equivalents for each user. ## Before a disruption
@@ -132,9 +132,9 @@ This naming standard for the contingency policies will be as follows:
EMnnn - ENABLE IN EMERGENCY: [Disruption][i/n] - [Apps] - [Controls] [Conditions] ```
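The naming standard above is mechanical enough to generate programmatically. The following sketch composes a policy name from its parts; the helper and all values are illustrative, not part of any Azure AD API:

```python
# Sketch: compose a contingency policy name following the
# "EMnnn - ENABLE IN EMERGENCY: [Disruption][i/n] - [Apps] - [Controls] [Conditions]"
# convention described above. The function and its inputs are illustrative.

def contingency_policy_name(seq, disruption, index, total, apps, controls, conditions=""):
    name = (f"EM{seq:03d} - ENABLE IN EMERGENCY: "
            f"{disruption}[{index}/{total}] - {apps} - {controls} {conditions}")
    return name.strip()  # drop trailing space when no conditions are given

print(contingency_policy_name(1, "MFA Disruption", 1, 4,
                              "Exchange SharePoint", "Require Hybrid Azure AD Join"))
# -> EM001 - ENABLE IN EMERGENCY: MFA Disruption[1/4] - Exchange SharePoint - Require Hybrid Azure AD Join
```

A consistent, generated name makes it easy to find and enable all policies for one disruption at once.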
-The following example: **Example A - Contingency CA policy to restore Access to mission-critical Collaboration Apps**, is a typical corporate contingency. In this scenario, the organization typically requires MFA for all Exchange Online and SharePoint Online access, and the disruption in this case is the MFA provider for the customer has an outage (whether Azure AD MFA, on-premises MFA provider, or third-party MFA). This policy mitigates this outage by allowing specific targeted users access to these apps from trusted Windows devices only when they are accessing the app from their trusted corporate network. It will also exclude emergency accounts and core administrators from these restrictions. The targeted users will then gain access to Exchange Online and SharePoint Online, while other users will still not have access to the apps due to the outage. This example will require a named network location **CorpNetwork** and a security group **ContingencyAccess** with the target users, a group named **CoreAdmins** with the core administrators, and a group named **EmergencyAccess** with the emergency access accounts. The contingency requires four policies to provide the desired access.
+The following example, **Example A - Contingency Conditional Access policy to restore Access to mission-critical Collaboration Apps**, is a typical corporate contingency. In this scenario, the organization typically requires MFA for all Exchange Online and SharePoint Online access, and the disruption in this case is that the MFA provider for the customer has an outage (whether Azure AD MFA, on-premises MFA provider, or third-party MFA). This policy mitigates this outage by allowing specific targeted users access to these apps from trusted Windows devices only when they are accessing the app from their trusted corporate network. It will also exclude emergency accounts and core administrators from these restrictions. The targeted users will then gain access to Exchange Online and SharePoint Online, while other users will still not have access to the apps due to the outage. This example will require a named network location **CorpNetwork** and a security group **ContingencyAccess** with the target users, a group named **CoreAdmins** with the core administrators, and a group named **EmergencyAccess** with the emergency access accounts. The contingency requires four policies to provide the desired access.
-**Example A - Contingency CA policies to restore Access to mission-critical Collaboration Apps:**
+**Example A - Contingency Conditional Access policies to restore Access to mission-critical Collaboration Apps:**
* Policy 1: Require Domain Joined devices for Exchange and SharePoint * Name: EM001 - ENABLE IN EMERGENCY: MFA Disruption[1/4] - Exchange SharePoint - Require Hybrid Azure AD Join
@@ -174,9 +174,9 @@ Order of activation:
5. Enable Policy 4: Verify all users cannot get Exchange Online from the native mail applications on mobile devices. 6. Disable the existing MFA policy for SharePoint Online and Exchange Online.
-In this next example, **Example B - Contingency CA policies to allow mobile access to Salesforce**, a business app's access is restored. In this scenario, the customer typically requires their sales employees access to Salesforce (configured for single-sign on with Azure AD) from mobile devices to only be allowed from compliant devices. The disruption in this case is that there is an issue with evaluating device compliance and the outage is happening at a sensitive time where the sales team needs access to Salesforce to close deals. These contingency policies will grant critical users access to Salesforce from a mobile device so that they can continue to close deals and not disrupt the business. In this example, **SalesforceContingency** contains all the Sales employees who need to retain access and **SalesAdmins** contains necessary admins of Salesforce.
+In this next example, **Example B - Contingency Conditional Access policies to allow mobile access to Salesforce**, a business app's access is restored. In this scenario, the customer typically requires their sales employees access to Salesforce (configured for single-sign on with Azure AD) from mobile devices to only be allowed from compliant devices. The disruption in this case is that there is an issue with evaluating device compliance and the outage is happening at a sensitive time where the sales team needs access to Salesforce to close deals. These contingency policies will grant critical users access to Salesforce from a mobile device so that they can continue to close deals and not disrupt the business. In this example, **SalesforceContingency** contains all the Sales employees who need to retain access and **SalesAdmins** contains necessary admins of Salesforce.
-**Example B - Contingency CA policies:**
+**Example B - Contingency Conditional Access policies:**
* Policy 1: Block everyone not in the SalesContingency team * Name: EM001 - ENABLE IN EMERGENCY: Device Compliance Disruption[1/2] - Salesforce - Block All users except SalesforceContingency
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md
@@ -117,7 +117,7 @@ To configure Conditional Access policies for sign-in frequency and persistent br
1. Select **Security**, then **Conditional Access**. 1. Configure a policy using the recommended session management options detailed in this article.
-To review token lifetimes, [use Azure AD PowerShell to query any Azure AD policies](../develop/configure-token-lifetimes.md#prerequisites). Disable any policies that you have in place.
+To review token lifetimes, [use Azure AD PowerShell to query any Azure AD policies](../develop/configure-token-lifetimes.md#get-started). Disable any policies that you have in place.
If more than one setting is enabled in your tenant, we recommend updating your settings based on the licensing available for you. For example, if you have Azure AD Premium licenses, you should only use the Conditional Access policy of *Sign-in Frequency* and *Persistent browser session*. If you have Microsoft 365 apps or Azure AD free licenses, you should use the *Remain signed-in?* configuration.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-report-only https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-report-only.md
@@ -27,7 +27,7 @@ Report-only mode is a new Conditional Access policy state that allows administra
- Customers with an Azure Monitor subscription can monitor the impact of their Conditional Access policies using the Conditional Access insights workbook. > [!WARNING]
-> Policies in report-only mode that require compliant devices may prompt users on Mac, iOS, and Android to select a device certificate during policy evaluation, even though device compliance is not enforced. These prompts may repeat until the device is made compliant. To prevent end users from receiving prompts during sign-in, exclude device platforms Mac, iOS and Android from report-only policies that perform device compliance checks. Note that report-only mode is not applicable for CA policies with "User Actions" scope.
+> Policies in report-only mode that require compliant devices may prompt users on Mac, iOS, and Android to select a device certificate during policy evaluation, even though device compliance is not enforced. These prompts may repeat until the device is made compliant. To prevent end users from receiving prompts during sign-in, exclude device platforms Mac, iOS and Android from report-only policies that perform device compliance checks. Note that report-only mode is not applicable for Conditional Access policies with "User Actions" scope.
![Report-only tab in Azure AD sign-in log](./media/concept-conditional-access-report-only/report-only-detail-in-sign-in-log.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
@@ -58,7 +58,7 @@ Some may choose to use device state instead of location in step 6 above:
> [!WARNING] > If you use device state as a condition in your policy this may impact guest users in the directory. [Report-only mode](concept-conditional-access-report-only.md) can help determine the impact of policy decisions.
-> Note that report-only mode is not applicable for CA policies with "User Actions" scope.
+> Note that report-only mode is not applicable for Conditional Access policies with "User Actions" scope.
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/plan-conditional-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/plan-conditional-access.md
@@ -21,7 +21,7 @@ Planning your Conditional Access deployment is critical to achieving your organi
In a mobile-first, cloud-first world, your users access your organization's resources from anywhere using a variety of devices and apps. As a result, focusing on who can access a resource is no longer enough. You also need to consider where the user is, the device being used, the resource being accessed, and more.
-Azure Active Directory (Azure AD) Conditional Access (CA) analyzes signals such as user, device, and location to automate decisions and enforce organizational access policies for resources. You can use CA policies to apply access controls like Multi-Factor Authentication (MFA). CA policies allow you to prompt users for MFA when needed for security, and stay out of users' way when not needed.
+Azure Active Directory (Azure AD) Conditional Access analyzes signals such as user, device, and location to automate decisions and enforce organizational access policies for resources. You can use Conditional Access policies to apply access controls like Multi-Factor Authentication (MFA). Conditional Access policies allow you to prompt users for MFA when needed for security, and stay out of users' way when not needed.
![Conditional Access overview](./media/plan-conditional-access/conditional-access-overview-how-it-works.png)
@@ -35,7 +35,7 @@ Before you begin, make sure you understand how [Conditional Access](overview.md)
The benefits of deploying Conditional Access are:
-* Increase productivity. Only interrupt users with a sign-in condition like MFA when one or more signals warrants it. CA policies allow you to control when users are prompted for MFA, when access is blocked, and when they must use a trusted device.
+* Increase productivity. Only interrupt users with a sign-in condition like MFA when one or more signals warrants it. Conditional Access policies allow you to control when users are prompted for MFA, when access is blocked, and when they must use a trusted device.
* Manage risk. Automating risk assessment with policy conditions means risky sign-ins are at once identified and remediated or blocked. Coupling Conditional Access with [Identity Protection](../identity-protection/overview-identity-protection.md), which detects anomalies and suspicious events, allows you to target when access to resources is blocked or gated.
@@ -68,7 +68,7 @@ The following resources may be useful as you learn about Conditional Access:
* [What is Conditional Access?](https://youtu.be/ffMAw2IVO7A) * [How to deploy Conditional Access?](https://youtu.be/c_izIRNJNuk)
-* [How to roll out CA policies to end users?](https://youtu.be/0_Fze7Zpyvc)
+* [How to roll out Conditional Access policies to end users?](https://youtu.be/0_Fze7Zpyvc)
* [Conditional Access with device controls](https://youtu.be/NcONUf-jeS4) * [Conditional Access with Azure AD MFA](https://youtu.be/Tbc-SU97G-w) * [Conditional Access in Enterprise Mobility + Security](https://youtu.be/A7IrxAH87wc)
@@ -99,13 +99,13 @@ When new policies are ready for your environment, deploy them in phases in the p
> [!NOTE] > For rolling out new policies not specific to administrators, exclude all administrators. This ensures that administrators can still access the policy and make changes or revoke it if there's a significant impact. Always validate the policy with smaller user groups before you apply to all users.
-## Understand CA policy components
-CA policies are if-then statements: If an assignment is met, then apply these access controls.
+## Understand Conditional Access policy components
+Conditional Access policies are if-then statements: If an assignment is met, then apply these access controls.
-When configuring CA policies, conditions are called *assignments*. CA policies allow you to enforce access controls on your organization's apps based on certain assignments.
+When configuring Conditional Access policies, conditions are called *assignments*. Conditional Access policies allow you to enforce access controls on your organization's apps based on certain assignments.
-For more information, see [Building a CA policy](concept-conditional-access-policies.md).
+For more information, see [Building a Conditional Access policy](concept-conditional-access-policies.md).
![create policy screen](media/plan-conditional-access/create-policy.png)
@@ -192,7 +192,7 @@ It's important to understand how access tokens are issued.
![Access token issuance diagram](media/plan-conditional-access/CA-policy-token-issuance.png) > [!NOTE]
-> If no assignment is required, and no CA policy is in effect, that the default behavior is to issue an access token.
+> If no assignment is required, and no Conditional Access policy is in effect, then the default behavior is to issue an access token.
For example, consider a policy where:
@@ -204,14 +204,14 @@ If a user not in Group 1 attempts to access the app no "if" condition is met
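The if-then evaluation and the default token issuance described above can be sketched as a small model. This is an illustrative simplification in Python, not the actual Azure AD evaluation engine:

```python
# Sketch: Conditional Access policies as if-then statements.
# If a policy's assignment matches the sign-in, its controls apply;
# if no policy matches, the default behavior is to issue the token.
# (Illustrative model only -- not the real Azure AD engine.)

def evaluate(sign_in, policies):
    required_controls = set()
    for policy in policies:
        if policy["assignment"](sign_in):                 # the "if" part
            required_controls |= set(policy["controls"])  # the "then" part
    # No matching assignment -> no controls -> token issued by default.
    return required_controls or {"issue token"}

policies = [{
    "assignment": lambda s: "Group 1" in s["groups"],  # scoped to Group 1
    "controls": ["require MFA"],
}]

print(evaluate({"groups": ["Group 1"]}, policies))  # {'require MFA'}
print(evaluate({"groups": ["Group 2"]}, policies))  # {'issue token'}
```

The second call shows the pitfall the note warns about: a user outside the policy's scope is issued a token with no controls applied at all.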
The Conditional Access framework provides you with great configuration flexibility. However, great flexibility also means you should carefully review each configuration policy before releasing it to avoid undesirable results.
-### Apply CA policies to every app
+### Apply Conditional Access policies to every app
-Access tokens are by default issued if a CA Policy condition does not trigger an access control. Ensure that every app has at least one conditional access policy applied
+Access tokens are by default issued if a Conditional Access policy condition does not trigger an access control. Ensure that every app has at least one Conditional Access policy applied.
> [!IMPORTANT] > Be very careful when using block and all apps in a single policy. This could lock admins out of the Azure Administration Portal, and exclusions cannot be configured for important endpoints such as Microsoft Graph.
-### Minimize the number of CA policies
+### Minimize the number of Conditional Access policies
Creating a policy for each app isn't efficient and leads to difficult administration. Conditional Access will only apply the first 195 policies per user. We recommend that you analyze your apps and group them into applications that have the same resource requirements for the same users. For example, if all Microsoft 365 apps or all HR apps have the same requirements for the same users, create a single policy and include all of the apps to which it applies.
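Grouping apps by shared requirements, as recommended above, amounts to bucketing apps by their (users, controls) pair. A minimal sketch with illustrative app names:

```python
# Sketch: group apps that share the same target users and controls so a
# single policy can cover all of them. App names and requirements are
# illustrative, not real tenant data.
from collections import defaultdict

apps = [
    ("Exchange Online",   ("All users", "Require MFA")),
    ("SharePoint Online", ("All users", "Require MFA")),
    ("HR Portal",         ("HR staff",  "Require compliant device")),
]

groups = defaultdict(list)
for app, requirements in apps:
    groups[requirements].append(app)

for (users, controls), members in groups.items():
    print(f"{controls} for {users}: {', '.join(members)}")
```

Here three apps collapse into two policies, keeping the total well under the 195-policy evaluation limit.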
@@ -225,9 +225,9 @@ If you misconfigure a policy, it can lock the organizations out of the Azure por
* Create an on-premises security group and sync it to Azure AD. The security group should contain your dedicated policy administration account.
- * EXEMPT this security group form all CA policies.
+ * EXEMPT this security group from all Conditional Access policies.
- * When a service outage occurs, add your other administrators to the on-premises group as appropriate, and force a sync. This animates their exemption to CA policies.
+ * When a service outage occurs, add your other administrators to the on-premises group as appropriate, and force a sync. This activates their exemption to Conditional Access policies.
### Set up report-only mode
@@ -237,9 +237,9 @@ It can be difficult to predict the number and names of users affected by common
* requiring MFA * implementing sign-in risk policies
-[Report-only mode ](concept-conditional-access-report-only.md) allows administrators to evaluate the impact of CA policies before enabling them in their environment.
+[Report-only mode](concept-conditional-access-report-only.md) allows administrators to evaluate the impact of Conditional Access policies before enabling them in their environment.
-Learn how to [configure report-only mode on a CA policy](howto-conditional-access-insights-reporting.md).
+Learn how to [configure report-only mode on a Conditional Access policy](howto-conditional-access-insights-reporting.md).
### Plan for disruption
@@ -292,7 +292,7 @@ When new policies are ready for your environment, make sure that you review each
## Common policies
-When planning your CA policy solution, assess whether you need to create policies to achieve the following outcomes.
+When planning your Conditional Access policy solution, assess whether you need to create policies to achieve the following outcomes.
* [Require MFA](#require-mfa) * [Respond to potentially compromised accounts](#respond-to-potentially-compromised-accounts)
@@ -316,7 +316,7 @@ Common use cases to require MFA access:
### Respond to potentially compromised accounts
-With CA policies, you can implement automated responses to sign-ins by potentially compromised identities. The probability that an account is compromised is expressed in the form of risk levels. There are two risk levels calculated by Identity Protection: sign-in risk and user risk. The following three default policies that can be enabled.
+With Conditional Access policies, you can implement automated responses to sign-ins by potentially compromised identities. The probability that an account is compromised is expressed in the form of risk levels. There are two risk levels calculated by Identity Protection: sign-in risk and user risk. The following three default policies can be enabled.
* [Require all users to register for MFA](howto-conditional-access-policy-risk.md)
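The risk-based responses above can be pictured as a simple mapping from the two Identity Protection risk levels to an action. The thresholds and actions below are illustrative assumptions, not guaranteed Azure AD defaults:

```python
# Sketch: automated response keyed on Identity Protection's two risk
# levels (user risk, sign-in risk). Thresholds/actions are illustrative.

def respond(sign_in_risk, user_risk):
    if user_risk == "high":
        return "require password change"   # account likely compromised
    if sign_in_risk in ("medium", "high"):
        return "require MFA"               # this sign-in looks risky
    return "allow"

print(respond("low", "high"))    # require password change
print(respond("medium", "low"))  # require MFA
print(respond("low", "low"))     # allow
```

User risk takes precedence here because it reflects the account itself, not just the current sign-in attempt.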
@@ -371,7 +371,7 @@ Some organizations have test tenants for this purpose. However, it can be diffic
### Create a test plan
-The test plan is important to have a comparison between the expected results and the actual results. You should always have an expectation before testing something. The following table outlines example test cases. Adjust the scenarios and expected results based on how your CA policies are configured.
+A test plan is important for comparing the expected results with the actual results. You should always have an expectation before testing something. The following table outlines example test cases. Adjust the scenarios and expected results based on how your Conditional Access policies are configured.
| Policy| Scenario| Expected Result |
| - | - | - |
@@ -386,9 +386,9 @@ The test plan is important to have a comparison between the expected results and
### Configure the test policy
-In the [Azure portal](https://portal.azure.com/), you configure CA policies under Azure Active Directory > Security > Conditional Access.
+In the [Azure portal](https://portal.azure.com/), you configure Conditional Access policies under Azure Active Directory > Security > Conditional Access.
-If you want to learn more about how to create CA policies, see this example: [CA policy to prompt for MFA when a user signs in to the Azure portal](../authentication/tutorial-enable-azure-mfa.md?bc=%2fazure%2factive-directory%2fconditional-access%2fbreadcrumb%2ftoc.json&toc=%2fazure%2factive-directory%2fconditional-access%2ftoc.json). This quickstart helps you to:
+If you want to learn more about how to create Conditional Access policies, see this example: [Conditional Access policy to prompt for MFA when a user signs in to the Azure portal](../authentication/tutorial-enable-azure-mfa.md?bc=%2fazure%2factive-directory%2fconditional-access%2fbreadcrumb%2ftoc.json&toc=%2fazure%2factive-directory%2fconditional-access%2ftoc.json). This quickstart helps you to:
* Become familiar with the user interface
@@ -412,7 +412,7 @@ You can view the aggregate impact of your Conditional Access policies in the Ins
Another way to validate your Conditional Access policy is by using the [what-if tool](troubleshoot-conditional-access-what-if.md), which simulates which policies would apply to a user signing in under hypothetical circumstances. Select the sign-in attributes you want to test (such as user, application, device platform, and location) and see which policies would apply. > [!NOTE]
-> While a simulated run gives you a good idea of the impact a CA policy has, it does not replace an actual test run.
+> While a simulated run gives you a good idea of the impact a Conditional Access policy has, it does not replace an actual test run.
### Test your policy
@@ -439,14 +439,14 @@ In case you need to roll back your newly implemented policies, use one or more o
## Manage access to cloud apps
-Use the following Manage options to control and manage your CA policies:
+Use the following Manage options to control and manage your Conditional Access policies:
![Screenshot shows the MANAGE options for C A policies, including Named locations, Custom controls, Terms of use, V P N connectivity, and the selected Classic policies.](media/plan-conditional-access/manage-access.png) ### Named locations
-The location condition of a CA policy enables you to tie access controls settings to the network locations of your users. With [Named Locations](location-condition.md), you can create logical groupings of IP address ranges or countries and regions.
+The location condition of a Conditional Access policy enables you to tie access controls settings to the network locations of your users. With [Named Locations](location-condition.md), you can create logical groupings of IP address ranges or countries and regions.
### Custom controls
@@ -458,7 +458,7 @@ Before accessing certain cloud apps in your environment, you can get consent fro
## Troubleshoot Conditional Access
-When a user is having an issue with a CA policy, collect the following information to facilitate troubleshooting.
+When a user is having an issue with a Conditional Access policy, collect the following information to facilitate troubleshooting.
* User Principal Name
@@ -490,4 +490,4 @@ Once you have collected the information, See the following resources:
[Learn more about Identity Protection](../identity-protection/overview-identity-protection.md)
-[Manage CA policies with Microsoft Graph API](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-beta.md)
\ No newline at end of file
+[Manage Conditional Access policies with Microsoft Graph API](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-beta.md)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-configurable-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
@@ -1,7 +1,7 @@
--- title: Configurable token lifetimes titleSuffix: Microsoft identity platform
-description: Learn how to set lifetimes for tokens issued by Microsoft identity platform.
+description: Learn how to set lifetimes for access, SAML, and ID tokens issued by Microsoft identity platform.
services: active-directory author: rwike77 manager: CelesteDG
@@ -10,85 +10,78 @@ ms.service: active-directory
ms.subservice: develop ms.workload: identity ms.topic: conceptual
-ms.date: 12/14/2020
+ms.date: 01/04/2021
ms.author: ryanwi ms.custom: aaddev, identityplatformtop40, content-perf, FY21Q1, contperf-fy21q1 ms.reviewer: hirsin, jlu, annaba --- # Configurable token lifetimes in Microsoft identity platform (preview)
-> [!IMPORTANT]
-> After May 2020, tenants will no longer be able to configure refresh and session token lifetimes. Azure Active Directory will stop honoring existing refresh and session token configuration in policies after January 30, 2021. You can still configure access token lifetimes after the deprecation.
->
-> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, visit the [Configure authentication session management with Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime).
->
-> For tenants that do not want to use Conditional Access after the retirement date, they can expect that Azure AD will honor the default configuration outlined in the next section.
+You can specify the lifetime of an access, ID, or SAML token issued by Microsoft identity platform. You can set token lifetimes for all apps in your organization, for a multi-tenant (multi-organization) application, or for a specific service principal in your organization. However, we currently do not support configuring the token lifetimes for [managed identity service principals](../managed-identities-azure-resources/overview.md).
-## Configurable token lifetime properties after the retirement
Refresh and session token configuration are affected by the following properties and their respective values. After the retirement of refresh and session token configuration, Azure AD will only honor the default value described below, regardless of whether policies have configured custom values. You can still configure access token lifetimes after the retirement.
+In Azure AD, a policy object represents a set of rules that are enforced on individual applications or on all applications in an organization. Each policy type has a unique structure, with a set of properties that are applied to objects to which they are assigned.
-|Property |Policy property string |Affects |Default |
-|----------|-----------|------------|------------|
-|Refresh Token Max Inactive Time |MaxInactiveTime |Refresh tokens |90 days |
-|Single-Factor Refresh Token Max Age |MaxAgeSingleFactor |Refresh tokens (for any users) |Until-revoked |
-|Multi-Factor Refresh Token Max Age |MaxAgeMultiFactor |Refresh tokens (for any users) |180 days |
-|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
-|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) |180 days |
+You can designate a policy as the default policy for your organization. The policy is applied to any application in the organization, as long as it is not overridden by a policy with a higher priority. You also can assign a policy to specific applications. The order of priority varies by policy type.
-## Identify configuration in scope of retirement
+For examples, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).
-To get started, do the following steps:
+> [!NOTE]
+> Configurable token lifetime policy only applies to mobile and desktop clients that access SharePoint Online and OneDrive for Business resources, and does not apply to web browser sessions.
+> To manage the lifetime of web browser sessions for SharePoint Online and OneDrive for Business, use the [Conditional Access session lifetime](../conditional-access/howto-conditional-access-session-lifetime.md) feature. Refer to the [SharePoint Online blog](https://techcommunity.microsoft.com/t5/SharePoint-Blog/Introducing-Idle-Session-Timeout-in-SharePoint-and-OneDrive/ba-p/119208) to learn more about configuring idle session timeouts.
-1. Download the latest [Azure AD PowerShell Module Public Preview release](https://www.powershellgallery.com/packages/AzureADPreview).
-1. Run the `Connect` command to sign in to your Azure AD admin account. Run this command each time you start a new session.
+## License requirements
- ```powershell
- Connect-AzureAD -Confirm
- ```
+Using this feature requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
-1. To see all policies that have been created in your organization, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet. Any results with defined property values that differ from the defaults listed above are in scope of the retirement.
+Customers with [Microsoft 365 Business licenses](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-business-service-description) also have access to Conditional Access features.
- ```powershell
- Get-AzureADPolicy -All
- ```
+## Token lifetime policies for access, SAML, and ID tokens
-1. To see which apps and service principals are linked to a specific policy you identified run the following [Get-AzureADPolicyAppliedObject](/powershell/module/azuread/get-azureadpolicyappliedobject?view=azureadps-2.0-preview&preserve-view=true) cmdlet by replacing **1a37dad8-5da7-4cc8-87c7-efbc0326cf20** with any of your policy ids. Then you can decide whether to configure Conditional Access sign-in frequency or remain with the Azure AD defaults.
+You can set token lifetime policies for access tokens, SAML tokens, and ID tokens.
- ```powershell
- Get-AzureADPolicyAppliedObject -id 1a37dad8-5da7-4cc8-87c7-efbc0326cf20
- ```
+### Access tokens
-If your tenant has policies which define custom values for the refresh and session token configuration properties, Microsoft recommends you update those policies to values that reflect the defaults described above. If no changes are made, Azure AD will automatically honor the default values.
+Clients use access tokens to access a protected resource. An access token can be used only for a specific combination of user, client, and resource. Access tokens cannot be revoked and are valid until their expiry. A malicious actor that has obtained an access token can use it for the extent of its lifetime. Adjusting the lifetime of an access token is a trade-off between improving system performance and increasing the amount of time that the client retains access after the user's account is disabled. Improved system performance is achieved by reducing the number of times a client needs to acquire a fresh access token. The default is 1 hour - after 1 hour, the client must use the refresh token to (usually silently) acquire a new refresh token and access token.
-## Overview
+### SAML tokens
-You can specify the lifetime of a token issued by Microsoft identity platform. You can set token lifetimes for all apps in your organization, for a multi-tenant (multi-organization) application, or for a specific service principal in your organization. However, we currently do not support configuring the token lifetimes for [managed identity service principals](../managed-identities-azure-resources/overview.md).
+SAML tokens are used by many web-based SaaS applications, and are obtained using Azure Active Directory's SAML2 protocol endpoint. They are also consumed by applications using WS-Federation. The default lifetime of the token is 1 hour. From an application's perspective, the validity period of the token is specified by the NotOnOrAfter value of the `<conditions …>` element in the token. After the validity period of the token has ended, the client must initiate a new authentication request, which will often be satisfied without interactive sign-in as a result of the single sign-on (SSO) session token.
-In Azure AD, a policy object represents a set of rules that are enforced on individual applications or on all applications in an organization. Each policy type has a unique structure, with a set of properties that are applied to objects to which they are assigned.
+The value of NotOnOrAfter can be changed using the `AccessTokenLifetime` parameter in a `TokenLifetimePolicy`. It will be set to the lifetime configured in the policy if any, plus a clock skew factor of five minutes.
-You can designate a policy as the default policy for your organization. The policy is applied to any application in the organization, as long as it is not overridden by a policy with a higher priority. You also can assign a policy to specific applications. The order of priority varies by policy type.
+The subject confirmation NotOnOrAfter specified in the `<SubjectConfirmationData>` element is not affected by the Token Lifetime configuration.
-For examples, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).
+### ID tokens
-> [!NOTE]
-> Configurable token lifetime policy only applies to mobile and desktop clients that access SharePoint Online and OneDrive for Business resources, and does not apply to web browser sessions.
-> To manage the lifetime of web browser sessions for SharePoint Online and OneDrive for Business, use the [Conditional Access session lifetime](../conditional-access/howto-conditional-access-session-lifetime.md) feature. Refer to the [SharePoint Online blog](https://techcommunity.microsoft.com/t5/SharePoint-Blog/Introducing-Idle-Session-Timeout-in-SharePoint-and-OneDrive/ba-p/119208) to learn more about configuring idle session timeouts.
+ID tokens are passed to websites and native clients. ID tokens contain profile information about a user. An ID token is bound to a specific combination of user and client. ID tokens are considered valid until their expiry. Usually, a web application matches a user's session lifetime in the application to the lifetime of the ID token issued for the user. You can adjust the lifetime of an ID token to control how often the web application expires the application session, and how often it requires the user to be re-authenticated with Microsoft identity platform (either silently or interactively).
-## Token types
+### Token lifetime policy properties
-You can set token lifetime policies for refresh tokens, access tokens, SAML tokens, session tokens, and ID tokens.
+A token lifetime policy is a type of policy object that contains token lifetime rules. This policy controls how long access, SAML, and ID tokens for this resource are considered valid. If no policy is set, the system enforces the default lifetime value.
-### Access tokens
+Reducing the Access Token Lifetime property mitigates the risk of an access token or ID token being used by a malicious actor for an extended period of time. (These tokens cannot be revoked.) The trade-off is that performance is adversely affected, because the tokens have to be replaced more often.
-Clients use access tokens to access a protected resource. An access token can be used only for a specific combination of user, client, and resource. Access tokens cannot be revoked and are valid until their expiry. A malicious actor that has obtained an access token can use it for the extent of its lifetime. Adjusting the lifetime of an access token is a trade-off between improving system performance and increasing the amount of time that the client retains access after the user's account is disabled. Improved system performance is achieved by reducing the number of times a client needs to acquire a fresh access token. The default is 1 hour - after 1 hour, the client must use the refresh token to (usually silently) acquire a new refresh token and access token.
+For an example, see [Create a policy for web sign-in](configure-token-lifetimes.md#create-a-policy-for-web-sign-in).
-### SAML tokens
+| Property | Policy property string | Affects | Default | Minimum | Maximum |
+| --- | --- | --- | --- | --- | --- |
+| Access Token Lifetime |AccessTokenLifetime |Access tokens, ID tokens, SAML2 tokens |1 hour |10 minutes |1 day |
-SAML tokens are used by many web-based SAAS applications, and are obtained using Azure Active Directory's SAML2 protocol endpoint. They are also consumed by applications using WS-Federation. The default lifetime of the token is 1 hour. From an application's perspective, the validity period of the token is specified by the NotOnOrAfter value of the `<conditions …>` element in the token. After the validity period of the token has ended, the client must initiate a new authentication request, which will often be satisfied without interactive sign in as a result of the Single Sign On (SSO) Session token.
+> [!NOTE]
+> To ensure the Microsoft Teams web client works, we recommend keeping AccessTokenLifetime greater than 15 minutes for Microsoft Teams.
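As a minimal sketch, a policy that shortens these lifetimes might be created with the AzureADPreview module's `New-AzureADPolicy` cmdlet (the display name and the two-hour value here are illustrative; connect with `Connect-AzureAD` first):

```powershell
# Define a token lifetime policy that sets access, SAML2, and ID token
# lifetimes to 2 hours (timespan format is D.HH:MM:SS).
$definition = @('{
    "TokenLifetimePolicy": {
        "Version": 1,
        "AccessTokenLifetime": "02:00:00"
    }
}')

New-AzureADPolicy -Definition $definition `
    -DisplayName "TwoHourAccessTokenPolicy" `
    -IsOrganizationDefault $false `
    -Type "TokenLifetimePolicy"
```

The resulting policy can then be linked to a specific application or service principal, or made the tenant-wide default by passing `-IsOrganizationDefault $true`.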
-The value of NotOnOrAfter can be changed using the `AccessTokenLifetime` parameter in a `TokenLifetimePolicy`. It will be set to the lifetime configured in the policy if any, plus a clock skew factor of five minutes.
+## Token lifetime policies for refresh tokens and session tokens
-The subject confirmation NotOnOrAfter specified in the `<SubjectConfirmationData>` element is not affected by the Token Lifetime configuration.
+You can set token lifetime policies for refresh tokens and session tokens.
+
+> [!IMPORTANT]
+> As of May 2020, new tenants cannot configure refresh and session token lifetimes. Tenants with an existing configuration can modify refresh and session token policies until January 30, 2021. Azure Active Directory will stop honoring existing refresh and session token configuration in policies after January 30, 2021. You can still configure access, SAML, and ID token lifetimes after the retirement.
+>
+> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime).
+>
+> If you do not want to use Conditional Access after the retirement date, your refresh and session tokens will be set to the [default configuration](#configurable-token-lifetime-properties-after-the-retirement) on that date and you will no longer be able to change their lifetimes.
+
+:::image type="content" source="./media/active-directory-configurable-token-lifetimes/roadmap.svg" alt-text="Retirement information":::
### Refresh tokens
@@ -106,9 +99,6 @@ Public clients cannot securely store a client password (secret). For example, an
> [!NOTE] > The Max Age property is the length of time a single token can be used.
-### ID tokens
-ID tokens are passed to websites and native clients. ID tokens contain profile information about a user. An ID token is bound to a specific combination of user and client. ID tokens are considered valid until their expiry. Usually, a web application matches a user's session lifetime in the application to the lifetime of the ID token issued for the user. You can adjust the lifetime of an ID token to control how often the web application expires the application session, and how often it requires the user to be reauthenticated with Microsoft identity platform (either silently or interactively).
- ### Single sign-on session tokens When a user authenticates with Microsoft identity platform, a single sign-on session (SSO) is established with the user's browser and Microsoft identity platform. The SSO token, in the form of a cookie, represents this session. The SSO session token is not bound to a specific resource/client application. SSO session tokens can be revoked, and their validity is checked every time they are used.
@@ -119,13 +109,12 @@ Nonpersistent session tokens have a lifetime of 24 hours. Persistent tokens have
You can use a policy to set the time after the first session token was issued beyond which the session token is no longer accepted. (To do this, use the Session Token Max Age property.) You can adjust the lifetime of a session token to control when and how often a user is required to reenter credentials, instead of being silently authenticated, when using a web application.
-### Token lifetime policy properties
+### Refresh and session token lifetime policy properties
A token lifetime policy is a type of policy object that contains token lifetime rules. Use the properties of the policy to control specified token lifetimes. If no policy is set, the system enforces the default lifetime value.
-### Configurable token lifetime properties
+#### Configurable token lifetime properties
| Property | Policy property string | Affects | Default | Minimum | Maximum | | --- | --- | --- | --- | --- | --- |
-| Access Token Lifetime |AccessTokenLifetime<sup>2</sup> |Access tokens, ID tokens, SAML2 tokens |1 hour |10 minutes |1 day |
| Refresh Token Max Inactive Time |MaxInactiveTime |Refresh tokens |90 days |10 minutes |90 days | | Single-Factor Refresh Token Max Age |MaxAgeSingleFactor |Refresh tokens (for any users) |Until-revoked |10 minutes |Until-revoked<sup>1</sup> | | Multi-Factor Refresh Token Max Age |MaxAgeMultiFactor |Refresh tokens (for any users) | 180 days |10 minutes |180 days<sup>1</sup> |
@@ -133,9 +122,8 @@ A token lifetime policy is a type of policy object that contains token lifetime
| Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) | 180 days |10 minutes | 180 days<sup>1</sup> | * <sup>1</sup>365 days is the maximum explicit length that can be set for these attributes.
-* <sup>2</sup>To ensure the Microsoft Teams Web client works, it is recommended to keep AccessTokenLifetime to greater than 15 minutes for Microsoft Teams.
-### Exceptions
+#### Exceptions
| Property | Affects | Default | | --- | --- | --- | | Refresh Token Max Age (issued for federated users who have insufficient revocation information<sup>1</sup>) |Refresh tokens (issued for federated users who have insufficient revocation information<sup>1</sup>) |12 hours |
@@ -144,52 +132,9 @@ A token lifetime policy is a type of policy object that contains token lifetime
* <sup>1</sup> Federated users who have insufficient revocation information include any users who do not have the "LastPasswordChangeTimestamp" attribute synced. These users are given this short Max Age because Azure Active Directory is unable to verify when to revoke tokens that are tied to an old credential (such as a password that has been changed) and must check back in more frequently to ensure that the user and associated tokens are still in good standing. To improve this experience, tenant admins must ensure that they are syncing the "LastPasswordChangeTimestamp" attribute (this can be set on the user object using PowerShell or through AADSync).
-### Policy evaluation and prioritization
-You can create and then assign a token lifetime policy to a specific application, to your organization, and to service principals. Multiple policies might apply to a specific application. The token lifetime policy that takes effect follows these rules:
-
-* If a policy is explicitly assigned to the service principal, it is enforced.
-* If no policy is explicitly assigned to the service principal, a policy explicitly assigned to the parent organization of the service principal is enforced.
-* If no policy is explicitly assigned to the service principal or to the organization, the policy assigned to the application is enforced.
-* If no policy has been assigned to the service principal, the organization, or the application object, the default values are enforced. (See the table in [Configurable token lifetime properties](#configurable-token-lifetime-properties).)
+### Configurable policy property details
-For more information about the relationship between application objects and service principal objects, see [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md).
-
-A tokenΓÇÖs validity is evaluated at the time the token is used. The policy with the highest priority on the application that is being accessed takes effect.
-
-All timespans used here are formatted according to the C# [TimeSpan](/dotnet/api/system.timespan) object - D.HH:MM:SS. So 80 days and 30 minutes would be `80.00:30:00`. The leading D can be dropped if zero, so 90 minutes would be `00:90:00`.
-
-> [!NOTE]
-> Here's an example scenario.
->
-> A user wants to access two web applications: Web Application A and Web Application B.
->
-> Factors:
-> * Both web applications are in the same parent organization.
-> * Token Lifetime Policy 1 with a Session Token Max Age of eight hours is set as the parent organization's default.
-> * Web Application A is a regular-use web application and isnΓÇÖt linked to any policies.
-> * Web Application B is used for highly sensitive processes. Its service principal is linked to Token Lifetime Policy 2, which has a Session Token Max Age of 30 minutes.
->
-> At 12:00 PM, the user starts a new browser session and tries to access Web Application A. The user is redirected to Microsoft identity platform and is asked to sign in. This creates a cookie that has a session token in the browser. The user is redirected back to Web Application A with an ID token that allows the user to access the application.
->
-> At 12:15 PM, the user tries to access Web Application B. The browser redirects to Microsoft identity platform, which detects the session cookie. Web Application B's service principal is linked to Token Lifetime Policy 2, but it's also part of the parent organization, with default Token Lifetime Policy 1. Token Lifetime Policy 2 takes effect because policies linked to service principals have a higher priority than organization default policies. The session token was originally issued within the last 30 minutes, so it is considered valid. The user is redirected back to Web Application B with an ID token that grants them access.
->
-> At 1:00 PM, the user tries to access Web Application A. The user is redirected to Microsoft identity platform. Web Application A is not linked to any policies, but because it is in an organization with default Token Lifetime Policy 1, that policy takes effect. The session cookie that was originally issued within the last eight hours is detected. The user is silently redirected back to Web Application A with a new ID token. The user is not required to authenticate.
->
-> Immediately afterward, the user tries to access Web Application B. The user is redirected to Microsoft identity platform. As before, Token Lifetime Policy 2 takes effect. Because the token was issued more than 30 minutes ago, the user is prompted to reenter their sign-in credentials. A brand-new session token and ID token are issued. The user can then access Web Application B.
->
->
-
-## Configurable policy property details
-### Access Token Lifetime
-**String:** AccessTokenLifetime
-
-**Affects:** Access tokens, ID tokens, SAML tokens
-
-**Summary:** This policy controls how long access and ID tokens for this resource are considered valid. Reducing the Access Token Lifetime property mitigates the risk of an access token or ID token being used by a malicious actor for an extended period of time. (These tokens cannot be revoked.) The trade-off is that performance is adversely affected, because the tokens have to be replaced more often.
-
-For an example, see [Create a policy for web sign-in](configure-token-lifetimes.md#create-a-policy-for-web-sign-in).
-
-### Refresh Token Max Inactive Time
+#### Refresh Token Max Inactive Time
**String:** MaxInactiveTime **Affects:** Refresh tokens
@@ -202,7 +147,7 @@ The Refresh Token Max Inactive Time property must be set to a lower value than t
For an example, see [Create a policy for a native app that calls a web API](configure-token-lifetimes.md#create-a-policy-for-a-native-app-that-calls-a-web-api).
-### Single-Factor Refresh Token Max Age
+#### Single-Factor Refresh Token Max Age
**String:** MaxAgeSingleFactor **Affects:** Refresh tokens
@@ -213,7 +158,7 @@ Reducing the max age forces users to authenticate more often. Because single-fac
For an example, see [Create a policy for a native app that calls a web API](configure-token-lifetimes.md#create-a-policy-for-a-native-app-that-calls-a-web-api).
-### Multi-Factor Refresh Token Max Age
+#### Multi-Factor Refresh Token Max Age
**String:** MaxAgeMultiFactor **Affects:** Refresh tokens
@@ -224,7 +169,7 @@ Reducing the max age forces users to authenticate more often. Because single-fac
For an example, see [Create a policy for a native app that calls a web API](configure-token-lifetimes.md#create-a-policy-for-a-native-app-that-calls-a-web-api).
-### Single-Factor Session Token Max Age
+#### Single-Factor Session Token Max Age
**String:** MaxAgeSessionSingleFactor **Affects:** Session tokens (persistent and nonpersistent)
@@ -235,7 +180,7 @@ Reducing the max age forces users to authenticate more often. Because single-fac
For an example, see [Create a policy for web sign-in](configure-token-lifetimes.md#create-a-policy-for-web-sign-in).
-### Multi-Factor Session Token Max Age
+#### Multi-Factor Session Token Max Age
**String:** MaxAgeSessionMultiFactor **Affects:** Session tokens (persistent and nonpersistent)
@@ -244,6 +189,52 @@ For an example, see [Create a policy for web sign-in](configure-token-lifetimes.
Reducing the max age forces users to authenticate more often. Because single-factor authentication is considered less secure than multi-factor authentication, we recommend that you set this property to a value that is equal to or greater than the Single-Factor Session Token Max Age property.
+## Configurable token lifetime properties after the retirement
+Refresh and session token configuration is affected by the following properties and their respective values. After the retirement of refresh and session token configuration on January 30, 2021, Azure AD will only honor the default values described below. If you decide not to use Conditional Access to manage sign-in frequency, your refresh and session tokens will be set to the default configuration on that date and you'll no longer be able to change their lifetimes.
+
+|Property |Policy property string |Affects |Default |
+|----------|-----------|------------|------------|
+|Access Token Lifetime |AccessTokenLifetime |Access tokens, ID tokens, SAML2 tokens |1 hour |
+|Refresh Token Max Inactive Time |MaxInactiveTime |Refresh tokens |90 days |
+|Single-Factor Refresh Token Max Age |MaxAgeSingleFactor |Refresh tokens (for any users) |Until-revoked |
+|Multi-Factor Refresh Token Max Age |MaxAgeMultiFactor |Refresh tokens (for any users) |Until-revoked |
+|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
+|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
+
+You can use PowerShell to find the policies that will be affected by the retirement. Use the [PowerShell cmdlets](configure-token-lifetimes.md#get-started) to see all the policies created in your organization, or to find which apps and service principals are linked to a specific policy.
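A sketch of that audit with the AzureADPreview cmdlets (the policy ID below is a placeholder; replace it with an `Id` returned by the first command):

```powershell
# List every policy in the tenant. Results whose Definition contains
# custom MaxInactiveTime, MaxAge*, or MaxAgeSession* values are in
# scope of the retirement.
Get-AzureADPolicy -All | Format-List Id, DisplayName, Definition

# See which apps and service principals are linked to one such policy.
Get-AzureADPolicyAppliedObject -Id "1a37dad8-5da7-4cc8-87c7-efbc0326cf20"
```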
+
+## Policy evaluation and prioritization
+You can create and then assign a token lifetime policy to a specific application, to your organization, and to service principals. Multiple policies might apply to a specific application. The token lifetime policy that takes effect follows these rules:
+
+* If a policy is explicitly assigned to the service principal, it is enforced.
+* If no policy is explicitly assigned to the service principal, a policy explicitly assigned to the parent organization of the service principal is enforced.
+* If no policy is explicitly assigned to the service principal or to the organization, the policy assigned to the application is enforced.
+* If no policy has been assigned to the service principal, the organization, or the application object, the default values are enforced. (See the table in [Configurable token lifetime properties](#configurable-token-lifetime-properties-after-the-retirement).)
+
+For more information about the relationship between application objects and service principal objects, see [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md).
+
+A token's validity is evaluated at the time the token is used. The policy with the highest priority on the application that is being accessed takes effect.
+
+All timespans used here are formatted according to the C# [TimeSpan](/dotnet/api/system.timespan) object - D.HH:MM:SS. So 80 days and 30 minutes would be `80.00:30:00`. The leading D can be dropped if zero, so 90 minutes would be `00:90:00`.
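Applying that format, a hypothetical policy definition (illustrative values only) would express the 80-day-and-30-minute and eight-hour spans mentioned above like this:

```powershell
# Illustrative timespans in D.HH:MM:SS form; the leading D is optional
# when it is zero.
$definition = @('{
    "TokenLifetimePolicy": {
        "Version": 1,
        "MaxInactiveTime": "80.00:30:00",
        "MaxAgeSessionSingleFactor": "08:00:00"
    }
}')
```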
+
+### Example scenario
+
+A user wants to access two web applications: Web Application A and Web Application B.
+
+Factors:
+* Both web applications are in the same parent organization.
+* Token Lifetime Policy 1 with a Session Token Max Age of eight hours is set as the parent organization's default.
+* Web Application A is a regular-use web application and isn't linked to any policies.
+* Web Application B is used for highly sensitive processes. Its service principal is linked to Token Lifetime Policy 2, which has a Session Token Max Age of 30 minutes.
+
+At 12:00 PM, the user starts a new browser session and tries to access Web Application A. The user is redirected to Microsoft identity platform and is asked to sign in. This creates a cookie that has a session token in the browser. The user is redirected back to Web Application A with an ID token that allows the user to access the application.
+
+At 12:15 PM, the user tries to access Web Application B. The browser redirects to Microsoft identity platform, which detects the session cookie. Web Application B's service principal is linked to Token Lifetime Policy 2, but it's also part of the parent organization, with default Token Lifetime Policy 1. Token Lifetime Policy 2 takes effect because policies linked to service principals have a higher priority than organization default policies. The session token was originally issued within the last 30 minutes, so it is considered valid. The user is redirected back to Web Application B with an ID token that grants them access.
+
+At 1:00 PM, the user tries to access Web Application A. The user is redirected to Microsoft identity platform. Web Application A is not linked to any policies, but because it is in an organization with default Token Lifetime Policy 1, that policy takes effect. The session cookie that was originally issued within the last eight hours is detected. The user is silently redirected back to Web Application A with a new ID token. The user is not required to authenticate.
+
+Immediately afterward, the user tries to access Web Application B. The user is redirected to Microsoft identity platform. As before, Token Lifetime Policy 2 takes effect. Because the token was issued more than 30 minutes ago, the user is prompted to reenter their sign-in credentials. A brand-new session token and ID token are issued. The user can then access Web Application B.
+ ## Cmdlet reference These are the cmdlets in the [Azure Active Directory PowerShell for Graph Preview module](/powershell/module/azuread/?view=azureadps-2.0-preview#service-principals&preserve-view=true&preserve-view=true).
@@ -278,12 +269,6 @@ You can use the following cmdlets for service principal policies.
| [Get-AzureADServicePrincipalPolicy](/powershell/module/azuread/get-azureadserviceprincipalpolicy?view=azureadps-2.0-preview&preserve-view=true) | Gets any policy linked to the specified service principal.| | [Remove-AzureADServicePrincipalPolicy](/powershell/module/azuread/remove-azureadserviceprincipalpolicy?view=azureadps-2.0-preview&preserve-view=true) | Removes the policy from the specified service principal.|
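For example (object IDs are placeholders, and `Add-AzureADServicePrincipalPolicy` is assumed from the same AzureADPreview module), linking a policy to a service principal and then confirming it might look like:

```powershell
# Link an existing token lifetime policy to a service principal.
Add-AzureADServicePrincipalPolicy -Id "<service-principal-object-id>" `
    -RefObjectId "<policy-id>"

# Verify which policy is now applied to the service principal.
Get-AzureADServicePrincipalPolicy -Id "<service-principal-object-id>"
```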
-## License requirements
-
-Using this feature requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
-
-Customers with [Microsoft 365 Business licenses](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-business-service-description) also have access to Conditional Access features.
- ## Next steps To learn more, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-optional-claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-optional-claims.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: develop ms.topic: how-to ms.workload: identity
-ms.date: 1/04/2021
+ms.date: 1/05/2021
ms.author: ryanwi ms.reviewer: paulgarn, hirsin, keyam ms.custom: aaddev
@@ -90,7 +90,7 @@ Some of the improvements of the v2 token format are available to apps that use t
| JWT Claim | Name | Description | Notes | |---------------|---------------------------------|-------------|-------|
-|`aud` | Audience | Always present in JWTs, but in v1 access tokens it can be emitted in a variety of ways, which can be hard to code against when performing token validation. Use the [additional properties for this claim](#additional-properties-of-optional-claims) to ensure it's always set to a GUID in v1 access tokens. | v1 JWT access tokens only|
+|`aud` | Audience | Always present in JWTs, but in v1 access tokens it can be emitted in a variety of ways - any appID URI, with or without a trailing slash, as well as the client ID of the resource. This variation can be hard to code against when performing token validation. Use the [additional properties for this claim](#additional-properties-of-optional-claims) to ensure it's always set to the resource's client ID in v1 access tokens. | v1 JWT access tokens only|
|`preferred_username` | Preferred username | Provides the preferred username claim within v1 tokens. This makes it easier for apps to provide username hints and show human readable display names, regardless of their token type. It's recommended that you use this optional claim instead of using e.g. `upn` or `unique_name`. | v1 ID tokens and access tokens | ### Additional properties of optional claims
@@ -104,8 +104,8 @@ Some optional claims can be configured to change the way the claim is returned.
| `upn` | | Can be used for both SAML and JWT responses, and for v1.0 and v2.0 tokens. | | | `include_externally_authenticated_upn` | Includes the guest UPN as stored in the resource tenant. For example, `foo_hometenant.com#EXT#@resourcetenant.com` | | | `include_externally_authenticated_upn_without_hash` | Same as above, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com`|
-| `aud` | | In v1 access tokens, this is used to change the format of the `aud` claim. This has no effect in v2 tokens or ID tokens, where the `aud` claim is always the client ID. Use this to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token.|
-| | `use_guid` | Emits the client ID of the resource (API) in GUID format as the `aud` claim instead of an appid URI or GUID. So if a resource's client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`.|
+| `aud` | | In v1 access tokens, this is used to change the format of the `aud` claim. This has no effect in v2 tokens or either version's ID tokens, where the `aud` claim is always the client ID. Use this configuration to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token.|
+| | `use_guid` | Always emits the client ID of the resource (API) in GUID format as the `aud` claim, instead of a runtime-dependent value. For example, if a resource sets this flag, and its client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`. </br></br> Without this claim set, an API could get tokens with an `aud` claim of `api://MyApi.com`, `api://MyApi.com/`, `api://myapi.com/AdditionalRegisteredField` or any other value set as an app ID URI for that API, as well as the client ID of the resource. |
#### Additional properties example
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-saml-claims-customization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-saml-claims-customization.md
@@ -130,7 +130,7 @@ You can use the following functions to transform claims.
| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if thereΓÇÖs no match.<br/>For example, if you want to emit a claim where the value is the userΓÇÖs employee ID if the country/region starts with "US", otherwise you want to output an extension attribute. To do this, you would configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 | | **Extract() - After matching** | Returns the substring after it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon", the matching value is "Finance_", then the claim's output is "BSimon". | | **Extract() - Before matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "BSimon_US", the matching value is "_US", then the claim's output is "BSimon". |
-| **Extract() - Between matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon_US", the first matching value is "Finance_", the second matching value is "_US", then the claim's output is "BSimon". |
+| **Extract() - Between matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon_US", the first matching value is "Finance\_", the second matching value is "\_US", then the claim's output is "BSimon". |
| **ExtractAlpha() - Prefix** | Returns the prefix alphabetical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "BSimon". | | **ExtractAlpha() - Suffix** | Returns the suffix alphabetical part of the string.<br/>For example, if the input's value is "123_Simon", then it returns "Simon". | | **ExtractNumeric() - Prefix** | Returns the prefix numerical part of the string.<br/>For example, if the input's value is "123_BSimon", then it returns "123". |
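The Extract/ExtractAlpha/ExtractNumeric semantics in the table above can be sketched in Python. This is an illustration of the documented behavior, not Azure AD's implementation:

```python
import re

def extract_between(value: str, first: str, second: str) -> str:
    """Extract() - Between matching: substring between the two matched values."""
    start = value.index(first) + len(first)
    return value[start:value.index(second, start)]

def extract_alpha_prefix(value: str) -> str:
    """ExtractAlpha() - Prefix: leading alphabetical part of the string."""
    match = re.match(r"[A-Za-z]+", value)
    return match.group(0) if match else ""

def extract_numeric_prefix(value: str) -> str:
    """ExtractNumeric() - Prefix: leading numerical part of the string."""
    match = re.match(r"\d+", value)
    return match.group(0) if match else ""

print(extract_between("Finance_BSimon_US", "Finance_", "_US"))  # BSimon
print(extract_alpha_prefix("BSimon_123"))                       # BSimon
print(extract_numeric_prefix("123_BSimon"))                     # 123
```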
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/configure-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/configure-token-lifetimes.md
@@ -10,52 +10,83 @@ ms.service: active-directory
ms.subservice: develop ms.workload: identity ms.topic: how-to
-ms.date: 12/14/2020
+ms.date: 01/04/2021
ms.author: ryanwi ms.custom: aaddev, content-perf, FY21Q1 ms.reviewer: hirsin, jlu, annaba --- # Configure token lifetime policies (preview)
-Many scenarios are possible in Azure AD when you can create and manage token lifetimes for apps, service principals, and your overall organization.
+You can specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. You can set token lifetimes for all apps in your organization, for a multi-tenant (multi-organization) application, or for a specific service principal in your organization. For more information, see [Configurable token lifetimes](active-directory-configurable-token-lifetimes.md).
-> [!IMPORTANT]
-> After May 2020, tenants will no longer be able to configure refresh and session token lifetimes. Azure Active Directory will stop honoring existing refresh and session token configuration in policies after January 30, 2021. You can still configure access token lifetimes after the deprecation. To learn more, read [Configurable token lifetimes in Microsoft identity platform](active-directory-configurable-token-lifetimes.md).
-> We’ve implemented [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access. You can use this new feature to configure refresh token lifetimes by setting sign in frequency.
--
-In this section, we walk through a few common policy scenarios that can help you impose new rules for:
-
-* Token lifetime
-* Token max inactive time
-* Token max age
-
-In the examples, you can learn how to:
-
-* Manage an organization's default policy
-* Create a policy for web sign-in
-* Create a policy for a native app that calls a web API
-* Manage an advanced policy
-
-## Prerequisites
-In the following examples, you create, update, link, and delete policies for apps, service principals, and your overall organization. If you are new to Azure AD, we recommend that you learn about [how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
+In this section, we walk through a common policy scenario that can help you impose new rules for token lifetime. In the example, you learn how to create a policy that requires users to authenticate more frequently in your web app.
+## Get started
To get started, do the following steps: 1. Download the latest [Azure AD PowerShell Module Public Preview release](https://www.powershellgallery.com/packages/AzureADPreview).
-2. Run the `Connect` command to sign in to your Azure AD admin account. Run this command each time you start a new session.
+1. Run the `Connect` command to sign in to your Azure AD admin account. Run this command each time you start a new session.
```powershell Connect-AzureAD -Confirm ```
-3. To see all policies that have been created in your organization, run the following command. Run this command after most operations in the following scenarios. Running the command also helps you get the **
-** of your policies.
+1. To see all policies that have been created in your organization, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet. Any results with defined property values that differ from the defaults listed above are in scope of the retirement.
+
+ ```powershell
+ Get-AzureADPolicy -All
+ ```
+
+1. To see which apps and service principals are linked to a specific policy you identified, run the following [Get-AzureADPolicyAppliedObject](/powershell/module/azuread/get-azureadpolicyappliedobject?view=azureadps-2.0-preview&preserve-view=true) cmdlet, replacing **1a37dad8-5da7-4cc8-87c7-efbc0326cf20** with any of your policy IDs. Then you can decide whether to configure Conditional Access sign-in frequency or remain with the Azure AD defaults.
```powershell
- Get-AzureADPolicy
+ Get-AzureADPolicyAppliedObject -id 1a37dad8-5da7-4cc8-87c7-efbc0326cf20
```
-## Manage an organization's default policy
+If your tenant has policies that define custom values for the refresh and session token configuration properties, Microsoft recommends updating those policies to the defaults described above. If no changes are made, Azure AD will automatically honor the default values.
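A quick way to decide whether a given policy is in scope is to parse its JSON definition and look for refresh-token properties. The property list below is illustrative — confirm the authoritative set in the configurable token lifetimes article:

```python
import json

# Refresh-token configuration properties affected by the retirement
# (illustrative list; verify against the retirement documentation).
RETIRED_PROPERTIES = {"MaxInactiveTime", "MaxAgeSingleFactor", "MaxAgeMultiFactor"}

def in_retirement_scope(definition: str) -> bool:
    """True if a TokenLifetimePolicy definition sets any retired property."""
    properties = json.loads(definition).get("TokenLifetimePolicy", {})
    return any(name in RETIRED_PROPERTIES for name in properties)

print(in_retirement_scope('{"TokenLifetimePolicy":{"Version":1,"MaxAgeSingleFactor":"2.00:00:00"}}'))  # True
print(in_retirement_scope('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"02:00:00"}}'))   # False
```

You would feed this the `Definition` string of each policy returned by `Get-AzureADPolicy -All`.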
+
+## Create a policy for web sign-in
+
+In this example, you create a policy that requires users to authenticate more frequently in your web app. Applied to the service principal of your web app, this policy sets the lifetime of the access/ID tokens and the max age of a single-factor session token.
+
+1. Create a token lifetime policy.
+
+ This policy, for web sign-in, sets the access/ID token lifetime and the max single-factor session token age to two hours.
+
+ 1. To create the policy, run the [New-AzureADPolicy](/powershell/module/azuread/new-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
+
+ ```powershell
+ $policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"02:00:00","MaxAgeSessionSingleFactor":"02:00:00"}}') -DisplayName "WebPolicyScenario" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"
+ ```
+
+ 1. To see your new policy, and to get the policy **ObjectId**, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
+
+ ```powershell
+ Get-AzureADPolicy -Id $policy.Id
+ ```
+
+1. Assign the policy to your service principal. You also need to get the **ObjectId** of your service principal.
+
+ 1. Use the [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) cmdlet to see all your organization's service principals or a single service principal.
+ ```powershell
+ # Get ID of the service principal
+ $sp = Get-AzureADServicePrincipal -Filter "DisplayName eq '<service principal display name>'"
+ ```
+
+ 1. When you have the service principal, run the [Add-AzureADServicePrincipalPolicy](/powershell/module/azuread/add-azureadserviceprincipalpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
+ ```powershell
+ # Assign policy to a service principal
+ Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id
+ ```
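The lifetime values in the policy definition use a .NET TimeSpan-style `d.hh:mm:ss` format ("02:00:00" is two hours, "2.00:00:00" is two days). A small stdlib-only sketch, useful to sanity-check a definition before creating the policy:

```python
import json
import re

DURATION = re.compile(r"^(?:(\d+)\.)?(\d{2}):(\d{2}):(\d{2})$")  # d.hh:mm:ss

def duration_seconds(value: str) -> int:
    """Parse a TimeSpan-style duration string into seconds."""
    match = DURATION.match(value)
    if not match:
        raise ValueError(f"not a d.hh:mm:ss duration: {value!r}")
    days = int(match.group(1) or 0)
    hours, minutes, seconds = (int(match.group(i)) for i in (2, 3, 4))
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

definition = '{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"02:00:00","MaxAgeSessionSingleFactor":"02:00:00"}}'
lifetimes = json.loads(definition)["TokenLifetimePolicy"]
print(duration_seconds(lifetimes["AccessTokenLifetime"]))  # 7200
print(duration_seconds("2.00:00:00"))                      # 172800
```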
+
+## Create token lifetime policies for refresh and session tokens
+> [!IMPORTANT]
+> As of May 2020, new tenants cannot configure refresh and session token lifetimes. Tenants with existing configurations can modify refresh and session token policies until January 30, 2021. Azure Active Directory will stop honoring existing refresh and session token configuration in policies after January 30, 2021. You can still configure access, SAML, and ID token lifetimes after the retirement.
+>
+> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime).
+>
+> If you do not want to use Conditional Access after the retirement date, your refresh and session tokens will be set to the [default configuration](active-directory-configurable-token-lifetimes.md#configurable-token-lifetime-properties-after-the-retirement) on that date and you will no longer be able to change their lifetimes.
+
+### Manage an organization's default policy
In this example, you create a policy that lets your users sign in less frequently across your entire organization. To do this, create a token lifetime policy for single-factor refresh tokens, which is applied across your organization. The policy is applied to every application in your organization, and to each service principal that doesn't already have a policy set. 1. Create a token lifetime policy.
@@ -98,41 +129,7 @@ In this example, you create a policy that lets your users' sign in less frequent
Set-AzureADPolicy -Id $policy.Id -DisplayName $policy.DisplayName -Definition @('{"TokenLifetimePolicy":{"Version":1,"MaxAgeSingleFactor":"2.00:00:00"}}') ```
-## Create a policy for web sign-in
-
-In this example, you create a policy that requires users to authenticate more frequently in your web app. This policy sets the lifetime of the access/ID tokens and the max age of a multi-factor session token to the service principal of your web app.
-
-1. Create a token lifetime policy.
-
- This policy, for web sign-in, sets the access/ID token lifetime and the max single-factor session token age to two hours.
-
- 1. To create the policy, run the [New-AzureADPolicy](/powershell/module/azuread/new-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
-
- ```powershell
- $policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"02:00:00","MaxAgeSessionSingleFactor":"02:00:00"}}') -DisplayName "WebPolicyScenario" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"
- ```
-
- 1. To see your new policy, and to get the policy **ObjectId**, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
-
- ```powershell
- Get-AzureADPolicy -Id $policy.Id
- ```
-
-1. Assign the policy to your service principal. You also need to get the **ObjectId** of your service principal.
-
- 1. Use the [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) cmdlet to see all your organization's service principals or a single service principal.
- ```powershell
- # Get ID of the service principal
- $sp = Get-AzureADServicePrincipal -Filter "DisplayName eq '<service principal display name>'"
- ```
-
- 1. When you have the service principal, run the [Add-AzureADServicePrincipalPolicy](/powershell/module/azuread/add-azureadserviceprincipalpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
- ```powershell
- # Assign policy to a service principal
- Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id
- ```
-
-## Create a policy for a native app that calls a web API
+### Create a policy for a native app that calls a web API
In this example, you create a policy that requires users to authenticate less frequently. The policy also lengthens the amount of time a user can be inactive before the user must reauthenticate. The policy is applied to the web API. When the native app requests the web API as a resource, this policy is applied. 1. Create a token lifetime policy.
@@ -161,7 +158,7 @@ In this example, you create a policy that requires users to authenticate less fr
Add-AzureADApplicationPolicy -Id $app.ObjectId -RefObjectId $policy.Id ```
-## Manage an advanced policy
+### Manage an advanced policy
In this example, you create a few policies to learn how the priority system works. You also learn how to manage multiple policies that are applied to several objects. 1. Create a token lifetime policy.
@@ -209,4 +206,4 @@ In this example, you create a few policies to learn how the priority system work
You now have the original policy linked to your service principal, and the new policy is set as your organization default policy. It's important to remember that policies applied to service principals have priority over organization default policies. ## Next steps
-Learn about [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access.
\ No newline at end of file
+Learn about [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-create-new-tenant.md
@@ -52,7 +52,7 @@ Many developers already have tenants through services or subscriptions that are
> [!TIP] > If you need to find the tenant ID, you can: > * Hover over your account name to get the directory / tenant ID, or
-> * Select **Azure Active Directory > Properties > Directory ID** in the Azure portal
+> * Search and select **Azure Active Directory > Properties > Tenant ID** in the Azure portal
If you don't have an existing tenant associated with your account, you'll see a GUID under your account name and you won't be able to perform actions like registering apps until you follow the steps of the next section.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-daemon-acquire-token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-daemon-acquire-token.md
@@ -91,6 +91,11 @@ catch (MsalServiceException ex) when (ex.Message.Contains("AADSTS70011"))
} ```
+### AcquireTokenForClient uses the application token cache
+
+In MSAL.NET, `AcquireTokenForClient` uses the application token cache. (All the other AcquireToken*XX* methods use the user token cache.)
+Don't call `AcquireTokenSilent` before you call `AcquireTokenForClient`, because `AcquireTokenSilent` uses the *user* token cache. `AcquireTokenForClient` checks the *application* token cache itself and updates it.
+ # [Python](#tab/python) ```Python
@@ -199,11 +204,6 @@ scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
For more information, see the protocol documentation: [Microsoft identity platform and the OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md).
-## Application token cache
-
-In MSAL.NET, `AcquireTokenForClient` uses the application token cache. (All the other AcquireToken*XX* methods use the user token cache.)
-Don't call `AcquireTokenSilent` before you call `AcquireTokenForClient`, because `AcquireTokenSilent` uses the *user* token cache. `AcquireTokenForClient` checks the *application* token cache itself and updates it.
- ## Troubleshooting ### Did you use the resource/.default scope?
@@ -229,6 +229,12 @@ Content: {
} ```
+### Are you calling your own API?
+
+If you call your own web API and couldn't add an app permission to the app registration for your daemon app, did you expose an app role in your web API?
+
+For details, see [Exposing application permissions (app roles)](scenario-protected-web-api-app-registration.md#exposing-application-permissions-app-roles) and, in particular, [Ensuring that Azure AD issues tokens for your web API to only allowed clients](scenario-protected-web-api-app-registration.md#ensuring-that-azure-ad-issues-tokens-for-your-web-api-to-only-allowed-clients).
+ ## Next steps # [.NET](#tab/dotnet)
@@ -246,4 +252,4 @@ Move on to the next article in this scenario,
Move on to the next article in this scenario, [Calling a web API](./scenario-daemon-call-api.md?tabs=java). \ No newline at end of file
+---
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-acquire-token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-acquire-token.md
@@ -10,7 +10,7 @@ ms.service: active-directory
ms.subservice: develop ms.topic: conceptual ms.workload: identity
-ms.date: 11/04/2020
+ms.date: 01/06/2021
ms.author: jmprieur ms.custom: aaddev, devx-track-python #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform for developers.
@@ -180,7 +180,7 @@ On Android, you also need to specify the parent activity by using `.WithParentAc
#### WithParentActivityOrWindow
-The UI is important because it's interactive. `AcquireTokenInteractive` has one specific optional parameter that can specify, for platforms that support it, the parent UI. When used in a desktop application, `.WithParentActivityOrWindow` has a different type, which depends on the platform. Alternatively you can omit the optional parent window parameter to create a window, if you do not want to control where the sign-in dialog appear on the screen. This would be applicable for applications which are command line based, used to pass calls to any other backend service and do not need any windows for user interaction.
+The UI is important because it's interactive. `AcquireTokenInteractive` has one specific optional parameter that can specify, for platforms that support it, the parent UI. When used in a desktop application, `.WithParentActivityOrWindow` has a different type, which depends on the platform. Alternatively you can omit the optional parent window parameter to create a window, if you do not want to control where the sign-in dialog appears on the screen. This would be applicable for applications which are command line based, used to pass calls to any other backend service and do not need any windows for user interaction.
```csharp // net45
@@ -946,7 +946,7 @@ This method takes as parameters:
![DeviceCodeResult properties](https://user-images.githubusercontent.com/13203188/56024968-7af1b980-5d11-11e9-84c2-5be2ef306dc5.png)
-The following sample code presents the most current case, with explanations of the kind of exceptions you can get and their mitigation.
+The following sample code presents a synopsis of the most common cases, with explanations of the kinds of exceptions you can get and their mitigations. For a fully functional code sample, see [active-directory-dotnetcore-devicecodeflow-v2](https://github.com/azure-samples/active-directory-dotnetcore-devicecodeflow-v2) on GitHub.
```csharp private const string ClientId = "<client_guid>";
@@ -978,7 +978,7 @@ static async Task<AuthenticationResult> GetATokenForGraph()
} }
-private async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublicClientApplication pca)
+private static async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublicClientApplication pca)
{ try {
@@ -1002,6 +1002,7 @@ private async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublicClientA
Console.WriteLine(result.Account.Username); return result; }
+    // TODO: handle or throw all these exceptions depending on your app
catch (MsalServiceException ex) {
@@ -1035,6 +1036,7 @@ private async Task<AuthenticationResult> AcquireByDeviceCodeAsync(IPublicClientA
} } ```
+
# [Java](#tab/java) This extract is from the [MSAL Java dev samples](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/src/samples/public-client/).
@@ -1177,7 +1179,7 @@ The customization of token cache serialization to share the SSO state between AD
### Simple token cache serialization (MSAL only)
-The following example is a naive implementation of custom serialization of a token cache for desktop applications. Here, the user token cache is in a file in the same folder as the application.
+The following example is a naive implementation of custom serialization of a token cache for desktop applications. Here, the user token cache is a file in the same folder as the application or, if the app is a [packaged desktop application](https://docs.microsoft.com/windows/msix/desktop/desktop-to-uwp-behind-the-scenes), a file in a per-user, per-app folder. For the full code, see the following sample: [active-directory-dotnet-desktop-msgraph-v2](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2).
After you build the application, you enable the serialization by calling ``TokenCacheHelper.EnableSerialization()`` and passing the application `UserTokenCache`.
@@ -1196,16 +1198,28 @@ static class TokenCacheHelper
{ tokenCache.SetBeforeAccess(BeforeAccessNotification); tokenCache.SetAfterAccess(AfterAccessNotification);
+ try
+ {
+ // For packaged desktop apps (MSIX packages) the executing assembly folder is read-only.
+      + // In that case we need to use Windows.Storage.ApplicationData.Current.LocalCacheFolder.Path + "\msalcache.bin3"
+ // which is a per-app read/write folder for packaged apps.
+ // See https://docs.microsoft.com/windows/msix/desktop/desktop-to-uwp-behind-the-scenes
+ CacheFilePath = System.IO.Path.Combine(Windows.Storage.ApplicationData.Current.LocalCacheFolder.Path, "msalcache.bin3");
+ }
+ catch (System.InvalidOperationException)
+ {
+ // Fall back for an un-packaged desktop app
+      + CacheFilePath = System.Reflection.Assembly.GetExecutingAssembly().Location + ".msalcache.bin3";
+ }
} /// <summary> /// Path to the token cache /// </summary>
- public static readonly string CacheFilePath = System.Reflection.Assembly.GetExecutingAssembly().Location + ".msalcache.bin3";
+ public static string CacheFilePath { get; private set; }
private static readonly object FileLock = new object(); - private static void BeforeAccessNotification(TokenCacheNotificationArgs args) { lock (FileLock)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-app-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
@@ -172,7 +172,7 @@ services.AddControllers();
> - `$"api://{ClientId}` in all other cases (for v1.0 [access tokens](access-tokens.md)). > For details, see Microsoft.Identity.Web [source code](https://github.com/AzureAD/microsoft-identity-web/blob/d2ad0f5f830391a34175d48621a2c56011a45082/src/Microsoft.Identity.Web/Resource/RegisterValidAudience.cs#L70-L83).
-The preceding code snippet is extracted from the [ASP.NET Core web API incremental tutorial](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/blob/63087e83326e6a332d05fee6e1586b66d840b08f/1.%20Desktop%20app%20calls%20Web%20API/TodoListService/Startup.cs#L23-L28). The detail of **AddMicrosoftIdentityWebApiAuthentication** is available in [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web/blob/d2ad0f5f830391a34175d48621a2c56011a45082/src/Microsoft.Identity.Web/WebApiExtensions/WebApiServiceCollectionExtensions.cs#L27). This method calls [AddMicrosoftWebAPI](https://github.com/AzureAD/microsoft-identity-web/blob/d2ad0f5f830391a34175d48621a2c56011a45082/src/Microsoft.Identity.Web/WebApiExtensions/WebApiAuthenticationBuilderExtensions.cs#L58), which itself instructs the middleware on how to validate the token. For details see its [source code](https://github.com/AzureAD/microsoft-identity-web/blob/d2ad0f5f830391a34175d48621a2c56011a45082/src/Microsoft.Identity.Web/WebApiExtensions/WebApiAuthenticationBuilderExtensions.cs#L104-L122).
+The preceding code snippet is extracted from the [ASP.NET Core web API incremental tutorial](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/blob/63087e83326e6a332d05fee6e1586b66d840b08f/1.%20Desktop%20app%20calls%20Web%20API/TodoListService/Startup.cs#L23-L28). Details about **AddMicrosoftIdentityWebApiAuthentication** are available in [Microsoft.Identity.Web](microsoft-identity-web.md). This method calls [AddMicrosoftIdentityWebAPI](https://docs.microsoft.com/dotnet/api/microsoft.identity.web.microsoftidentitywebapiauthenticationbuilderextensions.addmicrosoftidentitywebapi?view=azure-dotnet-preview&preserve-view=true), which itself instructs the middleware on how to validate the token.
## Token validation
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/plan-device-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/plan-device-deployment.md
@@ -43,7 +43,7 @@ The key benefits of giving your devices an Azure AD identity:
* Increase productivity ΓÇô With Azure AD, your users can do [seamless sign-on (SSO)](./azuread-join-sso.md) to your on-premises and cloud resources, which enables them to be productive wherever they are.
-* Increase security ΓÇô Azure AD devices enable you to apply [Conditional Access (CA) policies](../conditional-access/require-managed-devices.md) to resources based on the identity of the device or user. CA policies can offer extra protection using [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). Joining a device to Azure AD is a prerequisite for increasing your security with a [Passwordless Authentication](../authentication/concept-authentication-passwordless.md) strategy.
+* Increase security ΓÇô Azure AD devices enable you to apply [Conditional Access policies](../conditional-access/require-managed-devices.md) to resources based on the identity of the device or user. Conditional Access policies can offer extra protection using [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). Joining a device to Azure AD is a prerequisite for increasing your security with a [Passwordless Authentication](../authentication/concept-authentication-passwordless.md) strategy.
* Improve user experience ΓÇô With device identities in Azure AD, you can provide your users with easy access to your organizationΓÇÖs cloud-based resources from both personal and corporate devices. Administrators can enable [Enterprise State Roaming](enterprise-state-roaming-overview.md) for a unified experience across all Windows devices.
@@ -128,7 +128,7 @@ Conditional Access <br>(Require hybrid Azure AD joined devices)| | | ![Checkmark
Registered devices are often managed with [Microsoft Intune](/mem/intune/enrollment/device-enrollment). Devices are enrolled in Intune in a number of ways, depending on the operating system.
-Azure AD registered devices provide support for Bring Your Own Devices (BYOD) and corporate owned devices to SSO to cloud resources. Access to resources is based on the Azure AD [CA policies](../conditional-access/require-managed-devices.md) applied to the device and the user.
+Azure AD registered devices enable SSO to cloud resources from both bring-your-own devices (BYOD) and corporate-owned devices. Access to resources is based on the Azure AD [Conditional Access policies](../conditional-access/require-managed-devices.md) applied to the device and the user.
### Registering devices
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/groups-settings-cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
@@ -24,7 +24,7 @@ This article contains instructions for using Azure Active Directory (Azure AD) P
For more information on how to prevent non-administrator users from creating security groups, set `Set-MsolCompanySettings -UsersPermissionToCreateGroupsEnabled $False` as described in [Set-MSOLCompanySettings](/powershell/module/msonline/set-msolcompanysettings).
-Microsoft 365 groups settings are configured using a Settings object and a SettingsTemplate object. Initially, you don't see any Settings objects in your directory, because your directory is configured with the default settings. To change the default settings, you must create a new settings object using a settings template. Settings templates are defined by Microsoft. There are several different settings templates. To configure Microsoft 365 group settings for your directory, you use the template named "Group.Unified". To configure Microsoft 365 group settings on a single group, use the template named "Group.Unified.Guest". This template is used to manage guest access to an Microsoft 365 group.
+Microsoft 365 groups settings are configured using a Settings object and a SettingsTemplate object. Initially, you don't see any Settings objects in your directory, because your directory is configured with the default settings. To change the default settings, you must create a new settings object using a settings template. Settings templates are defined by Microsoft. There are several different settings templates. To configure Microsoft 365 group settings for your directory, you use the template named "Group.Unified". To configure Microsoft 365 group settings on a single group, use the template named "Group.Unified.Guest". This template is used to manage guest access to a Microsoft 365 group.
The cmdlets are part of the Azure Active Directory PowerShell V2 module. For instructions how to download and install the module on your computer, see the article [Azure Active Directory PowerShell Version 2](/powershell/azure/active-directory/overview). You can install the version 2 release of the module from [the PowerShell gallery](https://www.powershellgallery.com/packages/AzureAD/).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/external-identities/self-service-sign-up-add-api-connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
@@ -315,7 +315,7 @@ Ensure that:
* The **Endpoint URL** of the API connector points to the correct API endpoint. * Your API explicitly checks for null values of received claims. * Your API responds as quickly as possible to ensure a fluid user experience.
- * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm." For Azure Functions, its recommended to use the [Premium plan](../../azure-functions/functions-scale.md#premium-plan).
+ * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm." For Azure Functions, it's recommended to use the [Premium plan](../../azure-functions/functions-premium-plan.md).
### Use logging
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/3-secure-access-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/3-secure-access-plan.md
@@ -170,7 +170,7 @@ Azure AD P2 and Microsoft 365 E5 have the full suite of security and governance
| Entitlement Management| **Add user via assignment or self-service accessΓÇï**| **Access packages**| **Access packages**| | | Office 365 Group| | Access to site(s) (and associated content) ΓÇïincluded with group| Access to teams (and associated content)ΓÇïincluded with group| | | Sensitivity labels| | **Manually and automatically classify and restrict access**| **Manually and automatically classify and restrict access**| **Manually and automatically classify and restrict access** |
-| Azure AD security groups| **CA policies for access not included in access packages**| | | |
+| Azure AD security groups| **Conditional Access policies for access not included in access packages**| | | |
### Entitlement Management 
@@ -190,7 +190,7 @@ You can achieve robust governance with Azure AD P1 and Microsoft 365 E3
| Azure AD B2B Collaboration| **Invite via email, OTP, self-service**| Direct B2B federation| **Periodic review per partner**| Remove account<br>Restrict sign in | | Microsoft or Office 365 Groups| | | | Expiration of or deletion of group.<br>Removal from group. | | Security groups| | **Add external users to security groups (org, team, project, etc.)**| | |
-| Conditional Access policies| | **Sign-in CA policies for external users**| | |
+| Conditional Access policies| | **Sign-in Conditional Access policies for external users**| | |
### Access to resources
@@ -199,7 +199,7 @@ You can achieve robust governance with Azure AD P1 and Microsoft 365 E3
| - |-|-|-|-| | Microsoft or Office 365 Groups| | **Access to site(s) included with group (and associated content)**|**Access to teams included with Microsoft 365 group (and associated content)**| | | Sensitivity labels| | Manually classify and restrict access| Manually classify and restrict access.| Manually classify to restrict and encrypt |
-| Conditional Access Policies| CA policies for access control| | | |
+| Conditional Access Policies| Conditional Access policies for access control| | | |
| Additional methods| | Restrict SharePoint site access granularly with security groups.<br>Disallow direct sharing.| **Restrict external invitations from within teams**| |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/7-secure-access-conditional-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
@@ -17,13 +17,13 @@ ms.collection: M365-identity-device-management
# Manage external access with Conditional Access policies
-[Conditional Access](../conditional-access/overview.md) is the tool Azure AD uses to bring together signals, enforce policies, and determine whether a user should be allowed access to resources. For detailed information on how to create and use Conditional Access policies (CA policies), see [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md).
+[Conditional Access](../conditional-access/overview.md) is the tool Azure AD uses to bring together signals, enforce policies, and determine whether a user should be allowed access to resources. For detailed information on how to create and use Conditional Access policies, see [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md).
![Diagram of Conditional Access signals and decisions](media/secure-external-access//7-conditional-access-signals.png)
-This article discusses applying CA policies to external users and assumes you don't have access to [Entitlement Management](../governance/entitlement-management-overview.md) functionality. CA policies can be and are used alongside Entitlement Management.
+This article discusses applying Conditional Access policies to external users and assumes you don't have access to [Entitlement Management](../governance/entitlement-management-overview.md) functionality. Conditional Access policies can be and are used alongside Entitlement Management.
Earlier in this document set, you [created a security plan](3-secure-access-plan.md) that outlined:
@@ -31,27 +31,27 @@ Earlier in this document set, you [created a security plan](3-secure-access-plan
* Sign-in requirements for external users.
-You will use that plan to create your CA policies for external access.
+You will use that plan to create your Conditional Access policies for external access.
> [!IMPORTANT] > Create a few external user test accounts so that you can test the policies you create before applying them to all external users. ## Conditional Access policies for external access
-The following are best practices related to governing external access with CA policies.
+The following are best practices related to governing external access with Conditional Access policies.
-* If you can't use connected organizations in Entitlement Management, create an Azure AD security group or Microsoft 365 group for each partner organization you work with. Assign all users from that partner to the group. You may then use those groups in CA policies.
+* If you can't use connected organizations in Entitlement Management, create an Azure AD security group or Microsoft 365 group for each partner organization you work with. Assign all users from that partner to the group. You may then use those groups in Conditional Access policies.
-* Create as few CA policies as possible. For applications that have the same access needs, add them all to the same policy.
+* Create as few Conditional Access policies as possible. For applications that have the same access needs, add them all to the same policy.
> [!NOTE]
- > CA policies can apply to a maximum of 250 applications. If more than 250 Apps have the same access needs, create duplicate policies. Policy A will apply to apps 1-250, policy B will apply to apps 251-500, etc.
+ > Conditional Access policies can apply to a maximum of 250 applications. If more than 250 apps have the same access needs, create duplicate policies. Policy A will apply to apps 1-250, policy B will apply to apps 251-500, and so on.
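The 250-app limit described in the note can be sketched as a simple batching step. The helper below is a minimal, hypothetical illustration (not a Microsoft Graph call): it splits an app list into chunks of at most 250 and generates policy names following the *ExternalAccess_actiontaken_AppGroup* convention suggested in this article. The app IDs are made up.

```python
# Hypothetical sketch: plan duplicate Conditional Access policies so that
# no single policy covers more than 250 applications. Names follow the
# ExternalAccess_<action>_<group> convention; app IDs are illustrative.

MAX_APPS_PER_POLICY = 250

def plan_policies(app_ids, action="Block", group="AllApps"):
    """Return (policy_name, app_chunk) pairs covering every app."""
    policies = []
    for i in range(0, len(app_ids), MAX_APPS_PER_POLICY):
        chunk = app_ids[i:i + MAX_APPS_PER_POLICY]
        name = f"ExternalAccess_{action}_{group}_{i // MAX_APPS_PER_POLICY + 1}"
        policies.append((name, chunk))
    return policies

plan = plan_policies([f"app-{n}" for n in range(600)], group="FinanceApps")
for name, chunk in plan:
    print(name, len(chunk))
# 600 apps yield three policies covering 250, 250, and 100 apps.
```

An admin script could feed the resulting chunks into whatever tooling actually creates the policies.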
* Clearly name policies specific to external access with a naming convention. One naming convention is *ExternalAccess_actiontaken_AppGroup*. For example, ExternalAccess_Block_FinanceApps. ## Block all external users from resources
-You can block external users from accessing specific sets of resources with CA policies. Once you've determined the set of resources to which you want to block access, create a policy.
+You can block external users from accessing specific sets of resources with Conditional Access policies. Once you've determined the set of resources to which you want to block access, create a policy.
To create a policy that blocks access for external users to a set of applications:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-ops-guide-govern https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-ops-guide-govern.md
@@ -54,7 +54,7 @@ There are changes that require special considerations when testing, from simple
| Scenario| Recommendation | |-|-| |Changing the authentication type from federated to PHS/PTA or vice-versa| Use [staged rollout](../hybrid/how-to-connect-staged-rollout.md) to test the impact of changing the authentication type.|
-|Rolling out a new conditional access (CA) policy or Identity Protection Policy|Create a new CA Policy and assign to test users.|
+|Rolling out a new Conditional Access policy or Identity Protection policy|Create a new Conditional Access policy and assign it to test users.|
|Onboarding a test environment of an application|Add the application to a production environment, hide it from the MyApps panel, and assign it to test users during the quality assurance (QA) phase.| |Changing of sync rules|Perform the changes in a test Azure AD Connect with the same configuration that is currently in production, also known as staging mode, and analyze CSExport Results. If satisfied, swap to production when ready.| |Changing of branding|Test in a separate test tenant.|
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/whats-new-archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
@@ -4121,7 +4121,7 @@ For more information about listing your application in the Azure AD app gallery,
**Service category:** Other **Product capability:** Directory
-New, step-by-step guidance about how to deploy Azure Active Directory (Azure AD), including self-service password reset (SSPR), single sign-on (SSO), Conditional Access (CA), App proxy, User provisioning, Active Directory Federation Services (ADFS) to Pass-through Authentication (PTA), and ADFS to Password hash sync (PHS).
+New, step-by-step guidance about how to deploy Azure Active Directory (Azure AD), including self-service password reset (SSPR), single sign-on (SSO), Conditional Access, App proxy, User provisioning, Active Directory Federation Services (ADFS) to Pass-through Authentication (PTA), and ADFS to Password hash sync (PHS).
To view the deployment guides, go to the [Identity Deployment Guides](./active-directory-deployment-plans.md) repo on GitHub. To provide feedback about the deployment guides, use the [Deployment Plan Feedback form](https://aka.ms/deploymentplanfeedback). If you have any questions about the deployment guides, contact us at [IDGitDeploy](mailto:idgitdeploy@microsoft.com).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/governance/deploy-access-reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/deploy-access-reviews.md
@@ -332,9 +332,9 @@ Groups that are synchronized from on-premises Active Directory cannot have an ow
> [!NOTE] > We recommend defining business policies that define how groups are created to ensure clear group ownership and accountability for regular review of membership.
-### Review membership of exclusion groups in CA policies
+### Review membership of exclusion groups in Conditional Access policies
-There are times when Conditional Access (CA) policies designed to keep your network secure shouldn't apply to all users. For example, a CA policy that only allows users to sign in while on the corporate network may not apply to the Sales team, which travels extensively. In that case, the Sales team members would be put into a group and that group would be excluded from the CA policy.
+There are times when Conditional Access policies designed to keep your network secure shouldn't apply to all users. For example, a Conditional Access policy that only allows users to sign in while on the corporate network may not apply to the Sales team, which travels extensively. In that case, the Sales team members would be put into a group and that group would be excluded from the Conditional Access policy.
Review such group memberships regularly, because the exclusion represents a potential risk if the wrong members are excluded from the requirement.
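One way to make that review concrete is to compare the exclusion group's current members against an approved list. The sketch below is a hypothetical illustration (the group name, member addresses, and approved roster are invented); in practice the member list would come from your directory export or an access review.

```python
# Hypothetical sketch: flag members of a Conditional Access exclusion
# group who are not on the approved exclusion roster. All names here
# are made-up examples.

def unexpected_exclusions(excluded_members, approved_members):
    """Return members excluded from the policy without approval, sorted."""
    return sorted(set(excluded_members) - set(approved_members))

# Approved Sales travelers allowed to bypass the corporate-network policy.
approved = {"ana@contoso.com", "raj@contoso.com"}
# Actual membership of the exclusion group, e.g. from a directory export.
excluded = {"ana@contoso.com", "raj@contoso.com", "intern@contoso.com"}

print(unexpected_exclusions(excluded, approved))
# ['intern@contoso.com']
```

Anyone the check flags should either be added to the approved roster deliberately or removed from the exclusion group.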
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-health-agent-install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
@@ -57,9 +57,9 @@ These URLs allow communication with Azure AD Connect Health service endpoints. L
| Domain environment | Required Azure service endpoints | | --- | --- |
-| General public | <li>&#42;.blob.core.windows.net </li><li>&#42;.aadconnecthealth.azure.com </li><li>&#42;.servicebus.windows.net - Port: 5671 (This endpoint isn't required in the latest version of the agent.)</li><li>&#42;.adhybridhealth.azure.com/</li><li>https:\//management.azure.com </li><li>https:\//policykeyservice.dc.ad.msft.net/</li><li>https:\//login.windows.net</li><li>https:\//login.microsoftonline.com</li><li>https:\//secure.aadcdn.microsoftonline-p.com </li><li>https:\//www.office.com (This endpoint is used only for discovery purposes during registration.)</li> |
-| Azure Germany | <li>&#42;.blob.core.cloudapi.de </li><li>&#42;.servicebus.cloudapi.de </li> <li>&#42;.aadconnecthealth.microsoftazure.de </li><li>https:\//management.microsoftazure.de </li><li>https:\//policykeyservice.aadcdi.microsoftazure.de </li><li>https:\//login.microsoftonline.de </li><li>https:\//secure.aadcdn.microsoftonline-p.de </li><li>https:\//www.office.de (This endpoint is used only for discovery purposes during registration.)</li> |
-| Azure Government | <li>&#42;.blob.core.usgovcloudapi.net </li> <li>&#42;.servicebus.usgovcloudapi.net </li> <li>&#42;.aadconnecthealth.microsoftazure.us </li> <li>https:\//management.usgovcloudapi.net </li><li>https:\//policykeyservice.aadcdi.azure.us </li><li>https:\//login.microsoftonline.us </li><li>https:\//secure.aadcdn.microsoftonline-p.com </li><li>https:\//www.office.com (This endpoint is used only for discovery purposes during registration.)</li> |
+| General public | <li>&#42;.blob.core.windows.net </li><li>&#42;.aadconnecthealth.azure.com </li><li>&#42;.servicebus.windows.net - Port: 5671 (This endpoint isn't required in the latest version of the agent.)</li><li>&#42;.adhybridhealth.azure.com/</li><li>https:\//management.azure.com </li><li>https:\//policykeyservice.dc.ad.msft.net/</li><li>https:\//login.windows.net</li><li>https:\//login.microsoftonline.com</li><li>https:\//secure.aadcdn.microsoftonline-p.com </li><li>https:\//www.office.com (This endpoint is used only for discovery purposes during registration.)</li> <li>https://aadcdn.msftauth.net</li><li>https://aadcdn.msauth.net</li> |
+| Azure Germany | <li>&#42;.blob.core.cloudapi.de </li><li>&#42;.servicebus.cloudapi.de </li> <li>&#42;.aadconnecthealth.microsoftazure.de </li><li>https:\//management.microsoftazure.de </li><li>https:\//policykeyservice.aadcdi.microsoftazure.de </li><li>https:\//login.microsoftonline.de </li><li>https:\//secure.aadcdn.microsoftonline-p.de </li><li>https:\//www.office.de (This endpoint is used only for discovery purposes during registration.)</li> <li>https://aadcdn.msftauth.net</li><li>https://aadcdn.msauth.net</li> |
+| Azure Government | <li>&#42;.blob.core.usgovcloudapi.net </li> <li>&#42;.servicebus.usgovcloudapi.net </li> <li>&#42;.aadconnecthealth.microsoftazure.us </li> <li>https:\//management.usgovcloudapi.net </li><li>https:\//policykeyservice.aadcdi.azure.us </li><li>https:\//login.microsoftonline.us </li><li>https:\//secure.aadcdn.microsoftonline-p.com </li><li>https:\//www.office.com (This endpoint is used only for discovery purposes during registration.)</li> <li>https://aadcdn.msftauth.net</li><li>https://aadcdn.msauth.net</li> |
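Before installing the agent, you can verify that the required endpoints in the table above are reachable from the server. The sketch below is a minimal connectivity probe, assuming the general-public endpoint list; the list is abbreviated, and for wildcard entries (such as `*.servicebus.windows.net`) you would substitute a representative host from your own environment.

```python
# Sketch: basic TCP reachability check for a subset of the Azure AD
# Connect Health endpoints listed above. The wildcard namespace host
# is a placeholder you must replace with your own.
import socket

ENDPOINTS = [
    ("management.azure.com", 443),
    ("login.windows.net", 443),
    ("login.microsoftonline.com", 443),
    ("aadcdn.msftauth.net", 443),
    # *.servicebus.windows.net uses port 5671 on older agent versions;
    # substitute your actual namespace host for this placeholder.
    ("example.servicebus.windows.net", 5671),
]

def can_reach(host, port, timeout=3):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        status = "ok" if can_reach(host, port) else "BLOCKED"
        print(f"{host}:{port} -> {status}")
```

Any endpoint reported as blocked points at a proxy or firewall rule to fix before registration.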
## Install the agent
active-directory https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/concept-identity-protection-risks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-risks.md
@@ -6,7 +6,7 @@ services: active-directory
ms.service: active-directory ms.subservice: identity-protection ms.topic: conceptual
-ms.date: 11/09/2020
+ms.date: 01/05/2021
ms.author: joflore author: MicrosoftGuyJFlo
@@ -60,6 +60,9 @@ These risks can be calculated in real-time or calculated offline using Microsoft
| Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection profiles your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate that the user's account is compromised, that messages are being intentionally hidden, and that the mailbox is being used to distribute spam or malware in your organization. | | Password spray | Offline | A password spray attack is where multiple usernames are attacked using common passwords in a unified brute force manner to gain unauthorized access. This risk detection is triggered when a password spray attack has been performed. | | Impossible travel | Offline | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#impossible-travel). This detection identifies two user activities (in a single or multiple sessions) originating from geographically distant locations within a time period shorter than the time it would have taken the user to travel from the first location to the second, indicating that a different user is using the same credentials. |
+| New country | Offline | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#activity-from-infrequent-country). This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization. |
+| Activity from anonymous IP address | Offline | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address. |
+| Suspicious inbox forwarding | Offline | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). This detection looks for suspicious email forwarding rules, for example, if a user created an inbox rule that forwards a copy of all emails to an external address. |
### Other risk detections
active-directory https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/overview-identity-protection.md
@@ -6,7 +6,7 @@ services: active-directory
ms.service: active-directory ms.subservice: identity-protection ms.topic: overview
-ms.date: 08/24/2020
+ms.date: 01/05/2021
ms.author: joflore author: MicrosoftGuyJFlo
@@ -47,13 +47,16 @@ Identity Protection identifies risks in the following classifications:
| Risk detection type | Description | | --- | --- |
-| Atypical travel | Sign in from an atypical location based on the user's recent sign-ins. |
| Anonymous IP address | Sign in from an anonymous IP address (for example: Tor browser, anonymizer VPNs). |
-| Unfamiliar sign-in properties | Sign in with properties we've not seen recently for the given user. |
+| Atypical travel | Sign in from an atypical location based on the user's recent sign-ins. |
| Malware linked IP address | Sign in from a malware linked IP address. |
+| Unfamiliar sign-in properties | Sign in with properties we've not seen recently for the given user. |
| Leaked Credentials | Indicates that the user's valid credentials have been leaked. | | Password spray | Indicates that multiple usernames are being attacked using common passwords in a unified, brute-force manner. | | Azure AD threat intelligence | Microsoft's internal and external threat intelligence sources have identified a known attack pattern. |
+| New country | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#activity-from-infrequent-country). |
+| Activity from anonymous IP address | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). |
+| Suspicious inbox forwarding | This detection is discovered by [Microsoft Cloud App Security (MCAS)](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). |
More detail on these risks and how/when they are calculated can be found in the article, [What is risk](concept-identity-protection-risks.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/f5-aad-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
@@ -68,7 +68,7 @@ Steps 1-4 in the diagram illustrate the front-end pre-authentication exchange be
|:------|:-----------| | 1. | User selects an application icon in the portal, resolving URL to the SAML SP (BIG-IP) | | 2. | The BIG-IP redirects user to SAML IDP (Azure AD) for pre-authentication|
-| 3. | Azure AD processes CA policies and [session controls](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-conditional-access-session) for authorization|
+| 3. | Azure AD processes Conditional Access policies and [session controls](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-conditional-access-session) for authorization|
| 4. | User redirects back to BIG-IP presenting the SAML claims issued by Azure AD | | 5. | BIG-IP requests any additional session information to include in [SSO](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sso) and [Role based access control (RBAC)](https://docs.microsoft.com/azure/role-based-access-control/overview) to the published service | | 6. | BIG-IP forwards the client request to the backend service
active-directory https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/reports-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/reports-faq.md
@@ -141,8 +141,8 @@ Here, you can view all the policies that impacted the sign-in and the result for
**A:** Conditional Access status can have the following values:
-* **Not Applied**: This means that there was no CA policy with the user and app in scope.
-* **Success**: This means that there was a CA policy with the user and app in scope and CA policies were successfully satisfied.
+* **Not Applied**: This means that there was no Conditional Access policy with the user and app in scope.
+* **Success**: This means that there was a Conditional Access policy with the user and app in scope and Conditional Access policies were successfully satisfied.
* **Failure**: The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access. **Q: What are all possible values for the Conditional Access policy result?**
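The three status values above can be tallied directly from exported sign-in records. The sketch below assumes the field name used by the Microsoft Graph `signIn` resource (`conditionalAccessStatus`); the sample records themselves are invented for illustration.

```python
# Sketch: summarize Conditional Access outcomes from sign-in log entries.
# Field name follows the Microsoft Graph signIn resource; sample data
# is made up.
from collections import Counter

def ca_status_summary(sign_ins):
    """Count sign-ins per Conditional Access status value."""
    return Counter(s.get("conditionalAccessStatus", "unknown") for s in sign_ins)

sample = [
    {"userPrincipalName": "ana@contoso.com", "conditionalAccessStatus": "success"},
    {"userPrincipalName": "raj@contoso.com", "conditionalAccessStatus": "notApplied"},
    {"userPrincipalName": "mia@contoso.com", "conditionalAccessStatus": "failure"},
    {"userPrincipalName": "ana@contoso.com", "conditionalAccessStatus": "success"},
]

print(ca_status_summary(sample))
# Counter({'success': 2, 'notApplied': 1, 'failure': 1})
```

A spike in `failure` entries for a given app is a quick signal that a policy's grant controls are blocking more sign-ins than expected.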
@@ -156,7 +156,7 @@ Here, you can view all the policies that impacted the sign-in and the result for
**Q: The policy name in the all sign-in report does not match the policy name in CA. Why?**
-**A:** The policy name in the all sign-in report is based on the CA policy name at the time of the sign-in. This can be inconsistent with the policy name in CA if you updated the policy name later, that is, after the sign-in.
+**A:** The policy name in the all sign-in report is based on the Conditional Access policy name at the time of the sign-in. This can be inconsistent with the policy name in Conditional Access if you updated the policy name later, that is, after the sign-in.
**Q: My sign-in was blocked due to a Conditional Access policy, but the sign-in activity report shows that the sign-in succeeded. Why?**
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/akamai-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/akamai-tutorial.md
@@ -48,7 +48,7 @@ Microsoft and Akamai EAA partnership allows the flexibility to meet your busines
#### Integration Scenario 1
-Akamai EAA is configured as a single application on the Azure AD. Admin can configure the CA Policy on the Application and once the conditions are satisfied users can gain access to the Akamai EAA Portal.
+Akamai EAA is configured as a single application on Azure AD. The admin can configure the Conditional Access policy on the application, and once the conditions are satisfied, users can gain access to the Akamai EAA Portal.
**Pros**:
@@ -58,13 +58,13 @@ Akamai EAA is configured as a single application on the Azure AD. Admin can conf
* Users end up having two application portals
-* Single Common CA Policy coverage for all Applications.
+* Single common Conditional Access policy coverage for all applications.
![Integration Scenario 1](./media/header-akamai-tutorial/scenario1.png) #### Integration Scenario 2
-Akamai EAA Application is set up individually on the Azure AD Portal. Admin can configure Individual he CA Policy on the Application(s) and once the conditions are satisfied users can directly be redirected to the specific application.
+Akamai EAA applications are set up individually on the Azure AD portal. The admin can configure an individual Conditional Access policy on the application(s), and once the conditions are satisfied, users can be redirected directly to the specific application.
**Pros**:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/benchling-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/benchling-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/05/2019
+ms.date: 12/16/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Benchling with Azure Active Dire
* Enable your users to be automatically signed-in to Benchling with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -34,8 +32,6 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. -- * Benchling supports **SP and IDP** initiated SSO * Benchling supports **Just In Time** user provisioning
@@ -44,7 +40,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Benchling into Azure AD, you need to add Benchling from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
@@ -52,11 +48,11 @@ To configure the integration of Benchling into Azure AD, you need to add Benchli
1. Select **Benchling** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Benchling
+## Configure and test Azure AD SSO for Benchling
Configure and test Azure AD SSO with Benchling using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Benchling.
-To configure and test Azure AD SSO with Benchling, complete the following building blocks:
+To configure and test Azure AD SSO with Benchling, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +65,9 @@ To configure and test Azure AD SSO with Benchling, complete the following buildi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Benchling** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Benchling** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -126,15 +122,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Benchling**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Benchling SSO
@@ -147,16 +137,21 @@ In this section, a user called B.Simon is created in Benchling. Benchling suppor
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Benchling Sign-on URL where you can initiate the login flow.
+
+* Go to the Benchling Sign-on URL directly and initiate the login flow from there.
-When you click the Benchling tile in the Access Panel, you should be automatically signed in to the Benchling for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Benchling for which you set up the SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Benchling tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Benchling for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Benchling with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Benchling you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/bic-cloud-design-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bic-cloud-design-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 02/07/2020
+ms.date: 12/16/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate BIC Cloud Design with Azure Acti
* Enable your users to be automatically signed-in to BIC Cloud Design with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -38,13 +36,12 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.

* BIC Cloud Design supports **SP** initiated SSO
-* Once you configure the BIC Cloud Design you can enforce session controls, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding BIC Cloud Design from the gallery

To configure the integration of BIC Cloud Design into Azure AD, you need to add BIC Cloud Design from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
@@ -52,11 +49,11 @@ To configure the integration of BIC Cloud Design into Azure AD, you need to add
1. Select **BIC Cloud Design** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for BIC Cloud Design
+## Configure and test Azure AD SSO for BIC Cloud Design
Configure and test Azure AD SSO with BIC Cloud Design using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in BIC Cloud Design.
-To configure and test Azure AD SSO with BIC Cloud Design, complete the following building blocks:
+To configure and test Azure AD SSO with BIC Cloud Design, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +66,9 @@ To configure and test Azure AD SSO with BIC Cloud Design, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **BIC Cloud Design** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **BIC Cloud Design** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -136,15 +133,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **BIC Cloud Design**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure BIC Cloud Design SSO
@@ -157,16 +148,15 @@ In this section, you create a user called B.Simon in BIC Cloud Design. Work with
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the BIC Cloud Design tile in the Access Panel, you should be automatically signed in to the BIC Cloud Design for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal. This will redirect to BIC Cloud Design Sign-on URL where you can initiate the login flow.
-## Additional resources
+* Go to BIC Cloud Design Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the BIC Cloud Design tile in the My Apps, this will redirect to BIC Cloud Design Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try BIC Cloud Design with Azure AD](https://aad.portal.azure.com/)
\ No newline at end of file
+Once you configure BIC Cloud Design you can enforce session controls, which protect exfiltration and infiltration of your organization's sensitive data in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/bridgelineunbound-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bridgelineunbound-tutorial.md
@@ -9,20 +9,16 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 02/08/2019
+ms.date: 12/16/2020
ms.author: jeedes
---

# Tutorial: Azure Active Directory integration with Bridgeline Unbound
-In this tutorial, you learn how to integrate Bridgeline Unbound with Azure Active Directory (Azure AD).
-Integrating Bridgeline Unbound with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Bridgeline Unbound with Azure Active Directory (Azure AD). When you integrate Bridgeline Unbound with Azure AD, you can:
-* You can control in Azure AD who has access to Bridgeline Unbound.
-* You can enable your users to be automatically signed-in to Bridgeline Unbound (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Bridgeline Unbound.
+* Enable your users to be automatically signed-in to Bridgeline Unbound with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
@@ -42,60 +38,38 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of Bridgeline Unbound into Azure AD, you need to add Bridgeline Unbound from the gallery to your list of managed SaaS apps.
-**To add Bridgeline Unbound from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Bridgeline Unbound**, select **Bridgeline Unbound** from result panel then click **Add** button to add the application.
-
- ![Bridgeline Unbound in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Bridgeline Unbound** in the search box.
+1. Select **Bridgeline Unbound** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Bridgeline Unbound based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Bridgeline Unbound needs to be established.
-To configure and test Azure AD single sign-on with Bridgeline Unbound, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Bridgeline Unbound
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Bridgeline Unbound Single Sign-On](#configure-bridgeline-unbound-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Bridgeline Unbound test user](#create-bridgeline-unbound-test-user)** - to have a counterpart of Britta Simon in Bridgeline Unbound that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Bridgeline Unbound using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Bridgeline Unbound.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Bridgeline Unbound, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+2. **[Configure Bridgeline Unbound SSO](#configure-bridgeline-unbound-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create Bridgeline Unbound test user](#create-bridgeline-unbound-test-user)** - to have a counterpart of Britta Simon in Bridgeline Unbound that is linked to the Azure AD representation of user.
+6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Bridgeline Unbound, perform the following steps:
+### Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Bridgeline Unbound** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Bridgeline Unbound** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `iApps_UPSTT_<ENVIRONMENTNAME>`
@@ -104,100 +78,71 @@ To configure Azure AD single sign-on with Bridgeline Unbound, perform the follow
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.iapps.com/CommonLogin/login?<INSTANCENAME>`
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Bridgeline Unbound Client support team](mailto:support@iapps.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Bridgeline Unbound Client support team](mailto:support@iapps.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
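As a quick illustration of how those placeholder patterns get filled in, here is a minimal sketch. All values (`contoso`, `PROD`, `contosoSite`) are hypothetical; your real subdomain, environment, and instance names come from the Bridgeline Unbound support team.

```python
# Hypothetical tenant values -- substitute the real ones supplied by the
# Bridgeline Unbound support team.
subdomain = "contoso"
environment = "PROD"
instance = "contosoSite"

# Patterns quoted in the Basic SAML Configuration section above:
identifier = f"iApps_UPSTT_{environment}"
sign_on_url = f"https://{subdomain}.iapps.com/CommonLogin/login?{instance}"

print(identifier)   # iApps_UPSTT_PROD
print(sign_on_url)  # https://contoso.iapps.com/CommonLogin/login?contosoSite
```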
6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/certificatebase64.png)
7. On the **Set up Bridgeline Unbound** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
- c. Logout URL
-
-### Configure Bridgeline Unbound Single Sign-On
-
-To configure single sign-on on **Bridgeline Unbound** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Bridgeline Unbound support team](mailto:support@iapps.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field, type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Bridgeline Unbound.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Bridgeline Unbound.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Bridgeline Unbound**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Bridgeline Unbound**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **Bridgeline Unbound**.
+## Configure Bridgeline Unbound SSO
- ![The Bridgeline Unbound link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To configure single sign-on on **Bridgeline Unbound** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Bridgeline Unbound support team](mailto:support@iapps.com). They set this setting to have the SAML SSO connection set properly on both sides.
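Before forwarding the downloaded **Certificate (Base64)** file, you can sanity-check that the base64 payload isn't corrupt. A minimal sketch (not part of the tutorial's official steps, and not a full X.509 validation):

```python
import base64
import re


def pem_to_der(pem_text: str) -> bytes:
    """Strip the PEM armor from a Certificate (Base64) download and return
    the raw DER bytes. Raises binascii.Error if the base64 body is corrupt
    -- a quick integrity check only, not X.509 validation."""
    body = re.sub(r"-----(BEGIN|END) CERTIFICATE-----", "", pem_text)
    # Remove line breaks/whitespace, then decode strictly.
    return base64.b64decode("".join(body.split()), validate=True)
```

If the call raises, re-download the certificate from the **SAML Signing Certificate** section rather than sending a damaged file.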
-5. In the **Users and groups** dialog, select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Bridgeline Unbound test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, a user called Britta Simon is created in Bridgeline Unbound. Bridgeline Unbound supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Bridgeline Unbound, a new one is created after authentication.
-7. In the **Add Assignment** dialog, click the **Assign** button.
+## Test SSO
-### Create Bridgeline Unbound test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, a user called Britta Simon is created in Bridgeline Unbound. Bridgeline Unbound supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Bridgeline Unbound, a new one is created after authentication.
+#### SP initiated:
-> [!Note]
-> If you need to create a user manually, contact [Bridgeline Unbound support team](mailto:support@iapps.com).
+* Click on **Test this application** in Azure portal. This will redirect to Bridgeline Unbound Sign on URL where you can initiate the login flow.
-### Test single sign-on
+* Go to Bridgeline Unbound Sign-on URL directly and initiate the login flow from there.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+#### IDP initiated:
-When you click the Bridgeline Unbound tile in the Access Panel, you should be automatically signed in to the Bridgeline Unbound for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Bridgeline Unbound for which you set up the SSO.
-## Additional Resources
+You can also use Microsoft My Apps to test the application in any mode. When you click the Bridgeline Unbound tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Bridgeline Unbound for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
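Under the hood, the SP-initiated option above begins with a SAML `AuthnRequest` sent over the HTTP-Redirect binding: the XML is DEFLATE-compressed, base64-encoded, and URL-encoded as the `SAMLRequest` query parameter. A simplified sketch of that binding (the URLs are hypothetical, not the real Bridgeline Unbound or Azure AD endpoints, and real requests carry more attributes and usually a signature):

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib


def build_sp_redirect_url(idp_sso_url: str, sp_entity_id: str, acs_url: str) -> str:
    """Assemble a minimal SP-initiated redirect URL per the SAML 2.0
    HTTP-Redirect binding (illustrative only)."""
    issue_instant = datetime.datetime.now(datetime.timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    )
    authn_request = (
        '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'ID="_{uuid.uuid4().hex}" Version="2.0" IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        '<saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
        f"{sp_entity_id}</saml:Issuer></samlp:AuthnRequest>"
    )
    # Raw DEFLATE: strip the 2-byte zlib header and 4-byte checksum.
    deflated = zlib.compress(authn_request.encode("utf-8"))[2:-4]
    saml_request = base64.b64encode(deflated).decode("ascii")
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})
```

Clicking the tile or **Test this application** effectively kicks off this exchange on your behalf; you never construct the request by hand.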
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
\ No newline at end of file
+Once you configure Bridgeline Unbound you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/headerf5-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/headerf5-tutorial.md
@@ -193,7 +193,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Add Assignment** dialog, click the **Assign** button.
1. Click on **Conditional Access**.
1. Click on **New Policy**.
-1. You can now see your F5 App as a resource for CA Policy and apply any conditional access including Multifactor Auth, Device based access control or Identity Protection Policy.
+1. You can now see your F5 App as a resource for Conditional Access policy and apply any conditional access including Multifactor Auth, Device based access control or Identity Protection Policy.
## Configure F5 SSO
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/kerbf5-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kerbf5-tutorial.md
@@ -193,7 +193,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Add Assignment** dialog, click the **Assign** button.
1. Click on **Conditional Access**.
1. Click on **New Policy**.
-1. You can now see your F5 App as a resource for CA Policy and apply any conditional access including Multifactor Auth, Device based access control or Identity Protection Policy.
+1. You can now see your F5 App as a resource for Conditional Access policy and apply any conditional access including Multifactor Auth, Device based access control or Identity Protection Policy.
## Configure F5 SSO
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/palo-alto-networks-globalprotect-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/palo-alto-networks-globalprotect-tutorial.md
@@ -65,7 +65,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Palo Alto Networks - GlobalProtect** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -142,12 +142,12 @@ In this section, a user called B.Simon is created in Palo Alto Networks - Global
In this section, you test your Azure AD single sign-on configuration with the following options.
-1. Click on **Test this application** in Azure portal. This will redirect to Palo Alto Networks - GlobalProtect Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Palo Alto Networks - GlobalProtect Sign-on URL where you can initiate the login flow.
-2. Go to Palo Alto Networks - GlobalProtect Sign-on URL directly and initiate the login flow from there.
+* Go to Palo Alto Networks - GlobalProtect Sign-on URL directly and initiate the login flow from there.
-3. You can use Microsoft Access Panel. When you click the Palo Alto Networks - GlobalProtect tile in the Access Panel, you should be automatically signed in to the Palo Alto Networks - GlobalProtect for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Palo Alto Networks - GlobalProtect tile in the My Apps, you should be automatically signed in to the Palo Alto Networks - GlobalProtect for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-## Next Steps
+## Next steps
Once you configure Palo Alto Networks - GlobalProtect you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/paloaltoadmin-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/paloaltoadmin-tutorial.md
@@ -66,7 +66,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Palo Alto Networks - Admin UI** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -237,13 +237,13 @@ Palo Alto Networks - Admin UI supports just-in-time user provisioning. If a user
In this section, you test your Azure AD single sign-on configuration with the following options.
-1. Click on **Test this application** in Azure portal. This will redirect to Palo Alto Networks - Admin UI Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Palo Alto Networks - Admin UI Sign-on URL where you can initiate the login flow.
-2. Go to Palo Alto Networks - Admin UI Sign-on URL directly and initiate the login flow from there.
+* Go to Palo Alto Networks - Admin UI Sign-on URL directly and initiate the login flow from there.
-3. You can use Microsoft Access Panel. When you click the Palo Alto Networks - Admin UI tile in the Access Panel, you should be automatically signed in to the Palo Alto Networks - Admin UI for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Palo Alto Networks - Admin UI tile in the My Apps, you should be automatically signed in to the Palo Alto Networks - Admin UI for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-## Next Steps
+## Next steps
Once you configure Palo Alto Networks - Admin UI you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/paloaltonetworks-aperture-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/paloaltonetworks-aperture-tutorial.md
@@ -65,7 +65,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Palo Alto Networks - Aperture** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -168,9 +168,9 @@ In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Palo Alto Networks - Aperture for which you set up the SSO.
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Palo Alto Networks - Aperture tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Palo Alto Networks - Aperture for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Palo Alto Networks - Aperture tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Palo Alto Networks - Aperture for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-## Next Steps
+## Next steps
Once you configure Palo Alto Networks - Aperture you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/paloaltonetworks-captiveportal-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/paloaltonetworks-captiveportal-tutorial.md
@@ -66,7 +66,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Palo Alto Networks Captive Portal** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -144,10 +144,11 @@ Next, create a user named *Britta Simon* in Palo Alto Networks Captive Portal. P
In this section, you test your Azure AD single sign-on configuration with the following options.
-Click on Test this application in Azure portal and you should be automatically signed in to the Palo Alto Networks Captive Portal for which you set up the SSO
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the Palo Alto Networks Captive Portal for which you set up the SSO.
-You can use Microsoft Access Panel. When you click the Palo Alto Networks Captive Portal tile in the Access Panel, you should be automatically signed in to the Palo Alto Networks Captive Portal for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Palo Alto Networks Captive Portal tile in My Apps, you should be automatically signed in to the Palo Alto Networks Captive Portal for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-## Next Steps
+
+## Next steps
Once you configure Palo Alto Networks Captive Portal you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sap-customer-cloud-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sap-customer-cloud-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 09/20/2019
+ms.date: 12/28/2020
ms.author: jeedes ---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate SAP Cloud for Customer with Azur
* Enable your users to be automatically signed-in to SAP Cloud for Customer with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -40,14 +39,14 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of SAP Cloud for Customer into Azure AD, you need to add SAP Cloud for Customer from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **SAP Cloud for Customer** in the search box. 1. Select **SAP Cloud for Customer** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for SAP Cloud for Customer
+## Configure and test Azure AD SSO for SAP Cloud for Customer
Configure and test Azure AD SSO with SAP Cloud for Customer using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SAP Cloud for Customer.
@@ -64,9 +63,9 @@ To configure and test Azure AD SSO with SAP Cloud for Customer, complete the fol
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **SAP Cloud for Customer** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **SAP Cloud for Customer** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -129,15 +128,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **SAP Cloud for Customer**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure SAP Cloud for Customer SSO
@@ -186,16 +179,15 @@ To enable Azure AD users to sign in to SAP Cloud for Customer, they must be prov
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the SAP Cloud for Customer tile in the Access Panel, you should be automatically signed in to the SAP Cloud for Customer for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This redirects to the SAP Cloud for Customer Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the SAP Cloud for Customer Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the SAP Cloud for Customer tile in My Apps, this redirects to the SAP Cloud for Customer Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try SAP Cloud for Customer with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure SAP Cloud for Customer you can enforce session controls, which protect against exfiltration and infiltration of your organization's sensitive data in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sap-hana-cloud-platform-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sap-hana-cloud-platform-tutorial.md
@@ -9,20 +9,16 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/17/2018
+ms.date: 12/27/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with SAP Cloud Platform
-In this tutorial, you learn how to integrate SAP Cloud Platform with Azure Active Directory (Azure AD).
-Integrating SAP Cloud Platform with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate SAP Cloud Platform with Azure Active Directory (Azure AD). When you integrate SAP Cloud Platform with Azure AD, you can:
-* You can control in Azure AD who has access to SAP Cloud Platform.
-* You can enable your users to be automatically signed-in to SAP Cloud Platform (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to SAP Cloud Platform.
+* Enable your users to be automatically signed-in to SAP Cloud Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
@@ -47,57 +43,37 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of SAP Cloud Platform into Azure AD, you need to add SAP Cloud Platform from the gallery to your list of managed SaaS apps.
-**To add SAP Cloud Platform from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **SAP Cloud Platform**, select **SAP Cloud Platform** from result panel then click **Add** button to add the application.
-
- ![SAP Cloud Platform in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with SAP Cloud Platform based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in SAP Cloud Platform needs to be established.
-
-To configure and test Azure AD single sign-on with SAP Cloud Platform, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **SAP Cloud Platform** in the search box.
+1. Select **SAP Cloud Platform** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure SAP Cloud Platform Single Sign-On](#configure-sap-cloud-platform-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create SAP Cloud Platform test user](#create-sap-cloud-platform-test-user)** - to have a counterpart of Britta Simon in SAP Cloud Platform that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for SAP Cloud Platform
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with SAP Cloud Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SAP Cloud Platform.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with SAP Cloud Platform, perform the following steps:
-To configure Azure AD single sign-on with SAP Cloud Platform, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+2. **[Configure SAP Cloud Platform SSO](#configure-sap-cloud-platform-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create SAP Cloud Platform test user](#create-sap-cloud-platform-test-user)** - to have a counterpart of Britta Simon in SAP Cloud Platform that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **SAP Cloud Platform** application integration page, select **Single sign-on**.
+### Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **SAP Cloud Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
![SAP Cloud Platform Domain and URLs single sign-on information](common/sp-identifier-reply.png)
@@ -134,7 +110,31 @@ To configure Azure AD single sign-on with SAP Cloud Platform, perform the follow
![The Certificate download link](common/metadataxml.png)
-### Configure SAP Cloud Platform Single Sign-On
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Cloud Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SAP Cloud Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure SAP Cloud Platform SSO
1. In a different web browser window, sign on to the SAP Cloud Platform Cockpit at `https://account.<landscape host>.ondemand.com/cockpit` (for example: https://account.hanatrial.ondemand.com/cockpit).
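Configuring the trust on the SAP Cloud Platform side requires values from the Azure AD federation metadata, notably the entity ID and the signing certificate. As a hedged sketch (the metadata fragment below is invented for illustration, and a real metadata document carries far more content), this is how the base64 signing certificate can be pulled out of an IdP metadata document with only the Python standard library:

```python
import xml.etree.ElementTree as ET

# Azure AD publishes federation metadata at a tenant-specific URL of the form
#   https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml
# The fragment below is a minimal invented sample, not real tenant metadata.
SAMPLE_METADATA = """<EntityDescriptor
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIBBASE64SAMPLE</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

def signing_certificates(metadata_xml):
    """Return the base64 signing certificate(s) advertised in IdP metadata."""
    root = ET.fromstring(metadata_xml)
    certs = []
    for kd in root.findall(".//md:IDPSSODescriptor/md:KeyDescriptor", NS):
        # A KeyDescriptor without a 'use' attribute applies to any purpose.
        if kd.get("use") in (None, "signing"):
            for cert in kd.findall(".//ds:X509Certificate", NS):
                certs.append(cert.text.strip())
    return certs

print(signing_certificates(SAMPLE_METADATA))
```

In practice you would simply download the Federation Metadata XML from the portal as the tutorial describes; the sketch only shows what the cockpit's trust configuration consumes from it.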
@@ -219,57 +219,6 @@ For example, if the assertion contains the attribute "*contract=temporary*", you
Use assertion-based groups when you want to simultaneously assign many users to one or more roles of applications in your SAP Cloud Platform account. If you want to assign only a single or small number of users to specific roles, we recommend assigning them directly in the "**Authorizations**" tab of the SAP Cloud Platform cockpit.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to SAP Cloud Platform.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **SAP Cloud Platform**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, type and select **SAP Cloud Platform**.
-
- ![The SAP Cloud Platform link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create SAP Cloud Platform test user In order to enable Azure AD users to log in to SAP Cloud Platform, you must assign roles in the SAP Cloud Platform to them.
@@ -292,16 +241,16 @@ In order to enable Azure AD users to log in to SAP Cloud Platform, you must assi
e. Click **Save**.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the SAP Cloud Platform tile in the Access Panel, you should be automatically signed in to the SAP Cloud Platform for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This redirects to the SAP Cloud Platform Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the SAP Cloud Platform Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the SAP Cloud Platform tile in My Apps, you should be automatically signed in to the SAP Cloud Platform for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure SAP Cloud Platform you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sap-netweaver-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sap-netweaver-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 09/10/2020
+ms.date: 12/11/2020
ms.author: jeedes ---
@@ -133,7 +133,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **SAP NetWeaver** application integration page, find the **Manage** section and select **Single sign-on**. 1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -202,7 +202,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure SAP NetWeaver using SAML
@@ -257,7 +257,11 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Configure Single Sign-On 12](./media/sapnetweaver-tutorial/tutorial_sapnetweaver_nameid.png)
-14. Note that **user ID Source** and **user ID mapping mode** values determine the link between SAP user and Azure AD claim.
+1. Set the **User ID Source** value to **Assertion Attribute**, the **User ID mapping mode** value to **Email**, and the **Assertion Attribute Name** to `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name`.
+
+ ![Configure Single Sign-On ](./media/sapnetweaver-tutorial/nameid-format.png)
+
+14. Note that **User ID Source** and **User ID mapping mode** values determine the link between SAP user and Azure AD claim.
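This mapping keys off the `.../claims/name` attribute that arrives in the SAML assertion's AttributeStatement. As an illustration only (the assertion fragment below is invented and unsigned; real Azure AD assertions are signed and carry more attributes), this sketch shows the value SAP NetWeaver would pick up for the user ID:

```python
import xml.etree.ElementTree as ET

NAME_CLAIM = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"

# Invented, unsigned fragment for illustration purposes only.
SAMPLE_ASSERTION = """<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <AttributeStatement>
    <Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name">
      <AttributeValue>B.Simon@contoso.com</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>"""

NS = {"a": "urn:oasis:names:tc:SAML:2.0:assertion"}

def user_id_from_assertion(assertion_xml):
    """Return the attribute value that the Email mapping mode would match."""
    root = ET.fromstring(assertion_xml)
    for attr in root.findall(".//a:AttributeStatement/a:Attribute", NS):
        if attr.get("Name") == NAME_CLAIM:
            value = attr.find("a:AttributeValue", NS)
            return value.text if value is not None else None
    return None

print(user_id_from_assertion(SAMPLE_ASSERTION))
```

The assumption here is that the claim carries the user's email-style UPN, which is why the **Email** mapping mode can link it to the SAP user record.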
#### Scenario: SAP User to Azure AD user mapping.
@@ -331,7 +335,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Configure OAuth](./media/sapnetweaver-tutorial/oauth03.png) > [!NOTE]
- > Message `soft state status is not supported` – can be ignored, as no problem. For more details, refer [here](https://help.sap.com/doc/saphelp_nw74/7.4.16/1e/c60c33be784846aad62716b4a1df39/content.htm?no_cache=true)
+ > Message `soft state status is not supported` – can be ignored; it does not indicate a problem. For more details, refer [here](https://help.sap.com/doc/saphelp_nw74/7.4.16/1e/c60c33be784846aad62716b4a1df39/content.htm?no_cache=true).
### Create a service user for the OAuth 2.0 Client
@@ -340,7 +344,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
2. When registering an OAuth Client we use the `SAML Bearer Grant type`. >[!NOTE]
- >For more details, refer OAuth 2.0 Client Registration for the SAML Bearer Grant Type [here](https://wiki.scn.sap.com/wiki/display/Security/OAuth+2.0+Client+Registration+for+the+SAML+Bearer+Grant+Type)
+ >For more details, refer to OAuth 2.0 Client Registration for the SAML Bearer Grant Type [here](https://wiki.scn.sap.com/wiki/display/Security/OAuth+2.0+Client+Registration+for+the+SAML+Bearer+Grant+Type).
3. tcode SU01: create user CLIENT1 as `System type` and assign a password. Save it, because you need to provide the credential to the API programmer, who should embed it with the username in the calling code. No profile or role should be assigned.
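The SAML Bearer grant used above is standardized in RFC 7522: the client posts the base64url-encoded assertion to the token endpoint with grant type `urn:ietf:params:oauth:grant-type:saml2-bearer`. A minimal sketch of building that form-encoded request body follows; the assertion bytes and the scope value are placeholders, not values from this tutorial (only the CLIENT1 service user name comes from the step above):

```python
import base64
from urllib.parse import urlencode, parse_qs

def saml_bearer_body(assertion_xml, client_id, scope):
    """Build an application/x-www-form-urlencoded body per RFC 7522."""
    # The assertion is base64url-encoded without padding.
    assertion = base64.urlsafe_b64encode(assertion_xml).rstrip(b"=").decode("ascii")
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:saml2-bearer",
        "assertion": assertion,
        "client_id": client_id,  # e.g. the OAuth client tied to the CLIENT1 service user
        "scope": scope,          # placeholder scope name
    })

body = saml_bearer_body(b"<Assertion>...</Assertion>", "CLIENT1", "ODATA_SERVICE")
fields = parse_qs(body)
print(fields["grant_type"][0])
```

This body would be POSTed over HTTPS to the SAP NetWeaver token endpoint with basic authentication for the service user; endpoint path and authentication details depend on the system configuration.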
@@ -372,4 +376,4 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Next Steps
-Once you configure Azure AD SAP NetWeaver you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
\ No newline at end of file
+Once you configure Azure AD SAP NetWeaver you can enforce Session Control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/saphana-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/saphana-tutorial.md
@@ -9,20 +9,16 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/27/2018
+ms.date: 12/27/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with SAP HANA
-In this tutorial, you learn how to integrate SAP HANA with Azure Active Directory (Azure AD).
-Integrating SAP HANA with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate SAP HANA with Azure Active Directory (Azure AD). When you integrate SAP HANA with Azure AD, you can:
-* You can control in Azure AD who has access to SAP HANA.
-* You can enable your users to be automatically signed-in to SAP HANA (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to SAP HANA.
+* Enable your users to be automatically signed-in to SAP HANA with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
@@ -48,72 +44,51 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* SAP HANA supports **IDP** initiated SSO * SAP HANA supports **just-in-time** user provisioning
-## Adding SAP HANA from the gallery
-
-To configure the integration of SAP HANA into Azure AD, you need to add SAP HANA from the gallery to your list of managed SaaS apps.
-
-**To add SAP HANA from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **SAP HANA**, select **SAP HANA** from result panel then click **Add** button to add the application.
-
- ![SAP HANA in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with SAP HANA based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in SAP HANA needs to be established.
-
-To configure and test Azure AD single sign-on with SAP HANA, you need to complete the following building blocks:
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure SAP HANA Single Sign-On](#configure-sap-hana-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create SAP HANA test user](#create-sap-hana-test-user)** - to have a counterpart of Britta Simon in SAP HANA that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-### Configure Azure AD single sign-on
+## Adding SAP HANA from the gallery
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure the integration of SAP HANA into Azure AD, you need to add SAP HANA from the gallery to your list of managed SaaS apps.
-To configure Azure AD single sign-on with SAP HANA, perform the following steps:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **SAP HANA** in the search box.
+1. Select **SAP HANA** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. In the [Azure portal](https://portal.azure.com/), on the **SAP HANA** application integration page, select **Single sign-on**.
+## Configure and test Azure AD SSO for SAP HANA
- ![Configure single sign-on link](common/select-sso.png)
+Configure and test Azure AD SSO with SAP HANA using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SAP HANA.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+To configure and test Azure AD SSO with SAP HANA, perform the following steps:
- ![Single sign-on select mode](common/select-saml-option.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure SAP HANA SSO](#configure-sap-hana-sso)** - to configure the Single Sign-On settings on the application side.
+ 1. **[Create SAP HANA test user](#create-sap-hana-test-user)** - to have a counterpart of B.Simon in SAP HANA that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+### Configure Azure AD SSO
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
+1. In the Azure portal, on the **SAP HANA** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![SAP HANA Domain and URLs single sign-on information](common/idp-intiated.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- a. In the **Identifier** text box, type the following:
- `HA100`
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ In the **Reply URL** text box, type a URL using the following pattern:
   `https://<Customer-SAP-instance-url>/sap/hana/xs/saml/login.xscfunc`

   > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [SAP HANA Client support team](https://cloudplatform.sap.com/contact.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Reply URL value is not real. Update the value with the actual Reply URL. Contact the [SAP HANA Client support team](https://cloudplatform.sap.com/contact.html) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
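The Reply URL above follows a fixed pattern around the customer instance host, so it can be assembled mechanically. A minimal sketch (the host name below is a placeholder, not a real instance):

```python
def reply_url(instance_host: str) -> str:
    """Build the SAP HANA SAML Reply URL from a customer instance host."""
    return f"https://{instance_host}/sap/hana/xs/saml/login.xscfunc"

print(reply_url("myhana.example.com"))
# https://myhana.example.com/sap/hana/xs/saml/login.xscfunc
```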
5. SAP HANA application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog.
@@ -137,7 +112,31 @@ To configure Azure AD single sign-on with SAP HANA, perform the following steps:
![The Certificate download link](common/metadataxml.png)
-### Configure SAP HANA Single Sign-On
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP HANA.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SAP HANA**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure SAP HANA SSO
1. To configure single sign-on on the SAP HANA side, sign in to your **HANA XSA Web Console** by going to the respective HTTPS endpoint.
@@ -169,56 +168,6 @@ To configure Azure AD single sign-on with SAP HANA, perform the following steps:
![assertion_timeout setting](./media/saphana-tutorial/sap7.png)
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to SAP HANA.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **SAP HANA**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, type and select **SAP HANA**.
-
- ![The SAP HANA link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
### Create SAP HANA test user
@@ -252,16 +201,15 @@ If you need to create a user manually, take the following steps:
6. Save the user.
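On the HANA side, the manual steps amount to creating a database user whose identity is linked to the Azure AD (SAML) identity. A hedged sketch that only assembles the statement text; the user name, external identity, and provider name are placeholders, and the exact clause syntax should be verified against the SAP HANA SQL reference:

```python
def create_saml_user_sql(db_user: str, external_identity: str, saml_provider: str) -> str:
    """Assemble a CREATE USER statement linking a HANA user to a SAML identity."""
    return (
        f"CREATE USER {db_user} "
        f"WITH IDENTITY '{external_identity}' FOR SAML PROVIDER {saml_provider};"
    )

print(create_saml_user_sql("BSIMON", "B.Simon@contoso.com", "AZURE_AD"))
```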
-### Test single sign-on
+### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the SAP HANA tile in the Access Panel, you should be automatically signed in to the SAP HANA for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the SAP HANA instance for which you set up SSO.
-## Additional Resources
+* You can use Microsoft My Apps. When you click the SAP HANA tile in My Apps, you should be automatically signed in to the SAP HANA instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure SAP HANA, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/servicenow-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
@@ -147,6 +147,11 @@ Once you've configured provisioning, use the following resources to monitor your
![Authorizing SOAP request](media/servicenow-provisioning-tutorial/servicenow-webservice.png)

If it resolves your issues, then contact ServiceNow support and ask them to turn on SOAP debugging to help troubleshoot.
+
+* **IP Ranges**
+
   The Azure AD provisioning service currently operates under particular IP ranges. If necessary, you can restrict other IP ranges and add these particular IP ranges to the allowlist of your application, to allow traffic from the Azure AD provisioning service to your application. Refer to the documentation at [IP Ranges](https://docs.microsoft.com/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#ip-ranges).
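To sanity-check an allowlist entry, you can test whether a given source address falls inside one of the published ranges. A minimal sketch using Python's standard `ipaddress` module (the CIDR blocks below are illustrative placeholders; always take the current list from the linked IP Ranges documentation):

```python
import ipaddress

# Placeholder CIDR blocks; substitute the ranges published in the Azure AD docs.
PROVISIONING_RANGES = ["13.66.60.0/24", "40.126.0.0/18"]

def is_provisioning_source(ip: str) -> bool:
    """Return True if the address is inside any configured provisioning range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in PROVISIONING_RANGES)

print(is_provisioning_source("13.66.60.25"))   # True (inside the sample /24)
print(is_provisioning_source("203.0.113.10"))  # False
```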
+
## Additional resources

* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/snowflake-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md
@@ -155,6 +155,12 @@ Once you've configured provisioning, use the following resources to monitor your
* Snowflake generated SCIM tokens expire in 6 months. Be aware that these need to be refreshed before they expire to allow the provisioning syncs to continue working.
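Since the token lifetime is fixed, the refresh deadline can be computed when the token is issued. A sketch assuming a 180-day (~6 month) validity and a two-week safety buffer; both figures are assumptions for illustration:

```python
from datetime import date, timedelta

def refresh_by(issued: date, validity_days: int = 180, buffer_days: int = 14) -> date:
    """Return the date by which the SCIM token should be rotated."""
    return issued + timedelta(days=validity_days - buffer_days)

print(refresh_by(date(2021, 1, 1)))  # 2021-06-16
```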
+## Troubleshooting Tips
+
+* **IP Ranges**
+
   The Azure AD provisioning service currently operates under particular IP ranges. If necessary, you can restrict other IP ranges and add these particular IP ranges to the allowlist of your application, to allow traffic from the Azure AD provisioning service to your application. Refer to the documentation at [IP Ranges](https://docs.microsoft.com/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#ip-ranges).
+
## Change Log

* 07/21/2020 - Enabled soft-delete for all users (via the active attribute).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-beta-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-beta-provisioning-tutorial.md
@@ -76,6 +76,9 @@ This section guides you through the steps to configure the Azure AD provisioning
> [!TIP]
> You may also choose to enable SAML-based single sign-on for Zscaler Beta, following the instructions provided in the [Zscaler Beta single sign-on tutorial](zscaler-beta-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other.
+> [!NOTE]
+> When users and groups are provisioned or de-provisioned, we recommend periodically restarting provisioning to ensure that group memberships are properly updated. A restart forces our service to re-evaluate all the groups and update the memberships.
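A restart can also be issued programmatically. The sketch below only assembles the request URL for the Microsoft Graph synchronization job restart action (assumption: the `synchronization/jobs/{id}/restart` action; the service principal and job IDs are placeholders):

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def restart_job_url(service_principal_id: str, job_id: str) -> str:
    """URL for POSTing a restart of an Azure AD provisioning (synchronization) job."""
    return (f"{GRAPH}/servicePrincipals/{service_principal_id}"
            f"/synchronization/jobs/{job_id}/restart")

print(restart_job_url("<servicePrincipalId>", "<jobId>"))
```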
+
### To configure automatic user provisioning for Zscaler Beta in Azure AD:

1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise Applications**, select **All applications**, then select **Zscaler Beta**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-one-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-one-provisioning-tutorial.md
@@ -73,6 +73,9 @@ This section guides you through the steps to configure the Azure AD provisioning
> [!TIP]
> You also can enable SAML-based single sign-on for Zscaler One. Follow the instructions in the [Zscaler One single sign-on tutorial](zscaler-One-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
+> [!NOTE]
+> When users and groups are provisioned or de-provisioned, we recommend periodically restarting provisioning to ensure that group memberships are properly updated. A restart forces our service to re-evaluate all the groups and update the memberships.
+
### Configure automatic user provisioning for Zscaler One in Azure AD

1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications** > **Zscaler One**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-one-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-one-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 04/24/2019
+ms.date: 12/18/2020
ms.author: jeedes
---

# Tutorial: Azure Active Directory integration with Zscaler One
@@ -21,8 +21,6 @@ Integrating Zscaler One with Azure AD provides you with the following benefits:
* You can enable your users to be automatically signed-in to Zscaler One (Single Sign-On) with their Azure AD accounts.
* You can manage your accounts in one central location - the Azure portal.
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Prerequisites
@@ -43,60 +41,38 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of Zscaler One into Azure AD, you need to add Zscaler One from the gallery to your list of managed SaaS apps.
-**To add Zscaler One from the gallery, perform the following steps:**
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zscaler One** in the search box.
+1. Select **Zscaler One** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
+## Configure and test Azure AD SSO for Zscaler One
- ![The Azure Active Directory button](common/select-azuread.png)
+Configure and test Azure AD SSO with Zscaler One using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zscaler One.
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
+To configure and test Azure AD SSO with Zscaler One, perform the following steps:
- ![The Enterprise applications blade](common/enterprise-applications.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure Zscaler One SSO](#configure-zscaler-one-sso)** - to configure the Single Sign-On settings on the application side.
+ 1. **[Create Zscaler One test user](#create-zscaler-one-test-user)** - to have a counterpart of B.Simon in Zscaler One that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-3. To add new application, click **New application** button on the top of dialog.
+### Configure Azure AD SSO
- ![The New application button](common/add-new-app.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-4. In the search box, type **Zscaler One**, select **Zscaler One** from result panel then click **Add** button to add the application.
+1. In the Azure portal, on the **Zscaler One** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Zscaler One in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Zscaler One based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Zscaler One needs to be established.
-
-To configure and test Azure AD single sign-on with Zscaler One, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Zscaler One Single Sign-On](#configure-zscaler-one-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Zscaler One test user](#create-zscaler-one-test-user)** - to have a counterpart of Britta Simon in Zscaler One that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with Zscaler One, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Zscaler One** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Zscaler One Domain and URLs single sign-on information](common/sp-signonurl.png)
- In the **Sign-on URL** textbox, type the URL used by your users to sign-on to your Zscaler One application. > [!NOTE]
@@ -114,10 +90,6 @@ To configure Azure AD single sign-on with Zscaler One, perform the following ste
a. Click **Add new claim** to open the **Manage user claims** dialog.
- ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png)
-
- ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png)
- b. In the **Name** textbox, type the attribute name shown for that row. c. Leave the **Namespace** blank.
@@ -129,7 +101,7 @@ To configure Azure AD single sign-on with Zscaler One, perform the following ste
f. Click **Save**. > [!NOTE]
- > Please click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to configure Role in Azure AD
+ > Click [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) to learn how to configure roles in Azure AD.
7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
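After downloading the Base64 (PEM) certificate, you may want to confirm which certificate you have by computing its thumbprint. A sketch using only the standard library; the PEM body below is a dummy payload for illustration, not a real certificate:

```python
import hashlib
import ssl

def sha1_thumbprint(pem: str) -> str:
    """SHA-1 thumbprint (uppercase hex) of a PEM-encoded certificate."""
    der = ssl.PEM_cert_to_DER_cert(pem)  # strips the PEM armor, decodes Base64
    return hashlib.sha1(der).hexdigest().upper()

# Dummy PEM wrapper around the bytes b"hello", for illustration only.
dummy = "-----BEGIN CERTIFICATE-----\naGVsbG8=\n-----END CERTIFICATE-----\n"
print(sha1_thumbprint(dummy))
```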
@@ -139,13 +111,31 @@ To configure Azure AD single sign-on with Zscaler One, perform the following ste
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
- b. Azure AD Identifier
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
- c. Logout URL
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zscaler One.
-### Configure Zscaler One Single Sign-On
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zscaler One**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select one from the **Select a role** dropdown.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+### Configure Zscaler One SSO
1. To automate the configuration within Zscaler One, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
@@ -224,62 +214,6 @@ To configure Azure AD single sign-on with Zscaler One, perform the following ste
7. Click **OK** to close the **Internet Options** dialog.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zscaler One.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zscaler One**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Zscaler One**.
-
- ![The Zscaler One link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog, select the user like **Britta Simon** from the list, then click the **Select** button at the bottom of the screen.
-
- ![Screenshot shows the Users and groups dialog box where you can select a user.](./media/zscaler-one-tutorial/tutorial_zscalerone_users.png)
-
-6. From the **Select Role** dialog choose the appropriate user role in the list, then click the **Select** button at the bottom of the screen.
-
- ![Screenshot shows the Select Role dialog box where you can choose a user role.](./media/zscaler-one-tutorial/tutorial_zscalerone_roles.png)
-
-7. In the **Add Assignment** dialog select the **Assign** button.
-
- ![Screenshot shows the Add Assignment dialog box where you can select Assign.](./media/zscaler-one-tutorial/tutorial_zscalerone_assign.png)
-
### Create Zscaler One test user

In this section, a user called Britta Simon is created in Zscaler One. Zscaler One supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Zscaler One, a new one is created after authentication.
@@ -287,16 +221,17 @@ In this section, a user called Britta Simon is created in Zscaler One. Zscaler O
>[!Note]
>If you need to create a user manually, contact [Zscaler One support team](https://www.zscaler.com/company/contact).
-### Test single sign-on
+### Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Click on **Test this application** in the Azure portal. This redirects to the Zscaler One Sign-on URL, where you can initiate the login flow.
-When you click the Zscaler One tile in the Access Panel, you should be automatically signed in to the Zscaler One for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Zscaler One Sign-on URL directly and initiate the login flow from there.
-## Additional Resources
+* You can use Microsoft My Apps. When you click the Zscaler One tile in My Apps, it redirects to the Zscaler One Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure Zscaler One, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-private-access-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-private-access-provisioning-tutorial.md
@@ -111,6 +111,9 @@ This section guides you through the steps to configure the Azure AD provisioning
> [!TIP] > You may also choose to enable SAML-based single sign-on for Zscaler Private Access (ZPA) by following the instructions provided in the [Zscaler Private Access (ZPA) Single sign-on tutorial](./zscalerprivateaccess-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
+> [!NOTE]
+> When users and groups are provisioned or de-provisioned, we recommend periodically restarting provisioning to ensure that group memberships are properly updated. A restart forces our service to re-evaluate all the groups and update the memberships.
+
> [!NOTE]
> To learn more about Zscaler Private Access's SCIM endpoint, refer to [this](https://www.zscaler.com/partners/microsoft).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-provisioning-tutorial.md
@@ -75,6 +75,9 @@ This section guides you through the steps to configure the Azure AD provisioning
> [!TIP]
> You may also choose to enable SAML-based single sign-on for Zscaler, following the instructions provided in the [Zscaler single sign-on tutorial](zscaler-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other.
+> [!NOTE]
+> When users and groups are provisioned or de-provisioned, we recommend periodically restarting provisioning to ensure that group memberships are properly updated. A restart forces our service to re-evaluate all the groups and update the memberships.
+
### To configure automatic user provisioning for Zscaler in Azure AD:

1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise Applications**, select **All applications**, then select **Zscaler**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-three-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-three-provisioning-tutorial.md
@@ -70,6 +70,9 @@ This section guides you through the steps for configuring the Azure AD provision
> [!TIP] > You might also want to enable SAML-based single sign-on for Zscaler Three. If you do, follow the instructions in the [Zscaler Three single sign-on tutorial](zscaler-three-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, but the two features complement each other.
+> [!NOTE]
+> When users and groups are provisioned or de-provisioned, we recommend periodically restarting provisioning to ensure that group memberships are properly updated. A restart forces the service to re-evaluate all groups and update their memberships.
+ 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise applications** > **All applications** > **Zscaler Three**: ![Enterprise applications](common/enterprise-applications.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-three-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-three-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 10/17/2019
+ms.date: 12/18/2020
ms.author: jeedes ---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate Zscaler Three with Azure Active
* Enable your users to be automatically signed-in to Zscaler Three with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -45,18 +44,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Zscaler Three into Azure AD, you need to add Zscaler Three from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Zscaler Three** in the search box. 1. Select **Zscaler Three** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Zscaler Three
+## Configure and test Azure AD SSO for Zscaler Three
Configure and test Azure AD SSO with Zscaler Three using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zscaler Three.
-To configure and test Azure AD SSO with Zscaler Three, complete the following building blocks:
+To configure and test Azure AD SSO with Zscaler Three, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +68,9 @@ To configure and test Azure AD SSO with Zscaler Three, complete the following bu
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zscaler Three** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Zscaler Three** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -91,7 +90,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| memberOf | user.assignedroles | > [!NOTE]
- > Please click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to configure Role in Azure AD
+ > To learn how to configure roles in Azure AD, see [Add app roles in your application](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
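The role configuration that the note points to comes down to adding an `appRoles` entry to the app registration manifest, whose `value` then surfaces through the `user.assignedroles` claim mapped above. A minimal sketch of such an entry; the display name and value shown are examples, and each role needs a fresh GUID for its `id`:

```python
import json
import uuid

# Sketch: an appRoles manifest entry backing the memberOf ->
# user.assignedroles claim mapping. Role name/value are examples.
app_role = {
    "allowedMemberTypes": ["User"],       # role is assignable to users
    "description": "Example Zscaler admin role",
    "displayName": "Service Admin",
    "id": str(uuid.uuid4()),              # must be unique per role
    "isEnabled": True,
    "value": "Service Admin",             # emitted in the role claim
}
print(json.dumps(app_role, indent=2))
```

Users assigned this role in **Users and groups** will receive `Service Admin` in the `memberOf` claim configured above.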
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
@@ -219,16 +218,15 @@ In this section, a user called B.Simon is created in Zscaler Three. Zscaler Thre
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Zscaler Three tile in the Access Panel, you should be automatically signed in to the Zscaler Three for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This will redirect to the Zscaler Three Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the Zscaler Three Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Zscaler Three tile in My Apps, you're redirected to the Zscaler Three Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Zscaler Three with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Zscaler Three, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-two-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-two-provisioning-tutorial.md
@@ -70,6 +70,9 @@ This section guides you through the steps for configuring the Azure AD provision
> [!TIP] > You might also want to enable SAML-based single sign-on for Zscaler Two. If you do, follow the instructions in the [Zscaler Two single sign-on tutorial](zscaler-two-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, but the two features complement each other.
+> [!NOTE]
+> When users and groups are provisioned or de-provisioned, we recommend periodically restarting provisioning to ensure that group memberships are properly updated. A restart forces the service to re-evaluate all groups and update their memberships.
+ 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise applications** > **All applications** > **Zscaler Two**: ![Enterprise applications](common/enterprise-applications.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-two-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-two-tutorial.md
@@ -9,27 +9,23 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 04/24/2019
+ms.date: 12/18/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with Zscaler Two
-In this tutorial, you learn how to integrate Zscaler Two with Azure Active Directory (Azure AD).
-Integrating Zscaler Two with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Zscaler Two with Azure Active Directory (Azure AD). When you integrate Zscaler Two with Azure AD, you can:
-* You can control in Azure AD who has access to Zscaler Two.
-* You can enable your users to be automatically signed-in to Zscaler Two (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Zscaler Two.
+* Enable your users to be automatically signed-in to Zscaler Two with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Zscaler Two, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Zscaler Two single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Zscaler Two single sign-on enabled subscription.
## Scenario description
@@ -43,61 +39,39 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of Zscaler Two into Azure AD, you need to add Zscaler Two from the gallery to your list of managed SaaS apps.
-**To add Zscaler Two from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Zscaler Two**, select **Zscaler Two** from result panel then click **Add** button to add the application.
-
- ![Zscaler Two in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zscaler Two** in the search box.
+1. Select **Zscaler Two** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Zscaler Two based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Zscaler Two needs to be established.
+## Configure and test Azure AD SSO for Zscaler Two
-To configure and test Azure AD single sign-on with Zscaler Two, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Zscaler Two using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zscaler Two.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Zscaler Two Single Sign-On](#configure-zscaler-two-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Zscaler Two test user](#create-zscaler-two-test-user)** - to have a counterpart of Britta Simon in Zscaler Two that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Zscaler Two, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zscaler Two SSO](#configure-zscaler-two-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zscaler Two test user](#create-zscaler-two-test-user)** - to have a counterpart of B.Simon in Zscaler Two that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Zscaler Two, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zscaler Two** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Zscaler Two** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![Zscaler Two Domain and URLs single sign-on information](common/sp-signonurl.png)
-
- In the Sign-on URL textbox, type the URL used by your users to sign-on to your ZScaler Two application.
+ In the **Sign-on URL** textbox, type the URL used by your users to sign on to your Zscaler Two application.
> [!NOTE] > You update the value with the actual Sign-On URL. Contact [Zscaler Two Client support team](https://www.zscaler.com/company/contact) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
@@ -129,7 +103,7 @@ To configure Azure AD single sign-on with Zscaler Two, perform the following ste
f. Click **Save**. > [!NOTE]
- > Please click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to configure Role in Azure AD
+ > To learn how to configure roles in Azure AD, see [Add app roles in your application](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
@@ -139,13 +113,40 @@ To configure Azure AD single sign-on with Zscaler Two, perform the following ste
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
- b. Azure AD Identifier
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
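The test-user steps above can also be scripted as a Microsoft Graph `POST /users` call. The sketch below only builds the request body; the domain is a parameter and the password is a placeholder you'd replace before use.

```python
# Sketch: the Microsoft Graph POST /users body for the B.Simon test user.
# Nothing is sent here; the password value is a placeholder.

def new_test_user(domain: str) -> dict:
    """Build the request body that creates the B.Simon test user."""
    return {
        "accountEnabled": True,
        "displayName": "B.Simon",
        "mailNickname": "B.Simon",
        "userPrincipalName": f"B.Simon@{domain}",
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": "<generate-a-strong-password>",
        },
    }

body = new_test_user("contoso.com")
print(body["userPrincipalName"])
```

POST this body to `https://graph.microsoft.com/v1.0/users` with an authorized token to create the same user the portal steps produce.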
+### Assign the Azure AD test user
+
+In this section, you enable B.Simon to use Azure single sign-on by granting access to Zscaler Two.
+
+1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zscaler Two**.
+2. In the applications list, select **Zscaler Two**.
+3. In the menu on the left, select **Users and groups**.
+4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the users list, then click the **Select** button at the bottom of the screen.
- c. Logout URL
+ ![Screenshot shows the Users and groups dialog box where you can select a user.](./media/zscaler-two-tutorial/tutorial_zscalertwo_users.png)
-### Configure Zscaler Two Single Sign-On
+6. From the **Select Role** dialog choose the appropriate user role in the list, then click the **Select** button at the bottom of the screen.
+
+ ![Screenshot shows the Select Role dialog box where you can choose a user role.](./media/zscaler-two-tutorial/tutorial_zscalertwo_roles.png)
+
+7. In the **Add Assignment** dialog select the **Assign** button.
+
+ ![Screenshot shows the Add Assignment dialog box where you can select Assign.](./media/zscaler-two-tutorial/tutorial_zscalertwo_assign.png)
+
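The assignment steps above map to a single Microsoft Graph `appRoleAssignments` call. A sketch that builds (but does not send) the request; all three IDs are placeholders you would look up first:

```python
# Sketch: assigning a user to the Zscaler Two enterprise app via
# Microsoft Graph. IDs are placeholders; no request is sent.

GRAPH = "https://graph.microsoft.com/v1.0"

def assignment_request(user_id: str, sp_id: str, app_role_id: str) -> dict:
    """Build the POST request that assigns a user to the app.

    principalId is the user's object ID, resourceId the enterprise
    app's service principal ID, and appRoleId the role chosen in the
    Select Role dialog above.
    """
    return {
        "method": "POST",
        "url": f"{GRAPH}/users/{user_id}/appRoleAssignments",
        "body": {
            "principalId": user_id,
            "resourceId": sp_id,
            "appRoleId": app_role_id,
        },
    }

req = assignment_request("<user-object-id>", "<sp-object-id>", "<role-id>")
print(req["url"])
```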
+## Configure Zscaler Two SSO
1. To automate the configuration within Zscaler Two, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
@@ -224,61 +225,6 @@ To configure Azure AD single sign-on with Zscaler Two, perform the following ste
6. Click **OK** to close the **Internet Options** dialog.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zscaler Two.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zscaler Two**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Zscaler Two**.
-
- ![The Zscaler Two link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog, select the user like **Britta Simon** from the list, then click the **Select** button at the bottom of the screen.
-
- ![Screenshot shows the Users and groups dialog box where you can select a user.](./media/zscaler-two-tutorial/tutorial_zscalertwo_users.png)
-
-6. From the **Select Role** dialog choose the appropriate user role in the list, then click the **Select** button at the bottom of the screen.
-
- ![Screenshot shows the Select Role dialog box where you can choose a user role.](./media/zscaler-two-tutorial/tutorial_zscalertwo_roles.png)
-
-7. In the **Add Assignment** dialog select the **Assign** button.
-
- ![Screenshot shows the Add Assignment dialog box where you can select Assign.](./media/zscaler-two-tutorial/tutorial_zscalertwo_assign.png)
### Create Zscaler Two test user
@@ -287,16 +233,17 @@ In this section, a user called Britta Simon is created in Zscaler Two. Zscaler T
>[!Note] >If you need to create a user manually, contact [Zscaler Two support team](https://www.zscaler.com/company/contact).
-### Test single sign-on
+### Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Click **Test this application** in the Azure portal. This will redirect to the Zscaler Two Sign-on URL, where you can initiate the login flow.
-When you click the Zscaler Two tile in the Access Panel, you should be automatically signed in to the Zscaler Two for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Zscaler Two Sign-on URL directly and initiate the login flow from there.
-## Additional Resources
+* You can use Microsoft My Apps. When you click the Zscaler Two tile in My Apps, you're redirected to the Zscaler Two Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md) -- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure Zscaler Two, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-zscloud-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-zscloud-provisioning-tutorial.md
@@ -71,6 +71,9 @@ This section guides you through the steps for configuring the Azure AD provision
> [!TIP] > You might also want to enable SAML-based single sign-on for Zscaler ZSCloud. If you do, follow the instructions in the [Zscaler ZSCloud single sign-on tutorial](zscaler-zsCloud-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, but the two features complement each other.
+> [!NOTE]
+> When users and groups are provisioned or de-provisioned, we recommend periodically restarting provisioning to ensure that group memberships are properly updated. A restart forces the service to re-evaluate all groups and update their memberships.
+ 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise applications** > **All applications** > **Zscaler ZSCloud**: ![Enterprise applications](common/enterprise-applications.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/zscaler-zscloud-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscaler-zscloud-tutorial.md
@@ -9,27 +9,23 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 04/24/2019
+ms.date: 12/21/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with Zscaler ZSCloud
-In this tutorial, you learn how to integrate Zscaler ZSCloud with Azure Active Directory (Azure AD).
-Integrating Zscaler ZSCloud with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Zscaler ZSCloud with Azure Active Directory (Azure AD). When you integrate Zscaler ZSCloud with Azure AD, you can:
-* You can control in Azure AD who has access to Zscaler ZSCloud.
-* You can enable your users to be automatically signed-in to Zscaler ZSCloud (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Zscaler ZSCloud.
+* Enable your users to be automatically signed-in to Zscaler ZSCloud with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Zscaler ZSCloud, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Zscaler ZSCloud single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Zscaler ZSCloud single sign-on enabled subscription.
## Scenario description
@@ -43,59 +39,37 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of Zscaler ZSCloud into Azure AD, you need to add Zscaler ZSCloud from the gallery to your list of managed SaaS apps.
-**To add Zscaler ZSCloud from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Zscaler ZSCloud**, select **Zscaler ZSCloud** from result panel then click **Add** button to add the application.
-
- ![Zscaler ZSCloud in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Zscaler ZSCloud based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Zscaler ZSCloud needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zscaler ZSCloud** in the search box.
+1. Select **Zscaler ZSCloud** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Zscaler ZSCloud, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Zscaler ZSCloud
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Zscaler ZSCloud Single Sign-On](#configure-zscaler-zscloud-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Zscaler ZSCloud test user](#create-zscaler-zscloud-test-user)** - to have a counterpart of Britta Simon in Zscaler ZSCloud that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Zscaler ZSCloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zscaler ZSCloud.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Zscaler ZSCloud, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zscaler ZSCloud SSO](#configure-zscaler-zscloud-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zscaler ZSCloud test user](#create-zscaler-zscloud-test-user)** - to have a counterpart of B.Simon in Zscaler ZSCloud that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Zscaler ZSCloud, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Zscaler ZSCloud** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **Zscaler ZSCloud** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![Zscaler ZSCloud Domain and URLs single sign-on information](common/sp-signonurl.png)
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
    In the **Sign-on URL** textbox, type the URL used by your users to sign on to your Zscaler ZSCloud application.
@@ -129,7 +103,7 @@ To configure Azure AD single sign-on with Zscaler ZSCloud, perform the following
f. Click **Save**. > [!NOTE]
- > Please click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to configure Role in Azure AD
+ > Please click [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) to know how to configure Role in Azure AD.
7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
@@ -139,13 +113,42 @@ To configure Azure AD single sign-on with Zscaler ZSCloud, perform the following
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you enable B.Simon to use Azure single sign-on by granting access to Zscaler ZSCloud.
- b. Azure AD Identifier
+1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zscaler ZSCloud**.
+2. In the applications list, select **Zscaler ZSCloud**.
+3. In the menu on the left, select **Users and groups**.
+4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+5. In the **Users and groups** dialog, select **B.Simon** from the users list, then click the **Select** button at the bottom of the screen.
- c. Logout URL
+ ![Screenshot shows the Users and groups dialog box where you can select a user.](./media/zscaler-zscloud-tutorial/tutorial_zscalerzscloud_users.png)
-### Configure Zscaler ZSCloud Single Sign-On
+6. From the **Select Role** dialog choose the appropriate user role in the list, then click the **Select** button at the bottom of the screen.
+
+ ![Screenshot shows the Select Role dialog box where you can choose a user role.](./media/zscaler-zscloud-tutorial/tutorial_zscalerzscloud_roles.png)
+
+7. In the **Add Assignment** dialog select the **Assign** button.
+
+ ![Screenshot shows the Add Assignment dialog box where you can select Assign.](./media/zscaler-zscloud-tutorial/tutorial_zscalerzscloud_assign.png)
+
+ >[!NOTE]
+ >Default access role is not supported because it will break provisioning, so the default role cannot be selected while assigning the user.
+
+## Configure Zscaler ZSCloud SSO
1. To automate the configuration within Zscaler ZSCloud, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
@@ -224,64 +227,6 @@ To configure Azure AD single sign-on with Zscaler ZSCloud, perform the following
6. Click **OK** to close the **Internet Options** dialog.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zscaler ZSCloud.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zscaler ZSCloud**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Zscaler ZSCloud**.
-
- ![The Zscaler ZSCloud link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog, select the user like **Britta Simon** from the list, then click the **Select** button at the bottom of the screen.
-
- ![Screenshot shows the Users and groups dialog box where you can select a user.](./media/zscaler-zscloud-tutorial/tutorial_zscalerzscloud_users.png)
-
-6. From the **Select Role** dialog choose the appropriate user role in the list, then click the **Select** button at the bottom of the screen.
-
- ![Screenshot shows the Select Role dialog box where you can choose a user role.](./media/zscaler-zscloud-tutorial/tutorial_zscalerzscloud_roles.png)
-
-7. In the **Add Assignment** dialog select the **Assign** button.
-
- ![Screenshot shows the Add Assignment dialog box where you can select Assign.](./media/zscaler-zscloud-tutorial/tutorial_zscalerzscloud_assign.png)
-
- >[!NOTE]
- >Default access role is not supported as this will break provisioning, so the default role cannot be selected while assigning user.
### Create Zscaler ZSCloud test user
@@ -290,16 +235,17 @@ In this section, a user called Britta Simon is created in Zscaler ZSCloud. Zscal
>[!Note] >If you need to create a user manually, contact [Zscaler ZSCloud support team](https://help.zscaler.com/).
-### Test single sign-on
+### Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Click on **Test this application** in the Azure portal. This will redirect to the Zscaler ZSCloud Sign-on URL, where you can initiate the login flow.
-When you click the Zscaler ZSCloud tile in the Access Panel, you should be automatically signed in to the Zscaler ZSCloud for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Zscaler ZSCloud Sign-on URL directly and initiate the login flow from there.
-## Additional Resources
+* You can use Microsoft My Apps. When you click the Zscaler ZSCloud tile in My Apps, you will be redirected to the Zscaler ZSCloud Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md) -- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure Zscaler ZSCloud, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
analysis-services https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-overview.md
@@ -155,7 +155,7 @@ Compatibility level refers to release-specific behaviors in the Analysis Service
## Your data is secure
-Azure Analysis Services provides security for your sensitive data at multiple levels. As an Azure service, Analysis Services provides **Basic** level of Distributed denial of service (DDoS) attacks automatically enabled as part of the Azure platform. To learn more, see [Azure DDoS Protection Standard overview](../ddos-protection/ddos-protection-overview.md).
+Azure Analysis Services provides security for your sensitive data at multiple levels. As an Azure service, Analysis Services provides the **Basic** level of Distributed Denial of Service (DDoS) protection, automatically enabled as part of the Azure platform. To learn more, see [Azure DDoS Protection Standard overview](../ddos-protection/ddos-protection-overview.md).
At the server level, Analysis Services provides firewall, Azure authentication, server administrator roles, and Server-Side Encryption. At the data model level, user roles, row-level, and object-level security ensure your data is safe and gets seen by only those users who are meant to see it.
@@ -206,7 +206,7 @@ Develop and deploy models with Visual Studio with Analysis Services projects. Th
Microsoft Analysis Services Projects is available as a free installable VSIX package. [Download from Marketplace](https://marketplace.visualstudio.com/items?itemName=ProBITools.MicrosoftAnalysisServicesModelingProjects). The extension works with any version of Visual Studio 2017 and later, including the free Community edition.
-### Sql Server Management Studio
+### SQL Server Management Studio
Manage your servers and model databases by using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms). Connect to your servers in the cloud. Run TMSL scripts right from the XMLA query window, and automate tasks by using TMSL scripts and PowerShell. New features and functionality happen fast - SSMS is updated monthly.
@@ -266,4 +266,4 @@ Analysis Services has a vibrant community of users. Join the conversation on [Az
> [Quickstart: Create a server - Portal](analysis-services-create-server.md) > [!div class="nextstepaction"]
-> [Quickstart: Create a server - PowerShell](analysis-services-create-powershell.md)
\ No newline at end of file
+> [Quickstart: Create a server - PowerShell](analysis-services-create-powershell.md)
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-connect-msi.md
@@ -246,7 +246,7 @@ In the publish page, click **Publish**.
```bash git commit -am "configure managed identity"
-git push azure master
+git push azure main
``` When the new webpage shows your to-do list, your app is connecting to the database using the managed identity.
app-service https://docs.microsoft.com/en-us/azure/app-service/deploy-local-git https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-local-git.md
@@ -75,7 +75,7 @@ Use the URL that returns to deploy your app in the next step.
git remote add azure <url> ```
-1. Push to the Azure remote with `git push azure master`.
+1. Push to the Azure remote with `git push azure main`.
1. In the **Git Credential Manager** window, enter your [deployment user password](#configure-a-deployment-user), not your Azure sign-in password.
@@ -126,7 +126,7 @@ To enable local Git deployment for your app with Azure Pipelines (Preview):
git remote add azure <url> ```
-1. Push to the Azure remote with `git push azure master`.
+1. Push to the Azure remote with `git push azure main`.
1. On the **Git Credential Manager** page, sign in with your visualstudio.com username. For other authentication methods, see [Azure DevOps Services authentication overview](/vsts/git/auth-overview?view=vsts).
@@ -144,8 +144,8 @@ You may see the following common error messages when you use Git to publish to a
---|---|---| |`Unable to access '[siteURL]': Failed to connect to [scmAddress]`|The app isn't up and running.|Start the app in the Azure portal. Git deployment isn't available when the web app is stopped.| |`Couldn't resolve host 'hostname'`|The address information for the 'azure' remote is incorrect.|Use the `git remote -v` command to list all remotes, along with the associated URL. Verify that the URL for the 'azure' remote is correct. If needed, remove and recreate this remote using the correct URL.|
-|`No refs in common and none specified; doing nothing. Perhaps you should specify a branch such as 'main'.`|You didn't specify a branch during `git push`, or you haven't set the `push.default` value in `.gitconfig`.|Run `git push` again, specifying the main branch: `git push azure master`.|
-|`src refspec [branchname] does not match any.`|You tried to push to a branch other than main on the 'azure' remote.|Run `git push` again, specifying the master branch: `git push azure master`.|
+|`No refs in common and none specified; doing nothing. Perhaps you should specify a branch such as 'main'.`|You didn't specify a branch during `git push`, or you haven't set the `push.default` value in `.gitconfig`.|Run `git push` again, specifying the main branch: `git push azure main`.|
+|`src refspec [branchname] does not match any.`|You tried to push to a branch other than main on the 'azure' remote.|Run `git push` again, specifying the main branch: `git push azure main`.|
|`RPC failed; result=22, HTTP code = 5xx.`|This error can happen if you try to push a large git repository over HTTPS.|Change the git configuration on the local machine to make the `postBuffer` bigger. For example: `git config --global http.postBuffer 524288000`.| |`Error - Changes committed to remote repository but your web app not updated.`|You deployed a Node.js app with a _package.json_ file that specifies additional required modules.|Review the `npm ERR!` error messages before this error for more context on the failure. The following are the known causes of this error, and the corresponding `npm ERR!` messages:<br /><br />**Malformed package.json file**: `npm ERR! Couldn't read dependencies.`<br /><br />**Native module doesn't have a binary distribution for Windows**:<br />`npm ERR! \cmd "/c" "node-gyp rebuild"\ failed with 1` <br />or <br />`npm ERR! [modulename@version] preinstall: \make || gmake\ `|
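The local Git workflow and the `postBuffer` workaround from the table above can be sketched end to end with a throwaway bare repository standing in for the App Service remote (the temp-directory paths and sample file here are illustrative, not part of a real deployment):

```shell
# Simulate the App Service local-Git flow: a local bare repo stands in for
# the real 'azure' remote whose URL App Service gives you.
set -e
workdir="$(mktemp -d)"
git init --bare "$workdir/azure.git" >/dev/null

# Local app repository with one commit on a branch named 'main'.
git init "$workdir/app" >/dev/null 2>&1
cd "$workdir/app"
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > index.html
git add . && git commit -m "initial commit" >/dev/null
git branch -M main                     # ensure the branch is 'main', not 'master'

# Workaround for 'RPC failed' on large pushes over HTTPS (see table above).
git config http.postBuffer 524288000

git remote add azure "$workdir/azure.git"
git push azure main >/dev/null 2>&1    # push 'main', as the guidance requires

# The remote now advertises refs/heads/main.
pushed_ref="$(git ls-remote azure refs/heads/main | awk '{print $2}')"
echo "$pushed_ref"                     # → refs/heads/main
```

Pushing a missing or differently named branch here reproduces the `src refspec [branchname] does not match any` error from the table.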
app-service https://docs.microsoft.com/en-us/azure/app-service/overview-hosting-plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-hosting-plans.md
@@ -10,7 +10,7 @@ ms.custom: seodec18
--- # Azure App Service plan overview
-In App Service (Web Apps, API Apps, or Mobile Apps), an app always runs in an _App Service plan_. In addition, [Azure Functions](../azure-functions/functions-scale.md#app-service-plan) also has the option of running in an _App Service plan_. An App Service plan defines a set of compute resources for a web app to run. These compute resources are analogous to the [_server farm_](https://wikipedia.org/wiki/Server_farm) in conventional web hosting. One or more apps can be configured to run on the same computing resources (or in the same App Service plan).
+In App Service (Web Apps, API Apps, or Mobile Apps), an app always runs in an _App Service plan_. In addition, [Azure Functions](../azure-functions/dedicated-plan.md) also has the option of running in an _App Service plan_. An App Service plan defines a set of compute resources for a web app to run. These compute resources are analogous to the [_server farm_](https://wikipedia.org/wiki/Server_farm) in conventional web hosting. One or more apps can be configured to run on the same computing resources (or in the same App Service plan).
When you create an App Service plan in a certain region (for example, West Europe), a set of compute resources is created for that plan in that region. Whatever apps you put into this App Service plan run on these compute resources as defined by your App Service plan. Each App Service plan defines:
app-service https://docs.microsoft.com/en-us/azure/app-service/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview.md
@@ -42,6 +42,12 @@ App Service can also host web apps natively on Linux for supported application s
App Service on Linux supports a number of language-specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (JRE 8 & JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
+Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers toward the latest runtimes, where they will be most successful.
+
+When an outdated runtime is hidden from the Portal, any of your existing sites using that version will continue to run. If a runtime is fully removed from the App Service platform, your Azure subscription owner(s) will receive an email notice before the removal.
+
+If you need to create another web app with an outdated runtime version that is no longer shown on the Portal, see the language configuration guides for instructions on how to get the runtime version of your site. You can use the Azure CLI to create another site with the same runtime. Alternatively, you can use the **Export Template** button on the web app blade in the Portal to export an ARM template of the site. You can reuse this template to deploy a new site with the same runtime and configuration.
+ ### Limitations - App Service on Linux is not supported on [Shared](https://azure.microsoft.com/pricing/details/app-service/plans/) pricing tier.
app-service https://docs.microsoft.com/en-us/azure/app-service/quickstart-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
@@ -11,7 +11,7 @@ adobe-target-experience: Experience B
adobe-target-content: ./quickstart-python-1 ---
-# Quickstart: Create a Python app in Azure App Service on Linux
+# Quickstart: Create a Python app using Azure App Service on Linux
In this quickstart, you deploy a Python web app to [App Service on Linux](overview.md#app-service-on-linux), Azure's highly scalable, self-patching web hosting service. You use the local [Azure command-line interface (CLI)](/cli/azure/install-azure-cli) on a Mac, Linux, or Windows computer to deploy a sample with either the Flask or Django frameworks. The web app you configure uses a free App Service tier, so you incur no costs in the course of this article.
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-dotnetcore-sqldb-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-dotnetcore-sqldb-app.md
@@ -282,7 +282,7 @@ Compressing objects: 100% (171/171), done.
Writing objects: 100% (268/268), 1.18 MiB | 1.55 MiB/s, done. Total 268 (delta 95), reused 251 (delta 87), pack-reused 0 remote: Resolving deltas: 100% (95/95), done.
-remote: Updating branch 'master'.
+remote: Updating branch 'main'.
remote: Updating submodules. remote: Preparing deployment for commit id '64821c3558'. remote: Generating deployment script.
@@ -299,7 +299,7 @@ remote: Running post deployment command(s)...
remote: Triggering recycle (preview mode disabled). remote: App container will begin restart within 10 seconds. To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- * [new branch] master -> master
+ * [new branch] main -> main
</pre> ::: zone-end
@@ -317,7 +317,7 @@ Writing objects: 100% (273/273), 1.19 MiB | 1.85 MiB/s, done.
Total 273 (delta 96), reused 259 (delta 88) remote: Resolving deltas: 100% (96/96), done. remote: Deploy Async
-remote: Updating branch 'master'.
+remote: Updating branch 'main'.
remote: Updating submodules. remote: Preparing deployment for commit id 'cccecf86c5'. remote: Repository path is /home/site/repository
@@ -333,7 +333,7 @@ remote: Triggering recycle (preview mode disabled).
remote: Deployment successful. remote: Deployment Logs : 'https://&lt;app-name&gt;.scm.azurewebsites.net/newui/jsonviewer?view_url=/api/deployments/cccecf86c56493ffa594e76ea1deb3abb3702d89/log' To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- * [new branch] master -> master
+ * [new branch] main -> main
</pre> ::: zone-end
@@ -442,7 +442,7 @@ In your browser, navigate to `http://localhost:5000/`. You can now add a to-do i
```bash git add . git commit -m "added done field"
-git push azure master
+git push azure main
``` Once the `git push` is complete, navigate to your App Service app and try adding a to-do item and check **Done**.
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-nodejs-mongodb-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-nodejs-mongodb-app.md
@@ -310,7 +310,7 @@ Delta compression using up to 4 threads.
Compressing objects: 100% (5/5), done. Writing objects: 100% (5/5), 489 bytes | 0 bytes/s, done. Total 5 (delta 3), reused 0 (delta 0)
-remote: Updating branch 'master'.
+remote: Updating branch 'main'.
remote: Updating submodules. remote: Preparing deployment for commit id '6c7c716eee'. remote: Running custom deployment command...
@@ -321,7 +321,7 @@ remote: Handling node.js deployment.
. remote: Deployment successful. To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- * [new branch]      master -> master
+ * [new branch]      main -> main
</pre> You may notice that the deployment process runs [Gulp](https://gulpjs.com/) after `npm install`. App Service does not run Gulp or Grunt tasks during deployment, so this sample repository has two additional files in its root directory to enable it:
@@ -472,7 +472,7 @@ In the local terminal window, commit your changes in Git, then push the code cha
```bash git commit -am "added article comment"
-git push azure master
+git push azure main
``` Once the `git push` is complete, navigate to your Azure app and try out the new functionality.
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-php-mysql-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-php-mysql-app.md
@@ -447,7 +447,7 @@ Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done. Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done. Total 3 (delta 2), reused 0 (delta 0)
-remote: Updating branch 'master'.
+remote: Updating branch 'main'.
remote: Updating submodules. remote: Preparing deployment for commit id 'a5e076db9c'. remote: Running custom deployment command...
@@ -478,7 +478,7 @@ Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done. Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done. Total 3 (delta 2), reused 0 (delta 0)
-remote: Updating branch 'master'.
+remote: Updating branch 'main'.
remote: Updating submodules. remote: Preparing deployment for commit id 'a5e076db9c'. remote: Running custom deployment command...
@@ -630,7 +630,7 @@ Commit all the changes in Git, and then push the code changes to Azure.
```bash git add . git commit -m "added complete checkbox"
-git push azure master
+git push azure main
``` Once the `git push` is complete, navigate to the Azure app and test the new functionality.
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-ruby-postgres-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-ruby-postgres-app.md
@@ -286,7 +286,7 @@ git remote add azure <paste-copied-url-here>
Push to the Azure remote to deploy the Ruby on Rails application. You are prompted for the password you supplied earlier as part of the creation of the deployment user. ```bash
-git push azure master
+git push azure main
``` During deployment, Azure App Service communicates its progress with Git.
@@ -297,7 +297,7 @@ Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done. Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done. Total 3 (delta 2), reused 0 (delta 0)
-remote: Updating branch 'master'.
+remote: Updating branch 'main'.
remote: Updating submodules. remote: Preparing deployment for commit id 'a5e076db9c'. remote: Running custom deployment command...
@@ -416,7 +416,7 @@ Commit all the changes in Git, and then push the code changes to Azure.
```bash git add . git commit -m "added complete checkbox"
-git push azure master
+git push azure main
``` Once the `git push` is complete, navigate to the Azure app and test the new functionality.
app-service https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/web-sites-integrate-with-vnet.md
@@ -126,6 +126,12 @@ The App Service plan VNet Integration UI shows you all of the VNet integrations
* **Sync network**: The sync network operation is used only for the gateway-dependent VNet Integration feature. Performing a sync network operation ensures that your certificates and network information are in sync. If you add or change the DNS of your VNet, perform a sync network operation. This operation restarts any apps that use this VNet. This operation will not work if you are using an app and a vnet belonging to different subscriptions. * **Add routes**: Adding routes drives outbound traffic into your VNet.
+The private IP assigned to the instance is exposed via the **WEBSITE_PRIVATE_IP** environment variable. The Kudu console UI also shows the list of environment variables available to the web app. This IP is assigned from the address range of the integrated subnet. For regional VNet Integration, the value of WEBSITE_PRIVATE_IP is an IP from the address range of the delegated subnet, and for gateway-required VNet Integration, the value is an IP from the address range of the point-to-site address pool configured on the virtual network gateway. This is the IP the web app uses to connect to resources through the virtual network.
+
+> [!NOTE]
+> The value of WEBSITE_PRIVATE_IP is bound to change. However, it will be an IP within the address range of the integration subnet or the point-to-site address range, so you will need to allow access from the entire address range.
+>
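A minimal sketch of how an app or a Kudu console session might consume this variable (the fallback IP and the subnet prefix `10.0.1.0/24` below are made-up examples; substitute the address range of your own integration subnet or point-to-site pool):

```shell
# Read the outbound VNet IP that App Service exposes to the instance.
# The fallback value only makes this sketch runnable outside App Service,
# where the platform would normally set WEBSITE_PRIVATE_IP itself.
WEBSITE_PRIVATE_IP="${WEBSITE_PRIVATE_IP:-10.0.1.17}"
echo "Outbound VNet IP for this instance: $WEBSITE_PRIVATE_IP"

# Because the value can change, firewall or NSG rules on the target resource
# should allow the whole address range, not this single IP.
ALLOWED_PREFIX="10.0.1.0/24"
echo "Allow traffic from the entire range: $ALLOWED_PREFIX"
```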
+ ### Gateway-required VNet Integration routing The routes that are defined in your VNet are used to direct traffic into your VNet from your app. To send additional outbound traffic into the VNet, add those address blocks here. This capability only works with gateway-required VNet Integration. Route tables don't affect your app traffic when you use gateway-required VNet Integration the way that they do with regional VNet Integration.
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-backend-health-troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
@@ -18,6 +18,9 @@ Overview
By default, Azure Application Gateway probes backend servers to check their health status and to check whether they're ready to serve requests. Users can also create custom probes to mention the host name, the path to be probed, and the status codes to be accepted as Healthy. In each case, if the backend server doesn't respond successfully, Application Gateway marks the server as Unhealthy and stops forwarding requests to the server. After the server starts responding successfully, Application Gateway resumes forwarding the requests.
+> [!NOTE]
+> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ ### How to check backend health To check the health of your backend pool, you can use the
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-azure-functions-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-azure-functions-csharp.md
@@ -21,7 +21,7 @@ ms.tgt_pltfrm: Azure Functions
--- # Tutorial: Use dynamic configuration in an Azure Functions app
-The App Configuration .NET Standard configuration provider supports caching and refreshing configuration dynamically driven by application activity. This tutorial shows how you can implement dynamic configuration updates in your code. It builds on the Azure Functions app introduced in the quickstarts. Before you continue, finish [Create an Azure functions app with Azure App Configuration](./quickstart-azure-functions-csharp.md) first.
+The App Configuration .NET configuration provider supports caching and refreshing configuration dynamically driven by application activity. This tutorial shows how you can implement dynamic configuration updates in your code. It builds on the Azure Functions app introduced in the quickstarts. Before you continue, finish [Create an Azure functions app with Azure App Configuration](./quickstart-azure-functions-csharp.md) first.
In this tutorial, you learn how to:
@@ -38,44 +38,71 @@ In this tutorial, you learn how to:
## Reload data from App Configuration
-1. Open *Function1.cs*. In addition to the `static` property `Configuration`, add a new `static` property `ConfigurationRefresher` to keep a singleton instance of `IConfigurationRefresher` that will be used to signal configuration updates during Functions calls later.
+1. Open *Startup.cs*, and update the `ConfigureAppConfiguration` method.
- ```csharp
- private static IConfiguration Configuration { set; get; }
- private static IConfigurationRefresher ConfigurationRefresher { set; get; }
- ```
+ The `ConfigureRefresh` method registers a setting to be checked for changes whenever a refresh is triggered within the application, which you will do in a later step when adding `_configurationRefresher.TryRefreshAsync()`. The `refreshAll` parameter instructs the App Configuration provider to reload the entire configuration whenever a change is detected in the registered setting.
-2. Update the constructor and use the `ConfigureRefresh` method to specify the setting to be refreshed from the App Configuration store. An instance of `IConfigurationRefresher` is retrieved using `GetRefresher` method. Optionally, we also change the configuration cache expiration time window to 1 minute from the default 30 seconds.
+ All settings registered for refresh have a default cache expiration of 30 seconds, which can be updated by calling the `AzureAppConfigurationRefreshOptions.SetCacheExpiration` method.
```csharp
- static Function1()
+ public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
{
- var builder = new ConfigurationBuilder();
- builder.AddAzureAppConfiguration(options =>
+ builder.ConfigurationBuilder.AddAzureAppConfiguration(options =>
    {
        options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
+ // Load all keys that start with `TestApp:`
+ .Select("TestApp:*")
+ // Configure to reload configuration if the registered 'Sentinel' key is modified
.ConfigureRefresh(refreshOptions =>
- refreshOptions.Register("TestApp:Settings:Message")
- .SetCacheExpiration(TimeSpan.FromSeconds(60))
- );
- ConfigurationRefresher = options.GetRefresher();
+ refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true));
});
- Configuration = builder.Build();
    }
    ```
-3. Update the `Run` method and signal to refresh the configuration using the `TryRefreshAsync` method at the beginning of the Functions call. This will be no-op if the cache expiration time window isn't reached. Remove the `await` operator if you prefer the configuration to be refreshed without blocking.
+ > [!TIP]
+ > When you are updating multiple key-values in App Configuration, you normally don't want your application to reload configuration before all the changes are made. You can register a **sentinel** key and update it only when all other configuration changes are completed. This helps ensure the consistency of configuration in your application.
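+   The combination of cache-expiration gating and a sentinel key can be sketched with a toy model (Python here for brevity; `RefreshGate` and its members are invented for illustration and are not part of the App Configuration provider):

+   ```python
+   import time
+
+   class RefreshGate:
+       """Toy model of TryRefreshAsync-style gating: a refresh request is a
+       no-op until the cache expiration window has elapsed; only then is the
+       registered sentinel key re-checked."""
+
+       def __init__(self, cache_expiration_seconds=30.0):
+           self.cache_expiration = cache_expiration_seconds
+           self.last_refresh = time.monotonic()
+
+       def try_refresh(self, check_sentinel):
+           now = time.monotonic()
+           if now - self.last_refresh < self.cache_expiration:
+               return False  # window not reached: request is a no-op
+           self.last_refresh = now
+           check_sentinel()  # on a change, the provider reloads all registered keys
+           return True
+   ```

+   Because most calls return without doing any work, signaling a refresh at the top of every Functions call stays cheap.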
+
+2. Update the `Configure` method to make Azure App Configuration services available through dependency injection.
+
+ ```csharp
+ public override void Configure(IFunctionsHostBuilder builder)
+ {
+ builder.Services.AddAzureAppConfiguration();
+ }
+ ```
+
+3. Open *Function1.cs*, and add the following namespaces.
+
+ ```csharp
+ using System.Linq;
+ using Microsoft.Extensions.Configuration.AzureAppConfiguration;
+ ```
+
+ Update the constructor to obtain the instance of `IConfigurationRefresherProvider` through dependency injection, from which you can obtain the instance of `IConfigurationRefresher`.
```csharp
- public static async Task<IActionResult> Run(
+ private readonly IConfiguration _configuration;
+ private readonly IConfigurationRefresher _configurationRefresher;
+
+ public Function1(IConfiguration configuration, IConfigurationRefresherProvider refresherProvider)
+ {
+ _configuration = configuration;
+ _configurationRefresher = refresherProvider.Refreshers.First();
+ }
+ ```
+
+4. Update the `Run` method and signal a configuration refresh using the `TryRefreshAsync` method at the beginning of the Functions call. It will be a no-op if the cache expiration window hasn't passed. Remove the `await` operator if you prefer the configuration to be refreshed without blocking the current Functions call; in that case, later Functions calls will get the updated value.
+
+ ```csharp
+ public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");
- await ConfigurationRefresher.TryRefreshAsync();
+ await _configurationRefresher.TryRefreshAsync();
string keyName = "TestApp:Settings:Message";
- string message = Configuration[keyName];
+ string message = _configuration[keyName];
return message != null ? (ActionResult)new OkObjectResult(message)
@@ -113,19 +140,27 @@ In this tutorial, you learn how to:
![Quickstart Function launch local](./media/quickstarts/dotnet-core-function-launch-local.png)
-5. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store instance that you created in the quickstart.
+5. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store that you created in the quickstart.
-6. Select **Configuration Explorer**, and update the values of the following key:
+6. Select **Configuration explorer**, and update the value of the following key:
    | Key | Value |
    |---|---|
    | TestApp:Settings:Message | Data from Azure App Configuration - Updated |
-7. Refresh the browser a few times. When the cached setting expires after a minute, the page shows the response of the Functions call with updated value.
+    Then create the sentinel key, or modify its value if it already exists. For example:
+
+ | Key | Value |
+ |---|---|
+ | TestApp:Settings:Sentinel | v1 |
+
+7. Refresh the browser a few times. When the cached setting expires after 30 seconds, the page shows the response of the Functions call with the updated value.
![Quickstart Function refresh local](./media/quickstarts/dotnet-core-function-refresh-local.png)
-The example code used in this tutorial can be downloaded from [App Configuration GitHub repo](https://github.com/Azure/AppConfiguration/tree/master/examples/DotNetCore/AzureFunction)
+> [!NOTE]
+> The example code used in this tutorial can be downloaded from [App Configuration GitHub repo](https://github.com/Azure/AppConfiguration/tree/master/examples/DotNetCore/AzureFunction).
## Clean up resources
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/howto-integrate-azure-managed-service-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
@@ -96,7 +96,7 @@ To set up a managed identity in the portal, you first create an application and
using Azure.Identity; ```
-1. If you wish to access only values stored directly in App Configuration, update the `CreateWebHostBuilder` method by replacing the `config.AddAzureAppConfiguration()` method.
+1. If you wish to access only values stored directly in App Configuration, update the `CreateWebHostBuilder` method by replacing the `config.AddAzureAppConfiguration()` method (this is found in the `Microsoft.Azure.AppConfiguration.AspNetCore` package).
    > [!IMPORTANT]
    > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.0. Select the correct syntax based on your environment.
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/quickstart-azure-functions-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-azure-functions-csharp.md
@@ -40,39 +40,67 @@ In this quickstart, you incorporate the Azure App Configuration service into an
[!INCLUDE [Create a project using the Azure Functions template](../../includes/functions-vstools-create.md)]

## Connect to an App Configuration store
+This project will use [dependency injection in .NET Azure Functions](/azure/azure-functions/functions-dotnet-dependency-injection) and add Azure App Configuration as an extra configuration source.
-1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search for and add the `Microsoft.Extensions.Configuration.AzureAppConfiguration` NuGet package to your project. If you can't find it, select the **Include prerelease** check box.
+1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search for and add the following NuGet packages to your project.
+ - [Microsoft.Extensions.Configuration.AzureAppConfiguration](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.AzureAppConfiguration/) version 4.1.0 or later
+ - [Microsoft.Azure.Functions.Extensions](https://www.nuget.org/packages/Microsoft.Azure.Functions.Extensions/) version 1.1.0 or later
-2. Open *Function1.cs*, and add the namespaces of the .NET Core configuration and the App Configuration configuration provider.
+2. Add a new file, *Startup.cs*, with the following code. It defines a class named `Startup` that implements the `FunctionsStartup` abstract class. An assembly attribute is used to specify the type name used during Azure Functions startup.
+
+   The `ConfigureAppConfiguration` method is overridden, and the Azure App Configuration provider is added as an extra configuration source by calling `AddAzureAppConfiguration()`. The `Configure` method is left empty because you don't need to register any services at this point.
+
+ ```csharp
+ using System;
+ using Microsoft.Azure.Functions.Extensions.DependencyInjection;
+ using Microsoft.Extensions.Configuration;
+
+ [assembly: FunctionsStartup(typeof(FunctionApp.Startup))]
+
+ namespace FunctionApp
+ {
+ class Startup : FunctionsStartup
+ {
+ public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
+ {
+ string cs = Environment.GetEnvironmentVariable("ConnectionString");
+ builder.ConfigurationBuilder.AddAzureAppConfiguration(cs);
+ }
+
+ public override void Configure(IFunctionsHostBuilder builder)
+ {
+ }
+ }
+ }
+ ```
+
+3. Open *Function1.cs*, and add the following namespace.
    ```csharp
    using Microsoft.Extensions.Configuration;
- using Microsoft.Extensions.Configuration.AzureAppConfiguration;
```
-3. Add a `static` property named `Configuration` to create a singleton instance of `IConfiguration`. Then add a `static` constructor to connect to App Configuration by calling `AddAzureAppConfiguration()`. This will load configuration once at the application startup. The same configuration instance will be used for all Functions calls later.
+ Add a constructor used to obtain an instance of `IConfiguration` through dependency injection.
```csharp
- private static IConfiguration Configuration { set; get; }
+ private readonly IConfiguration _configuration;
- static Function1()
+ public Function1(IConfiguration configuration)
{
- var builder = new ConfigurationBuilder();
- builder.AddAzureAppConfiguration(Environment.GetEnvironmentVariable("ConnectionString"));
- Configuration = builder.Build();
+ _configuration = configuration;
    }
    ```

4. Update the `Run` method to read values from the configuration.

    ```csharp
- public static async Task<IActionResult> Run(
+ public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string keyName = "TestApp:Settings:Message";
- string message = Configuration[keyName];
+ string message = _configuration[keyName];
return message != null ? (ActionResult)new OkObjectResult(message)
@@ -80,6 +108,8 @@ In this quickstart, you incorporate the Azure App Configuration service into an
    }
    ```
+ The `Function1` class and the `Run` method should not be static. Remove the `static` modifier if it was autogenerated.
+
## Test the function locally

1. Set an environment variable named **ConnectionString**, and set it to the access key of your App Configuration store. If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
@@ -116,7 +146,7 @@ In this quickstart, you incorporate the Azure App Configuration service into an
## Next steps
-In this quickstart, you created a new App Configuration store and used it with an Azure Functions app via the [App Configuration provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration). To learn how to configure your Azure Functions app to dynamically refresh configuration settings, continue to the next tutorial.
+In this quickstart, you created a new App Configuration store and used it with an Azure Functions app via the [App Configuration provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration). To learn how to update your Azure Functions app to dynamically refresh configuration, continue to the next tutorial.
> [!div class="nextstepaction"]
-> [Enable dynamic configuration](./enable-dynamic-configuration-azure-functions-csharp.md)
\ No newline at end of file
+> [Enable dynamic configuration in Azure Functions](./enable-dynamic-configuration-azure-functions-csharp.md)
\ No newline at end of file
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-data-controller-using-azdata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-azdata.md
@@ -266,13 +266,13 @@ Once you have run the command, continue on to [Monitoring the creation status](#
Before you create the data controller on Azure Red Hat OpenShift, you will need to apply specific security context constraints (SCC). For the preview release, these relax the security constraints. Future releases will provide updated SCC.

1. Download the custom security context constraint (SCC). Use one of the following:
- - [GitHub](https://github.com/microsoft/azure_arc/tree/master/arc_data_services/deploy/yaml/arc-data-scc.yaml)
- - ([Raw](https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/arc-data-scc.yaml))
+ - [GitHub](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/arc-data-scc.yaml)
+ - ([Raw](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arc-data-scc.yaml))
   - `curl`. The following command downloads arc-data-scc.yaml:

   ```console
- curl https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/arc-data-scc.yaml -o arc-data-scc.yaml
+ curl https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arc-data-scc.yaml -o arc-data-scc.yaml
   ```

1. Create SCC.
@@ -325,13 +325,13 @@ Once you have run the command, continue on to [Monitoring the creation status](#
Before you create the data controller on Red Hat OCP, you will need to apply specific security context constraints (SCC). For the preview release, these relax the security constraints. Future releases will provide updated SCC.

1. Download the custom security context constraint (SCC). Use one of the following:
- - [GitHub](https://github.com/microsoft/azure_arc/tree/master/arc_data_services/deploy/yaml/arc-data-scc.yaml)
- - ([Raw](https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/arc-data-scc.yaml))
+ - [GitHub](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/arc-data-scc.yaml)
+ - ([Raw](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arc-data-scc.yaml))
   - `curl`. The following command downloads arc-data-scc.yaml:

   ```console
- curl https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/arc-data-scc.yaml -o arc-data-scc.yaml
+ curl https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arc-data-scc.yaml -o arc-data-scc.yaml
   ```

1. Create SCC.
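Note the branch change in the URLs above: the `azure_arc` repository's default branch is now `main` rather than `master`. If you have saved copies of these commands, a quick rewrite (illustrative; the `sed` pattern below is just one way to do it) fixes stale links:

```shell
# Old raw URL still pointing at the retired default branch.
old='https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/arc-data-scc.yaml'

# Rewrite the branch segment to point at the new default branch.
echo "$old" | sed 's#/azure_arc/master/#/azure_arc/main/#'
```

The same substitution works for any of the other raw URLs in this article.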
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-data-controller-using-kubernetes-native-tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
@@ -52,7 +52,7 @@ Creating the Azure Arc data controller has the following high level steps:
Run the following command to create the custom resource definitions. **[Requires Kubernetes Cluster Administrator Permissions]** ```console
-kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
+kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
```

## Create a namespace in which the data controller will be created
@@ -72,7 +72,7 @@ The bootstrapper service handles incoming requests for creating, editing, and de
Run the following command to create a bootstrapper service, a service account for the bootstrapper service, and a role and role binding for the bootstrapper service account. ```console
-kubectl create --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/bootstrapper.yaml
+kubectl create --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml
```

Verify that the bootstrapper pod is running using the following command. You may need to run it a few times until the status changes to `Running`.
@@ -143,7 +143,7 @@ echo '<your string to encode here>' | base64
# echo 'example' | base64
```
-Once you have encoded the username and password you can create a file based on the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/controller-login-secret.yaml) and replace the username and password values with your own.
+Once you have encoded the username and password, you can create a file based on the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/controller-login-secret.yaml) and replace the username and password values with your own.
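For instance, encoding a placeholder credential pair looks like this (the values are invented for illustration; note that `echo -n` avoids including a trailing newline in the encoded value, unlike the `echo` example above):

```shell
# Placeholder credentials -- replace with your own before use.
username_b64=$(echo -n 'arcadmin' | base64)
password_b64=$(echo -n 'S3cureP@ss' | base64)
echo "username: $username_b64"

# Decode to verify the round trip before pasting into the secret file.
echo "$username_b64" | base64 --decode
```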
Then run the following command to create the secret.
@@ -158,7 +158,7 @@ kubectl create --namespace arc -f C:\arc-data-services\controller-login-secret.y
Now you are ready to create the data controller itself.
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
Edit the following as needed:
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-postgresql-hyperscale-server-group-kubernetes-native-tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-kubernetes-native-tools.md
@@ -29,7 +29,7 @@ To create a PostgreSQL Hyperscale server group, you need to create a Kubernetes
## Create a yaml file
-You can use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/postgresql.yaml) file as a starting point to create your own custom PostgreSQL Hyperscale server group yaml file. Download this file to your local computer and open it in a text editor. It is useful to use a text editor such as [VS Code](https://code.visualstudio.com/download) that support syntax highlighting and linting for yaml files.
+You can use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/postgresql.yaml) file as a starting point to create your own custom PostgreSQL Hyperscale server group yaml file. Download this file to your local computer and open it in a text editor. It is useful to use a text editor such as [VS Code](https://code.visualstudio.com/download) that supports syntax highlighting and linting for yaml files.
This is an example yaml file:
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md
@@ -29,7 +29,7 @@ To create a SQL managed instance, you need to create a Kubernetes secret to stor
## Create a yaml file
-You can use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/sqlmi.yaml) file as a starting point to create your own custom SQL managed instance yaml file. Download this file to your local computer and open it in a text editor. It is useful to use a text editor such as [VS Code](https://code.visualstudio.com/download) that support syntax highlighting and linting for yaml files.
+You can use the [template yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/sqlmi.yaml) file as a starting point to create your own custom SQL managed instance yaml file. Download this file to your local computer and open it in a text editor. It is useful to use a text editor such as [VS Code](https://code.visualstudio.com/download) that supports syntax highlighting and linting for yaml files.
This is an example yaml file:
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/offline-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/offline-deployment.md
@@ -17,7 +17,7 @@ Typically the container images used in the creation of the Azure Arc data contro
Because monthly updates are provided for Azure Arc enabled data services and there are a large number of container images, it is best to perform this process of pulling, tagging, and pushing the container images to a private container registry using a script. The script can either be automated or run manually.
-A [sample script](https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/scripts/pull-and-push-arc-data-services-images-to-private-registry.py) can be found in the Azure Arc GitHub repository.
+A [sample script](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/scripts/pull-and-push-arc-data-services-images-to-private-registry.py) can be found in the Azure Arc GitHub repository.
> [!NOTE]
> This script requires the installation of Python and the [Docker CLI](https://docs.docker.com/install/).
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
@@ -36,10 +36,10 @@ Run a command like this to download the files replace the value of the pod name
> Use the pod name of the Coordinator node of the Postgres Hyperscale server group. Its name is `<server group name>-0`. If you are not sure of the pod name, run the command `kubectl get pod`.

```console
-kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/master/azure_arc_data_jumpstart/aks/arm_template/postgres_hs/AdventureWorks.sql"
+kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/aks/arm_template/postgres_hs/AdventureWorks.sql"
#Example:
-#kubectl exec postgres02-0 -n arc -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/master/azure_arc_data_jumpstart/aks/arm_template/postgres_hs/AdventureWorks.sql"
+#kubectl exec postgres02-0 -n arc -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/aks/arm_template/postgres_hs/AdventureWorks.sql"
```

## Step 2: Restore the AdventureWorks database
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/use-gitops-connected-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-gitops-connected-cluster.md
@@ -146,7 +146,7 @@ To customize the configuration, here are more parameters you can use:
`--helm-operator-chart-version` : *Optional* chart version for Helm operator (if enabled). Default: '1.2.0'.
-`--operator-namespace` : *Optional* name for the operator namespace. Default: 'default'
+`--operator-namespace` : *Optional* name for the operator namespace. Default: 'default'. Max 23 characters.
`--operator-params` : *Optional* parameters for the operator. Must be given within single quotes. For example, `--operator-params='--git-readonly --git-path=releases --sync-garbage-collection'`
@@ -166,12 +166,6 @@ Options supported in --operator-params
* If '--git-user' or '--git-email' are not set (which means that you don't want Flux to write to the repo), then --git-readonly will automatically be set (if you have not already set it).
-* If enableHelmOperator is true, then operatorInstanceName + operatorNamespace strings cannot exceed 47 characters combined. If you fail to adhere to this limit, you will get the following error:
-
- ```console
- {"OperatorMessage":"Error: {failed to install chart from path [helm-operator] for release [<operatorInstanceName>-helm-<operatorNamespace>]: err [release name \"<operatorInstanceName>-helm-<operatorNamespace>\" exceeds max length of 53]} occurred while doing the operation : {Installing the operator} on the config","ClusterState":"Installing the operator"}
- ```
-
For more information, see [Flux documentation](https://aka.ms/FluxcdReadme).

> [!TIP]
@@ -247,7 +241,7 @@ While the provisioning process happens, the `sourceControlConfiguration` will mo
## Apply configuration from a private Git repository
-If you are using a private Git repo, then you need to configure the SSH public key in your repo. You can configure the public key either on the Git repo or the Git user that has access to the repo. The SSH public key will be either the one you provide or the one that Flux generates.
+If you are using a private Git repo, then you need to configure the SSH public key in your repo. You can configure the public key either on the specific Git repo or on the Git user that has access to the repo. The SSH public key will be either the one you provide or the one that Flux generates.
**Get your own public key**
@@ -256,7 +250,7 @@ If you generated your own SSH keys, then you already have the private and public
**Get the public key using Azure CLI (useful if Flux generates the keys)** ```console
-$ az k8sconfiguration show --resource-group <resource group name> --cluster-name <connected cluster name> --name <configuration name> --query 'repositoryPublicKey'
+$ az k8sconfiguration show --resource-group <resource group name> --cluster-name <connected cluster name> --name <configuration name> --cluster-type connectedClusters --query 'repositoryPublicKey'
Command group 'k8sconfiguration' is in preview. It may be changed/removed in a future release.
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAREDACTED"
```
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/analyze-telemetry-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/analyze-telemetry-data.md
@@ -103,7 +103,7 @@ The runtime provides the `customDimensions.LogLevel` and `customDimensions.Categ
## Consumption plan-specific metrics
-When running in a [Consumption plan](functions-scale.md#consumption-plan), the execution *cost* of a single function execution is measured in *GB-seconds*. Execution cost is calculated by combining its memory usage with its execution time. To learn more, see [Estimating Consumption plan costs](functions-consumption-costs.md).
+When running in a [Consumption plan](consumption-plan.md), the execution *cost* of a single function execution is measured in *GB-seconds*. Execution cost is calculated by combining its memory usage with its execution time. To learn more, see [Estimating Consumption plan costs](functions-consumption-costs.md).
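The GB-seconds arithmetic can be made concrete with a back-of-the-envelope sketch (illustrative only; actual billing applies a minimum memory size and rounding rules this sketch ignores, and rates come from the pricing page):

```python
def gb_seconds(memory_mb: float, duration_s: float) -> float:
    """Execution cost metric: memory in GB multiplied by execution time in seconds."""
    return (memory_mb / 1024.0) * duration_s

# A function observed at 512 MB for 2 seconds costs 1.0 GB-s.
print(gb_seconds(512, 2.0))  # 1.0
```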
The following telemetry queries are specific to metrics that impact the cost of running functions in the Consumption plan.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/configure-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/configure-monitoring.md
@@ -193,7 +193,7 @@ To learn more, see [Sampling in Application Insights](../azure-monitor/app/sampl
_This feature is in preview._
-You can have the [Azure Functions scale controller](./functions-scale.md#runtime-scaling) emit logs to either Application Insights or to Blob storage to better understand the decisions the scale controller is making for your function app.
+You can have the [Azure Functions scale controller](./event-driven-scaling.md#runtime-scaling) emit logs to either Application Insights or to Blob storage to better understand the decisions the scale controller is making for your function app.
To enable this feature, you add an application setting named `SCALE_CONTROLLER_LOGGING_ENABLED` to your function app settings. The value of this setting must be of the format `<DESTINATION>:<VERBOSITY>`, based on the following:
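The expected shape of the setting value can be made concrete with a toy parser (illustrative only; the valid destination and verbosity values are defined by the feature's documentation, not by this sketch):

```python
def parse_scale_controller_logging(value: str):
    """Split a '<DESTINATION>:<VERBOSITY>' setting value into its two parts."""
    destination, sep, verbosity = value.partition(":")
    if not sep or not destination or not verbosity:
        raise ValueError("expected '<DESTINATION>:<VERBOSITY>'")
    return destination, verbosity

print(parse_scale_controller_logging("AppInsights:Verbose"))
```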
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/consumption-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/consumption-plan.md new file mode 100644
@@ -0,0 +1,44 @@
+---
+title: Azure Functions Consumption plan hosting
+description: Learn how Azure Functions Consumption plan hosting lets you run your code in an environment that scales dynamically, where you only pay for resources used during execution.
+ms.date: 8/31/2020
+ms.topic: conceptual
+# Customer intent: As a developer, I want to understand the benefits of using the Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need.
+---
+
+# Azure Functions Consumption plan hosting
+
+When you're using the Consumption plan, instances of the Azure Functions host are dynamically added and removed based on the number of incoming events. The Consumption plan is the fully *serverless* hosting option for Azure Functions.
+
+## Benefits
+
+The Consumption plan scales automatically, even during periods of high load. When running functions in a Consumption plan, you're charged for compute resources only when your functions are running. On a Consumption plan, a function execution times out after a configurable period of time.
+
+For a comparison of the Consumption plan against the other plan and hosting types, see [function scale and hosting options](functions-scale.md).
+
+## Billing
+
+Billing is based on the number of executions, execution time, and memory used. Usage is aggregated across all functions within a function app. For more information, see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
+
+To learn more about how to estimate costs when running in a Consumption plan, see [Understanding Consumption plan costs](functions-consumption-costs.md).
+
+## Create a Consumption plan function app
+
+When you create a function app in the Azure portal, the Consumption plan is the default. When using APIs to create your function app, you don't have to first create an App Service plan as you do with Premium and Dedicated plans.
+
+Use the following links to learn how to create a serverless function app in a Consumption plan, either programmatically or in the Azure portal:
+
++ [Azure CLI](./scripts/functions-cli-create-serverless.md)
++ [Azure portal](functions-create-first-azure-function.md)
++ [Azure Resource Manager template](functions-create-first-function-resource-manager.md)
+
+You can also create function apps in a Consumption plan when you publish a Functions project from [Visual Studio Code](functions-create-first-function-vs-code.md#publish-the-project-to-azure) or [Visual Studio](functions-create-your-first-function-visual-studio.md#publish-the-project-to-azure).
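As a sketch of the CLI path, a single `az functionapp create` call with `--consumption-plan-location` creates the app directly in a Consumption plan, with no separate plan resource (all names below are placeholders you'd replace, and the resource group and storage account must already exist):

```shell
# Placeholder names -- substitute your own resource group, storage account,
# and a globally unique function app name before running.
az functionapp create \
    --resource-group MyResourceGroup \
    --consumption-plan-location westeurope \
    --runtime dotnet \
    --functions-version 3 \
    --name my-unique-func-app \
    --storage-account mystorageacct123
```

See the linked Azure CLI article for the full end-to-end script.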
+
+## Multiple apps in the same plan
+
+Function apps in the same region can be assigned to the same Consumption plan. There's no downside or impact to having multiple apps running in the same Consumption plan. Assigning multiple apps to the same Consumption plan has no impact on resilience, scalability, or reliability of each app.
+
+## Next steps
+
++ [Azure Functions hosting options](functions-scale.md)
++ [Event-driven scaling in Azure Functions](event-driven-scaling.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-csharp.md
@@ -122,7 +122,7 @@ The return object is an [ActionResult](/dotnet/api/microsoft.aspnetcore.mvc.acti
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](functions-scale.md#consumption-plan), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-node.md
@@ -127,7 +127,7 @@ Each binding requires a direction, a type, and a unique name. The HTTP trigger h
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](functions-scale.md#consumption-plan), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-powershell.md
@@ -126,7 +126,7 @@ Each binding requires a direction, a type, and a unique name. The HTTP trigger h
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](functions-scale.md#consumption-plan), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python.md
@@ -249,7 +249,7 @@ Use the following commands to create these items. Both Azure CLI and PowerShell
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](functions-scale.md#consumption-plan), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-typescript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-typescript.md
@@ -159,7 +159,7 @@ Each binding requires a direction, a type, and a unique name. The HTTP trigger h
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](functions-scale.md#consumption-plan), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
## Deploy the function project to Azure
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-function-app-linux-app-service-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-function-app-linux-app-service-plan.md
@@ -8,7 +8,7 @@ ms.date: 04/29/2020
---

# Create a function app on Linux in an Azure App Service plan
-Azure Functions lets you host your functions on Linux in a default Azure App Service container. This article walks you through how to use the [Azure portal](https://portal.azure.com) to create a Linux-hosted function app that runs in an [App Service plan](functions-scale.md#app-service-plan). You can also [bring your own custom container](functions-create-function-linux-custom-image.md).
+Azure Functions lets you host your functions on Linux in a default Azure App Service container. This article walks you through how to use the [Azure portal](https://portal.azure.com) to create a Linux-hosted function app that runs in an [App Service plan](dedicated-plan.md). You can also [bring your own custom container](functions-create-function-linux-custom-image.md).
![Create function app in the Azure portal](./media/create-function-app-linux-app-service-plan/function-app-in-portal-editor.png)
@@ -46,7 +46,7 @@ You must have a function app to host the execution of your functions on Linux. T
| Setting | Suggested value | Description |
| ------------ | ---------------- | ----------- |
- | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](../azure-functions/functions-scale.md#storage-account-requirements). |
+ | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](../azure-functions/storage-considerations.md#storage-account-requirements). |
|**Operating system**| **Linux** | An operating system is pre-selected for you based on your runtime stack selection, but you can change the setting if necessary. |
| **[Plan](../azure-functions/functions-scale.md)** | **Consumption (Serverless)** | Hosting plan that defines how resources are allocated to your function app. In the default **Consumption** plan, resources are added dynamically as required by your functions. In this [serverless](https://azure.microsoft.com/overview/serverless-computing/) hosting, you pay only for the time your functions run. When you run in an App Service plan, you must manage the [scaling of your function app](../azure-functions/functions-scale.md). |
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-premium-plan-function-app-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-premium-plan-function-app-portal.md new file mode 100644
@@ -0,0 +1,33 @@
+---
+title: Create an Azure Functions Premium plan in the portal
+description: Learn how to use the Azure portal to create a function app that runs in the Premium plan.
+ms.topic: how-to
+ms.date: 10/30/2020
+---
+
+# Create a Premium plan function app in the Azure portal
+
+Azure Functions offers a scalable Premium plan that provides virtual network connectivity, no cold start, and premium hardware. To learn more, see [Azure Functions Premium plan](functions-premium-plan.md).
+
+In this article, you learn how to use the Azure portal to create a function app in a Premium plan.
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+## Create a function app
+
+You must have a function app to host the execution of your functions. A function app lets you group functions as a logical unit for easier management, deployment, scaling, and sharing of resources.
+
+[!INCLUDE [functions-premium-create](../../includes/functions-premium-create.md)]
+
+At this point, you can create functions in the new function app. These functions can take advantage of the benefits of the [Premium plan](functions-premium-plan.md).
+
+## Clean up resources
+
+[!INCLUDE [Clean-up resources](../../includes/functions-quickstart-cleanup.md)]
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Add an HTTP triggered function](functions-create-first-azure-function.md#create-function)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/dedicated-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dedicated-plan.md new file mode 100644
@@ -0,0 +1,46 @@
+---
+title: Azure Functions Dedicated hosting
+description: Learn about the benefits of running Azure Functions on a dedicated App Service hosting plan.
+ms.topic: conceptual
+ms.date: 10/29/2020
+---
+
+# Dedicated hosting plans for Azure Functions
+
+This article is about hosting your function app in an App Service plan, including in an App Service Environment (ASE). For other hosting options, see the [hosting plan article](functions-scale.md).
+
+An App Service plan defines a set of compute resources for an app to run. These compute resources are analogous to the [_server farm_](https://wikipedia.org/wiki/Server_farm) in conventional hosting. One or more function apps can be configured to run on the same computing resources (App Service plan) as other App Service apps, such as web apps. These plans include Basic, Standard, Premium, and Isolated SKUs. For details about how the App Service plan works, see the [Azure App Service plans in-depth overview](../app-service/overview-hosting-plans.md).
+
+Consider an App Service plan in the following situations:
+
+* You have existing, underutilized VMs that are already running other App Service instances.
+* You want to provide a custom image on which to run your functions.
+
+## Billing
+
+You pay for function apps in an App Service Plan as you would for other App Service resources. This differs from Azure Functions [Consumption plan](consumption-plan.md) or [Premium plan](functions-premium-plan.md) hosting, which have consumption-based cost components. You are billed only for the plan, regardless of how many function apps or web apps run in the plan. To learn more, see the [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/).
+
+## <a name="always-on"></a> Always On
+
+If you run on an App Service plan, you should enable the **Always on** setting so that your function app runs correctly. On an App Service plan, the functions runtime goes idle after a few minutes of inactivity, so only HTTP triggers will "wake up" your functions. The **Always on** setting is available only on an App Service plan. On a Consumption plan, the platform activates function apps automatically.
+
+Even with Always On enabled, the execution timeout for individual functions is controlled by the `functionTimeout` setting in the [host.json](functions-host-json.md#functiontimeout) project file.
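+For example, a minimal host.json sketch (illustrative value; on an App Service plan the default timeout is 30 minutes) that raises the timeout to one hour:
+
+```json
+{
+  "version": "2.0",
+  "functionTimeout": "01:00:00"
+}
+```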
+
+## Scaling
+
+Using an App Service plan, you can manually scale out by adding more VM instances. You can also enable autoscale, though autoscale will be slower than the elastic scale of the Premium plan. For more information, see [Scale instance count manually or automatically](../azure-monitor/platform/autoscale-get-started.md?toc=%2fazure%2fapp-service%2ftoc.json). You can also scale up by choosing a different App Service plan. For more information, see [Scale up an app in Azure](../app-service/manage-scale-up.md).
+
+> [!NOTE]
+> When running JavaScript (Node.js) functions on an App Service plan, you should choose a plan that has fewer vCPUs. For more information, see [Choose single-core App Service plans](functions-reference-node.md#choose-single-vcpu-app-service-plans).
+<!-- Note: the portal links to this section via fwlink https://go.microsoft.com/fwlink/?linkid=830855 -->
+
+## App Service Environments
+
+Running in an [App Service Environment](../app-service/environment/intro.md) (ASE) lets you fully isolate your functions and take advantage of higher numbers of instances than an App Service plan. To get started, see the [App Service Environment overview](../app-service/environment/intro.md).
+
+If you just want to run your function app in a virtual network, you can do this using the [Premium plan](functions-premium-plan.md). To learn more, see [Establish Azure Functions private site access](functions-create-private-site-access.md).
+
+## Next steps
+
++ [Azure Functions hosting options](functions-scale.md)
++ [Azure App Service plan overview](../app-service/overview-hosting-plans.md)
\ No newline at end of file
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-billing.md
@@ -12,7 +12,7 @@ ms.author: azfuncdf
[Durable Functions](durable-functions-overview.md) is billed the same way as Azure Functions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
-When executing orchestrator functions in Azure Functions [Consumption plan](../functions-scale.md#consumption-plan), you need to be aware of some billing behaviors. The following sections describe these behaviors and their effect in more detail.
+When executing orchestrator functions in Azure Functions [Consumption plan](../consumption-plan.md), you need to be aware of some billing behaviors. The following sections describe these behaviors and their effect in more detail.
## Orchestrator function replay billing
@@ -41,7 +41,7 @@ Several factors contribute to the actual Azure Storage costs incurred by your Du
* A single function app is associated with a single task hub, which shares a set of Azure Storage resources. These resources are used by all durable functions in a function app. The actual number of functions in the function app has no effect on Azure Storage transaction costs.
* Each function app instance internally polls multiple queues in the storage account by using an exponential-backoff polling algorithm. An idle app instance polls the queues less often than does an active app, which results in fewer transaction costs. For more information about Durable Functions queue-polling behavior, see the [queue-polling section of the Performance and Scale article](durable-functions-perf-and-scale.md#queue-polling).
-* When running in the Azure Functions Consumption or Premium plans, the [Azure Functions scale controller](../functions-scale.md#how-the-consumption-and-premium-plans-work) regularly polls all task-hub queues in the background. If a function app is under light to moderate scale, only a single scale controller instance will poll these queues. If the function app scales out to a large number of instances, more scale controller instances might be added. These additional scale controller instances can increase the total queue-transaction costs.
+* When running in the Azure Functions Consumption or Premium plans, the [Azure Functions scale controller](../event-driven-scaling.md) regularly polls all task-hub queues in the background. If a function app is under light to moderate scale, only a single scale controller instance will poll these queues. If the function app scales out to a large number of instances, more scale controller instances might be added. These additional scale controller instances can increase the total queue-transaction costs.
* Each function app instance competes for a set of blob leases. These instances will periodically make calls to the Azure Blob service either to renew held leases or to attempt to acquire new leases. The task hub's configured partition count determines the number of blob leases. Scaling out to a larger number of function app instances likely increases the Azure Storage transaction costs associated with these lease operations. You can find more information on Azure Storage pricing in the [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/) documentation.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
@@ -693,7 +693,7 @@ In order to provide reliable and long-running execution guarantees, orchestrator
## Billing
-Durable Functions are billed the same as Azure Functions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/). When executing orchestrator functions in the Azure Functions [Consumption plan](../functions-scale.md#consumption-plan), there are some billing behaviors to be aware of. For more information on these behaviors, see the [Durable Functions billing](durable-functions-billing.md) article.
+Durable Functions are billed the same as Azure Functions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/). When executing orchestrator functions in the Azure Functions [Consumption plan](../consumption-plan.md), there are some billing behaviors to be aware of. For more information on these behaviors, see the [Durable Functions billing](durable-functions-billing.md) article.
## Jump right in
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-perf-and-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-perf-and-scale.md
@@ -46,7 +46,7 @@ The durable task extension implements a random exponential back-off algorithm to
The maximum polling delay is configurable via the `maxQueuePollingInterval` property in the [host.json file](../functions-host-json.md#durabletask). Setting this property to a higher value could result in higher message processing latencies. Higher latencies would be expected only after periods of inactivity. Setting this property to a lower value could result in higher storage costs due to increased storage transactions.

> [!NOTE]
-> When running in the Azure Functions Consumption and Premium plans, the [Azure Functions Scale Controller](../functions-scale.md#how-the-consumption-and-premium-plans-work) will poll each control and work-item queue once every 10 seconds. This additional polling is necessary to determine when to activate function app instances and to make scale decisions. At the time of writing, this 10 second interval is constant and cannot be configured.
+> When running in the Azure Functions Consumption and Premium plans, the [Azure Functions Scale Controller](../event-driven-scaling.md) will poll each control and work-item queue once every 10 seconds. This additional polling is necessary to determine when to activate function app instances and to make scale decisions. At the time of writing, this 10 second interval is constant and cannot be configured.
### Orchestration start delays

Orchestration instances are started by putting an `ExecutionStarted` message in one of the task hub's control queues. Under certain conditions, you may observe multi-second delays between when an orchestration is scheduled to run and when it actually starts running. During this time interval, the orchestration instance remains in the `Pending` state. There are two potential causes of this delay:
@@ -133,7 +133,7 @@ Generally speaking, orchestrator functions are intended to be lightweight and sh
## Auto-scale
-As with all Azure Functions running in the Consumption and Elastic Premium plans, Durable Functions supports auto-scale via the [Azure Functions scale controller](../functions-scale.md#runtime-scaling). The Scale Controller monitors the latency of all queues by periodically issuing _peek_ commands. Based on the latencies of the peeked messages, the Scale Controller will decide whether to add or remove VMs.
+As with all Azure Functions running in the Consumption and Elastic Premium plans, Durable Functions supports auto-scale via the [Azure Functions scale controller](../event-driven-scaling.md#runtime-scaling). The Scale Controller monitors the latency of all queues by periodically issuing _peek_ commands. Based on the latencies of the peeked messages, the Scale Controller will decide whether to add or remove VMs.
If the Scale Controller determines that control queue message latencies are too high, it will add VM instances until either the message latency decreases to an acceptable level or it reaches the control queue partition count. Similarly, the Scale Controller will continually add VM instances if work-item queue latencies are high, regardless of the partition count.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/quickstart-python-vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/quickstart-python-vscode.md
@@ -64,8 +64,8 @@ Python Azure Functions require version 2.x of [Azure Functions extension bundles
```json
"extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[2.*, 3.0.0)"
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[2.*, 3.0.0)"
}
```
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/event-driven-scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/event-driven-scaling.md new file mode 100644
@@ -0,0 +1,65 @@
+---
+title: Event-driven scaling in Azure Functions
+description: Explains the scaling behaviors of Consumption plan and Premium plan function apps.
+ms.date: 10/29/2020
+ms.topic: conceptual
+ms.service: azure-functions
+
+---
+# Event-driven scaling in Azure Functions
+
+In the Consumption and Premium plans, Azure Functions scales CPU and memory resources by adding additional instances of the Functions host. The number of instances is determined by the number of events that trigger a function.
+
+Each instance of the Functions host in the Consumption plan is limited to 1.5 GB of memory and one CPU. An instance of the host is the entire function app, meaning all functions within a function app share resources within an instance and scale at the same time. Function apps that share the same Consumption plan scale independently. In the Premium plan, the plan size determines the available memory and CPU for all apps in that plan on that instance.
+
+Function code files are stored on Azure Files shares on the function's main storage account. When you delete the main storage account of the function app, the function code files are deleted and cannot be recovered.
+
+## Runtime scaling
+
+Azure Functions uses a component called the *scale controller* to monitor the rate of events and determine whether to scale out or scale in. The scale controller uses heuristics for each trigger type. For example, when you're using an Azure Queue storage trigger, it scales based on the queue length and the age of the oldest queue message.
+
+The unit of scale for Azure Functions is the function app. When the function app is scaled out, additional resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute demand is reduced, the scale controller removes function host instances. The number of instances is eventually "scaled in" to zero when no functions are running within a function app.
+
+![Scale controller monitoring events and creating instances](./media/functions-scale/central-listener.png)
+
+## Cold Start
+
+After your function app has been idle for a number of minutes, the platform may scale the number of instances on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This latency is referred to as a _cold start_. The number of dependencies required by your function app can impact the cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must return a response. If cold starts are impacting your functions, consider running in a Premium plan or in a Dedicated plan with the **Always on** setting enabled.
+
+## Understanding scaling behaviors
+
+Scaling can vary based on a number of factors, and apps scale differently depending on the trigger and language selected. There are a few intricacies of scaling behaviors to be aware of:
+
+* **Maximum instances:** A single function app only scales out to a maximum of 200 instances. A single instance may process more than one message or request at a time though, so there isn't a set limit on the number of concurrent executions. You can [specify a lower maximum](#limit-scale-out) to throttle scale as required.
+* **New instance rate:** For HTTP triggers, new instances are allocated, at most, once per second. For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds. Scaling is faster when running in a [Premium plan](functions-premium-plan.md).
+* **Scale efficiency:** For Service Bus triggers, use _Manage_ rights on resources for the most efficient scaling. With _Listen_ rights, scaling isn't as accurate because the queue length can't be used to inform scaling decisions. To learn more about setting rights in Service Bus access policies, see [Shared Access Authorization Policy](../service-bus-messaging/service-bus-sas.md#shared-access-authorization-policies). For Event Hub triggers, see the [scaling guidance](functions-bindings-event-hubs-trigger.md#scaling) in the reference article.
+
+## Limit scale out
+
+You may wish to restrict the maximum number of instances an app uses to scale out. This is most common in cases where a downstream component like a database has limited throughput. By default, Consumption plan functions scale out to as many as 200 instances, and Premium plan functions scale out to as many as 100 instances. You can specify a lower maximum for a specific app by modifying the `functionAppScaleLimit` value. The `functionAppScaleLimit` can be set to `0` or `null` for unrestricted, or a valid value between `1` and the app maximum.
+
+```azurecli
+az resource update --resource-type Microsoft.Web/sites -g <RESOURCE_GROUP> -n <FUNCTION_APP-NAME>/config/web --set properties.functionAppScaleLimit=<SCALE_LIMIT>
+```
+
+## Best practices and patterns for scalable apps
+
+There are many aspects of a function app that impact how it scales, including host configuration, runtime footprint, and resource efficiency. For more information, see the [scalability section of the performance considerations article](functions-best-practices.md#scalability-best-practices). You should also be aware of how connections behave as your function app scales. For more information, see [How to manage connections in Azure Functions](manage-connections.md).
+
+For more information on scaling in Python and Node.js, see [Azure Functions Python developer guide - Scaling and concurrency](functions-reference-python.md#scaling-and-performance) and [Azure Functions Node.js developer guide - Scaling and concurrency](functions-reference-node.md#scaling-and-concurrency).
+
+## Billing model
+
+Billing for the different plans is described in detail on the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). Usage is aggregated at the function app level and counts only the time that function code is executed. The following are units for billing:
+
+* **Resource consumption in gigabyte-seconds (GB-s)**. Computed as a combination of memory size and execution time for all functions within a function app.
+* **Executions**. Counted each time a function is executed in response to an event trigger.
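The GB-s unit above can be sketched as memory size multiplied by execution time (an illustrative helper only; actual Consumption-plan metering rounds observed memory up to the nearest 128 MB and applies per-execution minimums, as described on the pricing page):

```python
def consumption_gb_seconds(memory_mb: float, duration_s: float, executions: int) -> float:
    """Rough estimate of resource consumption in GB-s.

    Sketch only: real metering rounds observed memory up to the
    nearest 128 MB bucket and enforces per-execution minimums.
    """
    return (memory_mb / 1024) * duration_s * executions

# 1,000,000 executions at 512 MB observed memory, 0.5 s each:
print(consumption_gb_seconds(512, 0.5, 1_000_000))  # 250000.0
```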
+
+Useful queries and information on how to understand your consumption bill can be found [on the billing FAQ](https://github.com/Azure/Azure-Functions/wiki/Consumption-Plan-Cost-Billing-FAQ).
+
+[Azure Functions pricing page]: https://azure.microsoft.com/pricing/details/functions
+
+## Next steps
+
++ [Azure Functions hosting options](functions-scale.md)
+
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-app-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
@@ -251,7 +251,7 @@ For Consumption & Premium plans only. The file path to the function app code and
The maximum number of instances that the function app can scale out to. Default is no limit.

> [!IMPORTANT]
-> This setting is in preview. An [app property for function max scale out](./functions-scale.md#limit-scale-out) has been added and is the recommended way to limit scale out.
+> This setting is in preview. An [app property for function max scale out](./event-driven-scaling.md#limit-scale-out) has been added and is the recommended way to limit scale out.
|Key|Sample value|
|---|------------|
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-trigger.md
@@ -458,7 +458,7 @@ If all 5 tries fail, Azure Functions adds a message to a Storage queue named *we
The blob trigger uses a queue internally, so the maximum number of concurrent function invocations is controlled by the [queues configuration in host.json](functions-host-json.md#queues). The default settings limit concurrency to 24 invocations. This limit applies separately to each function that uses a blob trigger.
-[The Consumption plan](functions-scale.md#how-the-consumption-and-premium-plans-work) limits a function app on one virtual machine (VM) to 1.5 GB of memory. Memory is used by each concurrently executing function instance and by the Functions runtime itself. If a blob-triggered function loads the entire blob into memory, the maximum memory used by that function just for blobs is 24 * maximum blob size. For example, a function app with three blob-triggered functions and the default settings would have a maximum per-VM concurrency of 3*24 = 72 function invocations.
+[The Consumption plan](event-driven-scaling.md) limits a function app on one virtual machine (VM) to 1.5 GB of memory. Memory is used by each concurrently executing function instance and by the Functions runtime itself. If a blob-triggered function loads the entire blob into memory, the maximum memory used by that function just for blobs is 24 * maximum blob size. For example, a function app with three blob-triggered functions and the default settings would have a maximum per-VM concurrency of 3*24 = 72 function invocations.
JavaScript and Java functions load the entire blob into memory, and C# functions do that if you bind to `string`, or `Byte[]`.
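A quick back-of-the-envelope check of these limits (a sketch using only the figures from the text above; it ignores the memory used by the Functions runtime itself, and the helper names are hypothetical):

```python
QUEUE_CONCURRENCY = 24   # default concurrency from the host.json queues settings
VM_MEMORY_GB = 1.5       # Consumption plan per-VM memory cap

def max_invocations_per_vm(blob_triggered_functions):
    # The 24-invocation limit applies separately to each blob-triggered function.
    return blob_triggered_functions * QUEUE_CONCURRENCY

def worst_case_blob_memory_gb(max_blob_size_mb):
    # Worst case for one function that loads each blob fully into memory:
    # 24 concurrent invocations * maximum blob size.
    return QUEUE_CONCURRENCY * max_blob_size_mb / 1024

max_invocations_per_vm(3)        # 72, as in the example above
worst_case_blob_memory_gb(32)    # 0.75 GB of the 1.5 GB per-VM limit
```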
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-compare-logic-apps-ms-flow-webjobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
@@ -73,9 +73,9 @@ Azure Functions is built on the WebJobs SDK, so it shares many of the same event
| | Functions | WebJobs with WebJobs SDK |
| --- | --- | --- |
-|**[Serverless app model](https://azure.microsoft.com/solutions/serverless/) with [automatic scaling](functions-scale.md#how-the-consumption-and-premium-plans-work)**|✔||
+|**[Serverless app model](https://azure.microsoft.com/solutions/serverless/) with [automatic scaling](event-driven-scaling.md)**|✔||
|**[Develop and test in browser](functions-create-first-azure-function.md)** |✔||
-|**[Pay-per-use pricing](functions-scale.md#consumption-plan)**|✔||
+|**[Pay-per-use pricing](consumption-plan.md)**|✔||
|**[Integration with Logic Apps](functions-twitter-email.md)**|✔||
| **Trigger events** |[Timer](functions-bindings-timer.md)<br>[Azure Storage queues and blobs](functions-bindings-storage-blob.md)<br>[Azure Service Bus queues and topics](functions-bindings-service-bus.md)<br>[Azure Cosmos DB](functions-bindings-cosmosdb.md)<br>[Azure Event Hubs](functions-bindings-event-hubs.md)<br>[HTTP/WebHook (GitHub, Slack)](functions-bindings-http-webhook.md)<br>[Azure Event Grid](functions-bindings-event-grid.md)|[Timer](functions-bindings-timer.md)<br>[Azure Storage queues and blobs](functions-bindings-storage-blob.md)<br>[Azure Service Bus queues and topics](functions-bindings-service-bus.md)<br>[Azure Cosmos DB](functions-bindings-cosmosdb.md)<br>[Azure Event Hubs](functions-bindings-event-hubs.md)<br>[File system](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Files/FileTriggerAttribute.cs)|
| **Supported languages** |C#<br>F#<br>JavaScript<br>Java<br>Python<br>PowerShell |C#<sup>1</sup>|
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-consumption-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-consumption-costs.md
@@ -12,9 +12,9 @@ There are currently three types of hosting plans for an app that runs in Azure F
| Plan | Description |
| ---- | ----------- |
-| [**Consumption**](functions-scale.md#consumption-plan) | You're only charged for the time that your function app runs. This plan includes a [free grant][pricing page] on a per subscription basis.|
-| [**Premium**](functions-scale.md#premium-plan) | Provides you with the same features and scaling mechanism as the Consumption plan, but with enhanced performance and VNET access. Cost is based on your chosen pricing tier. To learn more, see [Azure Functions Premium plan](functions-premium-plan.md). |
-| [**Dedicated (App Service)**](functions-scale.md#app-service-plan) <br/>(basic tier or higher) | When you need to run in dedicated VMs or in isolation, use custom images, or want to use your excess App Service plan capacity. Uses [regular App Service plan billing](https://azure.microsoft.com/pricing/details/app-service/). Cost is based on your chosen pricing tier.|
+| [**Consumption**](consumption-plan.md) | You're only charged for the time that your function app runs. This plan includes a [free grant][pricing page] on a per subscription basis.|
+| [**Premium**](functions-premium-plan.md) | Provides you with the same features and scaling mechanism as the Consumption plan, but with enhanced performance and VNET access. Cost is based on your chosen pricing tier. To learn more, see [Azure Functions Premium plan](functions-premium-plan.md). |
+| [**Dedicated (App Service)**](dedicated-plan.md) <br/>(basic tier or higher) | When you need to run in dedicated VMs or in isolation, use custom images, or want to use your excess App Service plan capacity. Uses [regular App Service plan billing](https://azure.microsoft.com/pricing/details/app-service/). Cost is based on your chosen pricing tier.|
You choose the plan that best supports your function performance and cost requirements. To learn more, see [Azure Functions scale and hosting](functions-scale.md).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-first-azure-function.md
@@ -56,7 +56,7 @@ Next, create a function in the new function app.
![Copy the function URL from the Azure portal](./media/functions-create-first-azure-function/function-app-develop-tab-testing.png)
-1. Paste the function URL into your browser's address bar. Add the query string value `?name=<your_name>` to the end of this URL and press Enter to run the request.
+1. Paste the function URL into your browser's address bar. Add the query string value `&name=<your_name>` to the end of this URL and press Enter to run the request.
The following example shows the response in the browser:
@@ -74,4 +74,4 @@ Next, create a function in the new function app.
## Next steps
-[!INCLUDE [Next steps note](../../includes/functions-quickstart-next-steps.md)]
\ No newline at end of file
+[!INCLUDE [Next steps note](../../includes/functions-quickstart-next-steps.md)]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-java-gradle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-first-java-gradle.md
@@ -10,7 +10,7 @@ ms.date: 04/08/2020
# Use Java and Gradle to create and publish a function to Azure
-This article shows you how to build and publish a Java function project to Azure Functions with the Gradle command-line tool. When you're done, your function code runs in Azure in a [serverless hosting plan](functions-scale.md#consumption-plan) and is triggered by an HTTP request.
+This article shows you how to build and publish a Java function project to Azure Functions with the Gradle command-line tool. When you're done, your function code runs in Azure in a [serverless hosting plan](consumption-plan.md) and is triggered by an HTTP request.
> [!NOTE]
> If Gradle is not your preferred development tool, check out our similar tutorials for Java developers using [Maven](./create-first-function-cli-java.md), [IntelliJ IDEA](/azure/developer/java/toolkit-for-intellij/quickstart-functions), and [VS Code](./create-first-function-vs-code-java.md).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-kotlin-maven https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-first-kotlin-maven.md
@@ -11,7 +11,7 @@ ms.custom: devx-track-azurepowershell
# Quickstart: Create your first function with Kotlin and Maven
-This article guides you through using the Maven command-line tool to build and publish a Kotlin function project to Azure Functions. When you're done, your function code runs on the [Consumption Plan](functions-scale.md#consumption-plan) in Azure and can be triggered using an HTTP request.
+This article guides you through using the Maven command-line tool to build and publish a Kotlin function project to Azure Functions. When you're done, your function code runs on the [Consumption Plan](consumption-plan.md) in Azure and can be triggered using an HTTP request.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
@@ -234,7 +234,7 @@ To work with [Azure Functions triggers and bindings](functions-triggers-bindings
You've created a Kotlin function app with a simple HTTP trigger and deployed it to Azure Functions.

-- Review the [Java function developer guide](functions-reference-java.md) for more information on developing Java and Kotlin functions.
+- Review the [Azure Functions Java developer guide](functions-reference-java.md) for more information on developing Java and Kotlin functions.
- Add additional functions with different triggers to your project using the `azure-functions:add` Maven target.
- Write and debug functions locally with [Visual Studio Code](https://code.visualstudio.com/docs/java/java-azurefunctions), [IntelliJ](functions-create-maven-intellij.md), and [Eclipse](functions-create-maven-eclipse.md).
- Debug functions deployed in Azure with Visual Studio Code. See the Visual Studio Code [serverless Java applications](https://code.visualstudio.com/docs/java/java-serverless#_remote-debug-functions-running-in-the-cloud) documentation for instructions.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-linux-custom-image.md
@@ -15,7 +15,7 @@ In this tutorial, you create and deploy your code to Azure Functions as a custom
Azure Functions supports any language or runtime using [custom handlers](functions-custom-handlers.md). For some languages, such as the R programming language used in this tutorial, you need to install the runtime or additional libraries as dependencies that require the use of a custom container.

::: zone-end
-Deploying your function code in a custom Linux container requires [Premium plan](functions-premium-plan.md#features) or a [Dedicated (App Service) plan](functions-scale.md#app-service-plan) hosting. Completing this tutorial incurs costs of a few US dollars in your Azure account, which you can minimize by [cleaning-up resources](#clean-up-resources) when you're done.
+Deploying your function code in a custom Linux container requires a [Premium plan](functions-premium-plan.md) or a [Dedicated (App Service) plan](dedicated-plan.md) hosting. Completing this tutorial incurs costs of a few US dollars in your Azure account, which you can minimize by [cleaning up resources](#clean-up-resources) when you're done.
You can also use a default Azure App Service container as described on [Create your first function hosted on Linux](./create-first-function-cli-csharp.md?pivots=programming-language-python). Supported base images for Azure Functions are found in the [Azure Functions base images repo](https://hub.docker.com/_/microsoft-azure-functions-base).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-private-site-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-private-site-access.md
@@ -125,7 +125,7 @@ The first step in this tutorial is to create a new virtual machine inside a virt
## Create an Azure Functions app
-The next step is to create a function app in Azure using the [Consumption plan](functions-scale.md#consumption-plan). You deploy your function code to this resource later in the tutorial.
+The next step is to create a function app in Azure using the [Consumption plan](consumption-plan.md). You deploy your function code to this resource later in the tutorial.
1. In the portal, choose **Add** at the top of the resource group view.
1. Select **Compute > Function App**
@@ -144,7 +144,7 @@ The next step is to create a function app in Azure using the [Consumption plan](
| Setting | Suggested value | Description |
| ------------ | ---------------- | ---------------- |
- | _Storage account_ | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](./functions-scale.md#storage-account-requirements). |
+ | _Storage account_ | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](storage-considerations.md#storage-account-requirements). |
| _Operating system_ | Preferred operating system | An operating system is pre-selected for you based on your runtime stack selection, but you can change the setting if necessary. |
| _Plan_ | Consumption | The [hosting plan](./functions-scale.md) dictates how the function app is scaled and resources available to each instance. |

1. Select **Review + Create** to review the app configuration selections.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-vnet.md
@@ -157,4 +157,4 @@ Functions running in a Premium plan share the same underlying App Service infras
> [!div class="nextstepaction"] > [Learn more about the networking options in Functions](./functions-networking-options.md)
-[Premium plan]: functions-scale.md#premium-plan
+[Premium plan]: functions-premium-plan.md
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-custom-handlers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-custom-handlers.md
@@ -402,7 +402,7 @@ By setting the `message` output equal to the order data that came in from the re
For HTTP-triggered functions with no additional bindings or outputs, you may want your handler to work directly with the HTTP request and response instead of the custom handler [request](#request-payload) and [response](#response-payload) payloads. This behavior can be configured in *host.json* using the `enableForwardingHttpRequest` setting.

> [!IMPORTANT]
-> The primary purpose of the custom handlers feature is to enable languages and runtimes that do not currently have first-class support on Azure Functions. While it may be possible to run web applications using custom handlers, Azure Functions is not a standard reverse proxy. Some features such as response streaming, HTTP/2, and WebSockets are not available. Some components of the HTTP request such as certain headers and routes may be restricted. Your application may also experience excessive [cold start](functions-scale.md#cold-start).
+> The primary purpose of the custom handlers feature is to enable languages and runtimes that do not currently have first-class support on Azure Functions. While it may be possible to run web applications using custom handlers, Azure Functions is not a standard reverse proxy. Some features such as response streaming, HTTP/2, and WebSockets are not available. Some components of the HTTP request such as certain headers and routes may be restricted. Your application may also experience excessive [cold start](event-driven-scaling.md#cold-start).
> > To address these circumstances, consider running your web apps on [Azure App Service](../app-service/overview.md).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-deployment-technologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-deployment-technologies.md
@@ -21,7 +21,7 @@ The following table describes the available deployment methods for your Function
| -- | -- | -- | | Tools-based | &bull;&nbsp;[Visual&nbsp;Studio&nbsp;Code&nbsp;publish](functions-develop-vs-code.md#publish-to-azure)<br/>&bull;&nbsp;[Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>&bull;&nbsp;[Core Tools publish](functions-run-local.md#publish) | Deployments during development and other ad-hock deployments. Deployments are managed locally by the tooling. | | App Service-managed| &bull;&nbsp;[Deployment&nbsp;Center&nbsp;(CI/CD)](functions-continuous-deployment.md)<br/>&bull;&nbsp;[Container&nbsp;deployments](functions-create-function-linux-custom-image.md#enable-continuous-deployment-to-azure) | Continuous deployment (CI/CD) from source control or from a container registry. Deployments are managed by the App Service platform (Kudu).|
-| External pipelines|&bull;&nbsp;[DevOps Pipelines](functions-how-to-azure-devops.md)<br/>&bull;&nbsp;[GitHub actions](functions-how-to-github-actions.md) | Production and DevOps pipelines that include additional validation, testing, and other actions be run as part of an automated deployment. Deployments are managed by the pipeline. |
+| External pipelines|&bull;&nbsp;[Azure Pipelines](functions-how-to-azure-devops.md)<br/>&bull;&nbsp;[GitHub actions](functions-how-to-github-actions.md) | Production and DevOps pipelines that include additional validation, testing, and other actions be run as part of an automated deployment. Deployments are managed by the pipeline. |
While specific Functions deployments use the best technology based on their context, most deployment methods are based on [zip deployment](#zip-deploy).
@@ -29,9 +29,9 @@ While specific Functions deployments use the best technology based on their cont
Azure Functions supports cross-platform local development and hosting on Windows and Linux. Currently, three hosting plans are available:
-+ [Consumption](functions-scale.md#consumption-plan)
-+ [Premium](functions-scale.md#premium-plan)
-+ [Dedicated (App Service)](functions-scale.md#app-service-plan)
++ [Consumption](consumption-plan.md)
++ [Premium](functions-premium-plan.md)
++ [Dedicated (App Service)](dedicated-plan.md)

Each plan has different behaviors. Not all deployment technologies are available for each flavor of Azure Functions. The following chart shows which deployment technologies are supported for each combination of operating system and hosting plan:
@@ -92,7 +92,7 @@ Linux function apps running in the Consumption plan don't have an SCM/Kudu site,
##### Dedicated and Premium plans
-Function apps running on Linux in the [Dedicated (App Service) plan](functions-scale.md#app-service-plan) and the [Premium plan](functions-scale.md#premium-plan) also have a limited SCM/Kudu site.
+Function apps running on Linux in the [Dedicated (App Service) plan](dedicated-plan.md) and the [Premium plan](functions-premium-plan.md) also have a limited SCM/Kudu site.
## Deployment technology details
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-develop-vs-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs-code.md
@@ -251,7 +251,7 @@ The following steps publish your project to a new function app created with adva
| ------ | ----- | ----------- |
| Select function app in Azure | Create New Function App in Azure | At the next prompt, type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. |
| Select an OS | Windows | The function app runs on Windows. |
- | Select a hosting plan | Consumption plan | A serverless [Consumption plan hosting](functions-scale.md#consumption-plan) is used. |
+ | Select a hosting plan | Consumption plan | A serverless [Consumption plan hosting](consumption-plan.md) is used. |
| Select a runtime for your new app | Your project language | The runtime must match the project that you're publishing. |
| Select a resource group for new resources | Create New Resource Group | At the next prompt, type a resource group name, like `myResourceGroup`, and then select enter. You can also select an existing resource group. |
| Select a storage account | Create new storage account | At the next prompt, type a globally unique name for the new storage account used by your function app and then select Enter. Storage account names must be between 3 and 24 characters long and can contain only numbers and lowercase letters. You can also select an existing account. |
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-class-library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-dotnet-class-library.md
@@ -134,7 +134,7 @@ public static class BindingExpressionsExample
The build process creates a *function.json* file in a function folder in the build folder. As noted earlier, this file is not meant to be edited directly. You can't change binding configuration or disable the function by editing this file.
-The purpose of this file is to provide information to the scale controller to use for [scaling decisions on the Consumption plan](functions-scale.md#how-the-consumption-and-premium-plans-work). For this reason, the file only has trigger info, not input or output bindings.
+The purpose of this file is to provide information to the scale controller to use for [scaling decisions on the Consumption plan](event-driven-scaling.md). For this reason, the file only has trigger info, not input or output bindings.
The generated *function.json* file includes a `configurationSource` property that tells the runtime to use .NET attributes for bindings, rather than *function.json* configuration. Here's an example:
@@ -204,7 +204,7 @@ If you install the Core Tools by using npm, that doesn't affect the Core Tools v
## ReadyToRun
-You can compile your function app as [ReadyToRun binaries](/dotnet/core/whats-new/dotnet-core-3-0#readytorun-images). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the impact of [cold-start](functions-scale.md#cold-start) when running in a [Consumption plan](functions-scale.md#consumption-plan).
+You can compile your function app as [ReadyToRun binaries](/dotnet/core/whats-new/dotnet-core-3-0#readytorun-images). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the impact of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
ReadyToRun is available in .NET 3.0 and requires [version 3.0 of the Azure Functions runtime](functions-versions.md).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
@@ -30,7 +30,7 @@ This article describes how to configure and manage your function apps.
You can navigate to everything you need to manage your function app from the overview page, in particular the **[Application settings](#settings)** and **[Platform features](#platform-features)**.
-## <a name="settings"></a>Application settings
+## <a name="settings"></a>Work with application settings
The **Application settings** tab maintains settings that are used by your function app. These settings are stored encrypted, and you must select **Show values** to see the values in the portal. You can also access application settings by using the Azure CLI.
@@ -64,6 +64,56 @@ az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
When you develop a function app locally, you must maintain local copies of these values in the local.settings.json project file. To learn more, see [Local settings file](functions-run-local.md#local-settings-file).
+## Hosting plan type
+
+When you create a function app, you also create an App Service hosting plan in which the app runs. A plan can have one or more function apps. The functionality, scaling, and pricing of your functions depend on the type of plan. To learn more, see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
+
+You can determine the type of plan being used by your function app from the Azure portal, or by using the Azure CLI or Azure PowerShell APIs.
+
+The following values indicate the plan type:
+
+| Plan type | Portal | Azure CLI/PowerShell |
+| --- | --- | --- |
+| [Consumption](consumption-plan.md) | **Consumption** | `Dynamic` |
+| [Premium](functions-premium-plan.md) | **ElasticPremium** | `ElasticPremium` |
+| [Dedicated (App Service)](dedicated-plan.md) | Various | Various |
+
+# [Portal](#tab/portal)
+
+To determine the type of plan used by your function app, see **App Service plan** in the **Overview** tab for the function app in the [Azure portal](https://portal.azure.com). To see the pricing tier, select the name of the **App Service Plan**, and then select **Properties** from the left pane.
+
+![View scaling plan in the portal](./media/functions-scale/function-app-overview-portal.png)
+
+# [Azure CLI](#tab/azurecli)
+
+Run the following Azure CLI command to get your hosting plan type:
+
+```azurecli-interactive
+functionApp=<FUNCTION_APP_NAME>
+resourceGroup=<RESOURCE_GROUP>
+appServicePlanId=$(az functionapp show --name $functionApp --resource-group $resourceGroup --query appServicePlanId --output tsv)
+az appservice plan list --query "[?id=='$appServicePlanId'].sku.tier" --output tsv
+
+```
+
+In the previous example, replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` with the resource group and function app names, respectively.
+
+# [Azure PowerShell](#tab/powershell)
+
+Run the following Azure PowerShell command to get your hosting plan type:
+
+```azurepowershell-interactive
+$FunctionApp = '<FUNCTION_APP_NAME>'
+$ResourceGroup = '<RESOURCE_GROUP>'
+
+$PlanID = (Get-AzFunctionApp -ResourceGroupName $ResourceGroup -Name $FunctionApp).AppServicePlan
+(Get-AzFunctionAppPlan -Name $PlanID -ResourceGroupName $ResourceGroup).SkuTier
+```
+In the previous example, replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` with the resource group and function app names, respectively.
+
+---
+
## Platform features

Function apps run in, and are maintained by, the Azure App Service platform. As such, your function apps have access to most of the features of Azure's core web hosting platform. The left pane is where you access the many features of the App Service platform that you can use in your function apps.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-hybrid-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-hybrid-powershell.md
@@ -69,7 +69,7 @@ The App Service Hybrid Connections feature is available only in Basic, Standard,
| Setting | Suggested value | Description |
| ------------ | ---------------- | ----------- |
- | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](../azure-functions/functions-scale.md#storage-account-requirements). |
+ | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](../azure-functions/storage-considerations.md#storage-account-requirements). |
|**Operating system**| Preferred operating system | An operating system is pre-selected for you based on your runtime stack selection, but you can change the setting if necessary. |
| **[Plan type](../azure-functions/functions-scale.md)** | **App service plan** | Choose **App service plan**. When you run in an App Service plan, you must manage the [scaling of your function app](../azure-functions/functions-scale.md). |
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-infrastructure-as-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
@@ -171,7 +171,7 @@ These properties are specified in the `appSettings` collection in the `siteConfi
## Deploy on Consumption plan
-The Consumption plan automatically allocates compute power when your code is running, scales out as necessary to handle load, and then scales in when code is not running. You don't have to pay for idle VMs, and you don't have to reserve capacity in advance. To learn more, see [Azure Functions scale and hosting](functions-scale.md#consumption-plan).
+The Consumption plan automatically allocates compute power when your code is running, scales out as necessary to handle load, and then scales in when code is not running. You don't have to pay for idle VMs, and you don't have to reserve capacity in advance. To learn more, see [Azure Functions scale and hosting](consumption-plan.md).
For a sample Azure Resource Manager template, see [Function app on Consumption plan].
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-monitoring.md
@@ -109,7 +109,7 @@ Because Functions also integrates with Azure Monitor, you can also use diagnosti
_This feature is in preview._
-The [Azure Functions scale controller](./functions-scale.md#runtime-scaling) monitors instances of the Azure Functions host on which your app runs. This controller makes decisions about when to add or remove instances based on current performance. You can have the scale controller emit logs to Application Insights to better understand the decisions the scale controller is making for your function app. You can also store the generated logs in Blob storage for analysis by another service.
+The [Azure Functions scale controller](./event-driven-scaling.md#runtime-scaling) monitors instances of the Azure Functions host on which your app runs. This controller makes decisions about when to add or remove instances based on current performance. You can have the scale controller emit logs to Application Insights to better understand the decisions the scale controller is making for your function app. You can also store the generated logs in Blob storage for analysis by another service.
To enable this feature, you add an application setting named `SCALE_CONTROLLER_LOGGING_ENABLED` to your function app settings. To learn how, see [Configure scale controller logs](configure-monitoring.md#configure-scale-controller-logs).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-networking-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-networking-options.md
@@ -16,9 +16,9 @@ The hosting models have different levels of network isolation available. Choosin
You can host function apps in a couple of ways:

* You can choose from plan options that run on a multitenant infrastructure, with various levels of virtual network connectivity and scaling options:
- * The [Consumption plan](functions-scale.md#consumption-plan) scales dynamically in response to load and offers minimal network isolation options.
- * The [Premium plan](functions-scale.md#premium-plan) also scales dynamically and offers more comprehensive network isolation.
- * The Azure [App Service plan](functions-scale.md#app-service-plan) operates at a fixed scale and offers network isolation similar to the Premium plan.
+ * The [Consumption plan](consumption-plan.md) scales dynamically in response to load and offers minimal network isolation options.
+ * The [Premium plan](functions-premium-plan.md) also scales dynamically and offers more comprehensive network isolation.
+ * The Azure [App Service plan](dedicated-plan.md) operates at a fixed scale and offers network isolation similar to the Premium plan.
* You can run functions in an [App Service Environment](../app-service/environment/intro.md). This method deploys your function into your virtual network and offers full network control and isolation.

## Matrix of networking features
@@ -29,7 +29,7 @@ You can host function apps in a couple of ways:
You can use access restrictions to define a priority-ordered list of IP addresses that are allowed or denied access to your app. The list can include IPv4 and IPv6 addresses, or specific virtual network subnets using [service endpoints](#use-service-endpoints). When there are one or more entries, an implicit "deny all" exists at the end of the list. IP restrictions work with all function-hosting options.
-Access restrictions are available in the [Premium](functions-premium-plan.md), [Consumption](functions-scale.md#consumption-plan), and [App Service](functions-scale.md#app-service-plan).
+Access restrictions are available in the [Premium](functions-premium-plan.md), [Consumption](consumption-plan.md), and [App Service](dedicated-plan.md) plans.
> [!NOTE]
> With network restrictions in place, you can deploy only from within your virtual network, or when you've put the IP address of the machine you're using to access the Azure portal on the Safe Recipients list. However, you can still manage the function using the portal.
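The priority-ordered evaluation with an implicit trailing "deny all" can be sketched in Python. This is an illustration of the semantics described above, not the App Service implementation; the rule list, addresses, and function name are invented for the example:

```python
import ipaddress

def evaluate_access(client_ip, rules):
    """Evaluate a priority-ordered access-restriction list.

    rules: list of (cidr, action) tuples, highest priority first.
    When the list is non-empty, an implicit "deny all" applies after
    the last entry, mirroring the behavior described above.
    """
    ip = ipaddress.ip_address(client_ip)
    for cidr, action in rules:
        if ip in ipaddress.ip_network(cidr):
            return action
    # No rules at all means unrestricted access; otherwise implicit deny.
    return "Allow" if not rules else "Deny"

rules = [
    ("10.0.1.0/24", "Allow"),     # hypothetical VNet subnet (service endpoint)
    ("203.0.113.5/32", "Allow"),  # a single trusted IPv4 address
]

print(evaluate_access("10.0.1.17", rules))     # matches the first rule: Allow
print(evaluate_access("198.51.100.9", rules))  # falls through to implicit deny
```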
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-premium-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-premium-plan.md
@@ -13,53 +13,70 @@ ms.custom:
# Azure Functions Premium plan
-The Azure Functions Premium plan (sometimes referred to as Elastic Premium plan) is a hosting option for function apps. The Premium plan provides features like VNet connectivity, no cold start, and premium hardware. Multiple function apps can be deployed to the same Premium plan, and the plan allows you to configure compute instance size, base plan size, and maximum plan size. For a comparison of the Premium plan and other plan and hosting types, see [function scale and hosting options](functions-scale.md).
+The Azure Functions Premium plan (sometimes referred to as Elastic Premium plan) is a hosting option for function apps. For other hosting plan options, see the [hosting plan article](functions-scale.md).
-## Create a Premium plan
+Premium plan hosting provides the following benefits to your functions:
-[!INCLUDE [functions-premium-create](../../includes/functions-premium-create.md)]
+* Avoid cold starts with perpetually warm instances.
+* Virtual network connectivity.
+* Unlimited execution duration, with 60 minutes guaranteed.
+* Premium instance sizes: one core, two core, and four core instances.
+* More predictable pricing, compared with the Consumption plan.
+* High-density app allocation for plans with multiple function apps.
-You can also create a Premium plan using [az functionapp plan create](/cli/azure/functionapp/plan#az-functionapp-plan-create) in the Azure CLI. The following example creates an _Elastic Premium 1_ tier plan:
+When you're using the Premium plan, instances of the Azure Functions host are added and removed based on the number of incoming events, just like the [Consumption plan](consumption-plan.md). Multiple function apps can be deployed to the same Premium plan, and the plan allows you to configure compute instance size, base plan size, and maximum plan size.
-```azurecli-interactive
-az functionapp plan create --resource-group <RESOURCE_GROUP> --name <PLAN_NAME> \
-```
+## Billing
-In this example, replace `<RESOURCE_GROUP>` with your resource group and `<PLAN_NAME>` with a name for your plan that is unique in the resource group. Specify a [supported `<REGION>`](https://azure.microsoft.com/global-infrastructure/services/?products=functions). To create a Premium plan that supports Linux, include the `--is-linux` option.
+Billing for the Premium plan is based on the number of core seconds and memory allocated across instances. This billing differs from the Consumption plan, which is billed per execution and memory consumed. There is no execution charge with the Premium plan. At least one instance must be allocated at all times per plan. This billing results in a minimum monthly cost per active plan, regardless of whether the function is active or idle. Keep in mind that all function apps in a Premium plan share allocated instances. To learn more, see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
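The core-seconds and memory-based billing described above can be sketched as a small calculation. The rates below are placeholders (real prices are on the pricing page), and the function name is invented for illustration:

```python
def premium_plan_charge(core_seconds, gb_seconds,
                        core_second_rate, gb_second_rate):
    """Premium plan billing sketch: cost is driven by core seconds and
    memory (GB-seconds) allocated across instances -- there is no
    per-execution charge. Rates are placeholders, not real prices."""
    return core_seconds * core_second_rate + gb_seconds * gb_second_rate

# One EP1-sized instance (1 core, 3.5 GB) allocated for a full hour is
# billed whether or not any function executed during that hour:
hour = 3600
cost = premium_plan_charge(core_seconds=1 * hour,
                           gb_seconds=3.5 * hour,
                           core_second_rate=0.000011,  # placeholder rate
                           gb_second_rate=0.0000013)   # placeholder rate
print(round(cost, 4))
```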
-With the plan created, you can use [az functionapp create](/cli/azure/functionapp#az-functionapp-create) to create your function app. In the portal, both the plan and the app are created at the same time. For an example of a complete Azure CLI script, see [Create a function app in a Premium plan](scripts/functions-cli-create-premium-plan.md).
+## Create a Premium plan
-## Features
+When you create a function app in the Azure portal, the Consumption plan is the default. To create a function app that runs in a Premium plan, you must explicitly create an App Service plan using one of the _Elastic Premium_ SKUs. The function app you create is then hosted in this plan. The Azure portal makes it easy to create both the Premium plan and the function app at the same time. You can run more than one function app in the same Premium plan, but they must both run on the same operating system (Windows or Linux).
-The following features are available to function apps deployed to a Premium plan.
+The following articles show you how to create a function app with a Premium plan, either programmatically or in the Azure portal:
-### Always ready instances
++ [Azure portal](create-premium-plan-function-app-portal.md)
++ [Azure CLI](scripts/functions-cli-create-premium-plan.md)
++ [Azure Resource Manager template](functions-infrastructure-as-code.md#deploy-on-premium-plan)
+
+## Eliminate cold starts
+
+When events or executions don't occur in the Consumption plan, your app may scale to zero instances. When new events come in, a new instance with your app running on it must be specialized. Specializing new instances may take some time depending on the app. This additional latency on the first call is often called app _cold start_.
-If no events and executions occur today in the Consumption plan, your app may scale in to zero instances. When new events come in, a new instance needs to be specialized with your app running on it. Specializing new instances may take some time depending on the app. This additional latency on the first call is often called app cold start.
+Premium plan provides two features that work together to effectively eliminate cold starts in your functions: _always ready instances_ and _pre-warmed instances_.
-In the Premium plan, you can have your app always ready on a specified number of instances. The maximum number of always ready instances is 20. When events begin to trigger the app, they are routed to the always ready instances first. As the function becomes active, additional instances will be warmed as a buffer. This buffer prevents cold start for new instances required during scale. These buffered instances are called [pre-warmed instances](#pre-warmed-instances). With the combination of the always ready instances and a pre-warmed buffer, your app can effectively eliminate cold start.
+### Always ready instances
+
+In the Premium plan, you can have your app always ready on a specified number of instances. The maximum number of always ready instances is 20. When events begin to trigger the app, they are first routed to the always ready instances. As the function becomes active, additional instances will be warmed as a buffer. This buffer prevents cold start for new instances required during scale. These buffered instances are called [pre-warmed instances](#pre-warmed-instances). With the combination of the always ready instances and a pre-warmed buffer, your app can effectively eliminate cold start.
> [!NOTE]
-> Every premium plan will have at least one active (billed) instance at all times.
+> Every premium plan has at least one active (billed) instance at all times.
+
+# [Portal](#tab/portal)
You can configure the number of always ready instances in the Azure portal by selecting your **Function App**, going to the **Platform Features** tab, and selecting the **Scale Out** options. In the function app edit window, always ready instances are specific to that app.

![Elastic Scale Settings](./media/functions-premium-plan/scale-out.png)
+# [Azure CLI](#tab/azurecli)
+
You can also configure always ready instances for an app with the Azure CLI.

```azurecli-interactive
az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.minimumElasticInstanceCount=<desired_always_ready_count> --resource-type Microsoft.Web/sites
```
+---
+
+### Pre-warmed instances
-#### Pre-warmed instances
+Pre-warmed instances are instances warmed as a buffer during scale and activation events. Pre-warmed instances continue to buffer until the maximum scale-out limit is reached. The default pre-warmed instance count is 1, and for most scenarios this value should remain as 1.
-Pre-warmed instances are the number of instances warmed as a buffer during scale and activation events. Pre-warmed instances continue to buffer until the maximum scale-out limit is reached. The default pre-warmed instance count is 1, and for most scenarios should remain as 1. If an app has a long warm up (like a custom container image), you may wish to increase this buffer. A pre-warmed instance will become active only after all active instances have been sufficiently utilized.
+When an app has a long warm-up (like a custom container image), you may need to increase this buffer. A pre-warmed instance becomes active only after all active instances have been sufficiently used.
-Consider this example of how always ready instances and pre-warmed instances work together. A premium function app has five always ready instances configured, and the default of one pre-warmed instance. When the app is idle and no events are triggering, the app will be provisioned and running on five instances. At this time, you will not be billed for a pre-warmed instance as the always ready instances aren't used, and no pre-warmed instance is even allocated.
+Consider this example of how always-ready instances and pre-warmed instances work together. A premium function app has five always ready instances configured, and the default of one pre-warmed instance. When the app is idle and no events are triggering, the app is provisioned and running with five instances. At this time, you aren't billed for a pre-warmed instance as the always-ready instances aren't used, and no pre-warmed instance is allocated.
-As soon as the first trigger comes in, the five always ready instances become active, and a pre-warmed instance is allocated. The app is now running with six provisioned instances: the five now-active always ready instances, and the sixth pre-warmed and inactive buffer. If the rate of executions continues to increase, the five active instances will eventually be utilized. When the platform decides to scale beyond five instances, it will scale into the pre-warmed instance. When that happens, there will now be six active instances, and a seventh instance will instantly be provisioned and fill the pre-warmed buffer. This sequence of scaling and pre-warming will continue until the maximum instance count for the app is reached. No instances will be pre-warmed or activated beyond the maximum.
+As soon as the first trigger comes in, the five always-ready instances become active, and a pre-warmed instance is allocated. The app is now running with six provisioned instances: the five now-active always-ready instances, and the sixth pre-warmed and inactive buffer. If the rate of executions continues to increase, the five active instances are eventually used. When the platform decides to scale beyond five instances, it scales into the pre-warmed instance. When that happens, there are now six active instances, and a seventh instance is instantly provisioned to fill the pre-warmed buffer. This sequence of scaling and pre-warming continues until the maximum instance count for the app is reached. No instances are pre-warmed or activated beyond the maximum.
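The walkthrough above can be sketched as a simple instance-count model. This is an illustrative simulation of the described behavior, not the real scale controller; the function and parameter names are invented:

```python
def provisioned_instances(active, always_ready=5, prewarmed=1, max_burst=20):
    """Sketch of the scale-out walkthrough above: the platform keeps
    `always_ready` instances provisioned, and once the app is active it
    adds a `prewarmed` buffer on top of the active count, never
    exceeding the plan's maximum burst limit."""
    if active == 0:
        return always_ready  # idle: always-ready only, no pre-warmed buffer
    provisioned = max(active, always_ready) + prewarmed
    return min(provisioned, max_burst)

print(provisioned_instances(0))   # idle: 5 provisioned, no pre-warmed billed
print(provisioned_instances(1))   # first trigger: 5 active + 1 pre-warmed = 6
print(provisioned_instances(6))   # scaled into the buffer: 6 active + 1 = 7
```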
You can modify the number of pre-warmed instances for an app using the Azure CLI.
@@ -67,33 +84,33 @@ You can modify the number of pre-warmed instances for an app using the Azure CLI
az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.preWarmedInstanceCount=<desired_prewarmed_count> --resource-type Microsoft.Web/sites
```
-#### Maximum instances for an app
+### Maximum function app instances
-In addition to the [plan maximum instance count](#plan-and-sku-settings), you can configure a per-app maximum. The app maximum can be configured using the [app scale limit](./functions-scale.md#limit-scale-out).
+In addition to the [plan maximum instance count](#plan-and-sku-settings), you can configure a per-app maximum. The app maximum can be configured using the [app scale limit](./event-driven-scaling.md#limit-scale-out).
-### Private network connectivity
+## Private network connectivity
-Azure Functions deployed to a Premium plan takes advantage of [new VNet integration for web apps](../app-service/web-sites-integrate-with-vnet.md). When configured, your app can communicate with resources within your VNet or secured via service endpoints. IP restrictions are also available on the app to restrict incoming traffic.
+Function apps deployed to a Premium plan can take advantage of [VNet integration for web apps](../app-service/web-sites-integrate-with-vnet.md). When configured, your app can communicate with resources within your VNet or secured via service endpoints. IP restrictions are also available on the app to restrict incoming traffic.
When assigning a subnet to your function app in a Premium plan, you need a subnet with enough IP addresses for each potential instance. We require an IP block with at least 100 available addresses. For more information, see [integrate your function app with a VNet](functions-create-vnet.md).
-### Rapid elastic scale
+## Rapid elastic scale
Additional compute instances are automatically added for your app using the same rapid scaling logic as the Consumption plan. Apps in the same App Service Plan scale independently from one another based on the needs of an individual app. However, Functions apps in the same App Service Plan share VM resources to help reduce costs, when possible. The number of apps associated with a VM depends on the footprint of each app and the size of the VM.
-To learn more about how scaling works, see [Function scale and hosting](./functions-scale.md#how-the-consumption-and-premium-plans-work).
+To learn more about how scaling works, see [Event-driven scaling in Azure Functions](event-driven-scaling.md).
-### Longer run duration
+## Longer run duration
-Azure Functions in a Consumption plan are limited to 10 minutes for a single execution. In the Premium plan, the run duration defaults to 30 minutes to prevent runaway executions. However, you can [modify the host.json configuration](./functions-host-json.md#functiontimeout) to make the duration unbounded for Premium plan apps (guaranteed 60 minutes).
+Azure Functions in a Consumption plan are limited to 10 minutes for a single execution. In the Premium plan, the run duration defaults to 30 minutes to prevent runaway executions. However, you can [modify the host.json configuration](./functions-host-json.md#functiontimeout) to make the duration unbounded for Premium plan apps. When set to an unbounded duration, your function app is guaranteed to run for at least 60 minutes.
## Plan and SKU settings

When you create the plan, there are two plan size settings: the minimum number of instances (or plan size) and the maximum burst limit.
-If your app requires instances beyond the always ready instances, it can continue to scale out until the number of instances hits the maximum burst limit. You are billed for instances beyond your plan size only while they are running and allocated to you, on a per-second basis. We will make a best effort at scaling your app out to its defined maximum limit.
+If your app requires instances beyond the always-ready instances, it can continue to scale out until the number of instances hits the maximum burst limit. You're billed for instances beyond your plan size only while they are running and allocated to you, on a per-second basis. The platform makes its best effort at scaling your app out to the defined maximum limit.
You can configure the plan size and maximums in the Azure portal by selecting the **Scale Out** options in the plan or a function app deployed to that plan (under **Platform Features**).
@@ -103,12 +120,12 @@ You can also increase the maximum burst limit from the Azure CLI:
az functionapp plan update -g <resource_group> -n <premium_plan_name> --max-burst <desired_max_burst>
```
-The minimum for every plan will be at least one instance. The actual minimum number of instances will be autoconfigured for you based on the always ready instances requested by apps in the plan. For example, if app A requests five always ready instances, and app B requests two always ready instances in the same plan, the minimum plan size will be calculated as five. App A will be running on all 5, and app B will only be running on 2.
+The minimum for every plan is at least one instance. The actual minimum number of instances is autoconfigured for you based on the always ready instances requested by apps in the plan. For example, if app A requests five always ready instances, and app B requests two always ready instances in the same plan, the minimum plan size is calculated as five. App A runs on all five instances, and app B runs on only two.
> [!IMPORTANT]
> You are charged for each instance allocated in the minimum instance count, regardless of whether functions are executing.
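The autocalculated minimum described above amounts to taking the largest always-ready request among the plan's apps, with a floor of one instance. A minimal sketch (the function name is illustrative):

```python
def plan_minimum(always_ready_requests):
    """Autocalculated plan minimum: the plan must cover the largest
    always-ready request among its apps, and every plan has at least
    one instance."""
    return max([1, *always_ready_requests])

print(plan_minimum([5, 2]))  # app A wants 5, app B wants 2 -> minimum is 5
print(plan_minimum([]))      # no always-ready requests -> still 1 instance
```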
-In most circumstances this autocalculated minimum should be sufficient. However, scaling beyond the minimum occurs at a best effort. It is possible, though unlikely, that at a specific time scale-out could be delayed if additional instances are unavailable. By setting a minimum higher than the autocalculated minimum, you reserve instances in advance of scale-out.
+In most circumstances, this autocalculated minimum is sufficient. However, scaling beyond the minimum occurs at a best effort. It's possible, though unlikely, that at a specific time scale-out could be delayed if additional instances are unavailable. By setting a minimum higher than the autocalculated minimum, you reserve instances in advance of scale-out.
Increasing the calculated minimum for a plan can be done using the Azure CLI.
@@ -118,7 +135,7 @@ az functionapp plan update -g <resource_group> -n <premium_plan_name> --min-inst
### Available instance SKUs
-When creating or scaling your plan, you can choose between three instance sizes. You will be billed for the total number of cores and memory provisioned, per second that each instance is allocated to you. Your app can automatically scale out to multiple instances as needed.
+When creating or scaling your plan, you can choose between three instance sizes. You're billed for the total number of cores and memory provisioned, for each second that an instance is allocated to you. Your app can automatically scale out to multiple instances as needed.
|SKU|Cores|Memory|Storage|
|--|--|--|--|
@@ -126,16 +143,17 @@ When creating or scaling your plan, you can choose between three instance sizes.
|EP2|2|7GB|250GB|
|EP3|4|14GB|250GB|
-### Memory utilization considerations
-Running on a machine with more memory does not always mean that your function app will use all available memory.
+### Memory usage considerations
+
+Running on a machine with more memory doesn't always mean that your function app uses all available memory.
For example, a JavaScript function app is constrained by the default memory limit in Node.js. To increase this fixed memory limit, add the app setting `languageWorkers:node:arguments` with a value of `--max-old-space-size=<max memory in MB>`.

## Region Max Scale Out
-Below are the currently supported maximum scale-out values for a single plan in each region and OS configuration. To request an increase, please open a support ticket.
+Below are the currently supported maximum scale-out values for a single plan in each region and OS configuration. To request an increase, you can open a support ticket.
-See the complete regional availability of Functions here: [Azure.com](https://azure.microsoft.com/global-infrastructure/services/?products=functions)
+See the complete regional availability of Functions on the [Azure web site](https://azure.microsoft.com/global-infrastructure/services/?products=functions).
|Region| Windows | Linux |
|--| -- | -- |
@@ -180,4 +198,4 @@ See the complete regional availability of Functions here: [Azure.com](https://az
## Next steps

> [!div class="nextstepaction"]
-> [Understand Azure Functions scale and hosting options](functions-scale.md)
+> [Understand Azure Functions hosting options](functions-scale.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-node.md
@@ -488,7 +488,7 @@ When you work with HTTP triggers, you can access the HTTP request and response o
## Scaling and concurrency
-By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Node.js as needed. Functions uses built-in (not user configurable) thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. For more information, see [How the Consumption and Premium plans work](functions-scale.md#how-the-consumption-and-premium-plans-work).
+By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Node.js as needed. Functions uses built-in (not user configurable) thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. For more information, see [How the Consumption and Premium plans work](event-driven-scaling.md).
This scaling behavior is sufficient for many Node.js applications. For CPU-bound applications, you can improve performance further by using multiple language worker processes.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-powershell.md
@@ -649,11 +649,11 @@ When you work with PowerShell functions, be aware of the considerations in the f
### Cold Start
-When developing Azure Functions in the [serverless hosting model](functions-scale.md#consumption-plan), cold starts are a reality. *Cold start* refers to period of time it takes for your function app to start running to process a request. Cold start happens more frequently in the Consumption plan because your function app gets shut down during periods of inactivity.
+When developing Azure Functions in the [serverless hosting model](consumption-plan.md), cold starts are a reality. *Cold start* refers to the period of time it takes for your function app to start running to process a request. Cold start happens more frequently in the Consumption plan because your function app gets shut down during periods of inactivity.
### Bundle modules instead of using `Install-Module`
-Your script is run on every invocation. Avoid using `Install-Module` in your script. Instead use `Save-Module` before publishing so that your function doesn't have to waste time downloading the module. If cold starts are impacting your functions, consider deploying your function app to an [App Service plan](functions-scale.md#app-service-plan) set to *always on* or to a [Premium plan](functions-scale.md#premium-plan).
+Your script is run on every invocation. Avoid using `Install-Module` in your script. Instead use `Save-Module` before publishing so that your function doesn't have to waste time downloading the module. If cold starts are impacting your functions, consider deploying your function app to an [App Service plan](dedicated-plan.md) set to *always on* or to a [Premium plan](functions-premium-plan.md).
## Next steps
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
@@ -64,7 +64,7 @@ Function apps can be authored and published using a variety of tools, including
The Functions editor built into the Azure portal lets you update your code and your *function.json* file directly inline. This is recommended only for small changes or proofs of concept - best practice is to use a local development tool like VS Code.

## Parallel execution
-When multiple triggering events occur faster than a single-threaded function runtime can process them, the runtime may invoke the function multiple times in parallel. If a function app is using the [Consumption hosting plan](functions-scale.md#how-the-consumption-and-premium-plans-work), the function app could scale out automatically. Each instance of the function app, whether the app runs on the Consumption hosting plan or a regular [App Service hosting plan](../app-service/overview-hosting-plans.md), might process concurrent function invocations in parallel using multiple threads. The maximum number of concurrent function invocations in each function app instance varies based on the type of trigger being used as well as the resources used by other functions within the function app.
+When multiple triggering events occur faster than a single-threaded function runtime can process them, the runtime may invoke the function multiple times in parallel. If a function app is using the [Consumption hosting plan](event-driven-scaling.md), the function app could scale out automatically. Each instance of the function app, whether the app runs on the Consumption hosting plan or a regular [App Service hosting plan](../app-service/overview-hosting-plans.md), might process concurrent function invocations in parallel using multiple threads. The maximum number of concurrent function invocations in each function app instance varies based on the type of trigger being used as well as the resources used by other functions within the function app.
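A rough Python illustration of a single instance processing invocations concurrently on multiple threads. The handler and worker count are invented for the example; real per-instance concurrency limits vary by trigger type:

```python
from concurrent.futures import ThreadPoolExecutor

def handler(event):
    """Stand-in for a single-threaded function invocation."""
    return event * 2

# When events arrive faster than one thread can drain them, an instance
# may run several invocations in parallel on multiple threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handler, range(8)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```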
## Functions runtime versioning
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-scale.md
@@ -8,251 +8,98 @@ ms.date: 08/17/2020
ms.custom: H1Hack27Feb2017
---
-# Azure Functions scale and hosting
+# Azure Functions hosting options
-When you create a function app in Azure, you must choose a hosting plan for your app. There are three basic hosting plans available for Azure Functions: [Consumption plan](#consumption-plan), [Premium plan](#premium-plan), and [Dedicated (App Service) plan](#app-service-plan). All hosting plans are generally available (GA) on both Linux and Windows virtual machines.
+When you create a function app in Azure, you must choose a hosting plan for your app. There are three basic hosting plans available for Azure Functions: [Consumption plan](consumption-plan.md), [Premium plan](functions-premium-plan.md), and [Dedicated (App Service) plan](dedicated-plan.md). All hosting plans are generally available (GA) on both Linux and Windows virtual machines.
The hosting plan you choose dictates the following behaviors:

* How your function app is scaled.
* The resources available to each function app instance.
-* Support for advanced features, such as Azure Virtual Network connectivity.
+* Support for advanced functionality, such as Azure Virtual Network connectivity.
-Both Consumption and Premium plans automatically add compute power when your code is running. Your app is scaled out when needed to handle load, and scaled in when code stops running. For the Consumption plan, you also don't have to pay for idle VMs or reserve capacity in advance.
+This article provides a detailed comparison between the various hosting plans, along with Kubernetes-based hosting.
-Premium plan provides additional features, such as premium compute instances, the ability to keep instances warm indefinitely, and VNet connectivity.
+## Overview of plans
-App Service plan allows you to take advantage of dedicated infrastructure, which you manage. Your function app doesn't scale based on events, which means it never scales in to zero. (Requires that [Always on](#always-on) is enabled.)
+The following is a summary of the benefits of the three main hosting plans for Functions:
-For a detailed comparison between the various hosting plans (including Kubernetes-based hosting), see the [Hosting plans comparison section](#hosting-plans-comparison).
-
-## Consumption plan
-
-When you're using the Consumption plan, instances of the Azure Functions host are dynamically added and removed based on the number of incoming events. This serverless plan scales automatically, and you're charged for compute resources only when your functions are running. On a Consumption plan, a function execution times out after a configurable period of time.
-
-Billing is based on number of executions, execution time, and memory used. Usage is aggregated across all functions within a function app. For more information, see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
-
-The Consumption plan is the default hosting plan and offers the following benefits:
-
-* Pay only when your functions are running
-* Scale out automatically, even during periods of high load
-
-Function apps in the same region can be assigned to the same Consumption plan. There's no downside or impact to having multiple apps running in the same Consumption plan. Assigning multiple apps to the same Consumption plan has no impact on resilience, scalability, or reliability of each app.
-
-To learn more about how to estimate costs when running in a Consumption plan, see [Understanding Consumption plan costs](functions-consumption-costs.md).
-
-## <a name="premium-plan"></a>Premium plan
-
-When you're using the Premium plan, instances of the Azure Functions host are added and removed based on the number of incoming events just like the Consumption plan. Premium plan supports the following features:
-
-* Perpetually warm instances to avoid any cold start
-* VNet connectivity
-* Unlimited execution duration (60 minutes guaranteed)
-* Premium instance sizes (one core, two core, and four core instances)
-* More predictable pricing
-* High-density app allocation for plans with multiple function apps
-
-To learn how you can create a function app in a Premium plan, see [Azure Functions Premium plan](functions-premium-plan.md).
-
-Instead of billing per execution and memory consumed, billing for the Premium plan is based on the number of core seconds and memory allocated across instances. There is no execution charge with the Premium plan. At least one instance must be allocated at all times per plan. This results in a minimum monthly cost per active plan, regardless if the function is active or idle. Keep in mind that all function apps in a Premium plan share allocated instances.
-
-Consider the Azure Functions Premium plan in the following situations:
-
-* Your function apps run continuously, or nearly continuously.
-* You have a high number of small executions and have a high execution bill but low GB second bill in the Consumption plan.
-* You need more CPU or memory options than what is provided by the Consumption plan.
-* Your code needs to run longer than the [maximum execution time allowed](#timeout) on the Consumption plan.
-* You require features that are only available on a Premium plan, such as virtual network connectivity.
-
-## <a name="app-service-plan"></a>Dedicated (App Service) plan
-
-Your function apps can also run on the same dedicated VMs as other App Service apps (Basic, Standard, Premium, and Isolated SKUs).
-
-Consider an App Service plan in the following situations:
-
-* You have existing, underutilized VMs that are already running other App Service instances.
-* You want to provide a custom image on which to run your functions.
-
-You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps. For details about how the App Service plan works, see the [Azure App Service plans in-depth overview](../app-service/overview-hosting-plans.md).
-
-Using an App Service plan, you can manually scale out by adding more VM instances. You can also enable autoscale, though autoscale will be slower than the elastic scale of the Premium plan. For more information, see [Scale instance count manually or automatically](../azure-monitor/platform/autoscale-get-started.md?toc=%2fazure%2fapp-service%2ftoc.json). You can also scale up by choosing a different App Service plan. For more information, see [Scale up an app in Azure](../app-service/manage-scale-up.md).
-
-When running JavaScript functions on an App Service plan, you should choose a plan that has fewer vCPUs. For more information, see [Choose single-core App Service plans](functions-reference-node.md#choose-single-vcpu-app-service-plans).
-<!-- Note: the portal links to this section via fwlink https://go.microsoft.com/fwlink/?linkid=830855 -->
-
-Running in an [App Service Environment](../app-service/environment/intro.md) (ASE) lets you fully isolate your functions and take advantage of higher number of instances than an App Service Plan.
-
-### <a name="always-on"></a> Always On
-
-If you run on an App Service plan, you should enable the **Always on** setting so that your function app runs correctly. On an App Service plan, the functions runtime goes idle after a few minutes of inactivity, so only HTTP triggers will "wake up" your functions. Always on is available only on an App Service plan. On a Consumption plan, the platform activates function apps automatically.
-
-[!INCLUDE [Timeout Duration section](../../includes/functions-timeout-duration.md)]
--
-Even with Always On enabled, the execution timeout for individual functions is controlled by the `functionTimeout` setting in the [host.json](functions-host-json.md#functiontimeout) project file.
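As an editorial illustration of the `functionTimeout` setting mentioned above, a host.json file overriding the timeout might look like the following sketch (the 10-minute value is an arbitrary example, not a recommended default; check the host.json reference for the limits that apply to your plan):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```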
-
-## Determine the hosting plan of an existing application
-
-To determine the hosting plan used by your function app, see **App Service plan** in the **Overview** tab for the function app in the [Azure portal](https://portal.azure.com). To see the pricing tier, select the name of the **App Service Plan**, and then select **Properties** from the left pane.
-
-![View scaling plan in the portal](./media/functions-scale/function-app-overview-portal.png)
-
-You can also use the Azure CLI to determine the plan, as follows:
-
-```azurecli-interactive
-appServicePlanId=$(az functionapp show --name <my_function_app_name> --resource-group <my_resource_group> --query appServicePlanId --output tsv)
-az appservice plan list --query "[?id=='$appServicePlanId'].sku.tier" --output tsv
-```
-
-When the output from this command is `dynamic`, your function app is in the Consumption plan. When the output from this command is `ElasticPremium`, your function app is in the Premium plan. All other values indicate different tiers of an App Service plan.
-
-## Storage account requirements
-
-On any plan, a function app requires a general Azure Storage account, which supports Azure Blob, Queue, Files, and Table storage. This is because Azure Functions relies on Azure Storage for operations such as managing triggers and logging function executions, but some storage accounts don't support queues and tables. These accounts, which include blob-only storage accounts (including premium storage) and general-purpose storage accounts with zone-redundant storage replication, are filtered-out from your existing **Storage Account** selections when you create a function app.
-
-The same storage account used by your function app can also be used by your triggers and bindings to store your application data. However, for storage-intensive operations, you should use a separate storage account.
-
-It's possible for multiple function apps to share the same storage account without any issues. (A good example of this is when you develop multiple apps in your local environment using the Azure Storage Emulator, which acts like one storage account.)
-
-<!-- JH: Does using a Premium Storage account improve perf? -->
-
-To learn more about storage account types, see [Introducing the Azure Storage services](../storage/common/storage-introduction.md#core-storage-services).
-
-### In Region Data Residency
-
-When necessary for all customer data to remain within a single region, the storage account associated with the function app must be one with [in region redundancy](../storage/common/storage-redundancy.md). An in-region redundant storage account would also need to be used with [Azure Durable Functions](./durable/durable-functions-perf-and-scale.md#storage-account-selection) for Durable Functions.
-
-Other platform-managed customer data will only be stored within the region when hosting in an Internal Load Balancer App Service Environment (or ILB ASE). Details can be found in [ASE zone redundancy](../app-service/environment/zone-redundancy.md#in-region-data-residency).
-
-## How the Consumption and Premium plans work
-
-In the Consumption and Premium plans, the Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. Each instance of the Functions host in the Consumption plan is limited to 1.5 GB of memory and one CPU. An instance of the host is the entire function app, meaning all functions within a function app share resource within an instance and scale at the same time. Function apps that share the same Consumption plan are scaled independently. In the Premium plan, your plan size will determine the available memory and CPU for all apps in that plan on that instance.
-
-Function code files are stored on Azure Files shares on the function's main storage account. When you delete the main storage account of the function app, the function code files are deleted and cannot be recovered.
-
-### Runtime scaling
-
-Azure Functions uses a component called the *scale controller* to monitor the rate of events and determine whether to scale out or scale in. The scale controller uses heuristics for each trigger type. For example, when you're using an Azure Queue storage trigger, it scales based on the queue length and the age of the oldest queue message.
-
-The unit of scale for Azure Functions is the function app. When the function app is scaled out, additional resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute demand is reduced, the scale controller removes function host instances. The number of instances is eventually *scaled in* to zero when no functions are running within a function app.
-
-![Scale controller monitoring events and creating instances](./media/functions-scale/central-listener.png)
-
-### Cold Start
-
-After your function app has been idle for a number of minutes, the platform may scale the number of instances on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This latency is referred to as a _cold start_. The number of dependencies that must be loaded by your function app can impact the cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must return a response. If cold starts are impacting your functions, consider running in a Premium plan or in a Dedicated plan with Always on enabled.
-
-### Understanding scaling behaviors
-
-Scaling can vary on a number of factors, and scale differently based on the trigger and language selected. There are a few intricacies of scaling behaviors to be aware of:
-
-* A single function app only scales out to a maximum of 200 instances. A single instance may process more than one message or request at a time though, so there isn't a set limit on number of concurrent executions. You can [specify a lower maximum](#limit-scale-out) to throttle scale as required.
-* For HTTP triggers, new instances are allocated, at most, once per second.
-* For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds. Scaling is faster when running in a [Premium plan](#premium-plan).
-* For Service Bus triggers, use _Manage_ rights on resources for the most efficient scaling. With _Listen_ rights, scaling isn't as accurate because the queue length can't be used to inform scaling decisions. To learn more about setting rights in Service Bus access policies, see [Shared Access Authorization Policy](../service-bus-messaging/service-bus-sas.md#shared-access-authorization-policies).
-* For Event Hub triggers, see the [scaling guidance](functions-bindings-event-hubs-trigger.md#scaling) in the reference article.
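As a rough illustration of the allocation rates above (an editorial sketch, not an official Azure formula), the documented per-second and per-30-second allocation limits imply a lower bound on how quickly an app can scale out:

```python
def min_scale_out_seconds(target_instances: int, start: int = 1,
                          seconds_per_instance: int = 1) -> int:
    """Lower bound on scale-out time when at most one new instance
    is allocated every `seconds_per_instance` seconds."""
    if target_instances <= start:
        return 0
    return (target_instances - start) * seconds_per_instance

# HTTP triggers: new instances at most once per second
print(min_scale_out_seconds(100))                            # 99
# Non-HTTP triggers: new instances at most once every 30 seconds
print(min_scale_out_seconds(100, seconds_per_instance=30))   # 2970
```

This is only a floor: actual scaling also depends on the trigger heuristics and load, and a single instance handles many concurrent executions, so far fewer instances are often needed.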
-
-### Limit scale out
-
-You may wish to restrict the number of instances an app scales out to. This is most common for cases where a downstream component like a database has limited throughput. By default, consumption plan functions will scale out to as many as 200 instances, and premium plan functions will scale out to as many as 100 instances. You can specify a lower maximum for a specific app by modifying the `functionAppScaleLimit` value. The `functionAppScaleLimit` can be set to 0 or null for unrestricted, or a valid value between 1 and the app maximum.
-
-```azurecli
-az resource update --resource-type Microsoft.Web/sites -g <resource_group> -n <function_app_name>/config/web --set properties.functionAppScaleLimit=<scale_limit>
-```
-
-### Best practices and patterns for scalable apps
-
-There are many aspects of a function app that will impact how well it will scale, including host configuration, runtime footprint, and resource efficiency. For more information, see the [scalability section of the performance considerations article](functions-best-practices.md#scalability-best-practices). You should also be aware of how connections behave as your function app scales. For more information, see [How to manage connections in Azure Functions](manage-connections.md).
-
-For more information on scaling in Python and Node.js, see [Azure Functions Python developer guide - Scaling and concurrency](functions-reference-python.md#scaling-and-performance) and [Azure Functions Node.js developer guide - Scaling and concurrency](functions-reference-node.md#scaling-and-concurrency).
-
-### Billing model
-
-Billing for the different plans is described in detail on the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). Usage is aggregated at the function app level and counts only the time that function code is executed. The following are units for billing:
-
-* **Resource consumption in gigabyte-seconds (GB-s)**. Computed as a combination of memory size and execution time for all functions within a function app.
-* **Executions**. Counted each time a function is executed in response to an event trigger.
-
-Useful queries and information on how to understand your consumption bill can be found [on the billing FAQ](https://github.com/Azure/Azure-Functions/wiki/Consumption-Plan-Cost-Billing-FAQ).
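To make the two billing units concrete, here is a hedged back-of-the-envelope estimate of a monthly Consumption-plan bill. The rates and free grants below are assumptions for this sketch only; always take current values from the Azure Functions pricing page:

```python
# Assumed example rates and monthly free grants -- NOT authoritative pricing.
GB_SECOND_RATE = 0.000016       # assumed $ per GB-s
PER_MILLION_EXECUTIONS = 0.20   # assumed $ per 1M executions
FREE_GB_SECONDS = 400_000       # assumed monthly free grant
FREE_EXECUTIONS = 1_000_000     # assumed monthly free grant

def estimate_monthly_cost(executions: int, avg_duration_s: float,
                          avg_memory_gb: float) -> float:
    """Estimate the monthly bill from the two billing units:
    resource consumption (GB-s) and execution count."""
    gb_seconds = executions * avg_duration_s * avg_memory_gb
    compute = max(0.0, gb_seconds - FREE_GB_SECONDS) * GB_SECOND_RATE
    exec_cost = max(0, executions - FREE_EXECUTIONS) / 1_000_000 * PER_MILLION_EXECUTIONS
    return compute + exec_cost

# 3M executions/month, 500 ms each, 0.5 GB memory -> 750,000 GB-s
print(round(estimate_monthly_cost(3_000_000, 0.5, 0.5), 2))
```

Note that real metering also applies minimum memory and duration granularity per execution, which this sketch omits.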
-
-[Azure Functions pricing page]: https://azure.microsoft.com/pricing/details/functions
-
-## Hosting plans comparison
+| | |
+| --- | --- |
+|**[Consumption plan](consumption-plan.md)**| Scale automatically and only pay for compute resources when your functions are running.<br/><br/>On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/><br/> ✔ Default hosting plan.<br/>✔ Pay only when your functions are running.<br/>✔ Scales automatically, even during periods of high load.|
+|**[Premium plan](functions-premium-plan.md)**|Automatically scales based on demand using pre-warmed workers which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. <br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>✔ You need more CPU or memory options than what is provided by the Consumption plan.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require features that aren't available on the Consumption plan, such as virtual network connectivity.|
+|**[Dedicated plan](dedicated-plan.md)** |Run your functions within an App Service plan at regular [App Service plan rates](https://azure.microsoft.com/pricing/details/app-service/windows/).<br/><br/>Best for long-running scenarios where [Durable Functions](durable/durable-functions-overview.md) can't be used. Consider an App Service plan in the following situations:<br/><br/>✔ You have existing, underutilized VMs that are already running other App Service instances.<br/>✔ You want to provide a custom image on which to run your functions. <br/>✔ Predictive scaling and costs are required.|
-The following comparison table shows all important aspects to help the decision of Azure Functions App hosting plan choice:
+The comparison tables in this article also include the following hosting options, which provide the highest amount of control and isolation in which to run your function apps.
-### Plan summary
| | |
| --- | --- |
-|**[Consumption plan](#consumption-plan)**| Scale automatically and only pay for compute resources when your functions are running. On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/> ✔ Default hosting plan.<br/>✔ Pay only when your functions are running.<br/>✔ scale-out automatically, even during periods of high load.|
-|**[Premium plan](#premium-plan)**|While automatically scaling based on demand, use pre-warmed workers to run applications with no delay after being idle, run on more powerful instances, and connect to VNETs. Consider the Azure Functions Premium plan in the following situations, in addition to all features of the App Service plan: <br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You have a high number of small executions and have a high execution bill but low GB second bill in the Consumption plan.<br/>✔ You need more CPU or memory options than what is provided by the Consumption plan.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require features that are only available on a Premium plan, such as virtual network connectivity.|
-|**[Dedicated plan](#app-service-plan)**<sup>1</sup>|Run your functions within an App Service plan at regular App Service plan rates. Good fit for long running operations, as well as when more predictive scaling and costs are required. Consider an App Service plan in the following situations:<br/>✔ You have existing, underutilized VMs that are already running other App Service instances.<br/>✔ You want to provide a custom image on which to run your functions.|
-|**[ASE](#app-service-plan)**<sup>1</sup>|App Service Environment (ASE) is an App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale. ASEs are appropriate for application workloads that require: <br/>✔ Very high scale.<br/>✔ Full compute isolation and secure network access.<br/>✔ High memory utilization.|
-| **[Kubernetes](functions-kubernetes-keda.md)** | Kubernetes provides a fully isolated and dedicated environment running on top of the Kubernetes platform. Kubernetes is appropriate for application workloads that require: <br/>✔ Custom hardware requirements.<br/>✔ Isolation and secure network access.<br/>✔ Ability to run in hybrid or multi-cloud environment.<br/>✔ Run alongside existing Kubernetes applications and services.|
+|**[ASE](dedicated-plan.md)** | App Service Environment (ASE) is an App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale.<br/><br/>ASEs are appropriate for application workloads that require: <br/><br/>✔ Very high scale.<br/>✔ Full compute isolation and secure network access.<br/>✔ High memory usage.|
+| **[Kubernetes](functions-kubernetes-keda.md)** | Kubernetes provides a fully isolated and dedicated environment running on top of the Kubernetes platform.<br/><br/> Kubernetes is appropriate for application workloads that require: <br/>✔ Custom hardware requirements.<br/>✔ Isolation and secure network access.<br/>✔ Ability to run in hybrid or multi-cloud environment.<br/>✔ Run alongside existing Kubernetes applications and services.|
-<sup>1</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
+The remaining tables in this article compare the plans on various features and behaviors. For a cost comparison between dynamic hosting plans (Consumption and Premium), see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). For pricing of the various Dedicated plan options, see the [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/).
+
+## Operating system/runtime
-### Operating system/runtime
+The following table shows supported operating system and language runtime support for the hosting plans.
| | Linux<sup>1</sup><br/>Code-only | Windows<sup>2</sup><br/>Code-only | Linux<sup>1,3</sup><br/>Docker container |
| --- | --- | --- | --- |
-| **[Consumption plan](#consumption-plan)** | .NET Core<br/>Node.js<br/>Java<br/>Python | .NET Core<br/>Node.js<br/>Java<br/>PowerShell Core | No support |
-| **[Premium plan](#premium-plan)** | .NET Core<br/>Node.js<br/>Java<br/>Python|.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core<br/>Python |
-| **[Dedicated plan](#app-service-plan)**<sup>4</sup> | .NET Core<br/>Node.js<br/>Java<br/>Python|.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core<br/>Python |
-| **[ASE](#app-service-plan)**<sup>4</sup> | .NET Core<br/>Node.js<br/>Java<br/>Python |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core<br/>Python |
+| **[Consumption plan](consumption-plan.md)** | .NET Core<br/>Node.js<br/>Java<br/>Python | .NET Core<br/>Node.js<br/>Java<br/>PowerShell Core | No support |
+| **[Premium plan](functions-premium-plan.md)** | .NET Core<br/>Node.js<br/>Java<br/>Python|.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core<br/>Python |
+| **[Dedicated plan](dedicated-plan.md)** | .NET Core<br/>Node.js<br/>Java<br/>Python|.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core<br/>Python |
+| **[ASE](dedicated-plan.md)** | .NET Core<br/>Node.js<br/>Java<br/>Python |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core<br/>Python |
| **[Kubernetes](functions-kubernetes-keda.md)** | n/a | n/a |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core<br/>Python |
-<sup>1</sup>Linux is the only supported operating system for the Python runtime stack.
-<sup>2</sup>Windows is the only supported operating system for the PowerShell runtime stack.
-<sup>3</sup>Linux is the only supported operating system for Docker containers.
-<sup>4</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
+<sup>1</sup> Linux is the only supported operating system for the Python runtime stack. <br/>
+<sup>2</sup> Windows is the only supported operating system for the PowerShell runtime stack.<br/>
+<sup>3</sup> Linux is the only supported operating system for Docker containers.<br/>
+
+[!INCLUDE [Timeout Duration section](../../includes/functions-timeout-duration.md)]
+
+## Scale
-### Scale
+The following table compares the scaling behaviors of the various hosting plans.
| | Scale out | Max # instances |
| --- | --- | --- |
-| **[Consumption plan](#consumption-plan)** | Event driven. Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. | 200 |
-| **[Premium plan](#premium-plan)** | Event driven. Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. |100|
-| **[Dedicated plan](#app-service-plan)**<sup>1</sup> | Manual/autoscale |10-20|
-| **[ASE](#app-service-plan)**<sup>1</sup> | Manual/autoscale |100 |
-| **[Kubernetes](functions-kubernetes-keda.md)** | Event-driven autoscale for Kubernetes clusters using [KEDA](https://keda.sh). | Varies&nbsp;by&nbsp;cluster.&nbsp;&nbsp;|
+| **[Consumption plan](consumption-plan.md)** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of incoming trigger events. | 200 |
+| **[Premium plan](functions-premium-plan.md)** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. |100|
+| **[Dedicated plan](dedicated-plan.md)**<sup>1</sup> | Manual/autoscale |10-20|
+| **[ASE](dedicated-plan.md)**<sup>1</sup> | Manual/autoscale |100 |
+| **[Kubernetes](functions-kubernetes-keda.md)** | Event-driven autoscale for Kubernetes clusters using [KEDA](https://keda.sh). | Varies&nbsp;by&nbsp;cluster&nbsp;&nbsp;|
<sup>1</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
-### Cold start behavior
+## Cold start behavior
| | |
| -- | -- |
-| **[Consumption&nbsp;plan](#consumption-plan)** | Apps may scale to zero if idle for a period of time, meaning some requests may have additional latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from pre-warmed placeholder functions that already have the function host and language processes running. |
-| **[Premium plan](#premium-plan)** | Perpetually warm instances to avoid any cold start. |
-| **[Dedicated plan](#app-service-plan)**<sup>1</sup> | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
-| **[ASE](#app-service-plan)**<sup>1</sup> | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
-| **[Kubernetes](functions-kubernetes-keda.md)** | Depends on KEDA configuration. Apps can be configured to always run and never have cold start, or configured to scale to zero, which results in cold start on new events.
+| **[Consumption&nbsp;plan](consumption-plan.md)** | Apps may scale to zero when idle, meaning some requests may have additional latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from pre-warmed placeholder functions that already have the function host and language processes running. |
+| **[Premium plan](functions-premium-plan.md)** | Perpetually warm instances to avoid any cold start. |
+| **[Dedicated plan](dedicated-plan.md)** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
+| **[ASE](dedicated-plan.md)** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
+| **[Kubernetes](functions-kubernetes-keda.md)** | Depending on KEDA configuration, apps can be configured to avoid a cold start. If configured to scale to zero, then a cold start is experienced for new events.
-<sup>1</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
-
-### Service limits
+## Service limits
[!INCLUDE [functions-limits](../../includes/functions-limits.md)]
-### Networking features
+## Networking features
[!INCLUDE [functions-networking-features](../../includes/functions-networking-features.md)]
-### Billing
+## Billing
| | |
| --- | --- |
-| **[Consumption plan](#consumption-plan)** | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. |
-| **[Premium plan](#premium-plan)** | Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must be kept warm at all times. This plan provides more predictable pricing. |
-| **[Dedicated plan](#app-service-plan)**<sup>1</sup> | You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps.|
-| **[ASE](#app-service-plan)**<sup>1</sup> | there's a flat monthly rate for an ASE that pays for the infrastructure and doesn't change with the size of the ASE. In addition, there's a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. |
+| **[Consumption plan](consumption-plan.md)** | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. |
+| **[Premium plan](functions-premium-plan.md)** | Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must be kept warm at all times. This plan provides the most predictable pricing. |
+| **[Dedicated plan](dedicated-plan.md)** | You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps.|
+| **[App Service Environment (ASE)](dedicated-plan.md)** | There's a flat monthly rate for an ASE that pays for the infrastructure and doesn't change with the size of the ASE. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. |
| **[Kubernetes](functions-kubernetes-keda.md)**| You pay only the costs of your Kubernetes cluster; no additional billing for Functions. Your function app runs as an application workload on top of your cluster, just like a regular app. |
-<sup>1</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
-
## Next steps
-+ [Quickstart: Create an Azure Functions project using Visual Studio Code](./create-first-function-vs-code-csharp.md)
+ [Deployment technologies in Azure Functions](functions-deployment-technologies.md)
+ [Azure Functions developer guide](functions-reference.md)
\ No newline at end of file
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/ip-addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/ip-addresses.md
@@ -47,7 +47,7 @@ az webapp show --resource-group <group_name> --name <app_name> --query possibleO
```
> [!NOTE]
-> When a function app that runs on the [Consumption plan](functions-scale.md#consumption-plan) or the [Premium plan](functions-scale.md#premium-plan) is scaled, a new range of outbound IP addresses may be assigned. When running on either of these plans, you may need to add the entire data center to an allow list.
+> When a function app that runs on the [Consumption plan](consumption-plan.md) or the [Premium plan](functions-premium-plan.md) is scaled, a new range of outbound IP addresses may be assigned. When running on either of these plans, you may need to add the entire data center to an allow list.
## Data center outbound IP addresses
@@ -85,7 +85,7 @@ The inbound IP address **might** change when you:
- Delete the last function app in a resource group and region combination, and re-create it.
- Delete a TLS binding, such as during [certificate renewal](../app-service/configure-ssl-certificate.md#renew-certificate).
-When your function app runs in a [Consumption plan](functions-scale.md#consumption-plan) or in a [Premium plan](functions-scale.md#premium-plan), the inbound IP address might also change even when you haven't taken any actions such as the ones [listed above](#inbound-ip-address-changes).
+When your function app runs in a [Consumption plan](consumption-plan.md) or in a [Premium plan](functions-premium-plan.md), the inbound IP address might also change even when you haven't taken any actions such as the ones [listed above](#inbound-ip-address-changes).
## Outbound IP address changes
@@ -94,7 +94,7 @@ The set of available outbound IP addresses for a function app might change when
* Take any action that can change the inbound IP address. * Change your App Service plan pricing tier. The list of all possible outbound IP addresses your app can use, for all pricing tiers, is in the `possibleOutboundIPAddresses` property. See [Find outbound IPs](#find-outbound-ip-addresses).
-When your function app runs in a [Consumption plan](functions-scale.md#consumption-plan) or in a [Premium plan](functions-scale.md#premium-plan), the outbound IP address might also change even when you haven't taken any actions such as the ones [listed above](#inbound-ip-address-changes).
+When your function app runs in a [Consumption plan](consumption-plan.md) or in a [Premium plan](functions-premium-plan.md), the outbound IP address might also change even when you haven't taken any actions such as the ones [listed above](#inbound-ip-address-changes).
To deliberately force an outbound IP address change:
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/manage-connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/manage-connections.md
@@ -16,7 +16,7 @@ Functions in a function app share resources. Among those shared resources are co
The number of available connections is limited partly because a function app runs in a [sandbox environment](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox). One of the restrictions that the sandbox imposes on your code is a limit on the number of outbound connections, which is currently 600 active (1,200 total) connections per instance. When you reach this limit, the functions runtime writes the following message to the logs: `Host thresholds exceeded: Connections`. For more information, see the [Functions service limits](functions-scale.md#service-limits).
-This limit is per instance. When the [scale controller adds function app instances](functions-scale.md#how-the-consumption-and-premium-plans-work) to handle more requests, each instance has an independent connection limit. That means there's no global connection limit, and you can have much more than 600 active connections across all active instances.
+This limit is per instance. When the [scale controller adds function app instances](event-driven-scaling.md) to handle more requests, each instance has an independent connection limit. That means there's no global connection limit, and you can have much more than 600 active connections across all active instances.
When troubleshooting, make sure that you have enabled Application Insights for your function app. Application Insights lets you view metrics for your function apps like executions. For more information, see [View telemetry in Application Insights](analyze-telemetry-data.md#view-telemetry-in-application-insights).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/python-scale-performance-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/python-scale-performance-reference.md
@@ -10,7 +10,7 @@ ms.custom: devx-track-python
When developing for Azure Functions using Python, you need to understand how your functions perform and how that performance affects the way your function app gets scaled. The need is more important when designing highly performant apps. The main factors to consider when designing, writing, and configuring your functions apps are horizontal scaling and throughput performance configurations. ## Horizontal scaling
-By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Python as needed. Azure Functions uses built-in thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't user configurable. For more information, see [How the Consumption and Premium plans work](functions-scale.md#how-the-consumption-and-premium-plans-work).
+By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Python as needed. Azure Functions uses built-in thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't user configurable. For more information, see [Event-driven scaling in Azure Functions](event-driven-scaling.md).
## Improving throughput performance
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-create-app-service-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-create-app-service-plan.md
@@ -18,7 +18,7 @@ This Azure Functions sample script creates a function app, which is a container
## Sample script
-This script creates an Azure Function app using a dedicated [App Service plan](../functions-scale.md#app-service-plan).
+This script creates an Azure Function app using a dedicated [App Service plan](../dedicated-plan.md).
[!code-azurecli-interactive[main](../../../cli_scripts/azure-functions/create-function-app-app-service-plan/create-function-app-app-service-plan.sh "Create an Azure Function on an App Service plan")]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md
@@ -31,7 +31,7 @@ This script uses the following commands: Each command in the table links to comm
|---|---| | [az group create](/cli/azure/group#az-group-create) | Create a resource group with location | | [az storage accounts create](/cli/azure/storage/account#az-storage-account-create) | Create a storage account |
-| [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../functions-scale.md#consumption-plan). |
+| [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../consumption-plan.md). |
| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Create an Azure Cosmos DB database. | | [az cosmosdb show](/cli/azure/cosmosdb#az-cosmosdb-show)| Gets the database account connection. | | [az cosmosdb list-keys](/cli/azure/cosmosdb#az-cosmosdb-list-keys)| Gets the keys for the database. |
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account.md
@@ -31,7 +31,7 @@ This script uses the following commands. Each command in the table links to comm
|---|---| | [az group create](/cli/azure/group#az-group-create) | Create a resource group with location. | | [az storage account create](/cli/azure/storage/account#az-storage-account-create) | Create a storage account. |
-| [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../functions-scale.md#consumption-plan). |
+| [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../consumption-plan.md). |
| [az storage account show-connection-string](/cli/azure/storage/account#az-storage-account-show-connection-string) | Gets the connection string for the account. | | [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set) | Sets the connection string as an app setting in the function app. |
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-create-function-app-github-continuous https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-create-function-app-github-continuous.md
@@ -7,7 +7,7 @@ ms.custom: mvc, devx-track-azurecli
--- # Create a function app in Azure that is deployed from GitHub
-This Azure Functions sample script creates a function app using the [Consumption plan](../functions-scale.md#consumption-plan), along with its related resources. The script also configures your function code for continuous deployment from a GitHub repository.
+This Azure Functions sample script creates a function app using the [Consumption plan](../consumption-plan.md), along with its related resources. The script also configures your function code for continuous deployment from a GitHub repository.
In this sample, you need:
@@ -36,7 +36,7 @@ Each command in the table links to command specific documentation. This script u
|---|---| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az storage account create](/cli/azure/storage/account#az-storage-account-create) | Creates the storage account required by the function app. |
-| [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../functions-scale.md#consumption-plan) and associates it with a Git or Mercurial repository. |
+| [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../consumption-plan.md) and associates it with a Git or Mercurial repository. |
## Next steps
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-create-function-app-vsts-continuous https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-create-function-app-vsts-continuous.md
@@ -7,7 +7,7 @@ ms.custom: mvc, devx-track-azurecli
--- # Create a function in Azure that is deployed from Azure DevOps
-This topic shows you how to use Azure Functions to create a [serverless](https://azure.microsoft.com/solutions/serverless/) function app using the [Consumption plan](../functions-scale.md#consumption-plan). The function app, which is a container for your functions, is continuously deployed from an Azure DevOps repository.
+This topic shows you how to use Azure Functions to create a [serverless](https://azure.microsoft.com/solutions/serverless/) function app using the [Consumption plan](../consumption-plan.md). The function app, which is a container for your functions, is continuously deployed from an Azure DevOps repository.
To complete this topic, you must have:
@@ -36,7 +36,7 @@ This script uses the following commands to create a resource group, storage acco
|---|---| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. | | [az storage account create](/cli/azure/storage/account#az-storage-account-create) | Creates the storage account required by the function app. |
-| [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../functions-scale.md#consumption-plan). |
+| [az functionapp create](/cli/azure/functionapp#az-functionapp-create) | Creates a function app in the serverless [Consumption plan](../consumption-plan.md). |
| [az functionapp deployment source config](/cli/azure/functionapp/deployment/source#az-functionapp-deployment-source-config) | Associates a function app with a Git or Mercurial repository. | ## Next steps
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-create-serverless-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-create-serverless-python.md
@@ -22,7 +22,7 @@ This Azure Functions sample script creates a function app, which is a container
## Sample script
-This script creates an Azure Function app using the [Consumption plan](../functions-scale.md#consumption-plan).
+This script creates an Azure Function app using the [Consumption plan](../consumption-plan.md).
[!code-azurecli-interactive[main](../../../cli_scripts/azure-functions/create-function-app-consumption-python/create-function-app-consumption-python.sh "Create an Azure Function on a Consumption plan")]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-create-serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-create-serverless.md
@@ -10,7 +10,7 @@ ms.custom: mvc, devx-track-azurecli
# Create a function app for serverless code execution
-This Azure Functions sample script creates a function app, which is a container for your functions. The function app is created using the [Consumption plan](../functions-scale.md#consumption-plan), which is ideal for event-driven serverless workloads.
+This Azure Functions sample script creates a function app, which is a container for your functions. The function app is created using the [Consumption plan](../consumption-plan.md), which is ideal for event-driven serverless workloads.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
@@ -20,7 +20,7 @@ This Azure Functions sample script creates a function app, which is a container
## Sample script
-This script creates an Azure Function app using the [Consumption plan](../functions-scale.md#consumption-plan).
+This script creates an Azure Function app using the [Consumption plan](../consumption-plan.md).
[!code-azurecli-interactive[main](../../../cli_scripts/azure-functions/create-function-app-consumption/create-function-app-consumption.sh "Create an Azure Function on a Consumption plan")]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/scripts/functions-cli-mount-files-storage-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-mount-files-storage-linux.md
@@ -21,7 +21,7 @@ This Azure Functions sample script creates a function app and creates a share in
## Sample script
-This script creates an Azure Function app using the [Consumption plan](../functions-scale.md#consumption-plan).
+This script creates an Azure Function app using the [Consumption plan](../consumption-plan.md).
[!code-azurecli-interactive[main](../../../cli_scripts/azure-functions/functions-cli-mount-files-storage-linux/functions-cli-mount-files-storage-linux.sh "Create an Azure Function on a Consumption plan")]
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-baseline.md
@@ -76,7 +76,7 @@ In addition, configure a front-end gateway, such as Azure Web Application Firewa
- [Azure Functions networking options](./functions-networking-options.md) -- [Azure Functions Premium Plan](./functions-scale.md#premium-plan)
+- [Azure Functions Premium Plan](./functions-premium-plan.md)
- [Introduction to the App Service Environments](../app-service/environment/intro.md)
@@ -120,7 +120,7 @@ Alternatively, there are multiple marketplace options like the Barracuda WAF for
- [Azure Functions networking options](./functions-networking-options.md) -- [Azure Functions Premium Plan](./functions-scale.md#premium-plan)
+- [Azure Functions Premium Plan](./functions-premium-plan.md)
- [Introduction to the App Service Environments](../app-service/environment/intro.md)
@@ -142,7 +142,7 @@ Alternatively, there are multiple marketplace options like the Barracuda WAF for
- [Azure Functions networking options](./functions-networking-options.md) -- [Azure Functions Premium Plan](./functions-scale.md#premium-plan)
+- [Azure Functions Premium Plan](./functions-premium-plan.md)
- [Introduction to the App Service Environments](../app-service/environment/intro.md)
@@ -550,7 +550,7 @@ You may also use Private Endpoints to perform network isolation. An Azure Privat
- [Azure Functions networking options](./functions-networking-options.md) -- [Azure Functions Premium Plan](./functions-scale.md#premium-plan)
+- [Azure Functions Premium Plan](./functions-premium-plan.md)
- [Understand Private Endpoint](../private-link/private-endpoint-overview.md)
@@ -830,7 +830,7 @@ Deploy high risk Azure Function apps into their own Virtual Network (VNet). Peri
- [Azure Functions networking options](./functions-networking-options.md) -- [Azure Functions Premium Plan](./functions-scale.md#premium-plan)
+- [Azure Functions Premium Plan](./functions-premium-plan.md)
- [Networking considerations for an App Service Environment](../app-service/environment/network-info.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/storage-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/storage-considerations.md
@@ -14,7 +14,7 @@ Azure Functions requires an Azure Storage account when you create a function app
|Storage service | Functions usage | |---------|---------| | [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md) | Maintain bindings state and function keys. <br/>Also used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
-| [Azure Files](../storage/files/storage-files-introduction.md) | File share used to store and run your function app code in a [Consumption Plan](functions-scale.md#consumption-plan) and [Premium Plan](functions-scale.md#premium-plan). |
+| [Azure Files](../storage/files/storage-files-introduction.md) | File share used to store and run your function app code in a [Consumption Plan](consumption-plan.md) and [Premium Plan](functions-premium-plan.md). |
| [Azure Queue storage](../storage/queues/storage-queues-introduction.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). | | [Azure Table storage](../storage/tables/table-storage-overview.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
@@ -29,9 +29,11 @@ To learn more about storage account types, see [Introducing the Azure Storage Se
While you can use an existing storage account with your function app, you must make sure that it meets these requirements. Storage accounts created as part of the function app create flow in the Azure portal are guaranteed to meet these storage account requirements. In the portal, unsupported accounts are filtered out when choosing an existing storage account while creating a function app. In this flow, you are only allowed to choose existing storage accounts in the same region as the function app you're creating. To learn more, see [Storage account location](#storage-account-location).
+<!-- JH: Does using a Premium Storage account improve perf? -->
+ ## Storage account guidance
-Every function app requires a storage account to operate. If that account is deleted your function app won't run. To troubleshoot storage-related issues, see [How to troubleshoot storage-related issues](functions-recover-storage-account.md). The following additional considerations apply to the Storage account used by function apps.
+Every function app requires a storage account to operate. If that account is deleted, your function app won't run. To troubleshoot storage-related issues, see [How to troubleshoot storage-related issues](functions-recover-storage-account.md). The following additional considerations apply to the storage account used by function apps.
### Storage account location
@@ -55,7 +57,15 @@ It's possible for multiple function apps to share the same storage account witho
[!INCLUDE [functions-storage-encryption](../../includes/functions-storage-encryption.md)]
-## Mount file shares (Linux)
+### In-region data residency
+
+When all customer data must remain within a single region, the storage account associated with the function app must be one with [in-region redundancy](../storage/common/storage-redundancy.md). An in-region redundant storage account also must be used with [Azure Durable Functions](./durable/durable-functions-perf-and-scale.md#storage-account-selection).
+
+Other platform-managed customer data is only stored within the region when hosting in an internally load-balanced App Service Environment (ASE). To learn more, see [ASE zone redundancy](../app-service/environment/zone-redundancy.md#in-region-data-residency).
+
+## Mount file shares
+
+_This functionality is currently only available when running on Linux._
You can mount existing Azure Files shares to your Linux function apps. By mounting a share to your Linux function app, you can leverage existing machine learning models or other data in your functions. You can use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az-webapp-config-storage-account-add) command to mount an existing share to your Linux function app.
azure-government https://docs.microsoft.com/en-us/azure/azure-government/azure-secure-isolation-guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/azure-secure-isolation-guidance.md
@@ -112,7 +112,7 @@ With Azure Key Vault, customers can [import or generate encryption keys](../key-
**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer cryptographic keys.**
-Azure Key Vault provides features for a robust solution for encryption key and certificate lifecycle management. Upon creation, every key vault is automatically associated with the Azure Active Directory (Azure AD) tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault must be authenticated by Azure AD, as described in Azure Key Vault [security overview](../key-vault/general/overview-security.md):
+Azure Key Vault provides features for a robust solution for encryption key and certificate lifecycle management. Upon creation, every key vault is automatically associated with the Azure Active Directory (Azure AD) tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault must be authenticated by Azure AD, as described in Azure Key Vault [security overview](../key-vault/general/security-overview.md):
- Authentication establishes the identity of the caller (user or application). - Authorization determines which operations the caller can perform, based on a combination of Azure role-based access control (Azure RBAC) and Azure Key Vault policies.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-definition.md
@@ -123,30 +123,35 @@ Use the following procedure to create action groups:
8. If you want to fill out-of-the-box fields with fixed values, select **Use Custom Template**. Otherwise, choose an existing [template](#template-definitions) in the **Template** list and enter the fixed values in the template fields.
-9. In the last section of the action ITSM group definition you can define how many alerts will be created from each alert. This section is relevant only to Log Search Alerts.
+9. In the last section of the ITSM action group definition, you can define how many work items will be created for each alert.
+
+ >[!NOTE]
+ >
+ > * This section is relevant only to Log Search Alerts.
+ > * Metric Alerts and Activity Log Alerts will always create one work item per alert.
* In case you select "Incident" or "Alert" in the work item dropdown:
- * If you check the **Create individual work items for each Configuration Item** check box, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system.
+ * If you check the **"Create individual work items for each Configuration Item"** check box, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system.
For example: 1) Alert 1 with 3 Configuration Items: A, B, C - will create 3 work items.
- 2) Alert 2 with 1 Configuration Item: D - will create 1 work item.
+ 2) Alert 2 with 1 Configuration Item: A - will create 1 work item.
+ >[!NOTE]
+ > In this case, some of the fired alerts will not generate new work items in the ITSM tool.
- **By the end of this flow there will be 4 alerts**
- * If you clear the **Create individual work items for each Configuration Item** check box, there will be alerts that will not create a new work item. work items will be merged according to alert rule.
+ * If you clear the **"Create individual work items for each Configuration Item"** check box,
+ the ITSM connector will create a single work item for each alert rule and append all impacted configuration items to it. A new work item is created if the previous one has been closed.
For example:
- 1) Alert 1 with 3 Configuration Items: A, B, C - will create 1 work item.
- 2) Alert 2 for the same alert rule as phase 1 with 1 Configuration Item: D - will be merged to the work item in phase 1.
- 3) Alert 3 for a different alert rule with 1 Configuration Item: E - will create 1 work item.
-
- **By the end of this flow there will be 2 alerts**
+ 1) Alert 1 with 3 Configuration Items: A, B, C - will create 1 work item.
+ 2) Alert 2 for the same alert rule as phase 1 with 1 Configuration Item: D - will be merged to the work item in phase 1.
+ 3) Alert 3 for a different alert rule with 1 Configuration Item: E - will create 1 work item.
![Screenshot that shows the ITSM Incident window.](media/itsmc-overview/itsm-action-configuration.png)

* In case you select "Event" in the work item dropdown:
- * If you select **Create individual work items for each Log Entry** in the radio buttons selection, an alert will be created per each row in the search results of the log search alert query. In the payload of the alert the description property will have the row from the search results.
- * If you select **Create individual work items for each Configuration Item** in the radio buttons selection, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system. This will be the same as the checking the checkbox in Incident/Alert section.
+ * If you select **"Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)"** in the radio buttons selection, a work item will be created for each row in the search results of the log search alert query. The description property in the alert payload will contain the row from the search results.
+ * If you select **"Create individual work items for each Configuration Item"** in the radio buttons selection, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system. This is the same as selecting the check box in the Incident/Alert section.
![Screenshot that shows the ITSM Event window.](media/itsmc-overview/itsm-action-configuration-event.png)

10. Select **OK**.
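The two check-box behaviors described above can be sketched as a toy model. The function name and the `(alert_rule, configuration_items)` structure are illustrative only, not part of the ITSM connector:

```python
# Toy model of the two behaviors: `alerts` is a list of
# (alert_rule, configuration_items) pairs, one per fired alert.
def work_item_count(alerts, per_configuration_item):
    if per_configuration_item:
        # Checked: every configuration item in every alert creates
        # its own work item (duplicates per item are allowed).
        return sum(len(items) for _rule, items in alerts)
    # Cleared: a single work item per alert rule; impacted
    # configuration items are appended to the existing work item.
    return len({rule for rule, _items in alerts})
```

Run against the examples in the text: three alerts, where the first two share an alert rule, yield 4 work items with the box checked but only 2 with it cleared.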
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/resource-logs-schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/resource-logs-schema.md
@@ -73,7 +73,7 @@ The schema for resource logs varies depending on the resource and log category.
| Load Balancer |[Log analytics for Azure Load Balancer](../../load-balancer/load-balancer-monitor-log.md) | | Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) | | Network Security Groups |[Log analytics for network security groups (NSGs)](../../virtual-network/virtual-network-nsg-manage-log.md) |
-| DDOS Protection | [Manage Azure DDoS Protection Standard](../../ddos-protection/diagnostic-logging.md#log-schemas) |
+| DDoS Protection | [Logging for Azure DDoS Protection Standard](../../ddos-protection/diagnostic-logging.md#log-schemas) |
| Power BI Dedicated | [Logging for Power BI Embedded in Azure](/power-bi/developer/azure-pbie-diag-logs) | | Recovery Services | [Data Model for Azure Backup](../../backup/backup-azure-reports-data-model.md)| | Search |[Enabling and using Search Traffic Analytics](../../search/search-traffic-analytics.md) |
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/configure-kerberos-encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-kerberos-encryption.md
@@ -167,6 +167,7 @@ Performance impact of krb5p:
## Next steps
+* [Troubleshoot NFSv4.1 Kerberos volume issues](troubleshoot-nfsv41-kerberos-volumes.md)
* [FAQs About Azure NetApp Files](azure-netapp-files-faqs.md) * [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) * [Create an Active Directory connection](azure-netapp-files-create-volumes-smb.md#create-an-active-directory-connection)
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/troubleshoot-nfsv41-kerberos-volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-nfsv41-kerberos-volumes.md new file mode 100644
@@ -0,0 +1,38 @@
+---
+title: Troubleshoot NFSv4.1 Kerberos volume issues for Azure NetApp Files | Microsoft Docs
+description: Describes error messages and resolutions that can help you troubleshoot NFSv4.1 Kerberos volume issues for Azure NetApp Files.
+services: azure-netapp-files
+documentationcenter: ''
+author: b-juche
+manager: ''
+editor: ''
+
+ms.assetid:
+ms.service: azure-netapp-files
+ms.workload: storage
+ms.tgt_pltfrm: na
+ms.devlang: na
+ms.topic: troubleshooting
+ms.date: 01/05/2020
+ms.author: b-juche
+---
+# Troubleshoot NFSv4.1 Kerberos volume issues
+
+This article describes resolutions to error conditions you might encounter when creating or managing NFSv4.1 Kerberos volumes.
+
+## Error conditions and resolutions
+
+| Error conditions | Resolutions |
+|-|-|
+|`Error allocating volume - Export policy rules does not match kerberosEnabled flag` | Azure NetApp Files does not support Kerberos for NFSv3 volumes. Kerberos is supported only for the NFSv4.1 protocol. |
+|`This NetApp account has no configured Active Directory connections` | Configure Active Directory for the NetApp account with fields **KDC IP** and **AD Server Name**. See [Configure the Azure portal](configure-kerberos-encryption.md#configure-the-azure-portal) for instructions. |
+|`Mismatch between KerberosEnabled flag value and ExportPolicyRule's access type parameter values.` | Azure NetApp Files does not support converting a plain NFSv4.1 volume to Kerberos NFSv4.1 volume, and vice-versa. |
+|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.xyz.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.xyz.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.xyz.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.xyz.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to less than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpcgssd` service as follows. The command might vary depending on the OS.<br> RHEL 7: <br> `service nfs restart` <br> `service rpcgssd restart` <br> CentOS 8: <br> `systemctl enable nfs-client.target && systemctl start nfs-client.target` <br> Ubuntu: <br> (Restart the `rpc-gssd` service.) <br> `sudo systemctl start rpc-gssd.service` </ul>|
+|`mount.nfs: an incorrect mount option was specified` | The issue might be with the NFS client. Reboot the NFS client. |
+|`Hostname lookup failed` | You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.1.1.4`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `myDomain.com`. The PTR record added to the reverse lookup zone should be `10.1.1.4 -> AD1.myDomain.com`. |
+|`Volume creation fails due to unreachable DNS server` | Two possible solutions are available: <br> <ul><li> This error indicates that DNS is not reachable. The reason might be an incorrect DNS IP or a networking issue. Check the DNS IP entered in the AD connection and make sure that the IP is correct. </li> <li> Make sure that the AD and the volume are in the same region and the same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets. </li></ul> |
+|NFSv4.1 Kerberos volume creation fails with an error similar to the following example: <br> `Failed to enable NFS Kerberos on LIF "svm_e719cde8d6d0413fbd6adac0636cdecb_7ad0b82e_73349613". Failed to bind service principal name on LIF "svm_e719cde8d6d0413fbd6adac0636cdecb_7ad0b82e_73349613". SecD Error: server create fail join user auth.` |A wrong KDC IP was used when the Kerberos volume was created. Update the KDC IP with the correct address. <br> Updating the KDC IP alone does not clear the error; you also need to re-create the volume. |
+
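Two of the checks in the table above, the reverse DNS zone and the 15-character hostname limit, can be sketched in shell. The IP address comes from the table's example; the client hostname is a placeholder, so substitute your own values:

```shell
# Example IP from the table above; substitute your own.
ip="10.1.1.68"

# The PTR record for this IP lives in the reverse lookup zone 1.1.10.in-addr.arpa.
zone="$(echo "$ip" | awk -F. '{print $3 "." $2 "." $1 ".in-addr.arpa"}')"
echo "reverse zone: $zone"

# The NFS client hostname must stay under 15 characters before the realm join.
client="nfsclient01"   # placeholder; use $(hostname -s) on a real client
if [ "${#client}" -lt 15 ]; then
  echo "hostname OK (${#client} chars)"
else
  echo "hostname too long (${#client} chars); shorten it and re-join the realm"
fi
```

On a real client, you would follow this with `nslookup` in both directions and `kinit <administrator>`, as the table describes.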
+## Next steps
+
+* [Configure NFSv4.1 Kerberos encryption for Azure NetApp Files](configure-kerberos-encryption.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
@@ -127,7 +127,7 @@ For Azure Database for PostgreSQL limits, see [Limitations in Azure Database for
[!INCLUDE [functions-limits](../../../includes/functions-limits.md)]
-For more information, see [Functions Hosting plans comparison](../../azure-functions/functions-scale.md#hosting-plans-comparison).
+For more information, see [Functions Hosting plans comparison](../../azure-functions/functions-scale.md).
## Azure Kubernetes Service limits
@@ -171,6 +171,10 @@ The latest values for Azure Machine Learning Compute quotas can be found in the
[!INCLUDE [policy-limits](../../../includes/azure-policy-limits.md)]
+## Azure role-based access control limits
+
+[!INCLUDE [role-based-access-control-limits](../../../includes/role-based-access-control/limits.md)]
+ ## Azure SignalR Service limits [!INCLUDE [signalr-service-limits](../../../includes/signalr-service-limits.md)]
@@ -335,10 +339,6 @@ The latest values for Azure Purview quotas can be found in the [Azure Purview qu
[!INCLUDE [notification-hub-limits](../../../includes/notification-hub-limits.md)]
-## Azure role-based access control limits
-
-[!INCLUDE [role-based-access-control-limits](../../../includes/role-based-access-control-limits.md)]
- ## Service Bus limits [!INCLUDE [azure-servicebus-limits](../../../includes/service-bus-quotas-table.md)]
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/add-resource-extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/add-resource-extensions.md
@@ -1,14 +1,15 @@
--- title: Post-deployment configuration by using extensions
-description: Learn how to use Azure Resource Manager template extensions to provide post-deployment configurations.
+description: Learn how to use Azure Resource Manager template (ARM template) extensions to provide post-deployment configurations.
author: mumian ms.topic: conceptual ms.date: 12/14/2018 ms.author: jgao ---+ # Provide post-deployment configurations by using extensions
-Template extensions are small applications that provide post-deployment configuration and automation tasks on Azure resources. The most popular one is virtual machine extensions. See [Virtual machine extensions and features for Windows](../../virtual-machines/extensions/features-windows.md), and [Virtual machine extensions and features for Linux](../../virtual-machines/extensions/features-linux.md).
+Azure Resource Manager template (ARM template) extensions are small applications that provide post-deployment configuration and automation tasks on Azure resources. The most popular one is virtual machine extensions. See [Virtual machine extensions and features for Windows](../../virtual-machines/extensions/features-windows.md), and [Virtual machine extensions and features for Linux](../../virtual-machines/extensions/features-linux.md).
## Extensions
@@ -17,17 +18,17 @@ The existing extensions are:
- [Microsoft.Compute/virtualMachines/extensions](/azure/templates/microsoft.compute/2018-10-01/virtualmachines/extensions) - [Microsoft.Compute virtualMachineScaleSets/extensions](/azure/templates/microsoft.compute/2018-10-01/virtualmachinescalesets/extensions) - [Microsoft.HDInsight clusters/extensions](/azure/templates/microsoft.hdinsight/2018-06-01-preview/clusters)-- [Microsoft.Sql servers/databases/extensions](/azure/templates/microsoft.sql/2014-04-01/servers/databases/extensions)
+- [Microsoft.Sql servers/databases/extensions](/azure/templates/microsoft.sql/2014-04-01/servers/databases/extensions)
- [Microsoft.Web/sites/siteextensions](/azure/templates/microsoft.web/2016-08-01/sites/siteextensions) To find out the available extensions, browse to the [template reference](/azure/templates/). In **Filter by title**, enter **extension**. To learn how to use these extensions, see: -- [Tutorial: Deploy virtual machine extensions with Azure Resource Manager templates](template-tutorial-deploy-vm-extensions.md).-- [Tutorial: Import SQL BACPAC files with Azure Resource Manager templates](template-tutorial-deploy-sql-extensions-bacpac.md)
+- [Tutorial: Deploy virtual machine extensions with ARM templates](template-tutorial-deploy-vm-extensions.md).
+- [Tutorial: Import SQL BACPAC files with ARM templates](template-tutorial-deploy-sql-extensions-bacpac.md)
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Deploy virtual machine extensions with Azure Resource Manager templates](template-tutorial-deploy-vm-extensions.md)
+> [Tutorial: Deploy virtual machine extensions with ARM templates](template-tutorial-deploy-vm-extensions.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/child-resource-name-type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/child-resource-name-type.md
@@ -1,14 +1,15 @@
--- title: Child resources in templates
-description: Describes how to set the name and type for child resources in an Azure Resource Manager template.
+description: Describes how to set the name and type for child resources in an Azure Resource Manager template (ARM template).
ms.topic: conceptual ms.date: 12/21/2020 ---+ # Set name and type for child resources Child resources are resources that exist only within the context of another resource. For example, a [virtual machine extension](/azure/templates/microsoft.compute/virtualmachines/extensions) can't exist without a [virtual machine](/azure/templates/microsoft.compute/virtualmachines). The extension resource is a child of the virtual machine.
-Each parent resource accepts only certain resource types as child resources. The resource type for the child resource includes the resource type for the parent resource. For example, **Microsoft.Web/sites/config** and **Microsoft.Web/sites/extensions** are both child resources of the **Microsoft.Web/sites**. The accepted resource types are specified in the [template schema](https://github.com/Azure/azure-resource-manager-schemas) of the parent resource.
+Each parent resource accepts only certain resource types as child resources. The resource type for the child resource includes the resource type for the parent resource. For example, `Microsoft.Web/sites/config` and `Microsoft.Web/sites/extensions` are both child resources of `Microsoft.Web/sites`. The accepted resource types are specified in the [template schema](https://github.com/Azure/azure-resource-manager-schemas) of the parent resource.
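The containment rule above means a child resource type is always the parent type plus one more slash-separated segment. A small shell illustration (a sketch only, using the types named above):

```shell
# Child type = parent type + "/<child segment>".
child="Microsoft.Web/sites/config"

echo "parent type:   ${child%/*}"   # strip the last segment
echo "child segment: ${child##*/}"  # keep only the last segment
```

The same split applies to child resource names, such as `VNet1/Subnet1` in the examples that follow.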
In an Azure Resource Manager template (ARM template), you can specify the child resource either within the parent resource or outside of the parent resource. The following example shows the child resource included within the resources property of the parent resource.
@@ -83,20 +84,20 @@ The following example shows a virtual network with a subnet. Notice that the
] ```
-The full resource type is still **Microsoft.Network/virtualNetworks/subnets**. You don't provide **Microsoft.Network/virtualNetworks/** because it's assumed from the parent resource type.
+The full resource type is still `Microsoft.Network/virtualNetworks/subnets`. You don't provide `Microsoft.Network/virtualNetworks/` because it's assumed from the parent resource type.
The child resource name is set to **Subnet1** but the full name includes the parent name. You don't provide **VNet1** because it's assumed from the parent resource. ## Outside parent resource
-When defined outside of the parent resource, you format the type and with slashes to include the parent type and name.
+When defined outside of the parent resource, you format the type and name with slashes to include the parent type and name.
```json "type": "{resource-provider-namespace}/{parent-resource-type}/{child-resource-type}", "name": "{parent-resource-name}/{child-resource-name}", ```
-The following example shows a virtual network and subnet that are both defined at the root level. Notice that the subnet isn't included within the resources array for the virtual network. The name is set to **VNet1/Subnet1** and the type is set to **Microsoft.Network/virtualNetworks/subnets**. The child resource is marked as dependent on the parent resource because the parent resource must exist before the child resource can be deployed.
+The following example shows a virtual network and subnet that are both defined at the root level. Notice that the subnet isn't included within the resources array for the virtual network. The name is set to **VNet1/Subnet1** and the type is set to `Microsoft.Network/virtualNetworks/subnets`. The child resource is marked as dependent on the parent resource because the parent resource must exist before the child resource can be deployed.
```json "resources": [
@@ -130,6 +131,5 @@ The following example shows a virtual network and subnet that are both defined a
## Next steps
-* To learn about creating ARM templates, see [Authoring templates](template-syntax.md).
-
+* To learn about creating ARM templates, see [Understand the structure and syntax of ARM templates](template-syntax.md).
* To learn about the format of the resource name when referencing the resource, see the [reference function](template-functions-resource.md#reference).
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/conditional-resource-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/conditional-resource-deployment.md
@@ -14,7 +14,7 @@ Sometimes you need to optionally deploy a resource in an Azure Resource Manager
## New or existing resource
-You can use conditional deployment to create a new resource or use an existing one. The following example shows how to use condition to deploy a new storage account or use an existing storage account.
+You can use conditional deployment to create a new resource or use an existing one. The following example shows how to use `condition` to deploy a new storage account or use an existing storage account.
```json {
@@ -31,7 +31,7 @@ You can use conditional deployment to create a new resource or use an existing o
} ```
-When the parameter **newOrExisting** is set to **new**, the condition evaluates to true. The storage account is deployed. However, when **newOrExisting** is set to **existing**, the condition evaluates to false and the storage account isn't deployed.
+When the parameter `newOrExisting` is set to **new**, the condition evaluates to true. The storage account is deployed. However, when `newOrExisting` is set to **existing**, the condition evaluates to false and the storage account isn't deployed.
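That gating behavior can be restated as a tiny shell sketch (illustrative only; in a real deployment, Resource Manager evaluates the template's `condition` expression itself, not a script):

```shell
# Mirrors the condition equals(parameters('newOrExisting'), 'new').
newOrExisting="new"   # or "existing"

if [ "$newOrExisting" = "new" ]; then
  echo "condition true: deploy the storage account"
else
  echo "condition false: skip the storage account"
fi
```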
For a complete example template that uses the `condition` element, see [VM with a new or existing Virtual Network, Storage, and Public IP](https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-new-or-existing-conditions).
@@ -75,13 +75,13 @@ For the complete template, see [Azure SQL logical server](https://github.com/Azu
If you use a [reference](template-functions-resource.md#reference) or [list](template-functions-resource.md#list) function with a resource that is conditionally deployed, the function is evaluated even if the resource isn't deployed. You get an error if the function refers to a resource that doesn't exist.
-Use the [if](template-functions-logical.md#if) function to make sure the function is only evaluated for conditions when the resource is deployed. See the [if function](template-functions-logical.md#if) for a sample template that uses if and reference with a conditionally deployed resource.
+Use the [if](template-functions-logical.md#if) function to make sure the function is only evaluated for conditions when the resource is deployed. See the [if function](template-functions-logical.md#if) for a sample template that uses `if` and `reference` with a conditionally deployed resource.
You set a [resource as dependent](define-resource-dependency.md) on a conditional resource exactly as you would any other resource. When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies. ## Complete mode
-If you deploy a template with [complete mode](deployment-modes.md) and a resource isn't deployed because condition evaluates to false, the result depends on which REST API version you use to deploy the template. If you use a version earlier than 2019-05-10, the resource **isn't deleted**. With 2019-05-10 or later, the resource **is deleted**. The latest versions of Azure PowerShell and Azure CLI delete the resource when condition is false.
+If you deploy a template with [complete mode](deployment-modes.md) and a resource isn't deployed because `condition` evaluates to false, the result depends on which REST API version you use to deploy the template. If you use a version earlier than 2019-05-10, the resource **isn't deleted**. With 2019-05-10 or later, the resource **is deleted**. The latest versions of Azure PowerShell and Azure CLI delete the resource when `condition` is false.
## Next steps
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/define-resource-dependency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/define-resource-dependency.md
@@ -1,18 +1,19 @@
--- title: Set deployment order for resources
-description: Describes how to set one resource as dependent on another resource during deployment. The dependencies ensure resources are deployed in the correct order.
+description: Describes how to set one Azure resource as dependent on another resource during deployment. The dependencies ensure resources are deployed in the correct order.
ms.topic: conceptual ms.date: 12/21/2020 ---+ # Define the order for deploying resources in ARM templates
-When deploying resources, you may need to make sure some resources exist before other resources. For example, you need a logical SQL server before deploying a database. You establish this relationship by marking one resource as dependent on the other resource. Use the **dependsOn** element to define an explicit dependency. Use the **reference** or **list** functions to define an implicit dependency.
+When deploying resources, you may need to make sure some resources exist before other resources. For example, you need a logical SQL server before deploying a database. You establish this relationship by marking one resource as dependent on the other resource. Use the `dependsOn` element to define an explicit dependency. Use the `reference` or `list` functions to define an implicit dependency.
-Resource Manager evaluates the dependencies between resources, and deploys them in their dependent order. When resources aren't dependent on each other, Resource Manager deploys them in parallel. You only need to define dependencies for resources that are deployed in the same template.
+Azure Resource Manager evaluates the dependencies between resources, and deploys them in their dependent order. When resources aren't dependent on each other, Resource Manager deploys them in parallel. You only need to define dependencies for resources that are deployed in the same template.
## dependsOn
-Within your template, the dependsOn element enables you to define one resource as a dependent on one or more resources. Its value is a JSON array of strings, each of which is a resource name or ID. The array can include resources that are [conditionally deployed](conditional-resource-deployment.md). When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies.
+Within your Azure Resource Manager template (ARM template), the `dependsOn` element enables you to define one resource as dependent on one or more resources. Its value is a JavaScript Object Notation (JSON) array of strings, each of which is a resource name or ID. The array can include resources that are [conditionally deployed](conditional-resource-deployment.md). When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies.
The following example shows a network interface that depends on a virtual network, network security group, and public IP address. For the full template, see [the quickstart template for a Linux VM](https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-linux/azuredeploy.json).
@@ -31,11 +32,11 @@ The following example shows a network interface that depends on a virtual networ
} ```
-While you may be inclined to use dependsOn to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, dependsOn isn't the right approach. You can't query which resources were defined in the dependsOn element after deployment. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
+While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. You can't query which resources were defined in the `dependsOn` element after deployment. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
## Child resources
-An implicit deployment dependency isn't automatically created between a [child resource](child-resource-name-type.md) and the parent resource. If you need to deploy the child resource after the parent resource, set the dependsOn property.
+An implicit deployment dependency isn't automatically created between a [child resource](child-resource-name-type.md) and the parent resource. If you need to deploy the child resource after the parent resource, set the `dependsOn` property.
The following example shows a logical SQL server and database. Notice that an explicit dependency is defined between the database and the server, even though the database is a child of the server.
@@ -79,13 +80,13 @@ Reference and list expressions implicitly declare that one resource depends on a
To enforce an implicit dependency, refer to the resource by name, not resource ID. If you pass the resource ID into the reference or list functions, an implicit reference isn't created.
-The general format of the reference function is:
+The general format of the `reference` function is:
```json reference('resourceName').propertyPath ```
-The general format of the listKeys function is:
+The general format of the `listKeys` function is:
```json listKeys('resourceName', 'yyyy-mm-dd')
@@ -159,7 +160,7 @@ The following example shows how to deploy multiple virtual machines. The templat
} ```
-The following example shows how to deploy three storage accounts before deploying the virtual machine. Notice that the copy element has name set to `storagecopy` and the dependsOn element for the virtual machine is also set to `storagecopy`.
+The following example shows how to deploy three storage accounts before deploying the virtual machine. Notice that the `copy` element has `name` set to `storagecopy` and the `dependsOn` element for the virtual machine is also set to `storagecopy`.
```json {
@@ -207,10 +208,9 @@ For information about assessing the deployment order and resolving dependency er
## Next steps
-* To go through a tutorial, see [Tutorial: create Azure Resource Manager templates with dependent resources](template-tutorial-create-templates-with-dependent-resources.md).
+* To go through a tutorial, see [Tutorial: Create ARM templates with dependent resources](template-tutorial-create-templates-with-dependent-resources.md).
* For a Microsoft Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
-* For recommendations when setting dependencies, see [Azure Resource Manager template best practices](template-best-practices.md).
+* For recommendations when setting dependencies, see [ARM template best practices](template-best-practices.md).
* To learn about troubleshooting dependencies during deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md).
-* To learn about creating Azure Resource Manager templates, see [Authoring templates](template-syntax.md).
-* For a list of the available functions in a template, see [Template functions](template-functions.md).
-
+* To learn about creating Azure Resource Manager templates, see [Understand the structure and syntax of ARM templates](template-syntax.md).
+* For a list of the available functions in a template, see [ARM template functions](template-functions.md).
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-script-template-configure-dev https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-script-template-configure-dev.md
@@ -1,6 +1,6 @@
--- title: Configure development environment for deployment scripts in templates | Microsoft Docs
-description: configure development environment for deployment scripts in Azure Resource Manager templates.
+description: Configure development environment for deployment scripts in Azure Resource Manager templates (ARM templates).
services: azure-resource-manager author: mumian ms.service: azure-resource-manager
@@ -9,13 +9,14 @@ ms.date: 12/14/2020
ms.author: jgao ---
-# Configure development environment for deployment scripts in templates
+
+# Configure development environment for deployment scripts in ARM templates
Learn how to create a development environment for developing and testing deployment scripts with a deployment script image. You can either create [Azure container instance](../../container-instances/container-instances-overview.md) or use [Docker](https://docs.docker.com/get-docker/). Both are covered in this article. ## Prerequisites
-If you don't have a deployment script, you can create a **hello.ps1** file with the following content:
+If you don't have a deployment script, you can create a _hello.ps1_ file with the following content:
```powershell param([string] $name)
@@ -34,11 +35,11 @@ To author your scripts on your computer, you need to create a storage account an
### Create an Azure container instance
-The following ARM template creates a container instance and a file share, and then mounts the file share to the container image.
+The following Azure Resource Manager template (ARM template) creates a container instance and a file share, and then mounts the file share to the container image.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "projectName": {
@@ -148,12 +149,13 @@ The following ARM template creates a container instance and a file share, and th
] } ```
-The default value for the mount path is **deploymentScript**. This is the path in the container instance where it is mounted to the file share.
-The default container image specified in the template is **mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3"**. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list).
+The default value for the mount path is `deploymentScript`. This is the path in the container instance at which the file share is mounted.
+
+The default container image specified in the template is `mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3`. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list).
>[!IMPORTANT]
- > Deployment script uses the available CLI images from Microsoft Container Registry(MCR) . It takes about one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli?view=azure-cli-latest&preserve-view=true). If an un-supported version is used, the error message list the supported versions.
+ > Deployment script uses the available CLI images from Microsoft Container Registry (MCR). It takes about one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli?view=azure-cli-latest&preserve-view=true). If an unsupported version is used, the error message lists the supported versions.
The template suspends the container instance for 1800 seconds. You have 30 minutes before the container instance goes into a terminal state and the session ends.
@@ -191,7 +193,7 @@ You can also upload the file by using the Azure portal and Azure CLI.
1. From the Azure portal, open the resource group where you deployed the container instance and the storage account. 1. Open the container group. The default container group name is the project name with **cg** appended. You should see that the container instance is in the **Running** state.
-1. Select **Containers** from the left menu. You shall see a container instance. The container instance name is the project name with **container** appended.
+1. Select **Containers** from the left menu. You should see a container instance. The container instance name is the project name with **container** appended.
![deployment script connect container instance](./media/deployment-script-template-configure-dev/deployment-script-container-instance-connect.png)
@@ -243,7 +245,7 @@ You also need to configure file sharing to mount the directory, which contains t
docker run -v <host drive letter>:/<host directory name>:/data -it mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3 ```
- Replace **&lt;host driver letter>** and **&lt;host directory name>** with an existing folder on the shared drive. It maps the folder to the **/data** folder in the container. For examples, to map D:\docker:
- Replace **&lt;host drive letter>** and **&lt;host directory name>** with an existing folder on the shared drive. It maps the folder to the _/data_ folder in the container. For example, to map _D:\docker_:
```command docker run -v d:/docker:/data -it mcr.microsoft.com/azuredeploymentscripts-powershell:az4.3
@@ -257,7 +259,7 @@ You also need to configure file sharing to mount the directory, which contains t
docker run -v d:/docker:/data -it mcr.microsoft.com/azure-cli:2.0.80 ```
-1. The following screenshot shows how to run a PowerShell script, given that you have a helloworld.ps1 file in the shared drive.
+1. The following screenshot shows how to run a PowerShell script, given that you have a _helloworld.ps1_ file in the shared drive.
![Resource Manager template deployment script docker cmd](./media/deployment-script-template/resource-manager-deployment-script-docker-cmd.png)
@@ -268,4 +270,4 @@ After the script is tested successfully, you can use it as a deployment script i
In this article, you learned how to use deployment scripts. To walk through a deployment script tutorial: > [!div class="nextstepaction"]
-> [Tutorial: Use deployment scripts in Azure Resource Manager templates](./template-tutorial-deployment-script.md)
+> [Tutorial: Use deployment scripts in ARM templates](./template-tutorial-deployment-script.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-outputs.md
@@ -7,7 +7,7 @@ ms.date: 11/24/2020
# Outputs in ARM templates
-This article describes how to define output values in your Azure Resource Manager template (ARM template). You use outputs when you need to return values from the deployed resources.
+This article describes how to define output values in your Azure Resource Manager template (ARM template). You use `outputs` when you need to return values from the deployed resources.
The format of each output value must match one of the [data types](template-syntax.md#data-types).
@@ -26,7 +26,7 @@ The following example shows how to return the resource ID for a public IP addres
## Conditional output
-In the outputs section, you can conditionally return a value. Typically, you use condition in the outputs when you've [conditionally deployed](conditional-resource-deployment.md) a resource. The following example shows how to conditionally return the resource ID for a public IP address based on whether a new one was deployed:
+In the `outputs` section, you can conditionally return a value. Typically, you use `condition` in the `outputs` when you've [conditionally deployed](conditional-resource-deployment.md) a resource. The following example shows how to conditionally return the resource ID for a public IP address based on whether a new one was deployed:
```json "outputs": {
@@ -42,7 +42,7 @@ For a simple example of conditional output, see [conditional output template](ht
## Dynamic number of outputs
-In some scenarios, you don't know the number of instances of a value you need to return when creating the template. You can return a variable number of values by using the **copy** element.
+In some scenarios, you don't know the number of instances of a value you need to return when creating the template. You can return a variable number of values by using the `copy` element.
```json "outputs": {
@@ -56,7 +56,7 @@ In some scenarios, you don't know the number of instances of a value you need to
} ```
-For more information, see [Outputs iteration in Azure Resource Manager templates](copy-outputs.md).
+For more information, see [Output iteration in ARM templates](copy-outputs.md).
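As a hedged sketch, a `copy` element in `outputs` that returns one blob endpoint per deployed storage account might look like this (the `storageCount` parameter and `baseName` variable are illustrative assumptions):

```json
"outputs": {
  "storageEndpoints": {
    "type": "array",
    "copy": {
      "count": "[parameters('storageCount')]",
      "input": "[reference(concat(variables('baseName'), copyIndex())).primaryEndpoints.blob]"
    }
  }
}
```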
## Linked templates
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-parameters.md
@@ -15,7 +15,7 @@ Each parameter must be set to one of the [data types](template-syntax.md#data-ty
## Define parameter
-The following example shows a simple parameter definition. It defines a parameter named **storageSKU**. The parameter is a string value, and only accepts values that are valid for its intended use. The parameter uses a default value when no value is provided during deployment.
+The following example shows a simple parameter definition. It defines a parameter named `storageSKU`. The parameter is a string value, and only accepts values that are valid for its intended use. The parameter uses a default value when no value is provided during deployment.
```json "parameters": {
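A hedged sketch of such a definition (the SKU values shown are illustrative, not an exhaustive list):

```json
"parameters": {
  "storageSKU": {
    "type": "string",
    "defaultValue": "Standard_LRS",
    "allowedValues": [
      "Standard_LRS",
      "Standard_GRS",
      "Standard_ZRS",
      "Premium_LRS"
    ]
  }
}
```

If no value is passed at deployment time, `Standard_LRS` is used; any value outside `allowedValues` is rejected before deployment starts.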
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-syntax.md
@@ -30,7 +30,7 @@ In its simplest structure, a template has the following elements:
| Element name | Required | Description |
|:--- |:--- |:--- |
-| $schema |Yes |Location of the JSON schema file that describes the version of the template language. The version number you use depends on the scope of the deployment and your JSON editor.<br><br>If you're using [VS Code with the Azure Resource Manager tools extension](quickstart-create-templates-use-visual-studio-code.md), use the latest version for resource group deployments:<br>`https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#`<br><br>Other editors (including Visual Studio) may not be able to process this schema. For those editors, use:<br>`https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#`<br><br>For subscription deployments, use:<br>`https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#`<br><br>For management group deployments, use:<br>`https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#`<br><br>For tenant deployments, use:<br>`https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#` |
+| $schema |Yes |Location of the JavaScript Object Notation (JSON) schema file that describes the version of the template language. The version number you use depends on the scope of the deployment and your JSON editor.<br><br>If you're using [Visual Studio Code with the Azure Resource Manager tools extension](quickstart-create-templates-use-visual-studio-code.md), use the latest version for resource group deployments:<br>`https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#`<br><br>Other editors (including Visual Studio) may not be able to process this schema. For those editors, use:<br>`https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#`<br><br>For subscription deployments, use:<br>`https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#`<br><br>For management group deployments, use:<br>`https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#`<br><br>For tenant deployments, use:<br>`https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#` |
| contentVersion |Yes |Version of the template (such as 1.0.0.0). You can provide any value for this element. Use this value to document significant changes in your template. When deploying resources using the template, this value can be used to make sure that the right template is being used. |
| apiProfile |No | An API version that serves as a collection of API versions for resource types. Use this value to avoid having to specify API versions for each resource in the template. When you specify an API profile version and don't specify an API version for the resource type, Resource Manager uses the API version for that resource type that is defined in the profile.<br><br>The API profile property is especially helpful when deploying a template to different environments, such as Azure Stack and global Azure. Use the API profile version to make sure your template automatically uses versions that are supported in both environments. For a list of the current API profile versions and the resources API versions defined in the profile, see [API Profile](https://github.com/Azure/azure-rest-api-specs/tree/master/profile).<br><br>For more information, see [Track versions using API profiles](templates-cloud-consistency.md#track-versions-using-api-profiles). |
| [parameters](#parameters) |No |Values that are provided when deployment is executed to customize resource deployment. |
@@ -93,13 +93,13 @@ Secure string uses the same format as string, and secure object uses the same fo
For integers passed as inline parameters, the range of values may be limited by the SDK or command-line tool you use for deployment. For example, when using PowerShell to deploy a template, integer types can range from -2147483648 to 2147483647. To avoid this limitation, specify large integer values in a [parameter file](parameter-files.md). Resource types apply their own limits for integer properties.
-When specifying boolean and integer values in your template, don't surround the value with quotation marks. Start and end string values with double quotation marks.
+When specifying boolean and integer values in your template, don't surround the value with quotation marks. Start and end string values with double quotation marks (`"string value"`).
-Objects start with a left brace and end with a right brace. Arrays start with a left bracket and end with a right bracket.
+Objects start with a left brace (`{`) and end with a right brace (`}`). Arrays start with a left bracket (`[`) and end with a right bracket (`]`).
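A small sketch showing these conventions side by side (parameter names are illustrative):

```json
"parameters": {
  "exampleString": { "type": "string", "defaultValue": "contoso" },
  "exampleInt": { "type": "int", "defaultValue": 2 },
  "exampleBool": { "type": "bool", "defaultValue": true },
  "exampleObject": {
    "type": "object",
    "defaultValue": { "name": "test", "id": 3 }
  },
  "exampleArray": {
    "type": "array",
    "defaultValue": [ 1, 2, 3 ]
  }
}
```

Note that only the string value is quoted; the boolean and integer are bare, the object uses braces, and the array uses brackets.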
## Parameters
-In the parameters section of the template, you specify which values you can input when deploying the resources. You're limited to 256 parameters in a template. You can reduce the number of parameters by using objects that contain multiple properties.
+In the `parameters` section of the template, you specify which values you can input when deploying the resources. You're limited to 256 parameters in a template. You can reduce the number of parameters by using objects that contain multiple properties.
The available properties for a parameter are:
@@ -136,7 +136,7 @@ For examples of how to use parameters, see [Parameters in ARM templates](templat
## Variables
-In the variables section, you construct values that can be used throughout your template. You don't need to define variables, but they often simplify your template by reducing complex expressions. The format of each variable matches one of the [data types](#data-types).
+In the `variables` section, you construct values that can be used throughout your template. You don't need to define variables, but they often simplify your template by reducing complex expressions. The format of each variable matches one of the [data types](#data-types).
The following example shows the available options for defining a variable:
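As a hedged sketch, typical variable definitions (names and values are illustrative) might look like:

```json
"variables": {
  "stringVar": "example value",
  "concatVar": "[concat('prefix-', parameters('environment'))]",
  "objectVar": {
    "property1": "value1",
    "property2": "value2"
  }
}
```

The second entry shows the common use case: capturing a complex expression once so it can be referenced throughout the template as `variables('concatVar')`.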
@@ -206,7 +206,7 @@ When defining a user function, there are some restrictions:
| Element name | Required | Description |
|:--- |:--- |:--- |
| namespace |Yes |Namespace for the custom functions. Use to avoid naming conflicts with template functions. |
-| function-name |Yes |Name of the custom function. When calling the function, combine the function name with the namespace. For example, to call a function named uniqueName in the namespace contoso, use `"[contoso.uniqueName()]"`. |
+| function-name |Yes |Name of the custom function. When calling the function, combine the function name with the namespace. For example, to call a function named `uniqueName` in the namespace contoso, use `"[contoso.uniqueName()]"`. |
| parameter-name |No |Name of the parameter to be used within the custom function. |
| parameter-value |No |Type of the parameter value. The allowed types and values are **string**, **securestring**, **int**, **bool**, **object**, **secureObject**, and **array**. |
| output-type |Yes |Type of the output value. Output values support the same types as function input parameters. |
@@ -216,7 +216,7 @@ For examples of how to use custom functions, see [User-defined functions in ARM
## Resources
-In the resources section, you define the resources that are deployed or updated.
+In the `resources` section, you define the resources that are deployed or updated.
You define resources with the following structure:
@@ -277,7 +277,7 @@ You define resources with the following structure:
| Element name | Required | Description |
|:--- |:--- |:--- |
| condition | No | Boolean value that indicates whether the resource will be provisioned during this deployment. When `true`, the resource is created during deployment. When `false`, the resource is skipped for this deployment. See [condition](conditional-resource-deployment.md). |
-| type |Yes |Type of the resource. This value is a combination of the namespace of the resource provider and the resource type (such as **Microsoft.Storage/storageAccounts**). To determine available values, see [template reference](/azure/templates/). For a child resource, the format of the type depends on whether it's nested within the parent resource or defined outside of the parent resource. See [Set name and type for child resources](child-resource-name-type.md). |
+| type |Yes |Type of the resource. This value is a combination of the namespace of the resource provider and the resource type (such as `Microsoft.Storage/storageAccounts`). To determine available values, see [template reference](/azure/templates/). For a child resource, the format of the type depends on whether it's nested within the parent resource or defined outside of the parent resource. See [Set name and type for child resources](child-resource-name-type.md). |
| apiVersion |Yes |Version of the REST API to use for creating the resource. When creating a new template, set this value to the latest version of the resource you're deploying. As long as the template works as needed, keep using the same API version. By continuing to use the same API version, you minimize the risk of a new API version changing how your template works. Consider updating the API version only when you want to use a new feature that is introduced in a later version. To determine available values, see [template reference](/azure/templates/). |
| name |Yes |Name of the resource. The name must follow URI component restrictions defined in RFC3986. Azure services that expose the resource name to outside parties validate the name to make sure it isn't an attempt to spoof another identity. For a child resource, the format of the name depends on whether it's nested within the parent resource or defined outside of the parent resource. See [Set name and type for child resources](child-resource-name-type.md). |
| comments |No |Your notes for documenting the resources in your template. For more information, see [Comments in templates](template-syntax.md#comments). |
@@ -293,7 +293,7 @@ You define resources with the following structure:
## Outputs
-In the Outputs section, you specify values that are returned from deployment. Typically, you return values from resources that were deployed.
+In the `outputs` section, you specify values that are returned from deployment. Typically, you return values from resources that were deployed.
The following example shows the structure of an output definition:
@@ -346,7 +346,7 @@ For inline comments, you can use either `//` or `/* ... */` but this syntax does
], ```
-In Visual Studio Code, the [Azure Resource Manager Tools extension](quickstart-create-templates-use-visual-studio-code.md) can automatically detect an ARM template and change the language mode. If you see **Azure Resource Manager Template** at the bottom-right corner of VS Code, you can use the inline comments. The inline comments are no longer marked as invalid.
+In Visual Studio Code, the [Azure Resource Manager Tools extension](quickstart-create-templates-use-visual-studio-code.md) can automatically detect an ARM template and change the language mode. If you see **Azure Resource Manager Template** at the bottom-right corner of Visual Studio Code, you can use the inline comments. The inline comments are no longer marked as invalid.
![Visual Studio Code Azure Resource Manager template mode](./media/template-syntax/resource-manager-template-editor-mode.png)
@@ -364,7 +364,7 @@ You can add a `metadata` object almost anywhere in your template. Resource Manag
}, ```
-For **parameters**, add a `metadata` object with a `description` property.
+For `parameters`, add a `metadata` object with a `description` property.
```json "parameters": {
@@ -380,7 +380,7 @@ When deploying the template through the portal, the text you provide in the desc
![Show parameter tip](./media/template-syntax/show-parameter-tip.png)
-For **resources**, add a `comments` element or a metadata object. The following example shows both a comments element and a metadata object.
+For `resources`, add a `comments` element or a `metadata` object. The following example shows both a `comments` element and a `metadata` object.
```json "resources": [
@@ -406,7 +406,7 @@ For **resources**, add a `comments` element or a metadata object. The following
] ```
-For **outputs**, add a metadata object to the output value.
+For `outputs`, add a `metadata` object to the output value.
```json "outputs": {
@@ -419,11 +419,11 @@ For **outputs**, add a metadata object to the output value.
}, ```
-You can't add a metadata object to user-defined functions.
+You can't add a `metadata` object to user-defined functions.
## Multi-line strings
-You can break a string into multiple lines. For example, see the location property and one of the comments in the following JSON example.
+You can break a string into multiple lines. For example, see the `location` property and one of the comments in the following JSON example.
```json {
@@ -443,7 +443,8 @@ You can break a string into multiple lines. For example, see the location proper
], ```
-To deploy templates with multi-line strings by using Azure CLI with version 2.3.0 or older, you must use the `--handle-extended-json-format` switch.
+> [!NOTE]
+> To deploy templates with multi-line strings by using Azure CLI with version 2.3.0 or older, you must use the `--handle-extended-json-format` switch.
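For illustration, a multi-line string broken across lines might look like the following sketch (resource names are illustrative; this is deliberately not strict JSON, which is why older tooling needs the switch mentioned above):

```json
"resources": [
  {
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2019-06-01",
    "name": "[parameters('storageName')]",
    /* a multi-line
       comment */
    "location": "[
      parameters('location')
      ]",
    "sku": { "name": "Standard_LRS" },
    "kind": "StorageV2"
  }
]
```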
## Next steps
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-deploy-sql-extensions-bacpac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-deploy-sql-extensions-bacpac.md
@@ -50,7 +50,7 @@ The BACPAC file must be stored in an Azure Storage account before it can be impo
* Upload the BACPAC file to the container.
* Display the storage account key and the blob URL.
-1. Select **Try it** to open the shell. Then paste the following PowerShell script into the shell window.
+1. Select **Try it** to open Azure Cloud Shell. Then paste the following PowerShell script into the shell window.
```azurepowershell-interactive $projectName = Read-Host -Prompt "Enter a project name that is used to generate Azure resource names"
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-deployment-script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md
@@ -325,13 +325,13 @@ The deployment script adds a certificate to the key vault. Configure the key vau
![Resource Manager template deployment script resources](./media/template-tutorial-deployment-script/resource-manager-template-deployment-script-resources.png)
- Both files have the **azscripts** suffix. One is a storage account and the other is a container instance.
+ Both resources have the _azscripts_ suffix. One is a storage account and the other is a container instance.
Select **Show hidden types** to list the `deploymentScripts` resource.
-1. Select the storage account with the **azscripts** suffix.
-1. Select the **File shares** tile. You will see an **azscripts** folder. The folder contains the deployment script execution files.
-1. Select **azscripts**. You will see two folders **azscriptinput** and **azscriptoutput**. The input folder contains a system PowerShell script file and the user deployment script files. The output folder contains a _executionresult.json_ and the script output file. You can see the error message in _executionresult.json_. The output file isn't there because the execution failed.
+1. Select the storage account with the _azscripts_ suffix.
+1. Select the **File shares** tile. You will see an _azscripts_ folder that contains the deployment script execution files.
+1. Select _azscripts_. You will see two folders: _azscriptinput_ and _azscriptoutput_. The input folder contains a system PowerShell script file and the user deployment script files. The output folder contains an _executionresult.json_ file and the script output file. You can see the error message in _executionresult.json_. The output file isn't there because the execution failed.
Remove the `Write-Output1` line and redeploy the template.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-user-defined-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-user-defined-functions.md
@@ -38,7 +38,7 @@ Your functions require a namespace value to avoid naming conflicts with template
## Use the function
-The following example shows a template that includes a user-defined function. It uses that function to get a unique name for a storage account. The template has a parameter named **storageNamePrefix** that it passes as a parameter to the function.
+The following example shows a template that includes a user-defined function. It uses that function to get a unique name for a storage account. The template has a parameter named `storageNamePrefix` that it passes as a parameter to the function.
```json {
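A hedged sketch of such a template fragment (the `contoso` namespace and `uniqueName` function follow the naming used earlier in this digest; the exact expression is illustrative):

```json
"functions": [
  {
    "namespace": "contoso",
    "members": {
      "uniqueName": {
        "parameters": [
          {
            "name": "namePrefix",
            "type": "string"
          }
        ],
        "output": {
          "type": "string",
          "value": "[concat(toLower(parameters('namePrefix')), uniqueString(resourceGroup().id))]"
        }
      }
    }
  }
],
"variables": {
  "storageName": "[contoso.uniqueName(parameters('storageNamePrefix'))]"
}
```

The template parameter `storageNamePrefix` is passed into the function as its `namePrefix` parameter, and the result is captured in a variable for reuse.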
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-variables.md
@@ -23,7 +23,7 @@ The following example shows a variable definition. It creates a string value for
}, ```
-You can't use the [reference](template-functions-resource.md#reference) function or any of the [list](template-functions-resource.md#list) functions in the variables section. These functions get the runtime state of a resource, and can't be executed before deployment when variables are resolved.
+You can't use the [reference](template-functions-resource.md#reference) function or any of the [list](template-functions-resource.md#list) functions in the `variables` section. These functions get the runtime state of a resource, and can't be executed before deployment when variables are resolved.
## Use variable
@@ -58,7 +58,7 @@ You can define variables that hold related values for configuring an environment
}, ```
-In parameters, you create a value that indicates which configuration values to use.
+In `parameters`, you create a value that indicates which configuration values to use.
```json "parameters": {
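A hedged sketch of this configuration-map pattern (environment names and settings are illustrative):

```json
"parameters": {
  "environmentName": {
    "type": "string",
    "allowedValues": [
      "test",
      "prod"
    ]
  }
},
"variables": {
  "environmentSettings": {
    "test": { "instanceSize": "Small", "instanceCount": 1 },
    "prod": { "instanceSize": "Large", "instanceCount": 4 }
  },
  "currentSettings": "[variables('environmentSettings')[parameters('environmentName')]]"
}
```

The parameter selects which nested object is used, so resources can reference `variables('currentSettings').instanceSize` without per-environment conditionals.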
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-configure.md
@@ -391,7 +391,7 @@ CREATE USER [appName] FROM EXTERNAL PROVIDER;
```

> [!NOTE]
-> This command requires that SQL access Azure AD (the "external provider") on behalf of the logged-in user. Sometimes, circumstances will arise that cause Azure AD to return an exception back to SQL. In these cases, the user will see SQL error 33134, which should contain the Azure AD-specific error message. Most of the time, the error will say that access is denied, or that the user must enroll in MFA to access the resource, or that access between first-party applications must be handled via preauthorization. In the first two cases, the issue is usually caused by Conditional Access policies that are set in the user's Azure AD tenant: they prevent the user from accessing the external provider. Updating the CA policies to allow access to the application '00000002-0000-0000-c000-000000000000' (the application ID of the Azure AD Graph API) should resolve the issue. In the case that the error says access between first-party applications must be handled via preauthorization, the issue is because the user is signed in as a service principal. The command should succeed if it is executed by a user instead.
+> This command requires that SQL access Azure AD (the "external provider") on behalf of the logged-in user. Sometimes, circumstances will arise that cause Azure AD to return an exception back to SQL. In these cases, the user will see SQL error 33134, which should contain the Azure AD-specific error message. Most of the time, the error will say that access is denied, or that the user must enroll in MFA to access the resource, or that access between first-party applications must be handled via preauthorization. In the first two cases, the issue is usually caused by Conditional Access policies that are set in the user's Azure AD tenant: they prevent the user from accessing the external provider. Updating the Conditional Access policies to allow access to the application '00000002-0000-0000-c000-000000000000' (the application ID of the Azure AD Graph API) should resolve the issue. In the case that the error says access between first-party applications must be handled via preauthorization, the issue is because the user is signed in as a service principal. The command should succeed if it is executed by a user instead.
> [!TIP]
> You cannot directly create a user from an Azure Active Directory other than the Azure Active Directory that is associated with your Azure subscription. However, members of other Active Directories that are imported users in the associated Active Directory (known as external users) can be added to an Active Directory group in the tenant Active Directory. By creating a contained database user for that AD group, the users from the external Active Directory can gain access to SQL Database.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/conditional-access-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/conditional-access-configure.md
@@ -20,7 +20,7 @@ tag: azure-synpase
[Azure SQL Database](sql-database-paas-overview.md), [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md), and [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) support Microsoft Conditional Access.
-The following steps show how to configure Azure SQL Database, SQL Managed Instance, or Azure Synapse to enforce a Conditional Access (CA) policy.
+The following steps show how to configure Azure SQL Database, SQL Managed Instance, or Azure Synapse to enforce a Conditional Access policy.
## Prerequisites
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
@@ -185,6 +185,9 @@ $sqlvm.SqlManagementType
SQL Server VMs that have registered the extension in *lightweight* mode can upgrade to _full_ using the Azure portal, the Azure CLI, or Azure PowerShell. SQL Server VMs in _NoAgent_ mode can upgrade to _full_ after the OS is upgraded to Windows 2008 R2 and above. It is not possible to downgrade; to revert, you will need to [unregister](#unregister-from-extension) the SQL Server VM from the SQL IaaS Agent extension. Doing so removes the **SQL virtual machine** _resource_, but does not delete the actual virtual machine.
+> [!NOTE]
+> When you upgrade the management mode for the SQL IaaS extension to full, it will restart the SQL Server service. In some cases, the restart may cause the service principal names (SPNs) associated with the SQL Server service to change to the wrong user account. If you have connectivity issues after upgrading the management mode to full, [unregister and reregister your SPNs](/sql/database-engine/configure-windows/register-a-service-principal-name-for-kerberos-connections).
+ ### Azure portal
backup https://docs.microsoft.com/en-us/azure/backup/azure-backup-glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/azure-backup-glossary.md
@@ -167,7 +167,7 @@ Incremental backups store only the blocks that have changed since the previous b
## Instant restore
-Instant restore involves restoring a machine directly from its backup snapshot rather than from the copy of the snapshot in the vault. Instant restores are faster than restores from a vault. The number of instant restore points available depends on the retention duration configured for snapshots.
+(Workload-specific term) Instant restore involves restoring a machine directly from its backup snapshot rather than from the copy of the snapshot in the vault. Instant restores are faster than restores from a vault. The number of instant restore points available depends on the retention duration configured for snapshots. Currently applicable to Azure VM backup only.
## IOPS
@@ -221,23 +221,19 @@ A recovery done from the restore point to the source location from where the bac
A passphrase is used to encrypt and decrypt data while backing up or restoring your on-premises or local machine using the MARS agent to or from Azure.
-## Point in time restore
-
-Restoring an item to its state at a particular point in time (PIT).
-
## Private endpoint

Refer to the [Private Endpoint documentation](https://docs.microsoft.com/azure/private-link/private-endpoint-overview).

## Protected instance
-A protected instance refers to the computer, physical or virtual server you use to configure the backup to Azure. From a **billing standpoint**, Protected Instance Count for a machine is a function of its frontend size. [Learn more](https://azure.microsoft.com/pricing/details/backup/).
+A protected instance refers to the computer, physical or virtual server you use to configure the backup to Azure. From a **billing standpoint**, Protected Instance Count for a machine is a function of its frontend size. Thus, a single backup instance (such as a VM backed up to Azure) can correspond to multiple protected instances, depending on its frontend size. [Learn more](https://azure.microsoft.com/pricing/details/backup/).
## RBAC (Role-based access control)

Refer to the [RBAC documentation](https://docs.microsoft.com/azure/role-based-access-control/overview).
-## Recovery point/ Restore point/ Retention point
+## Recovery point / Restore point / Retention point / Point-in-time (PIT)
A copy of the original data that is being backed up. A retention point is associated with a timestamp so you can use this to restore an item to a particular point in time.
@@ -259,11 +255,11 @@ A user-defined rule that specifies how long backups should be retained.
## RPO (Recovery Point Objective)
-RPO indicates the maximum data loss that is acceptable in a data-loss scenario. This is determined by backup frequency.
+RPO indicates the maximum data loss that is possible in a data-loss scenario. This is determined by backup frequency.
## RTO (Recovery Time Objective)
-RTO indicates the maximum acceptable time in which data can be restored to the last available point-in-time after a data loss scenario.
+RTO indicates the maximum possible time in which data can be restored to the last available point-in-time after a data loss scenario.
## Scheduled backup
@@ -279,7 +275,7 @@ Soft delete is a feature that helps guard against accidental deletion of backup
## Snapshot
-A snapshot is a full, read-only copy of a virtual hard drive (VHD). [Learn more](https://docs.microsoft.com/azure/virtual-machines/windows/snapshot-copy-managed-disk).
+A snapshot is a full, read-only copy of a virtual hard drive (VHD) or an Azure File share. Learn more about [disk snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/snapshot-copy-managed-disk) and [file snapshots](https://docs.microsoft.com/azure/storage/files/storage-snapshots-files).
## Storage account
@@ -309,7 +305,7 @@ A storage entity in Azure that houses backup data. It's also a unit of RBAC and
## Vault credentials
-The vault credentials file is a certificate generated by the portal for each vault. This is used while registering a server to the vault. [Learn more](backup-azure-dpm-introduction.md).
+The vault credentials file is a certificate generated by the portal for each vault. This is used while registering an on-premises server to the vault. [Learn more](backup-azure-dpm-introduction.md).
## VNET (Virtual Network)
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-dpm-azure-server-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-dpm-azure-server-faq.md
@@ -52,6 +52,10 @@ You don't need separate licensing for VMware/Hyper-V protection.
- If you're a System Center customer, use System Center Data Protection Manager (DPM) to protect VMware VMs. - If you aren't a System Center customer, you can use Azure Backup Server (pay-as-you-go) to protect VMware VMs.
+### Can I restore a backup of a Hyper-V or VMware VM, stored in Azure, to Azure as an Azure VM?
+
+No, this is not currently possible. You can only restore to an on-premises host.
+ ## SharePoint ### Can I recover a SharePoint item to the original location if SharePoint is configured by using SQL AlwaysOn (with protection on disk)?
@@ -67,4 +71,4 @@ Because SharePoint databases are configured in SQL AlwaysOn, they can't be modif
Read the other FAQs:
- [Learn more](backup-support-matrix-mabs-dpm.md) about Azure Backup Server and DPM support matrix.
-- [Learn more](backup-azure-mabs-troubleshoot.md) about the Azure Backup Server and DPM troubleshooting guidelines.
\ No newline at end of file
+- [Learn more](backup-azure-mabs-troubleshoot.md) about the Azure Backup Server and DPM troubleshooting guidelines.
batch https://docs.microsoft.com/en-us/azure/batch/batch-pool-cloud-service-to-virtual-machine-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-cloud-service-to-virtual-machine-configuration.md new file mode 100644
@@ -0,0 +1,37 @@
+---
+title: Migrate Batch pool configuration from Cloud Services to Virtual Machines
+description: Learn how to update your pool configuration to the latest and recommended configuration
+ms.topic: how-to
+ms.date: 1/4/2021
+---
+
+# Migrate Batch pool configuration from Cloud Services to Virtual Machines
+
+Batch pools can be created using either [cloudServiceConfiguration](https://docs.microsoft.com/rest/api/batchservice/pool/add#cloudserviceconfiguration) or [virtualMachineConfiguration](https://docs.microsoft.com/rest/api/batchservice/pool/add#virtualmachineconfiguration). 'virtualMachineConfiguration' is the recommended configuration as it supports all Batch capabilities. 'cloudServiceConfiguration' pools do not support all features and no new features are planned.
+
+If you use 'cloudServiceConfiguration' pools, it is highly recommended that you move to use 'virtualMachineConfiguration' pools. This article describes how to migrate to the recommended 'virtualMachineConfiguration' configuration.
+
+## New pools are required
+
+Existing active pools cannot be updated from 'cloudServiceConfiguration' to 'virtualMachineConfiguration'; new pools must be created. Creating pools using 'virtualMachineConfiguration' is supported by all Batch APIs, command-line tools, the Azure portal, and the Batch Explorer UI.
+
+The [.NET](tutorial-parallel-dotnet.md) and [Python](tutorial-parallel-python.md) tutorials provide examples of pool creation using 'virtualMachineConfiguration'.
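+As a rough illustration (separate from the tutorials above), the shape of a 'virtualMachineConfiguration' pool in a Batch REST pool-add request body can be sketched as follows; the pool ID and image reference values are placeholder assumptions, not values from this article:
+
+```python
+import json
+
+# Sketch of a Batch "pool add" REST request body using virtualMachineConfiguration.
+# The pool ID and imageReference values are illustrative placeholders; choose an
+# image and node agent SKU that Batch actually supports in your region.
+pool_body = {
+    "id": "my-vm-pool",
+    "vmSize": "Standard_D2s_v3",
+    "virtualMachineConfiguration": {
+        "imageReference": {
+            "publisher": "canonical",
+            "offer": "0001-com-ubuntu-server-focal",
+            "sku": "20_04-lts",
+            "version": "latest",
+        },
+        "nodeAgentSKUId": "batch.node.ubuntu 20.04",
+    },
+    "targetDedicatedNodes": 2,
+}
+
+# The two configurations are mutually exclusive: a pool specifies either
+# virtualMachineConfiguration or cloudServiceConfiguration, never both.
+body_json = json.dumps(pool_body)
+```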
+
+## Pool configuration differences
+
+The following should be considered when updating pool configuration:
+
+- 'cloudServiceConfiguration' pool nodes always run the Windows OS; 'virtualMachineConfiguration' pool nodes can run either Linux or Windows.
+- Compared to 'cloudServiceConfiguration' pools, 'virtualMachineConfiguration' pools have a richer set of capabilities, such as container support, data disks, and disk encryption.
+- 'virtualMachineConfiguration' pool nodes utilize managed OS disks. The [managed disk type](../virtual-machines/disks-types.md) that is used for each node depends on the VM size chosen for the pool. If an 's' VM size is specified for the pool, for example 'Standard_D2s_v3', then a premium SSD is used. If a 'non-s' VM size is specified, for example 'Standard_D2_v3', then a standard HDD is used.
+
+ > [!IMPORTANT]
 > As with Virtual Machines and Virtual Machine Scale Sets, the OS managed disk used for each node incurs a cost, in addition to the cost of the VMs. There is no OS disk cost for 'cloudServiceConfiguration' nodes, because the OS disk is created on the node's local SSD.
+
+- Pool and node startup and delete times may differ slightly between 'cloudServiceConfiguration' pools and 'virtualMachineConfiguration' pools.
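+The OS disk rule described above can be sketched as a small heuristic. This is only an illustration of the documented 's'-suffix naming convention, not an official Azure API, and real VM size names have more variations than this handles:
+
+```python
+# Heuristic sketch: an "s" suffix on the size family (e.g. "D2s" in
+# "Standard_D2s_v3") indicates a premium-storage-capable size, so the node
+# gets a premium SSD OS disk; otherwise it gets a standard HDD.
+def os_disk_type(vm_size: str) -> str:
+    family = vm_size.split("_")[1]  # e.g. "D2s" from "Standard_D2s_v3"
+    return "Premium SSD" if family.lower().endswith("s") else "Standard HDD"
+
+print(os_disk_type("Standard_D2s_v3"))  # Premium SSD
+print(os_disk_type("Standard_D2_v3"))   # Standard HDD
+```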
+
+## Next steps
+
+- Learn more about [pool configurations](nodes-and-pools.md#configurations).
+- Learn more about [pool best practices](best-practices.md#pools).
+- REST API reference for [pool addition](https://docs.microsoft.com/rest/api/batchservice/pool/add) and [virtualMachineConfiguration](https://docs.microsoft.com/rest/api/batchservice/pool/add#virtualmachineconfiguration).
batch https://docs.microsoft.com/en-us/azure/batch/best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/best-practices.md
@@ -21,6 +21,9 @@ This article discusses a collection of best practices and useful tips for using
- **Pool allocation mode** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable an important, but small subset of scenarios. You can read more about user subscription mode at [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).
+- **'cloudServiceConfiguration' or 'virtualMachineConfiguration'.**
+ 'virtualMachineConfiguration' should be used. All Batch features are supported by 'virtualMachineConfiguration' pools. Not all features are supported for 'cloudServiceConfiguration' pools and no new capabilities are being planned.
+ - **Consider job and task run time when determining job-to-pool mapping.** If you have jobs composed primarily of short-running tasks, and the expected total task count is small, so that the overall expected run time of the job is not long, don't allocate a new pool for each job. The node allocation time would outweigh the run time of the job.
batch https://docs.microsoft.com/en-us/azure/batch/nodes-and-pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/nodes-and-pools.md
@@ -59,6 +59,9 @@ When you create a Batch pool, you specify the Azure virtual machine configuratio
There are two types of pool configurations available in Batch.
+> [!IMPORTANT]
+> Pools should be configured using 'Virtual Machine Configuration' and not 'Cloud Services Configuration'. All Batch features are supported by 'Virtual Machine Configuration' pools and new features are being added. 'Cloud Services Configuration' pools do not support all features and no new capabilities are planned.
+ ### Virtual Machine Configuration The **Virtual Machine Configuration** specifies that the pool is composed of Azure virtual machines. These VMs may be created from either Linux or Windows images.
@@ -96,7 +99,7 @@ When you create a pool, you can specify which types of nodes you want and the ta
- **Dedicated nodes.** Dedicated compute nodes are reserved for your workloads. They are more expensive than low-priority nodes, but they are guaranteed to never be preempted. - **Low-priority nodes.** Low-priority nodes take advantage of surplus capacity in Azure to run your Batch workloads. Low-priority nodes are less expensive per hour than dedicated nodes, and enable workloads requiring significant compute power. For more information, see [Use low-priority VMs with Batch](batch-low-pri-vms.md).
-Low-priority nodes may be preempted when Azure has insufficient surplus capacity. If a node is preempted while running tasks, the tasks are requeued and run again once a compute node becomes available again. Low-priority nodes are a good option for workloads where the job completion time is flexible and the work is distributed across many nodes. Before you decide to use low-priority nodes for your scenario, make sure that any work lost due to pre-emption will be minimal and easy to recreate.
+Low-priority nodes may be preempted when Azure has insufficient surplus capacity. If a node is preempted while running tasks, the tasks are requeued and run again once a compute node becomes available again. Low-priority nodes are a good option for workloads where the job completion time is flexible and the work is distributed across many nodes. Before you decide to use low-priority nodes for your scenario, make sure that any work lost due to preemption will be minimal and easy to recreate.
You can have both low-priority and dedicated compute nodes in the same pool. Each type of node has its own target setting, for which you can specify the desired number of nodes.
blockchain https://docs.microsoft.com/en-us/azure/blockchain/service/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/overview.md
@@ -1,7 +1,7 @@
--- title: Azure Blockchain Service overview description: Overview of Azure Blockchain Service
-ms.date: 05/22/2020
+ms.date: 01/04/2021
ms.topic: overview ms.reviewer: ravastra #Customer intent: As a network operator or developer, I want to understand how I can use Azure Blockchain Service to build and manage consortium blockchain networks on Azure
@@ -79,6 +79,8 @@ Engage with Microsoft engineers and Azure Blockchain community experts.
To get started, try a quickstart or find out more details from these resources. * [Create a blockchain member using the Azure portal](create-member.md) or [create a blockchain member using Azure CLI](create-member-cli.md)
-* For cost comparisons and calculators, see the [pricing page](https://azure.microsoft.com/pricing/details/blockchain-service).
+* Follow the Microsoft Learn path [Get started with blockchain development](/learn/paths/ethereum-blockchain-development)
+* Watch the [Beginner's series to blockchain](https://channel9.msdn.com/Series/Beginners-Series-to-Blockchain)
+* For cost comparisons and calculators, see the [pricing page](https://azure.microsoft.com/pricing/details/blockchain-service)
* Build your first app using the [Azure Blockchain Development Kit](https://github.com/Azure-Samples/blockchain-devkit) * Azure Blockchain VSCode Extension [user guide](https://github.com/Microsoft/vscode-azure-blockchain-ethereum/wiki)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
@@ -14,20 +14,21 @@ ms.date: 12/14/2020
# Introduction to Computer Vision spatial analysis
-Computer Vision spatial analysis is a new feature of Azure Cognitive Services Computer Vision that helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, run AI skills to extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, an AI skill can do things like count the number of people entering a space or measure compliance with social distancing guidelines.
+Computer Vision spatial analysis is a new feature of Azure Cognitive Services Computer Vision that helps organizations maximize the value of their physical spaces by understanding people's movements and presence within a given area. It allows you to ingest video from CCTV or surveillance cameras, run AI operations to extract insights from the video streams, and generate events to be used by other systems. With input from a camera stream, an AI operation can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines.
## The basics of spatial analysis
-Today the core skills of spatial analysis are all built on a pipeline that ingests video, detects people in the video, tracks the people as they move around over time, and generates events as people interact with regions of interest.
+Today the core operations of spatial analysis are all built on a pipeline that ingests video, detects people in the video, tracks the people as they move around over time, and generates events as people interact with regions of interest.
## Spatial analysis terms | Term | Definition | |------|------------| | People Detection | This component answers the question "where are the people in this image"? It finds humans in an image and passes a bounding box indicating the location of each person to the people tracking component. |
-| People Tracking | This component connects the people detections over time as the people move around in front of a camera. It uses temporal logic about how people typically move and basic information about the overall appearance of the people to do this. It cannot track people across multiple cameras or reidentify someone who has disappeared for more than approximately one minute. People Tracking does not use any biometric markers like face recognition or gait tracking. |
-| Region of Interest | This is a zone or line defined in the input video as part of configuration. When a person interacts with the region of the video the system generates an event. For example, for the PersonCrossingLine skill, a line is defined in the video. When a person crosses that line an event is generated. |
-| Event | An event is the primary output of spatial analysis. Each skill emits a specific event either periodically (ex. once per minute) or when a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the PeopleCount skill can emit an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
+| People Tracking | This component connects the people detections over time as the people move around in front of a camera. It uses temporal logic about how people typically move, and basic information about the overall appearance of the people, to do this. It does not track people across multiple cameras. If a person exits a camera's field of view for longer than approximately a minute and then re-enters, the system perceives them as a new person. People Tracking does not uniquely identify individuals across cameras. It does not use facial recognition or gait tracking. |
+| Face Mask Detection | This component detects the location of a person's face in the camera's field of view and identifies the presence of a face mask. To do so, the AI operation scans images from video; where a face is detected, the service provides a bounding box around the face. Using object detection capabilities, it identifies the presence of face masks within the bounding box. Face Mask Detection does not distinguish one face from another, predict or classify facial attributes, or perform facial recognition. |
+| Region of Interest | This is a zone or line defined in the input video as part of configuration. When a person interacts with the region of the video the system generates an event. For example, for the PersonCrossingLine operation, a line is defined in the video. When a person crosses that line an event is generated. |
+| Event | An event is the primary output of spatial analysis. Each operation emits a specific event either periodically (ex. once per minute) or when a specific trigger occurs. The event includes information about what occurred in the input video but does not include any images or video. For example, the PeopleCount operation can emit an event containing the updated count every time the count of people changes (trigger) or once every minute (periodically). |
## Example use cases for spatial analysis
@@ -39,6 +40,8 @@ The following are example use cases that we had in mind as we designed and teste
**Queue Management** - Cameras pointed at checkout queues provide alerts to managers when wait time gets too long, allowing them to open more lines. Historical data on queue abandonment gives insights into consumer behavior.
+**Face Mask Compliance** - Retail stores can use cameras pointing at the store fronts to check if customers walking into the store are wearing face masks to maintain safety compliance and analyze aggregate statistics to gain insights on mask usage trends.
+ **Building Occupancy & Analysis** - An office building uses cameras focused on entrances to key spaces to measure footfall and how people use the workplace. Insights allow the building manager to adjust service and layout to better serve occupants. **Minimum Staff Detection** - In a data center, cameras monitor activity around servers. When employees are physically fixing sensitive equipment two people are always required to be present during the repair for security reasons. Cameras are used to verify that this guideline is followed.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-how-to-batch-test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-batch-test.md
@@ -165,7 +165,7 @@ The two sections of the chart in green did match the expected prediction.
LUIS lets you batch test using the LUIS portal and REST API. The endpoints for the REST API are listed below. For information on batch testing using the LUIS portal, see [Tutorial: batch test data sets](luis-tutorial-batch-testing.md). Use the complete URLs below, replacing the placeholder values with your own LUIS Prediction key and endpoint.
-Remember to add your LUIS key to `Apim-Subscription-Id` in the header, and set `Content-Type` to `application/json`.
+Remember to add your LUIS key to `Ocp-Apim-Subscription-Key` in the header, and set `Content-Type` to `application/json`.
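+For example, the header set might look like this minimal sketch (the key value is a placeholder):
+
+```python
+# Headers for the LUIS batch testing REST endpoints, per the note above.
+# The key value is a placeholder; substitute your own LUIS prediction key.
+headers = {
+    "Ocp-Apim-Subscription-Key": "<YOUR-LUIS-PREDICTION-KEY>",
+    "Content-Type": "application/json",
+}
+```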
### Start a batch test
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-train-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
@@ -43,6 +43,11 @@ The **Training** table displays a new entry that corresponds to the new model. T
See the [how-to](how-to-custom-speech-evaluate-data.md) on evaluating and improving Custom Speech model accuracy. If you choose to test accuracy, it's important to select an acoustic dataset that's different from the one you used with your model to get a realistic sense of the model's performance.
+> [!NOTE]
+> Both base models and custom models can be used only up to a certain date (see [Model lifecycle](custom-speech-overview.md#model-lifecycle)). Speech Studio shows this date in the **Expiration** column for each model and endpoint. After that date, requests to an endpoint or to batch transcription might fail or fall back to the base model.
+>
+> Retrain your model using the most recent base model to benefit from accuracy improvements and to keep your model from expiring.
+ ## Deploy a custom model After you upload and inspect data, evaluate accuracy, and train a custom model, you can deploy a custom endpoint to use with your apps, tools, and products.
@@ -58,7 +63,7 @@ Next, select **Add endpoint** and enter a **Name** and **Description** for your
Next, select **Create**. This action returns you to the **Deployment** page. The table now includes an entry that corresponds to your custom endpoint. The endpoint's status shows its current state. It can take up to 30 minutes to instantiate a new endpoint using your custom models. When the status of the deployment changes to **Complete**, the endpoint is ready to use.
-After your endpoint is deployed, the endpoint name appears as a link. Select the link to see information specific to your endpoint, like the endpoint key, endpoint URL, and sample code.
+After your endpoint is deployed, the endpoint name appears as a link. Select the link to see information specific to your endpoint, like the endpoint key, endpoint URL, and sample code. Take a note of the expiration date and update the endpoint's model before that date to ensure uninterrupted service.
## View logging data
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-develop-custom-commands-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
@@ -174,7 +174,7 @@ Start by editing the existing `TurnOn` command to turn on and turn off multiple
1. Select **Update**. > [!div class="mx-imgBorder"]
- > ![Screenshot showing where to create a required parameter response.](media/custom-commands/add-required-on-off-parameter-response.png)
+ > ![Screenshot that shows the 'Add response for a required parameter' section with the 'Simple editor' tab selected.](media/custom-commands/add-required-on-off-parameter-response.png)
1. Configure the parameter's properties by using the following table. For information about all of the configuration properties of a command, see [Custom Commands concepts and definitions](./custom-commands-references.md).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet.md
@@ -13,7 +13,7 @@ zone_pivot_groups: programming-languages-set-two
Before you get started:
-* <a href="~/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md?tabs=dotnet&pivots=programming-language-csharp" target="_blank">Install the Speech SDK for your development environment an create and empty sample project<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+* <a href="~/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md?tabs=dotnet&pivots=programming-language-csharp" target="_blank">Install the Speech SDK for your development environment and create an empty sample project<span class="docon docon-navigate-external x-hidden-focus"></span></a>.
## Create a LUIS app for intent recognition
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/header https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/header.md
@@ -17,3 +17,10 @@ After satisfying a few prerequisites, recognizing speech and identifying intents
> * Create an `IntentRecognizer` object using the `SpeechConfig` object from above. > * Using the `IntentRecognizer` object, start the recognition process for a single utterance. > * Inspect the `IntentRecognitionResult` returned.+
+> [!NOTE]
+> You can create a `LanguageUnderstandingModel` by passing an endpoint URL to the `FromEndpoint` method.
+> Speech SDK only supports LUIS v2.0 endpoints, and
+> LUIS v2.0 endpoints always follow one of these two patterns:
+> * `https://{AzureResourceName}.cognitiveservices.azure.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
+> * `https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app-id}?subscription-key={subkey}&verbose=true&q=`
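As a sketch, a helper that assembles the second (region-based) pattern above might look like the following; the region, app ID, and key values are placeholders, not real identifiers:

```python
# Build a LUIS v2.0 prediction endpoint URL of the region-based form shown above.
def luis_v2_endpoint(region: str, app_id: str, subscription_key: str) -> str:
    return (
        f"https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/"
        f"{app_id}?subscription-key={subscription_key}&verbose=true&q="
    )

# Placeholder values for illustration only.
url = luis_v2_endpoint("westus", "<app-id>", "<subkey>")
```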
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/sovereign-clouds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/sovereign-clouds.md
@@ -3,72 +3,163 @@ title: Sovereign Clouds - Speech service
titleSuffix: Azure Cognitive Services description: Learn how to use Sovereign Clouds services: cognitive-services
-author: cbasoglu
-manager: xdh
+author: alexeyo26
+manager: nitinme
ms.service: cognitive-services ms.subservice: speech-service ms.topic: conceptual
-ms.date: 1/14/2020
-ms.author: cbasoglu
+ms.custom: references_regions
+ms.date: 12/26/2020
+ms.author: alexeyo
---
-# Speech services with sovereign clouds
+# Speech Services in sovereign clouds
## Azure Government (United States)
-Only US federal, state, local, and tribal governments and their partners have access to this dedicated instance with operations controlled by screened US citizens.
-- Regions: US Gov Virginia-- SR in SpeechSDK:*config.FromHost("wss://virginia.stt.speech.azure.us", "\<your-key\>");*-- TTS in SpeechSDK: *config.FromHost("https[]()://virginia.tts.speech.azure.us", "\<your-key\>");*-- Authentication Tokens: https[]()://virginia.api.cognitive.microsoft.us/sts/v1.0/issueToken-- Azure Portal: https://portal.azure.us -- Custom Speech Portal: https://virginia.cris.azure.us/Home/CustomSpeech-- Available SKUs: S0-- Supported features:
- - Speech-to-Text
- - Custom Speech (Acoustic/language adaptation)
- - Text-to-Speech
- - Speech Translator
-- Unsupported features
+Available to US government entities and their partners only. See more information about Azure Government [here](../../azure-government/documentation-government-welcome.md) and [here](../../azure-government/compare-azure-government-global-azure.md).
+
+- **Azure portal:**
+ - [https://portal.azure.us/](https://portal.azure.us/)
+- **Regions:**
+ - US Gov Arizona
+ - US Gov Virginia
+- **Available pricing tiers:**
+ - Free (F0) and Standard (S0). See more details [here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)
+- **Supported features:**
+ - Speech-to-text
+ - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation)
+ - [Speech Studio](https://speech.azure.us/)
+ - Text-to-speech
+ - Speech translator
+- **Unsupported features:**
+ - Neural voice
- Custom Voice
- - Neural voices for Text-to-speech
-- Supported locales: Locales for the following languages are supported.
+- **Supported languages:**
- Arabic (ar-*) - Chinese (zh-*) - English (en-*) - French (fr-*) - German (de-*)
- - Hindi
- - Korean
- - Russian
+ - Hindi (hi-IN)
+ - Korean (ko-KR)
+ - Russian (ru-RU)
- Spanish (es-*)
-## Microsoft Azure China
-
-Located in China, an Azure data center with direct access to China Mobile, China Telecom, China Unicom and other major carrier backbone network, for Chinese users to provide high-speed and stable local network access experience.
-- Regions: China East 2 (Shanghai)-- SR in SpeechSDK: *config.FromHost("wss://chinaeast2.stt.speech.azure.cn", "\<your-key\>");*-- TTS in SpeechSDK: *config.FromHost("https[]()://chinaeast2.tts.speech.azure.cn", "\<your-key\>");*-- Authentication Tokens: https[]()://chinaeast2.api.cognitive.azure.cn/sts/v1.0/issueToken-- Azure Portal: https://portal.azure.cn-- Custom Speech Portal: https://speech.azure.cn/CustomSpeech-- Available SKUs: S0-- Supported features:
- - Speech-to-Text
- - Custom Speech (Acoustic/language adaptation)
- - Text-to-Speech
- - Speech Translator
-- Unsupported features
+### Endpoint information
+
+This section contains Speech Services endpoint information for use with the [Speech SDK](speech-sdk.md), the [Speech-to-text REST API](rest-speech-to-text.md), and the [Text-to-speech REST API](rest-text-to-speech.md).
+
+#### Speech Services REST API
+
+Speech Services REST API endpoints in Azure Government have the following format:
+
+| REST API type / operation | Endpoint format |
+|--|--|
+| Access token | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/sts/v1.0/issueToken`
+| [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` |
+| [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` |
+| [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.us/<URL_PATH>` |
+
+Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
+
+| | Region identifier |
+|--|--|
+| **US Gov Arizona** | `usgovarizona` |
+| **US Gov Virginia** | `usgovvirginia` |
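+For example, composing the access-token endpoint from a region identifier in the table above can be sketched as:
+
+```python
+# Compose the Azure Government Speech access-token endpoint from a region
+# identifier, per the "Access token" row of the endpoint table above.
+def us_gov_token_endpoint(region_identifier: str) -> str:
+    return f"https://{region_identifier}.api.cognitive.microsoft.us/sts/v1.0/issueToken"
+
+endpoint = us_gov_token_endpoint("usgovvirginia")
+```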
+
+#### Speech SDK
+
+For the Speech SDK in sovereign clouds, use the "from host" instantiation of the `SpeechConfig` class, or the `--host` option of the [Speech CLI](spx-overview.md). (You may also use the "from endpoint" instantiation and the `--endpoint` Speech CLI option.)
+
+Instantiate the `SpeechConfig` class like this:
+```csharp
+var config = SpeechConfig.FromHost(usGovHost, subscriptionKey);
+```
+Use the Speech CLI like this (note the `--host` option):
+```dos
+spx recognize --host "usGovHost" --file myaudio.wav
+```
+Replace `subscriptionKey` with your Speech resource key. Replace `usGovHost` with the expression matching the required service offering and the region of your subscription from this table:
+
+| Region / Service offering | Host expression |
+|--|--|
+| **US Gov Arizona** | |
+| Speech-to-text | `wss://usgovarizona.stt.speech.azure.us` |
+| Text-to-Speech | `https://usgovarizona.tts.speech.azure.us` |
+| **US Gov Virginia** | |
+| Speech-to-text | `wss://usgovvirginia.stt.speech.azure.us` |
+| Text-to-Speech | `https://usgovvirginia.tts.speech.azure.us` |
++
+## Azure China
+
+Available to organizations with a business presence in China. See more information about Azure China [here](/azure/china/overview-operations).
++
+- **Azure portal:**
+ - [https://portal.azure.cn/](https://portal.azure.cn/)
+- **Regions:**
+ - China East 2
+- **Available pricing tiers:**
+ - Free (F0) and Standard (S0). See more details [here](https://www.azure.cn/pricing/details/cognitive-services/https://docsupdatetracker.net/index.html)
+- **Supported features:**
+ - Speech-to-text
+ - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation)
+ - [Speech Studio](https://speech.azure.cn/)
+ - Text-to-speech
+ - Speech translator
+- **Unsupported features:**
+ - Neural voice
- Custom Voice
- - Neural voices for Text-to-speech
-- Supported locales: Locales for the following languages are supported.
+- **Supported languages:**
- Arabic (ar-*) - Chinese (zh-*) - English (en-*) - French (fr-*) - German (de-*)
- - Hindi
- - Korean
- - Russian
+ - Hindi (hi-IN)
+ - Korean (ko-KR)
+ - Russian (ru-RU)
- Spanish (es-*)
+### Endpoint information
+
+This section contains Speech Services endpoint information for use with the [Speech SDK](speech-sdk.md), the [Speech-to-text REST API](rest-speech-to-text.md), and the [Text-to-speech REST API](rest-text-to-speech.md).
+
+#### Speech Services REST API
+
+Speech Services REST API endpoints in Azure China have the following format:
+
+| REST API type / operation | Endpoint format |
+|--|--|
+| Access token | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/sts/v1.0/issueToken`
+| [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
+| [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` |
+| [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.cn/<URL_PATH>` |
+
+Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
+
+| | Region identifier |
+|--|--|
+| **China East 2** | `chinaeast2` |
+
+#### Speech SDK
+
+For the Speech SDK in sovereign clouds, use the "from host" instantiation of the `SpeechConfig` class, or the `--host` option of the [Speech CLI](spx-overview.md). (You may also use the "from endpoint" instantiation and the `--endpoint` Speech CLI option.)
+
+Instantiate the `SpeechConfig` class like this:
+```csharp
+var config = SpeechConfig.FromHost(azCnHost, subscriptionKey);
+```
+Use the Speech CLI like this (note the `--host` option):
+```dos
+spx recognize --host "azCnHost" --file myaudio.wav
+```
+Replace `subscriptionKey` with your Speech resource key. Replace `azCnHost` with the expression matching the required service offering and the region of your subscription from this table:
+
+| Region / Service offering | Host expression |
+|--|--|
+| **China East 2** | |
+| Speech-to-text | `wss://chinaeast2.stt.speech.azure.cn` |
+| Text-to-Speech | `https://chinaeast2.tts.speech.azure.cn` |
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/includes/quickstarts/management-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/includes/quickstarts/management-csharp.md
@@ -79,7 +79,7 @@ Add the following code to your **Main** method to list available resources, crea
[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/azure_management_service/create_delete_resource.cs?name=snippet_calls)]
-## Create a Cognitive Services resource
+## Create a Cognitive Services resource (C#)
To create and subscribe to a new Cognitive Services resource, use the **Create** method. This method adds a new billable resource to the resource group you pass in. When creating your new resource, you'll need to know the "kind" of service you want to use, along with its pricing tier (or SKU) and an Azure location. The following method takes all of these as arguments and creates a resource.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/includes/quickstarts/management-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/includes/quickstarts/management-java.md
@@ -89,7 +89,7 @@ Add the following code to your **Main** method to list available resources, crea
[!code-java[](~/cognitive-services-quickstart-code/java/azure_management_service/quickstart.java?name=snippet_calls)]
-## Create a Cognitive Services resource
+## Create a Cognitive Services resource (Java)
To create and subscribe to a new Cognitive Services resource, use the **create** method. This method adds a new billable resource to the resource group you pass in. When creating your new resource, you'll need to know the "kind" of service you want to use, along with its pricing tier (or SKU) and an Azure location. The following method takes all of these as arguments and creates a resource.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/includes/quickstarts/management-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/includes/quickstarts/management-node.md
@@ -69,7 +69,7 @@ Next, add the following `quickstart` function to handle the main work of your pr
Add the following code to the end of your `quickstart` function to list available resources, create a sample resource, list your owned resources, and then delete the sample resource. You'll define these functions in the next steps.
-## Create a Cognitive Services resource
+## Create a Cognitive Services resource (Node.js)
To create and subscribe to a new Cognitive Services resource, use the **Create** function. This function adds a new billable resource to the resource group you pass in. When you create your new resource, you'll need to know the "kind" of service you want to use, along with its pricing tier (or SKU) and an Azure location. The following function takes all of these arguments and creates a resource.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/includes/quickstarts/management-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/includes/quickstarts/management-python.md
@@ -51,7 +51,7 @@ Then add the following code to construct a **CognitiveServicesManagementClient**
[!code-python[](~/cognitive-services-quickstart-code/python/azure_management_service/create_delete_resource.py?name=snippet_auth)]
-## Create a Cognitive Services resource
+## Create a Cognitive Services resource (Python)
To create and subscribe to a new Cognitive Services resource, use the **Create** function. This function adds a new billable resource to the resource group you pass in. When you create your new resource, you'll need to know the "kind" of service you want to use, along with its pricing tier (or SKU) and an Azure location. The following function takes all of these arguments and creates a resource.
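The create call described in these quickstarts can be sketched in Python. `FakeAccountsClient` below stands in for the real management client's accounts operations (the exact class and method signatures vary across `azure-mgmt-cognitiveservices` versions), so treat this as a minimal illustration of passing kind, SKU, and location to a create call, not the real SDK API.

```python
# Minimal sketch of creating a Cognitive Services resource.
# FakeAccountsClient is a stand-in for the real management client's
# accounts operations; names and signatures here are illustrative only.

class FakeAccountsClient:
    def __init__(self):
        self.created = {}

    def create(self, resource_group, name, kind, sku, location):
        # A real client would start a long-running create operation here
        # and return the provisioned account.
        self.created[name] = {"kind": kind, "sku": sku, "location": location}
        return self.created[name]


def create_resource(client, resource_group, name, kind, sku, location):
    """Create a billable resource in the given resource group."""
    return client.create(resource_group, name, kind=kind, sku=sku, location=location)


client = FakeAccountsClient()
resource = create_resource(
    client, "my-resource-group", "my-text-analytics",
    kind="TextAnalytics", sku="F0", location="westus")
```

The shape mirrors the prose above: the caller supplies the resource group, the service "kind", the pricing tier (SKU), and an Azure location.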
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/text-analytics-resource-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/text-analytics-resource-faq.md
@@ -9,12 +9,12 @@ manager: nitinme
ms.service: cognitive-services ms.subservice: text-analytics ms.topic: conceptual
-ms.date: 02/13/2019
+ms.date: 01/05/2021
ms.author: aahi ---
-# Frequently Asked Questions (FAQ) about the Text Analytics Cognitive Service
+# Frequently Asked Questions (FAQ) about the Text Analytics API
- Find answers to commonly asked questions about concepts, code, and scenarios related to the Text Analytics API for Microsoft Cognitive Services on Azure.
+ Find answers to commonly asked questions about concepts, code, and scenarios related to the Text Analytics API in Azure Cognitive Services.
## Can Text Analytics identify sarcasm?
@@ -42,11 +42,21 @@ Generally, output consists of nouns and objects of the sentence. Output is liste
Improvements to models and algorithms are announced if the change is major, or quietly slipstreamed into the service if the update is minor. Over time, you might find that the same text input results in a different sentiment score or key phrase output. This is a normal and intentional consequence of using managed machine learning resources in the cloud.
+## Service availability and redundancy
+
+### Is the Text Analytics service zone resilient?
+
+Yes. The Text Analytics service is zone-resilient by default.
+
+### How do I configure the Text Analytics service to be zone-resilient?
+
+No customer configuration is necessary to enable zone resiliency. Zone resiliency for Text Analytics resources is available by default and managed by the service itself.
+ ## Next steps Is your question about a missing feature or functionality? Consider requesting or voting for it on our [UserVoice web site](https://cognitive.uservoice.com/forums/555922-text-analytics). ## See also
- [StackOverflow: Text Analytics API](https://stackoverflow.com/questions/tagged/text-analytics-api)
- [StackOverflow: Cognitive Services](https://stackoverflow.com/questions/tagged/microsoft-cognitive)
\ No newline at end of file
+ * [StackOverflow: Text Analytics API](https://stackoverflow.com/questions/tagged/text-analytics-api)
+ * [StackOverflow: Cognitive Services](https://stackoverflow.com/questions/tagged/microsoft-cognitive)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/calling-sdk-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
@@ -53,7 +53,7 @@ The following list presents the set of features which are currently available in
| | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️ | General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️
-## Javascript calling client library support by OS and browser
+## JavaScript calling client library support by OS and browser
The following table represents the set of supported browsers and versions which are currently available.
@@ -91,8 +91,8 @@ The Communication Services calling client library supports the following streami
| |Web | Android/iOS| |-----------|----|------------|
-|# of outgoing streams that can be sent simultaneously |1 video + 1 screen sharing | 1 video + 1 screen sharing|
-|# of incoming streams that can be rendered simultaneously |1 video + 1 screen sharing| 6 video + 1 screen sharing |
+|**# of outgoing streams that can be sent simultaneously** |1 video + 1 screen sharing | 1 video + 1 screen sharing|
+|**# of incoming streams that can be rendered simultaneously** |1 video + 1 screen sharing| 6 video + 1 screen sharing |
## Next steps
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
@@ -1,7 +1,7 @@
---
-title: Getting started with Teams interop on Azure Communication Services
+title: Quickstart - Teams interop on Azure Communication Services
titleSuffix: An Azure Communication Services quickstart
-description: In this quickstart, you'll learn how to join an Teams meeting with the Azure Communication Calling SDK
+description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Calling SDK.
author: matthewrobertson ms.author: chpalm ms.date: 10/10/2020
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-volume-azure-files.md
@@ -8,7 +8,7 @@ ms.custom: mvc, devx-track-azurecli
# Mount an Azure file share in Azure Container Instances
-By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from an external store. As shown in this article, Azure Container Instances can mount an Azure file share created with [Azure Files](../storage/files/storage-files-introduction.md). Azure Files offers fully managed file shares hosted in Azure Storage that are accessible via the industry standard Server Message Block (SMB) protocol. Using an Azure file share with Azure Container Instances provides file-sharing features similar to using an Azure file share with Azure virtual machines.
+By default, Azure Container Instances are stateless. If the container is restarted, crashes, or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from an external store. As shown in this article, Azure Container Instances can mount an Azure file share created with [Azure Files](../storage/files/storage-files-introduction.md). Azure Files offers fully managed file shares hosted in Azure Storage that are accessible via the industry standard Server Message Block (SMB) protocol. Using an Azure file share with Azure Container Instances provides file-sharing features similar to using an Azure file share with Azure virtual machines.
> [!NOTE] > Mounting an Azure Files share is currently restricted to Linux containers. Find current platform differences in the [overview](container-instances-overview.md#linux-and-windows-containers).
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-firewall-access-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-firewall-access-rules.md
@@ -1,6 +1,6 @@
--- title: Firewall access rules
-description: Configure rules to access an Azure container registry from behind a firewall, by allowing access to ("whitelisting") REST API and data endpoint domain names or service-specific IP address ranges.
+description: Configure rules to access an Azure container registry from behind a firewall, by allowing access to REST API and data endpoint domain names or service-specific IP address ranges.
ms.topic: article ms.date: 05/18/2020 ---
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-cosmos-db-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-cosmos-db-trigger.md
@@ -77,7 +77,7 @@ If your Azure Functions project is working with Azure Functions V1 runtime, the
``` > [!NOTE]
-> When working with Azure Functions Consumption Plan Hosting plan, each instance has a limit in the amount of Socket Connections that it can maintain. When working with Direct / TCP mode, by design more connections are created and can hit the [Consumption Plan limit](../azure-functions/manage-connections.md#connection-limit), in which case you can either use Gateway mode or run your Azure Functions in [App Service Mode](../azure-functions/functions-scale.md#app-service-plan).
+> When hosting your function app in a Consumption plan, each instance has a limit on the number of socket connections that it can maintain. When working with Direct / TCP mode, by design more connections are created and can hit the [Consumption plan limit](../azure-functions/manage-connections.md#connection-limit), in which case you can either use Gateway mode or instead host your function app in either a [Premium plan](../azure-functions/functions-premium-plan.md) or a [Dedicated (App Service) plan](../azure-functions/dedicated-plan.md).
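A common mitigation for the socket-connection limit described in the note is to create the client once per process and reuse it across function invocations rather than constructing a new client per request. The sketch below models that pattern; `make_client` is a placeholder for the real client constructor (for example, a Cosmos client), not an actual SDK function.

```python
# Process-wide singleton client pattern to avoid socket exhaustion.
# make_client is a placeholder factory for the real client constructor.

_client = None


def get_client(make_client):
    """Return a process-wide client, creating it only on first use."""
    global _client
    if _client is None:
        _client = make_client()
    return _client
```

Because every invocation in the same instance shares one client, the instance holds one connection pool instead of one per request.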
## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-sdk-connection-modes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-sdk-connection-modes.md
@@ -24,7 +24,7 @@ The two available connectivity modes are:
Gateway mode is supported on all SDK platforms. If your application runs within a corporate network with strict firewall restrictions, gateway mode is the best choice because it uses the standard HTTPS port and a single DNS endpoint. The performance tradeoff, however, is that gateway mode involves an additional network hop every time data is read from or written to Azure Cosmos DB. We also recommend gateway connection mode when you run applications in environments that have a limited number of socket connections.
- When you use the SDK in Azure Functions, particularly in the [Consumption plan](../azure-functions/functions-scale.md#consumption-plan), be aware of the current [limits on connections](../azure-functions/manage-connections.md).
+ When you use the SDK in Azure Functions, particularly in the [Consumption plan](../azure-functions/consumption-plan.md), be aware of the current [limits on connections](../azure-functions/manage-connections.md).
* Direct mode
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/understand-cost-mgt-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/understand-cost-mgt-data.md
@@ -3,7 +3,7 @@ title: Understand Azure Cost Management data
description: This article helps you better understand data that's included in Azure Cost Management and how frequently it's processed, collected, shown, and closed. author: bandersmsft ms.author: banders
-ms.date: 10/26/2020
+ms.date: 01/06/2021
ms.topic: conceptual ms.service: cost-management-billing ms.subservice: cost-management
@@ -96,7 +96,7 @@ The following tables show data that's included or isn't in Cost Management. All
_<sup>**5**</sup> Azure service usage is based on reservation and negotiated prices._
-_<sup>**6**</sup> Marketplace purchases are not available for MSDN and Visual Studio offers at this time._
+_<sup>**6**</sup> Marketplace purchases aren't available for MSDN and Visual Studio offers at this time._
_<sup>**7**</sup> Reservation purchases are only available for Enterprise Agreement (EA) and Microsoft Customer Agreement accounts at this time._
@@ -104,20 +104,20 @@ _<sup>**7**</sup> Reservation purchases are only available for Enterprise Agreem
Azure Cost Management receives tags as part of each usage record submitted by the individual services. The following constraints apply to these tags: -- Tags must be applied directly to resources and are not implicitly inherited from the parent resource group.
+- Tags must be applied directly to resources and aren't implicitly inherited from the parent resource group.
- Resource tags are only supported for resources deployed to resource groups. - Some deployed resources may not support tags or may not include tags in usage data.-- Resource tags are only included in usage data while the tag is applied – tags are not applied to historical data.
+- Resource tags are only included in usage data while the tag is applied – tags aren't applied to historical data.
- Resource tags are only available in Cost Management after the data is refreshed.-- Resource tags are only available in Cost Management when the resource is active/running and producing usage records (e.g. not when a VM is deallocated).
+- Resource tags are only available in Cost Management when the resource is active/running and producing usage records. For example, tags aren't available when a VM is deallocated.
- Managing tags requires contributor access to each resource. - Managing tag policies requires either owner or policy contributor access to a management group, subscription, or resource group.
-If you do not see a specific tag in Cost Management, consider the following:
+If you don't see a specific tag in Cost Management, consider the following questions:
- Was the tag applied directly to the resource? - Was the tag applied more than 24 hours ago?-- Does the resource type support tags? The following resource types do not support tags in usage data as of December 1, 2019. See [Tags support for Azure resources](../../azure-resource-manager/management/tag-support.md) for the full list of what is supported.
+- Does the resource type support tags? The following resource types don't support tags in usage data as of December 1, 2019. See [Tags support for Azure resources](../../azure-resource-manager/management/tag-support.md) for the full list of what is supported.
- Azure Active Directory B2C Directories - Azure Bastion - Azure Firewalls
@@ -132,10 +132,9 @@ If you do not see a specific tag in Cost Management, consider the following:
Here are a few tips for working with tags: -- Plan ahead and define a tagging strategy that allows you to break costs down by organization, application, environment, etc.
+- Plan ahead and define a tagging strategy that allows you to break down costs by organization, application, environment, and so on.
- Use Azure Policy to copy resource group tags to individual resources and enforce your tagging strategy.-- Use the Tags API in conjunction with either Query or UsageDetails to get all cost based on the current tags.-
+- Use the Tags API with either Query or UsageDetails to get all cost based on the current tags.
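The last tip, getting cost broken down by the current tags, can be illustrated with a small sketch. The record shape below is a simplified assumption for illustration, not the real Query or UsageDetails schema.

```python
# Illustrative only: sum cost per value of a chosen tag key, given
# simplified usage records. The record shape is an assumption, not
# the actual Usage Details API schema.
from collections import defaultdict


def cost_by_tag(records, tag_key):
    totals = defaultdict(float)
    for record in records:
        # Tags appear here only if applied directly to the resource.
        value = record.get("tags", {}).get(tag_key, "(untagged)")
        totals[value] += record["cost"]
    return dict(totals)


records = [
    {"cost": 10.0, "tags": {"env": "prod"}},
    {"cost": 4.5, "tags": {"env": "dev"}},
    {"cost": 2.0, "tags": {}},
]
```

Grouping by an `env` tag this way also makes untagged spend visible, which helps enforce the tagging strategy described above.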
## Cost and usage data updates and retention
@@ -146,17 +145,18 @@ Cost and usage data is typically available in Cost Management + Billing in the A
- Estimated charges for the current billing period can change as you incur more usage. - Each update is cumulative and includes all the line items and information from the previous update. - Azure finalizes or _closes_ the current billing period up to 72 hours (three calendar days) after the billing period ends.
+- During the open month (uninvoiced) period, cost management data should be considered an estimate only. In some cases, charges might arrive in the system late, after the usage actually occurred.
The following examples illustrate how billing periods could end: * Enterprise Agreement (EA) subscriptions – If the billing month ends on March 31, estimated charges are updated up to 72 hours later. In this example, by midnight (UTC) April 4. * Pay-as-you-go subscriptions – If the billing month ends on May 15, then the estimated charges might get updated up to 72 hours later. In this example, by midnight (UTC) May 19.
-Once cost and usage data becomes available in Cost Management + Billing, it will be retained for at least 7 years.
+Once cost and usage data becomes available in Cost Management + Billing, it will be retained for at least seven years.
### Rerated data
-Whether you use the Cost Management APIs, Power BI, or the Azure portal to retrieve data, expect the current billing period's charges to get rerated, and consequently change, until the invoice is closed.
+Whether you use the Cost Management APIs, Power BI, or the Azure portal to retrieve data, expect the current billing period's charges to get rerated, and as a consequence change, until the invoice is closed.
## Cost rounding
@@ -179,6 +179,6 @@ Historical data for credit-based and pay-in-advance offers might not match your
- MSDN (MS-AZR-0062P) - Visual Studio (MS-AZR-0029P, MS-AZR-0059P, MS-AZR-0060P, MS-AZR-0063P, MS-AZR-0064P)
-## See also
+## Next steps
- If you haven't already completed the first quickstart for Cost Management, read it at [Start analyzing costs](./quick-acm-cost-analysis.md).\ No newline at end of file
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-expression-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
@@ -6,7 +6,7 @@ ms.author: makromer
ms.service: data-factory ms.topic: conceptual ms.custom: seo-lt-2019
-ms.date: 12/18/2020
+ms.date: 01/06/2021
--- # Data transformation expressions in mapping data flow
@@ -57,14 +57,6 @@ Logical AND operator. Same as &&.
* ``and(true, false) -> false`` * ``true && false -> false`` ___
-### <code>array</code>
-<code><b>array([<i>&lt;value1&gt;</i> : any], ...) => array</b></code><br/><br/>
-Creates an array of items. All items should be of the same type. If no items are specified, an empty string array is the default. Same as a [] creation operator.
-* ``array('Seattle', 'Washington')``
-* ``['Seattle', 'Washington']``
-* ``['Seattle', 'Washington'][1]``
-* ``'Washington'``
-___
### <code>asin</code> <code><b>asin(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/> Calculates an inverse sine value.
@@ -246,45 +238,6 @@ Always returns a false value. Use the function `syntax(false())` if there is a c
* ``(10 + 20 > 30) -> false`` * ``(10 + 20 > 30) -> false()`` ___
-### <code>filter</code>
-<code><b>filter(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => array</b></code><br/><br/>
-Filters elements out of the array that do not meet the provided predicate. Filter expects a reference to one element in the predicate function as #item.
-* ``filter([1, 2, 3, 4], #item > 2) -> [3, 4]``
-* ``filter(['a', 'b', 'c', 'd'], #item == 'a' || #item == 'b') -> ['a', 'b']``
-___
-### <code>find</code>
-<code><b>find(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
-Find the first item from an array that match the condition. It takes a filter function where you can address the item in the array as #item. For deeply nested maps you can refer to the parent maps using the #item_n(#item_1, #item_2...) notation.
-* ``find([10, 20, 30], #item > 10) -> 20``
-* ``find(['azure', 'data', 'factory'], length(#item) > 4) -> 'azure'``
-* ``find([
- @(
- name = 'Daniel',
- types = [
- @(mood = 'jovial', behavior = 'terrific'),
- @(mood = 'grumpy', behavior = 'bad')
- ]
- ),
- @(
- name = 'Mark',
- types = [
- @(mood = 'happy', behavior = 'awesome'),
- @(mood = 'calm', behavior = 'reclusive')
- ]
- )
- ],
- contains(#item.types, #item.mood=='happy') /*Filter out the happy kid*/
- )``
-* ``
- @(
- name = 'Mark',
- types = [
- @(mood = 'happy', behavior = 'awesome'),
- @(mood = 'calm', behavior = 'reclusive')
- ]
- )
- ``
-___
### <code>floor</code> <code><b>floor(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/> Returns the largest integer not greater than the number.
@@ -501,17 +454,6 @@ Left trims a string of leading characters. If second parameter is unspecified, i
* ``ltrim(' dumbo ') -> 'dumbo '`` * ``ltrim('!--!du!mbo!', '-!') -> 'du!mbo!'`` ___
-### <code>map</code>
-<code><b>map(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
-Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item.
-* ``map([1, 2, 3, 4], #item + 2) -> [3, 4, 5, 6]``
-* ``map(['a', 'b', 'c', 'd'], #item + '_processed') -> ['a_processed', 'b_processed', 'c_processed', 'd_processed']``
-___
-### <code>mapIndex</code>
-<code><b>mapIndex(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => any</b></code><br/><br/>
-Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item and a reference to the element index as #index.
-* ``mapIndex([1, 2, 3, 4], #item + 2 + #index) -> [4, 6, 8, 10]``
-___
### <code>md5</code> <code><b>md5(<i>&lt;value1&gt;</i> : any, ...) => string</b></code><br/><br/> Calculates the MD5 digest of set of column of varying primitive datatypes and returns a 32 character hex string. It can be used to calculate a fingerprint for a row.
@@ -638,11 +580,6 @@ ___
Returns a random number given an optional seed within a partition. The seed should be a fixed value and is used in conjunction with the partitionId to produce random values * ``random(1) == 1 -> false`` ___
-### <code>reduce</code>
-<code><b>reduce(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : any, <i>&lt;value3&gt;</i> : binaryfunction, <i>&lt;value4&gt;</i> : unaryfunction) => any</b></code><br/><br/>
-Accumulates elements in an array. Reduce expects a reference to an accumulator and one element in the first expression function as #acc and #item and it expects the resulting value as #result to be used in the second expression function.
-* ``toString(reduce(['1', '2', '3', '4'], '0', #acc + #item, #result)) -> '01234'``
-___
### <code>regexExtract</code> <code><b>regexExtract(<i>&lt;string&gt;</i> : string, <i>&lt;regex to find&gt;</i> : string, [<i>&lt;match group 1-based index&gt;</i> : integral]) => string</b></code><br/><br/> Extract a matching substring for a given regex pattern. The last parameter identifies the match group and is defaulted to 1 if omitted. Use `<regex>`(back quote) to match a string without escaping.
@@ -751,28 +688,6 @@ ___
Calculates a hyperbolic sine value. * ``sinh(0) -> 0.0`` ___
-### <code>size</code>
-<code><b>size(<i>&lt;value1&gt;</i> : any) => integer</b></code><br/><br/>
-Finds the size of an array or map type
-* ``size(['element1', 'element2']) -> 2``
-* ``size([1,2,3]) -> 3``
-___
-### <code>slice</code>
-<code><b>slice(<i>&lt;array to slice&gt;</i> : array, <i>&lt;from 1-based index&gt;</i> : integral, [<i>&lt;number of items&gt;</i> : integral]) => array</b></code><br/><br/>
-Extracts a subset of an array from a position. Position is 1 based. If the length is omitted, it is defaulted to end of the string.
-* ``slice([10, 20, 30, 40], 1, 2) -> [10, 20]``
-* ``slice([10, 20, 30, 40], 2) -> [20, 30, 40]``
-* ``slice([10, 20, 30, 40], 2)[1] -> 20``
-* ``isNull(slice([10, 20, 30, 40], 2)[0]) -> true``
-* ``isNull(slice([10, 20, 30, 40], 2)[20]) -> true``
-* ``slice(['a', 'b', 'c', 'd'], 8) -> []``
-___
-### <code>sort</code>
-<code><b>sort(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => array</b></code><br/><br/>
-Sorts the array using the provided predicate function. Sort expects a reference to two consecutive elements in the expression function as #item1 and #item2.
-* ``sort([4, 8, 2, 3], compare(#item1, #item2)) -> [2, 3, 4, 8]``
-* ``sort(['a3', 'b2', 'c1'], iif(right(#item1, 1) >= right(#item2, 1), 1, -1)) -> ['c1', 'b2', 'a3']``
-___
### <code>soundex</code> <code><b>soundex(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/> Gets the ```soundex``` code for the string.
@@ -882,6 +797,7 @@ ___
<code><b>year(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/> Gets the year value of a date. * ``year(toDate('2012-8-8')) -> 2012`` + ## Aggregate functions The following functions are only available in aggregate, pivot, unpivot, and window transformations. ___
@@ -1077,6 +993,99 @@ ___
Based on a criteria, gets the unbiased variance of a column. * ``varianceSampleIf(region == 'West', sales)``
+## Array functions
+Array functions perform transformations on data structures that are arrays. These include special keywords to address array elements and indexes:
+
+* ```#acc``` represents a value that you wish to include in your single output when reducing an array
+* ```#index``` represents the current array index, along with array index numbers ```#index2, #index3 ...```
+* ```#item``` represents the current element value in the array
+
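The `#item`, `#index`, and `#acc` keywords above behave like the element, index, and accumulator parameters of familiar higher-order functions. The following Python analogues are illustrative only, not Data Flow expression syntax; note that `#index` is 1-based in Data Flow.

```python
# Python analogues of the Data Flow array keywords (illustrative only).
from functools import reduce

items = [1, 2, 3, 4]

# filter(..., #item > 2): keep elements matching the predicate
filtered = [item for item in items if item > 2]            # [3, 4]

# mapIndex(..., #item + 2 + #index): #index starts at 1
mapped = [item + 2 + index
          for index, item in enumerate(items, start=1)]    # [4, 6, 8, 10]

# reduce(['1','2','3','4'], '0', #acc + #item, #result):
# fold the array into one value via an accumulator
reduced = reduce(lambda acc, item: acc + item,
                 ['1', '2', '3', '4'], '0')                # '01234'
```

Each line corresponds to one of the worked examples in the function reference that follows.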
+### <code>array</code>
+<code><b>array([<i>&lt;value1&gt;</i> : any], ...) => array</b></code><br/><br/>
+Creates an array of items. All items should be of the same type. If no items are specified, an empty string array is the default. Same as a [] creation operator.
+* ``array('Seattle', 'Washington')``
+* ``['Seattle', 'Washington']``
+* ``['Seattle', 'Washington'][1]``
+* ``'Washington'``
+___
+### <code>filter</code>
+<code><b>filter(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => array</b></code><br/><br/>
+Filters elements out of the array that do not meet the provided predicate. Filter expects a reference to one element in the predicate function as #item.
+* ``filter([1, 2, 3, 4], #item > 2) -> [3, 4]``
+* ``filter(['a', 'b', 'c', 'd'], #item == 'a' || #item == 'b') -> ['a', 'b']``
+___
+### <code>find</code>
+<code><b>find(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
+Finds the first item in an array that matches the condition. It takes a filter function where you can address the item in the array as #item. For deeply nested maps you can refer to the parent maps using the #item_n(#item_1, #item_2...) notation.
+* ``find([10, 20, 30], #item > 10) -> 20``
+* ``find(['azure', 'data', 'factory'], length(#item) > 4) -> 'azure'``
+* ``find([
+ @(
+ name = 'Daniel',
+ types = [
+ @(mood = 'jovial', behavior = 'terrific'),
+ @(mood = 'grumpy', behavior = 'bad')
+ ]
+ ),
+ @(
+ name = 'Mark',
+ types = [
+ @(mood = 'happy', behavior = 'awesome'),
+ @(mood = 'calm', behavior = 'reclusive')
+ ]
+ )
+ ],
+ contains(#item.types, #item.mood=='happy') /*Filter out the happy kid*/
+ )``
+* ``
+ @(
+ name = 'Mark',
+ types = [
+ @(mood = 'happy', behavior = 'awesome'),
+ @(mood = 'calm', behavior = 'reclusive')
+ ]
+ )
+ ``
+___
+### <code>map</code>
+<code><b>map(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
+Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item.
+* ``map([1, 2, 3, 4], #item + 2) -> [3, 4, 5, 6]``
+* ``map(['a', 'b', 'c', 'd'], #item + '_processed') -> ['a_processed', 'b_processed', 'c_processed', 'd_processed']``
+___
+### <code>mapIndex</code>
+<code><b>mapIndex(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => any</b></code><br/><br/>
+Maps each element of the array to a new element using the provided expression. MapIndex expects a reference to one element in the expression function as #item and a reference to the element index as #index.
+* ``mapIndex([1, 2, 3, 4], #item + 2 + #index) -> [4, 6, 8, 10]``
+___
+### <code>reduce</code>
+<code><b>reduce(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : any, <i>&lt;value3&gt;</i> : binaryfunction, <i>&lt;value4&gt;</i> : unaryfunction) => any</b></code><br/><br/>
+Accumulates elements in an array. Reduce expects a reference to an accumulator and one element in the first expression function as #acc and #item and it expects the resulting value as #result to be used in the second expression function.
+* ``toString(reduce(['1', '2', '3', '4'], '0', #acc + #item, #result)) -> '01234'``
+___
+### <code>size</code>
+<code><b>size(<i>&lt;value1&gt;</i> : any) => integer</b></code><br/><br/>
+Finds the size of an array or map type.
+* ``size(['element1', 'element2']) -> 2``
+* ``size([1,2,3]) -> 3``
+___
+### <code>slice</code>
+<code><b>slice(<i>&lt;array to slice&gt;</i> : array, <i>&lt;from 1-based index&gt;</i> : integral, [<i>&lt;number of items&gt;</i> : integral]) => array</b></code><br/><br/>
+Extracts a subset of an array from a position. Position is 1 based. If the length is omitted, it defaults to the end of the array.
+* ``slice([10, 20, 30, 40], 1, 2) -> [10, 20]``
+* ``slice([10, 20, 30, 40], 2) -> [20, 30, 40]``
+* ``slice([10, 20, 30, 40], 2)[1] -> 20``
+* ``isNull(slice([10, 20, 30, 40], 2)[0]) -> true``
+* ``isNull(slice([10, 20, 30, 40], 2)[20]) -> true``
+* ``slice(['a', 'b', 'c', 'd'], 8) -> []``
+___
+### <code>sort</code>
+<code><b>sort(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => array</b></code><br/><br/>
+Sorts the array using the provided predicate function. Sort expects a reference to two consecutive elements in the expression function as #item1 and #item2.
+* ``sort([4, 8, 2, 3], compare(#item1, #item2)) -> [2, 3, 4, 8]``
+* ``sort(['a3', 'b2', 'c1'], iif(right(#item1, 1) >= right(#item2, 1), 1, -1)) -> ['c1', 'b2', 'a3']``
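The 1-based `slice()` semantics above are easy to misread coming from 0-based languages. The helper below is an illustrative Python model of those semantics, not Data Flow syntax.

```python
# Illustrative model of the Data Flow slice() semantics:
# 1-based start position, optional item count, empty array out of range.

def df_slice(arr, start, count=None):
    begin = start - 1              # convert 1-based position to 0-based
    if count is None:
        return arr[begin:]         # to the end of the array
    return arr[begin:begin + count]
```

Run against the documented examples, `df_slice([10, 20, 30, 40], 1, 2)` yields `[10, 20]` and an out-of-range start such as `df_slice(['a', 'b', 'c', 'd'], 8)` yields `[]`.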
++ ## Conversion functions Conversion functions are used to convert data and data types
data-share https://docs.microsoft.com/en-us/azure/data-share/data-share-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/data-share-troubleshoot.md
@@ -1,6 +1,6 @@
--- title: Troubleshoot Azure Data Share
-description: Learn how to troubleshoot issues with invitations and errors when creating or receiving data shares with Azure Data Share.
+description: Learn how to troubleshoot problems with invitations and errors when you create or receive data shares in Azure Data Share.
services: data-share author: jifems ms.author: jife
@@ -9,77 +9,77 @@ ms.topic: troubleshooting
ms.date: 12/16/2020 ---
-# Troubleshoot common issues in Azure Data Share
+# Troubleshoot common problems in Azure Data Share
-This article shows how to troubleshoot common issues for Azure Data Share.
+This article explains how to troubleshoot common problems in Azure Data Share.
## Azure Data Share invitations
-In some cases, when a new user clicks **Accept Invitation** from the e-mail invitation that was sent, they may be presented with an empty list of invitations.
+In some cases, when new users select **Accept Invitation** in an email invitation, they might see an empty list of invitations.
-![No invitations](media/no-invites.png)
+:::image type="content" source="media/no-invites.png" alt-text="Screenshot showing an empty list of invitations.":::
-This could be due to the following reasons:
+This problem could have one of the following causes:
-* **Azure Data Share service is not registered as a resource provider of any Azure subscription in the Azure tenant.** You will experience this issue if there is no Data Share resource in your Azure tenant. When you create an Azure Data Share resource, it automatically registers the resource provider in your Azure subscription. You can also manually register the Data Share service following these steps. You'll need to have the Azure Contributor role to complete these steps.
+* **The Azure Data Share service isn't registered as a resource provider of any Azure subscription in the Azure tenant.** This problem happens when your Azure tenant has no Data Share resource.
- 1. In the Azure portal, navigate to **Subscriptions**
- 1. Select the subscription you want to use to create Azure Data Share resource
- 1. Click on **Resource Providers**
- 1. Search for **Microsoft.DataShare**
- 1. Click **Register**
+ When you create an Azure Data Share resource, it automatically registers the resource provider in your Azure subscription. You can manually register the Data Share service by using the following steps. To complete these steps, you need the [Contributor role](../role-based-access-control/built-in-roles.md#contributor) for the Azure subscription.
- You'll need to have the [Azure Contributor role](../role-based-access-control/built-in-roles.md#contributor) to the Azure subscription to complete these steps.
+ 1. In the Azure portal, go to **Subscriptions**.
+ 1. Select the subscription you want to use to create the Azure Data Share resource.
+ 1. Select **Resource Providers**.
+ 1. Search for **Microsoft.DataShare**.
+ 1. Select **Register**.
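If you prefer the command line, the same registration can be done with the Azure CLI (a sketch assuming the CLI is installed and you're signed in to the target subscription):

```shell
# Register the Microsoft.DataShare resource provider on the current subscription.
az provider register --namespace Microsoft.DataShare

# Verify the registration state; it shows "Registered" once complete.
az provider show --namespace Microsoft.DataShare --query registrationState
```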
-* **Invitation is sent to your email alias instead of your Azure login email.** If you have registered the Azure Data Share service or have already created a Data Share resource in the Azure tenant, but still cannot see the invitation, it maybe because the provider has entered your email alias as recipient instead of your Azure login email address. Contact your data provider and ensure that they have sent the invitation to your Azure login e-mail address and not your e-mail alias.
+* **The invitation is sent to your email alias instead of your Azure sign-in email address.** If you already registered the Azure Data Share service or created a Data Share resource in the Azure tenant, but you still can't see the invitation, your email alias might be listed as the recipient. Contact your data provider and ensure that the invitation will be sent to your Azure sign-in email address and not your email alias.
-* **Invitation has already been accepted.** The link in the email takes you to the Data Share Invitation page in Azure portal, which only lists pending invitations. If you have already accepted the invitation, it will no longer show up in the Data Share Invitation page. Proceed to your Data Share resource which you used to accept the invitation into to view received shares and configure your target Azure Data Explorer cluster setting.
+* **The invitation is already accepted.** The link in the email takes you to the **Data Share Invitations** page in the Azure portal. This page lists only pending invitations. Accepted invitations don't appear on the page. To view received shares and configure your target Azure Data Explorer cluster setting, go to the Data Share resource you used to accept the invitation.
-## Error when creating or receiving a new share
+## Creating and receiving shares
-"Failed to add datasets"
+The following errors might appear when you create a new share, add datasets, or map datasets:
-"Failed to map datasets"
+* Failed to add datasets.
+* Failed to map datasets.
+* Unable to grant Data Share resource x access to y.
+* You do not have proper permissions to x.
+* We could not add write permissions for the Azure Data Share account to one or more of your selected resources.
-"Unable to grant Data Share resource x access to y"
+You might see one of these errors if you have insufficient permissions to the Azure data store. For more information, see [Roles and requirements](concepts-roles-permissions.md).
-"You do not have proper permissions to x"
+You need the write permission to share or receive data from an Azure data store. This permission is typically part of the Contributor role.
-"We could not add write permissions for Azure Data Share account to one or more of your selected resources"
+If you're sharing data or receiving data from the Azure data store for the first time, you also need the *Microsoft.Authorization/role assignments/write* permission. This permission is typically part of the Owner role. Even if you created the Azure data store resource, you're not necessarily the owner of the resource.
-If you receive any of the above errors when creating a new share, adding datasets or mapping datasets, it could be due to insufficient permissions to the Azure data store. See [Roles and requirements](concepts-roles-permissions.md) for required permissions.
+When you have the proper permissions, the Azure Data Share service automatically allows the data share resource's managed identity to access the data store. This process can take a few minutes. If you experience failure because of this delay, try again after a few minutes.
-You need write permission to share or receive data from an Azure data store, which typically exists in the **Contributor** role.
+SQL-based sharing requires extra permissions. For information about prerequisites, see [Share from SQL sources](how-to-share-from-sql.md).
-If this is the first time you are sharing or receiving data from the Azure data store, you also need *Microsoft.Authorization/role assignments/write* permission, which typically exists in the **Owner** role. Even if you created the Azure data store resource, it does NOT automatically make you the owner of the resource. With proper permission, Azure Data Share service automatically grants the data share resource's managed identity access to the data store. This process could take a few minutes to take effect. If you experience failure due to this delay, try again in a few minutes.
+## Snapshots
+A snapshot can fail for various reasons. Open a detailed error message by selecting the start time of the snapshot and then the status of each dataset.
-SQL-based sharing requires additional permissions. See [Share from SQL sources](how-to-share-from-sql.md) for detailed list of prerequisites.
+Snapshots commonly fail for these reasons:
-## Snapshot failed
-Snapshot could fail due to a variety of reasons. You can find detailed error message by clicking on the start time of the snapshot and then the status of each dataset. The following are common reasons why snapshot fails:
+* Data Share lacks permission to read from the source data store or to write to the target data store. For more information, see [Roles and requirements](concepts-roles-permissions.md). If you're taking a snapshot for the first time, the Data Share resource might need a few minutes to get access to the Azure data store. After a few minutes, try again.
+* The Data Share connection to the source data store or target data store is blocked by a firewall.
+* A shared dataset, source data store, or target data store was deleted.
-* Data Share does not have permission to read from the source data store or write to the target data store. See [Roles and requirements](concepts-roles-permissions.md) for detailed permission requirements. If this is the first time you are taking a snapshot, it could take a few minutes for Data Share resource to be granted access to the Azure data store. Wait for a few minutes and try again.
-* Data Share connection to source or target data store is blocked by firewall.
-* Shared dataset, or source or target data store is deleted.
+For storage accounts, a snapshot can fail because a file is being updated at the source while the snapshot is happening. As a result, a 0-byte file might appear at the target. After the update at the source, snapshots should succeed.
-For storage account, the following are additional causes of snapshot failures.
+For SQL sources, a snapshot can fail for these other reasons:
-* File is being updated at the source while snapshot is happening. This may result in 0 byte file at the target. Subsequent snapshot after update is completed at the source should succeed.
+* The source SQL script or target SQL script that grants Data Share permission hasn't run. Or for Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL Data Warehouse), the script runs by using SQL authentication rather than Azure Active Directory authentication.
+* The source data store or target SQL data store is paused.
+* The snapshot process or target data store doesn't support SQL data types. For more information, see [Share from SQL sources](how-to-share-from-sql.md#supported-data-types).
+* The source data store or target SQL data store is locked by other processes. Azure Data Share doesn't lock these data stores. But existing locks on these data stores can make a snapshot fail.
+* The target SQL table is referenced by a foreign key constraint. During a snapshot, if a target table has the same name as a table in the source data, Azure Data Share drops the table and creates a new table. If the target SQL table is referenced by a foreign key constraint, the table can't be dropped.
+* A target CSV file is generated, but the data can't be read in Excel. You might see this problem when the source SQL table contains data that includes non-English characters. In Excel, select the **Get Data** tab and choose the CSV file. Select the file origin **65001: Unicode (UTF-8)**, and then load the data.
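The same UTF-8 issue appears when the exported CSV is read programmatically; a minimal Python sketch (with inline sample bytes standing in for the target file) is to decode explicitly as UTF-8:

```python
import csv
import io

# Sample bytes standing in for a target CSV with non-English characters.
raw = 'name,city\nJosé,São Paulo\n'.encode('utf-8')

# Decode explicitly as UTF-8, mirroring the "65001: Unicode (UTF-8)"
# file-origin choice in Excel.
with io.TextIOWrapper(io.BytesIO(raw), encoding='utf-8') as f:
    rows = list(csv.reader(f))

print(rows)  # [['name', 'city'], ['José', 'São Paulo']]
```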
-For SQL sources, the following are additional causes of snapshot failures.
-
-* The source or target SQL script to grant Data Share permission is not run. Or for Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), it is run using SQL authentication rather than Azure Active Directory authentication.
-* The source or target SQL data store is paused.
-* SQL data types are not supported by the snapshot process or target data store. Refer to [Share from SQL sources](how-to-share-from-sql.md#supported-data-types) for details.
-* Source or target SQL data store are locked by other processes. Azure Data Share does not apply locks to source and target SQL data store. However, existing locks on the source and target SQL data store will cause snapshot failure.
-* The target SQL table is referenced by a foreign key constraint. During snapshot, if a target table with the same name exists, Azure Data Share drops table and creates a new table. If the target SQL table is referenced by a foreign key constraint, the table cannot be dropped.
-* Target CSV file is generated, but data cannot be read in Excel. This could happen when the source SQL table contains data with non-English characters. In Excel, select 'Get Data' tab and choose the CSV file, select file origin as 65001: Unicode (UTF-8) and load data.
-
-## Snapshot issue after updating snapshot schedule
-After data provider updates snapshot schedule for the sent share, data consumer needs to disable the previous snapshot schedule and re-enable the updated snapshot schedule for the received share.
+## Updated snapshot schedules
+After the data provider updates the snapshot schedule for the sent share, the data consumer needs to disable the previous snapshot schedule. Then enable the updated snapshot schedule for the received share.
## Next steps
-To learn how to start sharing data, continue to the [share your data](share-your-data.md) tutorial.
+To learn how to start sharing data, continue to the [Share data](share-your-data.md) tutorial.
-To learn how to receive data, continue to the [accept and receive data](subscribe-to-data-share.md) tutorial.
\ No newline at end of file
+To learn how to receive data, continue to the [Accept and receive data](subscribe-to-data-share.md) tutorial.
\ No newline at end of file
data-share https://docs.microsoft.com/en-us/azure/data-share/how-to-share-from-sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-sql.md
@@ -335,7 +335,7 @@ SQL snapshot performance is impacted by a number of factors. It is always recomm
* Location of source and target data stores.

## Troubleshoot SQL snapshot failure
-The most common cause of snapshot failure is that Data Share does not have permission to the source or target data store. In order to grant Data Share permission to the source or target Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you must run the provided SQL script when connecting to the SQL database using Azure Active Directory authentication. To troubleshoot additional SQL snapshot failure, refer to [Troubleshoot snapshot failure](data-share-troubleshoot.md#snapshot-failed).
+The most common cause of snapshot failure is that Data Share does not have permission to the source or target data store. In order to grant Data Share permission to the source or target Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you must run the provided SQL script when connecting to the SQL database using Azure Active Directory authentication. To troubleshoot additional SQL snapshot failure, refer to [Troubleshoot snapshot failure](data-share-troubleshoot.md#snapshots).
## Next steps
You have learned how to share and receive data from SQL sources using the Azure Data Share service. To learn more about sharing from other data sources, continue to [supported data stores](supported-data-stores.md).
\ No newline at end of file
data-share https://docs.microsoft.com/en-us/azure/data-share/how-to-share-from-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-storage.md
@@ -1,6 +1,6 @@
--- title: Share and receive data from Azure Blob Storage and Azure Data Lake Storage
-description: Learn how to share and receive data from Azure Blob Storage and Azure Data Lake Storage
+description: Learn how to share and receive data from Azure Blob Storage and Azure Data Lake Storage.
author: jifems ms.author: jife ms.service: data-share
@@ -11,180 +11,183 @@ ms.date: 12/16/2020
[!INCLUDE[appliesto-storage](includes/appliesto-storage.md)]
-Azure Data Share supports snapshot-based sharing from storage account. This article explains how to share and receive data from the following sources: Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2.
+Azure Data Share supports snapshot-based sharing from a storage account. This article explains how to share and receive data from Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
-Azure Data Share supports sharing of files, folders and file systems from Azure Data Lake Gen1 and Azure Data Lake Gen2. It also supports sharing of blobs, folders and containers from Azure Blob Storage. Only block blob is currently supported. Data shared from these sources can be received into Azure Data Lake Gen2 or Azure Blob Storage.
+Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Gen1 and Azure Data Lake Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. Only block blobs are currently supported. Data shared from these sources can be received by Azure Data Lake Gen2 or Azure Blob Storage.
-When file systems, containers or folders are shared in snapshot-based sharing, data consumer can choose to make a full copy of the share data, or leverage incremental snapshot capability to copy only new or updated files. Incremental snapshot is based on the last modified time of the files. Existing files with the same name will be overwritten during snapshot. File deleted from the source is not deleted on the target. Empty sub-folders at the source are not copied over to the target.
+When file systems, containers, or folders are shared in snapshot-based sharing, data consumers can choose to make a full copy of the share data. Or they can use the incremental snapshot capability to copy only new or updated files. The incremental snapshot capability is based on the last modified time of the files.
+Existing files that have the same name are overwritten during a snapshot. A file that is deleted from the source isn't deleted on the target. Empty subfolders at the source aren't copied over to the target.
## Share data
+Use the information in the following sections to share data by using Azure Data Share.
### Prerequisites to share data
-* Azure Subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* Your recipient's Azure login e-mail address (using their e-mail alias won't work).
-* If the source Azure data store is in a different Azure subscription than the one you will use to create Data Share resource, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where the Azure data store is located.
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+* Find your recipient's Azure sign-in email address. The recipient's email alias won't work for your purposes.
+* If the source Azure data store is in a different Azure subscription than the one where you'll create the Data Share resource, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where the Azure data store is located.
-### Prerequisites for source storage account
+### Prerequisites for the source storage account
-* An Azure Storage account: If you don't already have one, you can create an [Azure Storage account](../storage/common/storage-account-create.md)
-* Permission to write to the storage account, which is present in *Microsoft.Storage/storageAccounts/write*. This permission exists in the Contributor role.
-* Permission to add role assignment to the storage account, which is present in *Microsoft.Authorization/role assignments/write*. This permission exists in the Owner role.
+* An Azure Storage account. If you don't already have an account, [create one](../storage/common/storage-account-create.md).
+* Permission to write to the storage account. Write permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
+* Permission to add role assignment to the storage account. This permission is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
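If you'd rather script the role assignments than use the portal, a subscription owner can grant them with the Azure CLI (the angle-bracket placeholders are illustrative; substitute your own values):

```shell
# Grant the Contributor role (includes Microsoft.Storage/storageAccounts/write)
# on the storage account to the user who will create the share.
az role assignment create \
  --assignee "<user@contoso.com>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```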
### Sign in to the Azure portal
Sign in to the [Azure portal](https://portal.azure.com/).
-### Create a Data Share Account
+### Create a Data Share account
Create an Azure Data Share resource in an Azure resource group.
-1. Select the menu button in the upper-left corner of the portal, then select **Create a resource** (+).
+1. In the upper-left corner of the portal, open the menu and then select **Create a resource** (+).
1. Search for *Data Share*.
-1. Select Data Share and Select **Create**.
+1. Select **Data Share** and **Create**.
-1. Fill out the basic details of your Azure Data Share resource with the following information.
+1. Provide the basic details of your Azure Data Share resource:
| **Setting** | **Suggested value** | **Field description** |
|---|---|---|
- | Subscription | Your subscription | Select the Azure subscription that you want to use for your data share account.|
- | Resource group | *test-resource-group* | Use an existing resource group or create a new resource group. |
+ | Subscription | Your subscription | Select an Azure subscription for your data share account.|
+ | Resource group | *test-resource-group* | Use an existing resource group or create a resource group. |
| Location | *East US 2* | Select a region for your data share account.
- | Name | *datashareaccount* | Specify a name for your data share account. |
+ | Name | *datashareaccount* | Name your data share account. |
| | |
-1. Select **Review + create**, then **Create** to provision your data share account. Provisioning a new data share account typically takes about 2 minutes or less.
+1. Select **Review + create** > **Create** to provision your data share account. Provisioning a new data share account typically takes about 2 minutes.
-1. When the deployment is complete, select **Go to resource**.
+1. When the deployment finishes, select **Go to resource**.
### Create a share
-1. Navigate to your Data Share Overview page.
+1. Go to your data share **Overview** page.
- ![Share your data](./media/share-receive-data.png "Share your data")
+ :::image type="content" source="./media/share-receive-data.png" alt-text="Screenshot showing the data share overview.":::
1. Select **Start sharing your data**.
1. Select **Create**.
-1. Fill out the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
+1. Provide the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
- ![EnterShareDetails](./media/enter-share-details.png "Enter Share details")
+ ![Screenshot showing data share details.](./media/enter-share-details.png "Enter the data share details.")
1. Select **Continue**.
-1. To add Datasets to your share, select **Add Datasets**.
+1. To add datasets to your share, select **Add Datasets**.
- ![Add Datasets to your share](./media/datasets.png "Datasets")
+ ![Screenshot showing how to add datasets to your share.](./media/datasets.png "Datasets.")
-1. Select the dataset type that you would like to add. You will see a different list of dataset types depending on the share type (snapshot or in-place) you have selected in the previous step.
+1. Select a dataset type to add. The list of dataset types depends on whether you selected snapshot-based sharing or in-place sharing in the previous step.
- ![AddDatasets](./media/add-datasets.png "Add Datasets")
+ ![Screenshot showing where to select a dataset type.](./media/add-datasets.png "Add datasets.")
-1. Navigate to the object you would like to share and select 'Add Datasets'.
+1. Go to the object you want to share. Then select **Add Datasets**.
- ![SelectDatasets](./media/select-datasets.png "Select Datasets")
+ ![Screenshot showing how to select an object to share.](./media/select-datasets.png "Select datasets.")
-1. In the Recipients tab, enter in the email addresses of your Data Consumer by selecting '+ Add Recipient'.
+1. On the **Recipients** tab, add the email address of your data consumer by selecting **Add Recipient**.
- ![AddRecipients](./media/add-recipient.png "Add recipients")
+ ![Screenshot showing how to add recipient email addresses.](./media/add-recipient.png "Add recipients.")
1. Select **Continue**.
-1. If you have selected snapshot share type, you can configure snapshot schedule to provide updates of your data to your data consumer.
+1. If you selected a snapshot share type, you can set up the snapshot schedule to update your data for the data consumer.
- ![EnableSnapshots](./media/enable-snapshots.png "Enable snapshots")
+ ![Screenshot showing the snapshot schedule settings.](./media/enable-snapshots.png "Enable snapshots.")
1. Select a start time and recurrence interval.
1. Select **Continue**.
-1. In the Review + Create tab, review your Package Contents, Settings, Recipients, and Synchronization Settings. Select **Create**.
+1. On the **Review + Create** tab, review your package contents, settings, recipients, and synchronization settings. Then select **Create**.
-Your Azure Data Share has now been created and the recipient of your Data Share is now ready to accept your invitation.
+You've now created your Azure data share. The recipient of your data share can accept your invitation.
## Receive data
+The following sections describe how to receive shared data.
### Prerequisites to receive data
-Before you can accept a data share invitation, you must provision a number of Azure resources, which are listed below.
+Before you accept a data share invitation, make sure you have the following prerequisites:
-Ensure that all pre-requisites are complete before accepting a data share invitation.
+* An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/).
+* An invitation from Azure. The email subject should be "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*".
+* A registered [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in:
+ * The Azure subscription where you'll create a Data Share resource.
+ * The Azure subscription where your target Azure data stores are located.
-* Azure Subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* A Data Share invitation: An invitation from Microsoft Azure with a subject titled "Azure Data Share invitation from **<yourdataprovider@domain.com>**".
-* Register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the Azure subscription where you will create a Data Share resource and the Azure subscription where your target Azure data stores are located.
+### Prerequisites for a target storage account
-### Prerequisites for target storage account
-
-* An Azure Storage account: If you don't already have one, you can create an [Azure Storage account](../storage/common/storage-account-create.md).
-* Permission to write to the storage account, which is present in *Microsoft.Storage/storageAccounts/write*. This permission exists in the Contributor role.
-* Permission to add role assignment to the storage account, which is present in *Microsoft.Authorization/role assignments/write*. This permission exists in the Owner role.
+* An Azure Storage account. If you don't already have one, [create an account](../storage/common/storage-account-create.md).
+* Permission to write to the storage account. This permission is in *Microsoft.Storage/storageAccounts/write*. It's part of the Contributor role.
+* Permission to add role assignment to the storage account. This assignment is in *Microsoft.Authorization/role assignments/write*. It's part of the Owner role.
### Sign in to the Azure portal
Sign in to the [Azure portal](https://portal.azure.com/).
-### Open invitation
+### Open an invitation
-1. You can open invitation from email or directly from Azure portal.
+You can open an invitation from email or directly from the Azure portal.
- To open invitation from email, check your inbox for an invitation from your data provider. The invitation is from Microsoft Azure, titled **Azure Data Share invitation from <yourdataprovider@domain.com>**. Click on **View invitation** to see your invitation in Azure.
+1. To open an invitation from email, check your inbox for an invitation from your data provider. The invitation from Microsoft Azure is titled "Azure Data Share invitation from *\<yourdataprovider\@domain.com>*". Select **View invitation** to see your invitation in Azure.
- To open invitation from Azure portal directly, search for **Data Share Invitations** in Azure portal. This takes you to the list of Data Share invitations.
+ To open an invitation from the Azure portal, search for *Data Share invitations*. You see a list of Data Share invitations.
- ![List of Invitations](./media/invitations.png "List of invitations")
+ ![Screenshot showing the list of invitations in the Azure portal.](./media/invitations.png "List of invitations.")
-1. Select the share you would like to view.
+1. Select the share you want to view.
-### Accept invitation
-1. Make sure all fields are reviewed, including the **Terms of Use**. If you agree to the terms of use, you'll be required to check the box to indicate you agree.
+### Accept an invitation
+1. Review all of the fields, including the **Terms of use**. If you agree to the terms, select the check box.
- ![Terms of use](./media/terms-of-use.png "Terms of use")
+ ![Screenshot showing the Terms of use area.](./media/terms-of-use.png "Terms of use.")
-1. Under *Target Data Share Account*, select the Subscription and Resource Group that you'll be deploying your Data Share into.
+1. Under **Target Data Share account**, select the subscription and resource group where you'll deploy your Data Share. Then fill in the following fields:
- For the **Data Share Account** field, select **Create new** if you don't have an existing Data Share account. Otherwise, select an existing Data Share account that you'd like to accept your data share into.
+ * In the **Data share account** field, select **Create new** if you don't have a Data Share account. Otherwise, select an existing Data Share account that will accept your data share.
- For the **Received Share Name** field, you may leave the default specified by the data provide, or specify a new name for the received share.
+ * In the **Received share name** field, either leave the default that the data provider specified or specify a new name for the received share.
- Once you've agreed to the terms of use and specified a Data Share account to manage your received share, Select **Accept and configure**. A share subscription will be created.
+1. Select **Accept and configure**. A share subscription is created.
- ![Accept options](./media/accept-options.png "Accept options")
+ ![Screenshot showing where to accept the configuration options.](./media/accept-options.png "Accept options")
- This takes you to your the received share in your Data Share account.
+ The received share appears in your Data Share account.
- If you don't want to accept the invitation, Select *Reject*.
+ If you don't want to accept the invitation, select **Reject**.
-### Configure received share
-Follow the steps below to configure where you want to receive data.
+### Configure a received share
+Follow the steps in this section to configure a location to receive data.
-1. Select **Datasets** tab. Check the box next to the dataset you'd like to assign a destination to. Select **+ Map to target** to choose a target data store.
+1. On the **Datasets** tab, select the check box next to the dataset where you want to assign a destination. Select **Map to target** to choose a target data store.
- ![Map to target](./media/dataset-map-target.png "Map to target")
+ ![Screenshot showing how to map to a target.](./media/dataset-map-target.png "Map to target.")
-1. Select a target data store that you'd like the data to land in. Any data files in the target data store with the same path and name will be overwritten.
+1. Select a target data store for the data. Files in the target data store that have the same path and name as files in the received data will be overwritten.
- ![Target storage account](./media/map-target.png "Target storage")
+ ![Screenshot showing where to select a target storage account.](./media/map-target.png "Target storage.")
-1. For snapshot-based sharing, if the data provider has created a snapshot schedule to provide regular update to the data, you can also enable snapshot schedule by selecting the **Snapshot Schedule** tab. Check the box next to the snapshot schedule and select **+ Enable**.
+1. For snapshot-based sharing, if the data provider uses a snapshot schedule to regularly update the data, you can enable the schedule from the **Snapshot Schedule** tab. Select the box next to the snapshot schedule. Then select **Enable**.
- ![Enable snapshot schedule](./media/enable-snapshot-schedule.png "Enable snapshot schedule")
+ ![Screenshot showing how to enable a snapshot schedule.](./media/enable-snapshot-schedule.png "Enable snapshot schedule.")
### Trigger a snapshot
-These steps only apply to snapshot-based sharing.
+The steps in this section apply only to snapshot-based sharing.
-1. You can trigger a snapshot by selecting **Details** tab followed by **Trigger snapshot**. Here, you can trigger a full or incremental snapshot of your data. If it is your first time receiving data from your data provider, select full copy.
+1. You can trigger a snapshot from the **Details** tab. On the tab, select **Trigger snapshot**. You can choose to trigger a full snapshot or incremental snapshot of your data. If you're receiving data from your data provider for the first time, select **Full copy**.
- ![Trigger snapshot](./media/trigger-snapshot.png "Trigger snapshot")
+ ![Screenshot showing the Trigger snapshot selection.](./media/trigger-snapshot.png "Trigger snapshot.")
-1. When the last run status is *successful*, go to target data store to view the received data. Select **Datasets**, and click on the link in the Target Path.
+1. When the last run status is *successful*, go to the target data store to view the received data. Select **Datasets**, and then select the target path link.
- ![Consumer datasets](./media/consumer-datasets.png "Consumer dataset mapping")
+ ![Screenshot showing a consumer dataset mapping.](./media/consumer-datasets.png "Consumer dataset mapping.")
### View history
-This step only applies to snapshot-based sharing. To view history of your snapshots, select **History** tab. Here you'll find history of all snapshots that were generated for the past 30 days.
+You can view the history of your snapshots only in snapshot-based sharing. To view the history, open the **History** tab. Here you see the history of all of the snapshots that were generated in the past 30 days.
## Next steps
-You have learned how to share and receive data from storage account using Azure Data Share service. To learn more about sharing from other data sources, continue to [supported data stores](supported-data-stores.md).
\ No newline at end of file
+You've learned how to share and receive data from a storage account by using the Azure Data Share service. To learn about sharing from other data sources, see [Supported data stores](supported-data-stores.md).
\ No newline at end of file
data-share https://docs.microsoft.com/en-us/azure/data-share/supported-data-stores https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/supported-data-stores.md
@@ -1,6 +1,6 @@
---
title: Supported data stores in Azure Data Share
-description: Learn about the data stores that are supported for use Azure Data Share.
+description: Learn about the data stores that are supported for use in Azure Data Share.
ms.service: data-share
author: jifems
ms.author: jife
@@ -9,59 +9,68 @@ ms.date: 12/16/2020
---

# Supported data stores in Azure Data Share
-Azure Data Share provides open and flexible data sharing, including the ability to share from and to different data stores. Data providers can share data from one type of data store, and their data consumers can choose which data store to receive data into.
+Azure Data Share provides open and flexible data sharing, including the ability to share from and to different data stores. Data providers can share data from one type of data store, and data consumers can choose a data store to receive the data.
-In this article, you'll learn about the rich set of Azure data stores that are supported in Azure Data Share. You can also find information on the combinations of data stores that can be leveraged by data providers and data consumers.
+In this article, you'll learn about the rich set of Azure data stores that Azure Data Share supports. You'll also learn about how data providers and data consumers can combine different data stores.
-## What data stores are supported in Azure Data Share?
+## Supported data stores
-The below table details the supported data sources for Azure Data Share.
+The following table explains the data stores that Azure Data Share supports.
-| Data store | Snapshot-based sharing (full snapshot) | Snapshot-based sharing (incremental snapshot) | In-place sharing
+| Data store | Sharing based on full snapshots | Sharing based on incremental snapshots | Sharing in place
|:--- |:--- |:--- |:--- |
-| Azure Blob storage |✓ |✓ | |
+| Azure Blob Storage |✓ |✓ | |
| Azure Data Lake Storage Gen1 |✓ |✓ | |
| Azure Data Lake Storage Gen2 |✓ |✓ | |
| Azure SQL Database |✓ | | |
-| Azure Synapse Analytics (formerly Azure SQL DW) |✓ | | |
+| Azure Synapse Analytics (formerly Azure SQL Data Warehouse) |✓ | | |
| Azure Synapse Analytics (workspace) dedicated SQL pool |✓ | | |
| Azure Data Explorer | | |✓ |

## Data store support matrix
-Azure Data Share offers data consumers flexibility when deciding on a data store to accept data in to. For example, data being shared from Azure SQL Database can be received into Azure Data Lake Store Gen2, Azure SQL Database or Azure Synapse Analytics. Customers can choose which format to receive data in when configuring a received data share.
+Azure Data Share lets data consumers choose a data store to accept data. For example, data that's shared from Azure SQL Database can be received into Azure Data Lake Storage Gen2, Azure SQL Database, or Azure Synapse Analytics. When customers set up a receiving data share, they can choose the format to receive the data.
-The below table details different combinations and choices that data consumers have when accepting and configuring their data share. For more information on how to configure dataset mappings, see [how to configure dataset mappings](how-to-configure-mapping.md).
+The following table explains the combinations and options that data consumers can choose when they accept and configure a data share. For more information, see [Configure a dataset mapping](how-to-configure-mapping.md).
-| Data store | Azure Blob Storage | Azure Data Lake Storage Gen1 | Azure Data Lake Storage Gen2 | Azure SQL Database | Azure Synapse Analytics (formerly Azure SQL DW) | Azure Synapse Analytics (workspace) dedicated SQL pool | Azure Data Explorer
+| Data store | Blob Storage | Data Lake Storage Gen1 | Data Lake Storage Gen2 | SQL Database | Synapse Analytics (formerly SQL Data Warehouse) | Synapse Analytics (workspace) dedicated SQL pool | Data Explorer
|:--- |:--- |:--- |:--- |:--- |:--- |:--- | :--- |
-| Azure Blob storage | ✓ || ✓ |||
-| Azure Data Lake Storage Gen1 | ✓ | | ✓ |||
-| Azure Data Lake Storage Gen2 | ✓ | | ✓ |||
-| Azure SQL Database | ✓ | | ✓ | ✓ | ✓ | ✓ ||
-| Azure Synapse Analytics (formerly Azure SQL DW) | ✓ | | ✓ | ✓ | ✓ | ✓ ||
-| Azure Synapse Analytics (workspace) dedicated SQL pool | ✓ | | ✓ | ✓ | ✓ | ✓ ||
-| Azure Data Explorer ||||||| ✓ |
+| Blob Storage | ✓ || ✓ |||
+| Data Lake Storage Gen1 | ✓ | | ✓ |||
+| Data Lake Storage Gen2 | ✓ | | ✓ |||
+| SQL Database | ✓ | | ✓ | ✓ | ✓ | ✓ ||
+| Synapse Analytics (formerly SQL Data Warehouse) | ✓ | | ✓ | ✓ | ✓ | ✓ ||
+| Synapse Analytics (workspace) dedicated SQL pool | ✓ | | ✓ | ✓ | ✓ | ✓ ||
+| Data Explorer ||||||| ✓ |
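The matrix above can be handy as data when you want to validate a source-to-target pairing in an automation script before configuring a dataset mapping. The following Python sketch is illustrative only: the mapping is transcribed from the table in this article, and the helper name `can_receive_into` is hypothetical, not part of any Azure SDK.

```python
# Hypothetical helper that encodes the Data Share support matrix above.
# Transcribed from the table in this article; not an Azure SDK API.
SQL_TARGETS = {
    "Blob Storage", "Data Lake Storage Gen2", "SQL Database",
    "Synapse Analytics (formerly SQL Data Warehouse)",
    "Synapse Analytics (workspace) dedicated SQL pool",
}

SUPPORTED_TARGETS = {
    "Blob Storage": {"Blob Storage", "Data Lake Storage Gen2"},
    "Data Lake Storage Gen1": {"Blob Storage", "Data Lake Storage Gen2"},
    "Data Lake Storage Gen2": {"Blob Storage", "Data Lake Storage Gen2"},
    "SQL Database": SQL_TARGETS,
    "Synapse Analytics (formerly SQL Data Warehouse)": SQL_TARGETS,
    "Synapse Analytics (workspace) dedicated SQL pool": SQL_TARGETS,
    "Data Explorer": {"Data Explorer"},
}

def can_receive_into(source: str, target: str) -> bool:
    """Return True if a consumer can map data shared from `source` into `target`."""
    return target in SUPPORTED_TARGETS.get(source, set())
```

For example, `can_receive_into("SQL Database", "Data Lake Storage Gen2")` is true, while receiving Blob Storage data into SQL Database is not supported.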
## Share from a storage account
-Azure Data Share supports sharing of files, folders and file systems from Azure Data Lake Gen1 and Azure Data Lake Gen2. It also supports sharing of blobs, folders and containers from Azure Blob Storage. Only block blob is currently supported. When file systems, containers or folders are shared in snapshot-based sharing, data consumer can choose to make a full copy of the share data, or leverage incremental snapshot capability to copy only new or updated files. Incremental snapshot is based on the last modified time of the files. Existing files with the same name will be overwritten during snapshot. File deleted from the source is not deleted on the target.
+Azure Data Share supports the sharing of files, folders, and file systems from Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2. It also supports the sharing of blobs, folders, and containers from Azure Blob Storage. Only block blobs are currently supported.
-Please refer to [Share and receive data from Azure Blob Storage and Azure Data Lake Storage](how-to-share-from-storage.md) for details.
+When file systems, containers, or folders are shared in snapshot-based sharing, data consumers can choose to make a full copy of the shared data. Or they can use the incremental snapshot capability to copy only new files or updated files.
+
+An incremental snapshot is based on the last-modified time of the files. Existing files that have the same name as files in the received data are overwritten in a snapshot. Files that are deleted from the source aren't deleted on the target.
+
+For more information, see [Share and receive data from Azure Blob Storage and Azure Data Lake Storage](how-to-share-from-storage.md).
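The incremental-snapshot rules described above (copy files by last-modified time, overwrite same-named target files, never propagate deletes) can be sketched as plain logic. The Python below is a minimal model of that documented behavior for illustration; it is not part of the Data Share service or any SDK.

```python
def incremental_snapshot(source, target, last_snapshot_time):
    """Model of incremental-snapshot behavior.

    `source` and `target` map file path -> (last_modified, contents).
    Only files modified after the previous snapshot are copied;
    same-named target files are overwritten; files deleted from the
    source are NOT deleted on the target.
    """
    result = dict(target)  # start from the target: deletions never propagate
    for path, (modified, contents) in source.items():
        if modified > last_snapshot_time:  # new or updated since last snapshot
            result[path] = (modified, contents)  # overwrites if path exists
    return result
```

Note that in this model an unchanged source file is skipped entirely, and a file present only on the target survives the snapshot, matching the deletion rule stated above.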
## Share from a SQL-based source
-Azure Data Share supports sharing of both tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL DW), and sharing of tables from Azure Synapse Analytics (workspace) dedicated SQL pool. Sharing from Azure Synapse Analytics (workspace) serverless SQL pool is not currently supported. Data consumers can choose to accept the data into Azure Data Lake Storage Gen2 or Azure Blob Storage as csv or parquet file, as well as into Azure SQL Database and Azure Synapse Analytics as tables.
+Azure Data Share supports the sharing of both tables and views from Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL Data Warehouse). It supports the sharing of tables from Azure Synapse Analytics (workspace) dedicated SQL pool. Sharing from Azure Synapse Analytics (workspace) serverless SQL pool isn't currently supported.
+
+Data consumers can choose to accept the data into Azure Data Lake Storage Gen2 or Azure Blob Storage as a CSV file or parquet file. They can also accept data as tables into Azure SQL Database and Azure Synapse Analytics.
+
+When consumers accept data into Azure Data Lake Storage Gen2 or Azure Blob Storage, full snapshots overwrite the contents of the target file if the file already exists. When data is received into a table and the target table doesn't already exist, Azure Data Share creates an SQL table by using the source schema. If a target table already exists and it has the same name, it's dropped and overwritten with the latest full snapshot. Incremental snapshots aren't currently supported.
+
+For more information, see [Share and receive data from Azure SQL Database and Azure Synapse Analytics](how-to-share-from-sql.md).
-When accepting data into Azure Data Lake Store Gen2 or Azure Blob Storage, full snapshots overwrite the contents of the target file if already exists.
-When data is received into table and if the target table does not already exist, Azure Data Share creates the SQL table with the source schema. If a target table already exists with the same name, it will be dropped and overwritten with the latest full snapshot. Incremental snapshots are not currently supported.
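The table-receive semantics described above (create the table from the source schema when the target doesn't exist; drop and overwrite it on the next full snapshot) can be expressed as a small sketch. This Python model is purely illustrative of the documented behavior; the function and names are hypothetical, not an Azure API.

```python
def receive_full_snapshot(database, table_name, source_schema, rows):
    """Model of full-snapshot behavior when receiving into a SQL table.

    `database` maps table name -> {"schema": [...], "rows": [...]}.
    A missing target table is created from the source schema; an existing
    table with the same name is dropped and replaced by the snapshot.
    """
    if table_name in database:
        del database[table_name]  # existing same-named table is dropped
    database[table_name] = {"schema": list(source_schema), "rows": list(rows)}
```

In this model there is no merge path, which reflects the statement above that incremental snapshots aren't currently supported for SQL-based targets.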
+## Share from Data Explorer
+Azure Data Share supports the ability to share databases in-place from Azure Data Explorer clusters. A data provider can share at the level of the database or the cluster.
-Please refer to [Share and receive data from Azure SQL Database and Azure Synapse Analytics](how-to-share-from-sql.md) for details.
+When data is shared at the database level, data consumers can access only the databases that the data provider shared. When a provider shares data at the cluster level, data consumers can access all of the databases from the provider's cluster, including any future databases that the data provider creates.
-## Share from Azure Data Explorer
-Azure Data Share supports the ability to share databases in-place from Azure Data Explorer clusters. Data provider can share at the database or cluster level. When shared at database level, data consumer will only be able to access the specific database(s) shared by the data provider. When shared at cluster level, data consumer can access all the databases from the provider's cluster, including any future databases created by the data provider.
+To access shared databases, data consumers need their own Azure Data Explorer cluster. Their cluster must be in the same Azure datacenter as the data provider's Azure Data Explorer cluster.
-To access shared databases, data consumer needs to have its own Azure Data Explorer cluster. Data consumer's Azure Data Explorer cluster needs to locate in the same Azure data center as the data provider's Azure Data Explorer cluster. When sharing relationship is established, Azure Data Share creates a symbolic link between the provider and consumer's Azure Data Explorer clusters. Data ingested using batch mode into the source Azure Data Explorer cluster will show up on the target cluster within a few seconds to a few minutes.
+When a sharing relationship is established, Azure Data Share creates a symbolic link between the provider's cluster and the consumer's cluster. Data that's ingested into the source cluster by using batch mode appears on the target cluster within a few minutes.
-Please refer to [Share and receive data from Azure Data Explorer](/azure/data-explorer/data-share) for details.
+For more information, see [Share and receive data from Azure Data Explorer](/azure/data-explorer/data-share).
## Next steps
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-configure-compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox
ms.subservice: edge
ms.topic: tutorial
-ms.date: 08/28/2020
+ms.date: 01/05/2021
ms.author: alkohli
Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
---
@@ -41,40 +41,38 @@ Before you set up a compute role on your Azure Stack Edge Pro device, make sure
To configure compute on your Azure Stack Edge Pro, you'll create an IoT Hub resource via the Azure portal.
-1. In the Azure portal of your Azure Stack Edge resource, go to **Overview**. In the right-pane, on the **Compute** tile, select **Get started**.
+1. In the Azure portal of your Azure Stack Edge resource, go to **Overview**, and select **IoT Edge**.
- ![Get started with compute](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-1.png)
+ ![Get started with compute](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-1.png)
-2. On the **Configure Edge compute** tile, select **Configure compute**.
+2. In **Enable IoT Edge service**, select **Add**.
- ![Configure compute](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-2.png)
-
-3. On the **Configure Edge compute** blade, input the following:
+ ![Configure compute](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-2.png)
+3. On the **Configure Edge compute** blade, input the following information:
- |Field |Value |
- |---------|---------|
- |IoT Hub | Choose from **New** or **Existing**. <br> By default, a Standard tier (S1) is used to create an IoT resource. To use a free tier IoT resource, create one and then select the existing resource. <br> In each case, the IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource. |
- |Name |Enter a name for your IoT Hub resource. |
+ |Field |Value |
+ |---------|---------|
+ |IoT Hub | Choose from **New** or **Existing**. <br> By default, a Standard tier (S1) is used to create an IoT resource. To use a free tier IoT resource, create one and then select the existing resource. <br> In each case, the IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource. |
+ |Name |Enter a name for your IoT Hub resource. |
- ![Get started with compute 2](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-3.png)
+ ![Get started with compute 2](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-3.png)
-4. Select **Create**. The IoT Hub resource creation takes several minutes. After the IoT Hub resource is created, the **Configure compute** tile updates to show the compute configuration.
+4. When you finish the settings, select **Review + Create**. Review the settings for your IoT Hub resource, and select **Create**.
- ![Get started with compute 3](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-4.png)
+ Resource creation for an IoT Hub resource takes several minutes. After the resource is created, the **Overview** indicates the IoT Edge service is now running.
-5. To confirm that the Edge compute role has been configured, select **View Compute** on the **Configure compute** tile.
-
- ![Get started with compute 4](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-5.png)
+ ![Get started with compute 3](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-4.png)
- > [!NOTE]
- > If the **Configure Compute** dialog is closed before the IoT Hub is associated with the Azure Stack Edge Pro device, the IoT Hub gets created but is not shown in the compute configuration.
-
-When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. An IoT Edge Runtime is also running on this IoT Edge device. At this point, only the Linux platform is available for your IoT Edge device.
+5. To confirm the Edge compute role has been configured, select **Properties**.
+
+ ![Get started with compute 4](./media/azure-stack-edge-gpu-deploy-configure-compute/configure-compute-5.png)
+
+ When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. An IoT Edge Runtime is also running on this IoT Edge device. At this point, only the Linux platform is available for your IoT Edge device.
-It can take 20-30 minutes to configure compute since behind the scenes, virtual machines and Kubernetes cluster are being created. 
+It can take 20-30 minutes to configure compute because, behind the scenes, virtual machines and a Kubernetes cluster are being created.
-After you have successfully configured the compute in Azure portal, a Kubernetes cluster and a default user associated with the IoT namespace (a system namespace controlled by Azure Stack Edge Pro) exist.
+After you have successfully configured compute in the Azure portal, a Kubernetes cluster and a default user associated with the IoT namespace (a system namespace controlled by Azure Stack Edge Pro) exist.
## Get Kubernetes endpoints
@@ -85,15 +83,15 @@ To configure a client to access Kubernetes cluster, you will need the Kubernetes
![Device page in local UI](./media/azure-stack-edge-j-series-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
-3. Save the endpoint string. You will use this later when configuring a client to access the Kubernetes cluster via kubectl.
+3. Save the endpoint string. You will use this endpoint string later when configuring a client to access the Kubernetes cluster via kubectl.
4. While you are in the local web UI, you can:
- - Go to Kubernetes API, select **advanced settings** and download an advanced configuration file for Kubernetes.
+ - Go to Kubernetes API, select **advanced settings**, and download an advanced configuration file for Kubernetes.
![Device page in local UI 1](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-1.png)
- If you have been provided a key from Microsoft (select users may have this), then you can use this config file.
+ If you have been provided a key from Microsoft (select users may have a key), then you can use this config file.
![Device page in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-2.png)
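After you save the endpoint string and download the configuration file, a client typically reaches the cluster through `kubectl` pointed at that config. The snippet below only assembles such a command line as data, for illustration; the config file name and namespace are placeholder assumptions, not values from your device.

```python
import shlex

def kubectl_command(kubeconfig_path, *args):
    """Build a kubectl invocation that targets the cluster described by the
    downloaded config file (which embeds the Kubernetes API endpoint).
    Purely illustrative; kubeconfig_path is a placeholder."""
    parts = ["kubectl", "--kubeconfig", kubeconfig_path, *args]
    return " ".join(shlex.quote(p) for p in parts)
```

For example, `kubectl_command("config.json", "get", "pods", "-n", "iotedge")` produces a quoted, shell-safe command string.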
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-prep.md
@@ -7,13 +7,13 @@ author: alkohli
ms.service: databox
ms.subservice: edge
ms.topic: tutorial
-ms.date: 10/21/2020
+ms.date: 01/05/2021
ms.author: alkohli
Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro so I can use it to transfer data to Azure.
---

# Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU
-This is the first tutorial in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro with GPU. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
+This tutorial is the first in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro with GPU. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
You need administrator privileges to complete the setup and configuration process. The portal preparation takes less than 10 minutes.
@@ -31,7 +31,7 @@ For Azure Stack Edge Pro deployment, you need to first prepare your environment.
| --- | --- |
| **Preparation** |These steps must be completed in preparation for the upcoming deployment. |
| **[Deployment configuration checklist](#deployment-configuration-checklist)** |Use this checklist to gather and record information before and during the deployment. |
-| **[Deployment prerequisites](#prerequisites)** |These validate the environment is ready for deployment. |
+| **[Deployment prerequisites](#prerequisites)** |These prerequisites validate that the environment is ready for deployment. |
| | |
|**Deployment tutorials** |These tutorials are required to deploy your Azure Stack Edge Pro device in production. |
|**[1. Prepare the Azure portal for Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-prep.md)** |Create and configure your Azure Stack Edge resource before you install an Azure Stack Box Edge physical device. |
@@ -39,9 +39,9 @@ For Azure Stack Edge Pro deployment, you need to first prepare your environment.
|**[3. Connect to Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-connect.md)** |Once the device is installed, connect to device local web UI. |
|**[4. Configure network settings for Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md)** |Configure network including the compute network and web proxy settings for your device. |
|**[5. Configure device settings for Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-set-up-device-update-time.md)** |Assign a device name and DNS domain, configure update server and device time. |
-|**[6. Configure security settings for Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-configure-certificates.md)** |Configure certificates for your device. Use device generated certificates or bring your own certificates. |
+|**[6. Configure security settings for Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-configure-certificates.md)** |Configure certificates for your device. Use device-generated certificates or bring your own certificates. |
|**[7. Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md)** |Use the activation key from service to activate the device. The device is ready to set up SMB or NFS shares or connect via REST. |
-|**[8. Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md)** |Configure the compute role on your device. This will also create a Kubernetes cluster. |
+|**[8. Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md)** |Configure the compute role on your device. A Kubernetes cluster is also created. |
|**[9A. Transfer data with Edge shares](azure-stack-edge-j-series-deploy-add-shares.md)** |Add shares and connect to shares via SMB or NFS. |
|**[9B. Transfer data with Edge storage accounts](azure-stack-edge-j-series-deploy-add-storage-accounts.md)** |Add storage accounts and connect to blob storage via REST APIs. |
@@ -61,7 +61,7 @@ Following are the configuration prerequisites for your Azure Stack Edge resource
Before you begin, make sure that:

-- Your Microsoft Azure subscription is enabled for a Azure Stack Edge resource. Make sure that you used a supported subscription such as [Microsoft Enterprise Agreement (EA)](https://azure.microsoft.com/overview/sales-number/), [Cloud Solution Provider (CSP)](/partner-center/azure-plan-lp), or [Microsoft Azure Sponsorship](https://azure.microsoft.com/offers/ms-azr-0036p/). Pay-as-you-go subscriptions are not supported. To identify the type of Azure subscription you have, see [What is an Azure offer?](../cost-management-billing/manage/switch-azure-offer.md#what-is-an-azure-offer).
+- Your Microsoft Azure subscription is enabled for an Azure Stack Edge resource. Make sure that you used a supported subscription such as [Microsoft Enterprise Agreement (EA)](https://azure.microsoft.com/overview/sales-number/), [Cloud Solution Provider (CSP)](/partner-center/azure-plan-lp), or [Microsoft Azure Sponsorship](https://azure.microsoft.com/offers/ms-azr-0036p/). Pay-as-you-go subscriptions are not supported. To identify the type of Azure subscription you have, see [What is an Azure offer?](../cost-management-billing/manage/switch-azure-offer.md#what-is-an-azure-offer).
- You have owner or contributor access at resource group level for the Azure Stack Edge Pro/Data Box Gateway, IoT Hub, and Azure Storage resources.
- To create any Azure Stack Edge / Data Box Gateway resource, you should have permissions as a contributor (or higher) scoped at resource group level.
@@ -120,21 +120,21 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
|Setting |Value |
|---------|---------|
- |Subscription |This is automatically populated based on the earlier selection. Subscription is linked to your billing account. |
+ |Subscription |The subscription is automatically populated based on the earlier selection. Subscription is linked to your billing account. |
|Resource group |Select an existing group or create a new group.<br>Learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md). |

7. Enter or select the following **Instance details**.

   |Setting |Value |
   |---------|---------|
- |Name | A friendly name to identify the resource.<br>The name has between 2 and 50 characters containing letter, numbers, and hyphens.<br> Name starts and ends with a letter or a number. |
+ |Name | A friendly name to identify the resource.<br>The name has from 2 to 50 characters containing letters, numbers, and hyphens.<br> Name starts and ends with a letter or a number. |
|Region |For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If using Azure Government, all the government regions are available as shown in the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).<br> Choose a location closest to the geographical region where you want to deploy your device.|

![Create a resource 5](media/azure-stack-edge-gpu-deploy-prep/create-resource-5.png)

8. Select **Next: Shipping address**.
- - If you already have a device, select the combo box for **I have a Azure Stack Edge Pro device**.
+ - If you already have a device, select the combo box for **I already have a device**.
![Create a resource 6](media/azure-stack-edge-gpu-deploy-prep/create-resource-6.png)
@@ -152,7 +152,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
11. Select **Create**.
-The resource creation takes a few minutes. An MSI is also created that lets the the Azure Stack Edge device communicate with the resource provider in Azure.
+The resource creation takes a few minutes. An MSI is also created that lets the Azure Stack Edge device communicate with the resource provider in Azure.
After the resource is successfully created and deployed, you're notified. Select **Go to resource**.
@@ -171,19 +171,17 @@ If you run into any issues during the order process, see [Troubleshoot order iss
After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Pro device with the resource. You can get this key now while you are in the Azure portal.
-1. Select the resource that you created. Select **Overview** and then select **Device setup**.
+1. Select the resource you created, and select **Overview**.
- ![Select Device setup](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-2.png)
+2. In the right pane, enter a name for the Azure Key Vault or accept the default name. The key vault name can be between 3 and 24 characters.
-2. On the **Activate** tile, provide a name for the Azure Key Vault or accept the default name. The key vault name can be between 3 and 24 characters.
+ A key vault is created for each Azure Stack Edge resource that is activated with your device. The key vault lets you store and access secrets, for example, the Channel Integrity Key (CIK) for the service is stored in the key vault.
- A key vault is created for each Azure Stack Edge resource that is activated with your device. The key vault lets you store and access secrets, for example, the Channel Integrity Key (CIK) for the service is stored in the key vault.
+ Once you've specified a key vault name, select **Generate key** to create an activation key.
- Once you have specified a key vault name, select **Generate key** to create an activation key.
+ ![Get activation key](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-3.png)
- ![Get activation key](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-3.png)
-
- Wait a few minutes as the key vault and activation key are created. Select the copy icon to copy the key and save it for later use.
+ Wait a few minutes while the key vault and activation key are created. Select the copy icon to copy the key and save it for later use.<!--Verify that the new screen has a copy icon.-->
> [!IMPORTANT]
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox
ms.subservice: edge
ms.topic: how-to
-ms.date: 12/21/2020
+ms.date: 01/05/2021
ms.author: alkohli
#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
---
@@ -42,7 +42,7 @@ The Custom Script Extension for Linux will run on the following OSs. Other versi
| Distribution | Version |
|---|---|
| Linux: Ubuntu | 18.04 LTS |
-| Linux: Red Hat Enterprise Linux | 7.4 |
+| Linux: Red Hat Enterprise Linux | 7.4, 7.5, 7.7 |
<!--### Script location
@@ -393,4 +393,4 @@ RequestId IsSuccessStatusCode StatusCode ReasonPhrase
## Next steps
-[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0)
\ No newline at end of file
+[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-configure-gpu-modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-configure-gpu-modules.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: how-to
-ms.date: 08/25/2020
+ms.date: 01/04/2021
ms.author: alkohli --- # Configure and run a module on GPU on Azure Stack Edge Pro device
@@ -25,45 +25,58 @@ Before you begin, make sure that:
## Configure module to use GPU
-To configure a module to use the GPU on your Azure Stack Edge Pro device to run a module, follow these steps.
+To configure a module to be run by the GPU on your Azure Stack Edge Pro device, follow these steps.
-1. In the Azure portal, go to the resource associated with your device.
+1. In the Azure portal, go to the resource associated with your device.
-2. Go to **Edge compute > Get started**. In the **Configure Edge compute** tile, select Configure.
+2. In **Overview**, select **IoT Edge**.
![Configure module to use GPU 1](media/azure-stack-edge-j-series-configure-gpu-modules/configure-compute-1.png)
-3. In the **Configure Edge compute** blade:
+3. In **Enable IoT Edge service**, select **Add**.
- 1. For **IoT Hub**, choose **Create new**.
- 2. Provide a name for the IoT Hub resource that you want to create for your device. TO use a free tier, select an existing resource.
- 3. Make a note of the IoT Edge device and the IoT Gateway device that are created with the IoT Hub resource. You will use this information in the later steps.
+ ![Configure module to use GPU 2](media/azure-stack-edge-j-series-configure-gpu-modules/configure-compute-2.png)
- ![Configure module to use GPU 2](media/azure-stack-edge-j-series-configure-gpu-modules/configure-compute-2.png)
+4. In **Create IoT Edge service**, enter settings for your IoT Hub resource:
-4. It takes several minutes to create the IoT Hub resource. After the resource is created, in the **Configure Edge compute** tile, select **View config** to view the details of the IoT Hub resource.
+ |Field |Value |
+ |--------|---------|
+ |Subscription | Subscription used by the Azure Stack Edge resource. |
+ |Resource group | Resource group used by the Azure Stack Edge resource. |
+ |IoT Hub | Choose from **Create new** or **Use existing**. <br> By default, a Standard tier (S1) is used to create an IoT resource. To use a free tier IoT resource, create one and then select the existing resource. <br> In each case, the IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource. |
+ |Name | If you don't want to use the default name provided for a new IoT Hub resource, enter a different name. |
- ![Configure module to use GPU 4](media/azure-stack-edge-j-series-configure-gpu-modules/configure-compute-4.png)
+ When you finish the settings, select **Review + Create**. Review the settings for your IoT Hub resource, and select **Create**.
-5. Go to **Automatic device management > IoT Edge**.
+ ![Get started with compute 2](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-3.png)
- ![Configure module to use GPU 6](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-2.png)
+ Resource creation for an IoT Hub resource takes several minutes. After the resource is created, the **Overview** indicates the IoT Edge service is now running.
- In the right pane, you see the IoT Edge device associated with your Azure Stack Edge Pro device. This corresponds to the IoT Edge device you created in the previous step when creating the IoT Hub resource.
-
-6. Select this IoT Edge device.
+ ![Get started with compute 3](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-4.png)
+
+5. To confirm the Edge compute role has been configured, select **Properties**.
+
+ ![Get started with compute 4](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-5.png)
+
+6. In **Properties**, select the link for **IoT Edge device**.
+
+ ![Configure module to use GPU 6](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-2.png)
+
+ In the right pane, you see the IoT Edge device associated with your Azure Stack Edge Pro device. This device corresponds to the IoT Edge device you created when creating the IoT Hub resource.
+
+7. Select this IoT Edge device.
![Configure module to use GPU 7](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-3.png)
-7. Select **Set modules**.
+8. Select **Set modules**.
- ![Configure module to use GPU 8](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-4.png)
+ ![Configure module to use GPU 8](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-4.png)
-8. Select **+ Add** and then select **+ IoT Edge module**.
+9. Select **+ Add** and then select **+ IoT Edge module**.
![Configure module to use GPU 9](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-5.png)
-9. In the **Add IoT Edge Module** tab:
+10. In the **Add IoT Edge Module** tab:
1. Provide the **Image URI**. You will use the publicly available Nvidia module **Digits** here.
@@ -73,32 +86,32 @@ To configure a module to use the GPU on your Azure Stack Edge Pro device to run
![Configure module to use GPU 10](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-6.png)
-10. In the **Environment variables** tab, provide the Name of the variable and the corresponding value.
+11. In the **Environment variables** tab, provide the Name of the variable and the corresponding value.
1. To have the current module use one GPU on this device, use the NVIDIA_VISIBLE_DEVICES.
- 2. Set the value to 0 or 1. This ensures that atleast one GPU is used by the device for this module. When you set the value to 0, 1, that implies that both the GPUs on your device are being used by this module.
+ 2. Set the value to 0 or 1 to assign a single GPU on the device to this module. Set the value to 0, 1 to assign both GPUs on your device to this module.
- ![Configure module to use GPU 11](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-7.png)
+ ![Configure module to use GPU 11](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-7.png)
- For more information on environment variables that you can use with the Nvidia GPU, go to [nVidia container runtime](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
+ For more information on environment variables that you can use with the Nvidia GPU, go to [nVidia container runtime](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
> [!NOTE]
- > A GPU can only be mapped to one module. A module can however use one, both or no GPUs.
+ > A GPU can be mapped to only one module. A module can, however, use one, both, or no GPUs.
-11. Enter a name for your module. At this point you can choose to provide container create option and modify module twin settings or if done, select **Add**.
+12. Enter a name for your module. At this point, you can provide container create options and modify module twin settings, or select **Add** if you're done.
![Configure module to use GPU 12](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-8.png)
-12. Make sure that the module is running and select **Review + Create**.
+13. Make sure that the module is running and select **Review + Create**.
![Configure module to use GPU 13](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-9.png)
-13. In the **Review + Create** tab, the deployment options that you selected are displayed. Review the options and select **Create**.
+14. In the **Review + Create** tab, the deployment options that you selected are displayed. Review the options and select **Create**.
![Configure module to use GPU 14](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-10.png)
-14. Make a note of the **runtime status** of the module.
+15. Make a note of the **runtime status** of the module.
![Configure module to use GPU 15](media/azure-stack-edge-j-series-configure-gpu-modules/configure-gpu-11.png)
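
   The `NVIDIA_VISIBLE_DEVICES` setting from the **Environment variables** tab maps to the module's `env` section in the IoT Edge deployment manifest. A minimal sketch of that fragment is shown below; the module name `digits` and the image reference are illustrative assumptions, not values from this article.

   ```json
   {
     "modules": {
       "digits": {
         "type": "docker",
         "status": "running",
         "restartPolicy": "always",
         "settings": {
           "image": "nvidia/digits:latest"
         },
         "env": {
           "NVIDIA_VISIBLE_DEVICES": {
             "value": "0"
           }
         }
       }
     }
   }
   ```

   A `"value"` of `"0,1"` would expose both GPUs to this module.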
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-deploy-configure-compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-configure-compute.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: tutorial
-ms.date: 08/28/2020
+ms.date: 01/05/2021
ms.author: alkohli Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure. ---
@@ -33,7 +33,6 @@ In this tutorial, you learn how to:
## Prerequisites Before you set up a compute role on your Azure Stack Edge Pro device, make sure that:- - You've activated your Azure Stack Edge Pro device as described in [Activate your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
@@ -41,36 +40,36 @@ Before you set up a compute role on your Azure Stack Edge Pro device, make sure
To configure compute on your Azure Stack Edge Pro, you'll create an IoT Hub resource.
-1. In the Azure portal of your Azure Stack Edge resource, go to **Overview**. In the right-pane, on the **Compute** tile, select **Get started**.
+1. In the Azure portal of your Azure Stack Edge resource, go to **Overview**, and select **IoT Edge**.
- ![Get started with compute](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-1.png)
+ ![Get started with compute](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-1.png)
-2. On the **Configure Edge compute** tile, select **Configure compute**.
+2. In **Enable IoT Edge service**, select **Add**.
- ![Configure compute](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-2.png)
+ ![Configure compute](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-2.png)
-3. On the **Configure Edge compute** blade, input the following:
+3. In **Create IoT Edge service**, enter settings for your IoT Hub resource:
-
- |Field |Value |
- |---------|---------|
- |IoT Hub | Choose from **New** or **Existing**. <br> By default, a Standard tier (S1) is used to create an IoT resource. To use a free tier IoT resource, create one and then select the existing resource. <br> In each case, the IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource. |
- |Name |Enter a name for your IoT Hub resource. |
+ |Field |Value |
+ |--------|---------|
+ |Subscription | Subscription used by the Azure Stack Edge resource. |
+ |Resource group | Resource group used by the Azure Stack Edge resource. |
+ |IoT Hub | Choose from **Create new** or **Use existing**. <br> By default, a Standard tier (S1) is used to create an IoT resource. To use a free tier IoT resource, create one and then select the existing resource. <br> In each case, the IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource. |
+ |Name | If you don't want to use the default name provided for a new IoT Hub resource, enter a different name. |
![Get started with compute 2](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-3.png)
-4. Select **Create**. The IoT Hub resource creation takes several minutes. After the IoT Hub resource is created, the **Configure compute** tile updates to show the compute configuration.
+4. When you finish the settings, select **Review + Create**. Review the settings for your IoT Hub resource, and select **Create**.
+
+ Resource creation for an IoT Hub resource takes several minutes. After the resource is created, the **Overview** indicates the IoT Edge service is now running.
![Get started with compute 3](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-4.png)
-5. To confirm that the Edge compute role has been configured, select **View Compute** on the **Configure compute** tile.
-
- ![Get started with compute 4](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-5.png)
+5. To confirm the Edge compute role has been configured, select **Properties**.
- > [!NOTE]
- > If the **Configure Compute** dialog is closed before the IoT Hub is associated with the Azure Stack Edge Pro device, the IoT Hub gets created but is not shown in the compute configuration.
-
- When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. An IoT Edge Runtime is also running on this IoT Edge device. At this point, only the Linux platform is available for your IoT Edge device.
+ ![Get started with compute 4](./media/azure-stack-edge-j-series-deploy-configure-compute/configure-compute-5.png)
+
+ When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. An IoT Edge Runtime is also running on this IoT Edge device. At this point, only the Linux platform is available for your IoT Edge device.
## Add shares
@@ -90,11 +89,11 @@ For the simple deployment in this tutorial, you'll need two shares: one Edge sha
![Add an Edge share](./media/azure-stack-edge-j-series-deploy-configure-compute/add-edge-share-1.png)
- If you created a local NFS share, use the following remote sync (rsync) command option to copy files onto the share:
+ If you created a local NFS share, use the following remote sync (`rsync`) command to copy files onto the share:
`rsync <source file path> <destination file path>`
- For more information about the `rsync` command, go to [Rsync documentation](https://www.computerhope.com/unix/rsync.htm).
+ For more information about the `rsync` command, go to [`Rsync` documentation](https://www.computerhope.com/unix/rsync.htm).
> [!NOTE] > To mount NFS share to compute, the compute network must be configured on same subnet as NFS Virtual IP address. For details on how to configure compute network, go to [Enable compute network on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
@@ -150,15 +149,15 @@ To verify that the module is running, do the following:
1. In File Explorer, connect to both the Edge local and Edge shares you created previously.
- ![Verify data transform](./media/azure-stack-edge-j-series-deploy-configure-compute/verify-data-2.png)
+ ![Verify data transform - 1](./media/azure-stack-edge-j-series-deploy-configure-compute/verify-data-2.png)
1. Add data to the local share.
- ![Verify data transform](./media/azure-stack-edge-j-series-deploy-configure-compute/verify-data-3.png)
+ ![Verify data transform - 2](./media/azure-stack-edge-j-series-deploy-configure-compute/verify-data-3.png)
The data gets moved to the cloud share.
- ![Verify data transform](./media/azure-stack-edge-j-series-deploy-configure-compute/verify-data-4.png)
+ ![Verify data transform -3](./media/azure-stack-edge-j-series-deploy-configure-compute/verify-data-4.png)
The data is then pushed from the cloud share to the storage account. To view the data, you can use Storage Explorer.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-mini-r-deploy-prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-deploy-prep.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: tutorial
-ms.date: 01/04/2021
+ms.date: 01/05/2021
ms.author: alkohli Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Mini R device so I can use it to transfer data to Azure. ---
@@ -85,7 +85,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
1. Use your Microsoft Azure credentials to sign in to the Azure portal at this URL: [https://portal.azure.com](https://portal.azure.com).
-2. In the left-pane, select **+ Create a resource**. Search for and select **Azure Stack Edge / Data Box Gateway**. Select **Create**.
+2. In the left pane, select **+ Create a resource**. Search for and select **Azure Stack Edge / Data Box Gateway**. Select **Create**.
3. Pick the subscription that you want to use for the Azure Stack Edge Pro device. Select the country to where you want to ship this physical device. Select **Show devices**.
@@ -97,7 +97,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
[![Create a resource 2](media/azure-stack-edge-mini-r-deploy-prep/create-resource-2.png)](media/azure-stack-edge-mini-r-deploy-prep/create-resource-2.png#lightbox)
-6. On the **Basics** tab, enter or select the following **Project details**.
+5. On the **Basics** tab, enter or select the following **Project details**.
|Setting |Value | |---------|---------|
@@ -105,7 +105,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
|Resource group |Select an existing group or create a new group.<br>Learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md). |
-7. Enter or select the following **Instance details**.
+6. Enter or select the following **Instance details**.
|Setting |Value | |---------|---------|
@@ -115,25 +115,25 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
![Create a resource 4](media/azure-stack-edge-mini-r-deploy-prep/create-resource-4.png)
-8. Select **Next: Shipping address**.
+7. Select **Next: Shipping address**.
- If you already have a device, select the combo box for **I already have a device**.
- ![Create a resource 5](media/azure-stack-edge-mini-r-deploy-prep/create-resource-5.png)
+ ![Create a resource 5](media/azure-stack-edge-mini-r-deploy-prep/create-resource-5.png)
- If this is the new device that you are ordering, enter the contact name, company, address to ship the device to, and contact information.
- ![Create a resource 6](media/azure-stack-edge-mini-r-deploy-prep/create-resource-6.png)
+ ![Create a resource 6](media/azure-stack-edge-mini-r-deploy-prep/create-resource-6.png)
-9. Select **Next: Tags**. Optionally provide tags to categorize resources and consolidate billing. Select **Next: Review + create**.
+8. Select **Next: Tags**. Optionally provide tags to categorize resources and consolidate billing. Select **Next: Review + create**.
-10. On the **Review + create** tab, review the **Pricing details**, **Terms of use**, and the details for your resource. Select the combo box for **I have reviewed the privacy terms**.
+9. On the **Review + create** tab, review the **Pricing details**, **Terms of use**, and the details for your resource. Select the combo box for **I have reviewed the privacy terms**.
![Create a resource 7](media/azure-stack-edge-mini-r-deploy-prep/create-resource-7.png) You're also notified that during resource creation, a Managed Service Identity (MSI) is enabled that lets you authenticate to cloud services. This identity exists for as long as the resource exists.
-8. Select **Create**.
+10. Select **Create**.
The resource creation takes a few minutes. An MSI is also created that lets the Azure Stack Edge device communicate with the resource provider in Azure.
@@ -149,19 +149,19 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Mini R device with the resource. You can get this key now while you are in the Azure portal.
-1. Select the resource that you created. Select **Overview** and then select **Device setup**.
+1. Select the resource you created, and select **Overview**.
- ![Select Device setup](media/azure-stack-edge-mini-r-deploy-prep/azure-stack-edge-resource-2.png)
+ ![Select Device setup](media/azure-stack-edge-mini-r-deploy-prep/azure-stack-edge-resource-2.png)
2. On the **Activate** tile, provide a name for the Azure Key Vault, or accept the default name. The key vault name can be between 3 and 24 characters. A key vault is created for each Azure Stack Edge resource that is activated with your device. The key vault lets you store and access secrets. For example, the Channel Integrity Key (CIK) for the service is stored in the key vault.
- Once you have specified a key vault name, select **Generate key** to create an activation key.
+ Once you've specified a key vault name, select **Generate activation key** to create an activation key.
[![Get activation key](media/azure-stack-edge-mini-r-deploy-prep/azure-stack-edge-resource-3.png)](media/azure-stack-edge-mini-r-deploy-prep/azure-stack-edge-resource-3.png#lightbox)
- Wait a few minutes as the key vault and activation key are created. Select the copy icon to copy the key and save it for later use.
+ Wait a few minutes while the key vault and activation key are created. Select the copy icon to copy the key and save it for later use.
> [!IMPORTANT] > - The activation key expires three days after it is generated.
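
The 3-to-24-character limit on the key vault name can be checked before you submit it; here's a minimal sketch (the sample name is an assumption, and only length is validated, not other Azure naming rules).

```shell
# Check a proposed key vault name against the 3-24 character length rule.
name="edge-mini-r-vault"
len=${#name}
if [ "$len" -ge 3 ] && [ "$len" -le 24 ]; then
  echo "length ok"
else
  echo "invalid length: $len"
fi
```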
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-pro-r-deploy-prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-deploy-prep.md
@@ -7,13 +7,13 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: tutorial
-ms.date: 12/16/2020
+ms.date: 01/04/2021
ms.author: alkohli Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro R so I can use it to transfer data to Azure. --- # Tutorial: Prepare to deploy Azure Stack Edge Pro R
-This is the first tutorial in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro R. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource. The tutorial uses a 1-node Azure Stack Edge Pro R device shipped with an Uninterruptible Power Supply (UPS).
+This tutorial is the first in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro R. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource. The tutorial uses a 1-node Azure Stack Edge Pro R device shipped with an Uninterruptible Power Supply (UPS).
You need administrator privileges to complete the setup and configuration process. The portal preparation takes less than 10 minutes.
@@ -32,7 +32,7 @@ To deploy Azure Stack Edge Pro R, refer to the following tutorials in the prescr
| --- | --- | | **Preparation** |These steps must be completed in preparation for the upcoming deployment. | | **[Deployment configuration checklist](#deployment-configuration-checklist)** |Use this checklist to gather and record information before and during the deployment. |
-| **[Deployment prerequisites](#prerequisites)** |These validate the environment is ready for deployment. |
+| **[Deployment prerequisites](#prerequisites)** |These prerequisites validate that the environment is ready for deployment. |
| | | |**Deployment tutorials** |These tutorials are required to deploy your Azure Stack Edge Pro R device in production. | |**[1. Prepare the Azure portal for device](azure-stack-edge-pro-r-deploy-prep.md)** |Create and configure your Azure Stack Edge resource before you install an Azure Stack Box Edge physical device. |
@@ -42,7 +42,7 @@ To deploy Azure Stack Edge Pro R, refer to the following tutorials in the prescr
|**[5. Configure device settings](azure-stack-edge-pro-r-deploy-set-up-device-update-time.md)** |Assign a device name and DNS domain, configure update server and device time. | |**[6. Configure security settings](azure-stack-edge-pro-r-deploy-configure-certificates-vpn-encryption.md)** |Configure certificates, VPN, encryption-at-rest for your device. Use device generated certificates or bring your own certificates. | |**[7. Activate the device](azure-stack-edge-pro-r-deploy-activate.md)** |Use the activation key from service to activate the device. The device is ready to set up SMB or NFS shares or connect via REST. |
-|**[8. Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md)** |Configure the compute role on your device. This will also create a Kubernetes cluster. |
+|**[8. Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md)** |Configure the compute role on your device. A Kubernetes cluster is also created. |
You can now begin to set up the Azure portal.
@@ -104,7 +104,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
|Setting |Value | |---------|---------|
- |Subscription |This is automatically populated based on the earlier selection. Subscription is linked to your billing account. |
+ |Subscription |The subscription is automatically populated based on the earlier selection. The subscription is linked to your billing account. |
|Resource group |Select an existing group or create a new group.<br>Learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md). | 7. Enter or select the following **Instance details**.
@@ -145,7 +145,7 @@ After the resource is successfully created and deployed, you're notified. Select
After the order is placed, Microsoft reviews the order and reaches out to you (via email) with shipping details.
-<!--![Notification for review of the Azure Stack Edge Pro order](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-2.png)-->
+<!--![Notification for review of the Azure Stack Edge Pro order](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-2.png) - If this is restored, it must go above "After the resource is successfully created." The azure-stack-edge-resource-1.png would seem superfluous in that case.-->
If you run into any issues during the order process, see [Troubleshoot order issues](azure-stack-edge-troubleshoot-ordering.md).
@@ -153,20 +153,17 @@ If you run into any issues during the order process, see [Troubleshoot order iss
After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Pro device with the resource. You can get this key now while you are in the Azure portal.
-1. Select the resource that you created. Select **Overview** and then select **Device setup**.
+1. Select the resource that you created, and select **Overview**.
- ![Select Device setup](media/azure-stack-edge-pro-r-deploy-prep/azure-stack-edge-resource-2.png)
+2. In the right pane, provide a name for the Azure Key Vault or accept the default name. The key vault name can be between 3 and 24 characters.
-2. On the **Activate** tile, provide a name for the Azure Key Vault or accept the default name. The key vault name can be between 3 and 24 characters.
+ A key vault is created for each Azure Stack Edge resource that is activated with your device. The key vault lets you store and access secrets; for example, the Channel Integrity Key (CIK) for the service is stored in the key vault.
- A key vault is created for each Azure Stack Edge resource that is activated with your device. The key vault lets you store and access secrets, for example, the Channel Integrity Key (CIK) for the service is stored in the key vault.
+ Once you've specified a key vault name, select **Generate activation key** to create an activation key.
- Once you have specified a key vault name, select **Generate key** to create an activation key.
-
- ![Get activation key](media/azure-stack-edge-pro-r-deploy-prep/azure-stack-edge-resource-3.png)
-
- Wait a few minutes as the key vault and activation key are created. Select the copy icon to copy the key and save it for later use.
+ ![Get activation key](media/azure-stack-edge-pro-r-deploy-prep/azure-stack-edge-resource-3.png)
+ Wait a few minutes while the key vault and activation key are created. Select the copy icon to copy the key and save it for later use.<!--Verify that the new screen has a copy icon.-->
> [!IMPORTANT] > - The activation key expires three days after it is generated.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-create-and-manage-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-and-manage-users.md
@@ -4,7 +4,7 @@ description: Create and manage users of sensors and the on-premises management c
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 12/21/2020
+ms.date: 01/03/2021
ms.topic: article ms.service: azure ---
@@ -15,7 +15,7 @@ This article describes how to create and manage users of sensors and the on-prem
Features are also available to track user activity and enable Active Directory sign-in.
-By default, each sensor and on-premises management console is installed with a *cyberx and support* user. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create additional users for security analysts and read-only users.
+By default, each sensor and on-premises management console is installed with *cyberx* and *support* users. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for security analysts and read-only users.
## Role-based permissions The following user roles are available:
@@ -84,8 +84,8 @@ This section describes how to define users. Cyberx, support, and administrator u
To define a user: 1. From the left pane for the sensor or the on-premises management console, select **Users**.
-2. In the **Users** window, select **Create User**.
-3. On the **Create User** pane, define the following parameters:
+1. In the **Users** window, select **Create User**.
+1. On the **Create User** pane, define the following parameters:
- **Username**: Enter a username. - **Email**: Enter the user's email address.
@@ -117,7 +117,7 @@ To access the command:
1. Sign in to the CLI for the sensor or on-premises management console by using Defender for IoT administrative credentials.
-2. Enter `sudo nano /var/cyberx/properties/authentication`.
+1. Enter `sudo nano /var/cyberx/properties/authentication`.
```azurecli-interactive infinity_session_expiration = true
@@ -134,7 +134,6 @@ To disable the feature, change `infinity_session_expiration = true` to `infinity
To update sign-out counting periods, adjust the `= <number>` value to the required time. - ## Track user activity You can track user activity in the event timeline on each sensor. The timeline displays the event or affected device, and the time and date that the user carried out the activity.
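
Instead of editing the properties file interactively in `nano`, the same change can be sketched non-interactively. The key name and value come from the fragment above; the `sed -i` usage assumes GNU sed, and a copy under `/tmp` stands in for `/var/cyberx/properties/authentication` so the sketch is self-contained.

```shell
# Stand-in copy of /var/cyberx/properties/authentication for illustration.
auth_file=/tmp/authentication
printf 'infinity_session_expiration = true\n' > "$auth_file"

# Flip infinity_session_expiration from true to false in place.
sed -i 's/^infinity_session_expiration = true$/infinity_session_expiration = false/' "$auth_file"
grep infinity_session_expiration "$auth_file"
```

On the appliance itself, run the `sed` command with `sudo` against the real file path.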
@@ -166,11 +165,11 @@ To configure Active Directory:
:::image type="content" source="media/how-to-setup-active-directory/ad-system-settings-v2.png" alt-text="View your Active Directory system settings.":::
-2. On the **System Settings** pane, select **Active Directory**.
+1. On the **System Settings** pane, select **Active Directory**.
:::image type="content" source="media/how-to-setup-active-directory/ad-configurations-v2.png" alt-text="Edit your Active Directory configurations.":::
-3. In the **Edit Active Directory Configuration** dialog box, select **Active Directory Integration Enabled** > **Save**. The **Edit Active Directory Configuration** dialog box expands, and you can now enter the parameters to configure Active Directory.
+1. In the **Edit Active Directory Configuration** dialog box, select **Active Directory Integration Enabled** > **Save**. The **Edit Active Directory Configuration** dialog box expands, and you can now enter the parameters to configure Active Directory.
:::image type="content" source="media/how-to-setup-active-directory/ad-integration-enabled-v2.png" alt-text="Enter the parameters to configure Active Directory.":::
@@ -179,7 +178,7 @@ To configure Active Directory:
> - For all the Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Active Directory use uppercase.
> - You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
-4. Set the Active Directory server parameters, as follows:
+1. Set the Active Directory server parameters, as follows:
| Server parameter | Description |
|--|--|
@@ -189,9 +188,79 @@ To configure Active Directory:
| Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. |
| Trusted domains | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted domains only for users who were defined under users. |
-5. Select **Save**.
+1. Select **Save**.
+
+1. To add a trusted server, select **Add Server** and configure another server.
+
+## Resetting a user's password for the sensor or on-premises management console
+
+### CyberX or Support user
+
+Only the **CyberX** and **Support** users have access to the **Password recovery** feature. If the **CyberX** or **Support** user forgets their password, they can reset it via the **Password recovery** option on the Defender for IoT sign-in page.
+
+To reset the password for a CyberX or Support user:
+
+1. On the Defender for IoT sign-in screen, select **Password recovery**. The **Password recovery** screen opens.
+
+1. Select either **CyberX** or **Support**, and copy the unique identifier.
+
+1. Navigate to the Azure portal and select **Sites and Sensors**.
+
+1. Select the **Subscription Filter** icon :::image type="icon" source="media/password-recovery-images/subscription-icon.png" border="false"::: from the top toolbar, and select the subscription your sensor is connected to.
+
+1. Select the **Recover on-premises management console password** tab.
+
+ :::image type="content" source="media/password-recovery-images/recover-button.png" alt-text="Select the recover on-premises management button to download the recovery file.":::
+
+1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded.
+
+ > [!NOTE]
+ > Don't alter the password recovery file. It's a signed file and won't work if you tamper with it.
+
+1. On the **Password recovery** screen, select **Upload**. The **Upload Password Recovery File** window opens.
+
+ :::image type="content" source="media/password-recovery-images/upload.png" alt-text="Upload your recovery file to get a new password.":::
+
+1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` to the window.
+
+ > [!NOTE]
+ > An error message may appear indicating the file is invalid. To fix this error message, ensure you selected the right subscription before downloading the `password_recovery.zip` and download it again.
+
+1. Select **Next**. Your username and the system-generated password for your management console then appear.
+
+### Administrator, Security Analyst, and Read Only users
+
+Read Only and Security Analyst users can't reset their own passwords and need to contact a user with the Administrator, Support, or CyberX role to reset their password. An Administrator user must contact the **CyberX** or **Support** user to reset their password.
+
+To reset a user's password on the Sensor:
+
+1. An Administrator, Support, or CyberX role user should sign in to the sensor.
+
+1. Select **Users** from the left-hand panel.
+
+ :::image type="content" source="media/password-recovery-images/sensor-page.png" alt-text="Select the user option from the left side pane.":::
+
+1. Locate the user and select **Edit** from the **Actions** dropdown menu.
+
+ :::image type="content" source="media/password-recovery-images/edit.png" alt-text="select edit from the actions dropdown menu.":::
+
+1. Enter the new password in the **New Password** and **Confirm New Password** fields.
+
+1. Select **Update**.
+
+To reset a user's password on the on-premises management console:
+
+1. An Administrator, Support, or CyberX role user should sign in to the on-premises management console.
+
+1. Select **Users** from the left-hand panel.
+
+ :::image type="content" source="media/password-recovery-images/console-page.png" alt-text="On the left panel select the user's option.":::
+
+1. Locate your user and select the edit icon :::image type="icon" source="media/password-recovery-images/edit-icon.png" border="false":::.
+
+1. Enter the new password in the **New Password** and **Confirm New Password** fields.
-6. To add a trusted server, select **Add Server** and configure another server.
+1. Select **Update**.
## See also
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-troubleshoot-the-sensor-and-on-premises-management-console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
@@ -4,7 +4,7 @@ description: Troubleshoot your sensor and on-premises management console to elim
author: shhazam-ms manager: rkarlin ms.author: shhazam
-ms.date: 12/12/2020
+ms.date: 1/3/2021
ms.topic: article ms.service: azure ---
@@ -22,23 +22,33 @@ This article describes basic troubleshooting tools for the sensor and the on-pre
### Investigate password failure at initial sign-in
-When you're signing in to a preconfigured Arrow sensor for the first time, you'll need to perform the following password recovery:
+When signing in to a preconfigured Arrow sensor for the first time, you'll need to perform password recovery.
-1. On the Defender for IoT sign-in screen, select the **Password Recovery** option.
+To recover your password:
- The **Password Recovery** screen opens. There, you're prompted to select the user and subscription, and you're given a unique identifier.
+1. On the Defender for IoT sign-in screen, select **Password recovery**. The **Password recovery** screen opens.
-1. Go to the Defender for IoT **Sites and sensors** page and select the **Recover my password** tab.
+1. Select either **CyberX** or **Support**, and copy the unique identifier.
-1. Enter the unique identifier that you received on the **Password Recovery** screen and select **Recover**. The `password_recovery.zip` file
- is downloaded.
+1. Navigate to the Azure portal and select **Sites and Sensors**.
- > [!NOTE]
- > Don't alter the activation file. It's a signed file and won't work if you tamper with it.
+1. Select the **Recover on-premises management console password** tab.
+
+ :::image type="content" source="media/password-recovery-images/recover-button.png" alt-text="Select the recover on-premises management button to download the recovery file.":::
+
+1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded.
+
+ > [!NOTE]
+ > Don't alter the password recovery file. It's a signed file and won't work if you tamper with it.
+
+1. On the **Password recovery** screen, select **Upload**. The **Upload Password Recovery File** window opens.
+
+1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` to the window.
-1. On the **Password Recovery** screen, upload the `password_recovery.zip` file and select **Next**.
+1. Select **Next**. Your username and the system-generated password for your management console then appear.
-You then receive your system-generated password for your management console.
+ > [!NOTE]
+ > When you sign in to a sensor or on-premises management console for the first time, it's linked to the subscription you connected it to. If you need to reset the password for the CyberX or Support user, you'll need to select that subscription. For more information on recovering a CyberX or Support user password, see [Resetting a user's password for the sensor or on-premises management console](how-to-create-and-manage-users.md#resetting-a-users-password-for-the-sensor-or-on-premises-management-console).
### Investigate a lack of traffic
@@ -60,35 +70,35 @@ To check system performance:
:::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/dashboard-view-v2.png" alt-text="Screenshot of a sample dashboard.":::
-2. From the side menu, select **Devices**.
+1. From the side menu, select **Devices**.
-3. In the **Devices** window, make sure devices are being discovered.
+1. In the **Devices** window, make sure devices are being discovered.
:::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/discovered-devices.png" alt-text="Ensure that devices are discovered.":::
-4. From the side menu, select **Data Mining**.
+1. From the side menu, select **Data Mining**.
-5. In the **Data Mining** window, select **ALL** and generate a report.
+1. In the **Data Mining** window, select **ALL** and generate a report.
:::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/new-report-generated.png" alt-text="Generate a new report by using data mining.":::
-6. Make sure the report contains data.
+1. Make sure the report contains data.
:::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/new-report-generated.png" alt-text="Ensure that the report contains data.":::
-7. From the side menu, select **Trends & Statistics**.
+1. From the side menu, select **Trends & Statistics**.
-8. In the **Trends & Statistics** window, select **Add Widget**.
+1. In the **Trends & Statistics** window, select **Add Widget**.
:::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/add-widget.png" alt-text="Add a widget by selecting it.":::
-9. Add a widget and make sure it shows data.
+1. Add a widget and make sure it shows data.
:::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/widget-data.png" alt-text="Ensure that the widget is showing data.":::
-10. From the side menu, select **Alerts**. The **Alerts** window appears.
+1. From the side menu, select **Alerts**. The **Alerts** window appears.
-11. Make sure the alerts were created.
+1. Make sure the alerts were created.
:::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/alerts-created.png" alt-text="Ensure that alerts were created.":::
@@ -149,9 +159,9 @@ To fix the configuration:
1. Right-click the cloud icon on the device map and select **Export IP Addresses**. Copy the public ranges that are private, and add them to the subnet list. For more information, see [Configure subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets).
-2. Generate a new data-mining report for internet connections.
+1. Generate a new data-mining report for internet connections.
-3. In the data-mining report, select :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/administrator-mode.png" border="false"::: to enter the administrator mode and delete the IP addresses of your ICS devices.
+1. In the data-mining report, select :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/administrator-mode.png" border="false"::: to enter the administrator mode and delete the IP addresses of your ICS devices.
### Tweak the sensor's quality of service
@@ -174,7 +184,7 @@ To tweak the quality of service:
> [!NOTE]
> For a physical appliance, use the `em1` interface.
-2. To clear interface limitation, enter `sudo cyberx-xsense-limit-interface -i eth0 -l 1mbps -c`.
+1. To clear interface limitation, enter `sudo cyberx-xsense-limit-interface -i eth0 -l 1mbps -c`.
## On-premises management console troubleshooting tools
@@ -198,7 +208,7 @@ To tweak the quality of service:
1. Sign in as a Defender for IoT user.
-2. Verify the default values:
+1. Verify the default values:
```bash grep \"notifications\" /var/cyberx/properties/management.properties
@@ -211,20 +221,20 @@ To tweak the quality of service:
notifications.max_time_to_report=10 (seconds) ```
-3. Edit the default settings:
+1. Edit the default settings:
```bash
sudo nano /var/cyberx/properties/management.properties
```
-4. Edit the settings of the following lines:
+1. Edit the settings of the following lines:
```bash
notifications.max_number_to_report=50
notifications.max_time_to_report=10 (seconds)
```
-5. Save the changes. No restart is required.
+1. Save the changes. No restart is required.
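The edit in the steps above can also be sketched non-interactively. The file path and the two `notifications.*` keys come from the article; the use of `sed` instead of `nano`, the local file name, and the example value `100` in this sketch are illustrative only:

```shell
# Sketch only: edit a local copy rather than the real file at
# /var/cyberx/properties/management.properties on the management console.
FILE=management.properties
printf 'notifications.max_number_to_report=50\nnotifications.max_time_to_report=10\n' > "$FILE"

# Raise the maximum number of notifications reported per interval (example value):
sed -i 's/^notifications\.max_number_to_report=.*/notifications.max_number_to_report=100/' "$FILE"

cat "$FILE"
```

On the appliance itself, you would run the same substitution with `sudo` against the real path; as noted above, no restart is required after saving.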
## Export information for troubleshooting
@@ -234,13 +244,13 @@ To export logs:
1. On the left pane, select **System Settings**.
-2. Select **Export Logs**.
+1. Select **Export Logs**.
:::image type="content" source="media/how-to-export-information-for-troubleshooting/export-a-log.png" alt-text="Export a log to system support.":::
-3. In the **File Name** box, enter the file name that you want to use for the log export. The default is the current date.
+1. In the **File Name** box, enter the file name that you want to use for the log export. The default is the current date.
-4. To define what data you want to export, select the data categories:
+1. To define what data you want to export, select the data categories:
| Export category | Description |
|--|--|
@@ -256,12 +266,12 @@ To export logs:
| **Web Application Logs** | Select this option to get information about all the requests sent from the application's web interface. |
| **System Backup** | Select this option to export a backup of all the system data for investigating the exact state of the system. |
| **Dissection Statistics** | Select this option to allow advanced inspection of protocol statistics. |
- | **Database Logs** | Select this option to export logs from the system database. Investigating system logs assists in identifying system problems. |
+ | **Database Logs** | Select this option to export logs from the system database. Investigating system logs helps identify system problems. |
| **Configuration** | Select this option to export information about all the configurable parameters to make sure everything was configured correctly. |
-5. To select all the options, select **Select All** next to **Choose Categories**.
+1. To select all the options, select **Select All** next to **Choose Categories**.
-6. Select **Export Logs**.
+1. Select **Export Logs**.
The exported logs are added to the **Archived Logs** list. Send the OTP to the support team in a separate message and medium from the exported logs. The support team will be able to extract exported logs only by using the unique OTP that's used to encrypt the logs.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/release-notes.md new file mode 100644
@@ -0,0 +1,93 @@
+---
+title: What's new in Azure Defender for IoT
+description: This article lets you know what's new in the latest release of Defender for IoT.
+
+services: defender-for-iot
+ms.service: defender-for-iot
+documentationcenter: na
+author: shhazam-ms
+manager: rkarlin
+editor: ''
+
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/03/2021
+ms.author: shhazam
+---
+
+# What's new
+
+Defender for IoT 10.0 provides feature enhancements that improve security, management, and usability.
+
+## Security
+
+Certificate and password recovery enhancements were made for this release.
+
+### Certificates
+
+This version lets you:
+
+- Upload SSL certificates directly to the sensors and on-premises management consoles.
+- Perform validation between the on-premises management console and connected sensors, and between a management console and a high-availability management console. Validation is based on expiration dates, root CA authenticity, and certificate revocation lists. If validation fails, the session won't continue.
+
+For upgrades:
+
+- There is no change in SSL certificate or validation functionality during the upgrade.
+- After upgrading, sensor and on-premises management console administrative users can replace SSL certificates, or activate SSL certificate validation from the System Settings, SSL Certificate window.
+
+For fresh installations:
+
+- During first-time sign-in, users are required to use either an SSL certificate (recommended) or a locally generated self-signed certificate (not recommended).
+- Certificate validation is turned on by default for fresh installations.
+
+### Password recovery
+
+Sensor and on-premises management console administrative users can now recover passwords from the Azure Defender for IoT portal. Previously, password recovery required intervention by the support team.
+
+## Onboarding
+
+### On-premises management console - committed devices
+
+Following initial sign-in to the on-premises management console, users are now required to upload an activation file. The file contains the aggregate number of devices to be monitored on the organizational network. This number is referred to as the number of committed devices.
+Committed devices are defined during the onboarding process on the Azure Defender for IoT portal, where the activation file is generated.
+First-time users and users upgrading are required to upload the activation file.
+After initial activation, the number of devices detected on the network might exceed the number of committed devices. This event might happen, for example, if you connect more sensors to the management console. If there is a discrepancy between the number of detected devices and the number of committed devices, a warning appears in the management console. If this event occurs, you should upload a new activation file.
+
+### Pricing page options
+
+The Pricing page lets you onboard new subscriptions to Azure Defender for IoT and define committed devices in your network.
+Additionally, the Pricing page now lets you manage existing subscriptions associated with a sensor and update your device commitment.
+
+### View and manage onboarded sensors
+
+A new Sites and Sensors portal page lets you:
+
+- Add descriptive information about the sensor. For example, a zone associated with the sensor, or free-text tags.
+- View and filter sensor information. For example, view details about sensors that are cloud connected or locally managed or view information about sensors in a specific zone.
+
+## Usability
+
+### Azure Sentinel new connector page
+
+The Azure Defender for IoT data connector page in Azure Sentinel has been redesigned. The data connector is now based on subscriptions rather than IoT Hubs, allowing customers to better manage their connection configuration to Azure Sentinel.
+
+### Azure portal permission updates
+
+Support for the Security Reader and Security Administrator roles has been added.
+
+## Other updates
+
+### Access group - zone permissions
+
+The on-premises management console Access Group rules no longer include the option to grant access to a specific zone. There is no change in defining rules that use sites, regions, and business units. Following the upgrade, Access Groups that contained rules allowing access to specific zones are modified to allow access to their parent site, including all its zones.
+
+### Terminology changes
+
+The term asset has been renamed to device in the sensor and on-premises management console, in reports, and in other solution interfaces.
+In sensor and on-premises management console alerts, the term Manage this Event has been renamed to Remediation Steps.
+
+## Next steps
+
+[Getting started with Defender for IoT](getting-started.md)
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-use-apis-sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-apis-sdks.md
@@ -127,13 +127,13 @@ Create and query twins:
```csharp // Initialize twin metadata
-BasicDigitalTwin updateTwinData = new BasicDigitalTwin();
+BasicDigitalTwin twinData = new BasicDigitalTwin();
twinData.Id = $"firstTwin";
twinData.Metadata.ModelId = "dtmi:com:contoso:SampleModel;1";
twinData.Contents.Add("data", "Hello World!");
try {
- await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>("firstTwin", updateTwinData);
+ await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>("firstTwin", twinData);
} catch (RequestFailedException rex) {
    Console.WriteLine($"Create twin error: {rex.Status}:{rex.Message}");
}
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/tutorial-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-code.md
@@ -62,9 +62,6 @@ Next, **add two dependencies to your project** that will be needed to work with
* [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true). Add the latest version. * [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure. Add version 1.2.2.
->[!NOTE]
-> There is currently a [known issue](troubleshoot-known-issues.md#issue-with-default-azure-credential-authentication-on-azureidentity-130) affecting the ability to use Azure.Identity version 1.3.0 with this tutorial. Please use version 1.2.2 while this issue persists.
- ## Get started with project code In this section, you will begin writing the code for your new app project to work with Azure Digital Twins. The actions covered include:
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-about https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-about.md
@@ -47,10 +47,9 @@ With Event Hubs, you can start with data streams in megabytes, and grow to gigab
## Rich ecosystem
-[Event Hubs for Apache Kafka ecosystems](event-hubs-for-kafka-ecosystem-overview.md) enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You do not need to set up, configure, and manage your own Kafka clusters.
-
-With a broad ecosystem available in various languages [.NET](https://github.com/Azure/azure-sdk-for-net/), [Java](https://github.com/Azure/azure-sdk-for-java/), [Python](https://github.com/Azure/azure-sdk-for-python/), [JavaScript](https://github.com/Azure/azure-sdk-for-js/), you can easily start processing your streams from Event Hubs. All supported client languages provide low-level integration. The ecosystem also provides you with seamless integration with Azure services like Azure Stream Analytics and Azure Functions and thus enables you to build serverless architectures.
+With a broad ecosystem based on the industry-standard AMQP 1.0 protocol and available in various languages ([.NET](https://github.com/Azure/azure-sdk-for-net/), [Java](https://github.com/Azure/azure-sdk-for-java/), [Python](https://github.com/Azure/azure-sdk-for-python/), [JavaScript](https://github.com/Azure/azure-sdk-for-js/)), you can easily start processing your streams from Event Hubs. All supported client languages provide low-level integration. The ecosystem also provides you with seamless integration with Azure services like Azure Stream Analytics and Azure Functions, and thus enables you to build serverless architectures.
+[Event Hubs for Apache Kafka ecosystems](event-hubs-for-kafka-ecosystem-overview.md) additionally enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters, or use a Kafka-as-a-service offering that isn't native to Azure.
## Key architecture components

Event Hubs contains the following [key components](event-hubs-features.md):
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-faq.md
@@ -148,7 +148,7 @@ For step-by-step instructions and more information on setting up an Event Hubs d
## Partitions

### How many partitions do I need?
-The number of partitions is specified at creation and must be between 1 and 32. The partition count isn't changeable, so you should consider long-term scale when setting partition count. Partitions are a data organization mechanism that relates to the downstream parallelism required in consuming applications. The number of partitions in an event hub directly relates to the number of concurrent readers you expect to have. For more information on partitions, see [Partitions](event-hubs-features.md#partitions).
+The number of partitions is specified at creation and must be between 1 and 32. The partition count isn't changeable in any tier except the [dedicated tier](event-hubs-dedicated-overview.md), so you should consider long-term scale when setting the partition count. Partitions are a data organization mechanism that relates to the downstream parallelism required in consuming applications. The number of partitions in an event hub directly relates to the number of concurrent readers you expect to have. For more information on partitions, see [Partitions](event-hubs-features.md#partitions).
You may want to set it to the highest possible value, which is 32, at the time of creation. Remember that having more than one partition results in events sent to multiple partitions without retaining the order, unless you configure senders to send only to a single partition out of the 32, leaving the remaining 31 partitions redundant. In the former case, you'll have to read events across all 32 partitions. In the latter case, there's no obvious additional cost apart from the extra configuration you have to make on Event Processor Host.
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
@@ -11,32 +11,49 @@ Azure Event Hubs is a scalable event processing service that ingests and process
This article builds on the information in the [overview article](./event-hubs-about.md), and provides technical and implementation details about Event Hubs components and features.
-## Namespace
-An Event Hubs namespace provides a unique scoping container, referenced by its [fully qualified domain name](https://en.wikipedia.org/wiki/Fully_qualified_domain_name), in which you create one or more event hubs or Kafka topics.
-
-## Event Hubs for Apache Kafka
-
-[This feature](event-hubs-for-kafka-ecosystem-overview.md) provides an endpoint that enables customers to talk to Event Hubs using the Kafka protocol. This integration provides customers a Kafka endpoint. This enables customers to configure their existing Kafka applications to talk to Event Hubs, giving an alternative to running their own Kafka clusters. Event Hubs for Apache Kafka supports Kafka protocol 1.0 and later.
+> [!TIP]
+> [The protocol support for **Apache Kafka** clients](event-hubs-for-kafka-ecosystem-overview.md) (versions 1.0 and later) provides network endpoints that enable applications built for Apache Kafka, with any client, to use Event Hubs. Most existing Kafka applications can simply be reconfigured to point to an Event Hubs namespace instead of a Kafka cluster bootstrap server.
+>
+>From the perspective of cost, operational effort, and reliability, Azure Event Hubs is a great alternative to deploying and operating your own Kafka and Zookeeper clusters and to Kafka-as-a-Service offerings not native to Azure.
+>
+> In addition to getting the same core functionality as of the Apache Kafka broker, you also get access to Azure Event Hub features like automatic batching and archiving via [Event Hubs Capture](event-hubs-capture-overview.md), automatic scaling and balancing, disaster recovery, cost-neutral availability zone support, flexible and secure network integration, and multi-protocol support including the firewall-friendly AMQP-over-WebSockets protocol.
-With this integration, you don't need to run Kafka clusters or manage them with Zookeeper. This also allows you to work with some of the most demanding features of Event Hubs like Capture, Auto-inflate, and Geo-disaster Recovery.
-This integration also allows applications like Mirror Maker or framework like Kafka Connect to work clusterless with just configuration changes.
+## Namespace
+An Event Hubs namespace provides DNS-integrated network endpoints and a range of access control and network integration management features, such as [IP filtering](event-hubs-ip-filtering.md), [virtual network service endpoint](event-hubs-service-endpoints.md), and [Private Link](private-link-service.md), and is the management container for one or more Event Hub instances (or topics, in Kafka parlance).
## Event publishers
-Any entity that sends data to an event hub is an event producer, or *event publisher*. Event publishers can publish events using HTTPS or AMQP 1.0 or Kafka 1.0 and later. Event publishers use a Shared Access Signature (SAS) token to identify themselves to an event hub, and can have a unique identity, or use a common SAS token.
+Any entity that sends data to an Event Hub is an *event publisher* (used synonymously with *event producer*). Event publishers can publish events using HTTPS, AMQP 1.0, or the Kafka protocol. Event publishers use Azure Active Directory based authorization with OAuth2-issued JWT tokens or an Event Hub-specific Shared Access Signature (SAS) token to gain publishing access.
### Publishing an event
-You can publish an event via AMQP 1.0, Kafka 1.0 (and later), or HTTPS. The Event Hubs service provides [REST API](/rest/api/eventhub/) and [.NET](event-hubs-dotnet-standard-getstarted-send.md), [Java](event-hubs-java-get-started-send.md), [Python](event-hubs-python-get-started-send.md), [JavaScript](event-hubs-node-get-started-send.md), and [Go](event-hubs-go-get-started-send.md) client libraries for publishing events to an event hub. For other runtimes and platforms, you can use any AMQP 1.0 client, such as [Apache Qpid](https://qpid.apache.org/).
+You can publish an event via AMQP 1.0, the Kafka protocol, or HTTPS. The Event Hubs service provides [REST API](/rest/api/eventhub/) and [.NET](event-hubs-dotnet-standard-getstarted-send.md), [Java](event-hubs-java-get-started-send.md), [Python](event-hubs-python-get-started-send.md), [JavaScript](event-hubs-node-get-started-send.md), and [Go](event-hubs-go-get-started-send.md) client libraries for publishing events to an event hub. For other runtimes and platforms, you can use any AMQP 1.0 client, such as [Apache Qpid](https://qpid.apache.org/).
+
+The choice to use AMQP or HTTPS is specific to the usage scenario. AMQP requires the establishment of a persistent bidirectional socket in addition to transport-level security (TLS) or SSL/TLS. AMQP has higher network costs when initializing the session, but HTTPS requires additional TLS overhead for every request. AMQP has significantly higher performance for frequent publishers and can achieve much lower latencies when used with asynchronous publishing code.
-You can publish events individually, or batched. A single publication (event data instance) has a limit of 1 MB, regardless of whether it is a single event or a batch. Publishing events larger than this threshold results in an error. It is a best practice for publishers to be unaware of partitions within the event hub and to only specify a *partition key* (introduced in the next section), or their identity via their SAS token.
+You can publish events individually or batched. A single publication has a limit of 1 MB, regardless of whether it is a single event or a batch. Publishing events larger than this threshold will be rejected.
-The choice to use AMQP or HTTPS is specific to the usage scenario. AMQP requires the establishment of a persistent bidirectional socket in addition to transport level security (TLS) or SSL/TLS. AMQP has higher network costs when initializing the session, however HTTPS requires additional TLS overhead for every request. AMQP has higher performance for frequent publishers.
+Event Hubs throughput is scaled by using partitions and throughput-unit allocations (see below). It is a best practice for publishers to remain unaware of the specific partitioning model chosen for an Event Hub and to only specify a *partition key* that is used to consistently assign related events to the same partition.
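The consistent-assignment guarantee can be pictured as a stable hash of the partition key modulo the partition count. This sketch is illustrative only; the service's actual hash function is internal to Event Hubs, but the property it provides is the same:

```python
import hashlib

def partition_for(partition_key: str, partition_count: int) -> int:
    """Map a partition key to a partition index deterministically.

    Illustrative only: Event Hubs uses its own internal hash, but the
    guarantee matches this sketch: equal keys always land on the same
    partition, so related events stay together and keep their order.
    """
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % partition_count
```

Publishers only supply the key (for example, a device or tenant ID); the broker owns the mapping, which is why publishers should not target partitions directly.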
![Partition keys](./media/event-hubs-features/partition_keys.png)
-Event Hubs ensures that all events sharing a partition key value are delivered in order, and to the same partition. If partition keys are used with publisher policies, then the identity of the publisher and the value of the partition key must match. Otherwise, an error occurs.
+Event Hubs ensures that all events sharing a partition key value are stored together and delivered in order of arrival. If partition keys are used with publisher policies, then the identity of the publisher and the value of the partition key must match. Otherwise, an error occurs.
+
+### Event Retention
+
+Published events are removed from an Event Hub based on a configurable, time-based retention policy. The default value and shortest possible retention period is 1 day (24 hours). For Event Hubs Standard, the maximum retention period is 7 days. For Event Hubs Dedicated, the maximum retention period is 90 days.
+
+> [!NOTE]
+> Event Hubs is a real-time event stream engine and is not designed to be used instead of a database and/or as a
+> permanent store for infinitely held event streams.
+>
+> The deeper the history of an event stream gets, the more you will need auxiliary indexes to find a particular historical slice of a given stream. Inspection of event payloads and indexing are not within the feature scope of Event Hubs (or Apache Kafka). Databases and specialized analytics stores and engines such as [Azure Data Lake Store](../data-lake-store/data-lake-store-overview.md), [Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-overview.md) and [Azure Synapse](../synapse-analytics/overview-what-is.md) are therefore far better suited for storing historic events.
+>
+> [Event Hubs Capture](event-hubs-capture-overview.md) integrates directly with Azure Blob Storage and Azure Data Lake Storage and, through that integration, also enables [flowing events directly into Azure Synapse](store-captured-data-data-warehouse.md).
+>
+> If you want to use the [Event Sourcing](https://docs.microsoft.com/azure/architecture/patterns/event-sourcing) pattern for your application, you should align your snapshot strategy with the retention limits of Event Hubs. Do not aim to rebuild materialized views from raw events starting at the beginning of time. You would surely come to regret such a strategy once your application is in production for a while and is well used, and your projection builder has to churn through years of change events while trying to catch up to the latest and ongoing changes.
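The snapshot guidance above can be sketched as: rebuild a materialized view from the latest snapshot plus only the events still inside the retention window, never from the beginning of time. The event shape and state store here are hypothetical:

```python
def rebuild_state(snapshot, retained_events):
    """Restore a materialized view from the latest snapshot plus the
    events still inside the retention window.

    Hypothetical sketch: `snapshot` is a previously persisted projection
    (a plain dict here) and each retained event is a key/value delta.
    """
    state = dict(snapshot)          # start from the snapshotted projection
    for event in retained_events:   # apply only the recent deltas
        state[event["key"]] = event["value"]
    return state
```

The practical rule this implies: persist snapshots at least as often as the retention period, so the events needed to catch up from the latest snapshot are always still available in the stream.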
+ ### Publisher policy
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-federation-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-federation-configuration.md
@@ -5,7 +5,7 @@ ms.topic: article
ms.date: 12/12/2020 ---
-# Configured replication tasks
+# Configured replication tasks - Azure Event Hubs
[!INCLUDE [messaging-configured-functions](../../includes/messaging-configured-functions.md)]
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-federation-patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-federation-patterns.md
@@ -52,7 +52,7 @@ partition](event-hubs-features.md#partitions).
> > In the EventProcessor, you set the position through the InitialOffsetProvider > on the EventProcessorOptions. With the other receiver APIs, the position is
-> passed through teh constructor.
+> passed through the constructor.
The pre-built replication function helpers [provided as
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-kafka-connect-debezium https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-kafka-connect-debezium.md
@@ -1,16 +1,25 @@
---
-title: Integrate Apache Kafka Connect on Azure Event Hubs (Preview) with Debezium for Change Data Capture
+title: Integrate Apache Kafka Connect on Azure Event Hubs with Debezium for Change Data Capture
description: This article provides information on how to use Debezium with Azure Event Hubs for Kafka. ms.topic: how-to author: abhirockzz ms.author: abhishgu
-ms.date: 08/11/2020
+ms.date: 01/06/2021
---
-# Integrate Apache Kafka Connect support on Azure Event Hubs (Preview) with Debezium for Change Data Capture
+# Integrate Apache Kafka Connect support on Azure Event Hubs with Debezium for Change Data Capture
**Change Data Capture (CDC)** is a technique used to track row-level changes in database tables in response to create, update, and delete operations. [Debezium](https://debezium.io/) is a distributed platform that builds on top of Change Data Capture features available in different databases (for example, [logical decoding in PostgreSQL](https://www.postgresql.org/docs/current/static/logicaldecoding-explanation.html)). It provides a set of [Kafka Connect connectors](https://debezium.io/documentation/reference/1.2/connectors/index.html) that tap into row-level changes in database table(s) and convert them into event streams that are then sent to [Apache Kafka](https://kafka.apache.org/).
+> [!WARNING]
+> Use of the Apache Kafka Connect framework as well as the Debezium platform and its connectors are **not eligible for product support through Microsoft Azure**.
+>
+> Apache Kafka Connect assumes that its dynamic configuration is held in compacted topics with otherwise unlimited retention. Azure Event Hubs [does not implement compaction as a broker feature](event-hubs-federation-overview.md#log-projections) and always imposes a time-based retention limit on retained events, based on the principle that Azure Event Hubs is a real-time event streaming engine and not a long-term data or configuration store.
+>
+> While the Apache Kafka project might be comfortable with mixing these roles, Azure believes that such information is best managed in a proper database or configuration store.
+>
+> Many Apache Kafka Connect scenarios will be functional, but these conceptual differences between Apache Kafka's and Azure Event Hubs' retention models may cause certain configurations not to work as expected.
+ This tutorial walks you through how to set up a change data capture-based system on Azure using [Azure Event Hubs](./event-hubs-about.md?WT.mc_id=devto-blog-abhishgu) (for Kafka), [Azure DB for PostgreSQL](../postgresql/overview.md), and Debezium. It will use the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html) to stream database modifications from PostgreSQL to Kafka topics in Azure Event Hubs. > [!NOTE]
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-kafka-connect-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-kafka-connect-tutorial.md
@@ -2,11 +2,20 @@
title: Integrate with Apache Kafka Connect- Azure Event Hubs | Microsoft Docs description: This article provides information on how to use Kafka Connect with Azure Event Hubs for Kafka. ms.topic: how-to
-ms.date: 06/23/2020
+ms.date: 01/06/2021
---
-# Integrate Apache Kafka Connect support on Azure Event Hubs (Preview)
-As ingestion for business needs increases, so does the requirement to ingest for various external sources and sinks. [Apache Kafka Connect](https://kafka.apache.org/documentation/#connect) provides such framework to connect and import/export data from/to any external system such as MySQL, HDFS, and file system through a Kafka cluster. This tutorial walks you through using Kafka Connect framework with Event Hubs.
+# Integrate Apache Kafka Connect support on Azure Event Hubs
+[Apache Kafka Connect](https://kafka.apache.org/documentation/#connect) is a framework to connect and import/export data from/to any external system such as MySQL, HDFS, and file systems through a Kafka cluster. This tutorial walks you through using the Kafka Connect framework with Event Hubs.
+
+> [!WARNING]
+> Use of the Apache Kafka Connect framework and its connectors is **not eligible for product support through Microsoft Azure**.
+>
+> Apache Kafka Connect assumes that its dynamic configuration is held in compacted topics with otherwise unlimited retention. Azure Event Hubs [does not implement compaction as a broker feature](event-hubs-federation-overview.md#log-projections) and always imposes a time-based retention limit on retained events, based on the principle that Azure Event Hubs is a real-time event streaming engine and not a long-term data or configuration store.
+>
+> While the Apache Kafka project might be comfortable with mixing these roles, Azure believes that such information is best managed in a proper database or configuration store.
+>
+> Many Apache Kafka Connect scenarios will be functional, but these conceptual differences between Apache Kafka's and Azure Event Hubs' retention models may cause certain configurations not to work as expected.
This tutorial walks you through integrating Kafka Connect with an event hub and deploying basic FileStreamSource and FileStreamSink connectors. This feature is currently in preview. While these connectors are not meant for production use, they demonstrate an end-to-end Kafka Connect scenario where Azure Event Hubs acts as a Kafka broker.
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/ism-protected/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/index.md
@@ -6,7 +6,7 @@ ms.topic: sample
--- # Overview of the Australian Government ISM PROTECTED blueprint sample
-ISM Governance blueprint sample provides a set of governance guard-rails using [Azure Policy](../../../policy/overview.md) which help towards ISM PROTECTED attestation (Feb 2020 version). This Blueprint helps customers deploy a core set of policies for any Azure-deployed architecture requiring accreditation or compliance with the ISM framework. The control mapping section provides details on policies included within this initiative and how these policies help meet various controls defined by ISM framework. When assigned to an architecture, resources will be evaluated by Azure Policy for non-compliance with assigned policies.
+The ISM Governance blueprint sample provides a set of governance guard-rails using [Azure Policy](../../../policy/overview.md) that help towards ISM PROTECTED attestation (Feb 2020 version). This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture requiring accreditation or compliance with the ISM framework.
## Control mapping
@@ -30,4 +30,4 @@ Addition articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).\ No newline at end of file
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/pci-dss-3.2.1/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
@@ -10,10 +10,10 @@ The following article details how the Azure Blueprints PCI-DSS v3.2.1 blueprint
PCI-DSS v3.2.1 controls. For more information about the controls, see [PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf). The following mappings are to the **PCI-DSS v3.2.1:2018** controls. Use the navigation on the right
-to jump directly to a specific control mapping. Many of the mapped controls are implemented with an [Azure Policy](../../../policy/overview.md)
-initiative. To review the complete initiative, open **Policy** in the Azure portal and select the
-**Definitions** page. Then, find and select the **\[Preview\] Audit PCI v3.2.1:2018 controls and
-deploy specific VM Extensions to support audit requirements** built-in policy initiative.
+to jump directly to a specific control mapping. Many of the mapped controls are implemented with an
+[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
+**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **PCI
+v3.2.1:2018** built-in policy initiative.
> [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/pci-dss-3.2.1/deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/deploy.md
@@ -118,7 +118,7 @@ The following table provides a list of the blueprint artifact parameters:
|Artifact name|Artifact type|Parameter name|Description| |-|-|-|-|
-|\[Preview\] Audit PCI v3.2.1:2018 controls and deploy specific VM Extensions to support audit requirements|Policy Assignment|List of Resource Types | Audit diagnostic setting for selected resource types. Default value is all resources are selected|
+|PCI v3.2.1:2018|Policy Assignment|List of Resource Types | Audit diagnostic setting for selected resource types. Default value is all resources are selected|
|Allowed locations|Policy Assignment|List Of Allowed Locations|List of data center locations allowed for any resource to be deployed into. This list is customizable to the desired Azure locations globally. Select locations you wish to allow.| |Allowed Locations for resource groups|Policy Assignment |Allowed Location |This policy enables you to restrict the locations your organization can create resource groups in. Use to enforce your geo-compliance requirements.| |Deploy Auditing on SQL servers|Policy Assignment|Retention days|Data retention in number of days. Default value is 180 but PCI requires 365.|
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hbase/apache-hbase-advisor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/apache-hbase-advisor.md
@@ -12,13 +12,13 @@ ms.date: 01/03/2021
--- # Apache HBase advisories in Azure HDInsight
-This article describes several advisories that help you optimize Apache HBase performance in Azure HDInsight.
+This article describes several advisories to help you optimize Apache HBase performance in Azure HDInsight.
## Optimize HBase to read most recently written data
-When you use Apache HBase in Azure HDInsight, you can optimize the configuration of HBase for the scenario where your application reads the most recently written data. For high performance, it's optimal that HBase reads are to be served from memstore, instead of the remote storage.
+If your use case involves reading the most recently written data from HBase, this advisory can help you. For high performance, it's optimal for HBase reads to be served from the memstore instead of remote storage.
-The query advisory indicates that for a given column family in a table has > 75% reads that are getting served from memstore. This indicator suggests that even if a flush happens on the memstore the recent file needs to be accessed and that needs to be in cache. The data is first written to memstore the system accesses the recent data there. There's a chance that the internal HBase flusher threads detect that a given region has reached 128M (default) size and can trigger a flush. This scenario happens to even the most recent data that was written when the memstore was around 128M in size. Therefore, a later read of those recent records may require a file read rather than from memstore. Hence it is best to optimize that even recent data that is recently flushed can reside in the cache.
+The query advisory indicates that, for a given column family in a table, more than 75% of reads are being served from the memstore. This indicator suggests that even if a flush happens on the memstore, the most recent file needs to be accessed, and it needs to be in the cache. The data is first written to the memstore, and the system accesses the recent data there. There's a chance that the internal HBase flusher threads detect that a given region has reached the 128-MB (default) size and trigger a flush. This scenario happens even to the most recent data that was written when the memstore was around 128 MB in size. Therefore, a later read of those recent records may require a file read rather than being served from the memstore. Hence, it's best to ensure that even recently flushed data can reside in the cache.
To optimize the recent data in cache, consider the following configuration settings:
@@ -28,9 +28,9 @@ To optimize the recent data in cache, consider the following configuration setti
3. If you follow step 2 and set compactionThreshold, then change `hbase.hstore.compaction.max` to a higher value for example `100`, and also increase the value for the config `hbase.hstore.blockingStoreFiles` to higher value for example `300`.
-4. If you're sure that you need to read only in the recent data, set `hbase.rs.cachecompactedblocksonwrite` configuration to **ON**. This configuration tells the system that even if compaction happens, the data stays in cache. The configurations can be set at the family level also.
+4. If you're sure that you need to read only the recent data, set `hbase.rs.cachecompactedblocksonwrite` configuration to **ON**. This configuration tells the system that even if compaction happens, the data stays in cache. The configurations can be set at the family level also.
- In the HBase Shell, run the following command:
+ In the HBase shell, run the following command to set the configuration at the family level:
``` alter '<TableName>', {NAME => '<FamilyName>', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '300'}}
@@ -38,15 +38,15 @@ To optimize the recent data in cache, consider the following configuration setti
5. Block cache can be turned off for a given family in a table. Ensure that it's turned **ON** for families that have most recent data reads. By default, block cache is turned ON for all families in a table. In case you have disabled the block cache for a family and need to turn it ON, use the alter command from the hbase shell.
- These configurations help ensure that the data is in cache and that the recent data does not undergo compaction. If a TTL is possible in your scenario, then consider using date-tiered compaction. For more information, see [Apache HBase Reference Guide: Date Tiered Compaction](https://hbase.apache.org/book.html#ops.date.tiered)
+ These configurations help ensure that the data is available in cache and that the recent data does not undergo compaction. If a TTL is possible in your scenario, then consider using date-tiered compaction. For more information, see [Apache HBase Reference Guide: Date Tiered Compaction](https://hbase.apache.org/book.html#ops.date.tiered)
## Optimize the flush queue
-The optimize the flush queue advisory indicates that HBase flushes may need tuning. The flush handlers might not be high enough as configured.
+This advisory indicates that HBase flushes may need tuning. The current configuration for flush handlers may not be high enough to handle the write traffic, which may lead to slow flushes.
In the region server UI, notice if the flush queue grows beyond 100. This threshold indicates the flushes are slow and you may have to tune the `hbase.hstore.flusher.count` configuration. By default, the value is 2. Ensure that the max flusher threads don't increase beyond 6.
-Additionally, see if you have a recommendation for region count tuning. If so first try the region tuning to see if that helps in faster flushes. Tuning the flusher threads might help in multiple ways like
+Additionally, see if you have a recommendation for region count tuning. If yes, we suggest that you first try region tuning to see if that helps achieve faster flushes. Otherwise, tuning the flusher threads may help.
## Region count tuning
@@ -60,7 +60,7 @@ As an example scenario:
- With these settings in place, the number of regions is 100. The 4-GB global memstore is now split across 100 regions. So effectively each region gets only 40 MB for memstore. When the writes are uniform, the system does frequent flushes and smaller size of the order < 40 MB. Having many flusher threads might increase the flush speed `hbase.hstore.flusher.count`.
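The per-region arithmetic in the scenario above can be checked directly; the values are taken from the example (a 4-GB global memstore split across 100 regions):

```python
global_memstore_mb = 4 * 1024   # 4-GB global memstore, expressed in MB
regions = 100                   # regions hosted by the region server
per_region_mb = global_memstore_mb / regions
print(per_region_mb)  # 40.96, roughly the "only 40 MB" quoted in the scenario
```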
-The advisory means that it would be good to reconsider the number of regions per server, the heap size, and the global memstore size configuration along with the flush threads tuning so that such updates getting blocked can be avoided.
+The advisory means that it would be good to reconsider the number of regions per server, the heap size, and the global memstore size configuration along with the tuning of flush threads to avoid updates getting blocked.
## Compaction queue tuning
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/quick-create-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-cli.md
@@ -87,4 +87,4 @@ In this quickstart you created a Key Vault and stored a certificate in it. To le
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the reference for the [Azure CLI az keyvault commands](/cli/azure/keyvault?view=azure-cli-latest)-- Review [Azure Key Vault best practices](../general/best-practices.md)
+- Review the [Key Vault security overview](../general/security-overview.md)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/quick-create-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-java.md
@@ -1,6 +1,6 @@
---
-title: Quickstart - Azure Key Vault Certificate client library for Java
-description: Provides a quickstart for the Azure Key Vault Certificate client library for Java.
+title: Quickstart for Azure Key Vault Certificate client library - Java
+description: Learn about the Azure Key Vault Certificate client library for Java with the steps in this quickstart.
author: msmbaldwin ms.custom: devx-track-java, devx-track-azurecli ms.author: mbaldwin
@@ -10,7 +10,7 @@ ms.subservice: certificates
ms.topic: quickstart ---
-# Quickstart: Azure Key Vault Certificate client library for Java
+# Quickstart: Azure Key Vault Certificate client library for Java (Certificates)
Get started with the Azure Key Vault Certificate client library for Java. Follow the steps below to install the package and try out example code for basic tasks. Additional resources:
@@ -122,7 +122,7 @@ set KEY_VAULT_NAME=<your-key-vault-name>
```` Windows PowerShell ```powershell
-$Env:KEY_VAULT_NAME=<your-key-vault-name>
+$Env:KEY_VAULT_NAME="<your-key-vault-name>"
``` macOS or Linux
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/quick-create-net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-net.md
@@ -104,7 +104,7 @@ set KEY_VAULT_NAME=<your-key-vault-name>
```` Windows PowerShell ```powershell
-$Env:KEY_VAULT_NAME=<your-key-vault-name>
+$Env:KEY_VAULT_NAME="<your-key-vault-name>"
``` macOS or Linux
@@ -250,4 +250,4 @@ To learn more about Key Vault and how to integrate it with your apps, see the fo
- See an [Access Key Vault from App Service Application Tutorial](../general/tutorial-net-create-vault-azure-web-app.md) - See an [Access Key Vault from Virtual Machine Tutorial](../general/tutorial-net-virtual-machine.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)-- Review [Azure Key Vault best practices](../general/best-practices.md)
+- Review the [Key Vault security overview](../general/security-overview.md)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/quick-create-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-node.md
@@ -87,7 +87,7 @@ set KEY_VAULT_NAME=<your-key-vault-name>
```` Windows PowerShell ```powershell
-$Env:KEY_VAULT_NAME=<your-key-vault-name>
+$Env:KEY_VAULT_NAME="<your-key-vault-name>"
``` macOS or Linux
@@ -282,4 +282,4 @@ In this quickstart, you created a key vault, stored a certificate, and retrieved
- See an [Access Key Vault from App Service Application Tutorial](../general/tutorial-net-create-vault-azure-web-app.md) - See an [Access Key Vault from Virtual Machine Tutorial](../general/tutorial-net-virtual-machine.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)-- Review [Azure Key Vault best practices](../general/best-practices.md)
+- Review the [Key Vault security overview](../general/security-overview.md)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/quick-create-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-portal.md
@@ -84,4 +84,4 @@ In this quickstart, you created a Key Vault and stored a certificate in it. To l
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)-- Review [Azure Key Vault best practices](../general/best-practices.md)
+- Review the [Key Vault security overview](../general/security-overview.md)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/quick-create-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-powershell.md
@@ -94,4 +94,4 @@ In this quickstart you created a Key Vault and stored a certificate in it. To le
- Read an [Overview