Updates from: 02/03/2021 04:07:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/page-layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/page-layout.md
@@ -20,6 +20,9 @@ Page layout packages are periodically updated to include fixes and improvements
## Self-asserted page (selfasserted)
+**2.1.2**
+- Fixed the localization encoding issue for languages such as Spanish and French.
+ **2.1.1** - Added a UXString `heading` in addition to `intro` to display on the page as a title. This is hidden by default.
@@ -67,6 +70,10 @@ Page layout packages are periodically updated to include fixes and improvements
## Unified sign-in sign-up page with password reset link (unifiedssp)
+**2.1.2**
+- Fixed the localization encoding issue for languages such as Spanish and French.
+- Allowed the "forgot password" link to be used as a claims exchange, like a social IDP.
+ **2.1.1** - Added a UXString `heading` in addition to `intro` to display on the page as a title. This is hidden by default. - Added support for using policy or the QueryString parameter `pageFlavor` to select the layout (classic, oceanBlue, or slateGray).
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/service-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/service-limits.md
@@ -9,7 +9,7 @@
Previously updated : 01/29/2021 Last updated : 02/02/2021
@@ -50,7 +50,7 @@ The following table lists the administrative configuration limits in the Azure A
## Next steps -- Learn about [Microsoft Graph's throttling guidance](/graph/throttling.md)
+- Learn about [Microsoft Graph's throttling guidance](/graph/throttling)
- Learn about the [validation differences for Azure AD B2C applications](../active-directory/develop/supported-accounts-validation.md)
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/faqs.md
@@ -102,7 +102,7 @@ Any user account that's part of the managed domain can join a VM. Members of the
No. You aren't granted administrative privileges on the managed domain. *Domain Administrator* and *Enterprise Administrator* privileges aren't available for you to use within the domain. Members of the domain administrator or enterprise administrator groups in your on-premises Active Directory are also not granted domain / enterprise administrator privileges on the managed domain. ### Can I modify group memberships using LDAP or other AD administrative tools on managed domains?
-Users and groups that are synchronized from Azure Active Directory to Azure AD Domain Services cannot be modified because their source of origin is Azure Active Directory. Any user or group originating in the managed domain may be modified.
+Users and groups that are synchronized from Azure Active Directory to Azure AD Domain Services cannot be modified because their source of origin is Azure Active Directory. This includes moving users or groups from the AADDC Users managed organizational unit to a custom organizational unit. Any user or group originating in the managed domain may be modified.
### How long does it take for changes I make to my Azure AD directory to be visible in my managed domain? Changes made in your Azure AD directory using either the Azure AD UI or PowerShell are automatically synchronized to your managed domain. This synchronization process runs in the background. There's no defined time period for this synchronization to complete all the object changes.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
@@ -8,20 +8,19 @@
Previously updated : 01/12/2021 Last updated : 02/01/2021 - # Tutorial: Develop and plan provisioning for a SCIM endpoint As an application developer, you can use the System for Cross-Domain Identity Management (SCIM) user management API to enable automatic provisioning of users and groups between your application and Azure AD. This article describes how to build a SCIM endpoint and integrate with the Azure AD provisioning service. The SCIM specification provides a common user schema for provisioning. When used in conjunction with federation standards like SAML or OpenID Connect, SCIM gives administrators an end-to-end, standards-based solution for access management.
-SCIM is a standardized definition of two endpoints: a `/Users` endpoint and a `/Groups` endpoint. It uses common REST verbs to create, update, and delete objects, and a pre-defined schema for common attributes like group name, username, first name, last name and email. Apps that offer a SCIM 2.0 REST API can reduce or eliminate the pain of working with a proprietary user management API. For example, any compliant SCIM client knows how to make an HTTP POST of a JSON object to the `/Users` endpoint to create a new user entry. Instead of needing a slightly different API for the same basic actions, apps that conform to the SCIM standard can instantly take advantage of pre-existing clients, tools, and code.
- ![Provisioning from Azure AD to an app with SCIM](media/use-scim-to-provision-users-and-groups/scim-provisioning-overview.png)
+SCIM is a standardized definition of two endpoints: a `/Users` endpoint and a `/Groups` endpoint. It uses common REST verbs to create, update, and delete objects, and a pre-defined schema for common attributes like group name, username, first name, last name and email. Apps that offer a SCIM 2.0 REST API can reduce or eliminate the pain of working with a proprietary user management API. For example, any compliant SCIM client knows how to make an HTTP POST of a JSON object to the `/Users` endpoint to create a new user entry. Instead of needing a slightly different API for the same basic actions, apps that conform to the SCIM standard can instantly take advantage of pre-existing clients, tools, and code.
+ The standard user object schema and REST APIs for management defined in SCIM 2.0 (RFC [7642](https://tools.ietf.org/html/rfc7642), [7643](https://tools.ietf.org/html/rfc7643), [7644](https://tools.ietf.org/html/rfc7644)) allow identity providers and apps to more easily integrate with each other. Application developers who build a SCIM endpoint can integrate with any SCIM-compliant client without having to do custom work. Automating provisioning to an application requires building and integrating a SCIM endpoint with the Azure AD SCIM client. Perform the following steps to start provisioning users and groups into your application.
@@ -64,6 +63,7 @@ The schema defined above would be represented using the JSON payload below. Note
"urn:ietf:params:scim:schemas:extension:enterprise:2.0:User", "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User"], "userName":"bjensen@testuser.com",
+ "id": "48af03ac28ad4fb88478",
"externalId":"bjensen", "name":{ "familyName":"Jensen",
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-oath-tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-oath-tokens.md
@@ -6,7 +6,7 @@
Previously updated : 09/02/2020 Last updated : 02/02/2021
@@ -48,7 +48,7 @@ Helga@contoso.com,1234567,1234567abcdef1234567abcdef,60,Contoso,HardwareKey
``` > [!NOTE]
-> Make sure you include the header row in your CSV file.
+> Make sure you include the header row in your CSV file. If a UPN has a single quote, escape it with another single quote. For example, if the UPN is my'user@domain.com, change it to my''user@domain.com when uploading the file.
Once properly formatted as a CSV file, an administrator can then sign in to the Azure portal, navigate to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the resulting CSV file.
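The quoting rule in the note above can be sketched as follows. The column order is inferred from the sample row in the excerpt; treat the helper name and column comments as illustrative:

```python
# Sketch of the single-quote escaping rule for UPNs in the OATH token CSV:
# a quote inside a UPN is doubled before the file is uploaded.
def escape_upn(upn: str) -> str:
    return upn.replace("'", "''")

# Illustrative row matching the sample in the excerpt:
# UPN, serial number, secret key, time interval, manufacturer, model
row = ["my'user@domain.com", "1234567", "1234567abcdef1234567abcdef",
       "60", "Contoso", "HardwareKey"]
row[0] = escape_upn(row[0])
print(",".join(row))
# my''user@domain.com,1234567,1234567abcdef1234567abcdef,60,Contoso,HardwareKey
```

UPNs without a single quote pass through unchanged, so the helper can be applied to every row uniformly.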
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-sspr-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-windows.md
@@ -37,7 +37,7 @@ The following limitations apply to using SSPR from the Windows sign-in screen:
- Hybrid Azure AD joined machines must have network connectivity line of sight to a domain controller to use the new password and update cached credentials. This means that devices must either be on the organization's internal network or on a VPN with network access to an on-premises domain controller. - If using an image, before running Sysprep ensure that the web cache is cleared for the built-in Administrator prior to performing the CopyProfile step. More information about this step can be found in the support article [Performance poor when using custom default user profile](https://support.microsoft.com/help/4056823/performance-issue-with-custom-default-user-profile). - The following settings are known to interfere with the ability to use and reset passwords on Windows 10 devices:
- - If Ctrl+Alt+Del is required by policy in versions of Windows 10 before v1909, **Reset password** won't work.
+ - If Ctrl+Alt+Del is required by policy in Windows 10, **Reset password** won't work.
- If lock screen notifications are turned off, **Reset password** won't work. - *HideFastUserSwitching* is set to enabled or 1 - *DontDisplayLastUserName* is set to enabled or 1
@@ -49,6 +49,10 @@ The following limitations apply to using SSPR from the Windows sign-in screen:
- *DisableLockScreenAppNotifications* = 1 or Enabled - Windows SKU isn't Home or Professional edition
+> [!NOTE]
+> These limitations also apply to Windows Hello for Business PIN reset from the device lock screen.
+>
+ ## Windows 10 password reset To configure a Windows 10 device for SSPR at the sign-in screen, review the following prerequisites and configuration steps.
@@ -182,4 +186,4 @@ More information for users on using this feature can be found in [Reset your wor
## Next steps
-To simplify the user registration experience, you can [pre-populate user authentication contact information for SSPR](howto-sspr-authenticationdata.md).
+To simplify the user registration experience, you can [pre-populate user authentication contact information for SSPR](howto-sspr-authenticationdata.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
@@ -85,7 +85,7 @@ The Azure AD default for browser session persistence allows users on personal de
Conditional Access is an Azure AD Premium capability and requires a premium license. If you would like to learn more about Conditional Access, see [What is Conditional Access in Azure Active Directory?](overview.md#license-requirements) > [!WARNING]
-> If you are using the [configurable token lifetime](../develop/active-directory-configurable-token-lifetimes.md) feature currently in public preview, please note that we don't support creating two different policies for the same user or app combination: one with this feature and another one with configurable token lifetime feature. Microsoft plans to retire the configurable token lifetime feature for refresh and session token lifetimes on January 30, 2021 and replace it with the Conditional Access authentication session management feature.
+> If you are using the [configurable token lifetime](../develop/active-directory-configurable-token-lifetimes.md) feature currently in public preview, please note that we don't support creating two different policies for the same user or app combination: one with this feature and another one with the configurable token lifetime feature. Microsoft retired the configurable token lifetime feature for refresh and session token lifetimes on January 30, 2021 and replaced it with the Conditional Access authentication session management feature.
> > Before enabling Sign-in Frequency, make sure other reauthentication settings are disabled in your tenant. If "Remember MFA on trusted devices" is enabled, be sure to disable it before using Sign-in frequency, as using these two settings together may lead to prompting users unexpectedly. To learn more about reauthentication prompts and session lifetime, see the article, [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-configurable-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
@@ -10,7 +10,7 @@
Previously updated : 01/04/2021 Last updated : 02/01/2021
@@ -72,16 +72,14 @@ For an example, see [Create a policy for web sign-in](configure-token-lifetimes.
## Token lifetime policies for refresh tokens and session tokens
-You can set token lifetime policies for refresh tokens and session tokens.
+You cannot set token lifetime policies for refresh tokens and session tokens.
> [!IMPORTANT]
-> As of May 2020, new tenants can not configure refresh and session token lifetimes. Tenants with existing configuration can modify refresh and session token policies until January 30, 2021. Azure Active Directory will stop honoring existing refresh and session token configuration in policies after January 30, 2021. You can still configure access, SAML, and ID token lifetimes after the retirement.
->
-> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
->
-> If you do not want to use Conditional Access after the retirement date, your refresh and session tokens will be set to the [default configuration](#configurable-token-lifetime-properties-after-the-retirement) on that date and you will no longer be able to change their lifetimes.
+> As of January 30, 2021, you cannot configure refresh and session token lifetimes. Azure Active Directory no longer honors refresh and session token configuration in existing policies. New tokens issued after existing tokens have expired are now set to the [default configuration](#configurable-token-lifetime-properties-after-the-retirement). You can still configure access, SAML, and ID token lifetimes after the refresh and session token configuration retirement.
> > The lifetime of existing tokens will not be changed. After they expire, a new token will be issued based on the default value.
+>
+> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
:::image type="content" source="./media/active-directory-configurable-token-lifetimes/roadmap.svg" alt-text="Retirement information":::
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/configure-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/configure-token-lifetimes.md
@@ -10,7 +10,7 @@
Previously updated : 01/04/2021 Last updated : 02/01/2021
@@ -80,11 +80,11 @@ In this example, you create a policy that requires users to authenticate more fr
## Create token lifetime policies for refresh and session tokens > [!IMPORTANT]
-> As of May 2020, new tenants cannot configure refresh and session token lifetimes. Tenants with existing configuration can modify refresh and session token policies until January 30, 2021. Azure Active Directory will stop honoring existing refresh and session token configuration in policies after January 30, 2021. You can still configure access, SAML, and ID token lifetimes after the retirement.
+> As of January 30, 2021, you cannot configure refresh and session token lifetimes. Azure Active Directory no longer honors refresh and session token configuration in existing policies. New tokens issued after existing tokens have expired are now set to the [default configuration](active-directory-configurable-token-lifetimes.md#configurable-token-lifetime-properties-after-the-retirement). You can still configure access, SAML, and ID token lifetimes after the refresh and session token configuration retirement.
>
-> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
+> The lifetime of existing tokens will not be changed. After they expire, a new token will be issued based on the default value.
>
-> If you do not want to use Conditional Access after the retirement date, your refresh and session tokens will be set to the [default configuration](active-directory-configurable-token-lifetimes.md#configurable-token-lifetime-properties-after-the-retirement) on that date and you will no longer be able to change their lifetimes.
+> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
### Manage an organization's default policy In this example, you create a policy that lets your users sign in less frequently across your entire organization. To do this, create a token lifetime policy for single-factor refresh tokens, which is applied across your organization. The policy is applied to every application in your organization, and to each service principal that doesn't already have a policy set.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-aadsts-error-codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-aadsts-error-codes.md
@@ -9,7 +9,7 @@
Previously updated : 11/09/2020 Last updated : 02/01/2021
@@ -91,6 +91,7 @@ For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS50000 | TokenIssuanceError - There's an issue with the sign-in service. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to resolve this issue. | | AADSTS50001 | InvalidResource - The resource is disabled or does not exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you are trying to access. | | AADSTS50002 | NotAllowedTenant - Sign-in failed because of a restricted proxy access on the tenant. If it's your own tenant policy, you can change your restricted tenant settings to fix this issue. |
+| AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that is not in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](/azure/active-directory/manage-apps/tenant-restrictions).|
| AADSTS50003 | MissingSigningKey - Sign-in failed because of a missing signing key or certificate. This might be because there was no signing key configured in the app. Check out the resolutions outlined at [../manage-apps/application-sign-in-problem-federated-sso-gallery.md#certificate-or-key-not-configured](../manage-apps/application-sign-in-problem-federated-sso-gallery.md#certificate-or-key-not-configured). If you still see issues, contact the app owner or an app admin. | | AADSTS50005 | DevicePolicyError - User tried to log in to a device from a platform that's currently not supported through Conditional Access policy. | | AADSTS50006 | InvalidSignature - Signature verification failed because of an invalid signature. |
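The AADSTS500021 entry above amounts to a simple allow-list membership check. A hypothetical sketch of that logic (the function name and return convention are illustrative, not Azure AD internals):

```python
# Hypothetical sketch of the tenant-restriction check described for
# AADSTS500021: access is denied when the target tenant is not in the
# allowed list carried in the proxy-injected restriction header.
def check_tenant_access(requested_tenant, allowed_tenants):
    """Return None when access is allowed, else an AADSTS500021-style message."""
    allowed = {t.strip().lower() for t in allowed_tenants}
    if requested_tenant.lower() in allowed:
        return None
    return f"AADSTS500021: Access to '{requested_tenant}' tenant is denied."

print(check_tenant_access("fabrikam.onmicrosoft.com",
                          ["contoso.onmicrosoft.com"]))
# AADSTS500021: Access to 'fabrikam.onmicrosoft.com' tenant is denied.
```

A tenant present in the allowed list passes silently; any other tenant surfaces the denial at sign-in, as the error table describes.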
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-app-manifest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-app-manifest.md
@@ -9,7 +9,7 @@
Previously updated : 04/15/2020 Last updated : 02/02/2021
@@ -445,7 +445,7 @@ The verified publisher domain for the application. Read-only.
Example: ```json
- "publisherDomain": "https://www.contoso.com",
+ "publisherDomain": "{tenant}.onmicrosoft.com",
``` ### replyUrlsWithType attribute
active-directory https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/troubleshooting-identity-protection-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/troubleshooting-identity-protection-faq.md
@@ -34,7 +34,7 @@ If you are an Azure AD Identity Protection customer, go to the [risky users](how
### Why was my sign-in blocked but Identity Protection didn't generate a risk detection? Sign-ins can be blocked for several reasons. It is important to note that Identity Protection only generates risk detections when correct credentials are used in the authentication request. If a user uses incorrect credentials, it will not be flagged by Identity Protection since there is not of risk of credential compromise unless a bad actor uses the correct credentials. Some reasons a user can be blocked from signing that will not generate an Identity Protection detection include:
-* The **IP can been blocked** due to malicious activity from the IP address. The IP blocked message does not differentiate whether the credentials were correct or not. If the IP is blocked and correct credentials are not used, it will not generate an Identity Protection detection
+* The **IP can be blocked** due to malicious activity from the IP address. The IP blocked message does not differentiate whether the credentials were correct or not. If the IP is blocked and correct credentials are not used, it will not generate an Identity Protection detection
* **[Smart Lockout](../authentication/howto-password-smart-lockout.md)** can block the account from signing-in after multiple failed attempts * A **Conditional Access policy** can be enforced that uses conditions other than risk level to block an authentication request
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery.md
@@ -49,7 +49,7 @@ Ensure that the `Issuer` attribute in the SAML request matches the Identifier va
On the SAML-based SSO configuration page, in the **Basic SAML configuration** section, verify that the value in the Identifier textbox matches the value for the identifier value displayed in the error. ## The reply address does not match the reply addresses configured for the application
-`Error AADSTS50011: The reply address 'https:\//contoso.com' does not match the reply addresses configured for the application.`
+`Error AADSTS50011: The reply URL specified in the request does not match the reply URLs configured for the application: '{application identifier}'.`
**Possible cause**
@@ -172,4 +172,4 @@ Compare the resource you're requesting access to in code with the configured p
## Next steps - [Quickstart Series on Application Management](add-application-portal-assign-users.md) - [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md)-- [Azure AD Single Sign-on SAML protocol requirements](../develop/single-sign-on-saml-protocol.md)
+- [Azure AD Single Sign-on SAML protocol requirements](../develop/single-sign-on-saml-protocol.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
@@ -9,7 +9,7 @@
Previously updated : 01/29/2020 Last updated : 02/01/2020
@@ -64,15 +64,9 @@ Users in this role can create application registrations when the "Users can regi
### [Authentication Administrator](#authentication-administrator-permissions)
-Users with this role can set or reset non-password credentials for some users and can update passwords for all users. Authentication administrators can require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in. These actions apply only to users who are non-administrators or who are assigned one or more of the following roles:
+Users with this role can set or reset non-password credentials for some users and can update passwords for all users. Authentication administrators can require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in. Whether an Authentication Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that an Authentication Administrator can reset passwords for, see [Password reset permissions](#password-reset-permissions).
-* Authentication Administrator
-* Directory Readers
-* Guest Inviter
-* Message Center Reader
-* Reports Reader
-
-The [Privileged authentication administrator](#privileged-authentication-administrator) role has permission can force re-registration and multi-factor authentication for all users.
+The [Privileged Authentication Administrator](#privileged-authentication-administrator) role can force re-registration and multi-factor authentication for all users.
> [!IMPORTANT] > Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example:
@@ -248,14 +242,7 @@ Users in this role can manage Azure Active Directory B2B guest user invitations
### [Helpdesk Administrator](#helpdesk-administrator-permissions)
-Users with this role can change passwords, invalidate refresh tokens, manage service requests, and monitor service health. Invalidating a refresh token forces the user to sign in again. Helpdesk administrators can reset passwords and invalidate refresh tokens of other users who are non-administrators or assigned the following roles only:
-
-* Directory Readers
-* Guest Inviter
-* Helpdesk Administrator
-* Message Center Reader
-* Password Administrator
-* Reports Reader
+Users with this role can change passwords, invalidate refresh tokens, manage service requests, and monitor service health. Invalidating a refresh token forces the user to sign in again. Whether a Helpdesk Administrator can reset a user's password and invalidate refresh tokens depends on the role the user is assigned. For a list of the roles for which a Helpdesk Administrator can reset passwords and invalidate refresh tokens, see [Password reset permissions](#password-reset-permissions).
> [!IMPORTANT] > Users with this role can change passwords for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the password of a user may mean the ability to assume that user's identity and permissions. For example:
@@ -266,7 +253,7 @@ Users with this role can change passwords, invalidate refresh tokens, manage ser
>- Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. >- Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
-Delegating administrative permissions over subsets of users and applying policies to a subset of users is possible with [Administrative Units (now in public preview)](administrative-units.md).
+Delegating administrative permissions over subsets of users and applying policies to a subset of users is possible with [Administrative Units](administrative-units.md).
This role was previously called "Password Administrator" in the [Azure portal](https://portal.azure.com/). The "Helpdesk Administrator" name in Azure AD now matches its name in Azure AD PowerShell and the Microsoft Graph API.
@@ -339,11 +326,7 @@ Do not use. This role has been deprecated and will be removed from Azure AD in t
### [Password Administrator](#password-administrator-permissions)
-Users with this role have limited ability to manage passwords. This role does not grant the ability to manage service requests or monitor service health. Password administrators can reset passwords of other users who are non-administrators or members of the following roles only:
-
-* Directory Readers
-* Guest Inviter
-* Password Administrator
+Users with this role have limited ability to manage passwords. This role does not grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that a Password Administrator can reset passwords for, see [Password reset permissions](#password-reset-permissions).
### [Power BI Administrator](#power-bi-service-administrator-permissions)
@@ -366,13 +349,7 @@ Users with this role can register printers and manage printer status in the Micr
### [Privileged Authentication Administrator](#privileged-authentication-administrator-permissions)
-Users with this role can set or reset non-password credentials for all users, including Global Administrators, and can update passwords for all users. Privileged Authentication Administrators can force users to re-register against existing non-password credential (such as MFA or FIDO) and revoke 'remember MFA on the device', prompting for MFA on the next sign-in of all users. The [Authentication administrator](#authentication-administrator) role can force re-registration and MFA for only non-admins and users assigned to the following Azure AD roles:
-
-* Authentication Administrator
-* Directory Readers
-* Guest Inviter
-* Message Center Reader
-* Reports Reader
+Users with this role can set or reset non-password credentials for all users, including Global Administrators, and can update passwords for all users. Privileged Authentication Administrators can force users to re-register against existing non-password credentials (such as MFA or FIDO) and revoke 'remember MFA on the device', prompting for MFA on the next sign-in of all users.
### [Privileged Role Administrator](#privileged-role-administrator-permissions)
@@ -495,11 +472,12 @@ Users with this role can access tenant level aggregated data and associated insi
Users with this role can create users, and manage all aspects of users with some restrictions (see the table), and can update password expiration policies. Additionally, users with this role can create and manage all groups. This role also includes the ability to create and manage user views, manage support tickets, and monitor service health. User administrators don't have permission to manage some user properties for users in most administrator roles. Users with this role do not have permissions to manage MFA. The roles that are exceptions to this restriction are listed in the following table.
-| **Permission** | **Can do** |
+| User Administrator permission | Notes |
| --- | --- |
-|General permissions|<p>Create users and groups</p><p>Create and manage user views</p><p>Manage Office support tickets<p>Update password expiration policies|
-| <p>On all users, including all admins</p>|<p>Manage licenses</p><p>Manage all user properties except User Principal Name</p>
-| Only on users who are non-admins or in any of the following limited admin roles:<ul><li>Directory Readers<li>Groups Administrator<li>Guest Inviter<li>Helpdesk Administrator<li>Message Center Reader<li>Password Administrator<li>Reports Reader<li>User Administrator|<p>Delete and restore</p><p>Disable and enable</p><p>Invalidate refresh Tokens</p><p>Manage all user properties including User Principal Name</p><p>Reset password</p><p>Update (FIDO) device keys</p>|
+| Create users and groups<br/>Create and manage user views<br/>Manage Office support tickets<br/>Update password expiration policies | |
+| Manage licenses<br/>Manage all user properties except User Principal Name | Applies to all users, including all admins |
+| Delete and restore<br/>Disable and enable<br/>Manage all user properties including User Principal Name<br/>Update (FIDO) device keys | Applies to users who are non-admins or in any of the following roles:<ul><li>Helpdesk Administrator</li><li>User with no role</li><li>User Administrator</li></ul> |
+| Invalidate refresh Tokens<br/>Reset password | For a list of the roles that a User Administrator can reset passwords for and invalidate refresh tokens, see [Password reset permissions](#password-reset-permissions). |
> [!IMPORTANT]
> Users with this role can change passwords for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the password of a user may mean the ability to assume that user's identity and permissions. For example:
@@ -510,7 +488,7 @@ Users with this role can create users, and manage all aspects of users with some
>- Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems.
>- Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
-## Role Permissions
+## Role permissions
The following tables describe the specific permissions in Azure Active Directory given to each role. Some roles may have additional permissions in Microsoft services outside of Azure Active Directory.
@@ -567,6 +545,7 @@ Can create and manage all aspects of app registrations and enterprise apps.
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets. |
| microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
### Application Developer permissions
@@ -642,6 +621,7 @@ Can manage all aspects of the Azure Information Protection service.
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets. |
| microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
### B2C IEF Keyset Administrator permissions
@@ -720,6 +700,7 @@ Can create and manage all aspects of app registrations and enterprise apps excep
| microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets. |
| microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
| microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+| microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
### Cloud Device Administrator permissions
@@ -2060,6 +2041,31 @@ Restricted Guest User | Not shown because it can't be used | NA
User | Not shown because it can't be used | NA
Workplace Device Join | Deprecated | [Deprecated roles documentation](permissions-reference.md#deprecated-roles)
+## Password reset permissions
+
+Column headings represent the roles that can reset passwords. Each table row contains a role whose password can be reset.
+
+Password can be reset | Authentication Admin | Helpdesk Admin | Password Admin | User Admin | Privileged Authentication Admin | Global Admin
+ | | | | | |
+Authentication Admin | :heavy_check_mark: | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Global Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:\*
+Groups Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Guest | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Helpdesk Admin | &nbsp; | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Message Center Reader | :heavy_check_mark: | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Privileged Authentication Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Privileged Role Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Reports Reader | :heavy_check_mark: | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Restricted Guest | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+User (no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+User Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+
+\* A Global Administrator cannot remove their own Global Administrator assignment. This prevents a situation where an organization has zero Global Administrators.
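For scripting against this matrix, the table above can be encoded as a lookup. The sketch below is purely illustrative (the role names are abbreviated as in the table, and `can_reset` is a hypothetical helper, not an Azure AD API):

```python
# Illustrative encoding of the password-reset matrix above: for each target
# role, the set of admin roles that can reset its password.
RESETTERS = ["Authentication Admin", "Helpdesk Admin", "Password Admin",
             "User Admin", "Privileged Authentication Admin", "Global Admin"]

CAN_BE_RESET_BY = {
    "Authentication Admin": {"Authentication Admin", "Privileged Authentication Admin", "Global Admin"},
    "Directory Readers": set(RESETTERS),
    "Global Admin": {"Privileged Authentication Admin", "Global Admin"},
    "Groups Admin": {"User Admin", "Privileged Authentication Admin", "Global Admin"},
    "Guest": set(RESETTERS),
    "Guest Inviter": set(RESETTERS),
    "Helpdesk Admin": {"Helpdesk Admin", "User Admin", "Privileged Authentication Admin", "Global Admin"},
    "Message Center Reader": set(RESETTERS) - {"Password Admin"},
    "Password Admin": set(RESETTERS),
    "Privileged Authentication Admin": {"Privileged Authentication Admin", "Global Admin"},
    "Privileged Role Admin": {"Privileged Authentication Admin", "Global Admin"},
    "Reports Reader": set(RESETTERS) - {"Password Admin"},
    "Restricted Guest": set(RESETTERS),
    "User (no admin role)": set(RESETTERS),
    "User Admin": {"User Admin", "Privileged Authentication Admin", "Global Admin"},
    "Usage Summary Reports Reader": set(RESETTERS) - {"Password Admin"},
}

def can_reset(admin_role: str, target_role: str) -> bool:
    """Return True if admin_role can reset the password of target_role."""
    return admin_role in CAN_BE_RESET_BY[target_role]
```

For example, `can_reset("Helpdesk Admin", "Guest")` is true, while `can_reset("Password Admin", "Reports Reader")` is false, matching the table.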
## Next steps

* To learn more about how to assign a user as an administrator of an Azure subscription, see [Add or remove Azure role assignments (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/asana-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/asana-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 12/31/2018
Last updated : 01/27/2021

# Tutorial: Azure Active Directory integration with Asana
-In this tutorial, you learn how to integrate Asana with Azure Active Directory (Azure AD).
-Integrating Asana with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Asana with Azure Active Directory (Azure AD). When you integrate Asana with Azure AD, you can:
-* You can control in Azure AD who has access to Asana.
-* You can enable your users to be automatically signed-in to Asana (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Asana.
+* Enable your users to be automatically signed-in to Asana with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Asana, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Asana single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Asana single sign-on (SSO) enabled subscription.
## Scenario description
@@ -39,68 +35,48 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Asana supports [**Automated** user provisioning](asana-provisioning-tutorial.md)
-## Adding Asana from the gallery
+## Add Asana from the gallery
To configure the integration of Asana into Azure AD, you need to add Asana from the gallery to your list of managed SaaS apps.
-**To add Asana from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Asana**, select **Asana** from result panel then click **Add** button to add the application.
-
- ![Asana in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Asana based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Asana needs to be established.
-
-To configure and test Azure AD single sign-on with Asana, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Asana** in the search box.
+1. Select **Asana** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Asana Single Sign-On](#configure-asana-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Asana test user](#create-asana-test-user)** - to have a counterpart of Britta Simon in Asana that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for Asana
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with Asana using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Asana.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with Asana, perform the following steps:
-To configure Azure AD single sign-on with Asana, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Asana SSO](#configure-asana-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Asana test user](#create-asana-test-user)** - to have a counterpart of B.Simon in Asana that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **Asana** application integration page, select **Single sign-on**.
+### Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **Asana** application integration page, find the **Manage** section and select **Single sign-on**.
+1. On the **Select a Single sign-on method** page, select **SAML**.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:

    ![Asana Domain and URLs single sign-on information](common/sp-identifier.png)
- a. In the **Sign on URL** text box, type URL:
+ a. In the **Sign on URL** text box, type the URL:
`https://app.asana.com/`
- b. In the **Identifier (Entity ID)** text box, type URL:
+ b. In the **Identifier (Entity ID)** text box, type the URL:
    `https://app.asana.com/`

5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
@@ -111,78 +87,45 @@ To configure Azure AD single sign-on with Asana, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Asana Single Sign-On
-
-1. In a different browser window, sign-on to your Asana application. To configure SSO in Asana, access the workspace settings by clicking the workspace name on the top right corner of the screen. Then, click on **\<your workspace name\> Settings**.
-
- ![Asana sso settings](./media/asana-tutorial/tutorial_asana_09.png)
-
-2. On the **Organization settings** window, click **Administration**. Then, click **Members must log in via SAML** to enable the SSO configuration. The perform the following steps:
-
- ![Configure Single Sign-On Organization settings](./media/asana-tutorial/tutorial_asana_10.png)
-
- a. In the **Sign-in page URL** textbox, paste the **Login URL**.
-
- b. Right click the certificate downloaded from Azure portal, then open the certificate file using Notepad or your preferred text editor. Copy the content between the begin and the end certificate title and paste it in the **X.509 Certificate** textbox.
-
-3. Click **Save**. Go to [Asana guide for setting up SSO](https://asana.com/guide/help/premium/authentication#gl-saml) if you need further assistance.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Asana.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Asana.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Asana**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Asana**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+### Configure Asana SSO
-2. In the applications list, select **Asana**.
-
- ![The Asana link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
1. In a different browser window, sign in to your Asana application. To configure SSO in Asana, access the workspace settings by clicking the workspace name in the top right corner of the screen. Then, click on **\<your workspace name\> Settings**.
- ![The "Users and groups" link](common/users-groups-blade.png)
+ ![Asana sso settings](./media/asana-tutorial/settings.png)
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
2. On the **Organization settings** window, click **Administration**. Then, click **Members must log in via SAML** to enable the SSO configuration. Then perform the following steps:
- ![The Add Assignment pane](common/add-assign-user.png)
+ ![Configure Single Sign-On Organization settings](./media/asana-tutorial/save.png)
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+ a. In the **Sign-in page URL** textbox, paste the **Login URL**.
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
   b. Right-click the certificate downloaded from the Azure portal, then open the certificate file using Notepad or your preferred text editor. Copy the content between the BEGIN CERTIFICATE and END CERTIFICATE lines and paste it into the **X.509 Certificate** textbox.
-7. In the **Add Assignment** dialog click the **Assign** button.
+3. Click **Save**. Go to [Asana guide for setting up SSO](https://asana.com/guide/help/premium/authentication#gl-saml) if you need further assistance.
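The copy-between-the-markers step above can be sketched in Python. The sample PEM content and the `pem_body` helper are illustrative only, not part of Asana or Azure AD:

```python
# Sketch: extract the base64 body between the -----BEGIN/END CERTIFICATE-----
# markers of a downloaded Certificate (Base64) file, which is the content
# Asana expects in the X.509 Certificate box.
def pem_body(pem_text: str) -> str:
    lines = [line.strip() for line in pem_text.strip().splitlines()]
    # Keep only the lines between the BEGIN/END markers.
    return "".join(line for line in lines if line and not line.startswith("-----"))

# Truncated sample data for illustration, not a real certificate.
pem = """-----BEGIN CERTIFICATE-----
MIIBxDCCAWq
gAwIBAgIQx
-----END CERTIFICATE-----"""

print(pem_body(pem))  # MIIBxDCCAWqgAwIBAgIQx
```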
### Create Asana test user
@@ -194,24 +137,22 @@ In this section, you create a user called Britta Simon in Asana.
1. On **Asana**, go to the **Teams** section on the left panel. Click the plus sign button.
- ![Creating an Azure AD test user](./media/asana-tutorial/tutorial_asana_12.png)
+ ![Creating an Azure AD test user](./media/asana-tutorial/teams.png)
2. Type the email of the user, like **britta.simon\@contoso.com**, in the text box and then select **Invite**.

3. Click **Send Invite**. The new user will receive an email in their email account. The user will need to create and validate the account.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+### Test SSO
-When you click the Asana tile in the Access Panel, you should be automatically signed in to the Asana for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal. This will redirect you to the Asana sign-on URL, where you can initiate the login flow.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Go to the Asana sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Asana tile in My Apps, you'll be redirected to the Asana sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Configure User Provisioning](asana-provisioning-tutorial.md)
+Once you configure Asana, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/atea-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/atea-provisioning-tutorial.md
@@ -20,7 +20,7 @@
# Tutorial: Configure Atea for automatic user provisioning
-This tutorial describes the steps you need to perform in both Atea and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Atea](https://www.atea.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to do in both Atea and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Atea](https://www.atea.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
## Capabilities supported
@@ -44,24 +44,24 @@ The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Atea to support provisioning with Azure AD
-To configure Atea to support provisioning with Azure AD, email servicedesk@atea.dk.
+To configure Atea to support provisioning with Azure AD, you need to get the **Tenant URL** and **Secret Token** by sending an email to the [Atea support team](mailto:servicedesk@atea.dk). These values are entered in the **Secret Token** and **Tenant URL** fields in the Provisioning tab of your Atea application in the Azure portal.
## Step 3. Add Atea from the Azure AD application gallery
-Add Atea from the Azure AD application gallery to start managing provisioning to Atea. If you have previously setup Atea for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add Atea from the Azure AD application gallery to start managing provisioning to Atea. If you have previously set up Atea for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-* When assigning users and groups to Atea, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to Atea, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add other roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
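Conceptually, an attribute-based scoping filter is a predicate evaluated per user. The sketch below is not the portal's actual filter syntax; the attribute name, rule, and sample users are hypothetical:

```python
# Conceptual sketch of an attribute-based scoping filter: a user stays in
# provisioning scope only when an attribute matches a rule, e.g. the
# department attribute equals "Sales".
def in_scope(user: dict) -> bool:
    return user.get("department") == "Sales"

users = [
    {"userPrincipalName": "ann@contoso.com", "department": "Sales"},
    {"userPrincipalName": "bob@contoso.com", "department": "HR"},
]

scoped = [u["userPrincipalName"] for u in users if in_scope(u)]
print(scoped)  # ['ann@contoso.com']
```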
## Step 5. Configure automatic user provisioning to Atea
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Atea, based on user and group assignments in Azure AD.
### To configure automatic user provisioning for Atea in Azure AD:
@@ -85,7 +85,7 @@ This section guides you through the steps to configure the Azure AD provisioning
![Atea authorize](media/atea-provisioning-tutorial/provisioning-authorize.png)
-6. On the Atea's login dialog, sign in to your Atea's tenant and verify your identity.
+6. On the Atea login dialog, sign in to your Atea tenant and verify your identity.
![Atea login dialog](media/atea-provisioning-tutorial/atea-login.png)
@@ -93,7 +93,7 @@ This section guides you through the steps to configure the Azure AD provisioning
![Atea test connection](media/atea-provisioning-tutorial/test-connection.png)
-8. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+8. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications, and then select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
@@ -101,7 +101,7 @@ This section guides you through the steps to configure the Azure AD provisioning
10. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Atea**.
-11. Review the user attributes that are synchronized from Azure AD to Atea in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Atea for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Atea API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+11. Review the user attributes that are synchronized from Azure AD to Atea in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Atea for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you'll need to ensure that the Atea API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering|
|---|---|---|
@@ -121,21 +121,21 @@ This section guides you through the steps to configure the Azure AD provisioning
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to Atea by choosing the desired values in **Scope** in the **Settings** section.
+14. Define the users and groups that you would like to provision to Atea by choosing the relevant value in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-15. When you are ready to provision, click **Save**.
+15. When you're ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
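The ~40-minute cadence described above is approximate and controlled by the provisioning service itself. As an illustration only (no real API involved), the expected schedule of incremental cycles can be sketched as:

```python
from datetime import datetime, timedelta

def expected_cycle_times(initial_finish: datetime, count: int,
                         interval_minutes: int = 40) -> list:
    """Approximate start times of the next `count` incremental provisioning
    cycles after the initial cycle finishes. Illustration only: the Azure AD
    provisioning service schedules cycles itself; ~40 minutes is the
    documented approximation, not a configurable setting."""
    return [initial_finish + timedelta(minutes=interval_minutes * i)
            for i in range(1, count + 1)]
```

For example, an initial cycle finishing at 12:00 would be followed by incremental cycles at roughly 12:40, 13:20, and 14:00.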
## Step 6. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-* Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-* Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+* Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).

## Additional resources
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/blackboard-learn-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/blackboard-learn-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 08/23/2019 Last updated : 01/25/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Blackboard Learn with Azure Acti
* Enable your users to be automatically signed-in to Blackboard Learn with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
@@ -38,22 +36,22 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Blackboard Learn supports **Just In Time** user provisioning
-## Adding Blackboard Learn from the gallery
+## Add Blackboard Learn from the gallery
To configure the integration of Blackboard Learn into Azure AD, you need to add Blackboard Learn from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Blackboard Learn** in the search box.
1. Select **Blackboard Learn** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
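Adding a gallery app can also be done programmatically via Microsoft Graph's `applicationTemplates/{id}/instantiate` endpoint. A minimal sketch that only builds the request (the template ID shown in the usage note is a placeholder you would first look up, and a bearer token would be needed to actually send it):

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"

def build_instantiate_request(template_id: str, display_name: str):
    """Build the URL and JSON body for POST /applicationTemplates/{id}/instantiate.

    Sketch only: look up template_id first, e.g. via
    GET /applicationTemplates?$filter=displayName eq 'Blackboard Learn'.
    """
    url = f"{GRAPH}/applicationTemplates/{template_id}/instantiate"
    body = json.dumps({"displayName": display_name})
    return url, body
```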
-## Configure and test Azure AD single sign-on for Blackboard Learn
+## Configure and test Azure AD SSO for Blackboard Learn
Configure and test Azure AD SSO with Blackboard Learn using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Blackboard Learn.
-To configure and test Azure AD SSO with Blackboard Learn, complete the following building blocks:
+To configure and test Azure AD SSO with Blackboard Learn, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -66,9 +64,9 @@ To configure and test Azure AD SSO with Blackboard Learn, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Blackboard Learn** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Blackboard Learn** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -110,15 +108,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Blackboard Learn**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Blackboard Learn SSO
@@ -128,20 +120,18 @@ To configure single sign-on on **Blackboard Learn** side, follow the [link](http
### Create Blackboard Learn test user
-In this section, you create a user called Britta Simon in Blackboard Learn. Blackboard Learn application support just in time user provisioning. Make sure that you have configured the claims as described in the section **Configuring Azure AD Single Sign-On**.
+In this section, a user called B.Simon is created in Blackboard Learn. Blackboard Learn supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Blackboard Learn, a new one is created after authentication.
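The just-in-time behavior described above can be sketched as simple logic (illustration only; Blackboard Learn performs this internally after a successful SAML authentication):

```python
def jit_provision(existing_users: set, authenticated_user: str) -> set:
    """Just-in-time provisioning sketch: on successful SSO, create the
    user if they don't already exist; otherwise leave the store unchanged."""
    if authenticated_user not in existing_users:
        return existing_users | {authenticated_user}
    return existing_users
```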
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Blackboard Learn tile in the Access Panel, you should be automatically signed in to the Blackboard Learn for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Blackboard Learn Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the Blackboard Learn Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Blackboard Learn tile in My Apps, this will redirect to the Blackboard Learn Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Blackboard Learn with Azure AD](https://aad.portal.azure.com/)
+Once you configure Blackboard Learn, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/ceridiandayforcehcm-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ceridiandayforcehcm-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 01/02/2019 Last updated : 01/27/2021

# Tutorial: Azure Active Directory integration with Ceridian Dayforce HCM
-In this tutorial, you learn how to integrate Ceridian Dayforce HCM with Azure Active Directory (Azure AD).
-Integrating Ceridian Dayforce HCM with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Ceridian Dayforce HCM with Azure Active Directory (Azure AD). When you integrate Ceridian Dayforce HCM with Azure AD, you can:
-* You can control in Azure AD who has access to Ceridian Dayforce HCM.
-* You can enable your users to be automatically signed-in to Ceridian Dayforce HCM (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Ceridian Dayforce HCM.
+* Enable your users to be automatically signed-in to Ceridian Dayforce HCM with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Ceridian Dayforce HCM, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Ceridian Dayforce HCM single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Ceridian Dayforce HCM single sign-on (SSO) enabled subscription.
## Scenario description
@@ -37,59 +33,39 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Ceridian Dayforce HCM supports **SP** initiated SSO
-## Adding Ceridian Dayforce HCM from the gallery
+## Add Ceridian Dayforce HCM from the gallery
To configure the integration of Ceridian Dayforce HCM into Azure AD, you need to add Ceridian Dayforce HCM from the gallery to your list of managed SaaS apps.
-**To add Ceridian Dayforce HCM from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Ceridian Dayforce HCM**, select **Ceridian Dayforce HCM** from result panel then click **Add** button to add the application.
-
- ![Ceridian Dayforce HCM in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Ceridian Dayforce HCM** in the search box.
+1. Select **Ceridian Dayforce HCM** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Ceridian Dayforce HCM based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Ceridian Dayforce HCM needs to be established.
+## Configure and test Azure AD SSO for Ceridian Dayforce HCM
-To configure and test Azure AD single sign-on with Ceridian Dayforce HCM, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Ceridian Dayforce HCM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Ceridian Dayforce HCM.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Ceridian Dayforce HCM Single Sign-On](#configure-ceridian-dayforce-hcm-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Ceridian Dayforce HCM test user](#create-ceridian-dayforce-hcm-test-user)** - to have a counterpart of Britta Simon in Ceridian Dayforce HCM that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Ceridian Dayforce HCM, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Ceridian Dayforce HCM SSO](#configure-ceridian-dayforce-hcm-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Ceridian Dayforce HCM test user](#create-ceridian-dayforce-hcm-test-user)** - to have a counterpart of B.Simon in Ceridian Dayforce HCM that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+### Configure Azure AD SSO
-To configure Azure AD single sign-on with Ceridian Dayforce HCM, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Ceridian Dayforce HCM** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Ceridian Dayforce HCM** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
@@ -102,7 +78,7 @@ To configure Azure AD single sign-on with Ceridian Dayforce HCM, perform the fol
| For production | `https://sso.dayforcehcm.com/<DayforcehcmNamespace>` |
| For test | `https://ssotest.dayforcehcm.com/<DayforcehcmNamespace>` |
- b. In the **Identifier** textbox, type a URL using the following pattern:
+ b. In the **Identifier** textbox, type the URL using the following pattern:
| Environment | URL |
| :-- | :-- |
@@ -155,81 +131,48 @@ To configure Azure AD single sign-on with Ceridian Dayforce HCM, perform the fol
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Ceridian Dayforce HCM Single Sign-On
-
-To configure single sign-on on **Ceridian Dayforce HCM** side, you need to send the downloaded **Metadata XML** and appropriate copied URLs from Azure portal to [Ceridian Dayforce HCM support team](https://www.ceridian.com/support). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Ceridian Dayforce HCM.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Ceridian Dayforce HCM.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Ceridian Dayforce HCM**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Ceridian Dayforce HCM**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+### Configure Ceridian Dayforce HCM SSO
-2. In the applications list, select **Ceridian Dayforce HCM**.
-
- ![The Ceridian Dayforce HCM link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Ceridian Dayforce HCM** side, you need to send the downloaded **Metadata XML** and the appropriate copied URLs from the Azure portal to the [Ceridian Dayforce HCM support team](https://www.ceridian.com/support). They configure this setting so that the SAML SSO connection is set properly on both sides.
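Before sending the downloaded **Metadata XML** to the support team, it can help to sanity-check that it contains the entity ID and SSO endpoint you expect. A standard-library sketch (the element names follow the SAML 2.0 metadata namespace; the sample values in the test below are illustrative):

```python
import xml.etree.ElementTree as ET

SAML_MD = "urn:oasis:names:tc:SAML:2.0:metadata"

def summarize_metadata(xml_text: str):
    """Return (entityID, [SingleSignOnService locations]) from SAML IdP
    federation metadata. Sanity-check sketch only, not a validator."""
    root = ET.fromstring(xml_text)
    locations = [el.get("Location")
                 for el in root.iter(f"{{{SAML_MD}}}SingleSignOnService")]
    return root.get("entityID"), locations
```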
### Create Ceridian Dayforce HCM test user

In this section, you create a user called Britta Simon in Ceridian Dayforce HCM. Work with the [Ceridian Dayforce HCM support team](https://www.ceridian.com/support) to add the users in the Ceridian Dayforce HCM platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Ceridian Dayforce HCM tile in the Access Panel, you should be automatically signed in to the Ceridian Dayforce HCM for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the Ceridian Dayforce HCM Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Ceridian Dayforce HCM Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Ceridian Dayforce HCM tile in My Apps, this will redirect to the Ceridian Dayforce HCM Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Ceridian Dayforce HCM, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/dropboxforbusiness-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/dropboxforbusiness-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 06/23/2020 Last updated : 01/28/2021

# Tutorial: Integrate Dropbox Business with Azure Active Directory
@@ -20,13 +20,11 @@ In this tutorial, you'll learn how to integrate Dropbox Business with Azure Acti
* Enable your users to be automatically signed-in to Dropbox Business with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Dropbox Business single sign-on (SSO) enabled subscription.

> [!NOTE]
@@ -37,24 +35,26 @@ To get started, you need the following items:
* In this tutorial, you configure and test Azure AD SSO in a test environment. Dropbox Business supports **SP** initiated SSO
* Dropbox Business supports [Automated user provisioning and deprovisioning](dropboxforbusiness-tutorial.md)
-* Once you configure Dropbox you can enforce Session Control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session Control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
-## Adding Dropbox Business from the gallery
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Dropbox Business from the gallery
To configure the integration of Dropbox Business into Azure AD, you need to add Dropbox Business from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Dropbox Business** in the search box.
1. Select **Dropbox Business** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Dropbox Business
Configure and test Azure AD SSO with Dropbox Business using a test user called **Britta Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Dropbox Business.
-To configure and test Azure AD SSO with Dropbox Business, complete the following building blocks:
+To configure and test Azure AD SSO with Dropbox Business, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
@@ -67,9 +67,9 @@ To configure and test Azure AD SSO with Dropbox Business, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Dropbox Business** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Dropbox Business** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -78,7 +78,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Sign on URL** text box, type a URL using the following pattern: `https://www.dropbox.com/sso/<id>`
- b. In the **Identifier (Entity ID)** text box, type a value:
+ b. In the **Identifier (Entity ID)** text box, type the value:
`Dropbox`

> [!NOTE]
@@ -92,12 +92,6 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
### Create an Azure AD test user
@@ -113,20 +107,14 @@ In this section, you'll create a test user in the Azure portal called Britta Sim
### Assign the Azure AD test user
-In this section, you'll enable Britta Simon to use Azure single sign-on by granting access to Dropbox Business.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Dropbox Business.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Dropbox Business**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **Britta Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Dropbox Business SSO
@@ -141,7 +129,7 @@ In this section, you'll enable Britta Simon to use Azure single sign-on by grant
3. If you want to set up Dropbox Business manually, open a new web browser window, sign in to your Dropbox Business tenant, and perform the following steps:
- ![Screenshot that shows the "Dropbox Business Sign in" page.](./media/dropboxforbusiness-tutorial/ic769509.png "Configure single sign-on")
+ ![Screenshot that shows the "Dropbox Business Sign in" page.](./media/dropboxforbusiness-tutorial/account.png "Configure single sign-on")
4. Click the **User Icon** and select the **Settings** tab.
@@ -167,7 +155,7 @@ In this section, you'll enable Britta Simon to use Azure single sign-on by grant
b. Click **Add sign-in URL**. In the **Identity provider sign-in URL** textbox, paste the **Login URL** value that you copied from the Azure portal, and then select **Done**.
- ![Configure single sign-on](./media/dropboxforbusiness-tutorial/configure6.png "Configure single sign-on")
+ ![Configure single sign-on](./media/dropboxforbusiness-tutorial/sso.png "Configure single sign-on")
c. Click **Upload certificate**, and then browse to the **Base64 encoded certificate file** that you downloaded from the Azure portal.
@@ -184,16 +172,14 @@ In this section, a user called B.Simon is created in Dropbox Business. Dropbox B
### Test SSO
-When you select the Dropbox Business tile in the Access Panel, you should be automatically signed in to the Dropbox Business for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional Resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal. This will redirect to the Dropbox Business sign-on URL where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* Go to the Dropbox Business sign-on URL directly and initiate the login flow from there.
-- [What is Conditional Access in Azure Active Directory?](/cloud-app-security/proxy-intro-aad)
+* You can use Microsoft My Apps. When you click the Dropbox Business tile in My Apps, you're redirected to the Dropbox Business sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try Dropbox Business with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+Once you configure Dropbox Business you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/everbridge-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/everbridge-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 04/18/2019 Last updated : 01/27/2021

# Tutorial: Azure Active Directory integration with Everbridge
@@ -20,8 +20,6 @@ When you integrate Everbridge with Azure AD, you can:
* Control in Azure AD who has access to Everbridge.
* Allow your users to be automatically signed in to Everbridge with their Azure AD accounts. This access control is called single sign-on (SSO).
* Manage your accounts in one central location by using the Azure portal.
-For more information about software as a service (SaaS) app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Prerequisites
@@ -36,60 +34,39 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Everbridge supports IDP-initiated SSO.
-## Add Everbridge from the Azure Marketplace
+## Add Everbridge from the Gallery
-To configure the integration of Everbridge into Azure AD, add Everbridge from the Azure Marketplace to your list of managed SaaS apps.
+To configure the integration of Everbridge into Azure AD, you need to add Everbridge from the gallery to your list of managed SaaS apps.
-To add Everbridge from the Azure Marketplace, follow these steps.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Everbridge** in the search box.
+1. Select **Everbridge** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.
+## Configure and test Azure AD SSO for Everbridge
- ![Azure Active Directory button](common/select-azuread.png)
+Configure and test Azure AD SSO with Everbridge using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Everbridge.
-2. Go to **Enterprise applications**, and then select **All applications**.
+To configure and test Azure AD SSO with Everbridge, perform the following steps:
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Everbridge SSO](#configure-everbridge-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Everbridge test user](#create-everbridge-test-user)** - to have a counterpart of B.Simon in Everbridge that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-3. To add a new application, select **New application** at the top of the dialog box.
+### Configure Azure AD SSO
- ![New application button](common/add-new-app.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-4. In the search box, enter **Everbridge**. Select **Everbridge** from the result panel, and select **Add**.
+1. In the Azure portal, on the **Everbridge** application integration page, find the **Manage** section and select **Single sign-on**.
+1. On the **Select a Single sign-on method** page, select **SAML**.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Everbridge in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Everbridge based on the test user Britta Simon.
-For single sign-on to work, establish a link relationship between an Azure AD user and the related user in Everbridge.
-
-To configure and test Azure AD single sign-on with Everbridge, complete the following building blocks:
-
-- [Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on) to enable your users to use this feature.
-- [Configure Everbridge as Everbridge manager portal single sign-on](#configure-everbridge-as-everbridge-manager-portal-single-sign-on) to configure the single sign-on settings on the application side.
-- [Configure Everbridge as Everbridge member portal single sign-on](#configure-everbridge-as-everbridge-member-portal-single-sign-on) to configure the single sign-on settings on the application side.
-- [Create an Azure AD test user](#create-an-azure-ad-test-user) to test Azure AD single sign-on with Britta Simon.
-- [Assign the Azure AD test user](#assign-the-azure-ad-test-user) to enable Britta Simon to use Azure AD single sign-on.
-- [Create an Everbridge test user](#create-an-everbridge-test-user) to have a counterpart of Britta Simon in Everbridge that's linked to the Azure AD representation of the user.
-- [Test single sign-on](#test-single-sign-on) to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with Everbridge, follow these steps.
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Everbridge** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. In the **Select a single sign-on method** dialog box, select the **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, select **Edit** to open the **Basic SAML Configuration** dialog box.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
>[!NOTE]
>Configure the application either as the manager portal *or* as the member portal on both the Azure portal and the Everbridge portal.
@@ -98,10 +75,10 @@ To configure Azure AD single sign-on with Everbridge, follow these steps.
![Everbridge domain and URLs single sign-on information](common/idp-intiated.png)
- a. In the **Identifier** box, enter a URL that follows the pattern
+ a. In the **Identifier** box, enter a URL that follows this pattern:
`https://sso.everbridge.net/<API_Name>`
- b. In the **Reply URL** box, enter a URL that follows the pattern
+ b. In the **Reply URL** box, enter a URL that follows this pattern:
`https://manager.everbridge.net/saml/SSO/<API_Name>/alias/defaultAlias`

> [!NOTE]
@@ -134,11 +111,31 @@ To configure Azure AD single sign-on with Everbridge, follow these steps.
![Copy configuration URLs](common/copy-configuration-urls.png)
- - Login URL
- - Azure AD Identifier
- - Logout URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Everbridge.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Everbridge**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure Everbridge as Everbridge manager portal single sign-on
+### Configure Everbridge SSO
To configure SSO on **Everbridge** as an **Everbridge manager portal** application, follow these steps.
@@ -146,7 +143,7 @@ To configure SSO on **Everbridge** as an **Everbridge manager portal** applicati
1. In the menu on the top, select the **Settings** tab. Under **Security**, select **Single Sign-On**.
- ![Configure single sign-on](./media/everbridge-tutorial/tutorial_everbridge_002.png)
+ ![Configure single sign-on](./media/everbridge-tutorial/sso.png)
a. In the **Name** box, enter the name of the identifier provider. An example is your company name.
@@ -162,72 +159,22 @@ To configure SSO on **Everbridge** as an **Everbridge manager portal** applicati
g. Select **Save**.
-### Configure Everbridge as Everbridge member portal single sign-on
+### Configure Everbridge as Everbridge member portal SSO
To configure single sign-on on **Everbridge** as an **Everbridge member portal**, send the downloaded **Federation Metadata XML** to the [Everbridge support team](mailto:support@everbridge.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
-### Create an Azure AD test user
-
-To create the test user Britta Simon in the Azure portal, follow these steps.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory** > **Users** > **All users**.
-
- ![Users and All users links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user button](common/new-user.png)
-
-3. In the **User** dialog box, follow these steps.
-
- ![User dialog box](common/user-properties.png)
-
- a. In the **Name** box, enter **BrittaSimon**.
-
- b. In the **User name** box, enter `brittasimon@yourcompanydomain.extension`. An example is BrittaSimon@contoso.com.
-
- c. Select the **Show Password** check box. Write down the value that displays in the **Password** box.
-
- d. Select **Create**.
-
-### Assign the Azure AD test user
-
-Enable Britta Simon to use Azure single sign-on by granting access to Everbridge.
-
-1. In the Azure portal, select **Enterprise applications** > **All applications** >**Everbridge**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Everbridge**.
-
- ![Everbridge link in the applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![Users and groups link](common/users-groups-blade.png)
-
-4. Select **Add user**. In the **Add Assignment** dialog box, select **Users and groups**.
-
- ![Add Assignment dialog box](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog box, select **Britta Simon** in the users list. Choose **Select** at the bottom of the screen.
-
-6. If you expect any role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Choose **Select** at the bottom of the screen.
-
-7. In the **Add Assignment** dialog box, select **Assign**.
-
-### Create an Everbridge test user
+### Create Everbridge test user
In this section, you create a test user called B.Simon in Everbridge. To add users in the Everbridge platform, work with the [Everbridge support team](mailto:support@everbridge.com). Users must be created and activated in Everbridge before you use single sign-on.
-### Test single sign-on
+### Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-Test your Azure AD single sign-on configuration by using the Access Panel.
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Everbridge account for which you set up SSO.
-When you select the Everbridge tile in the Access Panel, you should be automatically signed in to the Everbridge account for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Everbridge tile in My Apps, you should be automatically signed in to the Everbridge account for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-## Additional resources
+## Next steps
-- [List of tutorials on how to integrate SaaS apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Everbridge you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/helloid-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/helloid-provisioning-tutorial.md
@@ -0,0 +1,164 @@
+
+ Title: 'Tutorial: Configure HelloID for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to HelloID.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: ffd450a5-03ec-4364-8921-5c468e119c4d
+++
+ na
+ms.devlang: na
+ Last updated : 01/15/2021+++
+# Tutorial: Configure HelloID for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both HelloID and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [HelloID](https://www.helloid.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in HelloID
+> * Remove users in HelloID when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and HelloID
+> * Provision groups and group memberships in HelloID
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A [HelloID tenant](https://www.helloid.com/).
+* A user account in HelloID with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and HelloID](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure HelloID to support provisioning with Azure AD
+
+1. Sign in to your HelloID administrator dashboard.
+
+ ![HelloID admin sign in](media/helloid-provisioning-tutorial/admin-sign-in.png)
+
+2. Go to **Directory** > **Azure AD**.
+
+ ![Directory > Azure AD](media/helloid-provisioning-tutorial/directory-azure-ad.png)
+
+3. Select the **New Secret** button.
+
+ ![New Secret button](media/helloid-provisioning-tutorial/new-secret.png)
+
+4. The **URL** and **Secret** fields are automatically populated. Copy and save the URL and Secret. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your HelloID application in the Azure portal.
+
+ ![URL and secret generated](media/helloid-provisioning-tutorial/url-secret.png)
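If you want to sanity-check the copied values before pasting them into the portal, you can call the endpoint yourself. The sketch below is an optional illustration, not part of HelloID's documented setup: the tenant URL and token are placeholders, and it assumes the endpoint behaves as a standard SCIM 2.0 service, where `/ServiceProviderConfig` (the discovery resource defined by RFC 7644) can be fetched without a request body.

```python
import urllib.request

# Hypothetical placeholders -- substitute the URL and Secret copied from HelloID.
tenant_url = "https://example.helloid.com/scim"
secret_token = "example-secret-token"

# /ServiceProviderConfig needs no body, which makes it a cheap connectivity test
# if the endpoint implements standard SCIM 2.0 discovery.
req = urllib.request.Request(
    f"{tenant_url}/ServiceProviderConfig",
    headers={
        "Authorization": f"Bearer {secret_token}",
        "Accept": "application/scim+json",
    },
)

# An HTTP 200 from urllib.request.urlopen(req) would indicate the URL/token
# pair is valid, similar to what the portal's Test Connection button checks.
print(req.full_url)
```

Azure AD's **Test Connection** button performs an equivalent check, so this is only useful for troubleshooting outside the portal.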
+
+## Step 3. Add HelloID from the Azure AD application gallery
+
+Add HelloID from the Azure AD application gallery to start managing provisioning to HelloID. If you have previously set up HelloID for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to HelloID, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to HelloID
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in HelloID based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for HelloID in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **HelloID**.
+
+ ![The HelloID link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your HelloID Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to HelloID. If the connection fails, ensure your HelloID account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to HelloID**.
+
+9. Review the user attributes that are synchronized from Azure AD to HelloID in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in HelloID for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the HelloID API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;|
+ |active|Boolean|
+ |displayName|String|
+ |emails[type eq "work"].value|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |externalId|String|
+
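To make the mapping concrete, the sketch below shows what a SCIM 2.0 user resource shaped like the table above could look like. This is a hypothetical illustration (all values, including the externalId, are invented), assuming HelloID accepts standard SCIM user payloads:

```python
import json

# Hypothetical illustration: a SCIM 2.0 user resource shaped like the
# Azure AD -> HelloID attribute mapping table above. All values are made up.
user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "B.Simon@contoso.com",  # matching attribute (supports filtering)
    "active": True,
    "displayName": "B.Simon",
    "emails": [{"type": "work", "value": "B.Simon@contoso.com"}],
    "name": {"givenName": "B.", "familyName": "Simon"},
    "externalId": "b-simon-001",
}

# For update operations, the provisioning service locates the existing user
# with a SCIM filter on the matching attribute:
scim_filter = f'userName eq "{user["userName"]}"'
print(scim_filter)
print(json.dumps(user, indent=2))
```

This is why the matching attribute must support filtering: the service queries by it before deciding whether to create or update a user.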
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to HelloID**.
+
+11. Review the group attributes that are synchronized from Azure AD to HelloID in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in HelloID for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |displayName|String|&check;|
+ |externalId|String|
+ |members|Reference|
+
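Similarly, a group resource built from this mapping might look like the following hypothetical sketch (the group name and member ID are invented); the `members` attribute is a Reference type, pointing at users that were provisioned earlier:

```python
# Hypothetical illustration: a SCIM 2.0 group resource shaped like the
# group attribute mapping table above. Names and IDs are made up.
group = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
    "displayName": "Sales Team",  # matching attribute (supports filtering)
    "externalId": "sales-team-001",
    # "members" is a Reference type: each entry refers to an already
    # provisioned user resource.
    "members": [{"value": "b-simon-001"}],
}
print(group["displayName"])
```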
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for HelloID, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to HelloID by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/iris-intranet-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/iris-intranet-provisioning-tutorial.md
@@ -0,0 +1,143 @@
+
+ Title: 'Tutorial: Configure Iris Intranet for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Iris Intranet.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 38db8479-6d33-43de-9f71-1f1bd184fe69
+++
+ na
+ms.devlang: na
+ Last updated : 01/15/2021+++
+# Tutorial: Configure Iris Intranet for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Iris Intranet and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Iris Intranet](https://www.triptic.nl/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Iris Intranet
+> * Remove users in Iris Intranet when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Iris Intranet
+> * [Single sign-on](iris-intranet-tutorial.md) to Iris Intranet (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An Iris Intranet tenant.
+* A user account in Iris Intranet with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Iris Intranet](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Iris Intranet to support provisioning with Azure AD
+
+To configure Iris Intranet to support provisioning with Azure AD, you need to get the **Tenant URL** and **Secret Token** by contacting the [Iris Intranet support team](mailto:support@triptic.nl). These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Iris Intranet application in the Azure portal.
+
+## Step 3. Add Iris Intranet from the Azure AD application gallery
+
+Add Iris Intranet from the Azure AD application gallery to start managing provisioning to Iris Intranet. If you have previously set up Iris Intranet for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Iris Intranet, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
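As an illustration of how an attribute-based scoping rule narrows the set of provisioned users, the sketch below evaluates a hypothetical `department EQUALS Sales` clause in Python. The attribute name, operator, and user records are invented for the example; real scoping filters are configured in the Azure portal, not in code.

```python
def in_scope(user: dict, attribute: str, operator: str, value: str) -> bool:
    """Evaluate a single scoping-filter clause against one user record."""
    actual = user.get(attribute)
    if operator == "EQUALS":
        return actual == value
    if operator == "NOT EQUALS":
        return actual != value
    raise ValueError(f"unsupported operator: {operator}")

users = [
    {"userPrincipalName": "b.simon@contoso.com", "department": "Sales"},
    {"userPrincipalName": "a.jones@contoso.com", "department": "HR"},
]

# Only users matching the clause are handed to the provisioning service.
scoped = [u for u in users if in_scope(u, "department", "EQUALS", "Sales")]
```

Here only `b.simon@contoso.com` survives the filter, mirroring how a scoping filter excludes everyone else from provisioning.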
+
+## Step 5. Configure automatic user provisioning to Iris Intranet
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Iris Intranet based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Iris Intranet in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Iris Intranet**.
+
+ ![The Iris Intranet link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Iris Intranet Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Iris Intranet. If the connection fails, ensure your Iris Intranet account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
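Under the hood, **Test Connection** verifies that the endpoint accepts the Secret Token as a bearer credential. The sketch below builds an equivalent probe of a SCIM endpoint's `/ServiceProviderConfig` resource; the tenant URL and token are placeholders, and the exact request Azure AD issues is an internal detail, so treat this only as an approximation.

```python
def build_test_request(tenant_url: str, secret_token: str) -> tuple:
    """Build the URL and headers for a SCIM ServiceProviderConfig probe."""
    url = tenant_url.rstrip("/") + "/ServiceProviderConfig"
    headers = {
        "Authorization": f"Bearer {secret_token}",
        "Accept": "application/scim+json",
    }
    return url, headers

url, headers = build_test_request("https://scim.example.com/v2/", "s3cret")
# A 200 response to this request would indicate the credentials are accepted:
# resp = urllib.request.urlopen(urllib.request.Request(url, headers=headers))
```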
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Iris Intranet**.
+
+9. Review the user attributes that are synchronized from Azure AD to Iris Intranet in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Iris Intranet for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Iris Intranet API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
   |Attribute|Type|Supported for filtering|
   |---|---|---|
   |userName|String|&check;|
   |active|Boolean||
   |emails[type eq "work"].value|String||
   |name.givenName|String||
   |name.familyName|String||
   |name.formatted|String||
   |phoneNumbers[type eq "work"].value|String||
   |phoneNumbers[type eq "mobile"].value|String||
   |externalId|String||
+
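For reference, a user provisioned with the mappings above travels to the SCIM endpoint as a JSON payload shaped roughly like the following sketch (the values are invented; the attribute paths mirror the table):

```python
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "b.simon@contoso.com",  # matching attribute, must support filtering
    "active": True,
    "externalId": "b.simon",
    "name": {
        "givenName": "B",
        "familyName": "Simon",
        "formatted": "B Simon",
    },
    "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
    "phoneNumbers": [
        {"type": "work", "value": "+1 425 555 0100"},
        {"type": "mobile", "value": "+1 425 555 0101"},
    ],
}

# A SCIM path such as emails[type eq "work"].value selects the matching
# sub-attribute from the multi-valued list:
work_email = next(e["value"] for e in scim_user["emails"] if e["type"] == "work")
```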
+
+10. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for Iris Intranet, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to Iris Intranet by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/meraki-dashboard-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/meraki-dashboard-tutorial.md
@@ -99,7 +99,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit SAML Signing Certificate](common/edit-certificate.png)
-1. In the **SAML Signing Certificate** section, copy the **Thumbprint Value** and save it on your computer.
+1. In the **SAML Signing Certificate** section, copy the **Thumbprint Value** and save it on your computer. This value needs to be converted to include colons in order for the Meraki dashboard to understand it. For example, if the thumbprint from Azure is `C2569F50A4AAEDBB8E`, it will need to be changed to `C2:56:9F:50:A4:AA:ED:BB:8E` for use later in the Meraki dashboard.
![Copy Thumbprint value](common/copy-thumbprint.png)
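The conversion is purely mechanical: insert a colon between every pair of hex digits. A small helper such as this hypothetical Python function performs it:

```python
def to_colon_fingerprint(thumbprint: str) -> str:
    """Insert a colon between each pair of hex digits, as Meraki expects."""
    t = thumbprint.replace(" ", "").replace(":", "").upper()
    return ":".join(t[i:i + 2] for i in range(0, len(t), 2))

print(to_colon_fingerprint("C2569F50A4AAEDBB8E"))  # C2:56:9F:50:A4:AA:ED:BB:8E
```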
@@ -161,7 +161,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Meraki Dashboard Add a SAML IdP](./media/meraki-dashboard-tutorial/configure-3.png)
-1. Paste the **Thumbprint** Value, which you have copied from the Azure portal into **X.590 cert SHA1 fingerprint** textbox. Then click **Save**. After saving, the Consumer URL will show up. Copy Consumer URL value and paste this into **Reply URL** textbox in the **Basic SAML Configuration Section** in the Azure portal.
+1. Paste the converted **Thumbprint** value, which you copied from the Azure portal and converted to the colon-separated format described in step 9 of the previous section, into the **X.509 cert SHA1 fingerprint** textbox. Then click **Save**. After saving, the Consumer URL will show up. Copy the Consumer URL value and paste it into the **Reply URL** textbox in the **Basic SAML Configuration** section in the Azure portal.
![Meraki Dashboard Configuration](./media/meraki-dashboard-tutorial/configure-4.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/moveittransfer-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/moveittransfer-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 02/25/2019
Last updated : 01/27/2021

# Tutorial: Azure Active Directory integration with MOVEit Transfer - Azure AD integration
-In this tutorial, you learn how to integrate MOVEit Transfer - Azure AD integration with Azure Active Directory (Azure AD).
-Integrating MOVEit Transfer - Azure AD integration with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate MOVEit Transfer - Azure AD integration with Azure Active Directory (Azure AD). When you integrate MOVEit Transfer - Azure AD integration with Azure AD, you can:
-* You can control in Azure AD who has access to MOVEit Transfer - Azure AD integration.
-* You can enable your users to be automatically signed-in to MOVEit Transfer - Azure AD integration (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to MOVEit Transfer - Azure AD integration.
+* Enable your users to be automatically signed-in to MOVEit Transfer - Azure AD integration with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with MOVEit Transfer - Azure AD integration, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* MOVEit Transfer - Azure AD integration single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* MOVEit Transfer - Azure AD integration single sign-on (SSO) enabled subscription.
## Scenario description
@@ -37,59 +33,39 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* MOVEit Transfer - Azure AD integration supports **SP** initiated SSO
-## Adding MOVEit Transfer - Azure AD integration from the gallery
+## Add MOVEit Transfer - Azure AD integration from the gallery
To configure the integration of MOVEit Transfer - Azure AD integration into Azure AD, you need to add MOVEit Transfer - Azure AD integration from the gallery to your list of managed SaaS apps.
-**To add MOVEit Transfer - Azure AD integration from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **MOVEit Transfer - Azure AD integration**, select **MOVEit Transfer - Azure AD integration** from result panel then click **Add** button to add the application.
-
- ![MOVEit Transfer - Azure AD integration in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with MOVEit Transfer - Azure AD integration based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in MOVEit Transfer - Azure AD integration needs to be established.
-
-To configure and test Azure AD single sign-on with MOVEit Transfer - Azure AD integration, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **MOVEit Transfer - Azure AD integration** in the search box.
+1. Select **MOVEit Transfer - Azure AD integration** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure MOVEit Transfer - Azure AD integration Single Sign-On](#configure-moveit-transferazure-ad-integration-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create MOVEit Transfer - Azure AD integration test user](#create-moveit-transferazure-ad-integration-test-user)** - to have a counterpart of Britta Simon in MOVEit Transfer - Azure AD integration that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for MOVEit Transfer - Azure AD integration
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with MOVEit Transfer - Azure AD integration using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in MOVEit Transfer - Azure AD integration.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with MOVEit Transfer - Azure AD integration, perform the following steps:
-To configure Azure AD single sign-on with MOVEit Transfer - Azure AD integration, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure MOVEit Transfer - Azure AD integration SSO](#configure-moveit-transferazure-ad-integration-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create MOVEit Transfer - Azure AD integration test user](#create-moveit-transferazure-ad-integration-test-user)** - to have a counterpart of B.Simon in MOVEit Transfer - Azure AD integration that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **MOVEit Transfer - Azure AD integration** application integration page, select **Single sign-on**.
+### Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **MOVEit Transfer - Azure AD integration** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps:
@@ -105,7 +81,7 @@ To configure Azure AD single sign-on with MOVEit Transfer - Azure AD integration
![MOVEit Transfer - Azure AD integration Domain and URLs single sign-on information](common/sp-identifier-reply.png)
- In the **Sign-on URL** text box, type a URL using the following pattern:
+ In the **Sign-on URL** text box, type the URL:
   `https://contoso.com`

   > [!NOTE]
@@ -119,48 +95,66 @@ To configure Azure AD single sign-on with MOVEit Transfer - Azure AD integration
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- b. Azure AD Identifier
+### Assign the Azure AD test user
- c. Logout URL
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to MOVEit Transfer - Azure AD integration.
-### Configure MOVEit Transfer - Azure AD integration Single Sign-On
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **MOVEit Transfer - Azure AD integration**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+### Configure MOVEit Transfer - Azure AD integration SSO
1. Sign on to your MOVEit Transfer tenant as an administrator.

2. On the left navigation pane, click **Settings**.
- ![Settings Section On App side](./media/moveittransfer-tutorial/tutorial_moveittransfer_000.png)
+ ![Settings Section On App side](./media/moveittransfer-tutorial/settings.png)
3. Click **Single Signon** link, which is under **Security Policies -> User Auth**.
- ![Security Policies On App side](./media/moveittransfer-tutorial/tutorial_moveittransfer_001.png)
+ ![Security Policies On App side](./media/moveittransfer-tutorial/sso.png)
4. Click the Metadata URL link to download the metadata document.
- ![Service Provider Metadata URL](./media/moveittransfer-tutorial/tutorial_moveittransfer_002.png)
+ ![Service Provider Metadata URL](./media/moveittransfer-tutorial/metadata.png)
 * Verify **entityID** matches **Identifier** in the **Basic SAML Configuration** section.
 * Verify **AssertionConsumerService** Location URL matches **REPLY URL** in the **Basic SAML Configuration** section.
- ![Configure Single Sign-On On App side](./media/moveittransfer-tutorial/tutorial_moveittransfer_007.png)
+ ![Configure Single Sign-On On App side](./media/moveittransfer-tutorial/xml.png)
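If you prefer to verify those two values programmatically, the downloaded metadata is plain XML, so the standard library can extract them. The sample document below is a fabricated stand-in for your real SP metadata; only the element and attribute names follow the SAML metadata schema.

```python
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

metadata = f"""<EntityDescriptor xmlns="{MD_NS}"
    entityID="https://contoso.moveitcloud.com/">
  <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <AssertionConsumerService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        Location="https://contoso.moveitcloud.com/acs" index="0"/>
  </SPSSODescriptor>
</EntityDescriptor>"""

root = ET.fromstring(metadata)
entity_id = root.get("entityID")  # compare with Identifier in Azure
acs_url = root.find(
    f".//{{{MD_NS}}}AssertionConsumerService"
).get("Location")  # compare with Reply URL in Azure
```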
5. Click **Add Identity Provider** button to add a new Federated Identity Provider.
- ![Add Identity Provider](./media/moveittransfer-tutorial/tutorial_moveittransfer_003.png)
+ ![Add Identity Provider](./media/moveittransfer-tutorial/idp.png)
6. Click **Browse...** to select the metadata file which you downloaded from Azure portal, then click **Add Identity Provider** to upload the downloaded file.
- ![SAML Identity Provider](./media/moveittransfer-tutorial/tutorial_moveittransfer_004.png)
+ ![SAML Identity Provider](./media/moveittransfer-tutorial/saml.png)
7. Select "**Yes**" as **Enabled** in the **Edit Federated Identity Provider Settings...** page and click **Save**.
- ![Federated Identity Provider Settings](./media/moveittransfer-tutorial/tutorial_moveittransfer_005.png)
+ ![Federated Identity Provider Settings](./media/moveittransfer-tutorial/save.png)
8. In the **Edit Federated Identity Provider User Settings** page, perform the following actions:
- ![Edit Federated Identity Provider Settings](./media/moveittransfer-tutorial/tutorial_moveittransfer_006.png)
+ ![Edit Federated Identity Provider Settings](./media/moveittransfer-tutorial/attributes.png)
a. Select **SAML NameID** as **Login name**.
@@ -172,57 +166,6 @@ To configure Azure AD single sign-on with MOVEit Transfer - Azure AD integration
e. Click **Save** button.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to MOVEit Transfer - Azure AD integration.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **MOVEit Transfer - Azure AD integration**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **MOVEit Transfer - Azure AD integration**.
-
- ![The MOVEit Transfer - Azure AD integration link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
-
### Create MOVEit Transfer - Azure AD integration test user

The objective of this section is to create a user called Britta Simon in MOVEit Transfer - Azure AD integration. MOVEit Transfer - Azure AD integration supports just-in-time provisioning, which you have enabled. There is no action item for you in this section. A new user is created during an attempt to access MOVEit Transfer - Azure AD integration if it doesn't exist yet.
@@ -230,16 +173,16 @@ The objective of this section is to create a user called Britta Simon in MOVEit
> [!NOTE]
> If you need to create a user manually, you need to contact the [MOVEit Transfer - Azure AD integration Client support team](https://community.ipswitch.com/s/support).
-### Test single sign-on
+### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the MOVEit Transfer - Azure AD integration tile in the Access Panel, you should be automatically signed in to the MOVEit Transfer - Azure AD integration for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal. This will redirect to MOVEit Transfer - Azure AD integration Sign-on URL where you can initiate the login flow.
-## Additional resources
+* Go to MOVEit Transfer - Azure AD integration Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the MOVEit Transfer - Azure AD integration tile in My Apps, you should be automatically signed in to the MOVEit Transfer - Azure AD integration for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure MOVEit Transfer - Azure AD integration you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/notion-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/notion-tutorial.md
@@ -36,7 +36,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Notion supports **SP and IDP** initiated SSO.
* Notion supports **Just In Time** user provisioning.

> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+> Identifier of this application is a fixed string value so one Notion workspace can be configured in one tenant.
## Adding Notion from the gallery
@@ -76,16 +76,16 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- In the **Reply URL** text box, type a URL using the following pattern:
+ In the **Reply URL** text box, enter the URL with the following pattern that you can obtain from your Notion workspace **Settings & Members** > **Security & identity** > **Single sign-on URL**:
   `https://www.notion.so/sso/saml/<CUSTOM_ID>`

1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://www.notion.so/sso/saml/<CUSTOM_ID>`
+ In the **Sign-on URL** text box, enter the following URL:
+ `https://www.notion.so/login`
> [!NOTE]
- > These values are not real. Update these values with the actual Reply URL and Sign-On URL. Contact [Notion Client support team](mailto:team@makenotion.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Sign-On URL. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. Notion application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
@@ -100,7 +100,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| lastName | user.surname |
-1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url**. Go to your **Notion** workspace **Settings & Members** > **Security & identity**, and paste the value you copied into the **IDP metadata URL** field.
![The Certificate download link](common/copy-metadataurl.png)
@@ -130,7 +130,13 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Notion SSO
-To configure single sign-on on **Notion** side, you need to send the **App Federation Metadata Url** to [Notion support team](mailto:team@makenotion.com). They set this setting to have the SAML SSO connection set properly on both sides.
+Go to your **Notion** workspace **Settings & Members** > **Security & identity**, and paste the **App Federation Metadata Url** value you copied into the **IDP metadata URL** field.
+
+On the same settings page, under **Email domains**, click **Contact support** to add your organization's email domain(s).
+
+After your email domains are approved and added, enable SAML SSO using the **Enable SAML** toggle.
+
+After successful testing, you may enforce SAML SSO using the **Enforce SAML** toggle. Note that your Notion workspace administrators retain the ability to log in with email, but all other members will have to use SAML SSO to log in to Notion.
### Create Notion test user
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/pegasystems-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/pegasystems-tutorial.md
@@ -9,29 +9,23 @@
Previously updated : 03/26/2019
Last updated : 01/25/2021

# Tutorial: Azure Active Directory integration with Pega Systems
-In this tutorial, you'll learn how to integrate Pega Systems with Azure Active Directory (Azure AD).
+In this tutorial, you'll learn how to integrate Pega Systems with Azure Active Directory (Azure AD). When you integrate Pega Systems with Azure AD, you can:
-This integration provides these benefits:
-
-* You can use Azure AD to control who has access to Pega Systems.
-* You can enable your users to be automatically signed-in to Pega Systems (single sign-on) with their Azure AD accounts.
-* You can manage your accounts in one central location: the Azure portal.
-
-To learn more about SaaS app integration with Azure AD, see [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you start.
+* Control in Azure AD who has access to Pega Systems.
+* Enable your users to be automatically signed-in to Pega Systems with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Pega Systems, you need to have:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can sign up for a [one-month trial](https://azure.microsoft.com/pricing/free-trial/).
-* A Pega Systems subscription that has single sign-on enabled.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Pega Systems single sign-on (SSO) enabled subscription.
## Scenario description
@@ -41,55 +35,37 @@ In this tutorial, you'll configure and test Azure AD single sign-on in a test en
## Add Pega Systems from the gallery
-To set up the integration of Pega Systems into Azure AD, you need to add Pega Systems from the gallery to your list of managed SaaS apps.
-
-1. In the [Azure portal](https://portal.azure.com), in the left pane, select **Azure Active Directory**:
-
- ![Select Azure Active Directory](common/select-azuread.png)
-
-2. Go to **Enterprise applications** > **All applications**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add an application, select **New application** at the top of the window:
-
- ![Select New application](common/add-new-app.png)
-
-4. In the search box, enter **Pega Systems**. Select **Pega Systems** in the search results, and then select **Add**.
-
- ![Search results](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+To configure the integration of Pega Systems into Azure AD, you need to add Pega Systems from the gallery to your list of managed SaaS apps.
-In this section, you'll configure and test Azure AD single sign-on with Pega Systems by using a test user named Britta Simon.
-To enable single sign-on, you need to establish a relationship between an Azure AD user and the corresponding user in Pega Systems.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Pega Systems** in the search box.
+1. Select **Pega Systems** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Pega Systems, you need to complete these steps:
+## Configure and test Azure AD SSO for Pega Systems
-1. **[Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on)** to enable the feature for your users.
-2. **[Configure Pega Systems single sign-on](#configure-pega-systems-single-sign-on)** on the application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** to enable Azure AD single sign-on for the user.
-5. **[Create a Pega Systems test user](#create-a-pega-systems-test-user)** that's linked to the Azure AD representation of the user.
-6. **[Test single sign-on](#test-single-sign-on)** to verify that the configuration works.
+Configure and test Azure AD SSO with Pega Systems using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Pega Systems.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Pega Systems, perform the following steps:
-In this section, you'll enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Pega Systems SSO](#configure-pega-systems-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Pega Systems test user](#create-pega-systems-test-user)** - to have a counterpart of B.Simon in Pega Systems that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Pega Systems, take these steps:
+### Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Pega Systems** application integration page, select **Single sign-on**:
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Select Single sign-on](common/select-sso.png)
+1. In the Azure portal, on the **Pega Systems** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. In the **Select a single sign-on method** dialog box, select **SAML/WS-Fed** mode to enable single sign-on:
-
- ![Select a single sign-on method](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, select the **Edit** icon to open the **Basic SAML Configuration** dialog box:
-
- ![Edit icon](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. In the **Basic SAML Configuration** dialog box, if you want to configure the application in IdP-initiated mode, complete the following steps.
@@ -161,23 +137,41 @@ To configure Azure AD single sign-on with Pega Systems, take these steps:
![Copy the configuration URLs](common/copy-configuration-urls.png)
- 1. **Login URL**.
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
- 1. **Azure AD Identifier**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
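The portal steps above can also be scripted. Below is a minimal sketch of the request body for creating the same test user through Microsoft Graph's `POST /users` endpoint — the property names are the documented Graph user properties, while the helper function and the example password are made up for illustration:

```python
import json

# Documented Microsoft Graph endpoint for creating users (requires an access token).
GRAPH_USERS_URL = "https://graph.microsoft.com/v1.0/users"

def build_test_user_payload(display_name: str, upn: str, password: str) -> dict:
    """Build the JSON body for creating a test user via Microsoft Graph.

    accountEnabled, displayName, mailNickname, userPrincipalName, and
    passwordProfile are the required Graph user properties; the values
    mirror the B.Simon example in the steps above.
    """
    return {
        "accountEnabled": True,
        "displayName": display_name,
        "mailNickname": upn.split("@")[0].replace(".", ""),
        "userPrincipalName": upn,
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": password,  # example value only
        },
    }

payload = build_test_user_payload("B.Simon", "B.Simon@contoso.com", "P@ssw0rd-example")
print(json.dumps(payload, indent=2))
```

You would send this payload with an authenticated `POST` to `GRAPH_USERS_URL`; acquiring the token is out of scope here.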
- 1. **Logout URL**.
+### Assign the Azure AD test user
-### Configure Pega Systems single sign-on
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Pega Systems.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Pega Systems**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+### Configure Pega Systems SSO
1. To configure single sign-on on the **Pega Systems** side, sign in to the Pega Portal with an admin account in another browser window.

2. Select **Create** > **SysAdmin** > **Authentication Service**:
- ![Select Authentication Service](./media/pegasystems-tutorial/tutorial_pegasystems_admin.png)
+ ![Select Authentication Service](./media/pegasystems-tutorial/admin.png)
3. Complete the following steps on the **Create Authentication Service** screen.
- ![Create Authentication Service screen](./media/pegasystems-tutorial/tutorial_pegasystems_admin1.png)
+ ![Create Authentication Service screen](./media/pegasystems-tutorial/admin1.png)
1. In the **Type** list, select **SAML 2.0**.
@@ -189,15 +183,15 @@ To configure Azure AD single sign-on with Pega Systems, take these steps:
4. In the **Identity Provider (IdP) information** section, select **Import IdP metadata** and browse to the metadata file that you downloaded from the Azure portal. Click **Submit** to load the metadata:
- ![Identity Provider (IdP) information section](./media/pegasystems-tutorial/tutorial_pegasystems_admin2.png)
+ ![Identity Provider (IdP) information section](./media/pegasystems-tutorial/admin2.png)
The import will populate the IdP data as shown here:
- ![Imported IdP data](./media/pegasystems-tutorial/tutorial_pegasystems_admin3.png)
+ ![Imported IdP data](./media/pegasystems-tutorial/idp.png)
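If you want to sanity-check the downloaded federation metadata before importing it, the values Pega populates (entity ID, sign-on and logout URLs) can be read directly from the XML. A rough sketch using only the Python standard library; the inline sample document is a stand-in with the same shape as a real Azure AD metadata file:

```python
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

def read_idp_metadata(xml_text: str) -> dict:
    """Extract the entity ID and SSO/logout endpoints from SAML IdP metadata."""
    root = ET.fromstring(xml_text)
    info = {"entity_id": root.get("entityID")}
    for tag, key in (("SingleSignOnService", "sso_url"),
                     ("SingleLogoutService", "logout_url")):
        el = root.find(f".//{{{MD_NS}}}{tag}")
        if el is not None:
            info[key] = el.get("Location")
    return info

# Tiny stand-in document; real Azure AD metadata has the same element shape.
sample = f"""<EntityDescriptor xmlns="{MD_NS}"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/common/saml2"/>
    <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/common/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

print(read_idp_metadata(sample))
```

These are the same values the Pega import screen shows after you click **Submit**.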
6. Complete the following steps in the **Service Provider (SP) settings** section.
- ![Service provider settings](./media/pegasystems-tutorial/tutorial_pegasystems_admin4.png)
+ ![Service provider settings](./media/pegasystems-tutorial/sp.png)
1. Copy the **Entity Identification** value and paste it into the **Identifier** box in the **Basic SAML Configuration** section in the Azure portal.
@@ -207,70 +201,26 @@ To configure Azure AD single sign-on with Pega Systems, take these steps:
7. Select **Save**.
-### Create an Azure AD test user
-
-In this section, you'll create a test user named Britta Simon in the Azure portal.
-
-1. In the Azure portal, select **Azure Active Directory** in the left pane, select **Users**, and then select **All users**:
+### Create Pega Systems test user
- ![Select All users](common/users.png)
-
-2. Select **New user** at the top of the screen:
-
- ![Select New user](common/new-user.png)
-
-3. In the **User** dialog box, complete the following steps.
-
- ![User dialog box](common/user-properties.png)
-
- a. In the **Name** box, enter **BrittaSimon**.
-
- b. In the **User name** box, enter **brittasimon@\<yourcompanydomain>.\<extension>**. (For example, BrittaSimon@contoso.com.)
-
- c. Select **Show password**, and then write down the value that's in the **Password** box.
-
- d. Select **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable Britta Simon to use Azure single sign-on by granting her access to Pega Systems.
-
-1. In the Azure portal, select **Enterprise applications**, select **All applications**, and then select **Pega Systems**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the list of applications, select **Pega Systems**.
-
- ![List of applications](common/all-applications.png)
-
-3. In the left pane, select **Users and groups**:
-
- ![Select Users and groups](common/users-groups-blade.png)
-
-4. Select **Add user**, and then select **Users and groups** in the **Add Assignment** dialog box.
-
- ![Select Add user](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog box, select **Britta Simon** in the users list, and then click the **Select** button at the bottom of the screen.
-
-6. If you expect a role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Click the **Select** button at the bottom of the screen.
+Next, you need to create a user named Britta Simon in Pega Systems. Work with the [Pega Systems support team](https://www.pega.com/contact-us) to create users.
-7. In the **Add Assignment** dialog box, select **Assign**.
+### Test SSO
-### Create a Pega Systems test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-Next, you need to create a user named Britta Simon in Pega Systems. Work with the [Pega Systems support team](https://www.pega.com/contact-us) to create users.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in the Azure portal. This will redirect to the Pega Systems Sign-on URL where you can initiate the login flow.
-Now you need to test your Azure AD single sign-on configuration by using the Access Panel.
+* Go to the Pega Systems Sign-on URL directly and initiate the login flow from there.
-When you select the Pega Systems tile in the Access Panel, you should be automatically signed in to the Pega Systems instance for which you set up SSO. For more information, see [Access and use apps on the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Pega Systems instance for which you set up SSO.
-- [Tutorials for integrating SaaS applications with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Pega Systems tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the Pega Systems instance for which you set up SSO. For more information, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
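For background on what the SP-initiated flow does under the hood: the service provider redirects the browser to the IdP sign-on endpoint with the AuthnRequest deflated and base64-encoded into a `SAMLRequest` query parameter (the SAML HTTP-Redirect binding). A sketch of that encoding step, with a made-up minimal request body:

```python
import base64
import urllib.parse
import zlib

def redirect_binding_url(idp_sso_url: str, authn_request_xml: str) -> str:
    """Build an SP-initiated redirect URL: the AuthnRequest is raw-DEFLATEd,
    base64-encoded, and passed as the SAMLRequest query parameter."""
    # wbits=-15 produces a raw DEFLATE stream (no zlib header), as the binding requires.
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
    deflated = compressor.compress(authn_request_xml.encode("utf-8")) + compressor.flush()
    saml_request = base64.b64encode(deflated).decode("ascii")
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})

# Made-up, minimal request body just to exercise the encoding.
request_xml = ('<samlp:AuthnRequest '
               'xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" ID="_example"/>')
url = redirect_binding_url("https://login.microsoftonline.com/common/saml2", request_xml)
print(url)
```

Real requests also carry issuer, timestamps, and usually a signature; this only shows the transport encoding.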
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Pega Systems, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/people-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/people-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 08/27/2019 Last updated : 01/27/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate People with Azure Active Directo
* Enable your users to be automatically signed-in to People with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -40,22 +38,22 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
>[!NOTE] >Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding People from the gallery
+## Add People from the gallery
To configure the integration of People into Azure AD, you need to add People from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **People** in the search box.
1. Select **People** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for People
+## Configure and test Azure AD SSO for People
Configure and test Azure AD SSO with People using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in People.
-To configure and test Azure AD SSO with People, complete the following building blocks:
+To configure and test Azure AD SSO with People, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -68,9 +66,9 @@ To configure and test Azure AD SSO with People, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **People** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **People** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -79,7 +77,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Sign-on URL** text box, type a URL using the following pattern: `https://<company name>.peoplehr.net`
- b. In the **Identifier** box, type a URL:
+ b. In the **Identifier** box, type the URL:
`https://www.peoplehr.com` c. In the **Reply URL** text box, type a URL using the following pattern:
@@ -115,15 +113,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **People**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure People SSO
@@ -140,15 +132,15 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. In the menu on the left side, click **Settings**.
- ![Screenshot that shows the left-side menu with "Settings" selected.](./media/people-tutorial/tutorial_people_001.png)
+ ![Screenshot that shows the left-side menu with "Settings" selected.](./media/people-tutorial/settings.png)
5. Click **Company**.
- ![Screenshot that shows "Company" selected from the "Settings" menu.](./media/people-tutorial/tutorial_people_002.png)
+ ![Screenshot that shows "Company" selected from the "Settings" menu.](./media/people-tutorial/company.png)
6. On the **Upload 'Single Sign On' SAML meta-data file**, click **Browse** to upload the downloaded metadata file.
- ![Configure Single Sign-On](./media/people-tutorial/tutorial_people_003.png)
+ ![Configure Single Sign-On](./media/people-tutorial/xml.png)
### Create People test user
@@ -156,9 +148,13 @@ In this section, you create a user called B.Simon in People. Work with [People C
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the People tile in the Access Panel, you should be automatically signed in to the People for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the People Sign-on URL where you can initiate the login flow.
+
+* Go to the People Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the People tile in My Apps, you're redirected to the People Sign-on URL. For more information, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
## Test SSO for People (Mobile)
@@ -174,12 +170,6 @@ When you click the People tile in the Access Panel, you should be automatically
![The once](./media/people-tutorial/test03.png)
-## Additional Resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)--- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try People with Azure AD](https://aad.portal.azure.com)
+Once you configure People, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/pluralsight-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/pluralsight-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 08/27/2019 Last updated : 01/27/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Pluralsight with Azure Active Di
* Enable your users to be automatically signed-in to Pluralsight with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -40,23 +38,22 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Pluralsight from the gallery
+## Add Pluralsight from the gallery
To configure the integration of Pluralsight into Azure AD, you need to add Pluralsight from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Pluralsight** in the search box.
1. Select **Pluralsight** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Pluralsight
+## Configure and test Azure AD SSO for Pluralsight
Configure and test Azure AD SSO with Pluralsight using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Pluralsight.
-To configure and test Azure AD SSO with Pluralsight, complete the following building blocks:
+To configure and test Azure AD SSO with Pluralsight, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +66,9 @@ To configure and test Azure AD SSO with Pluralsight, complete the following buil
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Pluralsight** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Pluralsight** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -80,7 +77,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Sign-on URL** text box, type a URL using the following pattern: `https://<instancename>.pluralsight.com/sso/<companyname>`
- b. In the **Identifier** box, type a URL using the following pattern:
+ b. In the **Identifier** box, type the URL:
`www.pluralsight.com` c. In the **Reply URL** text box, type a URL using the following pattern:
@@ -116,15 +113,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Pluralsight**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Pluralsight SSO
@@ -137,16 +128,14 @@ In this section, a user called Britta Simon is created in Pluralsight. Pluralsig
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Pluralsight tile in the Access Panel, you should be automatically signed in to the Pluralsight for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Pluralsight Sign-on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the Pluralsight Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Pluralsight tile in My Apps, you should be automatically signed in to the Pluralsight instance for which you set up SSO. For more information, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Pluralsight with Azure AD](https://aad.portal.azure.com/)
+Once you configure Pluralsight, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/preciate-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/preciate-provisioning-tutorial.md
@@ -35,6 +35,7 @@ The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant) * A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Preciate tenant.
* A user account in Preciate with Admin permissions. ## Step 1. Plan your provisioning deployment
@@ -66,9 +67,9 @@ Add Preciate from the Azure AD application gallery to start managing provisionin
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-* When assigning users and groups to Preciate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to Preciate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add other roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
## Step 5. Configure automatic user provisioning to Preciate
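Under the hood, the Azure AD provisioning service creates and updates users in the target application over SCIM. As a rough illustration, this is the general shape of a SCIM 2.0 `POST /Users` body for an assigned user — the attribute names come from the SCIM core user schema (RFC 7643); how Preciate maps them on its side is an assumption, not documented here:

```python
import json

SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"

def scim_user_payload(upn: str, given: str, family: str, active: bool = True) -> dict:
    """Shape of a SCIM 2.0 create-user request body (core schema attributes only)."""
    return {
        "schemas": [SCIM_USER_SCHEMA],
        "userName": upn,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": upn, "type": "work", "primary": True}],
        "active": active,  # deprovisioning typically flips this to False
    }

payload = scim_user_payload("B.Simon@contoso.com", "B", "Simon")
print(json.dumps(payload, indent=2))
```

The provisioning service sends requests like this to the app's SCIM endpoint on its sync interval; you never construct them yourself, but the shape is useful when reading provisioning logs.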
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/proofpoint-ondemand-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/proofpoint-ondemand-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 12/31/2018 Last updated : 01/29/2021

# Tutorial: Azure Active Directory integration with Proofpoint on Demand
-In this tutorial, you learn how to integrate Proofpoint on Demand with Azure Active Directory (Azure AD).
-Integrating Proofpoint on Demand with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Proofpoint on Demand with Azure Active Directory (Azure AD). When you integrate Proofpoint on Demand with Azure AD, you can:
-* You can control in Azure AD who has access to Proofpoint on Demand.
-* You can enable your users to be automatically signed-in to Proofpoint on Demand (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Proofpoint on Demand.
+* Enable your users to be automatically signed-in to Proofpoint on Demand with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Proofpoint on Demand, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Proofpoint on Demand single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Proofpoint on Demand single sign-on (SSO) enabled subscription.
> [!NOTE]
> If you are using MFA or passwordless authentication with Azure AD, switch off the AuthnContext value in the SAML request. Otherwise, Azure AD will throw an error on the AuthnContext mismatch and will not send the token back to the application.
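On the SP side, "switching off the AuthnContext value" amounts to sending a SAML AuthnRequest that omits the `<samlp:RequestedAuthnContext>` element. A minimal, hypothetical sketch (the entity ID and ACS URL below are placeholders, not real Proofpoint values):

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(issuer: str, acs_url: str, request_id: str) -> str:
    """Build a minimal SP-initiated SAML AuthnRequest that deliberately
    omits <samlp:RequestedAuthnContext>, leaving Azure AD free to satisfy
    the request with MFA or passwordless authentication."""
    ET.register_namespace("samlp", SAMLP)
    ET.register_namespace("saml", SAML)
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": request_id,
        "Version": "2.0",
        "IssueInstant": "2021-01-29T00:00:00Z",  # placeholder timestamp
        "AssertionConsumerServiceURL": acs_url,
    })
    issuer_el = ET.SubElement(req, f"{{{SAML}}}Issuer")
    issuer_el.text = issuer
    return ET.tostring(req, encoding="unicode")

request_xml = build_authn_request(
    "https://sp.example.com/metadata",  # hypothetical SP entity ID
    "https://sp.example.com/saml/acs",  # hypothetical ACS URL
    "_id123",
)
assert "RequestedAuthnContext" not in request_xml
print(request_xml)
```

Real SP software exposes this as a configuration switch rather than hand-built XML; the sketch only shows what the resulting request must (not) contain.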
@@ -40,59 +36,39 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Proofpoint on Demand supports **SP** initiated SSO
-## Adding Proofpoint on Demand from the gallery
+## Add Proofpoint on Demand from the gallery
To configure the integration of Proofpoint on Demand into Azure AD, you need to add Proofpoint on Demand from the gallery to your list of managed SaaS apps.
-**To add Proofpoint on Demand from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Proofpoint on Demand**, select **Proofpoint on Demand** from result panel then click **Add** button to add the application.
-
- ![Proofpoint on Demand in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Proofpoint on Demand** in the search box.
+1. Select **Proofpoint on Demand** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Proofpoint on Demand based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Proofpoint on Demand needs to be established.
+## Configure and test Azure AD SSO for Proofpoint on Demand
-To configure and test Azure AD single sign-on with Proofpoint on Demand, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Proofpoint on Demand using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Proofpoint on Demand.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Proofpoint on Demand Single Sign-On](#configure-proofpoint-on-demand-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Proofpoint on Demand test user](#create-proofpoint-on-demand-test-user)** - to have a counterpart of Britta Simon in Proofpoint on Demand that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Proofpoint on Demand, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Proofpoint on Demand SSO](#configure-proofpoint-on-demand-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Proofpoint on Demand test user](#create-proofpoint-on-demand-test-user)** - to have a counterpart of B.Simon in Proofpoint on Demand that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+### Configure Azure AD SSO
-To configure Azure AD single sign-on with Proofpoint on Demand, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Proofpoint on Demand** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Proofpoint on Demand** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
@@ -118,81 +94,48 @@ To configure Azure AD single sign-on with Proofpoint on Demand, perform the foll
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Proofpoint on Demand Single Sign-On
-
-To configure single sign-on on **Proofpoint on Demand** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Proofpoint on Demand support team](https://www.proofpoint.com/us/support-services). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
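The portal steps above can also be scripted. As a hedged sketch (assuming the Microsoft Graph `POST /users` endpoint; the `contoso.com` domain and the password value are placeholders), the request body would look like:

```python
import json

def build_user_payload(display_name: str, upn: str, password: str) -> dict:
    """Build a Microsoft Graph POST /users request body for a test user.
    The domain in `upn` (e.g. contoso.com) stands in for your tenant domain."""
    # mailNickname may not contain periods, so strip them from the UPN prefix.
    mail_nickname = upn.split("@")[0].replace(".", "")
    return {
        "accountEnabled": True,
        "displayName": display_name,
        "mailNickname": mail_nickname,
        "userPrincipalName": upn,
        "passwordProfile": {
            "password": password,
            "forceChangePasswordNextSignIn": True,
        },
    }

payload = build_user_payload("B.Simon", "B.Simon@contoso.com", "P@ssw0rd!example")
print(json.dumps(payload, indent=2))
```

You would send this body, with an appropriate bearer token, to `https://graph.microsoft.com/v1.0/users`; the portal flow above achieves the same result interactively.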
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Proofpoint on Demand.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Proofpoint on Demand.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Proofpoint on Demand**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Proofpoint on Demand**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
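If you prefer to script the assignment, Microsoft Graph exposes `POST /servicePrincipals/{id}/appRoleAssignedTo`. A minimal sketch of the request body (all GUIDs below are hypothetical; the all-zero `appRoleId` corresponds to the Default Access role):

```python
import json
import uuid

ZERO_GUID = "00000000-0000-0000-0000-000000000000"  # Default Access role id

def build_assignment(principal_id: str, resource_sp_id: str,
                     app_role_id: str = ZERO_GUID) -> dict:
    """Request body for Graph POST /servicePrincipals/{resource_sp_id}/appRoleAssignedTo.
    principal_id: object id of the user or group being assigned;
    resource_sp_id: object id of the app's service principal;
    app_role_id: the role to grant."""
    # Validate that all three ids parse as GUIDs before sending anything.
    for g in (principal_id, resource_sp_id, app_role_id):
        uuid.UUID(g)
    return {
        "principalId": principal_id,
        "resourceId": resource_sp_id,
        "appRoleId": app_role_id,
    }

body = build_assignment(
    "11111111-1111-1111-1111-111111111111",  # hypothetical user object id
    "22222222-2222-2222-2222-222222222222",  # hypothetical service principal id
)
print(json.dumps(body))
```

Note that, as the section above states, users left on the Default Access role are excluded from provisioning for some apps, so a real role id is usually what you want here.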
- ![Enterprise applications blade](common/enterprise-applications.png)
+### Configure Proofpoint on Demand SSO
-2. In the applications list, select **Proofpoint on Demand**.
-
- ![The Proofpoint on Demand link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Proofpoint on Demand** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Proofpoint on Demand support team](https://www.proofpoint.com/us/support-services). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Proofpoint on Demand test user

In this section, you create a user called B.Simon in Proofpoint on Demand. Work with the [Proofpoint on Demand Client support team](https://www.proofpoint.com/us/support-services) to add users in the Proofpoint on Demand platform.
-### Test single sign-on
+### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Proofpoint on Demand tile in the Access Panel, you should be automatically signed in to the Proofpoint on Demand for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the Proofpoint on Demand Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the Proofpoint on Demand Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Proofpoint on Demand tile in My Apps, you should be automatically signed in to the Proofpoint on Demand for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Proofpoint on Demand you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/tableauserver-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tableauserver-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 12/27/2020 Last updated : 01/25/2021
@@ -35,7 +35,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Tableau Server supports **SP** initiated SSO
-## Adding Tableau Server from the gallery
+## Add Tableau Server from the gallery
To configure the integration of Tableau Server into Azure AD, you need to add Tableau Server from the gallery to your list of managed SaaS apps.
@@ -81,7 +81,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
`https://azure.<domain name>.link/wg/saml/SSO/index.html`

> [!NOTE]
- > The preceding values are not real values. Update the values with the actual URL and identifier from the Tableau Server configuration page which is explained later in the tutorial.
+ > The preceding values are not real values. Update the values with the actual Sign-on URL, Identifier, and Reply URL from the Tableau Server configuration page, which is explained later in the tutorial.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
@@ -121,27 +121,27 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
2. On the **CONFIGURATION** tab, select **User Identity & Access**, and then select the **Authentication** Method tab.
- ![Screenshot shows Authentication selected from User Identity & Access.](./media/tableauserver-tutorial/tutorial-tableauserver-auth.png)
+ ![Screenshot shows Authentication selected from User Identity & Access.](./media/tableauserver-tutorial/auth.png)
3. On the **CONFIGURATION** page, perform the following steps:
- ![Screenshot shows the Configuration page where you can enter the values described.](./media/tableauserver-tutorial/tutorial-tableauserver-config.png)
+ ![Screenshot shows the Configuration page where you can enter the values described.](./media/tableauserver-tutorial/config.png)
a. For **Authentication Method**, select SAML.

b. Select the **Enable SAML Authentication for the server** check box.
- c. Tableau Server return URLΓÇöThe URL that Tableau Server users will be accessing, such as `http://tableau_server`. Using `http://localhost` is not recommended. Using a URL with a trailing slash (for example, `http://tableau_server/`) is not supported. Copy **Tableau Server return URL** and paste it in to **Sign On URL** textbox in **Basic SAML Configuration** section in the Azure portal
+ c. Tableau Server return URL: The URL that Tableau Server users will be accessing, such as `http://tableau_server`. Using `http://localhost` is not recommended. Using a URL with a trailing slash (for example, `http://tableau_server/`) is not supported. Copy the **Tableau Server return URL** and paste it into the **Sign On URL** textbox in the **Basic SAML Configuration** section in the Azure portal.
- d. SAML entity IDΓÇöThe entity ID uniquely identifies your Tableau Server installation to the IdP. You can enter your Tableau Server URL again here, if you like, but it does not have to be your Tableau Server URL. Copy **SAML entity ID** and paste it in to **Identifier** textbox in **Basic SAML Configuration** section in the Azure portal
+ d. SAML entity ID: The entity ID uniquely identifies your Tableau Server installation to the IdP. You can enter your Tableau Server URL again here, if you like, but it does not have to be your Tableau Server URL. Copy the **SAML entity ID** and paste it into the **Identifier** textbox in the **Basic SAML Configuration** section in the Azure portal.
- e. Click the **Download XML Metadata File** and open it in the text editor application. Locate Assertion Consumer Service URL with Http Post and Index 0 and copy the URL. Now paste it in to **Reply URL** textbox in **Basic SAML Configuration** section in the Azure portal
+ e. Click **Download XML Metadata File** and open it in a text editor application. Locate the Assertion Consumer Service URL with HTTP POST binding and index 0, and copy the URL. Now paste it into the **Reply URL** textbox in the **Basic SAML Configuration** section in the Azure portal.
f. Locate your Federation Metadata file downloaded from the Azure portal, and then upload it in the **SAML Idp metadata file**.

g. Enter the names for the attributes that the IdP uses to hold the user names, display names, and email addresses.
- h. Click **Save**
+ h. Click **Save**.
> [!NOTE]
> Customers have to upload a PEM-encoded x509 certificate file with a .crt extension and an RSA or DSA private key file that has the .key extension, as a certificate key file. For more information on the certificate file and certificate key file, please refer to [this](https://help.tableau.com/current/server/en-us/saml_requ.htm) document. If you need help configuring SAML on Tableau Server, please refer to the article [Configure Server Wide SAML](https://help.tableau.com/current/server/en-us/config_saml.htm).
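Step e above asks you to locate the Assertion Consumer Service URL with HTTP POST binding and index 0 in the downloaded metadata file. A small sketch of that lookup (the sample metadata below is illustrative, not real Tableau output):

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
POST_BINDING = "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"

def acs_url(metadata_xml: str) -> str:
    """Return the AssertionConsumerService Location with the HTTP-POST
    binding and index 0 from SP metadata (step e above)."""
    root = ET.fromstring(metadata_xml)
    for acs in root.iter(f"{{{MD}}}AssertionConsumerService"):
        if acs.get("Binding") == POST_BINDING and acs.get("index") == "0":
            return acs.get("Location")
    raise ValueError("No HTTP-POST AssertionConsumerService with index 0 found")

# Hypothetical sample metadata for illustration only.
sample = f"""<EntityDescriptor xmlns="{MD}" entityID="https://tableau.example.com">
  <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <AssertionConsumerService Binding="{POST_BINDING}"
        Location="https://tableau.example.com/wg/saml/SSO/index.html" index="0"/>
  </SPSSODescriptor>
</EntityDescriptor>"""

print(acs_url(sample))  # → https://tableau.example.com/wg/saml/SSO/index.html
```

The same value can of course be found by eye in a text editor, as the step describes; the sketch just makes the binding/index criteria explicit.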
@@ -163,9 +163,8 @@ In this section, you test your Azure AD single sign-on configuration with follow
* Go to Tableau Server Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Tableau Server tile in the My Apps, this will redirect to Tableau Server Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-
+* You can use Microsoft My Apps. When you click the Tableau Server tile in My Apps, you will be redirected to the Tableau Server Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
## Next steps
-Once you configure the Tableau Server you can enforce session controls, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session controls extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+Once you configure Tableau Server you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/travelperk-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/travelperk-tutorial.md
@@ -27,7 +27,7 @@ In this tutorial, you'll learn how to integrate TravelPerk with Azure Active Dir
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* TravelPerk single sign-on (SSO) enabled subscription.
+* A TravelPerk account with Premium subscription.
## Scenario description
@@ -84,9 +84,9 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<COMPANY>.travelperk.com/accounts/saml2/callback/<APPLICATION_ID>/?acs`

> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL, Reply URL and Identifier. Contact [TravelPerk Client support team](mailto:trex@travelperk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign on URL, Reply URL, and Identifier. The values can be found inside your TravelPerk account: go to **Company Settings** > **Integrations** > **Single Sign On**. For assistance, visit the [TravelPerk help center](https://support.travelperk.com/hc/en-us/articles/360052450271-How-can-I-setup-SSO-for-Azure-SAML-).
-1. Your TravelPerk application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where as **emailaddress** is mapped with **user.mail**. TravelPerk application expects **emailaddress** to be mapped with **user.userprincipalname**, so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
+1. Your TravelPerk application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. In the default mapping, **emailaddress** is mapped with **user.mail**. However, the TravelPerk application expects **emailaddress** to be mapped with **user.userprincipalname**. For TravelPerk, you must edit the attribute mapping: click the **Edit** icon, and then change the attribute mapping. To edit an attribute, just click the attribute to open edit mode.
![image](common/default-attributes.png)
@@ -141,4 +141,4 @@ In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure TravelPerk you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure TravelPerk you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/veracode-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/veracode-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 10/10/2019 Last updated : 01/22/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Veracode with Azure Active Direc
* Enable your users to be automatically signed-in to Veracode with their Azure AD accounts.
* Manage your accounts in one central location: the Azure portal.
-To learn more about software as a service (SaaS) app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -38,18 +36,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment. Ver
To configure the integration of Veracode into Azure AD, add Veracode from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) by using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal by using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Go to **Enterprise Applications**, and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type "Veracode" in the search box.
1. Select **Veracode** from the results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Veracode
+## Configure and test Azure AD SSO for Veracode
Configure and test Azure AD SSO with Veracode by using a test user called **B.Simon**. For SSO to work, you must establish a link between an Azure AD user and the related user in Veracode.
-To configure and test Azure AD SSO with Veracode, complete the following building blocks:
+To configure and test Azure AD SSO with Veracode, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** to enable your users to use this feature.
    * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with B.Simon.
@@ -62,11 +60,11 @@ To configure and test Azure AD SSO with Veracode, complete the following buildin
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Veracode** application integration page, find the **Manage** section. Select **single sign-on**.
+1. In the Azure portal, on the **Veracode** application integration page, find the **Manage** section. Select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Screenshot of Set up Single Sign-On with SAML, with pencil icon highlighted](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. Select **Save**.
@@ -90,19 +88,43 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![Screenshot of Set up Veracode section, with configuration URLs highlighted](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Veracode.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Veracode**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
## Configure Veracode SSO

1. In a different web browser window, sign in to your Veracode company site as an administrator.
1. From the menu on the top, select **Settings** > **Admin**.
- ![Screenshot of Veracode Administration, with Settings icon and Admin highlighted](./media/veracode-tutorial/ic802911.png "Administration")
+ ![Screenshot of Veracode Administration, with Settings icon and Admin highlighted](./media/veracode-tutorial/admin.png "Administration")
1. Select the **SAML** tab.
1. In the **Organization SAML Settings** section, perform the following steps:
- ![Screenshot of Organization SAML Settings section](./media/veracode-tutorial/ic802912.png "Administration")
+ ![Screenshot of Organization SAML Settings section](./media/veracode-tutorial/saml.png "Administration")
a. For **Issuer**, paste the value of the **Azure AD Identifier** that you've copied from the Azure portal.
@@ -112,7 +134,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the **Self Registration Settings** section, perform the following steps, and then select **Save**:
- ![Screenshot of Self Registration Settings section, with various options highlighted](./media/veracode-tutorial/ic802913.png "Administration")
+ ![Screenshot of Self Registration Settings section, with various options highlighted](./media/veracode-tutorial/save.png "Administration")
a. For **New User Activation**, select **No Activation Required**.
@@ -130,56 +152,21 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
* **Team Memberships**
* **Default Team**
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory** >**Users** > **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
-
- 1. For **Name**, enter `B.Simon`.
- 1. For **User name**, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select **Show password**, and then write down the value shown.
- 1. Select **Create**.
-
-### Assign the Azure AD test user
-
-In this section, enable B.Simon to use Azure single sign-on by granting access to Veracode.
-
-1. In the Azure portal, select **Enterprise Applications** > **All applications**.
-1. In the applications list, select **Veracode**.
-1. In the app's overview page, find the **Manage** section, and select **Users and groups**.
-
- ![Screenshot of Manage section, with Users and groups highlighted](common/users-groups-blade.png)
-
-1. Select **Add user**. In the **Add Assignment** dialog box, select **Users and groups**.
-
- ![Screenshot of Users and groups page, with Add user highlighted](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog box, from **Users**, select **B.Simon**. Then choose **Select** at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Then choose **Select** at the bottom of the screen.
-1. In the **Add Assignment** dialog box, select **Assign**.
- ### Create Veracode test user
-To sign in to Veracode, Azure AD users must be provisioned into Veracode. This task is automated, and you don't need to do anything manually. Users are automatically created if necessary during the first single sign-on attempt.
+In this section, a user called B.Simon is created in Veracode. Veracode supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Veracode, a new one is created after authentication.
> [!NOTE]
> You can use any other Veracode user account creation tools or APIs provided by Veracode to provision Azure AD user accounts.

## Test SSO
-In this section, you test your Azure AD single sign-on configuration by using the Access Panel.
-
-When you select **Veracode** in the Access Panel, you should be automatically signed in to the Veracode for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of tutorials on how to integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the Veracode application for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Veracode tile in My Apps, you should be automatically signed in to the Veracode application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Veracode with Azure AD](https://aad.portal.azure.com/)
+Once you configure Veracode, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
aks https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-service-principal.md
@@ -126,7 +126,7 @@ When using AKS and Azure AD service principals, keep the following consideration
- If you do not specifically pass a service principal in additional AKS CLI commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used.
- You can also optionally remove the aksServicePrincipal.json file, and AKS will create a new service principal.
- When you delete an AKS cluster that was created by [az aks create][az-aks-create], the service principal that was created automatically is not deleted.
- - To delete the service principal, query for your cluster *servicePrincipalProfile.clientId* and then delete with [az ad app delete][az-ad-app-delete]. Replace the following resource group and cluster names with your own values:
+ - To delete the service principal, query for your cluster *servicePrincipalProfile.clientId* and then delete with [az ad sp delete][az-ad-sp-delete]. Replace the following resource group and cluster names with your own values:
```azurecli
az ad sp delete --id $(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv)
```
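Conversely, if you want full control of the credential's lifecycle, you can create the cluster with an existing service principal so that no default credential file is generated for it. A minimal sketch with placeholder values (`<appId>` and `<password>` are your own service principal's credentials):

```azurecli
# Sketch: create an AKS cluster with an explicitly supplied service principal
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --service-principal <appId> \
    --client-secret <password>
```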
aks https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
@@ -136,7 +136,6 @@ For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| K8s version | Upstream release | AKS preview | AKS GA | End of life |
|--|-|--|--|-|
-| 1.16 | Sep-19-19 | Jan 2019 | Mar 2020 | Jan 2021|
| 1.17 | Dec-09-19 | Jan 2019 | Jul 2020 | 1.20 GA |
| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | 1.21 GA |
| 1.19 | Aug-04-20 | Sep 2020 | Nov 2020 | 1.22 GA |
@@ -196,4 +195,4 @@ For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes
<!-- LINKS - Internal -->
[aks-upgrade]: upgrade-cluster.md
[az-aks-get-versions]: /cli/azure/aks#az-aks-get-versions
-[preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+[preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-acr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-prepare-acr.md
@@ -3,7 +3,7 @@ Title: Kubernetes on Azure tutorial - Create a container registry
description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload a sample application container image. Previously updated : 01/12/2021 Last updated : 01/31/2021
@@ -59,7 +59,7 @@ The command returns a *Login Succeeded* message once completed.
To see a list of your current local images, use the [docker images][docker-images] command:

```console
-$ docker images
+docker images
```

The above command's output shows a list of your current local images:
@@ -86,8 +86,8 @@ docker tag mcr.microsoft.com/azuredocs/azure-vote-front:v1 <acrLoginServer>/azur
To verify the tags are applied, run [docker images][docker-images] again.
-```azurecli
-$ docker images
+```console
+docker images
```

An image is tagged with the ACR instance address and a version number.
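To make the `<acrLoginServer>` placeholder concrete, here is a sketch of the tagging step with a hypothetical registry named `myacr.azurecr.io`:

```console
# Tag the local image with the ACR login server address (hypothetical registry name)
docker tag mcr.microsoft.com/azuredocs/azure-vote-front:v1 myacr.azurecr.io/azure-vote-front:v1

# Verify that both the original and the newly tagged image appear
docker images
```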
analysis-services https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-datasource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-datasource.md
@@ -4,7 +4,7 @@ description: Describes data sources and connectors supported for tabular 1200 an
Previously updated : 01/21/2021 Last updated : 02/02/2021
@@ -123,7 +123,9 @@ Provider=MSOLEDBSQL;Data Source=[server];Initial Catalog=[database];Authenticati
## OAuth credentials
-For tabular models at the 1400 and higher compatibility level using in-memory mode, Azure SQL Database, Azure Synapse, Dynamics 365, and SharePoint List support OAuth credentials. Azure Analysis Services manages token refresh for OAuth data sources to avoid timeouts for long-running refresh operations. To generate valid tokens, set credentials by using Power Query.
+For tabular models at the 1400 and higher compatibility level using *in-memory* mode, Azure SQL Database, Azure Synapse, Dynamics 365, and SharePoint List support OAuth credentials. To generate valid tokens, set credentials by using Power Query. Azure Analysis Services manages token refresh for OAuth data sources to avoid timeouts for long-running refresh operations.
+> [!NOTE]
+> Managed token refresh is not supported for data sources accessed through a gateway. For example, one or more mashup query data sources are accessed through a gateway, and/or the [ASPaaS\AlwaysUseGateway](analysis-services-vnet-gateway.md) property is set to **true**.
Direct Query mode is not supported with OAuth credentials.
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-authentication-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-authentication-policies.md
@@ -12,7 +12,7 @@
na Previously updated : 06/12/2020 Last updated : 01/27/2021

# API Management authentication policies
@@ -62,7 +62,10 @@ This topic provides a reference for the following API Management policies. For i
- **Policy scopes:** all scopes

## <a name="ClientCertificate"></a> Authenticate with client certificate
- Use the `authentication-certificate` policy to authenticate with a backend service using client certificate. The certificate needs to be [installed into API Management](./api-management-howto-mutual-certificates.md) first and is identified by its thumbprint.
+ Use the `authentication-certificate` policy to authenticate with a backend service using a client certificate. The certificate needs to be [installed into API Management](./api-management-howto-mutual-certificates.md) first and is identified by its thumbprint or certificate ID (resource name).
+
+> [!CAUTION]
+> If the certificate references a certificate stored in Azure Key Vault, identify it using the certificate ID. When a key vault certificate is rotated, its thumbprint in API Management will change, and the policy will not resolve the new certificate if it is identified by thumbprint.
### Policy statement
@@ -72,18 +75,17 @@ This topic provides a reference for the following API Management policies. For i
### Examples
-In this example, the client certificate is identified by its thumbprint:
-
-```xml
-<authentication-certificate thumbprint="CA06F56B258B7A0D4F2B05470939478651151984" />
-```
-
-In this example, the client certificate is identified by the resource name:
+In this example, the client certificate is identified by the certificate ID:
```xml
<authentication-certificate certificate-id="544fe9ddf3b8f30fb490d90f" />
```
+In this example, the client certificate is identified by its thumbprint:
+
+```xml
+<authentication-certificate thumbprint="CA06F56B258B7A0D4F2B05470939478651151984" />
+```
In this example, the client certificate is set in the policy rather than retrieved from the built-in certificate store:

```xml
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-configure-custom-domain-gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-configure-custom-domain-gateway.md
@@ -5,19 +5,16 @@ description: This topic describes the steps for configuring a custom domain name
documentationcenter: '' - - Last updated 03/31/2020
-# Configure a custom domain name
+# Configure a custom domain name for a self-hosted gateway
-When you provision a [self-hosted Azure API Management gateway](self-hosted-gateway-overview.md) it is not assigned host name and has to be referenced by its IP address. This article shows how to map an existing custom DNS name (also referred to as hostname) a self-hosted gateway.
+When you provision a [self-hosted Azure API Management gateway](self-hosted-gateway-overview.md), it is not assigned a host name and has to be referenced by its IP address. This article shows how to map an existing custom DNS name (also referred to as hostname) to a self-hosted gateway.
## Prerequisites
@@ -31,18 +28,16 @@ To perform the steps described in this article, you must have:
- A self-hosted gateway. For more information, see [How to provision self-hosted gateway](api-management-howto-provision-self-hosted-gateway.md)
- A custom domain name that is owned by you or your organization. This topic does not provide instructions on how to procure a custom domain name.
- A DNS record hosted on a DNS server that maps the custom domain name to the self-hosted gateway's IP address. This topic does not provide instructions on how to host a DNS record.
-- You must have a valid certificate with a public and private key (.PFX). Subject or subject alternative name (SAN) has to match the domain name (this enables API Management instance to securely expose URLs over TLS).
+- You must have a valid certificate with a public and private key (.PFX). The subject or subject alternative name (SAN) has to match the domain name (this enables the API Management instance to securely expose URLs over TLS).
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]

## Add custom domain certificate to your API Management service
-1. Select **Certificates** from under **Security**.
-2. Select **+ Add**.
-3. Enter a resource name for the certificate into **ID** field.
-4. Select the file containing the certificate (.PFX) by selecting the **Certificate** field or the folder icon adjacent to it.
-5. Enter the password for the certificate into the **Password** field.
-6. Select **Create** to add the certificate to your API Management service.
+Add a custom domain certificate (.PFX) file to your API Management instance, or reference a certificate stored in Azure Key Vault. Follow steps in [Secure backend services using client certificate authentication in Azure API Management](api-management-howto-mutual-certificates.md).
+
+> [!NOTE]
+> We recommend using a key vault certificate for the self-hosted gateway domain.
## Use the Azure portal to set a custom domain name for your self-hosted gateway
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-mutual-certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-mutual-certificates.md
@@ -1,94 +1,141 @@
Title: Secure back-end services using client certificate authentication
+ Title: Secure API Management backend using client certificate authentication
-description: Learn how to secure back-end services using client certificate authentication in Azure API Management.
+description: Learn how to manage client certificates and secure backend services using client certificate authentication in Azure API Management.
documentationcenter: '' - - Previously updated : 01/08/2020 Last updated : 01/26/2021
-# How to secure back-end services using client certificate authentication in Azure API Management
+# Secure backend services using client certificate authentication in Azure API Management
-API Management allows you to secure access to the back-end service of an API using client certificates. This guide shows how to manage certificates in the Azure API Management service instance in the Azure portal. It also explains how to configure an API to use a certificate to access a back-end service.
+API Management allows you to secure access to the backend service of an API using client certificates. This guide shows how to manage certificates in an Azure API Management service instance using the Azure portal. It also explains how to configure an API to use a certificate to access a backend service.
-For information about managing certificates using the API Management REST API, see <a href="/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity">Azure API Management REST API Certificate entity</a>.
+You can also manage API Management certificates using the [API Management REST API](/rest/api/apimanagement/2020-06-01-preview/certificate).
-## <a name="prerequisites"> </a>Prerequisites
+## Certificate options
+
+API Management provides two options to manage certificates used to secure access to backend services:
+
+* Reference a certificate managed in [Azure Key Vault](../key-vault/general/overview.md)
+* Add a certificate file directly in API Management
+
+Using key vault certificates is recommended because it helps improve API Management security:
+
+* Certificates stored in key vaults can be reused across services
+* Granular [access policies](../key-vault/general/secure-your-key-vault.md#data-plane-and-access-policies) can be applied to certificates stored in key vaults
+* Certificates updated in the key vault are automatically rotated in API Management. After update in the key vault, a certificate in API Management is updated within 4 hours. You can also manually refresh the certificate using the Azure portal or via the management REST API.
+
+## Prerequisites
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This guide shows you how to configure your API Management service instance to use client certificate authentication to access the back-end service for an API. Before following the steps in this article, you should have your back-end service configured for client certificate authentication ([to configure certificate authentication in the Azure App Service refer to this article][to configure certificate authentication in Azure WebSites refer to this article]). You need access to the certificate and the password for uploading it to the API Management service.
+* If you have not created an API Management service instance yet, see [Create an API Management service instance][Create an API Management service instance].
+* You should have your backend service configured for client certificate authentication. To configure certificate authentication in the Azure App Service, refer to [this article][to configure certificate authentication in Azure WebSites refer to this article].
+* You need access to the certificate and the password, either for managing it in an Azure key vault or for uploading it to the API Management service. The certificate must be in **PFX** format. Self-signed certificates are allowed.
-## <a name="step1"> </a>Upload a Certificate
+### Prerequisites for key vault integration
-> [!NOTE]
-> Instead of an uploaded certificate you can use a certificate stored in the [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) service as shown in this [example](https://github.com/galiniliev/api-management-policy-snippets/blob/galin/AkvCert/examples/Look%20up%20Key%20Vault%20certificate%20using%20Managed%20Service%20Identity%20and%20call%20backend.policy.xml).
+1. For steps to create a key vault, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
+1. Enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md) in the API Management instance.
+1. Assign a [key vault access policy](../key-vault/general/assign-access-policy-portal.md) to the managed identity with permissions to get and list certificates from the vault. To add the policy:
+ 1. In the portal, navigate to your key vault.
+ 1. Select **Settings > Access policies > + Add Access Policy**.
+ 1. Select **Certificate permissions**, then select **Get** and **List**.
+ 1. In **Select principal**, select the resource name of your managed identity. If you're using a system-assigned identity, the principal is the name of your API Management instance.
+1. Create or import a certificate to the key vault. See [Quickstart: Set and retrieve a certificate from Azure Key Vault using the Azure portal](../key-vault/certificates/quick-create-portal.md).
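The access-policy assignment in the steps above can also be scripted. A minimal Azure CLI sketch, assuming a hypothetical vault name `contoso-kv` and the principal ID of the API Management managed identity:

```azurecli
# Grant the managed identity get/list permissions on certificates (placeholder values)
az keyvault set-policy \
    --name contoso-kv \
    --object-id <apim-identity-principal-id> \
    --certificate-permissions get list
```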
-![Add client certificates](media/api-management-howto-mutual-certificates/apim-client-cert-new.png)
-Follow the steps below to upload a new client certificate. If you have not created an API Management service instance yet, see the tutorial [Create an API Management service instance][Create an API Management service instance].
+## Add a key vault certificate
-1. Navigate to your Azure API Management service instance in the Azure portal.
-2. Select **Certificates** from the menu.
-3. Click the **+ Add** button.
- ![Screenshot that highlights the + Add button.](media/api-management-howto-mutual-certificates/apim-client-cert-add.png)
-4. Browse for the certificate, provide its ID and password.
-5. Click **Create**.
+See [Prerequisites for key vault integration](#prerequisites-for-key-vault-integration).
-> [!NOTE]
-> The certificate must be in **.pfx** format. Self-signed certificates are allowed.
+> [!CAUTION]
+> When using a key vault certificate in API Management, be careful not to delete the certificate, key vault, or managed identity used to access the key vault.
-Once the certificate is uploaded, it shows in the **Certificates**. If you have many certificates, make a note of the thumbprint of the desired certificate in order to [Configure an API to use a client certificate for gateway authentication][Configure an API to use a client certificate for gateway authentication].
+To add a key vault certificate to API Management:
-> [!NOTE]
-> To turn off certificate chain validation when using, for example, a self-signed certificate, follow the steps described in this FAQ [item](api-management-faq.md#can-i-use-a-self-signed-tlsssl-certificate-for-a-back-end).
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. Under **Security**, select **Certificates**.
+1. Select **Certificates** > **+ Add**.
+1. In **Id**, enter a name of your choice.
+1. In **Certificate**, select **Key vault**.
+1. Enter the identifier of a key vault certificate, or choose **Select** to select a certificate from a key vault.
+ > [!IMPORTANT]
+ > If you enter a key vault certificate identifier yourself, ensure that it doesn't have version information. Otherwise, the certificate won't rotate automatically in API Management after an update in the key vault.
+1. In **Client identity**, select a system-assigned or an existing user-assigned managed identity. Learn how to [add or modify managed identities in your API Management service](api-management-howto-use-managed-service-identity.md).
+ > [!NOTE]
+ > The identity needs permissions to get and list certificates from the key vault. If you haven't already configured access to the key vault, API Management prompts you so it can automatically configure the identity with the necessary permissions.
+1. Select **Add**.
-## <a name="step1a"> </a>Delete a client certificate
+ :::image type="content" source="media/api-management-howto-mutual-certificates/apim-client-cert-kv.png" alt-text="Add key vault certificate":::
-To delete a certificate, click context menu **...** and select **Delete** beside the certificate.
+## Upload a certificate
-![Delete client certificates](media/api-management-howto-mutual-certificates/apim-client-cert-delete-new.png)
+To upload a client certificate to API Management:
-If the certificate is in use by an API, then a warning screen is displayed. To delete the certificate, you must first remove the certificate from any APIs that are configured to use it.
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. Under **Security**, select **Certificates**.
+1. Select **Certificates** > **+ Add**.
+1. In **Id**, enter a name of your choice.
+1. In **Certificate**, select **Custom**.
+1. Browse to select the certificate .pfx file, and enter its password.
+1. Select **Add**.
-![Delete client certificates failure](media/api-management-howto-mutual-certificates/apim-client-cert-delete-failure.png)
+ :::image type="content" source="media/api-management-howto-mutual-certificates/apim-client-cert-add.png" alt-text="Upload client certificate":::
-## <a name="step2"> </a>Configure an API to use a client certificate for gateway authentication
+After the certificate is uploaded, it shows in the **Certificates** window. If you have many certificates, make a note of the thumbprint of the desired certificate in order to configure an API to use a client certificate for [gateway authentication](#configure-an-api-to-use-client-certificate-for-gateway-authentication).
-1. Click **APIs** from the **API Management** menu on the left and navigate to the API.
- ![Enable client certificates](media/api-management-howto-mutual-certificates/apim-client-cert-enable.png)
+> [!NOTE]
+> To turn off certificate chain validation when using, for example, a self-signed certificate, follow the steps described in [Self-signed certificates](#self-signed-certificates), later in this article.
-2. In the **Design** tab, click on a pencil icon of the **Backend** section.
-3. Change the **Gateway credentials** to **Client cert** and select your certificate from the dropdown.
- ![Screenshot that shows where to change the gateway credentials and select your certificate.](media/api-management-howto-mutual-certificates/apim-client-cert-enable-select.png)
+## Configure an API to use client certificate for gateway authentication
-4. Click **Save**.
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. Under **APIs**, select **APIs**.
+1. Select an API from the list.
+2. In the **Design** tab, select the editor icon in the **Backend** section.
+3. In **Gateway credentials**, select **Client cert** and select your certificate from the dropdown.
+1. Select **Save**.
-> [!WARNING]
-> This change is effective immediately, and calls to operations of that API will use the certificate to authenticate on the back-end server.
+ :::image type="content" source="media/api-management-howto-mutual-certificates/apim-client-cert-enable-select.png" alt-text="Use client certificate for gateway authentication":::
+> [!CAUTION]
+> This change is effective immediately, and calls to operations of that API will use the certificate to authenticate on the backend server.
> [!TIP]
-> When a certificate is specified for gateway authentication for the back-end service of an API, it becomes part of the policy for that API, and can be viewed in the policy editor.
+> When a certificate is specified for gateway authentication for the backend service of an API, it becomes part of the policy for that API, and can be viewed in the policy editor.
## Self-signed certificates
-If you are using self-signed certificates, you will need to disable certificate chain validation in order for API Management to communicate with the backend system. Otherwise it will return a 500 error code. To configure this, you can use the [`New-AzApiManagementBackend`](/powershell/module/az.apimanagement/new-azapimanagementbackend) (for new back end) or [`Set-AzApiManagementBackend`](/powershell/module/az.apimanagement/set-azapimanagementbackend) (for existing back end) PowerShell cmdlets and set the `-SkipCertificateChainValidation` parameter to `True`.
+If you are using self-signed certificates, you will need to disable certificate chain validation for API Management to communicate with the backend system. Otherwise it will return a 500 error code. To configure this, you can use the [`New-AzApiManagementBackend`](/powershell/module/az.apimanagement/new-azapimanagementbackend) (for new backend) or [`Set-AzApiManagementBackend`](/powershell/module/az.apimanagement/set-azapimanagementbackend) (for existing backend) PowerShell cmdlets and set the `-SkipCertificateChainValidation` parameter to `True`.
```powershell
$context = New-AzApiManagementContext -resourcegroup 'ContosoResourceGroup' -servicename 'ContosoAPIMService'
New-AzApiManagementBackend -Context $context -Url 'https://contoso.com/myapi' -Protocol http -SkipCertificateChainValidation $true
```
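For an existing backend, the same setting can be applied with `Set-AzApiManagementBackend`. A sketch with a hypothetical backend ID:

```powershell
# Sketch: disable chain validation on an existing backend (placeholder names)
$context = New-AzApiManagementContext -resourcegroup 'ContosoResourceGroup' -servicename 'ContosoAPIMService'
Set-AzApiManagementBackend -Context $context -BackendId 'contoso-backend' -SkipCertificateChainValidation $true
```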
+## Delete a client certificate
+
+To delete a certificate, select it and then select **Delete** from the context menu (**...**).
++
+> [!IMPORTANT]
+> If the certificate is referenced by any policies, then a warning screen is displayed. To delete the certificate, you must first remove the certificate from any policies that are configured to use it.
+
+## Next steps
+
+* [How to secure APIs using client certificate authentication in API Management](api-management-howto-mutual-certificates-for-clients.md)
+* Learn about [policies in API Management](api-management-howto-policies.md)
++
[How to add operations to an API]: ./mock-api-responses.md
[How to add and publish a product]: api-management-howto-add-products.md
[Monitoring and analytics]: ../api-management-monitoring.md
@@ -103,9 +150,3 @@ New-AzApiManagementBackend -Context $context -Url 'https://contoso.com/myapi' -
[WebApp-GraphAPI-DotNet]: https://github.com/AzureADSamples/WebApp-GraphAPI-DotNet
[to configure certificate authentication in Azure WebSites refer to this article]: ../app-service/app-service-web-configure-tls-mutual-auth.md
-[Prerequisites]: #prerequisites
-[Upload a client certificate]: #step1
-[Delete a client certificate]: #step1a
-[Configure an API to use a client certificate for gateway authentication]: #step2
-[Test the configuration by calling an operation in the Developer Portal]: #step3
-[Next steps]: #next-steps
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-properties.md
@@ -39,7 +39,7 @@ Using key vault secrets is recommended because it helps improve API Management s
* Secrets stored in key vaults can be reused across services
* Granular [access policies](../key-vault/general/secure-your-key-vault.md#data-plane-and-access-policies) can be applied to secrets
-* Secrets updated in the key vault are automatically rotated in API Management. After update in the key vault, a named value in API Management is updated within 4 hours.
+* Secrets updated in the key vault are automatically rotated in API Management. After update in the key vault, a named value in API Management is updated within 4 hours. You can also manually refresh the secret using the Azure portal or via the management REST API.
### Prerequisites for key vault integration
@@ -54,19 +54,7 @@ Using key vault secrets is recommended because it helps improve API Management s
To use the key vault secret, [add or edit a named value](#add-or-edit-a-named-value), and specify a type of **Key vault**. Select the secret from the key vault.
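Once the named value exists, policies reference it with double-brace syntax. A hedged sketch, assuming a hypothetical named value `backend-api-key` backed by the key vault secret:

```xml
<!-- {{backend-api-key}} resolves to the named value's secret at runtime (hypothetical name) -->
<set-header name="Ocp-Apim-Subscription-Key" exists-action="override">
    <value>{{backend-api-key}}</value>
</set-header>
```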
-> [!CAUTION]
-> When using a key vault secret in API Management, be careful not to delete the secret, key vault, or managed identity used to access the key vault.
-
-If [Key Vault firewall](../key-vault/general/network-security.md) is enabled on your key vault, the following are additional requirements for using key vault secrets:
-
-* You must use the API Management instance's **system-assigned** managed identity to access the key vault.
-* In Key Vault firewall, enable the **Allow Trusted Microsoft Services to bypass this firewall** option.
-
-If the API Management instance is deployed in a virtual network, also configure the following network settings:
-* Enable a [service endpoint](../key-vault/general/overview-vnet-service-endpoints.md) to Azure Key Vault on the API Management subnet.
-* Configure a network security group (NSG) rule to allow outbound traffic to the AzureKeyVault and AzureActiveDirectory [service tags](../virtual-network/service-tags-overview.md).
-
-For details, see network configuration details in [Connect to a virtual network](api-management-using-with-vnet.md#-common-network-configuration-issues).
## Add or edit a named value
@@ -74,6 +62,9 @@ For details, see network configuration details in [Connect to a virtual network]
See [Prerequisites for key vault integration](#prerequisites-for-key-vault-integration).
+> [!CAUTION]
+> When using a key vault secret in API Management, be careful not to delete the secret, key vault, or managed identity used to access the key vault.
+ 1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
1. Under **APIs**, select **Named values** > **+Add**.
1. Enter a **Name** identifier, and enter a **Display name** used to reference the property in policies.
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-terminology.md
@@ -21,10 +21,10 @@ This article gives definitions for the terms that are specific to API Management
## Term definitions
-* **Backend API** - An HTTP service that implements your API and its operations.
-* **Frontend API**/**APIM API** - An APIM API does not host APIs, it creates facades for your APIs in order to customize the facade according to your needs without touching the back end API. For more information, see [Import and publish an API](import-and-publish.md).
+* **Backend API** - An HTTP service that implements your API and its operations. For more information, see [Backends](backends.md).
+* **Frontend API**/**APIM API** - An APIM API does not host APIs, it creates façades for your APIs. You customize the façade according to your needs without touching the backend API. For more information, see [Import and publish an API](import-and-publish.md).
* **APIM product** - a product contains one or more APIs as well as a usage quota and the terms of use. You can include a number of APIs and offer them to developers through the Developer portal. For more information, see [Create and publish a product](api-management-howto-add-products.md).
-* **APIM API operation** - Each APIM API represents a set of operations available to developers. Each APIM API contains a reference to the back end service that implements the API, and its operations map to the operations implemented by the back end service. For more information, see [Mock API responses](mock-api-responses.md).
+* **APIM API operation** - Each APIM API represents a set of operations available to developers. Each APIM API contains a reference to the backend service that implements the API, and its operations map to the operations implemented by the backend service. For more information, see [Mock API responses](mock-api-responses.md).
* **Version** - Sometimes you want to publish new or different API features to some users, while others want to stick with the API that currently works for them. For more information, see [Publish multiple versions of your API](api-management-get-started-publish-versions.md).
* **Revision** - When your API is ready to go and starts to be used by developers, you usually need to take care in making changes to that API and at the same time not to disrupt callers of your API. It's also useful to let developers know about the changes you made. For more information, see [Use revisions](api-management-get-started-revise-api.md).
* **Developer portal** - Your customers (developers) should use the Developer portal to access your APIs. The Developer portal can be customized. For more information, see [Customize the Developer portal](api-management-customize-styles.md).
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-transformation-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-transformation-policies.md
@@ -209,7 +209,7 @@ or
```

> [!NOTE]
-> Backend entities can be managed via management [API](/rest/api/apimanagement/2019-12-01/backend) and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement).
+> Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement).
### Example
@@ -264,7 +264,7 @@ In this example the policy routes the request to a service fabric backend, using
|Name|Description|Required|Default|
|-|--|--|-|
|base-url|New backend service base URL.|One of `base-url` or `backend-id` must be present.|N/A|
-|backend-id|Identifier of the backend to route to. (Backend entities are managed via [API](/rest/api/apimanagement/2019-12-01/backend) and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement).)|One of `base-url` or `backend-id` must be present.|N/A|
+|backend-id|Identifier of the backend to route to. (Backend entities are managed via [Azure portal](how-to-configure-service-fabric-backend.md), [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement).)|One of `base-url` or `backend-id` must be present.|N/A|
|sf-partition-key|Only applicable when the backend is a Service Fabric service and is specified using 'backend-id'. Used to resolve a specific partition from the name resolution service.|No|N/A|
|sf-replica-type|Only applicable when the backend is a Service Fabric service and is specified using 'backend-id'. Controls if the request should go to the primary or secondary replica of a partition. |No|N/A|
|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution.|No|N/A|
api-management https://docs.microsoft.com/en-us/azure/api-management/backends https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/backends.md
@@ -0,0 +1,37 @@
+
+ Title: Azure API Management backends | Microsoft Docs
+description: Learn about custom backends in API Management
+
+documentationcenter: ''
+
+editor: ''
+
+ Last updated : 01/29/2021
+
+# Backends in API Management
+
+A *backend* (or *API backend*) in API Management is an HTTP service that implements your front-end API and its operations.
+
+When importing certain APIs, API Management configures the API backend automatically. For example, API Management configures the backend when importing an [OpenAPI specification](import-api-from-oas.md), [SOAP API](import-soap-api.md), or Azure resources such as an HTTP-triggered [Azure Function App](import-function-app-as-api.md) or [Logic App](import-logic-app-as-api.md).
+
+API Management also supports using other Azure resources such as a [Service Fabric cluster](how-to-configure-service-fabric-backend.md) or a custom service as an API backend. Using these custom backends requires extra configuration, for example, to authorize credentials of requests to the backend service and to define API operations. You configure and manage these backends in the Azure portal or using Azure APIs or tools.
+
+After creating a backend, you can reference the backend URL in your APIs. Use the [`set-backend-service`](api-management-transformation-policies.md#SetBackendService) policy to redirect an incoming API request to the custom backend instead of the default backend for that API.
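For illustration, a minimal sketch of an inbound policy section that routes requests to a backend entity might look like the following (`mybackend` is a placeholder; substitute the ID of the backend you created):

```xml
<inbound>
    <base />
    <!-- "mybackend" is a placeholder backend ID for this sketch -->
    <set-backend-service backend-id="mybackend" />
</inbound>
```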
+
+## Benefits of backends
+
+A custom backend has several benefits, including:
+
+* Abstracts information about the backend service, promoting reusability across APIs and improved governance
+* Easily used by configuring a transformation policy on an existing API
+* Takes advantage of API Management functionality to maintain secrets in Azure Key Vault if [named values](api-management-howto-properties.md) are configured for header or query parameter authentication
+
+## Next steps
+
+* Set up a [Service Fabric backend](how-to-configure-service-fabric-backend.md) using the Azure portal.
+* Backends can also be configured using the API Management [REST API](/rest/api/apimanagement), [Azure PowerShell](/powershell/module/az.apimanagement/new-azapimanagementbackend), or [Azure Resource Manager templates](../service-fabric/service-fabric-tutorial-deploy-api-management.md).
+
api-management https://docs.microsoft.com/en-us/azure/api-management/how-to-configure-service-fabric-backend https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-configure-service-fabric-backend.md
@@ -0,0 +1,138 @@
+
+ Title: Set up Service Fabric backend in Azure API Management | Microsoft Docs
+description: How to create a Service Fabric service backend in Azure API Management using the Azure portal
+
+documentationcenter: ''
+
+editor: ''
+
+ Last updated : 01/29/2021
+
+# Set up a Service Fabric backend in API Management using the Azure portal
+
+This article shows how to configure a [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) service as a custom API backend using the Azure portal. For demonstration purposes, it shows how to set up a basic stateless ASP.NET Core Reliable Service as the Service Fabric backend.
+
+For background, see [Backends in API Management](backends.md).
+
+## Prerequisites
+
+Prerequisites to configure a sample service in a Service Fabric cluster running Windows as a custom backend:
+
+* **Windows development environment** - Install [Visual Studio 2019](https://www.visualstudio.com) and the **Azure development**, **ASP.NET and web development**, and **.NET Core cross-platform development** workloads. Then set up a [.NET development environment](../service-fabric/service-fabric-get-started.md).
+
+* **Service Fabric cluster** - See [Tutorial: Deploy a Service Fabric cluster running Windows into an Azure virtual network](../service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster.md). You can create a cluster with an existing X.509 certificate or for test purposes create a new, self-signed certificate. The cluster is created in a virtual network.
+
+* **Sample Service Fabric app** - Create a Web API app and deploy to the Service Fabric cluster as described in [Integrate API Management with Service Fabric in Azure](../service-fabric/service-fabric-tutorial-deploy-api-management.md).
+
+ These steps create a basic stateless ASP.NET Core Reliable Service using the default Web API project template. Later, you expose the HTTP endpoint for this service through Azure API Management.
+
+ Take note of the application name, for example `fabric:/myApplication/myService`.
+
+* **API Management instance** - An existing or new API Management instance in the **Premium** or **Developer** tier and in the same region as the Service Fabric cluster. If you need one, [create an API Management instance](get-started-create-service-instance.md).
+
+* **Virtual network** - Add your API Management instance to the virtual network you created for your Service Fabric cluster. API Management requires a dedicated subnet in the virtual network.
+
+ For steps to enable virtual network connectivity for the API Management instance, see [How to use Azure API Management with virtual networks](api-management-using-with-vnet.md).
+
+## Create backend - portal
+
+### Add Service Fabric cluster certificate to API Management
+
+The Service Fabric cluster certificate is stored and managed in an Azure key vault associated with the cluster. Add this certificate to your API Management instance as a client certificate.
+
+For steps to add a certificate to your API Management instance, see [How to secure backend services using client certificate authentication in Azure API Management](api-management-howto-mutual-certificates.md).
+
+> [!NOTE]
+> We recommend adding the certificate to API Management by referencing the key vault certificate.
+
+### Add Service Fabric backend
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. Under **APIs**, select **Backends** > **+ Add**.
+1. Enter a backend name and an optional description.
+1. In **Type**, select **Service Fabric**.
+1. In **Runtime URL**, enter the name of the Service Fabric backend service that API Management will forward requests to. Example: `fabric:/myApplication/myService`.
+1. In **Maximum number of partition resolution retries**, enter a number between 0 and 10.
+1. Enter the management endpoint of the Service Fabric cluster. This endpoint is the URL of the cluster on port `19080`, for example, `https://mysfcluster.eastus.cloudapp.azure.com:19080`.
+1. In **Client certificate**, select the Service Fabric cluster certificate you added to your API Management instance in the previous section.
+1. In **Management endpoint authorization method**, enter a thumbprint or server X509 name of a certificate used by the Service Fabric cluster management service for TLS communication.
+1. Enable the **Validate certificate chain** and **Validate certificate name** settings.
+1. In **Authorization credentials**, provide credentials, if necessary, to reach the configured backend service in Service Fabric. For the sample app used in this scenario, authorization credentials aren't needed.
+1. Select **Create**.
+
+
+## Use the backend
+
+To use a custom backend, reference it using the [`set-backend-service`](api-management-transformation-policies.md#SetBackendService) policy. This policy transforms the default backend service base URL of an incoming API request to a specified backend, in this case the Service Fabric backend.
+
+The `set-backend-service` policy can be useful with an existing API to transform an incoming request to a different backend than the one specified in the API settings. For demonstration purposes in this article, create a test API and set the policy to direct API requests to the Service Fabric backend.
+
+### Create API
+
+Follow the steps in [Add an API manually](add-api-manually.md) to create a blank API.
+
+* In the API settings, leave the **Web service URL** blank.
+* Add an **API URL suffix**, such as *fabric*.
+
+ :::image type="content" source="media/backends/create-blank-api.png" alt-text="Create blank API":::
+
+### Add GET operation to the API
+
+As shown in [Deploy a Service Fabric back-end service](../service-fabric/service-fabric-tutorial-deploy-api-management.md#deploy-a-service-fabric-back-end-service), the sample ASP.NET Core service deployed on the Service Fabric cluster supports a single HTTP GET operation on the URL path `/api/values`.
+
+The default response on that path is a JSON array of two strings:
+
+```json
+["value1", "value2"]
+```
+
+To test the integration of API Management with the cluster, add the corresponding GET operation to the API on the path `/api/values`:
+
+1. Select the API you created in the previous step.
+1. Select **+ Add Operation**.
+1. In the **Frontend** window, enter the following values, and select **Save**.
+
+ | Setting | Value |
+ ||--|
+ | **Display name** | *Test backend* |
    | **HTTP verb** | GET |
+ | **URL** | `/api/values` |
+
+ :::image type="content" source="media/backends/configure-get-operation.png" alt-text="Add GET operation to API":::
+
+### Configure `set-backend-service` policy
+
+Add the [`set-backend-service`](api-management-transformation-policies.md#SetBackendService) policy to the test API.
+
+1. On the **Design** tab, in the **Inbound processing** section, select the code editor (**</>**) icon.
+1. Position the cursor inside the **&lt;inbound&gt;** element.
+1. Add the following policy statement. In `backend-id`, substitute the name of your Service Fabric backend.
+
+ The `sf-resolve-condition` is a retry condition if the cluster partition isn't resolved. The number of retries was set when configuring the backend.
+
+ ```xml
+    <set-backend-service backend-id="mysfbackend" sf-resolve-condition="@(context.LastError?.Reason == &quot;BackendConnectionFailure&quot;)" />
+ ```
+1. Select **Save**.
+
+ :::image type="content" source="media/backends/set-backend-service.png" alt-text="Configure set-backend-service policy":::
+
+### Test backend API
+
+1. On the **Test** tab, select the **GET** operation you created in a previous section.
+1. Select **Send**.
+
+When properly configured, the HTTP response shows an HTTP success code and displays the JSON returned from the backend Service Fabric service.
+
+
+## Next steps
+
+* Learn how to [configure policies](api-management-advanced-policies.md) to forward requests to a backend
+* Backends can also be configured using the API Management [REST API](/rest/api/apimanagement/2020-06-01-preview/backend), [Azure PowerShell](/powershell/module/az.apimanagement/new-azapimanagementbackend), or [Azure Resource Manager templates](../service-fabric/service-fabric-tutorial-deploy-api-management.md)
+
app-service https://docs.microsoft.com/en-us/azure/app-service/configure-language-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
@@ -2,7 +2,7 @@
Title: Configure Linux Python apps
description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI.
Previously updated : 11/16/2020
Last updated : 02/01/2021
@@ -62,10 +62,13 @@ You can run an unsupported version of Python by building your own container imag
App Service's build system, called Oryx, performs the following steps when you deploy your app using Git or zip packages:
-1. Run a custom pre-build script if specified by the `PRE_BUILD_COMMAND` setting.
+1. Run a custom pre-build script if specified by the `PRE_BUILD_COMMAND` setting. (The script can itself run other Python and Node.js scripts, pip and npm commands, and Node-based tools like yarn, for example, `yarn install` and `yarn build`.)
+
1. Run `pip install -r requirements.txt`. The *requirements.txt* file must be present in the project's root folder. Otherwise, the build process reports the error: "Could not find setup.py or requirements.txt; Not running pip install."
+
1. If *manage.py* is found in the root of the repository (indicating a Django app), run *manage.py collectstatic*. However, if the `DISABLE_COLLECTSTATIC` setting is `true`, this step is skipped.
-1. Run custom post-build script if specified by the `POST_BUILD_COMMAND` setting.
+
+1. Run custom post-build script if specified by the `POST_BUILD_COMMAND` setting. (Again, the script can run other Python and Node.js scripts, pip and npm commands, and Node-based tools.)
By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTSTATIC` settings are empty.
@@ -126,6 +129,52 @@ The following table describes the production settings that are relevant to Azure
| `ALLOWED_HOSTS` | In production, Django requires that you include the app's URL in the `ALLOWED_HOSTS` array of *settings.py*. You can retrieve this URL at runtime with the code, `os.environ['WEBSITE_HOSTNAME']`. App Service automatically sets the `WEBSITE_HOSTNAME` environment variable to the app's URL. |
| `DATABASES` | Define settings in App Service for the database connection and load them as environment variables to populate the [`DATABASES`](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-DATABASES) dictionary. You can alternately store the values (especially the username and password) as [Azure Key Vault secrets](../key-vault/secrets/quick-create-python.md). |
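As a minimal *settings.py* sketch of the `ALLOWED_HOSTS` and `DATABASES` guidance above (the `DBNAME`, `DBHOST`, `DBUSER`, and `DBPASS` setting names are hypothetical; they exist only if you create app settings with those names yourself):

```python
import os

# WEBSITE_HOSTNAME is set automatically by App Service at runtime;
# fall back to localhost for local development.
ALLOWED_HOSTS = (
    [os.environ["WEBSITE_HOSTNAME"]] if "WEBSITE_HOSTNAME" in os.environ else ["localhost"]
)

# Hypothetical app setting names (DBNAME, DBHOST, DBUSER, DBPASS) that
# surface to the app as environment variables when you define them.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DBNAME", ""),
        "HOST": os.environ.get("DBHOST", ""),
        "USER": os.environ.get("DBUSER", ""),
        "PASSWORD": os.environ.get("DBPASS", ""),
    }
}
```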
+## Serve static files for Django apps
+
+If your Django web app includes static front-end files, first follow the instructions on [Managing static files](https://docs.djangoproject.com/en/3.1/howto/static-files/) in the Django documentation.
+
+For App Service, you then make the following modifications:
+
+1. Consider using environment variables (for local development) and App Settings (when deploying to the cloud) to dynamically set the Django `STATIC_URL` and `STATIC_ROOT` variables. For example:
+
+ ```python
+ STATIC_URL = os.environ.get("DJANGO_STATIC_URL", "/static/")
+ STATIC_ROOT = os.environ.get("DJANGO_STATIC_ROOT", "./static/")
+ ```
+
+ `DJANGO_STATIC_URL` and `DJANGO_STATIC_ROOT` can be changed as necessary for your local and cloud environments. For example, if the build process for your static files places them in a folder named `django-static`, then you can set `DJANGO_STATIC_URL` to `/django-static/` to avoid using the default.
+
+1. If you have a pre-build script that generates static files in a different folder, include that folder in the Django `STATICFILES_DIRS` variable so that Django's `collectstatic` process finds them. For example, if you run `yarn build` in your front-end folder, and yarn generates a `build/static` folder containing static files, then include that folder as follows:
+
+ ```python
+ FRONTEND_DIR = "path-to-frontend-folder"
+ STATICFILES_DIRS = [os.path.join(FRONTEND_DIR, 'build', 'static')]
+ ```
+
+ Here, `FRONTEND_DIR` is used to build a path to the folder where a build tool like yarn is run. You can again use an environment variable and App Setting as desired.
+
+1. Add `whitenoise` to your *requirements.txt* file. [Whitenoise](http://whitenoise.evans.io/en/stable/) (whitenoise.evans.io) is a Python package that makes it simple for a production Django app to serve its own static files. Whitenoise specifically serves those files that are found in the folder specified by the Django `STATIC_ROOT` variable.
+
+1. In your *settings.py* file, add the following line for Whitenoise:
+
+ ```python
+    STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
+ ```
+
+1. Also modify the `MIDDLEWARE` and `INSTALLED_APPS` lists to include Whitenoise:
+
+ ```python
+ MIDDLEWARE = [
+ "whitenoise.middleware.WhiteNoiseMiddleware",
+ # Other values follow
+ ]
+
+ INSTALLED_APPS = [
+ "whitenoise.runserver_nostatic",
+ # Other values follow
+ ]
+ ```
+
## Container characteristics

When deployed to App Service, Python apps run within a Linux Docker container that's defined in the [App Service Python GitHub repository](https://github.com/Azure-App-Service/python). You can find the image configurations inside the version-specific directories.
@@ -145,6 +194,8 @@ This container has the following characteristics:
- App Service automatically defines an environment variable named `WEBSITE_HOSTNAME` with the web app's URL, such as `msdocs-hello-world.azurewebsites.net`. It also defines `WEBSITE_SITE_NAME` with the name of your app, such as `msdocs-hello-world`.
+- npm and Node.js are installed in the container so you can run Node-based build tools, such as yarn.
+
## Container startup process

During startup, the App Service on Linux container runs the following steps:
@@ -265,7 +316,7 @@ For example, if you've created app setting called `DATABASE_SERVER`, the followi
```python
db_server = os.environ['DATABASE_SERVER']
```
-
+
## Detect HTTPS session

In App Service, [SSL termination](https://wikipedia.org/wiki/TLS_termination_proxy) (wikipedia.org) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
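As a framework-agnostic sketch of this check (`headers` stands in for whatever mapping your web framework exposes for request headers):

```python
def request_was_https(headers):
    """Return True if the original client request used HTTPS.

    App Service terminates TLS at the load balancer and forwards the
    original scheme in the X-Forwarded-Proto header; the request the
    app itself receives is plain HTTP.
    """
    return headers.get("X-Forwarded-Proto", "http") == "https"
```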
@@ -368,4 +419,4 @@ The following sections provide additional guidance for specific issues.
> [Tutorial: Deploy from private container repository](tutorial-custom-container.md?pivots=container-linux)

> [!div class="nextstepaction"]
-> [App Service Linux FAQ](faq-app-service-linux.md)
+> [App Service Linux FAQ](faq-app-service-linux.md)
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-python-postgresql-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-python-postgresql-app.md
@@ -3,7 +3,7 @@ Title: 'Tutorial: Deploy a Python Django app with Postgres'
description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework and the app is hosted on Azure App Service on Linux.
ms.devlang: python
Previously updated : 01/04/2021
Last updated : 02/02/2021

# Tutorial: Deploy a Django web app with PostgreSQL in Azure App Service
@@ -27,7 +27,7 @@ You can also use the [Azure portal version of this tutorial](/azure/developer/py
1. Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
1. Install <a href="https://www.python.org/downloads/" target="_blank">Python 3.6 or higher</a>.
-1. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.0.80 or higher, with which you run commands in any shell to provision and configure Azure resources.
+1. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.18.0 or higher, with which you run commands in any shell to provision and configure Azure resources.
Open a terminal window and check your Python version is 3.6 or higher:
@@ -51,12 +51,14 @@ py -3 --version
-Check that your Azure CLI version is 2.0.80 or higher:
+Check that your Azure CLI version is 2.18.0 or higher:
```azurecli
az --version
```
+If you need to upgrade, try the `az upgrade` command (requires version 2.11+) or see <a href="/cli/azure/install-azure-cli" target="_blank">Install the Azure CLI</a>.
+ Then sign in to Azure through the CLI: ```azurecli
@@ -217,7 +219,7 @@ Django database migrations ensure that the schema in the PostgreSQL on Azure dat
Replace `<app-name>` with the name used earlier in the `az webapp up` command.
- On macOS and Linux, you can alternately connect to an SSH session with the [`az webapp ssh`](/cli/azure/webapp?view=azure-cli-latest&preserve-view=true#az_webapp_ssh) command.
+ You can alternately connect to an SSH session with the [`az webapp ssh`](/cli/azure/webapp?view=azure-cli-latest&preserve-view=true#az_webapp_ssh) command. On Windows, this command requires the Azure CLI 2.18.0 or higher.
If you cannot connect to the SSH session, then the app itself has failed to start. [Check the diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you haven't created the necessary app settings in the previous section, the logs will indicate `KeyError: 'DBNAME'`.
@@ -227,9 +229,12 @@ Django database migrations ensure that the schema in the PostgreSQL on Azure dat
# Change to the app folder
cd $APP_PATH
- # Activate the venv (requirements.txt is installed automatically)
+ # Activate the venv
source /antenv/bin/activate
+ # Install requirements
+ pip install -r requirements.txt
+
# Run database migrations
python manage.py migrate
@@ -386,6 +391,7 @@ Open an SSH session again in the browser by navigating to `https://<app-name>.sc
```
cd $APP_PATH
source /antenv/bin/activate
+pip install -r requirements.txt
python manage.py migrate
```
attestation https://docs.microsoft.com/en-us/azure/attestation/basic-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/basic-concepts.md
@@ -7,6 +7,7 @@
Last updated 08/31/2020
+
# Basic Concepts
@@ -25,9 +26,9 @@ Below are some basic concepts related to Microsoft Azure Attestation.
An attestation provider belongs to the Azure resource provider named Microsoft.Attestation. The resource provider is a service endpoint that provides the Azure Attestation REST contract and is deployed using [Azure Resource Manager](../azure-resource-manager/management/overview.md). Each attestation provider honors a specific, discoverable policy. Attestation providers are created with a default policy for each attestation type (note that the VBS enclave has no default policy). See [examples of an attestation policy](policy-examples.md) for more details on the default policy for SGX.
-### Regional default provider
+### Regional shared provider
-Azure Attestation provides a default provider in each region. Customers can choose to use the default provider for attestation, or create their own providers with custom policies. The default providers are accessible by any Azure AD user and the policy associated with a default provider cannot be altered.
+Azure Attestation provides a regional shared provider in every available region. Customers can choose to use the regional shared provider for attestation, or create their own providers with custom policies. The shared providers are accessible by any Azure AD user, and the policy associated with them cannot be altered.
| Region | Attest Uri |
|--|--|
attestation https://docs.microsoft.com/en-us/azure/attestation/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/overview.md
@@ -62,7 +62,7 @@ Azure Attestation is the preferred choice for attesting TEEs as it offers the fo
- Unified framework for attesting multiple environments such as TPMs, SGX enclaves and VBS enclaves
- Multi-tenant service which allows configuration of custom attestation providers and policies to restrict token generation
-- Offers default providers which can attest with no configuration from users
+- Offers regional shared providers which can attest with no configuration from users
- Protects its data while in use with implementation in an SGX enclave
- Highly available service
attestation https://docs.microsoft.com/en-us/azure/attestation/quickstart-azure-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-azure-cli.md
@@ -13,6 +13,10 @@
Get started with [Azure Attestation by using Azure CLI](/cli/azure/ext/attestation/attestation?view=azure-cli-latest).
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
## Get started

1. Install this extension using the below CLI command
attestation https://docs.microsoft.com/en-us/azure/attestation/quickstart-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-portal.md
@@ -12,13 +12,17 @@
# Quickstart: Set up Azure Attestation with Azure portal
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
Follow the below steps to manage an attestation provider using Azure portal.
-## Attestation provider
+## 1. Attestation provider
-### Create an attestation provider
+### 1.1 Create an attestation provider
-#### To configure the provider with unsigned policies
+#### 1.1.1 To configure the provider with unsigned policies
1. From the Azure portal menu, or from the Home page, select **Create a resource**
2. In the Search box, enter **attestation**
@@ -38,7 +42,7 @@ Follow the below steps to manage an attestation provider using Azure portal.
6. After providing the required inputs, click **Review+Create**
7. Fix validation issues if any and click **Create**.
-#### To configure the provider with signed policies
+#### 1.1.2 To configure the provider with signed policies
1. From the Azure portal menu, or from the Home page, select **Create a resource**
2. In the Search box, enter **attestation**
@@ -58,12 +62,12 @@ Follow the below steps to manage an attestation provider using Azure portal.
6. After providing the required inputs, click **Review+Create**
7. Fix validation issues if any and click **Create**.
-### View attestation provider
+### 1.2 View attestation provider
1. From the Azure portal menu, or from the Home page, select **All resources**
2. In the filter box, enter attestation provider name and select it
-### Delete attestation provider
+### 1.3 Delete attestation provider
1. From the Azure portal menu, or from the Home page, select **All resources**
2. In the filter box, enter attestation provider name
@@ -76,9 +80,9 @@ Follow the below steps to manage an attestation provider using Azure portal.
4. Click **Delete** in the top menu and click **Yes**
-## Attestation policy signers
+## 2. Attestation policy signers
-### View policy signer certificates
+### 2.1 View policy signer certificates
1. From the Azure portal menu, or from the Home page, select **All resources**
2. In the filter box, enter attestation provider name
@@ -88,7 +92,7 @@ Follow the below steps to manage an attestation provider using Azure portal.
6. The text file downloaded will have all certs in a JWS format.
a. Verify the certificates count and certs downloaded.
-### Add policy signer certificate
+### 2.2 Add policy signer certificate
1. From the Azure portal menu, or from the Home page, select **All resources**
2. In the filter box, enter attestation provider name
@@ -97,7 +101,7 @@ a. Verify the certificates count and certs downloaded.
5. Click **Add** in the top menu (The button will be disabled for the attestation providers created without policy signing requirement)
6. Upload policy signer certificate file and click **Add**. See examples [here](./policy-signer-examples.md)
-### Delete policy signer certificate
+### 2.3 Delete policy signer certificate
1. From the Azure portal menu, or from the Home page, select **All resources**
2. In the filter box, enter attestation provider name
@@ -106,9 +110,9 @@ a. Verify the certificates count and certs downloaded.
5. Click **Delete** in the top menu (The button will be disabled for the attestation providers created without policy signing requirement)
6. Upload policy signer certificate file and click **Delete**. See examples [here](./policy-signer-examples.md)
-## Attestation policy
+## 3. Attestation policy
-### View attestation policy
+### 3.1 View attestation policy
1. From the Azure portal menu, or from the Home page, select **All resources**
2. In the filter box, enter attestation provider name
@@ -116,9 +120,9 @@ a. Verify the certificates count and certs downloaded.
4. Click **Policy** in left-side resource menu or in the bottom pane
5. Select the preferred **Attestation Type** and view the **Current policy**
-### Configure attestation policy
+### 3.2 Configure attestation policy
-#### When attestation provider is created without policy signing requirement
+#### 3.2.1 When attestation provider is created without policy signing requirement
##### Upload policy in JWT format
@@ -150,7 +154,7 @@ a. Verify the certificates count and certs downloaded.
8. Click **Refresh** to view the configured policy
-#### When attestation provider is created with policy signing requirement
+#### 3.2.2 When attestation provider is created with policy signing requirement
##### Upload policy in JWT format
@@ -166,3 +170,8 @@ a. Verify the certificates count and certs downloaded.
8. Click **Refresh** to view the configured policy
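Conceptually, an unsigned policy upload wraps the policy text in a JWT whose algorithm is `none`. The sketch below is illustrative only: the `AttestationPolicy` claim name and the base64url encoding of the policy body are assumptions, not taken from this article, and this is not the portal's implementation.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def wrap_policy_unsigned(policy_text: str) -> str:
    """Wrap attestation policy text in an unsigned JWT (alg 'none').

    The claim name and inner encoding are assumed for illustration.
    """
    header = b64url(json.dumps({"alg": "none"}).encode())
    payload = b64url(json.dumps(
        {"AttestationPolicy": b64url(policy_text.encode())}).encode())
    return f"{header}.{payload}."  # trailing dot: empty signature segment

policy_jwt = wrap_policy_unsigned("version=1.0; authorizationrules { => permit(); };")
```

A signed upload would replace the empty signature segment with a real JWS signature produced by one of the registered policy signer certificates.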
+## Next steps
+
+- [How to author and sign an attestation policy](author-sign-policy.md)
+- [Attest an SGX enclave using code samples](/samples/browse/?expanded=azure&terms=attestation)
+
attestation https://docs.microsoft.com/en-us/azure/attestation/quickstart-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-powershell.md
@@ -227,4 +227,4 @@ For more information on the cmdlets and their parameters, see [Azure Attestation P
## Next steps
- [How to author and sign an attestation policy](author-sign-policy.md)
-- [Attest an SGX enclave using code samples](/samples/browse/?expanded=azure&terms=attestation)
+- [Attest an SGX enclave using code samples](/samples/browse/?expanded=azure&terms=attestation)
automation https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-child-runbooks.md
@@ -97,7 +97,7 @@ Connect-AzAccount `
-ApplicationId $ServicePrincipalConnection.ApplicationId `
-CertificateThumbprint $ServicePrincipalConnection.CertificateThumbprint
-$AzureContext = Get-AzSubscription -SubscriptionId $ServicePrincipalConnection.SubscriptionID
+$AzureContext = Set-AzContext -SubscriptionId $ServicePrincipalConnection.SubscriptionID
$params = @{"VMName"="MyVM";"RepeatCount"=2;"Restart"=$true}
@@ -112,4 +112,4 @@ Start-AzAutomationRunbook `
## Next steps

* To run your runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
-* For monitoring of runbook operation, see [Runbook output and messages in Azure Automation](automation-runbook-output-and-messages.md).
+* For monitoring of runbook operation, see [Runbook output and messages in Azure Automation](automation-runbook-output-and-messages.md).
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-aspnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
@@ -183,6 +183,7 @@ A *sentinel key* is a special key used to signal when configuration has changed.
{
    services.Configure<Settings>(Configuration.GetSection("TestApp:Settings"));
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
+ services.AddAzureAppConfiguration();
} ```
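The `services.AddAzureAppConfiguration()` registration is what lets the provider reload settings when the sentinel key changes. The sentinel-key pattern itself is simple to reason about; here is a minimal, language-agnostic sketch in Python (store contents and key names are invented), not the App Configuration provider:

```python
class SentinelRefresher:
    """Reload a cached config snapshot only when the sentinel key changes."""

    def __init__(self, store, sentinel_key):
        self.store = store                    # dict-like config source
        self.sentinel_key = sentinel_key
        self._last_sentinel = store.get(sentinel_key)
        self.snapshot = dict(store)           # what the app actually reads

    def refresh(self) -> bool:
        """Return True if the snapshot was reloaded."""
        current = self.store.get(self.sentinel_key)
        if current == self._last_sentinel:
            return False                      # sentinel unchanged: keep cache
        self._last_sentinel = current
        self.snapshot = dict(self.store)      # one reload for all keys
        return True

store = {"TestApp:Settings:Message": "v1", "TestApp:Settings:Sentinel": "1"}
config = SentinelRefresher(store, "TestApp:Settings:Sentinel")

store["TestApp:Settings:Message"] = "v2"     # edit a value only
changed_without_sentinel = config.refresh()  # snapshot still serves "v1"

store["TestApp:Settings:Sentinel"] = "2"     # now bump the sentinel
changed_with_sentinel = config.refresh()     # snapshot reloads everything
```

Watching one key instead of many is the point of the pattern: a batch of edits becomes visible atomically when the sentinel is bumped last.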
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/connectivity.md
@@ -32,8 +32,6 @@ Additionally, Azure Active Directory and Azure Role-Based Access Control can be
Some Azure-attached services are only available when they can be directly reached such as the Azure Defender security services, Container Insights, and Azure Backup to blob storage.
-Currently, in the preview only the indirectly connected mode is supported.
-
||**Indirectly connected**|**Directly connected**|**Never connected**|
|---|---|---|---|
|**Description**|Indirectly connected mode offers most of the management services locally in your environment with no direct connection to Azure. A minimal amount of data must be sent to Azure for inventory and billing purposes _only_. It is exported to a file and uploaded to Azure at least once per month. No direct or continuous connection to Azure is required. Some features and services which require a connection to Azure will not be available.|Directly connected mode offers all of the available services when a direct connection can be established with Azure. Connections are always initiated _from_ your environment to Azure and use standard ports and protocols such as HTTPS/443.|No data can be sent to or from Azure in any way.|
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-data-controller-using-azdata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-azdata.md
@@ -265,6 +265,8 @@ Azure Red Hat OpenShift requires a security context constraint.
#### Apply the security context
+Before you create the data controller on Azure Red Hat OpenShift, you will need to apply specific security context constraints (SCC). For the preview release, these relax the security constraints. Future releases will provide updated SCC.
+
[!INCLUDE [apply-security-context-constraint](includes/apply-security-context-constraint.md)]

#### Create custom deployment profile
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/includes/apply-security-context-constraint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/includes/apply-security-context-constraint.md
@@ -11,8 +11,9 @@ This section explains how to apply a security context constraint (SCC). For the
1. Download the custom security context constraint (SCC). Use one of the following: - [GitHub](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/arc-data-scc.yaml)
- - ([Raw](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arc-data-scc.yaml))
+ - [Raw](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arc-data-scc.yaml)
- `curl`
+
The following command downloads arc-data-scc.yaml: ```console
@@ -32,4 +33,14 @@ This section explains how to apply a security context constraint (SCC). For the
```console oc adm policy add-scc-to-user arc-data-scc --serviceaccount default --namespace arc
- ```
+ ```
+
+ > [!NOTE]
+ > Red Hat OpenShift 4.5 or greater changes how the SCC is applied to the service account.
+ > Use the same namespace here and in the `azdata arc dc create` command below. Example is `arc`.
+ >
+ > If you are using RedHat OpenShift 4.5 or greater, run:
+ >
+ >```console
+ >oc create rolebinding arc-data-rbac --clusterrole=system:openshift:scc:arc-data-scc --serviceaccount=arc:default
+ >```
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/manage-vm-extensions-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-template.md
@@ -1,7 +1,7 @@
Title: Enable VM extension using Azure Resource Manager template description: This article describes how to deploy virtual machine extensions to Azure Arc enabled servers running in hybrid cloud environments using an Azure Resource Manager template. Previously updated : 11/06/2020 Last updated : 02/03/2021
@@ -695,6 +695,90 @@ Save the template file to disk. You can then install the extension on all the co
New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateFile "D:\Azure\Templates\KeyVaultExtension.json" ```
+## Deploy the Azure Defender integrated scanner
+
+To use the Azure Defender integrated scanner extension, the following samples are provided for Windows and Linux. If you are unfamiliar with the integrated scanner, see [Overview of Azure Defender's vulnerability assessment solution](../../security-center/deploy-vulnerability-assessment-vm.md) for hybrid machines.
+
+### Template file for Windows
+
+```json
+{
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "contentVersion": "1.0.0.0",
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "parameters": {
+ "vmName": {
+ "type": "string"
+ },
+ "apiVersionByEnv": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "resourceType/providers/WindowsAgent.AzureSecurityCenter",
+ "name": "[concat(parameters('vmName'), '/Microsoft.Security/default')]",
+ "apiVersion": "[parameters('apiVersionByEnv')]"
+ }
+ ]
+ },
+ "parameters": {
+ "vmName": {
+ "value": "resourceName"
+ },
+ "apiVersionByEnv": {
+ "value": "2015-06-01-preview"
+ }
+ }
+ }
+}
+```
+
+### Template file for Linux
+
+```json
+{
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "contentVersion": "1.0.0.0",
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "parameters": {
+ "vmName": {
+ "type": "string"
+ },
+ "apiVersionByEnv": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "resourceType/providers/LinuxAgent.AzureSecurityCenter",
+ "name": "[concat(parameters('vmName'), '/Microsoft.Security/default')]",
+ "apiVersion": "[parameters('apiVersionByEnv')]"
+ }
+ ]
+ },
+ "parameters": {
+ "vmName": {
+ "value": "resourceName"
+ },
+ "apiVersionByEnv": {
+ "value": "2015-06-01-preview"
+ }
+ }
+ }
+}
+```
+
+Save the template file to disk. You can then install the extension on all the connected machines within a resource group with the following command.
+
+```powershell
+New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateFile "D:\Azure\Templates\AzureDefenderScanner.json"
+```
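The Windows and Linux templates above differ only in the agent resource type. If you generate the template file programmatically, a sketch like the following can emit either variant (the server name is a placeholder; everything else is copied from the templates above):

```python
import json

def defender_extension_template(vm_name: str, os_type: str) -> dict:
    """Build the deployment properties shown above for either OS."""
    # Only this segment of the resource type differs between the two templates.
    agent = {"Windows": "WindowsAgent", "Linux": "LinuxAgent"}[os_type]
    return {
        "properties": {
            "mode": "Incremental",
            "template": {
                "contentVersion": "1.0.0.0",
                "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
                "parameters": {
                    "vmName": {"type": "string"},
                    "apiVersionByEnv": {"type": "string"},
                },
                "resources": [{
                    "type": f"resourceType/providers/{agent}.AzureSecurityCenter",
                    "name": "[concat(parameters('vmName'), '/Microsoft.Security/default')]",
                    "apiVersion": "[parameters('apiVersionByEnv')]",
                }],
            },
            "parameters": {
                "vmName": {"value": vm_name},
                "apiVersionByEnv": {"value": "2015-06-01-preview"},
            },
        }
    }

# Serialize one variant; the result can be saved and deployed as shown above.
template_json = json.dumps(defender_extension_template("myArcServer", "Linux"), indent=2)
```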
+
## Next steps

* You can deploy, manage, and remove VM extensions using [Azure PowerShell](manage-vm-extensions-powershell.md), from the [Azure portal](manage-vm-extensions-portal.md), or the [Azure CLI](manage-vm-extensions-cli.md).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/analyze-telemetry-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/analyze-telemetry-data.md
@@ -60,7 +60,7 @@ The following areas of Application Insights can be helpful when evaluating the b
| **[Failures](../azure-monitor/app/asp-net-exceptions.md)** | Create charts and alerts based on function failures and server exceptions. The **Operation Name** is the function name. Failures in dependencies aren't shown unless you implement custom telemetry for dependencies. |
| **[Performance](../azure-monitor/app/performance-counters.md)** | Analyze performance issues by viewing resource utilization and throughput per **Cloud role instances**. This performance data can be useful for debugging scenarios where functions are bogging down your underlying resources. |
| **[Metrics](../azure-monitor/platform/metrics-charts.md)** | Create charts and alerts that are based on metrics. Metrics include the number of function invocations, execution time, and success rates. |
-| **[Live Metrics ](../azure-monitor/app/live-stream.md)** | View metrics data as it's created in near real time. |
+| **[Live Metrics](../azure-monitor/app/live-stream.md)** | View metrics data as it's created in near real time. |
## Query telemetry data
@@ -72,18 +72,18 @@ Choose **Logs** to explore or query for logged events.
Here's a query example that shows the distribution of requests per worker over the last 30 minutes.
-<pre>
+```kusto
requests
| where timestamp > ago(30m)
| summarize count() by cloud_RoleInstance, bin(timestamp, 1m)
| render timechart
-</pre>
+```
The tables that are available are shown in the **Schema** tab on the left. You can find data generated by function invocations in the following tables:

| Table | Description |
| -- | -- |
-| **traces** | Logs created by the runtime and traces from your function code. |
+| **traces** | Logs created by the runtime, scale controller, and traces from your function code. |
| **requests** | One request for each function invocation. |
| **exceptions** | Any exceptions thrown by the runtime. |
| **customMetrics** | The count of successful and failing invocations, success rate, and duration. |
@@ -94,13 +94,39 @@ The other tables are for availability tests, and client and browser telemetry. Y
Within each table, some of the Functions-specific data is in a `customDimensions` field. For example, the following query retrieves all traces that have log level `Error`.
-<pre>
+```kusto
traces
| where customDimensions.LogLevel == "Error"
-</pre>
+```
The runtime provides the `customDimensions.LogLevel` and `customDimensions.Category` fields. You can provide additional fields in logs that you write in your function code. For an example in C#, see [Structured logging](functions-dotnet-class-library.md#structured-logging) in the .NET class library developer guide.
+## Query scale controller logs
+
+_This feature is in preview._
+
+After enabling both [scale controller logging](configure-monitoring.md#configure-scale-controller-logs) and [Application Insights integration](configure-monitoring.md#enable-application-insights-integration), you can use the Application Insights log search to query for the emitted scale controller logs. Scale controller logs are saved in the `traces` collection under the **ScaleControllerLogs** category.
+
+The following query can be used to search for all scale controller logs for the current function app within the specified time period:
+
+```kusto
+traces
+| extend CustomDimensions = todynamic(tostring(customDimensions))
+| where CustomDimensions.Category == "ScaleControllerLogs"
+```
+
+The following query expands on the previous query to show how to get only logs indicating a change in scale:
+
+```kusto
+traces
+| extend CustomDimensions = todynamic(tostring(customDimensions))
+| where CustomDimensions.Category == "ScaleControllerLogs"
+| where message == "Instance count changed"
+| extend Reason = CustomDimensions.Reason
+| extend PreviousInstanceCount = CustomDimensions.PreviousInstanceCount
+| extend NewInstanceCount = CustomDimensions.CurrentInstanceCount
+```
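If you export trace rows out of Application Insights, the same filtering can be reproduced offline. The sketch below mirrors the logic of the second query over plain dictionaries; the sample rows are invented for illustration, and this is not an Application Insights client.

```python
import json

# Invented rows in the shape of exported `traces` records.
traces = [
    {"message": "Instance count changed",
     "customDimensions": json.dumps({
         "Category": "ScaleControllerLogs",
         "Reason": "New instance needed",
         "PreviousInstanceCount": 1,
         "CurrentInstanceCount": 2})},
    {"message": "Some other trace",
     "customDimensions": json.dumps({"Category": "Host.General"})},
]

def scale_changes(rows):
    """Keep only ScaleControllerLogs rows that record an instance-count
    change, projecting the same three fields as the Kusto query."""
    for row in rows:
        dims = json.loads(row["customDimensions"])  # todynamic(tostring(...))
        if dims.get("Category") != "ScaleControllerLogs":
            continue
        if row["message"] != "Instance count changed":
            continue
        yield {"Reason": dims["Reason"],
               "PreviousInstanceCount": dims["PreviousInstanceCount"],
               "NewInstanceCount": dims["CurrentInstanceCount"]}

events = list(scale_changes(traces))
```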
+
## Consumption plan-specific metrics

When running in a [Consumption plan](consumption-plan.md), the execution *cost* of a single function execution is measured in *GB-seconds*. Execution cost is calculated by combining its memory usage with its execution time. To learn more, see [Estimating Consumption plan costs](functions-consumption-costs.md).
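As a rough illustration of the GB-seconds formula (the 128 MB rounding granularity is an assumption drawn from general Consumption plan billing descriptions, not from this article):

```python
import math

def gb_seconds(memory_mb: float, duration_s: float) -> float:
    """Execution cost in GB-seconds: observed memory, rounded up to the
    nearest 128 MB (assumed billing granularity), times execution time."""
    billed_mb = math.ceil(memory_mb / 128) * 128
    return (billed_mb / 1024) * duration_s

# A function observed at 300 MB for 2 s would be billed as 384 MB for 2 s.
cost = gb_seconds(memory_mb=300, duration_s=2.0)
```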
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/configure-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/configure-monitoring.md
@@ -225,6 +225,8 @@ az functionapp config appsettings delete --name <FUNCTION_APP_NAME> \
--setting-names SCALE_CONTROLLER_LOGGING_ENABLED ```
+With scale controller logging enabled, you are now able to [query your scale controller logs](analyze-telemetry-data.md#query-scale-controller-logs).
+
## Enable Application Insights integration

For a function app to send data to Application Insights, it needs to know the instrumentation key of an Application Insights resource. The key must be in an app setting named **APPINSIGHTS_INSTRUMENTATIONKEY**.
@@ -268,30 +270,6 @@ If an Application Insights resource wasn't created with your function app, use t
> [!NOTE]
> Early versions of Functions used built-in monitoring, which is no longer recommended. When enabling Application Insights integration for such a function app, you must also [disable built-in logging](#disable-built-in-logging).
-## Query Scale Controller logs
-
-After enabling both Scale Controller logging and Application Insights integration, you can use the Application Insights log search to query for the emitted scale controller logs. Scale controller logs are saved in the `traces` collection under the **ScaleControllerLogs** category.
-
-The following query can be used to search for all scale controller logs for the current function app within the specified time period:
-
-```kusto
-traces
-| extend CustomDimensions = todynamic(tostring(customDimensions))
-| where CustomDimensions.Category == "ScaleControllerLogs"
-```
-
-The following query expands on the previous query to show how to get only logs indicating a change in scale:
-
-```kusto
-traces
-| extend CustomDimensions = todynamic(tostring(customDimensions))
-| where CustomDimensions.Category == "ScaleControllerLogs"
-| where message == "Instance count changed"
-| extend Reason = CustomDimensions.Reason
-| extend PreviousInstanceCount = CustomDimensions.PreviousInstanceCount
-| extend NewInstanceCount = CustomDimensions.CurrentInstanceCount
-```
-
## Disable built-in logging

When you enable Application Insights, disable the built-in logging that uses Azure Storage. The built-in logging is useful for testing with light workloads, but isn't intended for high-load production use. For production monitoring, we recommend Application Insights. If built-in logging is used in production, the logging record might be incomplete because of throttling on Azure Storage.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-csharp.md
@@ -73,7 +73,7 @@ After you've verified that the function runs correctly on your local computer, i
## Next steps
-You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by adding an output binding. This binding writes the string from the HTTP request to a message in an Azure Queue Storage queue.
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
> [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-java.md
@@ -81,7 +81,7 @@ After you've verified that the function runs correctly on your local computer, i
## Next steps
-You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by adding an output binding. This binding writes the string from the HTTP request to a message in an Azure Queue Storage queue.
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=java) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=java).
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-java)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-node.md
@@ -70,7 +70,7 @@ In this section, you create a function app and related resources in your Azure s
1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
- ![Publish your project to Azure](./media/functions-create-first-function-vs-code/function-app-publish-project.png)
+ ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
1. Provide the following information at the prompts:
@@ -86,16 +86,18 @@ In this section, you create a function app and related resources in your Azure s
+ **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
+ The extension shows the status of individual resources as they are being created in Azure in the notification area.
+
+ :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
+ 1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
- + A resource group, which is a logical container for related resources.
- + A standard Azure Storage account, which maintains state and other information about your projects.
- + A consumption plan, which defines the underlying host for your serverless function app.
- + A function app, which provides the environment for executing your function code. A function app lets you group functions as a logical unit for easier management, deployment, and sharing of resources within the same hosting plan.
- + An Application Insights instance connected to the function app, which tracks usage of your serverless function.
+ [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
A notification is displayed after your function app is created and the deployment package is applied.
+ [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
1. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.

   ![Create complete notification](./media/functions-create-first-function-vs-code/function-create-notifications.png)
@@ -106,7 +108,7 @@ In this section, you create a function app and related resources in your Azure s
## Next steps
-You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by adding an output binding. This binding writes the string from the HTTP request to a message in an Azure Queue Storage queue.
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-javascript)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-other https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-other.md
@@ -221,7 +221,7 @@ You can run this project on your local development computer before you publish t
1. A response is returned, which looks like the following in a browser:
- ![Browser - localhost example output](../../includes/media/functions-run-function-test-local-vs-code/functions-test-local-browser.png)
+ ![Browser - localhost example output](./media/create-first-function-vs-code-other/functions-test-local-browser.png)
1. Information about the request is shown in **Terminal** panel.
@@ -303,7 +303,7 @@ In this section, you create a function app and related resources in your Azure s
1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
- ![Publish your project to Azure](./media/functions-create-first-function-vs-code/function-app-publish-project.png)
+ ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
1. Provide the following information at the prompts:
@@ -332,19 +332,17 @@ In this section, you create a function app and related resources in your Azure s
+ **Select an Application Insights resource**: Choose `+ Create Application Insights resource`. This name must be globally unique within Azure. You can use the name suggested in the prompt.
- + **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
+ + **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you. The extension shows the status of individual resources as they are being created in Azure in the notification area.
-1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
+ :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
- + A resource group, which is a logical container for related resources.
- + A standard Azure Storage account, which maintains state and other information about your projects.
- + A consumption plan, which defines the underlying host for your serverless function app.
- + A function app, which provides the environment for executing your function code. A function app lets you group functions as a logical unit for easier management, deployment, and sharing of resources within the same hosting plan.
- + An Application Insights instance connected to the function app, which tracks usage of your serverless function.
+1. When completed, the following Azure resources are created in your subscription:
+
+ [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
A notification is displayed after your function app is created and the deployment package is applied.
-1. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
+4. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
![Create complete notification](./media/functions-create-first-function-vs-code/function-create-notifications.png)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-powershell.md
@@ -60,7 +60,7 @@ In this section, you use Visual Studio Code to create a local Azure Functions pr
1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).

After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code to publish the project directly to Azure.
@@ -74,7 +74,7 @@ After you've verified that the function runs correctly on your local computer, i
## Next steps
-You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by adding an output binding. This binding writes the string from the HTTP request to a message in an Azure Queue Storage queue.
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=powershell) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=powershell).
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-powershell)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-python.md
@@ -49,7 +49,8 @@ In this section, you use Visual Studio Code to create a local Azure Functions pr
+ **Select a language for your function project**: Choose `Python`.
- + **Select a Python alias to create a virtual environment**: Choose the location of your Python interpreter. If the location isn't shown, type in the full path to your Python binary.
+ + **Select a Python alias to create a virtual environment**: Choose the location of your Python interpreter.
+ If the location isn't shown, type in the full path to your Python binary.
+ **Select a template for your project's first function**: Choose `HTTP trigger`.
@@ -76,15 +77,18 @@ In this section, you create a function app and related resources in your Azure s
1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
- ![Publish your project to Azure](./media/functions-create-first-function-vs-code/function-app-publish-project.png)
+ ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
1. Provide the following information at the prompts:
- + **Select folder**: Choose a folder from your workspace or browse to one that contains your function app. You won't see this if you already have a valid function app opened.
+ + **Select folder**: Choose a folder from your workspace or browse to one that contains your function app.
+ You won't see this if you already have a valid function app opened.
- + **Select subscription**: Choose the subscription to use. You won't see this if you only have one subscription.
+ + **Select subscription**: Choose the subscription to use.
+ You won't see this if you only have one subscription.
- + **Select Function App in Azure**: Choose `+ Create new Function App`. (Don't choose the `Advanced` option, which isn't covered in this article.)
+ + **Select Function App in Azure**: Choose `+ Create new Function App`.
+ (Don't choose the `Advanced` option, which isn't covered in this article.)
+ **Enter a globally unique name for the function app**: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
@@ -92,17 +96,19 @@ In this section, you create a function app and related resources in your Azure s
+ **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
+ The extension shows the status of individual resources as they are being created in Azure in the notification area.
+
+ :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
+ 1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
- + A resource group, which is a logical container for related resources.
- + A standard Azure Storage account, which maintains state and other information about your projects.
- + A consumption plan, which defines the underlying host for your serverless function app.
- + A function app, which provides the environment for executing your function code. A function app lets you group functions as a logical unit for easier management, deployment, and sharing of resources within the same hosting plan.
- + An Application Insights instance connected to the function app, which tracks usage of your serverless function.
+ [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
A notification is displayed after your function app is created and the deployment package is applied.
-1. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
+ [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
+
+4. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
![Create complete notification](./media/functions-create-first-function-vs-code/function-create-notifications.png)
@@ -112,7 +118,7 @@ In this section, you create a function app and related resources in your Azure s
## Next steps
-You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by adding an output binding. This binding writes the string from the HTTP request to a message in an Azure Queue Storage queue.
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=python) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-python)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-typescript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-typescript.md
@@ -69,7 +69,7 @@ In this section, you create a function app and related resources in your Azure s
1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
- ![Publish your project to Azure](media/functions-create-first-function-vs-code/function-app-publish-project.png)
+ ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
1. Provide the following information at the prompts:
@@ -85,17 +85,19 @@ In this section, you create a function app and related resources in your Azure s
+ **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
+ The extension shows the status of individual resources as they are being created in Azure in the notification area.
+
+ :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
+ 1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
- + A resource group, which is a logical container for related resources.
- + A standard Azure Storage account, which maintains state and other information about your projects.
- + A consumption plan, which defines the underlying host for your serverless function app.
- + A function app, which provides the environment for executing your function code. A function app lets you group functions as a logical unit for easier management, deployment, and sharing of resources within the same hosting plan.
- + An Application Insights instance connected to the function app, which tracks usage of your serverless function.
+ [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
A notification is displayed after your function app is created and the deployment package is applied.
-1. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
+ [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
+
+4. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
![Create complete notification](./media/functions-create-first-function-vs-code/function-create-notifications.png)
@@ -105,7 +107,7 @@ In this section, you create a function app and related resources in your Azure s
## Next steps
-You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by adding an output binding. This binding writes the string from the HTTP request to a message in an Azure Queue Storage queue.
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=typescript).
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-typescript)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-add-output-binding-storage-queue-vs-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
@@ -92,7 +92,7 @@ Now, you can add the storage output binding to your project.
In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the function.json file. The way you define these attributes depends on the language of your function app.

[!INCLUDE [functions-add-output-binding-json](../../includes/functions-add-output-binding-json.md)]
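As an illustration, a queue output binding entry in function.json combines these three required attributes with binding-specific ones. The names below are examples only, matching the `outqueue` and `MyStorageConnection` values used elsewhere in this article:

```json
{
  "type": "queue",
  "direction": "out",
  "name": "msg",
  "queueName": "outqueue",
  "connection": "MyStorageConnection"
}
```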
@@ -144,35 +144,25 @@ After the binding is defined, you can use the `name` of the binding to access it
[!INCLUDE [functions-add-storage-binding-java-code](../../includes/functions-add-storage-binding-java-code.md)]
-## Update the test set
+## Update the tests
[!INCLUDE [functions-add-output-binding-java-test](../../includes/functions-add-output-binding-java-test.md)]

::: zone-end
-<! Local testing section >
------
+## Run the function locally
-A new queue named **outqueue** is created in your storage account by the Functions runtime when the output binding is first used. You'll use Storage Explorer to verify that the queue was created along with the new message.
+1. As in the previous article, press <kbd>F5</kbd> to start the function app project and Core Tools.
+1. With Core Tools running, go to the **Azure: Functions** area. Under **Functions**, expand **Local Project** > **Functions**. Right-click (Ctrl-click on Mac) the `HttpExample` function and choose **Execute Function Now...**.
-## Update the tests
+ :::image type="content" source="../../includes/media/functions-run-function-test-local-vs-code/execute-function-now.png" alt-text="Execute function now from Visual Studio Code":::
+1. In **Enter request body** you see the request message body value of `{ "name": "Azure" }`. Press Enter to send this request message to your function.
+
+1. After a response is returned, press <kbd>Ctrl + C</kbd> to stop Core Tools.
+Because you are using the storage connection string, your function connects to the Azure storage account when running locally. A new queue named **outqueue** is created in your storage account by the Functions runtime when the output binding is first used. You'll use Storage Explorer to verify that the queue was created along with the new message.
### Connect Storage Explorer to your account
@@ -208,11 +198,7 @@ Now, it's time to republish the updated function app to Azure.
1. Choose the function app that you created in the first article. Because you're redeploying your project to the same app, select **Deploy** to dismiss the warning about overwriting files.
-1. After deployment completes, you can again use cURL or a browser to test the redeployed function. As before, append the query string `&name=<yourname>` to the URL, as in the following example:
-
- ```bash
- curl https://myfunctionapp.azurewebsites.net/api/httptrigger?code=cCr8sAxfBiow548FBDLS1....&name=<yourname>
- ```
+1. After deployment completes, you can again use the **Execute Function Now...** feature to trigger the function in Azure.
1. Again [view the message in the storage queue](#examine-the-output-queue) to verify that the output binding again generates a new message in the queue.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-app-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
@@ -14,7 +14,7 @@ App settings in a function app contain global configuration options that affect
There are other global configuration options in the [host.json](functions-host-json.md) file and in the [local.settings.json](functions-run-local.md#local-settings-file) file.

> [!NOTE]
-> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values).
+> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
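For example, a host.json value is overridden by an application setting whose name starts with the `AzureFunctionsJobHost__` prefix, with `__` separating each level of the JSON path. The setting below is only an illustration of the naming pattern:

```
AzureFunctionsJobHost__logging__logLevel__default = Information
```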
## APPINSIGHTS_INSTRUMENTATIONKEY
@@ -230,7 +230,7 @@ Connection string for storage account where the function app code and configurat
|||
|WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|DefaultEndpointsProtocol=https;AccountName=[name];AccountKey=[key]|
-Only used when deploying to a Consumption or Premium plans running on Windows. Not supported for Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Only used when deploying to a Premium plan or to a Consumption plan running on Windows. Not supported for Consumption plans running on Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
## WEBSITE\_CONTENTOVERVNET
@@ -248,7 +248,7 @@ The file path to the function app code and configuration in an event-driven scal
|||
|WEBSITE_CONTENTSHARE|functionapp091999e2|
-Only used by function apps on a Consumption or Premium plans running on Windows. Not supported for Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Only used when deploying to a Premium plan or to a Consumption plan running on Windows. Not supported for Consumption plans running on Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
When using an Azure Resource Manager template to create a function app during deployment, don't include WEBSITE_CONTENTSHARE in the template. This application setting is generated during deployment. To learn more, see [Automate resource deployment for your function app](functions-infrastructure-as-code.md#windows).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-best-practices.md
@@ -61,6 +61,31 @@ If a queue item was already processed, allow your function to be a no-op.
Take advantage of defensive measures already provided for components you use in the Azure Functions platform. For example, see **Handling poison queue messages** in the documentation for [Azure Storage Queue triggers and bindings](functions-bindings-storage-queue-trigger.md#poison-messages).
+## Function organization best practices
+
+As part of your solution, you may develop and publish multiple functions. These functions are often combined into a single function app, but they can also run in separate function apps. In Premium and dedicated (App Service) hosting plans, multiple function apps can also share the same resources by running in the same plan. How you group your functions and function apps can impact the performance, scaling, configuration, deployment, and security of your overall solution. There aren't rules that apply to every scenario, so consider the information in this section when planning and developing your functions.
+
+### Organize functions for performance and scaling
+
+Each function that you create has a memory footprint. While this footprint is usually small, having too many functions within a function app can lead to slower startup of your app on new instances. It also means that the overall memory usage of your function app might be higher. How many functions belong in a single app is hard to say; it depends on your particular workload. However, if your function stores a lot of data in memory, consider having fewer functions in a single app.
+
+If you run multiple function apps in a single Premium plan or dedicated (App Service) plan, these apps are all scaled together. If you have one function app that has a much higher memory requirement than the others, it uses a disproportionate amount of memory resources on each instance to which the app is deployed. Because this could leave less memory available for the other apps on each instance, you might want to run a high-memory-using function app like this in its own separate hosting plan.
+
+> [!NOTE]
+> When using the [Consumption plan](./functions-scale.md), we recommend you always put each app in its own plan, since apps are scaled independently anyway.
+
+Consider whether you want to group functions with different load profiles. For example, if you have a function that processes many thousands of queue messages, and another that is only called occasionally but has high memory requirements, you might want to deploy them in separate function apps so they get their own sets of resources and they scale independently of each other.
+
+### Organize functions for configuration and deployment
+
+Function apps have a `host.json` file, which is used to configure advanced behavior of function triggers and the Azure Functions runtime. Changes to the `host.json` file apply to all functions within the app. If you have some functions that need custom configurations, consider moving them into their own function app.
+
+All functions in your local project are deployed together as a set of files to your function app in Azure. You might need to deploy individual functions separately or use features like [deployment slots](./functions-deployment-slots.md) for some functions and not others. In such cases, you should deploy these functions (in separate code projects) to different function apps.
+
+### Organize functions by privilege
+
+Connection strings and other credentials stored in application settings give all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/learn/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps.
+
## Scalability best practices

There are a number of factors that impact how instances of your function app scale. The details are provided in the documentation for [function scaling](functions-scale.md). The following are some best practices to ensure optimal scalability of a function app.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-develop-vs-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-vs-code.md
@@ -45,10 +45,55 @@ Before you install and run the [Azure Functions extension][Azure Functions exten
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-Other resources that you need, like an Azure storage account, are created in your subscription when you [publish by using Visual Studio Code](#publish-to-azure).
+Other resources that you need, like an Azure storage account, are created in your subscription when you [publish by using Visual Studio Code](#publish-to-azure).
-> [!IMPORTANT]
-> You can develop functions locally and publish them to Azure without having to start and run them locally. To run your functions locally, you'll need to meet some additional requirements, including an automatic download of Azure Functions Core Tools. To learn more, see [Additional requirements for running a project locally](#additional-requirements-for-running-a-project-locally).
+### Run local requirements
+
+These prerequisites are only required to [run and debug your functions locally](#run-functions-locally). They aren't required to create or publish projects to Azure Functions.
+
+# [C\#](#tab/csharp)
+
++ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+
++ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
+
++ [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x).
+
+# [Java](#tab/java)
+
++ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+
++ [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
+
++ [Java 8](/azure/developer/jav#java-versions).
+
++ [Maven 3 or later](https://maven.apache.org/)
+
+# [JavaScript](#tab/nodejs)
+
++ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+
++ [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
+
+# [PowerShell](#tab/powershell)
+
++ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+
++ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
+
++ Both [.NET Core 3.1 runtime](https://www.microsoft.com/net/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet-core/2.1)
+
++ The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
+
+# [Python](#tab/python)
+
++ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+
++ [Python 3.x](https://www.python.org/downloads/). For version information, see [Python versions](functions-reference-python.md#python-version) by the Azure Functions runtime.
+
++ [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
+
+
[!INCLUDE [functions-install-vs-code-extension](../../includes/functions-install-vs-code-extension.md)]
@@ -62,8 +107,6 @@ The Functions extension lets you create a function app project, along with your
1. Select the folder for your function app project, and then **Select a language for your function project**.
-1. If you haven't already installed the Core Tools, you are asked to **Select a version** of the Core Tools to install. Choose version 2.x or a later version.
- 1. Select the **HTTP trigger** function template, or you can select **Skip for now** to create a project without a function. You can always [add a function to your project](#add-a-function-to-your-project) later. ![Choose the HTTP trigger template](./media/functions-develop-vs-code/create-function-choose-template.png)
@@ -93,7 +136,11 @@ Depending on your language, these other files are created:
* [HttpExample.cs class library file](functions-dotnet-class-library.md#functions-class-library-project) that implements the function.
-At this point, you can add input and output bindings to your function by [adding a parameter to a C# class library function](#add-input-and-output-bindings).
+# [Java](#tab/java)
+
++ A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
+
++ A [Functions.java file](functions-reference-java.md#triggers-and-annotations) in your src path that implements the function.

# [JavaScript](#tab/nodejs)
@@ -101,20 +148,19 @@ At this point, you can add input and output bindings to your function by [adding
* An HttpExample folder that contains the [function.json definition file](functions-reference-node.md#folder-structure) and the [index.js file](functions-reference-node.md#exporting-a-function), a Node.js file that contains the function code.
-At this point, you can add input and output bindings to your function by [modifying the function.json file](#add-input-and-output-bindings).
-
-<!-- # [PowerShell](#tab/powershell)
+# [PowerShell](#tab/powershell)
-* An HttpExample folder that contains the [function.json definition file](functions-reference-python.md#programming-model) and the run.ps1 file, which contains the function code.
+* An HttpExample folder that contains the [function.json definition file](functions-reference-powershell.md#folder-structure) and the run.ps1 file, which contains the function code.
# [Python](#tab/python)

* A project-level requirements.txt file that lists packages required by Functions.
-* An HttpExample folder that contains the [function.json definition file](functions-reference-python.md#programming-model) and the \_\_init\_\_.py file, which contains the function code.
- -->
+* An HttpExample folder that contains the [function.json definition file](functions-reference-python.md#folder-structure) and the \_\_init\_\_.py file, which contains the function code.
+
+At this point, you can [add input and output bindings](#add-input-and-output-bindings) to your function.
You can also [add a new function to your project](#add-a-function-to-your-project).

## Install binding extensions
@@ -129,10 +175,22 @@ Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in t
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 3.0.4
```
+# [Java](#tab/java)
+
+
# [JavaScript](#tab/nodejs)

[!INCLUDE [functions-extension-bundles](../../includes/functions-extension-bundles.md)]
+# [PowerShell](#tab/powershell)
++
+# [Python](#tab/python)
+
+
## Add a function to your project
@@ -145,15 +203,27 @@ The results of this action depend on your project's language:
A new C# class library (.cs) file is added to your project.
+# [Java](#tab/java)
+
+A new Java (.java) file is added to your project.
# [JavaScript](#tab/nodejs)

A new folder is created in the project. The folder contains a new function.json file and the new JavaScript code file.
+# [PowerShell](#tab/powershell)
+
+A new folder is created in the project. The folder contains a new function.json file and the new PowerShell code file.
+
+# [Python](#tab/python)
+
+A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
+
-## Add input and output bindings
+## <a name="add-input-and-output-bindings"></a>Connect to services
-You can expand your function by adding input and output bindings. The process for adding bindings depends on your project's language. To learn more about bindings, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
+You can connect your function to other Azure services by adding input and output bindings. Bindings connect your function to other services without you having to write the connection code. The process for adding bindings depends on your project's language. To learn more about bindings, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
The following examples connect to a storage queue named `outqueue`, where the connection string for the storage account is set in the `MyStorageConnection` application setting in local.settings.json.
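In the Python model, for example, the function writes the queue message by setting a value on an output-binding parameter. The following stand-alone sketch mimics that pattern: the minimal `Out` class is an illustrative stand-in for `azure.functions.Out`, not the actual SDK type, and the function and queue names are hypothetical.

```python
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class Out(Generic[T]):
    """Stand-in for azure.functions.Out: holds the value that the
    Functions runtime would write to the output binding (here, the
    queue named in function.json) when the function returns."""

    def __init__(self) -> None:
        self._value: Optional[T] = None

    def set(self, value: T) -> None:
        self._value = value

    def get(self) -> Optional[T]:
        return self._value

def main(name: str, msg: "Out[str]") -> str:
    # Setting the binding value is all the function code does; the
    # runtime delivers it to the queue after the function completes.
    msg.set(name)
    return f"Hello, {name}."

msg: Out[str] = Out()
print(main("Azure", msg))  # -> Hello, Azure.
print(msg.get())           # -> Azure
```

In the real project, the `msg` parameter name must match the binding `name` in function.json, and the runtime supplies the `Out` object for you.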
@@ -161,61 +231,69 @@ The following examples connect to a storage queue named `outqueue`, where the co
Update the function method to add the following parameter to the `Run` method definition:
-```cs
-[Queue("outqueue"),StorageAccount("MyStorageConnection")] ICollector<string> msg
-```
-This code requires you to add the following `using` statement:
+The `msg` parameter is an `ICollector<T>` type, which represents a collection of messages that are written to an output binding when the function completes. The following code adds a message to the collection:
-```cs
-using Microsoft.Azure.WebJobs.Extensions.Storage;
-```
-The `msg` parameter is an `ICollector<T>` type, which represents a collection of messages that are written to an output binding when the function completes. You add one or more messages to the collection. These messages are sent to the queue when the function completes.
+ Messages are sent to the queue when the function completes.
-To learn more, see the [Queue storage output binding](functions-bindings-storage-queue-output.md) documentation.
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=csharp). To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
-# [JavaScript](#tab/nodejs)
+# [Java](#tab/java)
-Visual Studio Code lets you add bindings to your function.json file by following a convenient set of prompts. To create a binding, right-click (Ctrl+click on macOS) the **function.json** file in your function folder and select **Add binding**:
+Update the function method to add the following parameter to the `Run` method definition:
-![Add a binding to an existing JavaScript function ](media/functions-develop-vs-code/function-add-binding.png)
-Following are example prompts to define a new storage output binding:
+The `msg` parameter is an `OutputBinding<T>` type, where `T` is a string that is written to an output binding when the function completes. The following code sets the message in the output binding:
-| Prompt | Value | Description |
-| -- | -- | -- |
-| **Select binding direction** | `out` | The binding is an output binding. |
-| **Select binding with direction** | `Azure Queue Storage` | The binding is an Azure Storage queue binding. |
-| **The name used to identify this binding in your code** | `msg` | Name that identifies the binding parameter referenced in your code. |
-| **The queue to which the message will be sent** | `outqueue` | The name of the queue that the binding writes to. When the *queueName* doesn't exist, the binding creates it on first use. |
-| **Select setting from "local.settings.json"** | `MyStorageConnection` | The name of an application setting that contains the connection string for the storage account. The `AzureWebJobsStorage` setting contains the connection string for the storage account you created with the function app. |
-In this example, the following binding is added to the `bindings` array in your function.json file:
+This message is sent to the queue when the function completes.
-```javascript
-{
- "type": "queue",
- "direction": "out",
- "name": "msg",
- "queueName": "outqueue",
- "connection": "MyStorageConnection"
-}
-```
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=java). To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=java).
-You can also add the same binding definition directly to your function.json.
+# [JavaScript](#tab/nodejs)
+ In your function code, the `msg` binding is accessed from the `context`, as in this example:
-```javascript
-context.bindings.msg = "Name passed to the function: " req.query.name;
-```
-To learn more, see the [Queue storage output binding](functions-bindings-storage-queue-output.md) reference.
+This message is sent to the queue when the function completes.
-
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=javascript). To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
+
+# [PowerShell](#tab/powershell)
+++
+This message is sent to the queue when the function completes.
+
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=powershell). To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=powershell).
+
+# [Python](#tab/python)
++
+Update the `Main` definition to add an output parameter `msg: func.Out[func.QueueMessage]` so that the definition looks like the following example:
+
+The following code adds string data from the request to the output queue:
++
+This message is sent to the queue when the function completes.
+
+To learn more, see the [Queue storage output binding reference article](functions-bindings-storage-queue-output.md?tabs=python). To learn more in general about which bindings can be added to a function, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
++ [!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
@@ -223,7 +301,7 @@ To learn more, see the [Queue storage output binding](functions-bindings-storage
Visual Studio Code lets you publish your Functions project directly to Azure. In the process, you create a function app and related resources in your Azure subscription. The function app provides an execution context for your functions. The project is packaged and deployed to the new function app in your Azure subscription.
-When you publish from Visual Studio Code to a new function app in Azure, you are offered both a quick function app create path and an advanced path.
+When you publish from Visual Studio Code to a new function app in Azure, you can choose either a quick function app create path using defaults or an advanced path, where you have more control over the remote resources created.
When you publish from Visual Studio Code, you take advantage of the [Zip deploy](functions-deployment-technologies.md#zip-deploy) technology.
@@ -237,9 +315,7 @@ If you want to provide explicit names for the created resources, you must choose
The following steps publish your project to a new function app created with advanced create options:
-1. In the **Azure: Functions** area, select the **Deploy to Function App** icon.
-
- ![Function app settings](./media/functions-develop-vs-code/function-app-publish-project.png)
+1. In the command palette, enter **Azure Functions: Deploy to function app**.
1. If you're not signed in, you're prompted to **Sign in to Azure**. You can also **Create a free Azure account**. After signing in from the browser, go back to Visual Studio Code.
@@ -259,47 +335,54 @@ The following steps publish your project to a new function app created with adva
A notification appears after your function app is created and the deployment package is applied. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created.
+### <a name="get-the-url-of-the-deployed-function"></a>Get the URL of an HTTP triggered function in Azure
+
+To call an HTTP-triggered function from a client, you need the URL of the function when it's deployed to your function app. This URL includes any required function keys. You can use the extension to get these URLs for your deployed functions. If you just want to run the remote function in Azure, use the [Execute function now](#run-functions-in-azure) functionality of the extension.
+
+1. Select F1 to open the command palette, and then search for and run the command **Azure Functions: Copy Function URL**.
+
+1. Follow the prompts to select your function app in Azure and then the specific HTTP trigger that you want to invoke.
+
+The function URL is copied to the clipboard, along with any required keys passed by the `code` query parameter. Use an HTTP tool to submit POST requests, or a browser for GET requests to the remote function.
+
+When getting the URL of functions in Azure, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
+ ## Republish project files
-When you set up [continuous deployment](functions-continuous-deployment.md), your function app in Azure is updated whenever source files are updated in the connected source location. We recommend continuous deployment, but you can also republish your project file updates from Visual Studio Code.
+When you set up [continuous deployment](functions-continuous-deployment.md), your function app in Azure is updated when you update source files in the connected source location. We recommend continuous deployment, but you can also republish your project file updates from Visual Studio Code.
> [!IMPORTANT]
> Publishing to an existing function app overwrites the content of that app in Azure.

[!INCLUDE [functions-republish-vscode](../../includes/functions-republish-vscode.md)]
-## Get the URL of the deployed function
+## Run functions
-To call an HTTP-triggered function, you need the URL of the function when it's deployed to your function app. This URL includes any required [function keys](functions-bindings-http-webhook-trigger.md#authorization-keys). You can use the extension to get these URLs for your deployed functions.
+The Azure Functions extension lets you run individual functions, either in your project on your local development computer or in your Azure subscription.
-1. Select F1 to open the command palette, and then search for and run the command **Azure Functions: Copy Function URL**.
+For HTTP trigger functions, the extension calls the HTTP endpoint. For other kinds of triggers, it calls administrator APIs to start the function. The message body of the request sent to the function depends on the type of trigger. When a trigger requires test data, you're prompted to enter data in a specific JSON format.
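For example, the administrator API that starts a non-HTTP function accepts a JSON body whose `input` property carries the trigger data; a queue-triggered function might be started with a payload like this sketch (the exact shape depends on the trigger type):

```json
{
    "input": "A sample queue message"
}
```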
-1. Follow the prompts to select your function app in Azure and then the specific HTTP trigger that you want to invoke.
+### Run functions in Azure
-The function URL is copied to the clipboard, along with any required keys passed by the `code` query parameter. Use an HTTP tool to submit POST requests, or a browser for GET requests to the remote function.
-
-## Run functions locally
+To execute a function in Azure from Visual Studio Code:
-The Azure Functions extension lets you run a Functions project on your local development computer. The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings-file).
+1. In the command palette, enter **Azure Functions: Execute function now** and choose your Azure subscription.
-### Additional requirements for running a project locally
+1. Choose your function app in Azure from the list. If you don't see your function app, make sure you're signed in to the correct subscription.
-To run your Functions project locally, you must meet these additional requirements:
+1. Choose the function you want to run from the list and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** indicates the format of the body. If your function app has no functions, an error notification is shown.
-* Install version 2.x or later of [Azure Functions Core Tools](functions-run-local.md#v2). The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+1. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
+
+You can also run your function from the **Azure: Functions** area by right-clicking (Ctrl-clicking on Mac) the function you want to run from your function app in your Azure subscription and choosing **Execute Function Now...**.
-* Install the specific requirements for your chosen language:
+When running functions in Azure, the extension uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
- | Language | Requirement |
- | -- | |
- | **C#** | [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)<br/>[.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x) |
- | **Java** | [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug)<br/>[Java 8](/azure/developer/java/fundamentals/java-jdk-long-term-support)<br/>[Maven 3 or later](https://maven.apache.org/) |
- | **JavaScript** | [Node.js](https://nodejs.org/)<sup>*</sup> |
- | **Python** | [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)<br/>[Python 3.6.8](https://www.python.org/downloads/) recommended|
+### Run functions locally
- <sup>*</sup>Active LTS and Maintenance LTS versions (8.11.1 and 10.14.1 recommended).
+The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the [local.settings.json file](#local-settings-file). To run your Functions project locally, you must meet [additional requirements](#run-local-requirements).
-### Configure the project to run locally
+#### Configure the project to run locally
The Functions runtime uses an Azure Storage account internally for all trigger types other than HTTP and webhooks. So you need to set the **Values.AzureWebJobsStorage** key to a valid Azure Storage account connection string.
@@ -315,15 +398,19 @@ To set the storage account connection string:
For more information, see [Local settings file](#local-settings-file).
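As a sketch, a minimal *local.settings.json* might look like the following; `UseDevelopmentStorage=true` targets the local storage emulator, and the worker runtime value depends on your project language:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}
```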
-### Debugging functions locally
+#### <a name="debugging-functions-locally"></a>Debug functions locally
+
+To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This is the same as running the `func host start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
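For a JavaScript project, the *.vscode/launch.json* the extension generates typically looks something like this sketch (exact contents vary by project language and extension version):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Node Functions",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "preLaunchTask": "func: host start"
    }
  ]
}
```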
+
+When the project is running, you can use the **Execute Function Now...** feature of the extension to trigger your functions as you would when the project is deployed to Azure. With the project running in debug mode, breakpoints are hit in Visual Studio Code as you would expect.
-To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This is the same as running the `func host start` Core Tools command from the Terminal, but with additional build tasks and an attached debugger.
+1. In the command palette, enter **Azure Functions: Execute function now** and choose **Local project**.
-When the project is running, you can trigger your functions as you would when the project is deployed to Azure. When the project is running in debug mode, breakpoints are hit in Visual Studio Code, as expected.
+1. Choose the function you want to run in your project and type the message body of the request in **Enter request body**. Press Enter to send this request message to your function. The default text in **Enter request body** indicates the format of the body. If your function app has no functions, an error notification is shown.
-The request URL for HTTP triggers is displayed in the output in the Terminal. Function keys for HTTP triggers aren't used when a project is running locally. For more information, see [Strategies for testing your code in Azure Functions](functions-test-a-function.md).
+1. When the function runs locally and after the response is received, a notification is raised in Visual Studio Code. Information about the function execution is shown in the **Terminal** panel.
-To learn more, see [Work with Azure Functions Core Tools][Azure Functions Core Tools].
+Running functions locally doesn't require using keys.
[!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]
@@ -337,10 +424,12 @@ The function application settings values can also be read in your code as enviro
* [C# script (.csx)](functions-reference-csharp.md#environment-variables)
* [Java](functions-reference-java.md#environment-variables)
* [JavaScript](functions-reference-node.md#environment-variables)
+* [PowerShell](functions-reference-powershell.md#environment-variables)
+* [Python](functions-reference-python.md#environment-variables)
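For example, in a JavaScript function the settings surface through `process.env` (a minimal sketch; `MY_SETTING` is a hypothetical setting name):

```javascript
// Application settings (the Values section of local.settings.json locally,
// or function app settings in Azure) are exposed as environment variables.
function getSetting(name) {
    return process.env[name];
}

const mySetting = getSetting("MY_SETTING");
```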
## Application settings in Azure
-The settings in the local.settings.json file in your project should be the same as the application settings in the function app in Azure. Any settings you add to local.settings.json must also be added to the function app in Azure. These settings aren't uploaded automatically when you publish the project. Likewise, any settings that you create in your function app [in the portal](functions-how-to-use-azure-function-app-settings.md#settings) must be downloaded to your local project.
+The settings in the local.settings.json file in your project should be the same as the application settings in the function app in Azure. Any settings you add to local.settings.json must also be added to the function app in Azure. These settings aren't uploaded automatically when you publish the project. Likewise, any settings that you create in your function app [in the portal](functions-how-to-use-azure-function-app-settings.md#settings) must be downloaded to your local project.
### Publish application settings
@@ -420,7 +509,7 @@ The Azure Functions extension provides a useful graphical interface in the area
| **Download Remote Settings** | Downloads settings from the chosen function app in Azure into your local.settings.json file. If the local file is encrypted, it's decrypted, updated, and encrypted again. If there are settings that have conflicting values in the two locations, you're prompted to choose how to proceed. Be sure to save changes to your local.settings.json file before you run this command. |
| **Edit settings** | Changes the value of an existing function app setting in Azure. This command doesn't affect settings in your local.settings.json file. |
| **Encrypt settings** | Encrypts individual items in the `Values` array in the [local settings](#local-settings-file). In this file, `IsEncrypted` is also set to `true`, which specifies that the local runtime will decrypt settings before using them. Encrypt local settings to reduce the risk of leaking valuable information. In Azure, application settings are always stored encrypted. |
-| **Execute Function Now** | Manually starts a [timer-triggered function](functions-bindings-timer.md) in Azure. This command is used for testing. To learn more about triggering non-HTTP functions in Azure, see [Manually run a non HTTP-triggered function](functions-manually-run-non-http.md). |
+| **Execute Function Now** | Manually starts a function using admin APIs. This command is used for testing, both locally during debugging and against functions running in Azure. When triggering a function in Azure, the extension first automatically obtains an admin key, which it uses to call the remote admin APIs that start functions in Azure. The body of the message sent to the API depends on the type of trigger. Timer triggers don't require you to pass any data. |
| **Initialize Project for Use with VS Code** | Adds the required Visual Studio Code project files to an existing Functions project. Use this command to work with a project that you created by using Core Tools. |
| **Install or Update Azure Functions Core Tools** | Installs or updates [Azure Functions Core Tools], which is used to run functions locally. |
| **Redeploy** | Lets you redeploy project files from a connected Git repository to a specific deployment in Azure. To republish local updates from Visual Studio Code, [republish your project](#republish-project-files). |
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-infrastructure-as-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
@@ -469,7 +469,7 @@ Linux apps should also include a `linuxFxVersion` property under `siteConfig`. I
||-|
| Python | `python|3.7` |
| JavaScript | `node|12` |
-| .NET | `dotnet|3.0` |
+| .NET | `dotnet|3.1` |
```json
{
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/azure-vm-vmss-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-vm-vmss-apps.md
@@ -8,9 +8,11 @@ Last updated 08/26/2019
# Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets
-Enabling monitoring on your .NET based web applications running on [Azure virtual machines](https://azure.microsoft.com/services/virtual-machines/) and [Azure virtual machine scale sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
+Enabling monitoring for your .NET- or Java-based web applications running on [Azure virtual machines](https://azure.microsoft.com/services/virtual-machines/) and [Azure virtual machine scale sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
This article walks you through enabling Application Insights monitoring using the Application Insights Agent and provides preliminary guidance for automating the process for large-scale deployments.
+> [!IMPORTANT]
+> **Java**-based applications running on Azure VMs and VMSS are monitored with the **[Application Insights Java 3.0 agent](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent)**, which is generally available.
> [!IMPORTANT]
> Azure Application Insights Agent for ASP.NET applications running on **Azure VMs and VMSS** is currently in public preview. For monitoring your ASP.Net applications running **on-premises**, use the [Azure Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
@@ -21,23 +23,47 @@ This article walks you through enabling Application Insights monitoring using th
There are two ways to enable application monitoring for Azure virtual machines and Azure virtual machine scale sets hosted applications:
-* **Codeless** via Application Insights Agent
- * This method is the easiest to enable, and no advanced configuration is required. It is often referred to as "runtime" monitoring.
+### Auto-instrumentation via Application Insights Agent
+
+* This method is the easiest to enable, and no advanced configuration is required. It is often referred to as "runtime" monitoring.
- * For Azure virtual machines and Azure virtual machine scale sets we recommend at a minimum enabling this level of monitoring. After that, based on your specific scenario, you can evaluate whether manual instrumentation is needed.
+* For Azure virtual machines and Azure virtual machine scale sets we recommend at a minimum enabling this level of monitoring. After that, based on your specific scenario, you can evaluate whether manual instrumentation is needed.
- * The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the .NET SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
- > [!NOTE]
- > Currently only .Net IIS-hosted applications are supported. Use an SDK to instrument ASP.NET Core, Java, and Node.js applications hosted on an Azure virtual machines and virtual machine scale sets.
+> [!NOTE]
+> Auto-instrumentation is currently only available for .NET IIS-hosted applications and Java. Use an SDK to instrument ASP.NET Core, Node.js, and Python applications hosted on Azure virtual machines and virtual machine scale sets.
-* **Code-based** via SDK
- * This approach is much more customizable, but it requires [adding a dependency on the Application Insights SDK NuGet packages](./asp-net.md). This method, also means you have to manage the updates to the latest version of the packages yourself.
+#### .NET
- * If you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
+ * The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the .NET SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
+
+#### Java
+ * For Java, the **[Application Insights Java 3.0 agent](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent)** is the recommended approach. The most popular libraries and frameworks, as well as logs and dependencies, are [auto-collected](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent#auto-collected-requests-dependencies-logs-and-metrics), with a multitude of [additional configurations](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-config) available.
-> [!NOTE]
-> If both agent based monitoring and manual SDK based instrumentation is detected only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this check out the [troubleshooting section](#troubleshooting) below.
+### Code-based via SDK
+
+#### .NET
+ * For .NET apps, this approach is much more customizable, but it requires [adding a dependency on the Application Insights SDK NuGet packages](./asp-net.md). This method also means you have to manage updates to the latest version of the packages yourself.
+
+ * If you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
+
+ > [!NOTE]
+ > For .NET apps only: if both agent-based monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings are honored. This prevents duplicate data from being sent. To learn more, check out the [troubleshooting section](#troubleshooting) below.
+
+#### .NET Core
+To monitor .NET Core applications, use the [SDK](https://docs.microsoft.com/azure/azure-monitor/app/asp-net-core).
+
+#### Java
+
+If you need additional custom telemetry for Java applications, see what [is available](https://docs.microsoft.com/azure/azure-monitor/app/java-in-process-agent#send-custom-telemetry-from-your-application), add [custom dimensions](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-config#custom-dimensions), or use [telemetry processors](https://docs.microsoft.com/azure/azure-monitor/app/java-standalone-telemetry-processors).
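As a sketch, custom dimensions are declared in the Java agent's *applicationinsights.json* configuration file (the keys and values below are illustrative):

```json
{
  "customDimensions": {
    "service.tier": "gold",
    "deployment.region": "westus2"
  }
}
```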
+
+#### Node.js
+
+To instrument your Node.js application, use the [SDK](https://docs.microsoft.com/azure/azure-monitor/app/nodejs).
+
+#### Python
+
+To monitor Python apps, use the [SDK](https://docs.microsoft.com/azure/azure-monitor/app/opencensus-python).
## Manage Application Insights Agent for .NET applications on Azure virtual machines using PowerShell
@@ -45,7 +71,7 @@ There are two ways to enable application monitoring for Azure virtual machines a
> Before installing the Application Insights Agent, you'll need a connection string. [Create a new Application Insights resource](./create-new-resource.md) or copy the connection string from an existing Application Insights resource.

> [!NOTE]
-> New to powershell? Check out the [Get Started Guide](/powershell/azure/get-started-azureps).
+> New to PowerShell? Check out the [Get Started Guide](/powershell/azure/get-started-azureps).
Install or update the Application Insights Agent as an extension for Azure virtual machines:

```powershell
@@ -73,7 +99,7 @@ Set-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>"
```

> [!NOTE]
-> You may install or update the Application Insights Agent as an extension across multiple Virtual Machines at-scale using a Powershell loop.
+> You may install or update the Application Insights Agent as an extension across multiple Virtual Machines at-scale using a PowerShell loop.
Uninstall the Application Insights Agent extension from an Azure virtual machine:

```powershell
@@ -100,7 +126,7 @@ You may also view installed extensions in the [Azure virtual machine blade](../.
> [!NOTE]
> Verify installation by clicking on Live Metrics Stream within the Application Insights Resource associated with the connection string you used to deploy the Application Insights Agent Extension. If you are sending data from multiple Virtual Machines, select the target Azure virtual machines under Server Name. It may take up to a minute for data to begin flowing.
-## Manage Application Insights Agent for .NET applications on Azure virtual machine scale sets using powershell
+## Manage Application Insights Agent for .NET applications on Azure virtual machine scale sets using PowerShell
Install or update the Application Insights Agent as an extension for an Azure virtual machine scale set:

```powershell
@@ -164,7 +190,7 @@ Get-AzResource -ResourceId /subscriptions/<mySubscriptionId>/resourceGroups/<myR
Find troubleshooting tips for Application Insights Monitoring Agent Extension for .NET applications running on Azure virtual machines and virtual machine scale sets.

> [!NOTE]
-> .NET Core, Java, and Node.js applications are only supported on Azure virtual machines and Azure virtual machine scale sets via manual SDK based instrumentation and therefore the steps below do not apply to these scenarios.
+> .NET Core, Node.js, and Python applications are only supported on Azure virtual machines and Azure virtual machine scale sets via manual SDK based instrumentation and therefore the steps below do not apply to these scenarios.
Extension execution output is logged to files found in the following directories:

```Windows
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-connections-scsm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-connections-scsm.md
@@ -28,7 +28,7 @@ Ensure the following prerequisites are met:
> [!NOTE] > - ITSM Connector can only connect to cloud-based ServiceNow instances. On-premises ServiceNow instances are currently not supported.
-> - In order to use custom [templates](./itsmc-definition.md#template-definitions) as a part of the actions the parameter "ProjectionType" in the SCSM template should be mapped to "IncidentManagement!System.WorkItem.Incident.ProjectionType"
+> - In order to use custom [templates](./itsmc-definition.md#define-a-template) as a part of the actions the parameter "ProjectionType" in the SCSM template should be mapped to "IncidentManagement!System.WorkItem.Incident.ProjectionType"
## Connection procedure
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-dashboard-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-dashboard-errors.md
@@ -1,6 +1,6 @@
Title: Common errors
-description: This document contain information about common errors that exists in the dashboard
+ Title: Connector status errors in the ITSMC dashboard
+description: Learn about common errors that exist in the IT Service Management Connector dashboard.
@@ -9,63 +9,77 @@ Last updated 01/18/2021
-# Errors in the connector status section
+# Connector status errors in the ITSMC dashboard
-In the connector status list section in the dashboard you can find errors that can help you to fix issues in your ITSM connector.
+The IT Service Management Connector (ITSMC) dashboard presents errors that can help you to fix problems in your connector.
-## Status Common Errors
+The following sections describe common errors that appear in the connector status section of the dashboard and how you can resolve them.
-In this section you can find the common errors that presented in the connector status section and how you should resolve them:
+## Unexpected response
-* **Error**: "Unexpected response from ServiceNow along with success status code. Response: { "import_set": "{import_set_id}", "staging_table": "x_mioms_microsoft_oms_incident", "result": [ { "transform_map": "OMS Incident", "table": "incident", "status": "error", "error_message": "{Target record not found|Invalid table|Invalid staging table" }"
+**Error**: "Unexpected response from ServiceNow along with success status code. Response: { "import_set": "{import_set_id}", "staging_table": "x_mioms_microsoft_oms_incident", "result": [ { "transform_map": "OMS Incident", "table": "incident", "status": "error", "error_message": "{Target record not found|Invalid table|Invalid staging table" }"
- **Cause**: Such error is returned from ServiceNow when:
- * A custom script deployed in ServiceNow instance causes incidents to be ignored.
- * "OMS Integrator App" code itself was modified on ServiceNow side, e.g. the onBefore script.
+**Cause**: ServiceNow returns this error when:
- **Resolution**: Disable all custom scripts or code modifications.
+* A custom script deployed in a ServiceNow instance causes incidents to be ignored.
+* "OMS Integrator app" code was modified on the ServiceNow side (for example, through the `onBefore` script).
-* **Error**: "{"error":{"message":"Operation Failed","detail":"ACL Exception Update Failed due to security constraints"}"
+**Resolution**: Disable all custom scripts or code modifications.
- **Cause**: ServiceNow permissions misconfiguration
+## Exception update failure
- **Resolution**: Check that all the roles have been properly assigned as [specified](itsmc-connections-servicenow.md#install-the-user-app-and-create-the-user-role).
+**Error**: "{"error":{"message":"Operation Failed","detail":"ACL Exception Update Failed due to security constraints"}"
-* **Error**: "An error occurred while sending the request."
+**Cause**: ServiceNow permissions are misconfigured.
- **Cause**: "ServiceNow Instance unavailable"
+**Resolution**: Check that all the roles are properly assigned as [specified](itsmc-connections-servicenow.md#install-the-user-app-and-create-the-user-role).
- **Resolution**: Check your instance in ServiceNow it might be deleted or unavailable.
+## Problem sending a request
-* **Error**: "ServiceDeskHttpBadRequestException: StatusCode=429"
+**Error**: "An error occurred while sending the request."
- **Cause**: ServiceNow rate limits are too high/low.
+**Cause**: A ServiceNow instance is unavailable.
- **Resolution**: Increase or cancel the rate limits in ServiceNow instance as explained [here](https://docs.servicenow.com/bundle/london-application-development/page/integrate/inbound-rest/task/investigate-rate-limit-violations.html).
+**Resolution**: Check your instance in ServiceNow. It might be deleted or unavailable.
-* **Error**: "AccessToken and RefreshToken invalid. User needs to authenticate again."
+## ServiceNow rate problem
- **Cause**: Refresh token is expired.
+**Error**: "ServiceDeskHttpBadRequestException: StatusCode=429"
- **Resolution**: Sync the ITSM Connector to generate a new refresh token as explained [here](./itsmc-resync-servicenow.md).
+**Cause**: ServiceNow rate limits are too high or too low.
-* **Error**: "Could not create/update work item for alert {alertName}. ITSM Connector {connectionIdentifier} does not exist or was deleted."
+**Resolution**: Increase or cancel the rate limits in the ServiceNow instance, as explained in the [ServiceNow documentation](https://docs.servicenow.com/bundle/london-application-development/page/integrate/inbound-rest/task/investigate-rate-limit-violations.html).
- **Cause**: ITSM Connector was deleted.
+## Invalid refresh token
- **Resolution**: The ITSM Connector was deleted but there are still ITSM action groups defined associated to it. There are 2 options to solve this issue:
- * Find and disable or delete such action groups
- * [Reconfigure the action group](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts) to use an existing ITSM Connector.
- * [Create a new ITSM connector](./itsmc-definition.md#create-an-itsm-connection) and [reconfigure the action group to use it](itsmc-definition.md#create-itsm-work-items-from-azure-alerts).
+**Error**: "AccessToken and RefreshToken invalid. User needs to authenticate again."
-## UI Common Errors
+**Cause**: A refresh token is expired.
-* **Error**:"Something went wrong. Could not get connection details." This error presented when the customer defines ITSM action group.
+**Resolution**: Sync ITSMC to generate a new refresh token, as explained in [How to manually fix sync problems](./itsmc-resync-servicenow.md).
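As an illustration, a client could check token expiry before each call and trigger a re-sync only when needed. The field and function names below are assumptions for the sketch, not part of any ITSMC API:

```python
from datetime import datetime, timedelta, timezone

def needs_reauth(token_expires_at, leeway_minutes=5):
    """Return True when a stored refresh token is expired or about to
    expire, meaning the connector must be re-synced to obtain a new one.

    `token_expires_at` is a timezone-aware datetime; the name is
    illustrative, not an ITSMC field.
    """
    cutoff = datetime.now(timezone.utc) + timedelta(minutes=leeway_minutes)
    return token_expires_at <= cutoff
```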
- **Cause**: Such error is displayed when:
- * Newly created ITSM Connector has yet to finish the initial Sync.
- * The connector was not defined correctly
+## Missing connector
- **Resolution**:
- * When a new ITSM connector is created, ITSM Connector starts syncing information from ITSM system, such as work item templates and work items. Sync the ITSM Connector to generate a new refresh token as explained [here](./itsmc-resync-servicenow.md).
- * Review your connection details in the ITSM connector as explained [here](./itsmc-connections-servicenow.md#create-a-connection) and check that your ITSM connector can successfully [sync](./itsmc-resync-servicenow.md).
+**Error**: "Could not create/update work item for alert {alertName}. ITSM Connector {connectionIdentifier} does not exist or was deleted."
+
+**Cause**: ITSMC was deleted.
+
+**Resolution**: ITSMC was deleted, but defined IT Service Management (ITSM) action groups are still associated with it. There are three options to solve this problem:
+
+* Find and disable or delete such action groups.
+* [Reconfigure the action groups](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts) to use an existing ITSMC instance.
+* [Create a new ITSMC instance](./itsmc-definition.md#create-an-itsm-connection) and [reconfigure the action groups to use it](itsmc-definition.md#create-itsm-work-items-from-azure-alerts).
+
+## Lack of connection details
+
+**Error**: "Something went wrong. Could not get connection details." This error appears when you define an ITSM action group.
+
+**Cause**: Such an error appears in either of these situations:
+
+* A newly created ITSMC instance has yet to finish the initial sync.
+* The connector was not defined correctly.
+
+**Resolution**:
+
+* When a new ITSMC instance is created, it starts syncing information from the ITSM system, such as work item templates and work items. [Sync ITSMC to generate a new refresh token](./itsmc-resync-servicenow.md).
+* [Review your connection details in ITSMC](./itsmc-connections-servicenow.md#create-a-connection) and check that ITSMC can successfully [sync](./itsmc-resync-servicenow.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-dashboard.md
@@ -1,6 +1,6 @@
Title: Investigate errors by using the dashboard
-description: This document contains information about errors on the ITSMC dashboard.
+ Title: Investigate errors by using the ITSMC dashboard
+description: Learn how to use the IT Service Management Connector dashboard to investigate errors.
@@ -11,48 +11,64 @@ Last updated 01/15/2021
# Investigate errors by using the ITSMC dashboard
-This article contains information about the IT Service Management Connector (ITSMC) dashboard. The dashboard helps you investigate the status of ITSMC.
+This article contains information about the IT Service Management Connector (ITSMC) dashboard. The dashboard helps you to investigate the status of your connector.
-## View the dashboard
+## View errors
-Follow these steps to open the dashboard.
+To view errors in the dashboard:
1. Select **All resources**, and then find **ServiceDesk(*your workspace name*)**. ![Screenshot that shows the resources in Azure services.](media/itsmc-definition/create-new-connection-from-resource.png)
-1. In the left pane, select **Workspace Data Sources**, and then select **ITSM Connections**.
+2. Under **Workspace Data Sources** on the left pane, select **ITSM Connections**:
![Screenshot that shows selecting ITSM Connections under Workplace Data Sources.](media/itsmc-overview/add-new-itsm-connection.png)
-1. In the **Summary** section, select **View Summary** to view a summary graph.
+3. Under **Summary**, in the **IT Service Management Connector** area, select **View Summary**:
- ![Screenshot that shows the View Summary option in the Summary section.](media/itsmc-resync-servicenow/dashboard-view-summary.png)
+ ![Screenshot that shows the View Summary button.](media/itsmc-resync-servicenow/dashboard-view-summary.png)
-1. Select the graph in the **Summary** section to open the dashboard.
+4. When a graph appears in the **IT Service Management Connector** area, select it:
- ![Screenshot that shows selecting the Summary graph.](media/itsmc-resync-servicenow/dashboard-graph-click.png)
+ ![Screenshot that shows selection of a graph.](media/itsmc-resync-servicenow/dashboard-graph-click.png)
-1. Review the dashboard for status and any errors in your connector.
- ![Screenshot that shows the dashboard.](media/itsmc-resync-servicenow/connector-dashboard.png)
+5. The dashboard appears. Use it to review the status and the errors in your connector.
+
+ ![Screenshot that shows connector status on the dashboard.](media/itsmc-resync-servicenow/connector-dashboard.png)
## Understand dashboard elements
-The dashboard contains information on the alerts that were sent into the ITSM tool by using this connector.
+The dashboard contains information on the alerts that were sent to the ITSM tool through this connector. The dashboard is split into four parts.
-The dashboard is split into four sections:
+### Created work items
-- **WORK ITEMS CREATED**: The graph and table show the number of the work items by type. Select the graph or the table to learn more about your work items.
- ![Screenshot that shows the work items created section.](media/itsmc-resync-servicenow/itsm-dashboard-workitems.png)
-- **IMPACTED COMPUTERS**: The table contains details about the configuration items that created work items.
- Select rows in the tables for more details about the configuration items.
- The table contains a limited number of rows. To see the entire list, select **See all**.
- ![Screenshot that shows the impacted computers section.](media/itsmc-resync-servicenow/itsm-dashboard-impacted-comp.png)
-- **CONNECTOR STATUS**: The graph and the table show information about the status of the connector. Select the graph or the messages in the table for more details. The table shows a limited number of rows. To see the entire list, select **See all**.
- ![Screenshot that shows the connector status section.](media/itsmc-resync-servicenow/itsm-dashboard-connector-status.png)
-- **ALERT RULES**: This section shows information about the number of alert rules that were detected. Select rows in the tables for more details on the rules that were detected. The table has a limited number of rows. To see the entire list, select **See all**.
- ![Screenshot that shows the alert rules section.](media/itsmc-resync-servicenow/itsm-dashboard-alert-rules.png)
+In the **WORK ITEMS CREATED** area, the graph and the table below it contain the count of the work items per type. If you select the graph or the table, you can see more details about the work items.
-## Next steps
+![Screenshot that shows a created work item.](media/itsmc-resync-servicenow/itsm-dashboard-workitems.png)
-Check out [Common connector status errors](itsmc-dashboard-errors.md).
+### Affected computers
+
+In the **IMPACTED COMPUTERS** area, the table lists computers and their associated work items. By selecting rows in the table, you can get more details about the computers.
+
+The table contains a limited number of rows. If you want to see all the rows, select **See all**.
+
+![Screenshot that shows affected computers.](media/itsmc-resync-servicenow/itsm-dashboard-impacted-comp.png)
+
+### Connector status
+
+In the **CONNECTOR STATUS** area, the graph and the table below it contain messages about the status of the connector. By selecting the graph or rows in the table, you can get more details about the messages.
+
+The table contains a limited number of rows. If you want to see all the rows, select **See all**.
+
+To learn more about the messages in the table, see [Common connector status errors](itsmc-dashboard-errors.md).
+
+![Screenshot that shows connector status.](media/itsmc-resync-servicenow/itsm-dashboard-connector-status.png)
+
+### Alert rules
+
+In the **ALERT RULES** area, the table contains information on the number of alert rules that were detected. By selecting rows in the table, you can get more details about the detected rules.
+
+The table contains a limited number of rows. If you want to see all the rows, select **See all**.
+
+![Screenshot that shows alert rules.](media/itsmc-resync-servicenow/itsm-dashboard-alert-rules.png)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-definition.md
@@ -14,30 +14,30 @@
:::image type="icon" source="media/itsmc-overview/itsmc-symbol.png":::
-This article provides information about how to configure the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items.
+This article provides information about how to configure IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your IT Service Management (ITSM) work items.
## Add IT Service Management Connector
-Before you can create a connection, you need to add ITSMC.
+Before you can create a connection, you need to install ITSMC.
1. In the Azure portal, select **Create a resource**:
- ![Screenshot that shows the Create a resource menu item.](media/itsmc-overview/azure-add-new-resource.png)
+ ![Screenshot that shows the menu item for creating a resource.](media/itsmc-overview/azure-add-new-resource.png)
-2. Search for **IT Service Management Connector** in Azure Marketplace. Select **Create**:
+2. Search for **IT Service Management Connector** in Azure Marketplace. Then select **Create**:
![Screenshot that shows the Create button in Azure Marketplace.](media/itsmc-overview/add-itsmc-solution.png)
-3. In the **LA Workspace** section, select the Azure Log Analytics workspace where you want to install ITSMC.
- >[!NOTE]
- >
- > * ITSMC can be installed only in Log Analytics workspaces in the following regions: East US, West US 2, South Central US, West Central US, US Gov Arizona, US Gov Virginia, Canada Central, West Europe, South UK, Southeast Asia, Japan East, Central India, and Australia Southeast.
+3. In the **LA Workspace** section, select the Log Analytics workspace where you want to install ITSMC.
+ > [!NOTE]
+ > You can install ITSMC in Log Analytics workspaces only in the following regions: East US, West US 2, South Central US, West Central US, US Gov Arizona, US Gov Virginia, Canada Central, West Europe, South UK, Southeast Asia, Japan East, Central India, and Australia Southeast.
4. In the **Log Analytics workspace** section, select the resource group where you want to create the ITSMC resource: ![Screenshot that shows the Log Analytics workspace section.](media/itsmc-overview/itsmc-solution-workspace.png)
- >[!NOTE]
- >As part of the ongoing transition from Microsoft Operations Management Suite (OMS) to Azure Monitor, OMS workspaces are now referred to as *Log Analytics workspaces*.
+
+ > [!NOTE]
+ > As part of the ongoing transition from Microsoft Operations Management Suite (OMS) to Azure Monitor, OMS workspaces are now called *Log Analytics workspaces*.
5. Select **OK**.
@@ -45,29 +45,26 @@ When the ITSMC resource is deployed, a notification appears at the upper-right c
## Create an ITSM connection
-After you've installed ITSMC, you can create a connection.
-
-To create a connection, you'll need to prep your ITSM tool to allow the connection from ITSMC.
-
-Based on the ITSM product you're connecting to, select one of the following links for instructions:
+After you've installed ITSMC, you must prep your ITSM tool to allow the connection from ITSMC. Based on the ITSM product that you're connecting to, select one of the following links for instructions:
- [ServiceNow](./itsmc-connections-servicenow.md) - [System Center Service Manager](./itsmc-connections-scsm.md) - [Cherwell](./itsmc-connections-cherwell.md) - [Provance](./itsmc-connections-provance.md)
-After you've prepped your ITSM tools, complete these steps to create a connection:
+After you've prepped your ITSM tool, complete these steps to create a connection:
1. In **All resources**, look for **ServiceDesk(*your workspace name*)**: ![Screenshot that shows recent resources in the Azure portal.](media/itsmc-definition/create-new-connection-from-resource.png)
-1. Under **Workspace Data Sources** in the left pane, select **ITSM Connections**:
+1. Under **Workspace Data Sources** on the left pane, select **ITSM Connections**:
![Screenshot that shows the ITSM Connections menu item.](media/itsmc-overview/add-new-itsm-connection.png)+ 1. Select **Add Connection**.
-1. Specify the connection settings as described according to ITSM products/
+1. Specify the connection settings according to the ITSM product that you're using:
- [ServiceNow](./itsmc-connections-servicenow.md) - [System Center Service Manager](./itsmc-connections-scsm.md)
@@ -75,97 +72,84 @@ After you've prepped your ITSM tools, complete these steps to create a connectio
- [Provance](./itsmc-connections-provance.md) > [!NOTE]
+ > By default, ITSMC refreshes the connection's configuration data once every 24 hours. To refresh your connection's data instantly to reflect any edits or template updates that you make, select the **Sync** button on your connection's pane:
>
- > By default, ITSMC refreshes the connection's configuration data once every 24 hours. To refresh your connection's data instantly to reflect any edits or template updates that you make, select the **Sync** button on your connection's blade:
- >
- > ![Screenshot that shows the Sync button on the connection blade.](media/itsmc-overview/itsmc-connections-refresh.png)
-
-## Use ITSMC
-
- You can use ITSMC to create alerts from Azure Monitor Alerts into the ITSM tool.
+ > ![Screenshot that shows the Sync button on the connection's pane.](media/itsmc-overview/itsmc-connections-refresh.png)
## Create ITSM work items from Azure alerts
-After you create your ITSM connection, you can create work items in your ITSM tool based on Azure alerts. To create the work items, you'll use the ITSM action in action groups.
+After you create your ITSM connection, you can use ITSMC to create work items in your ITSM tool based on Azure alerts. To create the work items, you'll use the ITSM action in action groups.
-Action groups provide a modular and reusable way to trigger actions for your Azure alerts. You can use action groups with metric alerts, activity log alerts, and Azure Log Analytics alerts in the Azure portal.
+Action groups provide a modular and reusable way to trigger actions for your Azure alerts. You can use action groups with metric alerts, activity log alerts, and Log Analytics alerts in the Azure portal.
> [!NOTE]
-> After you create the ITSM connection, you need to wait for 30 minutes for the sync process to finish.
+> After you create the ITSM connection, you need to wait 30 minutes for the sync process to finish.
-### Template definitions
+## Define a template
- There are work item types that can use templates that are defined by the ITSM tool.
- By using templates, you can define fields that will be automatically populated according to fixed values that are defined as part of the action group. You define templates in the ITSM tool.
- You can define which template you would like to use as a part of the definition of the action group.
+Certain work item types can use templates that you define in the ITSM tool. By using templates, you can define fields that will be automatically populated according to fixed values for an action group. You can define which template you want to use as a part of the definition of an action group.
-Use the following procedure to create action groups:
+To create an action group:
1. In the Azure portal, select **Alerts**.
-2. In the menu at the top of the screen, select **Manage actions**:
+2. On the menu at the top of the screen, select **Manage actions**:
![Screenshot that shows the Manage actions menu item.](media/itsmc-overview/action-groups-selection-big.png) The **Create action group** window appears.
-3. Select the **Subscription** and **Resource group** where you want to create your action group. Provide an **Action group name** and **Display name** for your action group. Select **Next: Notifications**.
+3. Select the **Subscription** and **Resource group** where you want to create your action group. Provide values in **Action group name** and **Display name** for your action group. Then select **Next: Notifications**.
![Screenshot that shows the Create action group window.](media/itsmc-overview/action-groups-details.png)
-4. In the notification list, select **Next: Actions**.
-5. In the actions list, select **ITSM** in the **Action Type** list. Provide a **Name** for the action. Select the pen button that represents **Edit details**.
+4. On the **Notifications** tab, select **Next: Actions**.
+5. On the **Actions** tab, select **ITSM** in the **Action Type** list. For **Name**, provide a name for the action. Then select the pen button that represents **Edit details**.
- ![Screenshot that shows action group definition.](media/itsmc-definition/action-group-pen.png)
+ ![Screenshot that shows selections for creating an action group.](media/itsmc-definition/action-group-pen.png)
-6. In the **Subscription** list, select the subscription in which your Log Analytics workspace is located. In the **Connection** list, select your ITSM connector name. It will be followed by your workspace name. For example, MyITSMConnector(MyWorkspace).
+6. In the **Subscription** list, select the subscription that contains your Log Analytics workspace. In the **Connection** list, select your ITSM connector name. It will be followed by your workspace name. An example is *MyITSMConnector(MyWorkspace)*.
7. Select a **Work Item** type.
-8. If you want to fill out-of-the-box fields with fixed values, select **Use Custom Template**. Otherwise, choose an existing [template](#template-definitions) in the **Template** list and enter the fixed values in the template fields.
-
-9. In the last section of the action ITSM group definition you can define how many work items will be created for each alert.
+8. If you want to fill out-of-the-box fields with fixed values, select **Use Custom Template**. Otherwise, choose an existing [template](#define-a-template) in the **Template** list and enter the fixed values in the template fields.
- >[!NOTE]
- >
- > * This section is relevant only for Log Search Alerts.
- > * For all other alert types one work item will be created per alert.
+9. In the last section of the interface for creating an ITSM action group, you can define how many work items will be created for each alert.
- * In a case you select in the "Work Item" dropdown "Incident" or "Alert":
- ![Screenshot that shows the ITSM Incident window.](media/itsmc-overview/itsm-action-configuration.png)
- * If you check the **"Create individual work items for each Configuration Item"** check box, every configuration item in every alert will create a new work item. As a result of several alert for the same configuration items impacted, there are going to be more than one work item for each configuration item.
+ > [!NOTE]
+ > This section is relevant only for log search alerts. For all other alert types, you'll create one work item per alert.
- For example:
- 1) Alert 1 with 3 Configuration Items: A, B, C - will create 3 work items.
- 2) Alert 2 with 1 Configuration Item: A - will create 1 work item.
+ * If you selected **Incident** or **Alert** in the **Work Item** drop-down list, you have the option to create individual work items for each configuration item.
+
+ ![Screenshot that shows the I T S M Ticket area with Incident selected as a work item.](media/itsmc-overview/itsm-action-configuration.png)
+
+   * If you select the **Create individual work items for each Configuration Item** check box, every configuration item in every alert will create a new work item. If several alerts occur for the same affected configuration items, more than one work item will be created for each configuration item.
- * If you clear the **"Create individual work items for each Configuration Item"** check box,
- ITSM connector will create a single work item for each alert rule and append to it all impacted configuration items. A new work item will be created if the previous one is closed.
+ For example, an alert that has three configuration items will create three work items. An alert that has one configuration item will create one work item.
+
+ * If you clear the **Create individual work items for each Configuration Item** check box, ITSMC will create a single work item for each alert rule and append to it all affected configuration items. A new work item will be created if the previous one is closed.
- >[!NOTE]
- > In this case some of the fired alert will not generate new work items in the ITSM tool.
+ > [!NOTE]
+ > In this case, some of the fired alerts won't generate new work items in the ITSM tool.
- For example:
- 1) Alert 1 with 3 Configuration Items: A, B, C - will create 1 work item.
- 2) Alert 2 for the same alert rule as in step a with 1 Configuration Item: D - D will be attached to the impacted configuration items list in the work item created in the step a.
- 3) Alert 3 for a different alert rule with 1 Configuration Item: E - will create 1 work item.
+ For example, an alert that has three configuration items will create one work item. If an alert for the same alert rule as the previous example has one configuration item, that configuration item will be attached to the list of affected configuration items in the created work item. An alert for a different alert rule that has one configuration item will create one work item.
- * In a case you select in the "Work Item" dropdown "Event":
- ![Screenshot that shows the ITSM Event window.](media/itsmc-overview/itsm-action-configuration-event.png)
+ * If you selected **Event** in the **Work Item** drop-down list, you can choose to create individual work items for each log entry or for each configuration item.
+
+ ![Screenshot that shows the I T S M Ticket area with Event selected as a work item.](media/itsmc-overview/itsm-action-configuration-event.png)
- * If you select **"Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)"** in the radio buttons selection, a work item will be created per each row in the search results of the log search alert query. The description property in the payload of the work item will contain the row from the search results.
- * If you select **"Create individual work items for each Configuration Item"** in the radio buttons selection, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system. This will be the same as the checking the checkbox in Incident/Alert section.
+ * If you select **Create individual work items for each Log Entry (Configuration item field is not filled. Can result in large number of work items.)**, a work item will be created for each row in the search results of the log search alert query. The description property in the payload of the work item will contain the row from the search results.
+
+   * If you select **Create individual work items for each Configuration Item**, every configuration item in every alert will create a new work item. Each configuration item can have more than one work item in the ITSM system. This option is the same as selecting the check box that appears after you select **Incident** as the work item type.
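The fan-out rules above can be modeled in a few lines. This is a rough sketch of how many work items result from a batch of log search alerts; the function and names are illustrative, not an ITSMC API:

```python
def work_items_for_alerts(alerts, individual_per_config_item):
    """Model how many work items the ITSM action creates.

    Each alert is a tuple: (alert_rule, [configuration_items]).
    Illustrative sketch only -- not actual ITSMC code.
    """
    if individual_per_config_item:
        # One work item per configuration item in every alert.
        return [(rule, ci) for rule, items in alerts for ci in items]
    # One work item per alert rule, with all affected configuration
    # items appended to it.
    merged = {}
    for rule, items in alerts:
        merged.setdefault(rule, []).extend(items)
    return list(merged.items())
```

With alerts `[("cpu", ["A", "B", "C"]), ("cpu", ["D"]), ("disk", ["E"])]`, the individual option yields five work items, while the merged option yields two: one for the `cpu` rule carrying A, B, C, D, and one for the `disk` rule carrying E.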
10. Select **OK**. When you create or edit an Azure alert rule, use an action group, which has an ITSM action. When the alert triggers, the work item is created or updated in the ITSM tool. > [!NOTE]
+> For information about the pricing of the ITSM action, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for action groups.
>
->- For information about the pricing of the ITSM action, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for action groups.
->
->
->- The short description field in the alert rule definition is limited to 40 characters when you send it by using the ITSM action.
+> The short description field in the alert rule definition is limited to 40 characters when you send it by using the ITSM action.
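Given that limit, a hypothetical helper could pre-trim the short description before it reaches the ITSM action, so the cut point is under your control rather than the action's. The function name and trimming style are assumptions for illustration:

```python
MAX_SHORT_DESCRIPTION = 40  # limit applied by the ITSM action

def fit_short_description(text, limit=MAX_SHORT_DESCRIPTION):
    """Trim an alert's short description to the ITSM action's limit.

    Illustrative helper, not part of any Azure SDK.
    """
    if len(text) <= limit:
        return text
    return text[:limit - 3] + "..."
```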
## Next steps
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
+* [Troubleshoot problems in ITSMC](./itsmc-resync-servicenow.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-overview.md
@@ -42,7 +42,7 @@ You can start using ITSMC by completing the following steps:
1. [Setup your ITSM Environment to accept alerts from Azure.](./itsmc-connections.md) 1. [Configure Azure ITSM Solution](./itsmc-definition.md#add-it-service-management-connector) 1. [Configure Azure ITSM connector for your ITSM environment.](./itsmc-definition.md#create-an-itsm-connection)
-1. [Configure Action Group to leverage ITSM connector.](./itsmc-definition.md#use-itsmc)
+1. [Configure Action Group to leverage ITSM connector.](./itsmc-definition.md#define-a-template)
## Next steps
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-troubleshoot-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-troubleshoot-overview.md
@@ -1,6 +1,6 @@
Title: Troubleshooting problems in ITSM Connector
-description: Troubleshooting problems in IT Service Management Connector
+ Title: Troubleshoot problems in ITSMC
+description: Learn how to resolve common problems in IT Service Management Connector.
@@ -8,75 +8,85 @@
Last updated 04/12/2020
-# Troubleshooting problems in ITSM Connector
+# Troubleshoot problems in IT Service Management Connector
-This article discusses common problems in ITSM Connector and how to troubleshoot them.
+This article discusses common problems in IT Service Management Connector (ITSMC) and how to troubleshoot them.
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them.
-The customer can select how they want to be notified on the alert whether it is by mail, SMS, Webhook or even to automate a solution. Another option to be notified is using ITSM.
-ITSM gives you the option to send the alerts to external ticketing system such as ServiceNow.
+Azure Monitor proactively notifies you in alerts when it finds important conditions in your monitoring data. These alerts help you identify and address problems before the users of your system notice them.
-## Visualize and analyze the incident and change request data
+You can select how you want to receive alerts. You can choose mail, SMS, or webhook, or even automate a solution.
-Depending on your configuration when you set up a connection, ITSMC can sync up to 120 days of incident and change request data. The log record schema for this data is provided in the [Additional information Section](./itsmc-synced-data.md) of this article.
+An alternative is to be notified through ITSMC. ITSMC gives you the option to send alerts to an external ticketing system such as ServiceNow.
+
+## Use the dashboard to analyze incident and change request data
+
+Depending on your configuration when you set up a connection, ITSMC can sync up to 120 days of incident and change request data. To get the log record schema for this data, see the [Data synced from your ITSM product](./itsmc-synced-data.md) article.
You can visualize the incident and change request data by using the ITSMC dashboard: ![Screenshot that shows the ITSMC dashboard.](media/itsmc-overview/itsmc-overview-sample-log-analytics.png)
-The dashboard also provides information about connector status, which you can use as a starting point to analyze problems with the connections.
-
-In order to get more information about the dashboard investigation, see [Error Investigation using the dashboard](./itsmc-dashboard.md).
+The dashboard also provides information about connector status. You can use that information as a starting point to analyze problems with the connections. For more information, see [Error investigation using the dashboard](./itsmc-dashboard.md).
-### Service map
+## Use Service Map to visualize incidents
You can also visualize the incidents synced against the affected computers in Service Map.
-Service Map automatically discovers the application components on Windows and Linux systems and maps the communication between services. It allows you to view your servers as you think of them: as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, and ports across any TCP-connected architecture. Other than the installation of an agent, no configuration is required. For more information, see [Using Service Map](../insights/service-map.md).
+Service Map automatically discovers the application components on Windows and Linux systems and maps the communication between services. It allows you to view your servers as you think of them: as interconnected systems that deliver critical services.
-If you're using Service Map, you can view the service desk items created in ITSM solutions, as shown here:
+Service Map shows connections between servers, processes, and ports across any TCP-connected architecture. Other than the installation of an agent, no configuration is required. For more information, see [Using Service Map](../insights/service-map.md).
+
+If you're using Service Map, you can view the service desk items created in IT Service Management (ITSM) solutions, as shown in this example:
![Screenshot that shows the Log Analytics screen.](media/itsmc-overview/itsmc-overview-integrated-solutions.png)
-## Common Symptoms - how should it be resolved?
+## Resolve problems
+
+The following sections identify common symptoms, possible causes, and resolutions.
+
+### A connection to the ITSM system fails and you get an "Error in saving connection" message
+
+**Cause**: The cause can be one of these options:
+
+* Credentials are incorrect.
+* Privileges are insufficient.
+* The web app was incorrectly deployed.
+
+**Resolution**:
+
+* For ServiceNow, Cherwell, and Provance connections:
+ * Ensure that you correctly entered the username, password, client ID, and client secret for each of the connections.
+ * For ServiceNow, ensure that you have [sufficient privileges](itsmc-connections-servicenow.md#install-the-user-app-and-create-the-user-role) in the corresponding ITSM product.
+
+* For Service Manager connections:
+ * Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify that the connection is successfully established with the on-premises Service Manager computer, go to the web app URL as described in the [documentation for making a hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
+
+### Duplicate work items are created
-The list below contain common symptoms and how should it be resolved:
+**Cause**: The cause can be one of these two options:
-* **Symptom**: If a connection fails to connect to the ITSM system and you get an **Error in saving connection** message.
+* More than one ITSM action is defined for the alert.
+* The alert is resolved.
- **Cause**: the cause can be one of the options:
- * Incorrect credentials
- * Insufficient privileges
- * Web app should be deployed correctly
+**Resolution**: There can be two solutions:
- **Resolution**:
- * For ServiceNow, Cherwell, and Provance connections:
- * Ensure that you correctly entered the user name, password, client ID, and client secret for each of the connections.
- * For ServiceNow: Ensure that you have sufficient privileges in the corresponding ITSM product to make the connection as [specified](itsmc-connections-servicenow.md#install-the-user-app-and-create-the-user-role).
- * For Service Manager connections:
- * Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify the connection is successfully established with the on-premises Service Manager computer, go to the web app URL as described in the documentation for making the [hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
-* **Symptom**: Duplicate work items are created
+* Make sure that you have a single ITSM action group per alert.
+* ITSMC doesn't support updating the status of a matching work item when an alert is resolved. A new resolved work item is created instead.
- **Cause**: the cause can be one of the two options:
- * More than one ITSM action are defined for the alert.
- * Alert is resolved.
+### Work items are not created
- **Resolution**: There can be two solutions:
- * Make sure that you have a single ITSM action group per alert.
- * ITSM Connector does not support matching work items status update when an alert is resolved. A new resolved work item is created.
-* **Symptom**: Work items are not created
+**Cause**: There can be several reasons for this symptom:
- **Cause**: There can be couple of reasons for this symptom:
- * Code modification in ServiceNow side
- * Permissions misconfiguration
- * ServiceNow rate limits are too high/low
- * Refresh token is expired
- * ITSM Connector was deleted
+* Code was modified on the ServiceNow side.
+* Permissions are misconfigured.
+* ServiceNow rate limits are too high or too low.
+* A refresh token is expired.
+* ITSMC was deleted.
- **Resolution**: You can check the [dashboard](itsmc-dashboard.md) and review the errors in the connector status section. Review the [common errors](itsmc-dashboard-errors.md) and find out how to resolve the error.
+**Resolution**: Check the [dashboard](itsmc-dashboard.md) and review the errors in the section for connector status. Then review the [common errors and their resolutions](itsmc-dashboard-errors.md).
-* **Symptom**: Unable to create ITSM Action for Action Group
+### You can't create an ITSM action for an action group
- **Cause**:Newly created ITSM Connector has yet to finish the initial Sync.
+**Cause**: A newly created ITSMC instance has yet to finish the initial sync.
- **Resolution**: you can review the [common UI errors](itsmc-dashboard-errors.md#ui-common-errors) and find out how to resolve the error.
+**Resolution**: Review the [common errors and their resolutions](itsmc-dashboard-errors.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/samples/resource-manager-data-collection-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/samples/resource-manager-data-collection-rules.md
@@ -17,7 +17,7 @@ This article includes sample [Azure Resource Manager templates](../../azure-reso
## Create association with Azure VM
-The following sample installs the Azure Monitor agent on a Windows Azure virtual machine. An association is created between an Azure virtual machine and a data collection rule.
+The following sample creates an association between an Azure virtual machine and a data collection rule.
### Template file
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
@@ -2,13 +2,15 @@
Title: Lock resources to prevent changes description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles. Previously updated : 11/11/2020 Last updated : 02/01/2021 # Lock resources to prevent unexpected changes
-As an administrator, you may need to lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to **CanNotDelete** or **ReadOnly**. In the portal, the locks are called **Delete** and **Read-only** respectively.
+As an administrator, you can lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. The lock overrides any permissions the user might have.
+
+You can set the lock level to **CanNotDelete** or **ReadOnly**. In the portal, the locks are called **Delete** and **Read-only** respectively.
* **CanNotDelete** means authorized users can still read and modify a resource, but they can't delete the resource.
* **ReadOnly** means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role.
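As an illustrative sketch (the lock name and notes are placeholders), a **CanNotDelete** lock can be declared in an ARM template with a `Microsoft.Authorization/locks` resource:

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2016-09-01",
  "name": "doNotDelete",
  "properties": {
    "level": "CanNotDelete",
    "notes": "Prevents accidental deletion of this resource."
  }
}
```

Setting `"level": "ReadOnly"` instead applies the read-only lock described above.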
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-management-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-management-group.md
@@ -107,7 +107,7 @@ For more detailed information about deployment commands and options for deployin
For management group level deployments, you must provide a location for the deployment. The location of the deployment is separate from the location of the resources you deploy. The deployment location specifies where to store deployment data. [Subscription](deploy-to-subscription.md) and [tenant](deploy-to-tenant.md) deployments also require a location. For [resource group](deploy-to-resource-group.md) deployments, the location of the resource group is used to store the deployment data.
-You can provide a name for the deployment, or use the default deployment name. The default name is the name of the template file. For example, deploying a template named **azuredeploy.json** creates a default deployment name of **azuredeploy**.
+You can provide a name for the deployment, or use the default deployment name. The default name is the name of the template file. For example, deploying a template named _azuredeploy.json_ creates a default deployment name of **azuredeploy**.
For each deployment name, the location is immutable. You can't create a deployment in one location when there's an existing deployment with the same name in a different location. For example, if you create a management group deployment with the name **deployment1** in **centralus**, you can't later create another deployment with the name **deployment1** but a location of **westus**. If you get the error code `InvalidDeploymentLocation`, either use a different name or the same location as the previous deployment for that name.
@@ -159,9 +159,9 @@ To use a management group deployment for creating a resource group within a subs
### Scope to tenant
-You can create resources at the tenant by setting the `scope` set to `/`. The user deploying the template must have the [required access to deploy at the tenant](deploy-to-tenant.md#required-access).
+To create resources at the tenant, set the `scope` to `/`. The user deploying the template must have the [required access to deploy at the tenant](deploy-to-tenant.md#required-access).
-You can use a nested deployment with `scope` and `location` set.
+To use a nested deployment, set `scope` and `location`.
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/scope/management-group-to-tenant.json" highlight="9,10,14":::
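The referenced sample follows this general shape. As a minimal sketch (the deployment name and location are placeholders), a nested deployment scoped to the tenant sets `scope` to `/` and supplies a `location`:

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2020-06-01",
  "name": "nestedTenantDeployment",
  "scope": "/",
  "location": "westus",
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": []
    }
  }
}
```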
@@ -217,7 +217,7 @@ The next example creates a new management group in the management group specifie
## Azure Policy
-Custom policy definitions that are deployed to the management group are extensions of the management group. To get the ID of a custom policy definition, use the [extensionResourceId()](template-functions-resource.md#extensionresourceid) function. Built-in policy definitions are tenant level resources. To get the ID of a built-in policy definition, use the [tenantResourceId](template-functions-resource.md#tenantresourceid) function.
+Custom policy definitions that are deployed to the management group are extensions of the management group. To get the ID of a custom policy definition, use the [extensionResourceId()](template-functions-resource.md#extensionresourceid) function. Built-in policy definitions are tenant level resources. To get the ID of a built-in policy definition, use the [tenantResourceId()](template-functions-resource.md#tenantresourceid) function.
The following example shows how to [define](../../governance/policy/concepts/definition-structure.md) a policy at the management group level, and assign it.
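As a hedged sketch of the two lookups (the variable and parameter names are illustrative), a custom definition ID and a built-in definition ID are resolved differently:

```json
"customPolicyId": "[extensionResourceId(variables('mgScope'), 'Microsoft.Authorization/policyDefinitions', parameters('customPolicyName'))]",
"builtInPolicyId": "[tenantResourceId('Microsoft.Authorization/policyDefinitions', parameters('builtInPolicyName'))]"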
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-resource-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-resource-group.md
@@ -123,9 +123,9 @@ For an example template, see [Create resource group](#create-resource-group).
### Scope to tenant
-You can create resources at the tenant by setting the `scope` set to `/`. The user deploying the template must have the [required access to deploy at the tenant](deploy-to-tenant.md#required-access).
+To create resources at the tenant, set the `scope` to `/`. The user deploying the template must have the [required access to deploy at the tenant](deploy-to-tenant.md#required-access).
-You can use a nested deployment with `scope` and `location` set.
+To use a nested deployment, set `scope` and `location`.
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/scope/resource-group-to-tenant.json" highlight="9,10,14":::
@@ -137,7 +137,7 @@ For more information, see [Management group](deploy-to-management-group.md#manag
## Deploy to target resource group
-To deploy resources in the target resource group, define those resources in the **resources** section of the template. The following template creates a storage account in the resource group that is specified in the deployment operation.
+To deploy resources in the target resource group, define those resources in the `resources` section of the template. The following template creates a storage account in the resource group that is specified in the deployment operation.
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-outputs/azuredeploy.json":::
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-subscription.md
@@ -99,7 +99,7 @@ az deployment sub create \
# [PowerShell](#tab/azure-powershell)
-For the PowerShell deployment command, use [New-AzDeployment](/powershell/module/az.resources/new-azdeployment) or **New-AzSubscriptionDeployment**. The following example deploys a template to create a resource group:
+For the PowerShell deployment command, use [New-AzDeployment](/powershell/module/az.resources/new-azdeployment) or its alias `New-AzSubscriptionDeployment`. The following example deploys a template to create a resource group:
```azurepowershell-interactive
New-AzSubscriptionDeployment `
@@ -125,7 +125,7 @@ For more detailed information about deployment commands and options for deployin
For subscription level deployments, you must provide a location for the deployment. The location of the deployment is separate from the location of the resources you deploy. The deployment location specifies where to store deployment data. [Management group](deploy-to-management-group.md) and [tenant](deploy-to-tenant.md) deployments also require a location. For [resource group](deploy-to-resource-group.md) deployments, the location of the resource group is used to store the deployment data.
-You can provide a name for the deployment, or use the default deployment name. The default name is the name of the template file. For example, deploying a template named **azuredeploy.json** creates a default deployment name of **azuredeploy**.
+You can provide a name for the deployment, or use the default deployment name. The default name is the name of the template file. For example, deploying a template named _azuredeploy.json_ creates a default deployment name of **azuredeploy**.
For each deployment name, the location is immutable. You can't create a deployment in one location when there's an existing deployment with the same name in a different location. For example, if you create a subscription deployment with the name **deployment1** in **centralus**, you can't later create another deployment with the name **deployment1** but a location of **westus**. If you get the error code `InvalidDeploymentLocation`, either use a different name or the same location as the previous deployment for that name.
@@ -168,9 +168,9 @@ For an example of deploying to a resource group, see [Create resource group and
### Scope to tenant
-You can create resources at the tenant by setting the `scope` set to `/`. The user deploying the template must have the [required access to deploy at the tenant](deploy-to-tenant.md#required-access).
+To create resources at the tenant, set the `scope` to `/`. The user deploying the template must have the [required access to deploy at the tenant](deploy-to-tenant.md#required-access).
-You can use a nested deployment with `scope` and `location` set.
+To use a nested deployment, set `scope` and `location`.
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/scope/subscription-to-tenant.json" highlight="9,10,14":::
@@ -249,7 +249,7 @@ Use the [copy element](copy-resources.md) with resource groups to create more th
}
```
-For information about resource iteration, see [Deploy more than one instance of a resource in Azure Resource Manager Templates](./copy-resources.md), and [Tutorial: Create multiple resource instances with Resource Manager templates](./template-tutorial-create-multiple-instances.md).
+For information about resource iteration, see [Resource iteration in ARM templates](./copy-resources.md), and [Tutorial: Create multiple resource instances with ARM templates](./template-tutorial-create-multiple-instances.md).
### Create resource group and resources
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-tenant.md
@@ -128,7 +128,7 @@ For more detailed information about deployment commands and options for deployin
For tenant level deployments, you must provide a location for the deployment. The location of the deployment is separate from the location of the resources you deploy. The deployment location specifies where to store deployment data. [Subscription](deploy-to-subscription.md) and [management group](deploy-to-management-group.md) deployments also require a location. For [resource group](deploy-to-resource-group.md) deployments, the location of the resource group is used to store the deployment data.
-You can provide a name for the deployment, or use the default deployment name. The default name is the name of the template file. For example, deploying a template named **azuredeploy.json** creates a default deployment name of **azuredeploy**.
+You can provide a name for the deployment, or use the default deployment name. The default name is the name of the template file. For example, deploying a template named _azuredeploy.json_ creates a default deployment name of **azuredeploy**.
For each deployment name, the location is immutable. You can't create a deployment in one location when there's an existing deployment with the same name in a different location. For example, if you create a tenant deployment with the name **deployment1** in **centralus**, you can't later create another deployment with the name **deployment1** but a location of **westus**. If you get the error code `InvalidDeploymentLocation`, either use a different name or the same location as the previous deployment for that name.
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/rollback-on-error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/rollback-on-error.md
@@ -2,19 +2,28 @@
Title: Roll back on error to successful deployment description: Specify that a failed deployment should roll back to a successful deployment. Previously updated : 10/04/2019 Last updated : 02/02/2021 # Rollback on error to successful deployment
-When a deployment fails, you can automatically redeploy an earlier, successful deployment from your deployment history. This functionality is useful if you've got a known good state for your infrastructure deployment and want to revert to this state. There are a number of caveats and restrictions:
+When a deployment fails, you can automatically redeploy an earlier, successful deployment from your deployment history. This functionality is useful if you've got a known good state for your infrastructure deployment and want to revert to this state. You can specify either a particular earlier deployment or the last successful deployment.
+> [!IMPORTANT]
+> This feature rolls back a failed deployment by redeploying an earlier deployment. The result may be different from what you would expect from undoing the failed deployment. Make sure you understand how the earlier deployment is redeployed.
+
+## Considerations for redeploying
+
+Before using this feature, consider these details about how the redeployment is handled:
+
+- The previous deployment is run using the [complete mode](./deployment-modes.md#complete-mode), even if you used [incremental mode](./deployment-modes.md#incremental-mode) during the earlier deployment. Redeploying in complete mode could produce unexpected results when the earlier deployment used incremental. The complete mode means that any resources not included in the previous deployment are deleted. Specify an earlier deployment that represents all of the resources and their states that you want to exist in the resource group. For more information, see [deployment modes](./deployment-modes.md).
- The redeployment is run exactly as it was run previously with the same parameters. You can't change the parameters.
-- The previous deployment is run using the [complete mode](./deployment-modes.md#complete-mode). Any resources not included in the previous deployment are deleted, and any resource configurations are set to their previous state. Make sure you fully understand the [deployment modes](./deployment-modes.md).
- The redeployment only affects the resources, any data changes aren't affected.
-- You can use this feature only with resource group deployments, not subscription or management group level deployments. For more information about subscription level deployment, see [Create resource groups and resources at the subscription level](./deploy-to-subscription.md).
+- You can use this feature only with resource group deployments. It doesn't support subscription, management group, or tenant level deployments. For more information about subscription level deployment, see [Create resource groups and resources at the subscription level](./deploy-to-subscription.md).
- You can only use this option with root level deployments. Deployments from a nested template aren't available for redeployment.
-To use this option, your deployments must have unique names so they can be identified in the history. If you don't have unique names, the current failed deployment might overwrite the previously successful deployment in the history.
+To use this option, your deployments must have unique names in the deployment history. It's only with unique names that a specific deployment can be identified. If you don't have unique names, a failed deployment might overwrite a successful deployment in the history.
+
+If you specify an earlier deployment that doesn't exist in the deployment history, the rollback returns an error.
## PowerShell
@@ -109,7 +118,5 @@ The specified deployment must have succeeded.
## Next steps

-- To safely roll out your service to more than one region, see [Azure Deployment Manager](deployment-manager-overview.md).
-- To specify how to handle resources that exist in the resource group but aren't defined in the template, see [Azure Resource Manager deployment modes](deployment-modes.md).
+- To understand complete and incremental modes, see [Azure Resource Manager deployment modes](deployment-modes.md).
- To understand how to define parameters in your template, see [Understand the structure and syntax of Azure Resource Manager templates](template-syntax.md).
-- For information about deploying a template that requires a SAS token, see [Deploy private template with SAS token](secure-template-with-sas-token.md).
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/scope-extension-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/scope-extension-resources.md
@@ -80,6 +80,6 @@ The following example creates a storage account and applies a role to it.
## Next steps
-* To understand how to define parameters in your template, see [Understand the structure and syntax of Azure Resource Manager templates](template-syntax.md).
+* To understand how to define parameters in your template, see [Understand the structure and syntax of ARM templates](template-syntax.md).
* For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md).
-* For information about deploying a template that requires a SAS token, see [Deploy private template with SAS token](secure-template-with-sas-token.md).
+* For information about deploying a template that requires a SAS token, see [Deploy private ARM template with SAS token](secure-template-with-sas-token.md).
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/scope-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/scope-functions.md
@@ -35,15 +35,15 @@ When deploying to different scopes, there are some important considerations:
* Use the [extensionResourceId()](template-functions-resource.md#extensionresourceid) function for resources that are implemented as extensions of the management group. Custom policy definitions that are deployed to the management group are extensions of the management group. To get the resource ID for a custom policy definition at the management group level, use:
-
+  ```json
+  "policyDefinitionId": "[extensionResourceId(variables('mgScope'), 'Microsoft.Authorization/policyDefinitions', parameters('policyDefinitionID'))]"
+  ```
-* Use the [tenantResourceId](template-functions-resource.md#tenantresourceid) function to get the ID for a resource deployed at the tenant. Built-in policy definitions are tenant level resources. When assigning a built-in policy at the management group level, use the tenantResourceId function.
+* Use the [tenantResourceId()](template-functions-resource.md#tenantresourceid) function to get the ID for a resource deployed at the tenant. Built-in policy definitions are tenant level resources. When assigning a built-in policy at the management group level, use the tenantResourceId function.
To get the resource ID for a built-in policy definition, use:
-
+  ```json
+  "policyDefinitionId": "[tenantResourceId('Microsoft.Authorization/policyDefinitions', parameters('policyDefinitionID'))]"
+  ```
@@ -133,6 +133,6 @@ The output from the preceding example is:
## Next steps
-* To understand how to define parameters in your template, see [Understand the structure and syntax of Azure Resource Manager templates](template-syntax.md).
+* To understand how to define parameters in your template, see [Understand the structure and syntax of ARM templates](template-syntax.md).
* For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md).
-* For information about deploying a template that requires a SAS token, see [Deploy private template with SAS token](secure-template-with-sas-token.md).
+* For information about deploying a template that requires a SAS token, see [Deploy private ARM template with SAS token](secure-template-with-sas-token.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-logical-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-logical-server.md
@@ -10,7 +10,7 @@
Previously updated : 1/14/2021 Last updated : 02/02/2021 # Resource limits for Azure SQL Database and Azure Synapse Analytics servers
@@ -64,7 +64,8 @@ When encountering high space utilization, mitigation options include:
- Increasing the max size of the database or elastic pool, or adding more storage. See [Scale single database resources](single-database-scale.md) and [Scale elastic pool resources](elastic-pool-scale.md).
- If the database is in an elastic pool, then alternatively the database can be moved outside of the pool so that its storage space isn't shared with other databases.
-- Shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md)
+- Shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
+- Check if high space utilization is due to a spike in the size of Persistent Version Store (PVS). PVS is a part of each database, and is used to implement [Accelerated Database Recovery](../accelerated-database-recovery.md). To determine current PVS size, see [PVS troubleshooting](https://docs.microsoft.com/sql/relational-databases/accelerated-database-recovery-management#troubleshooting). A common reason for large PVS size is a transaction that is open for a long time (hours), preventing cleanup of older versions in PVS.
### Sessions and workers (requests)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/frequently-asked-questions-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/frequently-asked-questions-faq.md
@@ -359,13 +359,6 @@ Yes. See [How to configure a Custom DNS for Azure SQL Managed Instance](./custom
Yes. See [Synchronize virtual network DNS servers setting on SQL Managed Instance virtual cluster](./synchronize-vnet-dns-servers-setting-on-virtual-cluster.md).
-DNS configuration is eventually refreshed:
-
-- When DHCP lease expires.
-- On platform upgrade.
-
-As a workaround, downgrade SQL Managed Instance to 4 vCores and upgrade it again afterward. This has a side effect of refreshing the DNS configuration.
-
## Change time zone

**Can I change the time zone for an existing managed instance?**
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
@@ -53,8 +53,11 @@ The diagram below shows the on-premises to private cloud interconnectivity, whic
For full interconnectivity to your private cloud, enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your new private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments to your private cloud. For more information on the procedures to request and use the authorization key and peering ID, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md).

## Next steps
-Learn about [private cloud storage concepts](concepts-storage.md).
+Now that you've covered these network and interconnectivity concepts, you may want to learn about:
+
+- [Azure VMware Solution storage concepts](concepts-storage.md).
+- [Azure VMware Solution identity concepts](concepts-identity.md).
<!-- LINKS - external -->
[enable Global Reach]: ../expressroute/expressroute-howto-set-global-reach.md
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-private-clouds-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
@@ -61,7 +61,10 @@ Private cloud vCenter and NSX-T configurations are on an hourly backup schedule.
## Next steps
-The next step is to learn [networking and interconnectivity concepts](concepts-networking.md).
+Now that you've covered these Azure VMware Solution private cloud concepts, you may want to learn about:
+
+- [Azure VMware Solution networking and interconnectivity concepts](concepts-networking.md).
+- [Azure VMware Solution storage concepts](concepts-storage.md).
<!-- LINKS - internal -->
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/netapp-files-with-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
@@ -97,7 +97,10 @@ The following are just a few compelling Azure NetApp Files use cases.
- File shares on Azure VMware Solution

## Next steps
-- [Resource limits for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-resource-limits.md#resource-limits)
-- [Guidelines for Azure NetApp Files network planning](../azure-netapp-files/azure-netapp-files-network-topologies.md)
-- [Cross-region replication of Azure NetApp Files volumes](../azure-netapp-files/cross-region-replication-introduction.md)
-- [FAQs about Azure NetApp Files](../azure-netapp-files/azure-netapp-files-faqs.md)
+
+Once you've integrated Azure NetApp Files with your Azure VMware Solution workloads, you may want to learn more about:
+
+- [Resource limits for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-resource-limits.md#resource-limits).
+- [Guidelines for Azure NetApp Files network planning](../azure-netapp-files/azure-netapp-files-network-topologies.md).
+- [Cross-region replication of Azure NetApp Files volumes](../azure-netapp-files/cross-region-replication-introduction.md).
+- [FAQs about Azure NetApp Files](../azure-netapp-files/azure-netapp-files-faqs.md).
backup https://docs.microsoft.com/en-us/azure/backup/backup-sql-server-azure-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-azure-troubleshoot.md
@@ -213,7 +213,7 @@ Check for one or more of the following symptoms before you trigger the re-regist
  - Lack of permission to perform backup-related operations on the VM.
  - Shutdown of the VM, so backups can't take place.
- - Network issues.
+ - [Network issues](#usererrorvminternetconnectivityissue)
![re-registering VM](./media/backup-azure-sql-database/re-register-vm.png)
backup https://docs.microsoft.com/en-us/azure/backup/disk-backup-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-support-matrix.md
@@ -17,7 +17,7 @@ You can use [Azure Backup](./backup-overview.md) to protect Azure Disks. This ar
## Supported regions
-Azure Disk Backup is available in preview in the following regions: West US, West Central US, East US2, Korea Central, Korea South, Japan West, East Asia, UAE North, Brazil South, Central India.
+Azure Disk Backup is available in preview in the following regions: West US, West Central US, East US2, Canada Central, UK West, Australia Central, Korea Central, Korea South, Japan West, East Asia, UAE North, Brazil South, Central India.
More regions will be announced when they become available.
bastion https://docs.microsoft.com/en-us/azure/bastion/bastion-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-overview.md
@@ -27,7 +27,7 @@ RDP and SSH are some of the fundamental means through which you can connect to y
This figure shows the architecture of an Azure Bastion deployment. In this diagram:
-* The Bastion host is deployed in the virtual network.
+* The Bastion host is deployed in the virtual network that contains the AzureBastionSubnet subnet that has a minimum /27 prefix.
* The user connects to the Azure portal using any HTML5 browser.
* The user selects the virtual machine to connect to.
* With a single click, the RDP/SSH session opens in the browser.
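The subnet requirement above can be sketched with Az PowerShell. This is an illustrative sketch only: the virtual network, resource group, location, and address ranges are assumptions, not values from the article; only the `AzureBastionSubnet` name and the /27 minimum come from the text.

```powershell
# Sketch only: resource names and address ranges are hypothetical.
# The subnet must be named exactly 'AzureBastionSubnet' and be at least /27.
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'AzureBastionSubnet' -AddressPrefix '10.0.1.0/27'
$vnet = New-AzVirtualNetwork -Name 'ContosoVNet' -ResourceGroupName 'ContosoRG' -Location 'EastUS' `
    -AddressPrefix '10.0.0.0/16' -Subnet $subnet

# Bastion requires a Standard-SKU public IP.
$pip = New-AzPublicIpAddress -Name 'ContosoBastionIp' -ResourceGroupName 'ContosoRG' -Location 'EastUS' `
    -AllocationMethod Static -Sku Standard

New-AzBastion -Name 'ContosoBastion' -ResourceGroupName 'ContosoRG' -PublicIpAddress $pip -VirtualNetwork $vnet
```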
batch https://docs.microsoft.com/en-us/azure/batch/batch-rendering-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-applications.md
@@ -1,7 +1,7 @@
Title: Rendering applications
description: It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed.
Previously updated : 09/19/2019
Last updated : 02/01/2021
@@ -13,7 +13,7 @@ Where applicable, pay-per-use licensing is available for the pre-installed rende
Some applications only support Windows, but most are supported on both Windows and Linux.
-## Applications on CentOS 7 rendering images
+## Applications on CentOS 7 rendering image
The following list applies to CentOS 7.6, version 1.1.6 rendering images.
@@ -28,7 +28,26 @@ The following list applies to CentOS 7.6, version 1.1.6 rendering images.
* Blender (2.68)
* Blender (2.8)
-## Applications on latest Windows Server 2016 rendering images
+## Applications on latest Windows Server rendering image
+
+The following list applies to the Windows Server rendering image, version 1.5.0.
+
+* Autodesk Maya I/O 2020 Update 4.4
+* Autodesk 3ds Max I/O 2021 Update 3
+* Autodesk Arnold for Maya 2020 (Arnold version 6.1.0.1) MtoA-4.1.1.1-2020
+* Autodesk Arnold for 3ds Max 2021 (Arnold version 6.1.0.1) MAXtoA-4.2.2.20-2021
+* Chaos Group V-Ray for Maya 2020 (version 5.00.21)
+* Chaos Group V-Ray for 3ds Max 2021 (version 5.00.05)
+* Blender (2.79)
+* Blender (2.80)
+* AZ 10
+
+> [!IMPORTANT]
+> To run V-Ray with Maya outside of the [Azure Batch extension templates](https://github.com/Azure/batch-extension-templates), start `vrayses.exe` before running the render. To start `vrayses.exe` outside of the templates, you can use the following command: `"%MAYA_2020%\vray\bin\vrayses.exe"`.
+>
+> For an example, see the start task of the [Maya and V-Ray template](https://github.com/Azure/batch-extension-templates/blob/master/templates/maya/render-vray-windows/pool.template.json) on GitHub.
+
+## Applications on previous Windows Server rendering images
The following list applies to Windows Server 2016, version 1.3.8 rendering images.
@@ -54,13 +73,6 @@ The following list applies to Windows Server 2016, version 1.3.8 rendering image
* Blender (2.80)
* AZ 10
-> [!IMPORTANT]
-> To run V-Ray with Maya outside of the [Azure Batch extension templates](https://github.com/Azure/batch-extension-templates), start `vrayses.exe` before running the render. To start the vrayses.exe outside of the templates you can use the following command `%MAYA_2017%\vray\bin\vrayses.exe"`.
->
-> For an example, see the start task of the [Maya and V-Ray template](https://github.com/Azure/batch-extension-templates/blob/master/templates/maya/render-vray-windows/pool.template.json) on GitHub.
-
-## Applications on previous Windows Server 2016 rendering images
-
The following list applies to Windows Server 2016, version 1.3.7 rendering images.

* Autodesk Maya I/O 2017 Update 5 (version 17.4.5459)
batch https://docs.microsoft.com/en-us/azure/batch/batch-rendering-functionality https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-functionality.md
@@ -3,7 +3,7 @@ Title: Rendering capabilities
description: Standard Azure Batch capabilities are used to run rendering workloads and apps. Batch includes specific features to support rendering workloads.
Previously updated : 01/14/2021
Last updated : 02/01/2021
@@ -27,7 +27,7 @@ Most rendering applications will require licenses obtained from a license server
An Azure Marketplace rendering VM image can be specified in the pool configuration if only the pre-installed applications need to be used.
-There is a Windows 2016 image and a CentOS image. In the [Azure Marketplace](https://azuremarketplace.microsoft.com), the VM images can be found by searching for 'batch rendering'.
+There is a Windows image and a CentOS image. In the [Azure Marketplace](https://azuremarketplace.microsoft.com), the VM images can be found by searching for 'batch rendering'.
For an example pool configuration, see the [Azure CLI rendering tutorial](./tutorial-rendering-cli.md). The Azure portal and Batch Explorer provide GUI tools to select a rendering VM image when you create a pool. If using a Batch API, then specify the following property values for [ImageReference](/rest/api/batchservice/pool/add#imagereference) when creating a pool:
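The article goes on to list the exact property values; as an illustrative sketch only (the `publisher`, `offer`, `sku`, and `version` values here are assumptions, not confirmed by this excerpt), an `ImageReference` in a Batch pool definition might look like:

```json
{
  "imageReference": {
    "publisher": "batch",
    "offer": "rendering-windows2016",
    "sku": "rendering",
    "version": "latest"
  }
}
```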
@@ -65,13 +65,10 @@ To be able to create the command line for rendering tasks, the installation loca
|Application|Application Executable|Environment Variable|
|---|---|---|
-|Autodesk 3ds Max 2018|3dsmaxcmdio.exe|3DSMAX_2018_EXEC|
-|Autodesk 3ds Max 2019|3dsmaxcmdio.exe|3DSMAX_2019_EXEC|
-|Autodesk Maya 2017|render.exe|MAYA_2017_EXEC|
-|Autodesk Maya 2018|render.exe|MAYA_2018_EXEC|
-|Chaos Group V-Ray Standalone|vray.exe|VRAY_3.60.4_EXEC|
-Arnold 2017 command line|kick.exe|ARNOLD_2017_EXEC|
-|Arnold 2018 command line|kick.exe|ARNOLD_2018_EXEC|
+|Autodesk 3ds Max 2021|3dsmaxcmdio.exe|3DSMAX_2021_EXEC|
+|Autodesk Maya 2020|render.exe|MAYA_2020_EXEC|
+|Chaos Group V-Ray Standalone|vray.exe|VRAY_4.10.03_EXEC|
+|Arnold 2020 command line|kick.exe|ARNOLD_2020_EXEC|
|Blender|blender.exe|BLENDER_2018_EXEC|

## Azure VM families
batch https://docs.microsoft.com/en-us/azure/batch/batch-rendering-using https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-using.md
@@ -47,15 +47,6 @@ It's also possible for custom templates to be produced, from scratch or by modif
The 'Data' section in Batch Explorer allows files to be copied between a local file system and Azure Storage accounts.
-## Client application plug-ins
-
-Plug-ins are available for some of the client applications. The plug-ins allow pools and jobs to be created directly from the application or invoke Batch Explorer.
-
-* [Blender 2.79](https://github.com/Azure/azure-batch-rendering/tree/master/plugins/blender)
-* [Blender 2.8+](https://github.com/Azure/azure-batch-rendering/tree/master/plugins/blender28)
-* [Autodesk 3ds Max](https://github.com/Azure/azure-batch-rendering/tree/master/plugins/3ds-max)
-* [Autodesk Maya](https://github.com/Azure/azure-batch-maya)
-
## Next steps

For examples of Batch rendering, try out the two tutorials:
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/enable-wad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/enable-wad.md
@@ -19,14 +19,15 @@ Windows Azure Diagnostics extension can be enabled for Cloud Services (extended
```powershell
# Create WAD extension object
-$wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "ContosSA" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
-$extensionProfile = @{extension = @($rdpExtension, $wadExtension)}
+$storageAccountKey = Get-AzStorageAccountKey -ResourceGroupName "ContosOrg" -Name "contosostorageaccount"
+$configFile = "<WAD public configuration file path>"
+$wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -StorageAccountName "contosostorageaccount" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
# Get existing Cloud Service
$cloudService = Get-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS"
# Add WAD extension to existing Cloud Service extension object
-$cloudService.ExtensionProfileExtension = $cloudService.ExtensionProfileExtension + $wadExtension
+$cloudService.ExtensionProfile.Extension = $cloudService.ExtensionProfile.Extension + $wadExtension
# Update Cloud Service
$cloudService | Update-AzCloudService
@@ -58,4 +59,4 @@ $cloudService | Update-AzCloudService
## Next steps
- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/sample-create-cloud-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/sample-create-cloud-service.md
@@ -17,27 +17,29 @@ These samples cover various ways to create a new Azure Cloud Service (extende
```powershell
# Create role profile object
-$role = New-AzCloudServiceCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
+PS C:\> $role = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
+PS C:\> $roleProfile = @{role = @($role)}
# Create network profile object
-$publicIp = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp
-$feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
-$loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
+PS C:\> $publicIp = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp
+PS C:\> $feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
+PS C:\> $loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
+PS C:\> $networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}
# Read Configuration File
-$cscfgFile = "<Path to cscfg configuration file>"
+$cscfgFile = ""
$cscfgContent = Get-Content $cscfgFile | Out-String
-# Create Cloud Service
+# Create cloud service
$cloudService = New-AzCloudService `
--Name ContosoCS `
--ResourceGroupName ContosOrg `
--Location EastUS `
--PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
--Configuration $cscfgContent `
--UpgradeMode 'Auto' `
--RoleProfileRole $role `
--NetworkProfileLoadBalancerConfiguration $loadBalancerConfig
+ -Name ContosoCS `
+ -ResourceGroupName ContosOrg `
+ -Location EastUS `
+ -PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
+ -Configuration $cscfgContent `
+ -UpgradeMode 'Auto' `
+ -RoleProfile $roleProfile `
+ -NetworkProfile $networkProfile
```
@@ -45,141 +47,115 @@ $cloudService = New-AzCloudService

```powershell
# Create role profile object
```powershell # Create role profile object
-$role = New-AzCloudServiceCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
+PS C:\> $role = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
+PS C:\> $roleProfile = @{role = @($role)}
# Create network profile object
-$publicIp = Get-AzPublicIpAddress -ResourceGroupName ContosoOrg -Name ContosIp
-$feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
-$loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
+PS C:\> $publicIp = Get-AzPublicIpAddress -ResourceGroupName ContosoOrg -Name ContosIp
+PS C:\> $feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
+PS C:\> $loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
+PS C:\> $networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}
# Create RDP extension object
-$credential = Get-Credential
-$expiration = (Get-Date).AddYears(1)
-$extension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
+PS C:\> $credential = Get-Credential
+PS C:\> $expiration = (Get-Date).AddYears(1)
+PS C:\> $extension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
+PS C:\> $extensionProfile = @{extension = @($extension)}
# Read Configuration File
-$cscfgFile = "<Path to cscfg configuration file>"
+$cscfgFile = ""
$cscfgContent = Get-Content $cscfgFile | Out-String
-# Create Cloud Service
+# Create cloud service
$cloudService = New-AzCloudService `
--Name ContosoCS `
--ResourceGroupName ContosOrg `
--Location EastUS `
--PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
--Configuration $cscfgContent `
--UpgradeMode 'Auto' `
--RoleProfileRole $role `
--NetworkProfileLoadBalancerConfiguration $loadBalancerConfig `
--ExtensionProfileExtension $extension
+ -Name ContosoCS `
+ -ResourceGroupName ContosOrg `
+ -Location EastUS `
+ -PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
+ -Configuration $cscfgContent `
+ -UpgradeMode 'Auto' `
+ -RoleProfile $roleProfile `
+ -NetworkProfile $networkProfile `
+ -ExtensionProfile $extensionProfile
```
-## Create a Cloud Service with a single role and Windows Azure Diagnostics extension
+## Create new cloud service with single role and certificate from key vault
```PowerShell
# Create role profile object
-$role = New-AzCloudServiceCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
-
-# Create network profile object
-$publicIp = Get-AzPublicIpAddress -ResourceGroupName ContosoOrg -Name ContosIp
-$feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
-$loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
-
-# Create WAD extension object
-$storageAccountKey = Get-AzStorageAccountKey -ResourceGroupName "ContosOrg" -Name "ContosSA"
-$configFile = "<WAD configuration file path>"
-$extension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "ContosSA" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
-
-# Read Configuration File
-$cscfgFile = "<Path to cscfg configuration file>"
-$cscfgContent = Get-Content $cscfgFile | Out-String
-
-## Create Cloud Service
-$cloudService = New-AzCloudService `
--Name ContosoCS `
--ResourceGroupName ContosOrg `
--Location EastUS `
--PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
--Configuration $cscfgContent `
--UpgradeMode 'Auto' `
--RoleProfileRole $role `
--NetworkProfileLoadBalancerConfiguration $loadBalancerConfig `
--ExtensionProfileExtension $extension
-```
-## Create new Cloud Service with single role and certificate from Key Vault
-
-```powershell
-# Create role profile object
-$role = New-AzCloudServiceCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
+$role = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
+$roleProfile = @{role = @($role)}
# Create OS profile object
$keyVault = Get-AzKeyVault -ResourceGroupName ContosOrg -VaultName ContosKeyVault
$certificate=Get-AzKeyVaultCertificate -VaultName ContosKeyVault -Name ContosCert
$secretGroup = New-AzCloudServiceVaultSecretGroupObject -Id $keyVault.ResourceId -CertificateUrl $certificate.SecretId
+$osProfile = @{secret = @($secretGroup)}
# Create network profile object
$publicIp = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp
$feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
- $loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
+$loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
+$networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}
# Read Configuration File
-$cscfgFile = "<Path to cscfg configuration file>"
+$cscfgFile = ""
$cscfgContent = Get-Content $cscfgFile | Out-String
-# Create Cloud Service
+# Create cloud service
$cloudService = New-AzCloudService `
--Name ContosoCS `
--ResourceGroupName ContosOrg `
--Location EastUS `
--PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
--Configuration $cscfgContent `
--UpgradeMode 'Auto' `
--RoleProfileRole $role `
--NetworkProfileLoadBalancerConfiguration $loadBalancerConfig `
--OSProfileSecret $secretGroup
+ -Name ContosoCS `
+ -ResourceGroupName ContosOrg `
+ -Location EastUS `
+ -PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
+ -Configuration $cscfgContent `
+ -UpgradeMode 'Auto' `
+ -RoleProfile $roleProfile `
+ -NetworkProfile $networkProfile `
+ -OSProfile $osProfile
```
-
-## Create new Cloud Service with multiple roles and extensions
+## Create new cloud service with multiple roles and extensions
```powershell
# Create role profile object
- $role1 = New-AzCloudServiceCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
- $role2 = New-AzCloudServiceCloudServiceRoleProfilePropertiesObject -Name 'ContosoBackend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
- $roles = @($role1, $role2)
+$role1 = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
+$role2 = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoBackend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
+$roleProfile = @{role = @($role1, $role2)}
# Create network profile object
- $publicIp = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp
- $feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
- $loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
+$publicIp = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp
+$feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
+$loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
+$networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}
# Create RDP extension object
- $credential = Get-Credential
- $expiration = (Get-Date).AddYears(1)
- $rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
+$credential = Get-Credential
+$expiration = (Get-Date).AddYears(1)
+$rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
# Create Geneva extension object
- $genevaExtension = New-AzCloudServiceExtensionObject -Name GenevaExtension -Publisher Microsoft.Azure.Geneva -Type GenevaMonitoringPaaS -TypeHandlerVersion "2.14.0.2"
- $extensions = @($rdpExtension, $genevaExtension)
+$genevaExtension = New-AzCloudServiceExtensionObject -Name GenevaExtension -Publisher Microsoft.Azure.Geneva -Type GenevaMonitoringPaaS -TypeHandlerVersion "2.14.0.2"
+$extensionProfile = @{extension = @($rdpExtension, $genevaExtension)}
# Add tags
$tag=@{"Owner" = "Contoso"}
# Read Configuration File
-$cscfgFile = "<Path to cscfg configuration file>"
+$cscfgFile = ""
$cscfgContent = Get-Content $cscfgFile | Out-String
-# Create Cloud Service
+# Create cloud service
$cloudService = New-AzCloudService `
--Name ContosoCS `
--ResourceGroupName ContosOrg `
--Location EastUS `
--PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
--Configuration $cscfgContent `
--UpgradeMode 'Auto' `
--RoleProfileRole $roles `
--NetworkProfileLoadBalancerConfiguration $loadBalancerConfig `
--ExtensionProfileExtension $extensions `
--Tag $tag
+ -Name ContosoCS `
+ -ResourceGroupName ContosOrg `
+ -Location EastUS `
+ -PackageUrl "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
+ -Configuration $cscfgContent `
+ -UpgradeMode 'Auto' `
+ -RoleProfile $roleProfile `
+ -NetworkProfile $networkProfile `
+ -ExtensionProfile $extensionProfile `
+ -Tag $tag
```

## Next steps
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/sample-get-cloud-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/sample-get-cloud-service.md
@@ -17,7 +17,7 @@ These samples cover various ways to retrieve information about existing Azure Clo
## Get all Cloud Services under a resource group

```powershell
-Get-AzCloudService -ResourceGroup "ContosOrg"
+Get-AzCloudService -ResourceGroupName "ContosOrg"
ResourceGroupName Name Location ProvisioningState
----------------- ---- -------- -----------------
@@ -27,72 +27,86 @@ ContosOrg ContosoCSTest eastus2euap Failed
## Get single Cloud Service

```powershell
-Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
+Get-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS"
ResourceGroupName Name Location ProvisioningState
----------------- ---- -------- -----------------
ContosOrg ContosoCS eastus2euap Succeeded
-$cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
-$cloudService | Format-List
-ResourceGroupName : ContosOrg
-Configuration : <?xml version="1.0" encoding="utf-8"?>
- <ServiceConfiguration
- xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
- </ServiceConfiguration>
-ConfigurationUrl :
-ExtensionProfileExtension : {RDPExtension}
-Id : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/ContosOrg/providers/Microsoft.Compute/cloudServices/ContosoCS
-Location : eastus2euap
-Name : ContosoCS
-NetworkProfileLoadBalancerConfiguration : {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
-OSProfileSecret : {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
-PackageUrl : https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
-ProvisioningState : Succeeded
-RoleProfileRole : {ContosoFrontEnd, ContosoBackEnd}
-StartCloudService :
-SwappableCloudServiceId : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/ContosOrg/providers/Microsoft.Compute/cloudServices/ContosoCSTest
-Tag : {
- "Owner": "Contos"
- }
-Type : Microsoft.Compute/cloudServices
-UniqueId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
-UpgradeMode : Auto
+PS C:\> $cloudService = Get-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS"
+PS C:\> $cloudService | Format-List
+ResourceGroupName : ContosOrg
+Configuration : xxxxxxxx
+ConfigurationUrl :
+ExtensionProfile : xxxxxxxx
+Id : xxxxxxxx
+Location : East US
+Name : ContosoCS
+NetworkProfile : xxxxxxxx
+OSProfile : xxxxxxxx
+PackageUrl : xxxxxxxx
+ProvisioningState : Succeeded
+RoleProfile : xxxxxxxx
+StartCloudService :
+Tag : {
+ "Owner": "Contos"
+ }
+Type : Microsoft.Compute/cloudServices
+UniqueId : xxxxxxxx
+UpgradeMode : Auto
```

## Get Cloud Service instance view

```powershell
-Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS" -InstanceView | Format-List
+PS C:\>$cloudServiceInstanceView = Get-AzCloudServiceInstanceView -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS"
-RoleInstanceStatusesSummary : {{
- "code": "ProvisioningState/succeeded",
- "count": 4
- }, {
- "code": "PowerState/started",
- "count": 4
- }}
-Statuses : {{
- "code": "ProvisioningState/succeeded",
- "displayStatus": "Provisioning succeeded",
- "level": "Info",
- "time": "2020-10-20T13:13:45.0067685Z"
- }, {
- "code": "PowerState/started",
- "displayStatus": "Started",
- "level": "Info",
- "time": "2020-10-20T13:13:45.0067685Z"
- }, {
- "code": "CurrentUpgradeDomain/-1",
- "displayStatus": "Current Upgrade Domain of Cloud Service is -1.",
- "level": "Info"
- }, {
- "code": "MaxUpgradeDomain/1",
- "displayStatus": "Max Upgrade Domain of Cloud Service is 1.",
- "level": "Info"
- }}
+PS C:\>$cloudServiceInstanceView
+RoleInstanceStatusesSummary Statuses
+--------------------------- --------
+{{ProvisioningState/succeeded : 4}, {PowerState/started : 4}} {Provisioning succeeded, Started, Current Upgrade Domain of cloud service is -1., Max Upgrade Domain of cloud service is 1.}
+
+PS C:\>$cloudServiceInstanceView.ToJsonString()
+{
+ "roleInstance": {
+ "statusesSummary": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "count": 4
+ },
+ {
+ "code": "PowerState/started",
+ "count": 4
+ }
+ ]
+ },
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "displayStatus": "Provisioning succeeded",
+ "level": "Info",
+ "time": "2020-10-28T13:26:48.8109686Z"
+ },
+ {
+ "code": "PowerState/started",
+ "displayStatus": "Started",
+ "level": "Info",
+ "time": "2020-10-28T13:26:48.8109686Z"
+ },
+ {
+ "code": "CurrentUpgradeDomain/-1",
+ "displayStatus": "Current Upgrade Domain of cloud service is -1.",
+ "level": "Info"
+ },
+ {
+ "code": "MaxUpgradeDomain/1",
+ "displayStatus": "Max Upgrade Domain of cloud service is 1.",
+ "level": "Info"
+ }
+ ]
+}
```

## Next steps
- For more information on Azure Cloud Services (extended support), see [Azure Cloud Services (extended support) overview](overview.md).
-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/sample-reset-cloud-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/sample-reset-cloud-service.md
@@ -12,29 +12,52 @@
# Reset an Azure Cloud Service (extended support)

These samples cover various ways to reset an existing Azure Cloud Service (extended support) deployment.
-## Reimage role instances of Cloud Service
+## Reimage role instances of cloud service
```powershell
$roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1")
-Reset-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances -Reimage
+Invoke-AzCloudServiceReimage -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances
```
-This command reimages 2 role instances **ContosoFrontEnd\_IN\_0** and **ContosoBackEnd\_IN\_1** of Cloud Service named ContosoCS that belongs to the resource group named ContosOrg.
+This command reimages 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
## Reimage all roles of Cloud Service

```powershell
-Reset-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance "*" -Reimage
+Invoke-AzCloudServiceReimage -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance "*"
```
+This command reimages all role instances of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
## Reimage a single role instance of a Cloud Service

```powershell
-Reset-AzCloudServiceRoleInstance -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstanceName "ContosoFrontEnd_IN_0" -Reimage
+Invoke-AzCloudServiceRoleInstanceReimage -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstanceName "ContosoFrontEnd_IN_0"
```
+This command reimages role instance named ContosoFrontEnd_IN_0 of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
-## Restart a single role instance of a Cloud Service
+## Rebuild role instances of cloud service
```powershell
-Reset-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance "*" -Restart
+$roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1")
+Invoke-AzCloudServiceRebuild -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances
+```
+This command rebuilds 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+
+## Rebuild all roles of cloud service
+```powershell
+Invoke-AzCloudServiceRebuild -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance "*"
+```
+This command rebuilds all role instances of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+
+## Restart role instances of cloud service
+```powershell
+$roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1")
+Restart-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances
+```
+This command restarts 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+
+## Restart all roles of cloud service
+```powershell
+Restart-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance "*"
```
+This command restarts all role instances of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
## Next steps
- For more information on Azure Cloud Services (extended support), see [Azure Cloud Services (extended support) overview](overview.md).
-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/whats-new.md
@@ -25,6 +25,12 @@ A new version of the [spatial analysis container](spatial-analysis-container.md)
* [Spatial analysis operations](spatial-analysis-operations.md) can now be configured to detect if a person is wearing a protective face covering such as a mask.
* A mask classifier can be enabled for the `personcount`, `personcrossingline` and `personcrossingpolygon` operations by configuring the `ENABLE_FACE_MASK_CLASSIFIER` parameter.
* The attributes `face_mask` and `face_noMask` will be returned as metadata with a confidence score for each person detected in the video stream.
+* The *personcrossingpolygon* operation has been extended to allow the calculation of the dwell time a person spends in a zone. You can set the `type` parameter in the Zone configuration for the operation to `zonedwelltime` and a new event of type *personZoneDwellTimeEvent* will include the `durationMs` field populated with the number of milliseconds that the person spent in the zone.
+* **Breaking change**: The *personZoneEvent* event has been renamed to *personZoneEnterExitEvent*. This event is raised by the *personcrossingpolygon* operation when a person enters or exits the zone and provides directional info with the numbered side of the zone that was crossed.
+* The video URL can be provided as a "Private Parameter/obfuscated" value in all operations. Obfuscation is now optional and will only work if `KEY` and `IV` are provided as environment variables.
+* Calibration is enabled by default for all operations. Set `do_calibration: false` to disable it.
+* Added support for auto recalibration (disabled by default) via the `enable_recalibration` parameter. Refer to [Spatial analysis operations](https://docs.microsoft.com/azure/cognitive-services/computer-vision/spatial-analysis-operations) for details.
+* Added camera calibration parameters to the `DETECTOR_NODE_CONFIG`. Refer to [Spatial analysis operations](https://docs.microsoft.com/azure/cognitive-services/computer-vision/spatial-analysis-operations) for details.
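Taken together, the dwell-time and calibration settings above might appear in an operation's configuration roughly as follows. This is a hedged sketch: the zone name, polygon values, and surrounding JSON structure are illustrative assumptions rather than the documented schema, so consult [Spatial analysis operations](https://docs.microsoft.com/azure/cognitive-services/computer-vision/spatial-analysis-operations) for the authoritative format:

```json
{
  "zones": [
    {
      "name": "lobby",
      "polygon": [[0.3, 0.3], [0.3, 0.9], [0.6, 0.9], [0.6, 0.3]],
      "events": [
        { "type": "zonedwelltime" }
      ]
    }
  ],
  "do_calibration": true,
  "enable_recalibration": false
}
```

With `type` set to `zonedwelltime`, the resulting *personZoneDwellTimeEvent* would carry the `durationMs` field with the number of milliseconds the person spent in the zone.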
## October 2020
@@ -90,4 +96,4 @@ Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-s
## Cognitive Service updates
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
+[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/choose-training-images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/choose-training-images.md
@@ -24,4 +24,4 @@ Additionally, make sure all of your training images meet the following criteria:
* no less than 256 pixels on the shortest edge; any images shorter than this will be automatically scaled up by the Custom Vision Service

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial-od.md
@@ -139,7 +139,7 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial.md
@@ -144,7 +144,7 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
Then define a helper method to upload the images in this directory. You may need to edit the **GetFiles** argument to point to the location where your images are saved.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial-od.md
@@ -150,7 +150,7 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial.md
@@ -152,7 +152,7 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
[!code-java[](~/cognitive-services-quickstart-code/java/CustomVision/src/main/java/com/microsoft/azure/cognitiveservices/vision/customvision/samples/CustomVisionSamples.java?name=snippet_upload)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial-object-detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial-object-detection.md
@@ -121,7 +121,7 @@ Start a new function to contain all of your Custom Vision function calls. Add th
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag. When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. For this tutorial, the regions are hardcoded inline with the code. The regions specify the bounding box in normalized coordinates, and the coordinates are given in the order: left, top, width, height. You can upload up to 64 images in a single batch.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial.md
@@ -126,7 +126,7 @@ To create classification tags to your project, add the following code to your fu
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial-od.md
@@ -108,7 +108,7 @@ To create object tags in your project, add the following code:
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region. The regions specify the bounding box in normalized coordinates, and the coordinates are given in the order: left, top, width, height.
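The left, top, width, height ordering described above can be illustrated with a small helper. This function is hypothetical (not part of the Custom Vision SDK) and simply shows the pixel-to-normalized conversion:

```python
def normalize_region(left_px, top_px, width_px, height_px, image_width, image_height):
    """Convert a pixel-space bounding box to the normalized
    [left, top, width, height] values in the range 0-1."""
    return [left_px / image_width, top_px / image_height,
            width_px / image_width, height_px / image_height]

# A 160x120-pixel box whose top-left corner is at (64, 48) in a 640x480 image:
print(normalize_region(64, 48, 160, 120, 640, 480))  # [0.1, 0.1, 0.25, 0.25]
```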
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial.md
@@ -105,7 +105,7 @@ To add classification tags to your project, add the following code:
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag. You can upload up to 64 images in a single batch.
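The 64-image batch limit can be handled by uploading in chunks. This is a sketch under stated assumptions: `image_entries` stands in for the SDK's image-entry objects, and the actual upload call is omitted:

```python
def batches(items, batch_size=64):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

image_entries = [f"image_{i}.jpg" for i in range(150)]  # stand-in entries
sizes = [len(batch) for batch in batches(image_entries)]
print(sizes)  # [64, 64, 22]
```

Each yielded batch would then be passed to the service's batch-upload call in turn.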
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/rest-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/rest-tutorial.md
@@ -99,7 +99,7 @@ You'll get a JSON response like the following. Save the `"id"` value of each tag
Next, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device.

> [!NOTE]
-> Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
Use the following command to upload the images and apply tags; once for the "Hemlock" images, and separately for the "Japanese Cherry" images. See the [Create Images From Data](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb5) API for more options.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/reference-document-format-guidelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/reference-document-format-guidelines.md
@@ -28,6 +28,9 @@ QnA Maker identifies sections and subsections and relationships in the file base
* numbering
* colors
+> [!NOTE]
+> We don't support extraction of images from uploaded documents currently.
+
## Product manuals

A manual is typically guidance material that accompanies a product. It helps the user to set up, use, maintain, and troubleshoot the product. When QnA Maker processes a manual, it extracts the headings and subheadings as questions and the subsequent content as answers. See an example [here](https://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf).
@@ -113,4 +116,4 @@ Importing a knowledge base replaces the content of the existing knowledge base.
## Next steps
-See a full list of [content types and examples](./concepts/data-sources-and-content.md#content-types-of-documents-you-can-add-to-a-knowledge-base)
+See a full list of [content types and examples](./concepts/data-sources-and-content.md#content-types-of-documents-you-can-add-to-a-knowledge-base)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/concepts-guidelines-responsible-deployment-synthetic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/concepts-guidelines-responsible-deployment-synthetic.md
@@ -13,14 +13,26 @@
# Guidelines for responsible deployment of synthetic voice technology
+
+## General considerations to keep in mind when implementing AI systems
+
+This article talks specifically about synthetic speech and Custom Neural Voice and the key considerations for making use of this technology responsibly. In general, however, there are several things you need to consider carefully when deciding how to use and implement AI-powered products and features:
+
+* Will this product or feature perform well in my scenario? Before deploying AI into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+* Are we equipped to identify and respond to errors? AI-powered products and features will not always be 100% accurate, so consider how you will identify and respond to any errors that may occur.
+
+## General guidelines for using synthetic voice technology
Here are Microsoft's general design guidelines for using synthetic voice technology. These were developed in studies that Microsoft conducted with voice talent, consumers, and individuals with speech disorders to guide the responsible development of synthetic voice.
-## General considerations
For deployment of synthetic speech technology, the following guidelines apply across most scenarios.

### Disclose when the voice is synthetic

Disclosing that a voice is computer generated not only minimizes the risk of harmful outcomes from deception but also increases the trust in the organization delivering the voice. Learn more about [how to disclose](concepts-disclosure-guidelines.md).
+Microsoft requires its customers to disclose the synthetic nature of custom neural voice to their users.
+* Make sure to provide adequate disclosure to audiences, especially when using the voice of a well-known person. People judge information based on the person who delivers it, whether they do so consciously or unconsciously. For example, disclosure could be verbally shared at the start of a broadcast. For more information, see the [disclosure patterns](concepts-disclosure-patterns.md).
+* Consider proper disclosure to parents or other parties when your use case is designed for minors and children. If your use case is intended for minors or children, you will need to ensure that parents or legal guardians can understand the disclosure about the use of synthetic media and make the right decision for the minors or children about whether to use the experience.
+
### Select appropriate voice types for your scenario

Carefully consider the context of use and the potential harms associated with using synthetic voice. For example, high-fidelity synthetic voices may not be appropriate in high-risk scenarios, such as for personal messaging, financial transactions, or complex situations that require human adaptability or empathy. Users may also have different expectations for voice types. For example, when listening to sensitive news being read by a synthetic voice, some users prefer a more empathetic and human-like reading of the news, while others prefer a more monotone, unbiased voice. Consider testing your application to better understand user preferences.
@@ -34,8 +46,9 @@ In ambiguous, transactional scenarios (for example, a call support center), user
When working with voice talent, such as voice actors, to create synthetic voices, the guideline below applies.

### Obtain meaningful consent from voice talent
-Voice talent expect to have control over their voice font (how and where it will be used) and be compensated anytime it's used. System owners should therefore obtain explicit written permission from voice talent, and have clear contractual specifications on use cases, duration of use, compensation, and so on. Some voice talent are unaware of the potential malicious uses of the technology and should be educated by system owners about the capabilities of the technology. For more on voice talent and consent, read our [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent).
+Voice talent should have control over their voice model (how and where it will be used) and be compensated for its use. Microsoft requires custom voice customers to obtain explicit written permission from their voice talent to create a synthetic voice, and customers' agreements with voice talent should specify the duration, use, and any content limitations. If you are creating a synthetic voice of a well-known person, you should provide a way for the person behind the voice to edit or approve the content.
+Some voice talent are unaware of the potential malicious uses of the technology and should be educated by system owners about its capabilities. Microsoft requires customers to share Microsoft's [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent), which describes how synthetic voices are developed and operate in conjunction with text-to-speech services, with voice talent directly or through the voice talent's authorized representative.
## Considerations for those with speech disorders

When working with individuals with speech disorders to create or deploy synthetic voice technology, the following guidelines apply.
@@ -61,4 +74,4 @@ Individuals with speech disorders desire to make updates to their synthetic voic
* [Disclosure for Voice Talent](/legal/cognitive-services/speech-service/disclosure-voice-talent) * [How to Disclose](concepts-disclosure-guidelines.md)
-* [Disclosure Design Patterns](concepts-disclosure-patterns.md)
+* [Disclosure Design Patterns](concepts-disclosure-patterns.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/includes/entity-types/general-entities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/includes/entity-types/general-entities.md
@@ -29,6 +29,7 @@ The NER feature for Text Analytics returns the following general (non identifyin
| [URL](#category-url) | URLs to websites. |
| [IP](#category-ip) | Network IP addresses. |
| [DateTime](#category-datetime) | Dates and times of day. |
+| [Quantity](#category-quantity) | Numerical measurements and units. |
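For illustration, a recognized Quantity entity in the NER response might look like the following. This is a sketch only: the surrounding response envelope is omitted, and the offset and subcategory values are assumptions to be confirmed against the Text Analytics API reference:

```json
{
  "text": "50 kilometers",
  "category": "Quantity",
  "subcategory": "Dimension",
  "offset": 14,
  "length": 13,
  "confidenceScore": 0.99
}
```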
### Category: Person
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/calling-sdk-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
@@ -91,7 +91,7 @@ The Communication Services calling client library supports the following streami
| |Web | Android/iOS|
|--|--|--|
-|**# of outgoing streams that can be sent simultaneously** |1 video + 1 screen sharing | 1 video + 1 screen sharing|
+|**# of outgoing streams that can be sent simultaneously** |1 video + 1 screen sharing | 1 video |
|**# of incoming streams that can be rendered simultaneously** |1 video + 1 screen sharing| 6 video + 1 screen sharing |
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/includes/calling-sdk-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/includes/calling-sdk-js.md
@@ -511,7 +511,7 @@ await deviceManager.setSpeaker(AudioDeviceInfo);
You can use `DeviceManager` and `Renderer` to begin rendering streams from your local camera. This stream won't be sent to other participants; it's a local preview feed. This is an asynchronous action.

```js
-const localVideoDevice = deviceManager().getCameraList()[0];
+const localVideoDevice = deviceManager.getCameraList()[0];
const localCameraStream = new LocalVideoStream(localVideoDevice);
const renderer = new Renderer(localCameraStream);
const view = await renderer.createView();
communication-services https://docs.microsoft.com/en-us/azure/communication-services/samples/calling-hero-sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/calling-hero-sample.md
@@ -10,108 +10,15 @@
Last updated 07/20/2020
-
+zone_pivot_groups: acs-web-ios
# Get started with the group calling hero sample
-
-<!--
-> [!WARNING]
-> Add links to our Hero Sample repo when the sample is publicly available.
--->
-
-> [!IMPORTANT]
-> [This sample is available on GitHub.](https://github.com/Azure-Samples/communication-services-web-calling-hero)
-
-The Azure Communication Services **Group Calling Hero Sample** demonstrates how the Communication Services Calling Web client library can be used to build a group calling experience.
-
-In this Sample quickstart, we'll learn how the sample works before we run the sample on your local machine. We'll then deploy the sample to Azure using your own Azure Communication Services resources.
-
-## Overview
-
-The sample has both a client-side application and a server-side application. The **client-side application** is a React/Redux web application that uses Microsoft's Fluent UI framework. This application sends requests to an ASP.NET Core **server-side application** that helps the client-side application connect to Azure.
-
-Here's what the sample looks like:
--
-When you press the "Start a call" button, the web application fetches a user access token from the server-side application. This token is then used to connect the client app to Azure Communication Services. Once the token is retrieved, you'll be prompted to specify the camera and microphone that you want to use. You'll be able to disable/enable your devices with toggle controls:
--
-Once you configure your display name and devices, you can join the call session. Now you will see the main call canvas where the core calling experience lives.
--
-Components of the main calling screen:
-
-1. **Media Gallery**: The main stage where participants are shown. If a participant has their camera enabled, their video feed is shown here. Each participant has an individual tile which shows their display name and video stream (when there is one)
-2. **Header**: This is where the primary call controls are located to toggle settings and participant side bar, turn video and mix on/off, share screen and leave the call.
-3. **Side Bar**: This is where participants and settings information are shown when toggled using the controls on the header. The component can be dismissed using the 'X' on the top right corner. Participants side bar will show a list of participants and a link to invite more users to chat. Settings side bar allows you to configure microphone and camera settings.
-
-Below you'll find more information on prerequisites and steps to set up the sample.
-
-## Prerequisites
-
-- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-- [Node.js (12.18.4 and above)](https://nodejs.org/en/download/)
-- [Visual Studio (2019 and above)](https://visualstudio.microsoft.com/vs/)
-- [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) (Make sure to install the version that corresponds with your Visual Studio instance, 32 vs 64 bit)
-- Create an Azure Communication Services resource. For details, see [Create an Azure Communication Resource](../quickstarts/create-communication-resource.md). You'll need to record your resource **connection string** for this quickstart.
-
-## Locally deploy the service & client applications
-
-The group calling sample is essentially two applications: the ClientApp and the Service.NET app.
-
-When we want to deploy locally we need to start up both applications. When the server app is visited from the browser, it will use the locally deployed ClientApp for the user experience.
-
-You can test the sample locally by opening multiple browser sessions with the URL of your call to simulate a multi-user call.
-
-### Before running the sample for the first time
-
-1. Open an instance of PowerShell, Windows Terminal, Command Prompt or equivalent and navigate to the directory that you'd like to clone the sample to.
-2. `git clone https://github.com/Azure-Samples/communication-services-web-calling-hero.git`
-3. Get the `Connection String` from the Azure portal. For more information on connection strings, see [Create an Azure Communication Resources](../quickstarts/create-communication-resource.md)
-4. Once you get the `Connection String`, add the connection string to the **Calling/appsetting.json** file found under the Service .NET folder. Input your connection string in the variable: `ResourceConnectionString`.
-
-### Local Run
-
-1. Go to Calling folder and open `Calling.csproj` solution in Visual Studio
-2. Run `Calling` project. The browser will open at localhost:5001
-
-#### Troubleshooting
-- The solution doesn't build; it throws errors during NPM installation/build.
-
- Try to clean/rebuild the projects.
-
-## Publish the sample to Azure
-
-1. Right click on the `Calling` project and select Publish.
-2. Create a new publish profile and select your Azure subscription.
-3. Before publishing, add your connection string with `Edit App Service Settings`, and fill in `ResourceConnectionString` as the key and provide your connection string (copied from appsettings.json) as the value.
-
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../quickstarts/create-communication-resource.md#clean-up-resources).
-
-## Next steps
-
->[!div class="nextstepaction"]
->[Download the sample from GitHub](https://github.com/Azure-Samples/communication-services-web-calling-hero)
-
-For more information, see the following articles:
-- Familiarize yourself with [using the calling client library](../quickstarts/voice-video-calling/calling-client-samples.md)
-- Learn more about [how calling works](../concepts/voice-video-calling/about-call-types.md)
-- Review the [Contoso Med App](https://github.com/Azure-Samples/communication-services-contoso-med-app) sample
-
-## Additional reading
-- [Azure Communication GitHub](https://github.com/Azure/communication) - Find more examples and information on the official GitHub page
-- [Redux](https://redux.js.org/) - Client-side state management
-- [FluentUI](https://aka.ms/fluent-ui) - Microsoft powered UI library
-- [React](https://reactjs.org/) - Library for building user interfaces
-- [ASP.NET Core](/aspnet/core/introduction-to-aspnet-core?preserve-view=true&view=aspnetcore-3.1) - Framework for building web applications
communication-services https://docs.microsoft.com/en-us/azure/communication-services/samples/includes/ios-calling-hero https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/includes/ios-calling-hero.md
@@ -0,0 +1,99 @@
+
+ Title: include file
+description: include file
+Last updated : 9/1/2020
+The Azure Communication Services **Group Calling Hero Sample for iOS** demonstrates how the Communication Services Calling iOS client library can be used to build a group calling experience that includes voice and video. In this sample quickstart, you will learn how to set up and run the sample. An overview of the sample is provided for context.
+
+## Overview
+
+The sample is a native iOS application that uses the Azure Communication Services iOS client libraries to build a calling experience that features both voice and video calling. The application uses a server-side component to provision access tokens that are then used to initialize the Azure Communication Services client library. To configure this server-side component, feel free to follow the [Trusted Service with Azure Functions](../../tutorials/trusted-service-tutorial.md) tutorial.
+
+Here's what the sample looks like:
++
+When you press the "Start new call" button, the iOS application creates a new call and joins it. The application also allows you to join an existing Azure Communication Services call by specifying the existing call's ID.
+
+After joining a call, you'll be prompted to give the application permission to access your camera and microphone. You'll also be asked to provide a display name.
++
+Once you configure your display name and devices, you can join the call. You'll see the main call canvas where the core calling experience lives.
++
+Components of the main calling screen:
+
+- **Media Gallery**: The main stage where participants are shown. If a participant has their camera enabled, their video feed is shown here. Each participant has an individual tile which shows their display name and video stream (when there is one). The gallery supports multiple participants and is updated when participants are added to or removed from the call.
+- **Action Bar**: This is where the primary call controls are located. These controls let you turn your video and microphone on/off, share your screen, and leave the call.
+
+Below you'll find more information on prerequisites and steps to set up the sample.
+
+## Prerequisites
+
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A Mac running [Xcode](https://go.microsoft.com/fwLink/p/?LinkID=266532), along with a valid developer certificate installed into your Keychain.
+- An Azure Communication Services resource. For details, see [Create an Azure Communication Resource](../../quickstarts/create-communication-resource.md).
+- An Azure Function running [Trusted Service logic](../../tutorials/trusted-service-tutorial.md) to fetch access tokens.
+
+## Running the sample locally
+
+The group calling sample can be run locally using Xcode. Developers can use either a physical device or an iOS simulator to test the application.
+
+### Before running the sample for the first time
+
+1. Install dependencies by running `pod install`.
+2. Open `ACSCall.xcworkspace` in Xcode.
+3. Update `AppSettings.plist`. Set the value for the `acsTokenFetchUrl` key to be the URL for your Authentication Endpoint.
+
+### Run the sample
+
+Build and run the sample in Xcode.
+
+## (Optional) Securing an authentication endpoint
+
+For demonstration purposes, this sample uses a publicly accessible endpoint by default to fetch an Azure Communication Services token. For production scenarios, we recommend using your own secured endpoint to provision your own tokens.
+
+With additional configuration, this sample supports connecting to an **Azure Active Directory** (Azure AD) protected endpoint so that user login is required for the app to fetch an Azure Communication Services token. See steps below:
+
+1. Enable Azure Active Directory authentication in your app.
+ - [Register your app under Azure Active Directory (using iOS / macOS platform settings)](https://docs.microsoft.com/azure/active-directory/develop/tutorial-v2-ios)
+ - [Configure your App Service or Azure Functions app to use Azure AD login](https://docs.microsoft.com/azure/app-service/configure-authentication-provider-aad)
+2. Go to your registered app's overview page under Azure Active Directory App Registrations. Take note of the `Application (client) ID`, `Directory (tenant) ID`, and `Application ID URI`.
++
+3. Open `AppSettings.plist` in Xcode and add the following key-value pairs:
+ - `acsTokenFetchUrl`: The URL to request Azure Communication Services token
+ - `isAADAuthEnabled`: A boolean value to indicate if the Azure Communication Services token authentication is required or not
+ - `aadClientId`: Your Application (client) ID
+ - `aadTenantId`: Your Directory (tenant) ID
+ - `aadRedirectURI`: The redirect URI should be in this format: `msauth.<app_bundle_id>://auth`
+   - `aadScopes`: An array of permission scopes requested from users for authorization. Add `<Application ID URI>/user_impersonation` to the array to grant access to the authentication endpoint.
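As a sketch of what these additions could look like inside `AppSettings.plist` (assuming the standard plist XML format; every value below is a placeholder, and the bundle ID `com.contoso.acssample` is hypothetical):

```xml
<key>acsTokenFetchUrl</key>
<string>https://YOUR_AUTH_ENDPOINT/api/token</string>
<key>isAADAuthEnabled</key>
<true/>
<key>aadClientId</key>
<string>00000000-0000-0000-0000-000000000000</string>
<key>aadTenantId</key>
<string>11111111-1111-1111-1111-111111111111</string>
<key>aadRedirectURI</key>
<string>msauth.com.contoso.acssample://auth</string>
<key>aadScopes</key>
<array>
    <string>APPLICATION_ID_URI/user_impersonation</string>
</array>
```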
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+For more information, see the following articles:
+
+- Familiarize yourself with [using the calling client library](../../quickstarts/voice-video-calling/calling-client-samples.md)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
+
+### Additional reading
+
+- [Azure Communication GitHub](https://github.com/Azure/communication) - Find more examples and information on the official GitHub page
communication-services https://docs.microsoft.com/en-us/azure/communication-services/samples/includes/web-calling-hero https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/includes/web-calling-hero.md
@@ -0,0 +1,103 @@
+
+ Title: include file
+description: include file
+Last updated : 9/1/2020
+The Azure Communication Services **Group Calling Hero Sample** demonstrates how the Communication Services Calling Web client library can be used to build a group calling experience.
+
+In this sample quickstart, you'll learn how the sample works before running it on your local machine. You'll then deploy the sample to Azure using your own Azure Communication Services resources.
+
+## Download code
+
+Find the finalized code for this quickstart on [GitHub](https://github.com/Azure-Samples/communication-services-web-calling-hero).
+
+## Overview
+
+The sample has both a client-side application and a server-side application. The **client-side application** is a React/Redux web application that uses Microsoft's Fluent UI framework. This application sends requests to an ASP.NET Core **server-side application** that helps the client-side application connect to Azure.
+
+Here's what the sample looks like:
++
+When you press the "Start a call" button, the web application fetches a user access token from the server-side application. This token is then used to connect the client app to Azure Communication Services. Once the token is retrieved, you'll be prompted to specify the camera and microphone that you want to use. You'll be able to disable/enable your devices with toggle controls:
++
+Once you configure your display name and devices, you can join the call session. You'll then see the main call canvas where the core calling experience lives.
++
+Components of the main calling screen:
+
+- **Media Gallery**: The main stage where participants are shown. If a participant has their camera enabled, their video feed is shown here. Each participant has an individual tile which shows their display name and video stream (when there is one)
+- **Header**: This is where the primary call controls are located. These let you toggle the settings and participant side bar, turn your video and microphone on/off, share your screen, and leave the call.
+- **Side Bar**: This is where participants and settings information are shown when toggled using the controls on the header. The component can be dismissed using the 'X' on the top right corner. Participants side bar will show a list of participants and a link to invite more users to chat. Settings side bar allows you to configure microphone and camera settings.
+
+Below you'll find more information on prerequisites and steps to set up the sample.
+
+## Prerequisites
+
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- [Node.js (12.18.4 and above)](https://nodejs.org/en/download/)
+- [Visual Studio (2019 and above)](https://visualstudio.microsoft.com/vs/)
+- [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) (Make sure to install the version that corresponds with your Visual Studio instance, 32-bit vs. 64-bit)
+- An Azure Communication Services resource. For details, see [Create an Azure Communication Resource](../../quickstarts/create-communication-resource.md). You'll need to record your resource **connection string** for this quickstart.
+
+## Locally deploy the service & client applications
+
+The group calling sample is essentially two applications: the ClientApp and the Service.NET app.
+
+When deploying locally, you need to start up both applications. When the server app is visited from the browser, it uses the locally deployed ClientApp for the user experience.
+
+You can test the sample locally by opening multiple browser sessions with the URL of your call to simulate a multi-user call.
+
+### Before running the sample for the first time
+
+1. Open an instance of PowerShell, Windows Terminal, Command Prompt or equivalent and navigate to the directory that you'd like to clone the sample to.
+2. `git clone https://github.com/Azure-Samples/communication-services-web-calling-hero.git`
+3. Get the `Connection String` from the Azure portal. For more information on connection strings, see [Create an Azure Communication Resource](../../quickstarts/create-communication-resource.md).
+4. Once you get the `Connection String`, add the connection string to the **Calling/appsettings.json** file found under the Service .NET folder. Input your connection string in the variable: `ResourceConnectionString`.
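As a sketch, the relevant fragment of that settings file might look like the following (the `ResourceConnectionString` key comes from the sample; the endpoint and access key shown are placeholders for your own resource's connection string):

```json
{
  "ResourceConnectionString": "endpoint=https://YOUR_RESOURCE_NAME.communication.azure.com/;accesskey=YOUR_ACCESS_KEY"
}
```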
+
+### Local run
+
+1. Go to the Calling folder and open the `Calling.csproj` solution in Visual Studio.
+2. Run the `Calling` project. The browser will open at `localhost:5001`.
+
+## Publish the sample to Azure
+
+1. Right-click the `Calling` project and select Publish.
+2. Create a new publish profile and select your Azure subscription.
+3. Before publishing, add your connection string with `Edit App Service Settings`, and fill in `ResourceConnectionString` as the key and provide your connection string (copied from appsettings.json) as the value.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+>[!div class="nextstepaction"]
+>[Download the sample from GitHub](https://github.com/Azure-Samples/communication-services-web-calling-hero)
+
+For more information, see the following articles:
+
+- Familiarize yourself with [using the calling client library](../../quickstarts/voice-video-calling/calling-client-samples.md)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
+
+### Additional reading
+
+- [Azure Communication GitHub](https://github.com/Azure/communication) - Find more examples and information on the official GitHub page
+- [Redux](https://redux.js.org/) - Client-side state management
+- [FluentUI](https://aka.ms/fluent-ui) - Microsoft powered UI library
+- [React](https://reactjs.org/) - Library for building user interfaces
+- [ASP.NET Core](/aspnet/core/introduction-to-aspnet-core?preserve-view=true&view=aspnetcore-3.1) - Framework for building web applications
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cassandra-migrate-cosmos-db-databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-migrate-cosmos-db-databricks.md
@@ -109,11 +109,32 @@ DFfromNativeCassandra
```

> [!NOTE]
-> The `spark.cassandra.output.concurrent.writes` and `connections_per_executor_max` configurations are important for avoiding [rate limiting](/samples/azure-samples/azure-cosmos-cassandra-java-retry-sample/azure-cosmos-db-cassandra-java-retry-sample/), which happens when requests to Cosmos DB exceed provisioned throughput ([request units](./request-units.md)). You may need to adjust these settings depending on the number of executors in the Spark cluster, and potentially the size (and therefore RU cost) of each record being written to the target tables.
+> The `spark.cassandra.output.batch.size.rows`, `spark.cassandra.output.concurrent.writes`, and `connections_per_executor_max` configurations are important to avoid [rate limiting](/samples/azure-samples/azure-cosmos-cassandra-java-retry-sample/azure-cosmos-db-cassandra-java-retry-sample/), which happens when requests to Azure Cosmos DB exceed provisioned throughput ([request units](./request-units.md)). You may need to adjust these settings depending on the number of executors in the Spark cluster, and potentially the size (and therefore RU cost) of each record being written to the target tables.
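As an illustrative starting point, conservative values for these settings might look like the following (the values are assumptions to tune for your cluster and workload, not prescriptions from this article; low values reduce write pressure, and you can raise them as provisioned throughput allows):

```
spark.cassandra.output.batch.size.rows      1
spark.cassandra.output.concurrent.writes    100
connections_per_executor_max                2
```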
+
+## Troubleshooting
+
+### Rate limiting (429 error)
+You may see a 429 error code or the `request rate is large` error text, even after reducing the above settings to their minimum values. The following are some common scenarios:
+
+- **Throughput allocated to the table is less than 6000 [request units](./request-units.md)**. Even at minimum settings, Spark will be able to execute writes at a rate of around 6000 request units or more. If you have provisioned a table in a keyspace with shared throughput provisioned, it is possible that this table has less than 6000 RUs available at runtime. Ensure the table you are migrating to has at least 6000 RUs available to it when running the migration, and if necessary allocate dedicated request units to that table.
+- **Excessive data skew with large data volume**. If you have a large amount of data (that is, table rows) to migrate into a given table but a significant skew in the data (that is, a large number of records being written for the same partition key value), then you may still experience rate limiting even if you have a large number of [request units](./request-units.md) provisioned in your table. This is because request units are divided equally among physical partitions, and heavy data skew can cause a bottleneck of requests to a single partition, resulting in rate limiting. In this scenario, it is advised to reduce the throughput settings in Spark to their minimum values, which avoids rate limiting by forcing the migration to run slowly. This scenario is more common when migrating reference or control tables, where access is less frequent but skew can be high. However, if significant skew is present in any other type of table, it may also be advisable to review your data model to avoid hot-partition issues for your workload during steady-state operations.
+- **Unable to get count on large table**. Running `select count(*) from table` is not currently supported for large tables. You can get the count from metrics in the Azure portal (see our [troubleshooting article](cassandra-troubleshoot.md)), but if you need to determine the count of a large table from within the context of a Spark job, you can copy the data to a temporary table and then use Spark SQL to get the count, as shown below (replace `<primary key>` with some field from the resulting temporary table).
+
+ ```scala
+ val ReadFromCosmosCassandra = sqlContext
+ .read
+ .format("org.apache.spark.sql.cassandra")
+ .options(cosmosCassandra)
+ .load
+
+ ReadFromCosmosCassandra.createOrReplaceTempView("CosmosCassandraResult")
+ %sql
+ select count(<primary key>) from CosmosCassandraResult
+ ```
```

## Next steps

* [Provision throughput on containers and databases](set-throughput.md)
* [Partition key best practices](partitioning-overview.md#choose-partitionkey)
* [Estimate RU/s using the Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-* [Elastic Scale in Azure Cosmos DB Cassandra API](manage-scale-cassandra.md)
+* [Elastic Scale in Azure Cosmos DB Cassandra API](manage-scale-cassandra.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cassandra-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-troubleshoot.md
@@ -15,18 +15,74 @@
Cassandra API in Azure Cosmos DB is a compatibility layer, which provides [wire protocol support](cassandra-support.md) for the popular open-source Apache Cassandra database, and is powered by [Azure Cosmos DB](./introduction.md). As a fully managed cloud-native service, Azure Cosmos DB provides [guarantees on availability, throughput, and consistency](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/) for Cassandra API. These guarantees are not possible in legacy implementations of Apache Cassandra. Cassandra API also facilitates zero-maintenance platform operations and zero-downtime patching. As such, many of its backend operations are different from Apache Cassandra, so we recommend particular settings and approaches to avoid common errors.
-This article describes common errors and solutions for applications consuming Azure Cosmos DB Cassandra API.
+This article describes common errors and solutions for applications consuming Azure Cosmos DB Cassandra API. If your error is not listed below, and you are experiencing an error when executing a [supported operation in Cassandra API](cassandra-support.md), where the error is *not present when using native Apache Cassandra*, [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-## Common errors and solutions
+## NoNodeAvailableException
+This is a top-level wrapper exception with a large number of possible causes and inner exceptions, many of which can be client-related.
+### Solution
+Some popular causes and solutions are as follows:
+- **Idle timeout of Azure Load Balancers:** This may also manifest as `ClosedConnectionException`. To resolve it, set the keep-alive setting in the driver (see [below](#enable-keep-alive-for-java-driver)) and increase keep-alive settings in the operating system, or [adjust the idle timeout in Azure Load Balancer](../load-balancer/load-balancer-tcp-idle-timeout.md?tabs=tcp-reset-idle-portal).
+- **Client application resource exhaustion:** Ensure that client machines have sufficient resources to complete the request.
-| Error | Description | Solution |
-||--|--|
-| OverloadedException (Java) | The total number of request units consumed is more than the request-units provisioned on the keyspace or table. So the requests are throttled. | Consider scaling the throughput assigned to a keyspace or table from the Azure portal (see [here](manage-scale-cassandra.md) for scaling operations in Cassandra API) or you can implement a retry policy. For Java, see retry samples for [v3.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample) and [v4.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample-v4). See also [Azure Cosmos Cassandra Extensions for Java](https://github.com/Azure/azure-cosmos-cassandra-extensions) |
-| OverloadedException (Java) even with sufficient throughput | The system appears to be throttling requests despite sufficient throughput being provisioned for request volume and/or consumed request unit cost | Cassandra API implements a system throughput budget for schema-level operations (CREATE TABLE, ALTER TABLE, DROP TABLE). This budget should be enough for schema operations in a production system. However, if you have a high number of schema-level operations, it is possible you are exceeding this limit. As this budget is not user controlled, you will need to consider lowering the number of schema operations being run. If taking this action does not resolve the issue, or it is not feasible for your workload, [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).|
-| ClosedConnectionException (Java) | After a period of idle time following successful connections, application is unable to connect| This error could be due to idle timeout of Azure LoadBalancers, which is 4 minutes. Set keep alive setting in driver (see below) and increase keep-alive settings in operating system, or [adjust idle timeout in Azure Load Balancer](../load-balancer/load-balancer-tcp-idle-timeout.md?tabs=tcp-reset-idle-portal). |
-| Other intermittent connectivity errors (Java) | Connection drops or times out unexpectedly | The Apache Cassandra drivers for Java provide two native reconnection policies: `ExponentialReconnectionPolicy` and `ConstantReconnectionPolicy`. The default is `ExponentialReconnectionPolicy`. However, for Azure Cosmos DB Cassandra API, we recommend `ConstantReconnectionPolicy` with a delay of 2 seconds. See the [driver documentation](https://docs.datastax.com/en/developer/java-driver/4.9/manual/core/reconnection/) for Java v4.x driver, and [here](https://docs.datastax.com/en/developer/java-driver/3.7/manual/reconnection/) for Java 3.x guidance (see also the examples below).|
+## Cannot connect to host
+You may see this error: `Cannot connect to any host, scheduling retry in 600000 milliseconds`.
+
+### Solution
+This could be SNAT exhaustion on the client side. Follow the steps at [SNAT for outbound connections](https://docs.microsoft.com/azure/load-balancer/load-balancer-outbound-connections) to rule out this issue. This may also be an idle timeout issue, as the Azure load balancer has a default idle timeout of 4 minutes. See [Load balancer idle timeout](../load-balancer/load-balancer-tcp-idle-timeout.md?tabs=tcp-reset-idle-portal). Enable TCP keep-alive in the driver settings (see [below](#enable-keep-alive-for-java-driver)) and set the `keepAlive` interval on the operating system to less than 4 minutes.
+
+## OverloadedException (Java)
+The total number of request units consumed is more than the request-units provisioned on the keyspace or table. So the requests are throttled.
+### Solution
+Consider scaling the throughput assigned to a keyspace or table from the Azure portal (see [here](manage-scale-cassandra.md) for scaling operations in Cassandra API) or you can implement a retry policy. For Java, see retry samples for [v3.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample) and [v4.x driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-java-retry-sample-v4). See also [Azure Cosmos Cassandra Extensions for Java](https://github.com/Azure/azure-cosmos-cassandra-extensions).
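The linked samples show production-grade retry policies; as a minimal, self-contained sketch of the underlying idea (the `withRetries` helper below is hypothetical and simulates throttling with a plain `RuntimeException` rather than the driver's `OverloadedException`), each retry waits twice as long as the previous one:

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Retry op up to maxRetries times, doubling the delay after each throttled attempt.
    static <T> T withRetries(Callable<T> op, int maxRetries, long baseDelayMs) throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return op.call();
            } catch (RuntimeException e) { // stands in for the driver's OverloadedException
                if (attempt >= maxRetries) throw e;
                Thread.sleep(baseDelayMs << attempt); // base, 2x base, 4x base, ...
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulate two throttled attempts before success.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("request rate is large");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In a real application you would catch the driver's throttling exception specifically and cap the total delay, as the linked retry samples do.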
+
+### OverloadedException even with sufficient throughput
+The system appears to be throttling requests despite sufficient throughput being provisioned for request volume and/or consumed request unit cost. There are two possible causes of unexpected rate limiting:
+- **Schema-level operations:** Cassandra API implements a system throughput budget for schema-level operations (CREATE TABLE, ALTER TABLE, DROP TABLE). This budget should be enough for schema operations in a production system. However, if you have a high number of schema-level operations, it is possible you are exceeding this limit. As this budget is not user-controlled, you will need to consider lowering the number of schema operations being run. If taking this action does not resolve the issue, or it is not feasible for your workload, [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+- **Data skew:** When throughput is provisioned in Cassandra API, it is divided equally among physical partitions, and each physical partition has an upper limit. If you have a high amount of data being inserted or queried from one particular partition, it is possible to be rate-limited despite provisioning a large amount of overall throughput (request units) for that table. Review your data model and ensure you do not have excessive skew that could be causing hot partitions.
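To make the skew check concrete, here is a minimal sketch (the in-memory list of partition-key values is hypothetical; in practice you would run an equivalent aggregation over a sample of your table) that reports how concentrated rows are in the hottest partition:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SkewCheck {
    // Count rows per partition key value.
    static Map<String, Long> rowsPerPartition(List<String> partitionKeys) {
        return partitionKeys.stream()
                .collect(Collectors.groupingBy(k -> k, Collectors.counting()));
    }

    public static void main(String[] args) {
        // Hypothetical sample: "device1" dominates the data set (a hot-partition risk).
        List<String> sample = List.of("device1", "device1", "device1", "device1", "device2", "device3");
        Map<String, Long> counts = rowsPerPartition(sample);
        long hottest = counts.values().stream().mapToLong(Long::longValue).max().orElse(0);
        System.out.println("hottest partition holds " + hottest + " of " + sample.size() + " rows");
    }
}
```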
+
+## Intermittent connectivity errors (Java)
+Connection drops or times out unexpectedly.
+
+### Solution
+The Apache Cassandra drivers for Java provide two native reconnection policies: `ExponentialReconnectionPolicy` and `ConstantReconnectionPolicy`. The default is `ExponentialReconnectionPolicy`. However, for Azure Cosmos DB Cassandra API, we recommend `ConstantReconnectionPolicy` with a delay of 2 seconds. See the [driver documentation](https://docs.datastax.com/en/developer/java-driver/4.9/manual/core/reconnection/) for the Java v4.x driver, and [here](https://docs.datastax.com/en/developer/java-driver/3.7/manual/reconnection/) for Java 3.x guidance; see also the [Configuring ReconnectionPolicy for Java Driver](#configuring-reconnectionpolicy-for-java-driver) examples below.
+
+## Error with load-balancing policy
+
+If you have implemented a load-balancing policy in v3.x of the Java Datastax driver, with code similar to the following:
+
+```java
+cluster = Cluster.builder()
+ .addContactPoint(cassandraHost)
+ .withPort(cassandraPort)
+ .withCredentials(cassandraUsername, cassandraPassword)
+ .withPoolingOptions(new PoolingOptions() .setConnectionsPerHost(HostDistance.LOCAL, 1, 2)
+ .setMaxRequestsPerConnection(HostDistance.LOCAL, 32000).setMaxQueueSize(Integer.MAX_VALUE))
+ .withSSL(sslOptions)
+ .withLoadBalancingPolicy(DCAwareRoundRobinPolicy.builder().withLocalDc("West US").build())
+ .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM))
+ .withSocketOptions(getSocketOptions())
+ .build();
+```
+
+If the value for `withLocalDc()` does not match the contact point datacenter, you may experience a very intermittent error: `com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)`.
+
+### Solution
+Implement [CosmosLoadBalancingPolicy](https://github.com/Azure/azure-cosmos-cassandra-extensions/blob/master/package/src/main/java/com/microsoft/azure/cosmos/cassandra/CosmosLoadBalancingPolicy.java) (you may need to upgrade the Datastax driver's minor version to make it work):
+
+```java
+LoadBalancingPolicy loadBalancingPolicy = new CosmosLoadBalancingPolicy.Builder().withWriteDC("West US").withReadDC("West US").build();
+```
+
+## Count fails on large table
+When running `select count(*) from table` or similar for a large number of rows, the server times out.
+
+### Solution
+If you're using a local CQLSH client, you can try changing the `--connect-timeout` or `--request-timeout` settings (see more details [here](https://cassandra.apache.org/doc/latest/tools/cqlsh.html)). If this is not sufficient and the count still times out, you can get a count of records from the Azure Cosmos DB backend telemetry: go to the metrics tab in the Azure portal, select the metric `document count`, and then add a filter for the database or collection (the analog of a table in Azure Cosmos DB). You can then hover over the resulting graph for the point in time at which you want a count of the number of records.
+
-If your error is not listed above, and you are experiencing an error when executing a [supported operation in Cassandra API](cassandra-support.md), where the error is *not present when using native Apache Cassandra*, [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)
## Configuring ReconnectionPolicy for Java Driver
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/index-policy.md
@@ -5,7 +5,7 @@
Previously updated : 01/21/2021 Last updated : 02/02/2021
@@ -37,10 +37,7 @@ In Azure Cosmos DB, the total consumed storage is the combination of both the Da
* The index size depends on the indexing policy. If all the properties are indexed, then the index size can be larger than the data size. * When data is deleted, indexes are compacted on a near continuous basis. However, for small data deletions, you may not immediately observe a decrease in index size.
-* The Index size can grow on the following cases:
-
- * Partition split duration- The index space is released after the partition split is completed.
- * When a partition is splitting, index space will temporarily increase during the partition split.
+* The index size can temporarily grow when physical partitions split. The index space is released after the partition split is completed.
## <a id="include-exclude-paths"></a>Including and excluding property paths
@@ -181,33 +178,35 @@ You should customize your indexing policy so you can serve all necessary `ORDER
If a query has filters on two or more properties, it may be helpful to create a composite index for these properties.
-For example, consider the following query which has an equality filter on two properties:
+For example, consider the following query which has both an equality and range filter:
```sql
-SELECT * FROM c WHERE c.name = "John" AND c.age = 18
+SELECT *
+FROM c
+WHERE c.name = "John" AND c.age > 18
```
-This query will be more efficient, taking less time and consuming fewer RU's, if it is able to leverage a composite index on (name ASC, age ASC).
+This query will be more efficient, taking less time and consuming fewer RU's, if it is able to leverage a composite index on `(name ASC, age ASC)`.
-Queries with range filters can also be optimized with a composite index. However, the query can only have a single range filter. Range filters include `>`, `<`, `<=`, `>=`, and `!=`. The range filter should be defined last in the composite index.
+Queries with multiple range filters can also be optimized with a composite index. However, each individual composite index can only optimize a single range filter. Range filters include `>`, `<`, `<=`, `>=`, and `!=`. The range filter should be defined last in the composite index.
-Consider the following query with both equality and range filters:
+Consider the following query with an equality filter and two range filters:
```sql
-SELECT * FROM c WHERE c.name = "John" AND c.age > 18
+SELECT *
+FROM c
+WHERE c.name = "John" AND c.age > 18 AND c._ts > 1612212188
```
-This query will be more efficient with a composite index on (name ASC, age ASC). However, the query would not utilize a composite index on (age ASC, name ASC) because the equality filters must be defined first in the composite index.
+This query will be more efficient with a composite index on `(name ASC, age ASC)` and `(name ASC, _ts ASC)`. However, the query would not utilize a composite index on `(age ASC, name ASC)` because the properties with equality filters must be defined first in the composite index. Two separate composite indexes are required instead of a single composite index on `(name ASC, age ASC, _ts ASC)` since each composite index can only optimize a single range filter.
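As a sketch (assuming the standard indexing policy schema), these two composite indexes could be declared in the container's indexing policy like this:

```json
{
  "compositeIndexes": [
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/age", "order": "ascending" }
    ],
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/_ts", "order": "ascending" }
    ]
  ]
}
```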
The following considerations are used when creating composite indexes for queries with filters on multiple properties:
+- Filter expressions can use multiple composite indexes.
- The properties in the query's filter should match those in the composite index. If a property is in the composite index but is not included in the query as a filter, the query will not utilize the composite index.
- If a query has additional properties in the filter that were not defined in a composite index, then a combination of composite and range indexes will be used to evaluate the query. This will require fewer RU's than exclusively using range indexes.
-- If a property has a range filter (`>`, `<`, `<=`, `>=`, or `!=`), then this property should be defined last in the composite index. If a query has more than one range filter, it will not utilize the composite index.
+- If a property has a range filter (`>`, `<`, `<=`, `>=`, or `!=`), then this property should be defined last in the composite index. If a query has more than one range filter, it may benefit from multiple composite indexes.
- When creating a composite index to optimize queries with multiple filters, the `ORDER` of the composite index will have no impact on the results. This property is optional.
-- If you do not define a composite index for a query with filters on multiple properties, the query will still succeed. However, the RU cost of the query can be reduced with a composite index.
-- Queries with both aggregates (for example, COUNT or SUM) and filters also benefit from composite indexes.
-- Filter expressions can use multiple composite indexes.

Consider the following examples where a composite index is defined on properties name, age, and timestamp:
@@ -222,43 +221,76 @@ Consider the following examples where a composite index is defined on properties
| ```(name ASC, age ASC, timestamp ASC)``` | ```SELECT * FROM c WHERE c.name = "John" AND c.age < 18 AND c.timestamp = 123049923``` | ```No``` |
| ```(name ASC, age ASC) and (name ASC, timestamp ASC)``` | ```SELECT * FROM c WHERE c.name = "John" AND c.age < 18 AND c.timestamp > 123049923``` | ```Yes``` |
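The prefix and range rules above can be summarized in a short illustrative sketch. This is plain Python for explanation only, not an Azure SDK API; the function name and parameters are hypothetical:

```python
# Illustrative sketch only (not an Azure SDK API): models the rules above.
# index_paths:    ordered property names in one composite index
# equality_props: properties the query filters with equality predicates
# range_props:    properties the query filters with range predicates
def can_use_composite_index(index_paths, equality_props, range_props):
    if len(range_props) > 1:
        return False  # one composite index optimizes at most one range filter
    if len(index_paths) != len(equality_props) + len(range_props):
        return False  # every indexed property must appear in the query's filter
    # equality-filtered properties must form the leading prefix of the index
    if set(index_paths[: len(equality_props)]) != set(equality_props):
        return False
    # the (optional) range-filtered property must be defined last
    return not range_props or index_paths[-1] == range_props[0]
```

For example, this reproduces the table above: the single index `(name ASC, age ASC, timestamp ASC)` cannot serve a query whose range filter is on `age`, because the equality-filtered `timestamp` would have to come after it, while `(name ASC, age ASC)` serves `c.name = "John" AND c.age < 18`.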
-### Queries with a filter as well as an ORDER BY clause
+### Queries with a filter and ORDER BY
If a query filters on one or more properties and has different properties in the ORDER BY clause, it may be helpful to add the properties in the filter to the `ORDER BY` clause.
-For example, by adding the properties in the filter to the ORDER BY clause, the following query could be rewritten to leverage a composite index:
+For example, by adding the properties in the filter to the `ORDER BY` clause, the following query could be rewritten to leverage a composite index:
Query using range index:

```sql
-SELECT * FROM c WHERE c.name = "John" ORDER BY c.timestamp
+SELECT *
+FROM c
+WHERE c.name = "John"
+ORDER BY c.timestamp
```

Query using composite index:

```sql
-SELECT * FROM c WHERE c.name = "John" ORDER BY c.name, c.timestamp
+SELECT *
+FROM c
+WHERE c.name = "John"
+ORDER BY c.name, c.timestamp
```
-The same pattern and query optimizations can be generalized for queries with multiple equality filters:
+The same query optimizations can be generalized for any `ORDER BY` queries with filters, keeping in mind that individual composite indexes can only support, at most, one range filter.
Query using range index:

```sql
-SELECT * FROM c WHERE c.name = "John", c.age = 18 ORDER BY c.timestamp
+SELECT *
+FROM c
+WHERE c.name = "John" AND c.age = 18 AND c.timestamp > 1611947901
+ORDER BY c.timestamp
```

Query using composite index:

```sql
-SELECT * FROM c WHERE c.name = "John", c.age = 18 ORDER BY c.name, c.age, c.timestamp
+SELECT *
+FROM c
+WHERE c.name = "John" AND c.age = 18 AND c.timestamp > 1611947901
+ORDER BY c.name, c.age, c.timestamp
+```
+
+In addition, you can use composite indexes to optimize queries with system functions and ORDER BY:
+
+Query using range index:
+
+```sql
+SELECT *
+FROM c
+WHERE c.firstName = "John" AND Contains(c.lastName, "Smith", true)
+ORDER BY c.lastName
+```
+
+Query using composite index:
+
+```sql
+SELECT *
+FROM c
+WHERE c.firstName = "John" AND Contains(c.lastName, "Smith", true)
+ORDER BY c.firstName, c.lastName
```

The following considerations are used when creating composite indexes to optimize a query with a filter and `ORDER BY` clause:
-* If the query filters on properties, these should be included first in the `ORDER BY` clause.
-* If the query filters on multiple properties, the equality filters must be the first properties in the `ORDER BY` clause
* If you do not define a composite index on a query with a filter on one property and a separate `ORDER BY` clause using a different property, the query will still succeed. However, the RU cost of the query can be reduced with a composite index, particularly if the property in the `ORDER BY` clause has a high cardinality.
+* If the query filters on properties, these should be included first in the `ORDER BY` clause.
+* If the query filters on multiple properties, the equality filters must be the first properties in the `ORDER BY` clause.
+* If the query filters on multiple properties, you can have a maximum of one range filter or system function utilized per composite index. The property used in the range filter or system function should be defined last in the composite index.
* All considerations for creating composite indexes for `ORDER BY` queries with multiple properties as well as queries with filters on multiple properties still apply.
@@ -272,6 +304,7 @@ The following considerations are used when creating composite indexes to optimiz
| ```(age ASC, name ASC, timestamp ASC)``` | ```SELECT * FROM c WHERE c.age = 18 and c.name = "John" ORDER BY c.age ASC, c.name ASC, c.timestamp ASC``` | `Yes` |
| ```(age ASC, name ASC, timestamp ASC)``` | ```SELECT * FROM c WHERE c.age = 18 and c.name = "John" ORDER BY c.timestamp ASC``` | `No` |
+
## Modifying the indexing policy

A container's indexing policy can be updated at any time [by using the Azure portal or one of the supported SDKs](how-to-manage-indexing-policy.md). An update to the indexing policy triggers a transformation from the old index to the new one, which is performed online and in-place (so no additional storage space is consumed during the operation). The old indexing policy is efficiently transformed to the new policy without affecting the write availability, read availability, or the throughput provisioned on the container. Index transformation is an asynchronous operation, and the time it takes to complete depends on the provisioned throughput, the number of items and their size.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-getting-started.md
@@ -5,7 +5,7 @@
Previously updated : 11/04/2020 Last updated : 02/02/2021
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-object-array https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-object-array.md
@@ -5,14 +5,38 @@
Previously updated : 01/07/2021 Last updated : 02/02/2021

# Working with arrays and objects in Azure Cosmos DB

[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-A key feature of the Azure Cosmos DB SQL API is array and object creation.
+A key feature of the Azure Cosmos DB SQL API is array and object creation. This document uses examples that can be recreated using the [Family dataset](sql-query-getting-started.md#upload-sample-data).
+
+Here's an example item in this dataset:
+
+```json
+{
+ "id": "AndersenFamily",
+ "lastName": "Andersen",
+ "parents": [
+ { "firstName": "Thomas" },
+ { "firstName": "Mary Kay"}
+ ],
+ "children": [
+ {
+ "firstName": "Henriette Thaulow",
+ "gender": "female",
+ "grade": 5,
+ "pets": [{ "givenName": "Fluffy" }]
+ }
+ ],
+ "address": { "state": "WA", "county": "King", "city": "Seattle" },
+ "creationDate": 1431620472,
+ "isRegistered": true
+}
+```
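For instance (a sketch against the item above), a query can construct a new array from nested properties:

```sql
SELECT [c.address.city, c.address.state] AS cityState
FROM c
```

For the `AndersenFamily` item, this would return a document of the form `{ "cityState": ["Seattle", "WA"] }`.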
## Arrays
@@ -173,6 +197,8 @@ The results are:
> [!NOTE]
> When using the IN keyword for iteration, you cannot filter or project any properties outside of the array. Instead, you should use [JOINs](sql-query-join.md).
+For additional examples, read our [blog post on working with arrays in Azure Cosmos DB](https://devblogs.microsoft.com/cosmosdb/understanding-how-to-query-arrays-in-azure-cosmos-db/).
+
## Next steps

- [Getting started](sql-query-getting-started.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/troubleshoot-query-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-query-performance.md
@@ -4,7 +4,7 @@ description: Learn how to identify, diagnose, and troubleshoot Azure Cosmos DB S
Previously updated : 10/12/2020 Last updated : 02/02/2021
@@ -57,6 +57,8 @@ Refer to the following sections to understand the relevant query optimizations f
- [Understand which system functions use the index.](#understand-which-system-functions-use-the-index)
+- [Improve string system function execution.](#improve-string-system-function-execution)
+
- [Understand which aggregate queries use the index.](#understand-which-aggregate-queries-use-the-index)
- [Optimize queries that have both a filter and an ORDER BY clause.](#optimize-queries-that-have-both-a-filter-and-an-order-by-clause)
@@ -192,10 +194,11 @@ You can add properties to the indexing policy at any time, with no effect on wri
Most system functions use indexes. Here's a list of some common string functions that use indexes:

-- STARTSWITH(str_expr1, str_expr2, bool_expr)
-- CONTAINS(str_expr, str_expr, bool_expr)
-- LEFT(str_expr, num_expr) = str_expr
-- SUBSTRING(str_expr, num_expr, num_expr) = str_expr, but only if the first num_expr is 0
+- StartsWith
+- Contains
+- RegexMatch
+- Left
+- Substring - but only if the first num_expr is 0
Following are some common system functions that don't use the index and must load each document:
@@ -204,11 +207,21 @@ Following are some common system functions that don't use the index and must loa
| UPPER/LOWER | Instead of using the system function to normalize data for comparisons, normalize the casing upon insertion. A query like ```SELECT * FROM c WHERE UPPER(c.name) = 'BOB'``` becomes ```SELECT * FROM c WHERE c.name = 'BOB'```. |
| Mathematical functions (non-aggregates) | If you need to compute a value frequently in your query, consider storing the value as a property in your JSON document. |
-
+### Improve string system function execution
+
+For some system functions that use indexes, you can improve query execution by adding an `ORDER BY` clause to the query.
+
+More specifically, any system function whose RU charge increases as the cardinality of the property increases may benefit from having `ORDER BY` in the query. These queries do an index scan, so having the query results sorted can make the query more efficient.
-If a system function uses indexes and still has a high RU charge, you can try adding `ORDER BY` to the query. In some cases, adding `ORDER BY` can improve system function index utilization, particularly if the query is long-running or spans multiple pages.
+This optimization can improve execution for the following system functions:
-For example, consider the below query with `CONTAINS`. `CONTAINS` should use an index but let's imagine that, after adding the relevant index, you still observe a very high RU charge when running the below query:
+- StartsWith (where case-insensitive = true)
+- StringEquals (where case-insensitive = true)
+- Contains
+- RegexMatch
+- EndsWith
+
+For example, consider the following query with `CONTAINS`. `CONTAINS` will use indexes, but sometimes, even after adding the relevant index, you may still observe a very high RU charge when running it.
Original query:
@@ -218,7 +231,7 @@ FROM c
WHERE CONTAINS(c.town, "Sea")
```
-Updated query with `ORDER BY`:
+You can improve query execution by adding `ORDER BY`:
```sql
SELECT *
@@ -227,6 +240,25 @@ WHERE CONTAINS(c.town, "Sea")
ORDER BY c.town ```
+The same optimization can help in queries with additional filters. In this case, it's best to also add properties with equality filters to the `ORDER BY` clause.
+
+Original query:
+
+```sql
+SELECT *
+FROM c
+WHERE c.name = "Samer" AND CONTAINS(c.town, "Sea")
+```
+
+You can improve query execution by adding `ORDER BY` and [a composite index](index-policy.md#composite-indexes) for (c.name, c.town):
+
+```sql
+SELECT *
+FROM c
+WHERE c.name = "Samer" AND CONTAINS(c.town, "Sea")
+ORDER BY c.name, c.town
+```
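As a sketch, such a composite index on `(c.name, c.town)` could be declared in the container's indexing policy as:

```json
{
  "compositeIndexes": [
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/town", "order": "ascending" }
    ]
  ]
}
```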
+
### Understand which aggregate queries use the index

In most cases, aggregate system functions in Azure Cosmos DB will use the index. However, depending on the filters or additional clauses in an aggregate query, the query engine may be required to load a high number of documents. Typically, the query engine will apply equality and range filters first. After applying these filters,
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription.md
@@ -11,7 +11,7 @@
-# Create Azure subscriptions programatically
+# Create Azure subscriptions programmatically
This article helps you understand options available to programmatically create Azure subscriptions.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/subscription-states https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/subscription-states.md
@@ -23,4 +23,4 @@ This article describes the various states that an Azure subscription may have. Y
| **Disabled** | Your Azure subscription is disabled and can no longer be used to create or manage Azure resources. While in this state, your virtual machines are de-allocated, temporary IP addresses are freed, storage is read-only and other services are disabled. A subscription can get disabled because of the following reasons: Your credit may have expired. You may have reached your spending limit. You have a past due bill. Your credit card limit was exceeded. Or, it was explicitly disabled or canceled. Depending on the subscription type, a subscription may remain disabled between 1 - 90 days. After which, it's permanently deleted. For more information, see [Reactivate a disabled Azure subscription](subscription-disabled.md).<br><br>Operations to create or update resources (PUT, PATCH) are disabled. Operations that take an action (POST) are also disabled. You can retrieve or delete resources (GET, DELETE). Your resources are still available. |
| **Expired** | Your Azure subscription is expired because it was canceled. You can reactivate an expired subscription. For more information, see [Reactivate a disabled Azure subscription](subscription-disabled.md).<br><br>Operations to create or update resources (PUT, PATCH) are disabled. Operations that take an action (POST) are also disabled. You can retrieve or delete resources (GET, DELETE).|
| **Past Due** | Your Azure subscription has an outstanding payment pending. Your subscription is still active but failure to pay the dues may result in the subscription being disabled. For more information, see [Resolve past due balance for your Azure subscription](resolve-past-due-balance.md).<br><br>All operations are available. |
-| **Warned** | Your Azure subscription is in a warned state and while can be used normally, it will be disabled shortly if the warning reason is not addressed. A subscription may be in warned state if its past due, cancelled by user, subscription has expired, etc.<br><br>All operations are available. |
+| **Warned** | Your Azure subscription is in a warned state and will be disabled shortly if the warning reason is not addressed. A subscription may be in a warned state if it's past due, canceled by the user, expired, and so on.<br><br>You can retrieve or delete resources (GET/DELETE), but you won't be able to create any resources (PUT/PATCH/POST). |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/author-management-hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-management-hub.md
@@ -8,7 +8,7 @@
Previously updated : 06/02/2020 Last updated : 02/01/2021

# Management hub in Azure Data Factory
@@ -35,7 +35,11 @@ An integration runtime is a compute infrastructure used by Azure Data Factory to
### Git configuration
-View and edit your configured git repository settings in the management hub. For more information, learn about [source control in Azure Data Factory](source-control.md).
+You can view and edit all the Git-related information under the Git configuration settings in the management hub.
+
+The last published commit information is also listed and can help you understand the precise commit that was last published or deployed across environments. It can also be helpful when applying hotfixes in production.
+
+For more information, learn about [source control in Azure Data Factory](source-control.md).
![Manage git repo](media/author-management-hub/management-hub-git.png)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connect-data-factory-to-azure-purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
@@ -31,7 +31,7 @@ You have two ways to connect data factory to Azure Purview:
3. Once connected, you should be able to see the name of the Purview account in the tab **Purview account**. 4. You can use the Search bar at the top center of Azure Data Factory portal to search for data.
-If you see warning in Azure Data Factor portal after you register Azure Purview account to Data Factory, follow below steps to fix the issue:
+If you see a warning in the Azure Data Factory portal after you register your Azure Purview account to Data Factory, follow the steps below to fix the issue:
:::image type="content" source="./media/data-factory-purview/register-purview-account-warning.png" alt-text="Screenshot for warning of registering a Purview account.":::
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-dynamics-crm-office-365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
@@ -11,10 +11,11 @@
Previously updated : 02/01/2021 Last updated : 02/02/2021

# Copy data from and to Dynamics 365 (Common Data Service) or Dynamics CRM by using Azure Data Factory
+
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]

This article outlines how to use a copy activity in Azure Data Factory to copy data from and to Microsoft Dynamics 365 and Microsoft Dynamics CRM. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of a copy activity.
@@ -83,7 +84,7 @@ The following properties are supported for the Dynamics linked service.
| servicePrincipalCredential | The service-principal credential. <br/><br/>When you use "ServicePrincipalKey" as the credential type, `servicePrincipalCredential` can be a string that Azure Data Factory encrypts upon linked service deployment. Or it can be a reference to a secret in Azure Key Vault. <br/><br/>When you use "ServicePrincipalCert" as the credential, `servicePrincipalCredential` must be a reference to a certificate in Azure Key Vault. | Yes when authentication is "AADServicePrincipal" |
| username | The username to connect to Dynamics. | Yes when authentication is "Office365" |
| password | The password for the user account you specified as the username. Mark this field with "SecureString" to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes when authentication is "Office365" |
-| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If no value is specified, the property uses the default Azure integration runtime. | No for source, and yes for sink if the source linked service doesn't have an integration runtime |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If no value is specified, the property uses the default Azure integration runtime. | No |
>[!NOTE]
>The Dynamics connector formerly used the optional **organizationName** property to identify your Dynamics CRM or Dynamics 365 online instance. While that property still works, we suggest you specify the new **serviceUri** property instead to gain better performance for instance discovery.
@@ -179,7 +180,7 @@ Additional properties that compare to Dynamics online are **hostName** and **por
| authenticationType | The authentication type to connect to the Dynamics server. Specify "Ifd" for Dynamics on-premises with IFD. | Yes. |
| username | The username to connect to Dynamics. | Yes. |
| password | The password for the user account you specified for the username. You can mark this field with "SecureString" to store it securely in Data Factory. Or you can store a password in Key Vault and let the copy activity pull from there when it does data copy. Learn more from [Store credentials in Key Vault](store-credentials-in-key-vault.md). | Yes. |
-| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If no value is specified, the property uses the default Azure integration runtime. | No for source and yes for sink. |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If no value is specified, the property uses the default Azure integration runtime. | No |
#### Example: Dynamics on-premises with IFD using IFD authentication
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-salesforce-service-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-service-cloud.md
@@ -10,10 +10,11 @@
Previously updated : 01/11/2021 Last updated : 02/02/2021

# Copy data from and to Salesforce Service Cloud by using Azure Data Factory
+
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]

This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to Salesforce Service Cloud. It builds on the [Copy Activity overview](copy-activity-overview.md) article that presents a general overview of the copy activity.
@@ -65,10 +66,7 @@ The following properties are supported for the Salesforce linked service.
| password |Specify a password for the user account.<br/><br/>Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
| securityToken |Specify a security token for the user account. <br/><br/>To learn about security tokens in general, see [Security and the API](https://developer.salesforce.com/docs/atlas.en-us.api.met). |No |
| apiVersion | Specify the Salesforce REST/Bulk API version to use, e.g. `48.0`. By default, the connector uses [v45](https://developer.salesforce.com/docs/atlas.en-us.218.0.api_rest.meta/api_rest/dome_versions.htm) to copy data from Salesforce, and uses [v40](https://developer.salesforce.com/docs/atlas.en-us.208.0.api_asynch.meta/api_asynch/asynch_api_intro.htm) to copy data to Salesforce. | No |
-| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If not specified, it uses the default Azure Integration Runtime. | No for source, Yes for sink if the source linked service doesn't have integration runtime |
-
->[!IMPORTANT]
->When you copy data into Salesforce Service Cloud, the default Azure Integration Runtime can't be used to execute copy. In other words, if your source linked service doesn't have a specified integration runtime, explicitly [create an Azure Integration Runtime](create-azure-integration-runtime.md#create-azure-ir) with a location near your Salesforce Service Cloud instance. Associate the Salesforce Service Cloud linked service as in the following example.
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If not specified, it uses the default Azure Integration Runtime. | No |
**Example: Store credentials in Data Factory**
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce.md
@@ -10,7 +10,7 @@
Previously updated : 01/11/2021 Last updated : 02/02/2021

# Copy data from and to Salesforce by using Azure Data Factory
@@ -70,10 +70,7 @@ The following properties are supported for the Salesforce linked service.
| password |Specify a password for the user account.<br/><br/>Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
| securityToken |Specify a security token for the user account. <br/><br/>To learn about security tokens in general, see [Security and the API](https://developer.salesforce.com/docs/atlas.en-us.api.met). |No |
| apiVersion | Specify the Salesforce REST/Bulk API version to use, e.g. `48.0`. By default, the connector uses [v45](https://developer.salesforce.com/docs/atlas.en-us.218.0.api_rest.meta/api_rest/dome_versions.htm) to copy data from Salesforce, and uses [v40](https://developer.salesforce.com/docs/atlas.en-us.208.0.api_asynch.meta/api_asynch/asynch_api_intro.htm) to copy data to Salesforce. | No |
-| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If not specified, it uses the default Azure Integration Runtime. | No for source, Yes for sink if the source linked service doesn't have integration runtime |
-
->[!IMPORTANT]
->When you copy data into Salesforce, the default Azure Integration Runtime can't be used to execute copy. In other words, if your source linked service doesn't have a specified integration runtime, explicitly [create an Azure Integration Runtime](create-azure-integration-runtime.md#create-azure-ir) with a location near your Salesforce instance. Associate the Salesforce linked service as in the following example.
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If not specified, it uses the default Azure Integration Runtime. | No |
**Example: Store credentials in Data Factory**
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-business-warehouse-open-hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse-open-hub.md
@@ -11,10 +11,11 @@
Previously updated : 06/12/2020 Last updated : 02/02/2020

# Copy data from SAP Business Warehouse via Open Hub using Azure Data Factory
+
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]

This article outlines how to use the Copy Activity in Azure Data Factory to copy data from an SAP Business Warehouse (BW) via Open Hub. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
@@ -33,8 +34,8 @@ You can copy data from SAP Business Warehouse via Open Hub to any supported sink
Specifically, this SAP Business Warehouse Open Hub connector supports:

-- SAP Business Warehouse **version 7.01 or higher (in a recent SAP Support Package Stack released after the year 2015)**. SAP BW4/HANA is not supported by this connector.
-- Copying data via Open Hub Destination local table which underneath can be DSO, InfoCube, MultiProvider, DataSource, etc.
+- SAP Business Warehouse **version 7.01 or higher (in a recent SAP Support Package Stack released after the year 2015)**. SAP BW/4HANA is not supported by this connector.
+- Copying data via Open Hub Destination local table, which underneath can be DSO, InfoCube, MultiProvider, DataSource, etc.
- Copying data using basic authentication.
- Connecting to an SAP application server or SAP message server.
- Retrieving data via RFC.
@@ -54,7 +55,7 @@ ADF SAP BW Open Hub Connector offers two optional properties: `excludeLastReques
- **excludeLastRequestId**: Whether to exclude the records of the last request. Default value is true.
- **baseRequestId**: The ID of request for delta loading. Once it is set, only data with requestId larger than the value of this property will be retrieved.
-Overall, the extraction from SAP InfoProviders to Azure Data Factory (ADF) consists of 2 steps:
+Overall, the extraction from SAP InfoProviders to Azure Data Factory (ADF) consists of two steps:
1. **SAP BW Data Transfer Process (DTP)** This step copies the data from an SAP BW InfoProvider to an SAP BW Open Hub table
@@ -66,11 +67,11 @@ Overall, the extraction from SAP InfoProviders to Azure Data Factory (ADF) consi
In the first step, a DTP is executed. Each execution creates a new SAP request ID. The request ID is stored in the Open Hub table and is then used by the ADF connector to identify the delta. The two steps run asynchronously: the DTP is triggered by SAP, and the ADF data copy is triggered through ADF.
-By default, ADF is not reading the latest delta from the Open Hub table (option "exclude last request" is true). Hereby, the data in ADF is not 100% up-to-date with the data in the Open Hub table (the last delta is missing). In return, this procedure ensures that no rows get lost caused by the asynchronous extraction. It works fine even when ADF is reading the Open Hub table while the DTP is still writing into the same table.
+By default, ADF is not reading the latest delta from the Open Hub table (option "exclude last request" is true). Hereby, the data in ADF is not 100% up to date with the data in the Open Hub table (the last delta is missing). In return, this procedure ensures that no rows get lost caused by the asynchronous extraction. It works fine even when ADF is reading the Open Hub table while the DTP is still writing into the same table.
You typically store the maximum request ID copied in the last ADF run in a staging data store (such as Azure Blob storage in the above diagram). That way, the same request is not read a second time by ADF in the subsequent run. Meanwhile, note that the data is not automatically deleted from the Open Hub table.
-For proper delta handling it is not allowed to have request IDs from different DTPs in the same Open Hub table. Therefore, you must not create more than one DTP for each Open Hub Destination (OHD). When needing Full and Delta extraction from the same InfoProvider, you should create two OHDs for the same InfoProvider.
+For proper delta handling, it is not allowed to have request IDs from different DTPs in the same Open Hub table. Therefore, you must not create more than one DTP for each Open Hub Destination (OHD). When needing Full and Delta extraction from the same InfoProvider, you should create two OHDs for the same InfoProvider.
## Prerequisites
@@ -87,7 +88,7 @@ To use this SAP Business Warehouse Open Hub connector, you need to:
- Authorization for RFC and SAP BW. - Permissions to the "Execute" Activity of Authorization Object "S_SDSAUTH". -- Create SAP Open Hub Destination type as **Database Table** with "Technical Key" option checked. It is also recommended to leave the Deleting Data from Table as unchecked although it is not required. Leverage the DTP (directly execute or integrate into existing process chain) to land data from source object (such as cube) you have chosen to the open hub destination table.
+- Create SAP Open Hub Destination type as **Database Table** with "Technical Key" option checked. It is also recommended to leave the Deleting Data from Table as unchecked although it is not required. Use the DTP (directly execute or integrate into existing process chain) to land data from source object (such as cube) you have chosen to the open hub destination table.
## Getting started
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-cloud-for-customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-cloud-for-customer.md
@@ -11,10 +11,11 @@
Previously updated : 06/12/2020 Last updated : 02/02/2021 # Copy data from SAP Cloud for Customer (C4C) using Azure Data Factory+ [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] This article outlines how to use the Copy Activity in Azure Data Factory to copy data from/to SAP Cloud for Customer (C4C). It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
@@ -49,10 +50,7 @@ The following properties are supported for SAP Cloud for Customer linked service
| url | The URL of the SAP C4C OData service. | Yes | | username | Specify the user name to connect to the SAP C4C. | Yes | | password | Specify the password for the user account you specified for the username. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
-| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. If not specified, it uses the default Azure Integration Runtime. | No for source, Yes for sink |
-
->[!IMPORTANT]
->To copy data into SAP Cloud for Customer, explicitly [create an Azure IR](create-azure-integration-runtime.md#create-azure-ir) with a location near your SAP Cloud for Customer, and associate in the linked service as the following example:
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. If not specified, it uses the default Azure Integration Runtime. | No |
**Example:**
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-table.md
@@ -10,7 +10,7 @@
Previously updated : 09/01/2020 Last updated : 02/01/2021 # Copy data from an SAP table by using Azure Data Factory
@@ -97,7 +97,7 @@ The following properties are supported for the SAP BW Open Hub linked service:
| `sncQop` | The SNC Quality of Protection level to apply.<br/>Applies when `sncMode` is On. <br/>Allowed values are `1` (Authentication), `2` (Integrity), `3` (Privacy), `8` (Default), `9` (Maximum). | No | | `connectVia` | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. A self-hosted integration runtime is required, as mentioned earlier in [Prerequisites](#prerequisites). |Yes |
-**Example 1: Connect to an SAP application server**
+### Example 1: Connect to an SAP application server
```json {
@@ -290,6 +290,60 @@ In `rfcTableOptions`, you can use the following common SAP query operators to fi
] ```
+## Join SAP tables
+
+Currently, the SAP Table connector supports only a single table with the default function module. To get joined data from multiple tables, you can use the [customRfcReadTableFunctionModule](#copy-activity-properties) property in the SAP Table connector by following these steps:
+
+- [Write a custom function module](#create-custom-function-module), which can take a query as OPTIONS and apply your own logic to retrieve the data.
+- For the "Custom function module", enter the name of your custom function module.
+- For the "RFC table options", specify the table join statement to feed into your function module as OPTIONS, such as "`<TABLE1>` INNER JOIN `<TABLE2>` ON COLUMN0".
+
+Below is an example:
+
+![Sap Table Join](./media/connector-sap-table/sap-table-join.png)
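Putting the steps above together, the copy activity source might be configured as in the following sketch (the `SapTableSource` type is the connector's source type; the function module name `Z_CUSTOM_RFC_READ_TABLE` and the table names are hypothetical):

```json
"source": {
    "type": "SapTableSource",
    "customRfcReadTableFunctionModule": "Z_CUSTOM_RFC_READ_TABLE",
    "rfcTableOptions": "TABLE1 INNER JOIN TABLE2 ON COLUMN0"
}
```

The connector passes the `rfcTableOptions` value to your function module as OPTIONS, so your module's logic determines how the join statement is interpreted.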
+
+>[!TIP]
+>You can also consider aggregating the joined data in a VIEW, which is supported by the SAP Table connector.
+>You can also extract the related tables into Azure (for example, Azure Storage or Azure SQL Database), and then use a data flow to perform further joins or filtering.
+
+## Create custom function module
+
+For an SAP table, the copy source currently supports the [customRfcReadTableFunctionModule](#copy-activity-properties) property, which allows you to apply your own logic when processing data.
+
+As quick guidance, here are some requirements to get started with a custom function module:
+
+- Definition:
+
+ ![Definition](./media/connector-sap-table/custom-function-module-definition.png)
+
+- Export data into one of the tables below:
+
+ ![Export table 1](./media/connector-sap-table/export-table-1.png)
+
+ ![Export table 2](./media/connector-sap-table/export-table-2.png)
+
+Here's how the SAP Table connector works with a custom function module:
+
+1. Build a connection to the SAP server via SAP NCO.
+
+1. Invoke "Custom function module" with the parameters set as follows:
+
+   - QUERY_TABLE: the table name you set in the ADF SAP Table dataset.
+   - Delimiter: the delimiter you set in the ADF SAP Table source.
+   - ROWCOUNT/Option/Fields: the row count, aggregated options, and fields you set in the ADF SAP Table source.
+
+1. Get the result and parse the data in the following ways:
+
+ 1. Parse the value in the Fields table to get the schemas.
+
+ ![Parse values in Fields](./media/connector-sap-table/parse-values.png)
+
+ 1. Get the values of the output table to see which table contains these values.
+
+ ![Get values in output table](./media/connector-sap-table/get-values.png)
+
+ 1. Get the values in the OUT_TABLE, parse the data and then write it into the sink.
+ ## Data type mappings for an SAP table When you're copying data from an SAP table, the following mappings are used from the SAP table data types to the Azure Data Factory interim data types. To learn how the copy activity maps the source schema and data type to the sink, see [Schema and data type mappings](copy-activity-schema-and-type-mapping.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sharepoint-online-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sharepoint-online-list.md
@@ -179,6 +179,9 @@ To copy data from SharePoint Online List, the following properties are supported
] ```
+> [!NOTE]
+> In Azure Data Factory, you can't select more than one *choice* data type for a SharePoint Online List source.
+ ## Data type mapping for SharePoint Online List When you copy data from SharePoint Online List, the following mappings are used between SharePoint Online List data types and Azure Data Factory interim data types.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment-improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment-improvements.md
@@ -0,0 +1,189 @@
+
+ Title: Automated publishing for continuous integration and delivery
+description: Learn how to publish for continuous integration and delivery automatically.
+
+documentationcenter: ''
+++++++ Last updated : 02/02/2021++
+# Automated publishing for continuous integration and delivery
++
+## Overview
+
+Continuous integration is the practice of testing each change made to your codebase automatically and as early as possible. Continuous delivery follows the testing that happens during continuous integration and pushes changes to a staging or production system.
+
+In Azure Data Factory, continuous integration and delivery (CI/CD) means moving Data Factory pipelines from one environment (development, test, production) to another. Azure Data Factory utilizes [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to store the configuration of your various ADF entities (pipelines, datasets, data flows, and so on). There are two suggested methods to promote a data factory to another environment:
+
+- Automated deployment using Data Factory's integration with [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines).
+- Manually upload a Resource Manager template using Data Factory UX integration with Azure Resource Manager.
+
+For more information, see [Continuous integration and delivery in Azure Data Factory](continuous-integration-deployment.md).
+
+In this article, we focus on the continuous deployment improvements and the automated publish feature for CI/CD.
+
+## Continuous deployment improvements
+
+The "Automated publish" feature takes the *validate all* and *export Azure Resource Manager (ARM) template* features from the ADF UX and makes the logic consumable via a publicly available npm package, [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This allows you to programmatically trigger these actions instead of going to the ADF UI and selecting a button, which gives your CI/CD pipelines a truer continuous integration experience.
+
+### Current CI/CD flow
+
+1. Each user makes changes in their private branches.
+2. Push to master is forbidden; users must create a PR against master to make changes.
+3. Users must load the ADF UI and select **Publish** to deploy changes to Data Factory and generate the ARM templates in the publish branch.
+4. The DevOps release pipeline is configured to create a new release and deploy the ARM template each time a new change is pushed to the publish branch.
+
+![Current CI/CD Flow](media/continuous-integration-deployment-improvements/current-ci-cd-flow.png)
+
+### Manual step
+
+In the current CI/CD flow, the UX is the intermediary that creates the ARM template. A user must go to the ADF UI and manually select **Publish** to start ARM template generation and drop it in the publish branch, which is an awkward manual step in an otherwise automated flow.
+
+### The new CI/CD flow
+
+1. Each user makes changes in their private branches.
+2. Push to master is forbidden; users must create a PR against master to make changes.
+3. **The Azure DevOps build pipeline is triggered every time a new commit is made to master. It validates the resources and generates an ARM template as an artifact if validation succeeds.**
+4. The DevOps release pipeline is configured to create a new release and deploy the ARM template each time a new build is available.
+
+![New CI/CD Flow](media/continuous-integration-deployment-improvements/new-ci-cd-flow.png)
+
+### What changed?
+
+- We now have a build process using a DevOps build pipeline.
+- The build pipeline uses the ADFUtilities npm package, which validates all the resources and generates the ARM templates (single and linked templates).
+- The build pipeline, instead of the ADF UI (**Publish** button), is responsible for validating ADF resources and generating the ARM template.
+- The DevOps release definition now consumes this new build pipeline instead of the Git artifact.
+
+> [!NOTE]
+> You can continue to use the existing mechanism (the adf_publish branch), or you can use the new flow. Both are supported.
+
+## Package overview
+
+There are two commands currently available in the package:
+- Export ARM template
+- Validate
+
+### Export ARM template
+
+Run `npm run start export <rootFolder> <factoryId> [outputFolder]` to export the ARM template by using the resources of a given folder. This command also runs a validation check before generating the ARM template. Here's an example:
+
+```
+npm run start export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput
+```
+
+- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.
+- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`.
+- `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
+
+> [!NOTE]
+> The ARM template generated is not published to the `Live` version of the factory. Deployment should be done using a CI/CD pipeline.
+
+
+### Validate
+
+Run `npm run start validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example:
+
+```
+npm run start validate C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory
+```
+
+- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.
+- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`.
++
+## Create an Azure pipeline
+
+While npm packages can be consumed in various ways, one of the primary benefits is consuming them via an [Azure pipeline](/azure/devops/pipelines/get-started/what-is-azure-pipelines). On each merge into your collaboration branch, a pipeline can be triggered that first validates all of the code and then exports the ARM template into a [build artifact](/azure/devops/pipelines/artifacts/build-artifacts) that can be consumed by a release pipeline. **This differs from the current CI/CD process in that you point your release pipeline at this artifact instead of the existing `adf_publish` branch.**
+
+Follow these steps to get started:
+
+1. Open an Azure DevOps project and go to "Pipelines". Select "New Pipeline".
+
+ ![New Pipeline](media/continuous-integration-deployment-improvements/new-pipeline.png)
+
+2. Select the repository where you want to save your pipeline YAML script. We recommend saving it in a *build* folder in the same repository as your ADF resources. Also ensure that the repository contains a **package.json** file with the package name, as shown in the following example:
+
+ ```json
+ {
+ "scripts":{
+ "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index"
+ },
+ "dependencies":{
+ "@microsoft/azure-data-factory-utilities":"^0.1.2"
+ }
+ }
+ ```
+
+3. Select *Starter pipeline*. If you have uploaded or merged the YAML file (as shown in the following example), you can also point directly at that file and edit it.
+
+ ![Starter pipeline](media/continuous-integration-deployment-improvements/starter-pipeline.png)
+
+ ```yaml
+ # Sample YAML file to validate and export an ARM template into a Build Artifact
+ # Requires a package.json file located in the target repository
+
+ trigger:
+ - main #collaboration branch
+
+ pool:
+ vmImage: 'ubuntu-latest'
+
+ steps:
+
+ # Installs Node and the npm packages saved in your package.json file in the build
+
+ - task: NodeTool@0
+ inputs:
+ versionSpec: '10.x'
+ displayName: 'Install Node.js'
+
+ - task: Npm@1
+ inputs:
+ command: 'install'
+ verbose: true
+ displayName: 'Install npm package'
+
+ # Validates all of the ADF resources in the repository. You will get the same validation errors as when "Validate All" is clicked
+ # Enter the appropriate subscription and name for the source factory
+
+ - task: Npm@1
+ inputs:
+ command: 'custom'
+ customCommand: 'run build validate $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'
+ displayName: 'Validate'
+
+ # Validate and then generate the ARM template into the destination folder. Same as clicking "Publish" from UX
+ # The ARM template generated is not published to the 'Live' version of the factory. Deployment should be done using a CI/CD pipeline.
+
+ - task: Npm@1
+ inputs:
+ command: 'custom'
+ customCommand: 'run build export $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName "ArmTemplate"'
+ displayName: 'Validate and Generate ARM template'
+
+ # Publish the Artifact to be used as a source for a release pipeline
+
+ - task: PublishPipelineArtifact@1
+ inputs:
+ targetPath: '$(Build.Repository.LocalPath)/ArmTemplate'
+ artifact: 'ArmTemplates'
+ publishLocation: 'pipeline'
+ ```
+
+4. Enter your YAML code. We recommend using the preceding YAML file as a starting point.
+5. Save and run. If you used the YAML file, the pipeline is triggered every time the main branch is updated.
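The published `ArmTemplates` artifact can then be consumed by a release pipeline. As a hedged sketch (the service connection name, subscription ID, resource group, location, and factory name are placeholders; the template file names follow what the export step typically generates), a deployment job might include steps like these:

```yaml
# Illustrative deployment steps; adjust all names to your environment
- download: current
  artifact: ArmTemplates

- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'YourServiceConnection'   # placeholder service connection
    subscriptionId: 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
    resourceGroupName: 'testResourceGroup'
    location: 'East US'
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/ArmTemplates/ARMTemplateForFactory.json'
    csmParametersFile: '$(Pipeline.Workspace)/ArmTemplates/ARMTemplateParametersForFactory.json'
    overrideParameters: '-factoryName "yourTargetFactoryName"'
```

Pointing the deployment at the build artifact, rather than at the `adf_publish` branch, is what makes the flow fully continuous.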
+
+## Next steps
+
+Learn more about continuous integration and delivery in Data Factory:
+
+- [Continuous integration and delivery in Azure Data Factory](continuous-integration-deployment.md).
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md
@@ -1,6 +1,6 @@
Title: Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell
-description: Describes how to create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device using Azure PowerShell.
+ Title: Deploy VMs on your Azure Stack Edge device via Azure PowerShell
+description: Describes how to create and manage virtual machines on an Azure Stack Edge device by using Azure PowerShell.
@@ -9,18 +9,18 @@
Last updated 01/22/2021
-#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device. I want to use APIs so that I can efficiently manage my VMs.
-# Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell
+# Deploy VMs on your Azure Stack Edge device via Azure PowerShell
-This article describes how to create and manage a VM on your Azure Stack Edge Pro device using Azure PowerShell. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+This article describes how to create and manage a VM on your Azure Stack Edge device by using Azure PowerShell. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
## VM deployment workflow
-The deployment workflow is illustrated in the following diagram.
+Here's what the deployment workflow looks like:
-![VM deployment workflow](media/azure-stack-edge-gpu-deploy-virtual-machine-powershell/vm-workflow-r.svg)
+![Diagram of the VM deployment workflow.](media/azure-stack-edge-gpu-deploy-virtual-machine-powershell/vm-workflow-r.svg)
## Prerequisites
@@ -29,7 +29,7 @@ The deployment workflow is illustrated in the following diagram.
## Query for built-in subscription on the device
-For Azure Resource Manager, only a single user-visible fixed subscription is supported. This subscription is unique per device, and the subscription name or subscription ID cannot be changed.
+For Azure Resource Manager, only a single fixed subscription that's user-visible is supported. This subscription is unique per device, and the subscription name or subscription ID can't be changed.
This subscription contains all the resources that are created required for VM creation.
@@ -38,13 +38,13 @@ This subscription contains all the resources that are created required for VM cr
This subscription is used to deploy the VMs.
-1. To list this subscription, type:
+1. To list this subscription, enter:
```powershell Get-AzureRmSubscription ```
- A sample output is shown below.
+ Here's a sample output:
```powershell PS C:\windows\system32> Get-AzureRmSubscription
@@ -56,16 +56,16 @@ This subscription is used to deploy the VMs.
PS C:\windows\system32> ```
-3. Get the list of the registered resource providers running on the device. This list typically includes Compute, Network, and Storage.
+1. Get the list of the registered resource providers running on the device. This list typically includes compute, network, and storage.
```powershell Get-AzureRMResourceProvider ``` > [!NOTE]
- > The resource providers are pre-registered and cannot be modified or changed.
+ > The resource providers are pre-registered, and can't be modified or changed.
- A sample output is shown below:
+ Here's a sample output:
```powershell Get-AzureRmResourceProvider
@@ -97,16 +97,16 @@ This subscription is used to deploy the VMs.
## Create a resource group
-Create an Azure resource group with [New-AzureRmResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which the Azure resources such as storage account, disk, managed disk are deployed and managed.
+Create an Azure resource group with [New-AzureRmResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources, such as a storage account, disk, and managed disk, are deployed and managed.
> [!IMPORTANT]
-> All the resources are created in the same location as that of the device and the location is set to **DBELocal**.
+> All the resources are created in the same location as that of the device, and the location is set to **DBELocal**.
```powershell New-AzureRmResourceGroup -Name <Resource group name> -Location DBELocal ```
-A sample output is shown below.
+Here's a sample output:
```powershell New-AzureRmResourceGroup -Name rg191113014333 -Location DBELocal
@@ -115,16 +115,16 @@ Successfully created Resource Group:rg191113014333
## Create a storage account
-Create a new storage account using the resource group created in the previous step. This account is a **local storage account** that will be used to upload the virtual disk image for the VM.
+Create a new storage account by using the resource group created in the previous step. This is a local storage account that you use to upload the virtual disk image for the VM.
```powershell New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Resource group name> -Location DBELocal -SkuName Standard_LRS ``` > [!NOTE]
-> Only the local storage accounts such as Locally redundant storage (Standard_LRS or Premium_LRS) can be created via Azure Resource Manager. To create tiered storage accounts, see the steps in [Add, connect to storage accounts on your Azure Stack Edge Pro](azure-stack-edge-j-series-deploy-add-storage-accounts.md).
+> Using Azure Resource Manager, you can only create local storage accounts, such as locally redundant storage (standard or premium). To create tiered storage accounts, see [Tutorial: Transfer data via storage accounts with Azure Stack Edge Pro GPU](azure-stack-edge-j-series-deploy-add-storage-accounts.md).
-A sample output is shown below.
+Here's a sample output:
```powershell New-AzureRmStorageAccount -Name sa191113014333 -ResourceGroupName rg191113014333 -SkuName Standard_LRS -Location DBELocal
@@ -155,7 +155,7 @@ Context : Microsoft.WindowsAzure.Commands.Common.Storage.LazyAzur
ExtendedProperties : {} ```
-To get the storage account key, run the `Get-AzureRmStorageAccountKey` command. A sample output of this command is shown here.
+To get the storage account key, run the `Get-AzureRmStorageAccountKey` command. Here's a sample output of this command:
```powershell PS C:\Users\Administrator> Get-AzureRmStorageAccountKey
@@ -172,20 +172,19 @@ key1 /IjVJN+sSf7FMKiiPLlDm8mc9P4wtcmhhbnCa7...
key2 gd34TcaDzDgsY9JtDNMUgLDOItUU0Qur3CBo6Q... ```
-## Add blob URI to hosts file
+## Add the blob URI to the host file
-You already added the blob URI in the hosts file for the client that you are using to connect to Blob storage in the section [Modify host file for endpoint name resolution](azure-stack-edge-j-series-connect-resource-manager.md#step-5-modify-host-file-for-endpoint-name-resolution). This entry was used to add the blob URI:
+You already added the blob URI in the hosts file for the client that you're using to connect to Azure Blob Storage in the section [Modify host file for endpoint name resolution](azure-stack-edge-j-series-connect-resource-manager.md#step-5-modify-host-file-for-endpoint-name-resolution). This entry was used to add the blob URI:
\<Azure consistent network services VIP \> \<storage name\>.blob.\<appliance name\>.\<dnsdomain\> - ## Install certificates
-If you are using *https*, then you need to install appropriate certificates on your device. In this case, install the blob endpoint certificate. For more information, see how to create and upload certificates in [Manage certificates](azure-stack-edge-j-series-manage-certificates.md).
+If you're using *https*, then you need to install appropriate certificates on your device. In this case, install the blob endpoint certificate. For more information, see how to create and upload certificates in [Use certificates with Azure Stack Edge Pro GPU device](azure-stack-edge-j-series-manage-certificates.md).
## Upload a VHD
-Copy any disk images to be used into page blobs in the local storage account that you created in the earlier steps. You can use a tool such as [AzCopy](../storage/common/storage-use-azcopy-v10.md) to upload the VHD to the storage account that you created in earlier steps.
+Copy any disk images to be used into page blobs in the local storage account that you created in the earlier steps. You can use a tool such as [AzCopy](../storage/common/storage-use-azcopy-v10.md) to upload the VHD to the storage account.
<!--Before you use AzCopy, make sure that the [AzCopy is configured correctly](#configure-azcopy) for use with the blob storage REST API version that you are using with your Azure Stack Edge Pro device.
@@ -194,11 +193,11 @@ AzCopy /Source:<sourceDirectoryForVHD> /Dest:<blobContainerUri> /DestKey:<storag
``` > [!NOTE]
-> Set `BlobType` to page for creating a managed disk out of VHD. Set `BlobType` to block when writing to tiered storage accounts using AzCopy.
+> Set `BlobType` to `page` for creating a managed disk out of VHD. Set `BlobType` to `block` when you're writing to tiered storage accounts by using AzCopy.
-You can download the disk images from the marketplace. For detailed steps, go to [Get the virtual disk image from Azure marketplace](azure-stack-edge-j-series-create-virtual-machine-image.md).
+You can download the disk images from Azure Marketplace. For detailed steps, see [Get the virtual disk image from Azure Marketplace](azure-stack-edge-j-series-create-virtual-machine-image.md).
-A sample output using AzCopy 7.3 is shown below. For more information on this command, go to [Upload VHD file to storage account using AzCopy](../devtest-labs/devtest-lab-upload-vhd-using-azcopy.md).
+Here's a sample output using AzCopy 7.3. For more information on this command, see [Upload VHD file to storage account using AzCopy](../devtest-labs/devtest-lab-upload-vhd-using-azcopy.md).
```powershell
@@ -218,7 +217,7 @@ $StorageAccountSAS = New-AzureStorageAccountSASToken -Service Blob,File,Queue,Ta
<AzCopy exe path> cp "Full VHD path" "<BlobEndPoint>/<ContainerName><StorageAccountSAS>" ```
-Here is an example output:
+Here's an example output:
```powershell $ContainerName = <ContainerName>
@@ -245,7 +244,8 @@ Create a managed disk from the uploaded VHD.
```powershell $DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri "Source URL for your VHD" ```
-A sample output is shown below:
+Here's a sample output:
+ <code>$DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri http://</code><code>sa191113014333.blob.dbe-1dcmhq2.microsoftdatabox.com/vmimages/ubuntu13.vhd</code>
@@ -253,7 +253,7 @@ $DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import ΓÇôS
New-AzureRMDisk -ResourceGroupName <Resource group name> -DiskName <Disk name> -Disk $DiskConfig ```
-A sample output is shown below. For more information on this cmdlet, go to [New-AzureRmDisk](/powershell/module/azurerm.compute/new-azurermdisk?view=azurermps-6.13.0&preserve-view=true).
+Here's a sample output. For more information on this cmdlet, go to [New-AzureRmDisk](/powershell/module/azurerm.compute/new-azurermdisk?view=azurermps-6.13.0&preserve-view=true).
```powershell Tags :
@@ -293,7 +293,7 @@ Set-AzureRmImageOsDisk -Image $imageConfig -OsType 'Linux' -OsState 'Generalized
New-AzureRmImage -Image $imageConfig -ImageName <Image name> -ResourceGroupName <Resource group name> ```
-A sample output is shown below. For more information on this cmdlet, go to [New-AzureRmImage](/powershell/module/azurerm.compute/new-azurermimage?view=azurermps-6.13.0&preserve-view=true).
+Here's a sample output. For more information on this cmdlet, go to [New-AzureRmImage](/powershell/module/azurerm.compute/new-azurermimage?view=azurermps-6.13.0&preserve-view=true).
```powershell New-AzureRmImage -Image Microsoft.Azure.Commands.Compute.Automation.Models.PSImage -ImageName ig191113014333 -ResourceGroupName rg191113014333
@@ -314,16 +314,15 @@ Tags : {}
You must create one virtual network and associate a virtual network interface before you create and deploy the VM. > [!IMPORTANT]
-> While creating virtual network and virtual network interface, the following rules apply:
-> - Only one Vnet can be created (even across resource groups) and it must match exactly with the logical network in terms of the address space.
-> - Only one subnet will be allowed in the Vnet. The subnet must be the exact same address space as the Vnet.
-> - Only static allocation method will be allowed during Vnic creation and user needs to provide a private IP address.
+> The following rules apply:
+> - You can create only one virtual network, even across resource groups. The virtual network must have exactly the same address space as the logical network.
+> - The virtual network can have only one subnet. The subnet must have exactly the same address space as the virtual network.
+> - When you create the virtual network interface card, you can use only the static allocation method. The user needs to provide a private IP address.
-
-**Query the automatically created Vnet**
+### Query the automatically created virtual network
-When you enable compute from the local UI of your device, a Vnet `ASEVNET` is created automatically under `ASERG` resource group.
-Use the following command to query the existing Vnet:
+When you enable compute from the local UI of your device, a virtual network called `ASEVNET` is created automatically under the `ASERG` resource group.
+Use the following command to query the existing virtual network:
```powershell $aRmVN = Get-AzureRMVirtualNetwork -Name ASEVNET -ResourceGroupName ASERG
@@ -334,14 +333,16 @@ $subNetId=New-AzureRmVirtualNetworkSubnetConfig -Name <Subnet name> -AddressPref
$aRmVN = New-AzureRmVirtualNetwork -ResourceGroupName <Resource group name> -Name <Vnet name> -Location DBELocal -AddressPrefix <Address prefix> -Subnet $subNetId ```-->
-**Create a Vnic using the Vnet subnet ID**
+### Create a virtual network interface card
+
+Here's the command to create a virtual network interface card by using the virtual network subnet ID:
```powershell $ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name <IP config Name> -SubnetId $aRmVN.Subnets[0].Id -PrivateIpAddress <Private IP> $Nic = New-AzureRmNetworkInterface -Name <Nic name> -ResourceGroupName <Resource group name> -Location DBELocal -IpConfiguration $ipConfig ```
-The sample output of these commands is shown below:
+Here's the sample output of these commands:
```powershell PS C:\Users\Administrator> $subNetId=New-AzureRmVirtualNetworkSubnetConfig -Name my-ase-subnet -AddressPrefix "5.5.0.0/16"
@@ -403,7 +404,7 @@ Primary : True
MacAddress : 00155D18E432 : ```
-Optionally, while creating a Vnic for a VM, you can pass the public IP. In this instance, the public IP will return the private IP.
+Optionally, while creating a virtual network interface card for a VM, you can pass the public IP. In this instance, the public IP returns the private IP.
```powershell New-AzureRmPublicIPAddress -Name <Public IP> -ResourceGroupName <ResourceGroupName> -AllocationMethod Static -Location DBELocal
@@ -411,8 +412,7 @@ $publicIP = (Get-AzureRmPublicIPAddress -Name <Public IP> -ResourceGroupName <Re
$ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name <ConfigName> -PublicIpAddressId $publicIP -SubnetId $subNetId ``` -
-**Create a VM**
+### Create a VM
You can now use the VM image to create a VM and attach it to the virtual network that you created earlier.
@@ -456,15 +456,15 @@ Follow these steps to connect to a Windows VM.
[!INCLUDE [azure-stack-edge-gateway-connect-vm](../../includes/azure-stack-edge-gateway-connect-virtual-machine-windows.md)]
-<!--Connect to the VM using the private IP that you passed during the VM creation.
+<!--Connect to the VM by using the private IP that you passed during the VM creation.
Open an SSH session to connect with the IP address. `ssh -l <username> <ip address>`
-When prompted, provide the password that you used when creating the VM.
+When you're prompted, provide the password that you used when creating the VM.
-If you need to provide the SSH key, use this command.
+If you need to provide the SSH key, use this command:
ssh -i c:/users/Administrator/.ssh/id_rsa Administrator@5.5.41.236
@@ -473,16 +473,16 @@ If you used a public IP address during VM creation, you can use that IP to conne
```powershell $publicIp = Get-AzureRmPublicIpAddress -Name <Public IP> -ResourceGroupName <Resource group name> ```
-The public IP in this case will be the same as the private IP that you passed during virtual network interface creation.-->
+The public IP in this case is the same as the private IP that you passed during the virtual network interface creation.-->
-## Manage VM
+## Manage the VM
-The following section describes some of the common operations around the VM that you will create on your Azure Stack Edge Pro device.
+The following sections describe some of the common operations that you can perform on the VM that you create on your Azure Stack Edge Pro device.
### List VMs running on the device
-To return a list of all the VMs running on your Azure Stack Edge Pro device, run the following command.
+To return a list of all the VMs running on your Azure Stack Edge Pro device, run this command:
`Get-AzureRmVM -ResourceGroupName <String> -Name <String>`
@@ -490,29 +490,27 @@ To return a list of all the VMs running on your Azure Stack Edge Pro device, run
### Turn on the VM
-Run the following cmdlet to turn on a virtual machine running on your device:
+Run the following cmdlet to turn on a virtual machine that's running on your device:
`Start-AzureRmVM [-Name] <String> [-ResourceGroupName] <String>` - For more information on this cmdlet, go to [Start-AzureRmVM](/powershell/module/azurerm.compute/start-azurermvm?view=azurermps-6.13.0&preserve-view=true). ### Suspend or shut down the VM
-Run the following cmdlet to stop or shut down a virtual machine running on your device:
+Run the following cmdlet to stop or shut down a virtual machine that's running on your device:
```powershell Stop-AzureRmVM [-Name] <String> [-StayProvisioned] [-ResourceGroupName] <String> ``` - For more information on this cmdlet, go to [Stop-AzureRmVM cmdlet](/powershell/module/azurerm.compute/stop-azurermvm?view=azurermps-6.13.0&preserve-view=true). ### Add a data disk
-If the workload requirements on your VM increase, then you may need to add a data disk.
+If the workload requirements on your VM increase, then you might need to add a data disk.
```powershell Add-AzureRmVMDataDisk -VM $VirtualMachine -Name "disk1" -VhdUri "https://contoso.blob.core.windows.net/vhds/diskstandard03.vhd" -LUN 0 -Caching ReadOnly -DiskSizeinGB 1 -CreateOption Empty
@@ -530,8 +528,6 @@ Remove-AzureRmVM [-Name] <String> [-ResourceGroupName] <String>
For more information on this cmdlet, go to [Remove-AzureRmVm cmdlet](/powershell/module/azurerm.compute/remove-azurermvm?view=azurermps-6.13.0&preserve-view=true). -- ## Next steps
-[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0&preserve-view=true)
+[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0&preserve-view=true)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-enable-azure-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-enable-azure-monitor.md
@@ -7,7 +7,7 @@
Previously updated : 11/05/2020 Last updated : 02/02/2021
@@ -26,7 +26,7 @@ Before you begin, you'll need:
- You've completed **Configure compute** step as per the [Tutorial: Configure compute on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-configure-compute.md) on your device. Your device should have an associated IoT Hub resource, an IoT device, and an IoT Edge device.
-## Create Log Analytics workspace.
+## Create Log Analytics workspace
Take the following steps to create a log analytics workspace. A log analytics workspace is a logical storage unit where the log data is collected and stored.
@@ -114,8 +114,8 @@ Take the following steps to enable Container Insights on your workspace.
1. Get the resource ID and location. Go to `Your Log Analytics workspace > General > Properties`. Copy the following information:
- - **resource ID** which is the fully qualified Azure resource ID of the Azure Log Analytics workspace.
- - **location** which is the Azure region.
+ - **resource ID**, which is the fully qualified Azure resource ID of the Azure Log Analytics workspace.
+ - **location**, which is the Azure region.
![Properties of Log Analytics workspace](media/azure-stack-edge-gpu-enable-azure-monitor/log-analytics-workspace-properties-1.png)
@@ -237,3 +237,4 @@ Take the following steps to enable Container Insights on your workspace.
## Next steps - Learn how to [Monitor Kubernetes workloads via the Kubernetes Dashboard](azure-stack-edge-gpu-monitor-kubernetes-dashboard.md).
+- Learn how to [Manage device event alert notifications](azure-stack-edge-gpu-manage-device-event-alert-notifications.md).
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-manage-device-event-alert-notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-device-event-alert-notifications.md
@@ -0,0 +1,125 @@
+
+ Title: Manage device event alert notifications for your Azure Stack Edge Pro resources | Microsoft Docs
+description: Describes how to use the Azure portal to manage alerts for device events on your Azure Stack Edge Pro resources.
++++++ Last updated : 02/02/2021++
+# Manage device event alert notifications on Azure Stack Edge Pro resources
+
+This article describes how to create action rules in the Azure portal to trigger or suppress alert notifications for device events that occur within a resource group, an Azure subscription, or an individual Azure Stack Edge resource. This article applies to all models of Azure Stack Edge.
+
+## About action rules
+
+An action rule can trigger or suppress alert notifications. The action rule is added to an *action group*: a set of notification preferences that's used to notify users who need to act on alerts triggered in different contexts for a resource or set of resources.
+
+For more information about action rules, see [Configuring an action rule](/azure/azure-monitor/platform/alerts-action-rules?tabs=portal#configuring-an-action-rule). For more information about action groups, see [Create and manage action groups in the Azure portal](/azure/azure-monitor/platform/action-groups).
+
+> [!NOTE]
+> The action rules feature is in preview. Some screens and steps might change as the process is refined.
++
+## Create an action rule
+
+Take the following steps in the Azure portal to create an action rule for your Azure Stack Edge device.
+
+> [!NOTE]
+> These steps create an action rule that sends notifications to an action group. For details about creating an action rule to suppress notifications, see [Configuring an action rule](/azure/azure-monitor/platform/alerts-action-rules?tabs=portal#configuring-an-action-rule).
+
+1. Go to the Azure Stack Edge device in the Azure portal, and then go to **Monitoring > Alerts**. Select **Manage actions**.
+
+ ![Monitoring Alerts, Manage actions view](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/action-rules-open-view-01.png)
+
+2. Select **Action rules (preview)**, and then select **+ New action rule**.
+
+ ![Manage actions, Action rules option](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/action-rules-open-view-02.png)
+
+3. On the **Create action rule** screen, use **Scope** to select an Azure subscription, resource group, or target resource. The action rule will act on all alerts generated within that scope.
+
+ 1. Choose **Select** by **Scope**.
+
+ ![Select a scope for a new action rule](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-scope-01.png)
+
+    2. Select the **Subscription** for the action rule, and optionally filter by a **Resource** type. To filter to Azure Stack Edge resources, select **Data Box Edge devices (dataBoxEdge)**.
+
+ The **Resource** area lists the available resources based on your selections.
+
+ ![Available resources on the Select Scope screen](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-scope-02.png)
+
+ 3. Select the check box by each resource you want to apply the rule to. You can select the subscription, resource groups, or individual resources. Then select **Done**.
+
+ ![Sample settings for an action rule scope](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-scope-03.png)
+
+ The **Create action rule** screen shows the selected scope.
+
+ ![Completed scope for an Azure Stack Edge action rule](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-scope-04.png)
+
+4. Use **Filter** options to narrow the application of the rule to a subset of alerts within the selected scope.
+
+ 1. Select **Add** to open the **Add filters** pane.
+
+ ![Option for adding filters to an action rule](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-filter-01.png)
+
+ 2. Under **Filters**, add each filter you want to apply. For each filter, select the filter type, **Operator**, and **Value**.
+
+ For a list of filter options, see [Filter criteria](/azure/azure-monitor/platform/alerts-action-rules?tabs=portal#filter-criteria).
+
+ The sample filters below apply to all alerts at Severity levels 2, 3, and 4 that the Monitor service raises for Azure Stack Edge resources.
+
+ When you finish adding filters, select **Done**.
+
+ ![Sample filter for an action rule](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-filter-02.png)
+
+5. On the **Create action rule** screen, select **Action group** to create a rule that sends notifications. Then, by **Actions**, choose **Select**.
+
+ ![Action group option for creating an action rule that sends notifications](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-action-group-01.png)
+
+ > [!NOTE]
+ > To create a rule that suppresses notifications, you would choose **Suppression**. For more information, see [Configuring an action rule](/azure/azure-monitor/platform/alerts-action-rules?tabs=portal#configuring-an-action-rule).
+
+6. Select the action group that you want to use with this action rule. Then choose **Select**. Your new action rule will be added to the notification preferences of the selected action group.
+
+ If you need to create a new action group, select **+ Create action group**, and follow the steps in [Create an action group by using the Azure portal](/azure/azure-monitor/platform/action-groups#create-an-action-group-by-using-the-azure-portal).
+
+ ![Select an action group to use with the rule, and then choose Select.](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-action-group-02.png)
+
+7. Give the new action rule a **Name** and **Description**, and assign the rule to a resource group.
+
+9. The new rule is enabled by default. If you don't want to start using the rule immediately, select **No** for **Enable rule upon creation**.
+
+10. When you finish your settings, select **Create**.
+
+ ![Completed settings for an action rule that will send alert notifications](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-completed-settings.png)
+
+ The **Action rules (Preview)** screen opens, but you might not see your new action rule immediately. The focus is **All** resource groups.
+
+11. To see your new action rule, select the resource group for the rule.
+
+ ![Action rules screen with the new rule displayed](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/new-action-rule-displayed.png)
++
+## View notifications
+
+Notifications go out when a new event triggers an alert for a resource that's within the scope of an action rule.
+
+The action group for a rule sets who receives a notification and the type of notification that's sent: email, a Short Message Service (SMS) message, or both.
+
+It might take a few minutes to receive notifications after an alert is triggered.
+
+The email notification will look similar to this one.
+
+![Sample email notification for an action rule](media/azure-stack-edge-gpu-manage-device-event-alert-notifications/sample-action-rule-email-notification.png)
++
+## Next steps
+
+<!--
+- See [Configure an action rule](/azure/azure-monitor/platform/alerts-action-rules?tabs=portal#configuring-an-action-rule) for more info about creating action rules that send or suppress alert notifications. -2 bullets referenced above. Making room for local tasks in "Next Steps." -->
+- See [Monitor your Azure Stack Edge Pro](azure-stack-edge-monitor.md) for info about reviewing device events, hardware status, and metrics charts.
+- See [Using Azure Monitor](azure-stack-edge-gpu-enable-azure-monitor.md) for info about optimizing Azure Monitor for Azure Stack Edge Pro GPU devices.
+- See [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/platform/alerts-metric.md) for info about managing individual alerts.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-virtual-machine-sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
@@ -1,6 +1,6 @@
Title: Supported virtual machine sizes on your Azure Stack Edge
-description: Describes the supported sizes for virtual machines (VMs) on an Azure Stack Edge Pro device templates.
+ Title: Supported virtual machine sizes on Azure Stack Edge
+description: Describes the supported sizes for virtual machines (VMs) on an Azure Stack Edge Pro device.
@@ -9,10 +9,10 @@
Last updated 12/21/2020
-#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs.
-# VM sizes and types for your Azure Stack Edge Pro
+# VM sizes and types for Azure Stack Edge Pro
This article describes the supported sizes for the virtual machines running on your Azure Stack Edge Pro devices. Use this article before you deploy virtual machines on your Azure Stack Edge Pro devices.
@@ -23,8 +23,8 @@ This article describes the supported sizes for the virtual machines running on y
## Unsupported VM operations and cmdlets
-Scale sets, availability sets, snapshots are not supported.
+Scale sets, availability sets, and snapshots aren't supported.
## Next steps
-[Deploy VM on your Azure Stack Edge Pro via the Azure portal ](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
+[Deploy VMs on your Azure Stack Edge Pro GPU device via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-monitor.md
@@ -7,7 +7,7 @@
Previously updated : 04/15/2019 Last updated : 02/02/2021 # Monitor your Azure Stack Edge Pro
@@ -21,7 +21,6 @@ In this article, you learn how to:
> * View device events and the corresponding alerts > * View hardware status of device components > * View capacity and transaction metrics for your device
-> * Configure and manage alerts
## View device events
@@ -54,29 +53,26 @@ A full list of the metrics is shown in the following table:
|Capacity metrics |Description | |-|-|
-|**Available capacity** | Refers to the size of the data that can be written to the device. In other words, this is the capacity that can be made available on the device. <br></br>You can free up the device capacity by deleting the local copy of files that have a copy on both the device as well as the cloud. |
-|**Total capacity** | Refers to the total bytes on the device to write data to. This is also referred to as the total size of the local cache. <br></br> You can now increase the capacity of an existing virtual device by adding a data disk. Add a data disk through the hypervisor management for the VM and then restart your VM. The local storage pool of the Gateway device will expand to accommodate the newly added data disk. <br></br>For more information, go to [Add a hard drive for Hyper-V virtual machine](https://www.youtube.com/watch?v=EWdqUw9tTe4). |
+|**Available capacity** | Refers to the size of the data that can be written to the device. In other words, this metric is the capacity that can be made available on the device. <br></br>You can free up the device capacity by deleting the local copy of files that have a copy on both the device and the cloud. |
+|**Total capacity** | Refers to the total bytes on the device to write data to, which is also referred to as the total size of the local cache. <br></br> You can now increase the capacity of an existing virtual device by adding a data disk. Add a data disk through the hypervisor management for the VM and then restart your VM. The local storage pool of the Gateway device will expand to accommodate the newly added data disk. <br></br>For more information, go to [Add a hard drive for Hyper-V virtual machine](https://www.youtube.com/watch?v=EWdqUw9tTe4). |
|Transaction metrics | Description | |-|| |**Cloud bytes uploaded (device)** | Sum of all the bytes uploaded across all the shares on your device |
-|**Cloud bytes uploaded (share)** | Bytes uploaded per share. This can be: <br></br> Avg, which is the (Sum of all the bytes uploaded per share / Number of shares), <br></br>Max, which is the maximum number of bytes uploaded from a share <br></br>Min, which is the minimum number of bytes uploaded from a share |
-|**Cloud download throughput (share)**| Bytes downloaded per share. This can be: <br></br> Avg, which is the (Sum of all bytes read or downloaded to a share / Number of shares) <br></br> Max, which is the maximum number of bytes downloaded from a share<br></br> and Min, which is the minimum number of bytes downloaded from a share |
+|**Cloud bytes uploaded (share)** | Bytes uploaded per share. This metric can be: <br></br> Avg, which is the (Sum of all the bytes uploaded per share / Number of shares), <br></br>Max, which is the maximum number of bytes uploaded from a share <br></br>Min, which is the minimum number of bytes uploaded from a share |
+|**Cloud download throughput (share)**| Bytes downloaded per share. This metric can be: <br></br> Avg, which is the (Sum of all bytes read or downloaded to a share / Number of shares) <br></br> Max, which is the maximum number of bytes downloaded from a share<br></br> and Min, which is the minimum number of bytes downloaded from a share |
|**Cloud read throughput** | Sum of all the bytes read from the cloud across all the shares on your device | |**Cloud upload throughput** | Sum of all the bytes written to the cloud across all the shares on your device | |**Cloud upload throughput (share)** | Sum of all bytes written to the cloud from a share / # of shares is average, max, and min per share |
-|**Read throughput (network)** | Includes the system network throughput for all the bytes read from the cloud. This view can include data that is not restricted to shares. <br></br>Splitting will show the traffic over all the network adapters on the device. This includes adapters that are not connected or enabled. |
-|**Write throughput (network)** | Includes the system network throughput for all the bytes written to the cloud. This view can include data that is not restricted to shares. <br></br>Splitting will show the traffic over all the network adapters on the device. This includes adapters that are not connected or enabled. |
+|**Read throughput (network)** | Includes the system network throughput for all the bytes read from the cloud. This view can include data that is not restricted to shares. <br></br>Splitting will show the traffic over all the network adapters on the device, including adapters that are not connected or enabled. |
+|**Write throughput (network)** | Includes the system network throughput for all the bytes written to the cloud. This view can include data that is not restricted to shares. <br></br>Splitting will show the traffic over all the network adapters on the device, including adapters that are not connected or enabled. |
| Edge compute metrics | Description | |-|| |**Edge compute - memory usage** | | |**Edge compute - percentage CPU** | |
-## Manage alerts
-- ## Next steps Learn how to [Manage bandwidth](azure-stack-edge-manage-bandwidth-schedules.md).
+Learn how to [Manage device event alert notifications](azure-stack-edge-gpu-manage-device-event-alert-notifications.md).
event-grid https://docs.microsoft.com/en-us/azure/event-grid/custom-event-to-queue-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-to-queue-storage.md
@@ -1,7 +1,7 @@
Title: 'Quickstart: Send custom events to storage queue - Event Grid, Azure CLI' description: 'Quickstart: Use Azure Event Grid and Azure CLI to publish a topic, and subscribe to that event. A storage queue is used for the endpoint.' Previously updated : 07/07/2020 Last updated : 02/02/2021
@@ -112,6 +112,11 @@ Navigate to the Queue storage in the portal, and notice that Event Grid sent tho
![Show messages](./media/custom-event-to-queue-storage/messages.png)
+> [!NOTE]
+> If you use an [Azure Queue storage trigger for Azure Functions](../azure-functions/functions-bindings-storage-queue-trigger.md) for a queue that receives messages from Event Grid, you may see the following error message on the function execution: `The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.`
+>
+> The reason is that when you use an [Azure Queue storage trigger](../azure-functions/functions-bindings-storage-queue-trigger.md), Azure Functions expects a **base64-encoded string**, but Event Grid sends messages to a storage queue in plain text format. Currently, it's not possible to configure the queue trigger for Azure Functions to accept plain text.
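The mismatch described in the note can be reproduced locally with nothing but the standard library. The `try_decode` helper below mimics the trigger's Base64 expectation; the function name and sample payload are illustrative, not part of the Functions runtime:

```python
import base64
import binascii

def try_decode(body: bytes):
    """Return the decoded bytes if body is valid Base64, else None."""
    try:
        return base64.b64decode(body, validate=True)
    except binascii.Error:
        return None

# Event Grid writes the message as plain text, which is not valid Base64,
# so a strict decode fails -- the same root cause as the trigger error above.
plain = b'{"eventType": "example"}'
assert try_decode(plain) is None

# A Base64-encoded body, which is what the queue trigger expects, decodes cleanly.
assert try_decode(base64.b64encode(plain)) == plain
```

If you control the publisher, encoding the payload with `base64.b64encode` before it reaches the queue is one way around the limitation.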
+ ## Clean up resources If you plan to continue working with this event, don't clean up the resources created in this article. Otherwise, use the following command to delete the resources you created in this article.
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/configure-customer-managed-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/configure-customer-managed-key.md
@@ -2,14 +2,15 @@
Title: Configure your own key for encrypting Azure Event Hubs data at rest description: This article provides information on how to configure your own key for encrypting Azure Event Hubs data rest. Previously updated : 06/23/2020 Last updated : 02/01/2021 # Configure customer-managed keys for encrypting Azure Event Hubs data at rest by using the Azure portal Azure Event Hubs provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). The Event Hubs service uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as Bring Your Own Key (BYOK) or customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key will be encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the BYOK feature is a one time setup process on your namespace. > [!NOTE]
-> The BYOK capability is supported by [Event Hubs dedicated single-tenant](event-hubs-dedicated-overview.md) clusters. It can't be enabled for standard Event Hubs namespaces.
+> - The BYOK capability is supported by [Event Hubs dedicated single-tenant](event-hubs-dedicated-overview.md) clusters. It can't be enabled for standard Event Hubs namespaces.
+> - The encryption can be enabled only for new or empty namespaces. If the namespace contains event hubs, the encryption operation will fail.
You can use Azure Key Vault to manage your keys and audit your key usage. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. For more information about Azure Key Vault, see [What is Azure Key Vault?](../key-vault/general/overview.md)
governance https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/definition-structure.md
@@ -441,7 +441,7 @@ using the `resourcegroup()` lookup function.
"value": "[resourcegroup().tags[parameters('tagName')]]" }], "roleDefinitionIds": [
- "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
+ "/providers/microsoft.authorization/roleDefinitions/4a9ae827-6dc8-4573-8ac7-8239d42aa03f"
] } }
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-management-ip-addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-management-ip-addresses.md
@@ -73,6 +73,7 @@ Allow traffic from the IP addresses listed for the Azure HDInsight health and ma
| &nbsp; | UK South | 51.140.47.39</br>51.140.52.16 | \*:443 | Inbound | | United States | Central US | 13.89.171.122</br>13.89.171.124 | \*:443 | Inbound | | &nbsp; | East US | 13.82.225.233</br>40.71.175.99 | \*:443 | Inbound |
+| &nbsp; | East US 2 | 20.44.16.8/29</br>20.49.102.48/29 | \*:443 | Inbound |
| &nbsp; | North Central US | 157.56.8.38</br>157.55.213.99 | \*:443 | Inbound | | &nbsp; | West Central US | 52.161.23.15</br>52.161.10.167 | \*:443 | Inbound | | &nbsp; | West US | 13.64.254.98</br>23.101.196.19 | \*:443 | Inbound |
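Some of the newer entries, such as East US 2, are CIDR blocks rather than single addresses. If you script your firewall or NSG checks, the Python standard library's `ipaddress` module can test membership. This helper is an illustrative sketch, not part of any HDInsight tooling:

```python
import ipaddress

# The two /29 management blocks listed for East US 2 (8 addresses each).
EAST_US_2_BLOCKS = [
    ipaddress.ip_network("20.44.16.8/29"),
    ipaddress.ip_network("20.49.102.48/29"),
]

def is_management_ip(ip: str) -> bool:
    """True if ip falls inside one of the East US 2 management ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in block for block in EAST_US_2_BLOCKS)

assert is_management_ip("20.44.16.10")      # inside 20.44.16.8/29 (.8 through .15)
assert not is_management_ip("20.44.16.16")  # first address past that block
```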
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/optimize-hbase-ambari https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/optimize-hbase-ambari.md
@@ -3,7 +3,7 @@ Title: Optimize Apache HBase with Apache Ambari in Azure HDInsight
description: Use the Apache Ambari web UI to configure and optimize Apache HBase. Previously updated : 05/04/2020 Last updated : 02/01/2021 # Optimize Apache HBase with Apache Ambari in Azure HDInsight
@@ -14,6 +14,9 @@ Apache HBase configuration is modified from the **HBase Configs** tab. The follo
## Set HBASE_HEAPSIZE
+> [!NOTE]
+> This article contains references to the term *master*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ The HBase heap size specifies the maximum amount of heap to be used in megabytes by *region* and *master* servers. The default value is 1,000 MB. This value should be tuned for the cluster workload. 1. To modify, navigate to the **Advanced HBase-env** pane in the HBase **Configs** tab, and then find the `HBASE_HEAPSIZE` setting.
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/convert-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/convert-data.md
@@ -11,7 +11,13 @@
-# How to convert data to FHIR
+# How to convert data to FHIR (Preview)
+
+> [!IMPORTANT]
+> This capability is in public preview, is provided without a service level agreement,
+> and is not recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities. For more information, see
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The $convert-data custom endpoint in the Azure API for FHIR is meant for data conversion from different formats to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports HL7v2 to FHIR conversion.
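As an illustration of the call shape, a conversion request is a POST of a FHIR `Parameters` resource to the `$convert-data` endpoint. The parameter names below (`inputData`, `inputDataType`, `templateCollectionReference`, `rootTemplate`) and the sample values are assumptions drawn from the FHIR Converter project and should be verified against the current API reference:

```json
{
    "resourceType": "Parameters",
    "parameter": [
        { "name": "inputData", "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII" },
        { "name": "inputDataType", "valueString": "Hl7v2" },
        { "name": "templateCollectionReference", "valueString": "microsofthealth/fhirconverter:default" },
        { "name": "rootTemplate", "valueString": "ADT_A01" }
    ]
}
```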
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/customer-managed-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/customer-managed-key.md
@@ -2,12 +2,12 @@
Title: Configure customer-managed keys for Azure API for FHIR description: Bring your own key feature supported in Azure API for FHIR through Cosmos DB -+ Last updated 09/28/2020-+ # Configure customer-managed keys at rest
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/fhir-features-supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir-features-supported.md
@@ -113,6 +113,8 @@ All the operations that are supported that extend the RESTful API.
| $export (whole system) | Yes | Yes | Yes | |
| Patient/$export | Yes | Yes | Yes | |
| Group/$export | Yes | Yes | Yes | |
+| $convert-data | Yes | Yes | Yes | |
+ ## Persistence
iot-dps https://docs.microsoft.com/en-us/azure/iot-dps/quick-create-simulated-device-x509-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-create-simulated-device-x509-csharp.md
@@ -1,9 +1,9 @@
Title: Quickstart - Provision simulated X.509 device to Azure IoT Hub using C#
-description: Quickstart - Create and provision a simulated X.509 device using C# device SDK for Azure IoT Hub Device Provisioning Service (DPS). This quickstart uses individual enrollments.
+description: Quickstart - Create and provision an X.509 device using C# device SDK for Azure IoT Hub Device Provisioning Service (DPS). This quickstart uses individual enrollments.
Previously updated : 11/08/2018 Last updated : 02/01/2021
@@ -11,11 +11,11 @@ ms.devlang: csharp
-# Quickstart: Create and provision a simulated X.509 device using C# device SDK for IoT Hub Device Provisioning Service
+# Quickstart: Create and provision an X.509 device using C# device SDK for IoT Hub Device Provisioning Service
[!INCLUDE [iot-dps-selector-quick-create-simulated-device-x509](../../includes/iot-dps-selector-quick-create-simulated-device-x509.md)]
-These steps show you how to use the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) to simulate an X.509 device on a development machine running the Windows OS. The sample also connects the simulated device to an IoT Hub using the Device Provisioning Service.
+These steps show you how to use device code from the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) to provision an X.509 device. In this article, you will run device sample code on your development machine to connect to an IoT Hub using the Device Provisioning Service.
If you're unfamiliar with the process of autoprovisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview. Also make sure you've completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) before continuing.
@@ -30,51 +30,78 @@ This article will demonstrate individual enrollments.
<a id="setupdevbox"></a> ## Prepare the development environment
-1. Make sure you have the [.NET Core 2.1 SDK or later](https://www.microsoft.com/net/download/windows) installed on your machine.
- 1. Make sure `git` is installed on your machine and is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes the **Git Bash**, the command-line app that you can use to interact with your local Git repository. 1. Open a command prompt or Git Bash. Clone the Azure IoT Samples for C# GitHub repo:
- ```cmd
+ ```bash
git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git ```
-## Create a self-signed X.509 device certificate and individual enrollment entry
+1. Make sure you have the [.NET Core 3.0.0 SDK or later](https://www.microsoft.com/net/download/windows) installed on your machine. You can use the following command to check your version.
+
+ ```bash
+ dotnet --info
+ ```
+
-In this section you, will use a self-signed X.509 certificate, it is important to keep in mind the following:
+
+## Create a self-signed X.509 device certificate
+
+In this section, you will create a self-signed X.509 test certificate using `iothubx509device1` as the subject common name. Keep in mind the following points:
* Self-signed certificates are for testing only, and should not be used in production. * The default expiration date for a self-signed certificate is one year.
+* The device ID of the IoT device will be the subject common name on the certificate. Make sure to use a subject name that complies with the [Device ID string requirements](../iot-hub/iot-hub-devguide-identity-registry.md#device-identity-properties).
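As a rough sketch of what such a test certificate involves (this is not the repo's `GenerateTestCertificate.ps1` script itself; it uses the standard Windows PowerShell PKI cmdlets, and the password is a placeholder), one could generate and export the two files manually:

```powershell
# Sketch only: create a test certificate with the device ID as the subject common name
$cert = New-SelfSignedCertificate -Subject "CN=iothubx509device1, O=TEST, C=US" `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -NotAfter (Get-Date).AddYears(1)

# Export the password-protected PKCS12 file used when running the sample...
$pwd = ConvertTo-SecureString -String "<PFX_PASSWORD>" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath .\certificate.pfx -Password $pwd

# ...and the public key certificate used for the individual enrollment entry
Export-Certificate -Cert $cert -FilePath .\certificate.cer
```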
-You will use sample code from the [Provisioning Device Client Sample - X.509 Attestation](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/provisioning/Samples/device/X509Sample) to create the certificate to be used with the individual enrollment entry for the simulated device.
+You will use sample code from the [X509Sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/provisioning/Samples/device/X509Sample) to create the certificate to be used with the individual enrollment entry for the device.
-1. In a command prompt, change directories to the project directory for the X.509 device provisioning sample.
+1. In a PowerShell prompt, change directories to the project directory for the X.509 device provisioning sample.
- ```cmd
+ ```powershell
cd .\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample ```
-2. The sample code is set up to use X.509 certificates stored within a password-protected PKCS12 formatted file (certificate.pfx). Additionally, you need a public key certificate file (certificate.cer) to create an individual enrollment later in this quickstart. To generate a self-signed certificate and its associated .cer and .pfx files, run the following command:
+2. The sample code is set up to use X.509 certificates stored within a password-protected PKCS12 formatted file (certificate.pfx). Additionally, you need a public key certificate file (certificate.cer) to create an individual enrollment later in this quickstart. To generate the self-signed certificate and its associated .cer and .pfx files, run the following command:
- ```cmd
- powershell .\GenerateTestCertificate.ps1
+ ```powershell
+ PS D:\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample> .\GenerateTestCertificate.ps1 iothubx509device1
```
-3. The script prompts you for a PFX password. Remember this password, you must use it when you run the sample.
+3. The script prompts you for a PFX password. Remember this password; you must use it later when you run the sample. You can run `certutil` to dump the certificate and verify the subject name.
+
+ ```powershell
+ PS D:\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample> certutil .\certificate.pfx
+ Enter PFX password:
+ ================ Certificate 0 ================
+ ================ Begin Nesting Level 1 ================
+ Element 0:
+ Serial Number: 7b4a0e2af6f40eae4d91b3b7ff05a4ce
+ Issuer: CN=iothubx509device1, O=TEST, C=US
+ NotBefore: 2/1/2021 6:18 PM
+ NotAfter: 2/1/2022 6:28 PM
+ Subject: CN=iothubx509device1, O=TEST, C=US
+ Signature matches Public Key
+ Root Certificate: Subject matches Issuer
+ Cert Hash(sha1): e3eb7b7cc1e2b601486bf8a733887a54cdab8ed6
+ - End Nesting Level 1 -
+ Provider = Microsoft Strong Cryptographic Provider
+ Signature test passed
+ CertUtil: -dump command completed successfully.
+ ```
- ![ Enter the PFX password](./media/quick-create-simulated-device-x509-csharp/generate-certificate.png)
+ ## Create an individual enrollment entry for the device
-4. Sign in to the Azure portal, select the **All resources** button on the left-hand menu and open your provisioning service.
+1. Sign in to the Azure portal, select the **All resources** button on the left-hand menu and open your provisioning service.
-5. From the Device Provisioning Service menu, select **Manage enrollments**. Select **Individual Enrollments** tab and select the **Add individual enrollment** button at the top.
+2. From the Device Provisioning Service menu, select **Manage enrollments**. Select the **Individual Enrollments** tab and select the **Add individual enrollment** button at the top.
-6. In the **Add Enrollment** panel, enter the following information:
+3. In the **Add Enrollment** panel, enter the following information:
- Select **X.509** as the identity attestation *Mechanism*. - Under the *Primary certificate .pem or .cer file*, choose *Select a file* to select the certificate file **certificate.cer** created in the previous steps.
- - Leave **Device ID** blank. Your device will be provisioned with its device ID set to the common name (CN) in the X.509 certificate, **iothubx509device1**. This will also be the name used for the registration ID for the individual enrollment entry.
+ - Leave **Device ID** blank. Your device will be provisioned with its device ID set to the common name (CN) in the X.509 certificate, **iothubx509device1**. This common name will also be the name used for the registration ID for the individual enrollment entry.
- Optionally, you may provide the following information: - Select an IoT hub linked with your provisioning service. - Update the **Initial device twin state** with the desired initial configuration for the device.
@@ -84,7 +111,9 @@ You will use sample code from the [Provisioning Device Client Sample - X.509 Att
On successful enrollment, your X.509 enrollment entry appears as **iothubx509device1** under the *Registration ID* column in the *Individual Enrollments* tab.
-## Provision the simulated device
++
+## Provision the device
1. From the **Overview** blade for your provisioning service, note the **_ID Scope_** value.
@@ -93,15 +122,37 @@ You will use sample code from the [Provisioning Device Client Sample - X.509 Att
2. Type the following command to build and run the X.509 device provisioning sample. Replace the `<IDScope>` value with the ID Scope for your provisioning service.
- ```cmd
- dotnet run <IDScope>
+ The certificate file will default to *./certificate.pfx* and prompt for the .pfx password.
+
+ ```powershell
+ dotnet run -- -s <IDScope>
```
-3. When prompted, enter the password for the PFX file that you created previously. Notice the messages that simulate the device booting and connecting to the Device Provisioning Service to get your IoT hub information.
+ If you want to pass everything as a parameter, you can use the following example format.
- ![Sample device output](./media/quick-create-simulated-device-x509-csharp/sample-output.png)
+ ```powershell
+ dotnet run -- -s 0ne00000A0A -c certificate.pfx -p 1234
+ ```
++
+3. The device connects to DPS and is assigned to an IoT hub. The device then sends a telemetry message to the hub.
+
+ ```output
+ Loading the certificate...
+ Found certificate: 10952E59D13A3E388F88E534444484F52CD3D9E4 CN=iothubx509device1, O=TEST, C=US; PrivateKey: True
+ Using certificate 10952E59D13A3E388F88E534444484F52CD3D9E4 CN=iothubx509device1, O=TEST, C=US
+ Initializing the device provisioning client...
+ Initialized for registration Id iothubx509device1.
+ Registering with the device provisioning service...
+ Registration status: Assigned.
+ Device iothubx509device2 registered to sample-iot-hub1.azure-devices.net.
+ Creating X509 authentication for IoT Hub...
+ Testing the provisioned device with IoT Hub...
+ Sending a telemetry message...
+ Finished.
+ ```
-4. Verify that the device has been provisioned. On successful provisioning of the simulated device to the IoT hub linked with your provisioning service, the device ID appears on the hub's **IoT devices** blade.
+4. Verify that the device has been provisioned. On successful provisioning of the device to the IoT hub linked with your provisioning service, the device ID appears on the hub's **IoT devices** blade.
![Device is registered with the IoT hub](./media/quick-create-simulated-device-x509-csharp/registration.png)
@@ -119,7 +170,7 @@ If you plan to continue working on and exploring the device client sample, do no
## Next steps
-In this quickstart, you've created a simulated X.509 device on your Windows machine and provisioned it to your IoT hub using the Azure IoT Hub Device Provisioning Service on the portal. To learn how to enroll your X.509 device programmatically, continue to the quickstart for programmatic enrollment of X.509 devices.
+In this quickstart, you provisioned an X.509 device to your IoT hub using the Azure IoT Hub Device Provisioning Service. To learn how to enroll your X.509 device programmatically, continue to the quickstart for programmatic enrollment of X.509 devices.
> [!div class="nextstepaction"] > [Azure quickstart - Enroll X.509 devices to Azure IoT Hub Device Provisioning Service](quick-enroll-device-x509-csharp.md)
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-auto-provision-simulated-device-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-simulated-device-linux.md
@@ -199,8 +199,12 @@ Once the runtime is installed on your device, configure the device with the info
attestation: method: "tpm" registration_id: "<REGISTRATION_ID>"
+ # always_reprovision_on_startup: true
+ # dynamic_reprovisioning: false
```
+ Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
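Assembled from the fragment above, a consolidated sketch of the config.yaml provisioning section with reprovision-on-startup enabled might look like the following (the scope and registration values are placeholders):

```yaml
# Sketch of the DPS-TPM provisioning section with reprovisioning behavior set explicitly
provisioning:
  source: "dps"
  global_endpoint: "https://global.azure-devices-provisioning.net"
  scope_id: "<SCOPE_ID>"
  attestation:
    method: "tpm"
    registration_id: "<REGISTRATION_ID>"
  always_reprovision_on_startup: true
  dynamic_reprovisioning: false
```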
+ 1. Update the values of `scope_id` and `registration_id` with your DPS and device information. ## Give IoT Edge access to the TPM
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-auto-provision-symmetric-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-symmetric-keys.md
@@ -187,8 +187,12 @@ Have the following information ready:
method: "symmetric_key" registration_id: "<REGISTRATION_ID>" symmetric_key: "<SYMMETRIC_KEY>"
+ # always_reprovision_on_startup: true
+ # dynamic_reprovisioning: false
```
+ Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+ 1. Update the values of `scope_id`, `registration_id`, and `symmetric_key` with your DPS and device information. 1. Restart the IoT Edge runtime so that it picks up all the configuration changes that you made on the device.
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-auto-provision-x509-certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-x509-certs.md
@@ -243,8 +243,12 @@ Have the following information ready:
# registration_id: "<OPTIONAL REGISTRATION ID. LEAVE COMMENTED OUT TO REGISTER WITH CN OF identity_cert>" identity_cert: "<REQUIRED URI TO DEVICE IDENTITY CERTIFICATE>" identity_pk: "<REQUIRED URI TO DEVICE IDENTITY PRIVATE KEY>"
+ # always_reprovision_on_startup: true
+ # dynamic_reprovisioning: false
```
+ Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+ 1. Update the values of `scope_id`, `identity_cert`, and `identity_pk` with your DPS and device information. When you add the X.509 certificate and key information to the config.yaml file, the paths should be provided as file URIs. For example:
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-on-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md
@@ -292,7 +292,7 @@ This section covers provisioning your device automatically using DPS and X.509 c
1. Copy the following command into a text editor. Replace the placeholder text with your information as detailed. ```azurepowershell-interactive
- Provision-EflowVm -provisioningType x509 -scopeId <ID_SCOPE_HERE> -registrationId <REGISTRATION_ID_HERE> -identityCertLocWin <ABSOLUTE_CERT_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityPkLocWin <ABSOLUTE_PRIVATE_KEY_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityCertLocWin <ABSOLUTE_CERT_DEST_PATH_ON_LINUX_MACHINE -identityPkLocVm <ABSOLUTE_PRIVATE_KEY_DEST_PATH_ON_LINUX_MACHINE>
+ Provision-EflowVm -provisioningType x509 -scopeId <ID_SCOPE_HERE> -registrationId <REGISTRATION_ID_HERE> -identityCertLocWin <ABSOLUTE_CERT_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityPkLocWin <ABSOLUTE_PRIVATE_KEY_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityCertLocVm <ABSOLUTE_CERT_DEST_PATH_ON_LINUX_MACHINE> -identityPkLocVm <ABSOLUTE_PRIVATE_KEY_DEST_PATH_ON_LINUX_MACHINE>
``` 1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your DPS instance.
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-retrieve-iot-edge-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
@@ -45,8 +45,8 @@ This method accepts a JSON payload with the following schema:
"id": "regex string", "filter": { "tail": "int",
- "since": "int",
- "until": "int",
+ "since": "string",
+ "until": "string",
"loglevel": "int", "regex": "regex string" }
@@ -64,8 +64,8 @@ This method accepts a JSON payload with the following schema:
| ID | string | A regular expression that supplies the module name. It can match multiple modules on an edge device. [.NET Regular Expressions](/dotnet/standard/base-types/regular-expressions) format is expected. | | filter | JSON section | Log filters to apply to the modules matching the `id` regular expression in the tuple. | | tail | integer | Number of log lines in the past to retrieve starting from the latest. OPTIONAL. |
-| since | integer | Only return logs since this time, as a duration (1 d, 90 m, 2 days 3 hours 2 minutes), rfc3339 timestamp, or UNIX timestamp. If both `tail` and `since` are specified, the logs are retrieved using the `since` value first. Then, the `tail` value is applied to the result, and the final result is returned. OPTIONAL. |
-| until | integer | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (1 d, 90 m, 2 days 3 hours 2 minutes). OPTIONAL. |
+| since | string | Only return logs since this time, as a duration (1 d, 90 m, 2 days 3 hours 2 minutes), rfc3339 timestamp, or UNIX timestamp. If both `tail` and `since` are specified, the logs are retrieved using the `since` value first. Then, the `tail` value is applied to the result, and the final result is returned. OPTIONAL. |
+| until | string | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (1 d, 90 m, 2 days 3 hours 2 minutes). OPTIONAL. |
| log level | integer | Filter log lines less than or equal to specified log level. Log lines should follow recommended logging format and use [Syslog severity level](https://en.wikipedia.org/wiki/Syslog#Severity_level) standard. OPTIONAL. | | regex | string | Filter log lines that have content that match the specified regular expression using [.NET Regular Expressions](/dotnet/standard/base-types/regular-expressions) format. OPTIONAL. | | encoding | string | Either `gzip` or `none`. Default is `none`. |
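Putting these filter fields together, a request payload for this method might look like the following sketch. The module name, duration, and log level are illustrative, and the surrounding `schemaVersion`/`items` wrapper is an assumption based on the schema fragment above:

```json
{
    "schemaVersion": "1.0",
    "items": [
        {
            "id": "edgeAgent",
            "filter": {
                "tail": 100,
                "since": "15m",
                "loglevel": 6
            }
        }
    ],
    "encoding": "none"
}
```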
@@ -154,8 +154,8 @@ This method accepts a JSON payload similar to **GetModuleLogs**, with the additi
"id": "regex string", "filter": { "tail": "int",
- "since": "int",
- "until": "int",
+ "since": "string",
+ "until": "string",
"loglevel": "int", "regex": "regex string" }
@@ -287,8 +287,8 @@ This method accepts a JSON payload with the following schema:
|-|-|-| | schemaVersion | string | Set to `1.0` | | sasURL | string (URI) | [Shared Access Signature URL with write access to Azure Blob Storage container](/archive/blogs/jpsanders/easily-create-a-sas-to-download-a-file-from-azure-storage-using-azure-storage-explorer) |
-| since | integer | Only return logs since this time, as a duration (1 d, 90 m, 2 days 3 hours 2 minutes), rfc3339 timestamp, or UNIX timestamp. OPTIONAL. |
-| until | integer | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (1 d, 90 m, 2 days 3 hours 2 minutes). OPTIONAL. |
+| since | string | Only return logs since this time, as a duration (1 d, 90 m, 2 days 3 hours 2 minutes), rfc3339 timestamp, or UNIX timestamp. OPTIONAL. |
+| until | string | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (1 d, 90 m, 2 days 3 hours 2 minutes). OPTIONAL. |
| edgeRuntimeOnly | boolean | If true, only return logs from Edge Agent, Edge Hub, and the Edge Security Daemon. Default: false. OPTIONAL. | > [!IMPORTANT]
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/authentication.md
@@ -108,11 +108,9 @@ The following table links to different articles that demonstrate how to work wit
| Key Vault Secrets | Key Vault Keys | Key Vault Certificates |
| | | |
| [Python](../secrets/quick-create-python.md) | [Python](../keys/quick-create-python.md) | [Python](../certificates/quick-create-python.md) |
-| [.NET (SDK v4)](../secrets/quick-create-net.md) | -- | -- |
-| [.NET (SDK v3)](https://dotnet.microsoft.com/download/dotnet-core/3.0) | -- | -- |
-| [Java](../secrets/quick-create-java.md) | -- | -- |
-| [JavaScript](../secrets/quick-create-node.md) | -- | -- |
-| | | |
+| [.NET](../secrets/quick-create-net.md) | [.NET](../keys/quick-create-net.md) | [.NET](../certificates/quick-create-net.md) |
+| [Java](../secrets/quick-create-java.md) | [Java](../keys/quick-create-java.md) | [Java](../certificates/quick-create-java.md) |
+| [JavaScript](../secrets/quick-create-node.md) | [JavaScript](../keys/quick-create-node.md) | [JavaScript](../certificates/quick-create-node.md) |
| [Azure portal](../secrets/quick-create-portal.md) | [Azure portal](../keys/quick-create-portal.md) | [Azure portal](../certificates/quick-create-portal.md) | | [Azure CLI](../secrets/quick-create-cli.md) | [Azure CLI](../keys/quick-create-cli.md) | [Azure CLI](../certificates/quick-create-cli.md) | | [Azure PowerShell](../secrets/quick-create-powershell.md) | [Azure PowerShell](../keys/quick-create-powershell.md) | [Azure PowerShell](../certificates/quick-create-powershell.md) |
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/client-libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/client-libraries.md
@@ -24,10 +24,10 @@ Each SDK has separate client libraries for key vault, secrets, keys, and certifi
| Language | Secrets | Keys | Certificates | Key Vault (Management plane) | |--|--|--|--|--|
-| .NET | - [API Reference](/dotnet/api/azure.security.keyvault.secrets?view=azure-dotnet)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Secrets/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Secrets)<br>- [Quickstart](../secrets/quick-create-net.md) | - [API Reference](/dotnet/api/azure.security.keyvault.keys?view=azure-dotnet)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Keys/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Keys) | - [API Reference](/dotnet/api/azure.security.keyvault.certificates?view=azure-dotnet)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Certificates/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Certificates) | - [API Reference](/dotnet/api/microsoft.azure.management.keyvault?view=azure-dotnet)<br>- [NuGet Package](https://www.nuget.org/packages/Microsoft.Azure.Management.KeyVault/)<br> - [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Microsoft.Azure.Management.KeyVault)|
+| .NET | - [API Reference](/dotnet/api/azure.security.keyvault.secrets?view=azure-dotnet)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Secrets/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Secrets)<br>- [Quickstart](../secrets/quick-create-net.md) | - [API Reference](/dotnet/api/azure.security.keyvault.keys?view=azure-dotnet)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Keys/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Keys)<br>- [Quickstart](../keys/quick-create-net.md) | - [API Reference](/dotnet/api/azure.security.keyvault.certificates?view=azure-dotnet)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Certificates/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Certificates)<br>- [Quickstart](../certificates/quick-create-net.md) | - [API Reference](/dotnet/api/microsoft.azure.management.keyvault?view=azure-dotnet)<br>- [NuGet Package](https://www.nuget.org/packages/Microsoft.Azure.Management.KeyVault/)<br> - [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Microsoft.Azure.Management.KeyVault)|
| Python| - [API Reference](/python/api/overview/azure/keyvault-secrets-readme?view=azure-python)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-secrets/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-secrets)<br>- [Quickstart](../secrets/quick-create-python.md) |- [API Reference](/python/api/overview/azure/keyvault-keys-readme?view=azure-python)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-keys/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-keys)<br>- [Quickstart](../keys/quick-create-python.md) | - [API Reference](/python/api/overview/azure/keyvault-certificates-readme?view=azure-python)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-certificates/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-certificates)<br>- [Quickstart](../certificates/quick-create-python.md) | - [API Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault?view=azure-python)<br> - [PyPi package](https://pypi.org/project/azure-mgmt-keyvault/)<br> - [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-mgmt-keyvault)|
-| Java | - [API Reference](https://azuresdkdocs.blob.core.windows.net/$web/jav) |- [API Reference](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-security-keyvault-keys/4.2.0/https://docsupdatetracker.net/index.html)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-keys) | - [API Reference](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-security-keyvault-certificates/4.1.0/https://docsupdatetracker.net/index.html)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-certificates) |- [API Reference](/java/api/com.microsoft.azure.management.keyvault?view=azure-java-stable)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/mgmt-v2016_10_01)|
-| Node.js | - [API Reference](/javascript/api/@azure/keyvault-secrets/?view=azure-node-latest)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-secrets)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-secrets)<br>- [Quickstart](../secrets/quick-create-node.md) |- [API Reference](/javascript/api/@azure/keyvault-keys/?view=azure-node-latest)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-keys)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-keys)| - [API Reference](/javascript/api/@azure/keyvault-certificates/?view=azure-node-latest)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-certificates)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-certificates) | - [API Reference](/javascript/api/@azure/arm-keyvault/?view=azure-node-latest)<br>- [npm package](https://www.npmjs.com/package/@azure/arm-keyvault)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/arm-keyvault)
+| Java | - [API Reference](https://azuresdkdocs.blob.core.windows.net/$web/jav) |- [API Reference](/java/api/com.microsoft.azure.management.keyvault?view=azure-java-stable)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/mgmt-v2016_10_01)|
+| Node.js | - [API Reference](/javascript/api/@azure/keyvault-secrets/?view=azure-node-latest)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-secrets)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-secrets)<br>- [Quickstart](../secrets/quick-create-node.md) |- [API Reference](/javascript/api/@azure/keyvault-keys/?view=azure-node-latest)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-keys)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-keys)<br>- [Quickstart](../keys/quick-create-node.md)| - [API Reference](/javascript/api/@azure/keyvault-certificates/?view=azure-node-latest)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-certificates)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-certificates)<br>- [Quickstart](../certificates/quick-create-node.md) | - [API Reference](/javascript/api/@azure/arm-keyvault/?view=azure-node-latest)<br>- [npm package](https://www.npmjs.com/package/@azure/arm-keyvault)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/arm-keyvault)
## Next steps
lab-services https://docs.microsoft.com/en-us/azure/lab-services/class-type-networking-gns3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/class-type-networking-gns3.md
@@ -0,0 +1,99 @@
+
+ Title: Set up a networking lab with Azure Lab Services and GNS3 | Microsoft Docs
+description: Learn how to set up a lab using Azure Lab Services to teach networking with GNS3.
+ Last updated : 01/19/2021++
+# Set up a lab to teach a networking class
+This article shows you how to set up a class that focuses on allowing students to emulate, configure, test, and troubleshoot virtual and real networks using [GNS3](https://www.gns3.com/) software.
+
+This article has two main sections. The first section covers how to create the classroom lab. The second section covers how to create the template machine with nested virtualization enabled and with GNS3 installed and configured.
+
+## Lab configuration
+To set up this lab, you need an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. Once you have a subscription, you can either create a new lab account in Azure Lab Services or use an existing one. To create a new lab account, see [Tutorial to set up a lab account](tutorial-setup-lab-account.md).
+
+Follow [this tutorial](tutorial-setup-classroom-lab.md) to create a new lab and then apply the following settings:
+
+| Virtual machine size | Image |
+| -- | -- |
+| Large (Nested Virtualization) | Windows 10 Pro, Version 1909 |
+
+## Template machine
+
+After the template machine is created, start the machine and connect to it to complete the following major tasks.
+
+1. Prepare the template machine for nested virtualization.
+2. Install GNS3.
+3. Create nested GNS3 VM in Hyper-V.
+4. Configure GNS3 to use Windows Hyper-V VM.
+5. Add appropriate appliances.
+6. Publish template.
++
+### Prepare template machine for nested virtualization
+- Follow instructions in [this article](how-to-enable-nested-virtualization-template-vm.md) to prepare your template virtual machine for nested virtualization.
+
+### Install GNS3
+- Follow the instructions for [installing GNS3 on Windows](https://docs.gns3.com/docs/getting-started/installation/windows). In the component selection dialog, make sure to include the **GNS3 VM**, as shown below.
+
+![SelectGNS3vm](./media/class-type-networking-gns3/gns3-select-vm.png)
+
+Eventually you'll reach the GNS3 VM selection. Make sure to select the **Hyper-V** option.
+
+![SelectHyperV](./media/class-type-networking-gns3/gns3-vm-hyper-v.png)
+
+ This option will download the PowerShell script and VHD files to create the GNS3 VM in the Hyper-V manager. Continue installation using the default values. **Once the setup is complete, don't start GNS3**.
+
+### Create GNS3 VM
+Once the setup has completed, a zip file **"GNS3.VM.Hyper-V.2.2.17.zip"** is downloaded to the same folder as the installation file. It contains the virtual drives and the PowerShell script that creates the Hyper-V VM.
+- Right-click GNS3.VM.Hyper-V.2.2.17.zip and select **Extract all**. This action extracts the virtual drives and the PowerShell script that creates the VM.
+- Right-click the "create-vm.ps1" PowerShell script and select **Run with PowerShell**.
+- An Execution Policy Change request may appear. Enter "Y" to execute the script.
+
+![PSExecutionPolicy](./media/class-type-networking-gns3/powershell-execution-policy-change.png)
+
+- Once the script has completed, you can confirm that the "GNS3 VM" virtual machine has been created in Hyper-V Manager.
+
+### Configure GNS3 to use Hyper-V VM
+Now that GNS3 is installed and the GNS3 VM is added, start GNS3 to link the two together. The [GNS3 Setup wizard starts automatically](https://docs.gns3.com/docs/getting-started/setup-wizard-gns3-vm#local-gns3-vm-setup-wizard).
+- Select the **Run appliances from virtual machine.** option. Use the defaults for the rest of the wizard until you hit the **VMware vmrun tool cannot be found.** error.
+
+![VMWareError](./media/class-type-networking-gns3/gns3-vmware-vmrun-tool-not-found.png)
+
+- Choose **OK**, and then **Cancel** out of the wizard.
+- To complete the connection to the Hyper-V VM, open **Edit** -> **Preferences** -> **GNS3 VM**, select **Enable the GNS3 VM**, and then select the **Hyper-V** option.
+
+![EnableGNS3VMs](./media/class-type-networking-gns3/gns3-preference-vm.png)
+
+### Add appropriate appliances
+
+At this point, you'll want to add the appropriate [appliances for the class.](https://docs.gns3.com/docs/using-gns3/beginners/install-from-marketplace)
+
+### Publish template
+
+Now that the template VM is set up properly and ready for publishing, there are a few key points to check.
+- Make sure that the GNS3 VM is shut down or turned off. Publishing while the VM is still running will corrupt the VM.
+- Close down GNS3. Publishing while GNS3 is running can lead to unintended side effects.
+- Clean up any installation files or other unnecessary files.
+
+## Cost
+
+If you would like to estimate the cost of this lab, you can use the following example:
+
+For a class of 25 students with 20 hours of scheduled class time and 10 hours of quota for homework or assignments, the price for the lab would be:
+
+25 students * (20 + 10) hours * 84 Lab Units * 0.01 USD per hour = 630 USD.
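The arithmetic above can be reproduced with a short calculation. This is only a sketch of the example estimate; the 84 lab units and 0.01 USD per unit-hour are the example values quoted above, not current pricing:

```python
# Reproduce the example lab cost estimate from the text above.
students = 25
scheduled_hours = 20   # scheduled class time
quota_hours = 10       # quota for homework or assignments
lab_units = 84         # example value for the Large (Nested Virtualization) size
usd_per_unit_hour = 0.01

total_usd = students * (scheduled_hours + quota_hours) * lab_units * usd_per_unit_hour
print(round(total_usd, 2))  # 630.0
```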
+
+**Important:** Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
+
+## Conclusion
+This article walked you through the steps to create a lab for network configuration using GNS3.
+
+## Next steps
+Next steps are common to setting up any lab:
+
+- [Add users](tutorial-setup-classroom-lab.md#add-users-to-the-lab)
+- [Set quota](how-to-configure-student-usage.md#set-quotas-for-users)
+- [Set a schedule](tutorial-setup-classroom-lab.md#set-a-schedule-for-the-lab)
+- [Email registration links to students](how-to-configure-student-usage.md#send-invitations-to-users).
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/backend-pool-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/backend-pool-management.md
@@ -21,6 +21,8 @@ Configure your backend pool by NIC when using existing virtual machines and virt
When preallocating your backend pool with an IP address range which you plan to later create virtual machines and virtual machine scale sets, configure your backend pool by IP address and VNET ID combination.
+You can configure IP-based and NIC-based backend pools for the same load balancer. However, you can't create a single backend pool that mixes backend addresses targeted by NIC and by IP address within the same pool.
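As a rough illustration of that rule, the check below mirrors it in plain Python. The helper and the `nic_id`/`ip` keys are hypothetical, not part of any Azure SDK; a pool may hold NIC-targeted or IP-targeted addresses, but never both:

```python
# Hypothetical validation mirroring the rule above: a single backend pool
# may contain NIC-based or IP-based backend addresses, but not a mix.
def pool_is_valid(addresses):
    # Classify each address by whether it targets a NIC or an IP address.
    kinds = {("nic" if "nic_id" in addr else "ip") for addr in addresses}
    return len(kinds) <= 1

nic_pool = [{"nic_id": "nic1"}, {"nic_id": "nic2"}]
mixed_pool = [{"nic_id": "nic1"}, {"ip": "10.0.0.4", "vnet_id": "vnet1"}]
print(pool_is_valid(nic_pool))    # True
print(pool_is_valid(mixed_pool))  # False
```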
+ The configuration sections of this article will focus on: * Azure PowerShell
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-overview-preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview-preview.md
@@ -5,7 +5,7 @@
ms.suite: integration Previously updated : 01/22/2021 Last updated : 02/01/2021 # Overview: Azure Logic Apps Preview
@@ -175,6 +175,8 @@ For more information about the pricing models that apply to this new resource ty
In Azure Logic Apps Preview, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
+* **OS support**: Currently, the designer in Visual Studio Code doesn't work on Linux OS, but you can still deploy logic apps that use the Logic Apps Preview runtime to Linux-based virtual machines. For now, you can build your logic apps in Visual Studio Code on Windows or macOS and then deploy to a Linux-based virtual machine.
+ * **Triggers and actions**: Some built-in triggers are unavailable, such as Sliding Window and Batch. To start your workflow, use the [built-in Recurrence, Request, HTTP, HTTP Webhook, Event Hubs, or Service Bus trigger](../connectors/apis-list.md). Built-in triggers and actions run natively in the Azure Logic Apps Preview runtime, while managed connectors are deployed in Azure. In the designer, built-in triggers and actions appear under the **Built-in** tab, while managed connector triggers and actions appear under the **Azure** tab. > [!NOTE]
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/azure-machine-learning-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
@@ -15,6 +15,21 @@ Last updated 09/10/2020
In this article, learn about Azure Machine Learning releases. For the full SDK reference content, visit the Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro?preserve-view=true&view=azure-ml-py) reference page.
+ ## 2021-01-31
+### Azure Machine Learning Studio Notebooks Experience (January Update)
++ **New features**
+ + Native Markdown Editor in AzureML. Users can now render and edit markdown files natively in AzureML Studio.
+ + [Run Button for Scripts (.py, .R and .sh)](https://docs.microsoft.com/azure/machine-learning/how-to-run-jupyter-notebooks#run-a-notebook-or-python-script). Users can now easily run Python, R, and Bash scripts in AzureML.
+ + [Variable Explorer](https://docs.microsoft.com/azure/machine-learning/how-to-run-jupyter-notebooks#explore-variables-in-the-notebook). Explore the contents of variables and data frames in a pop-up panel. Users can easily check data type, size, and contents.
+ + [Table of Contents](https://docs.microsoft.com/azure/machine-learning/how-to-run-jupyter-notebooks#navigate-with-a-toc). Navigate to sections of your notebook, indicated by Markdown headers.
+ + Export your notebook as LaTeX/HTML/Py. Create easy-to-share notebook files by exporting to LaTeX, HTML, or .py.
+ + IntelliCode. ML-powered results provide an enhanced [intelligent autocompletion experience](https://docs.microsoft.com/visualstudio/intellicode/overview).
+++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+
## 2021-01-25 ### Azure Machine Learning SDK for Python v1.21.0
@@ -2311,4 +2326,4 @@ The [`PipelineEndpoint`](/python/api/azureml-pipeline-core/azureml.pipeline.core
## Next steps
-Read the overview for [Azure Machine Learning](overview-what-is-azure-ml.md).
+Read the overview for [Azure Machine Learning](overview-what-is-azure-ml.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-attach-kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-kubernetes.md
@@ -65,6 +65,8 @@ Azure Machine Learning can deploy trained machine learning models to Azure Kuber
- [Manually scale the node count in an AKS cluster](../aks/scale-cluster.md) - [Set up cluster autoscaler in AKS](../aks/cluster-autoscaler.md)
+- __Do not directly update the cluster by using a YAML configuration__. While Azure Kubernetes Service supports updates via YAML configuration, Azure Machine Learning deployments will override your changes. The only two YAML fields that will not be overwritten are __request limits__ and __cpu and memory__.
+ ## Azure Kubernetes Service version Azure Kubernetes Service allows you to create a cluster using a variety of Kubernetes versions. For more information on available versions, see [supported Kubernetes versions in Azure Kubernetes Service](../aks/supported-kubernetes-versions.md).
@@ -376,7 +378,6 @@ In Azure Machine Learning studio, select __Compute__, __Inference clusters__, an
## Troubleshooting- ### Update the cluster Updates to Azure Machine Learning components installed in an Azure Kubernetes Service cluster must be manually applied.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-move-data-in-out-of-pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-move-data-in-out-of-pipelines.md
@@ -7,7 +7,7 @@
Previously updated : 01/11/2021 Last updated : 02/01/2021 # As a data scientist using Python, I want to get data into my pipeline and flowing between steps
@@ -16,7 +16,6 @@
# Moving data into and between ML pipeline steps (Python) - This article provides code for importing, transforming, and moving data between steps in an Azure Machine Learning pipeline. For an overview of how data works in Azure Machine Learning, see [Access data in Azure storage services](how-to-access-data.md). For the benefits and structure of Azure Machine Learning pipelines, see [What are Azure Machine Learning pipelines?](concept-ml-pipelines.md). This article will show you how to:
@@ -173,7 +172,7 @@ You may choose to upload the contents of your `OutputFileDatasetConfig` object a
```python #get blob datastore already registered with the workspace blob_store= ws.datastores['my_blob_store']
-OutputFileDatasetConfig(name="clean_data", destination=blob_store).as_upload(overwrite=False)
+OutputFileDatasetConfig(name="clean_data", destination=(blob_store, 'outputdataset')).as_upload(overwrite=False)
``` > [!NOTE]
@@ -207,7 +206,7 @@ In the following code,
```python # get adls gen 2 datastore already registered with the workspace datastore = workspace.datastores['my_adlsgen2']
-step1_output_data = OutputFileDatasetConfig(name="processed_data", destination=datastore).as_upload()
+step1_output_data = OutputFileDatasetConfig(name="processed_data", destination=(datastore, "mypath/{run-id}/{output-name}")).as_upload()
step1 = PythonScriptStep( name="generate_data",
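The `{run-id}` and `{output-name}` placeholders in the destination path are resolved at run time. As a rough illustration only, not the azureml SDK's actual mechanism, the substitution behaves like:

```python
# Hypothetical sketch of how the "{run-id}" and "{output-name}" placeholders
# in an OutputFileDatasetConfig destination path might expand at run time.
# This is not azureml SDK code, just an illustration of the substitution.
def resolve_destination(template, run_id, output_name):
    return (template
            .replace("{run-id}", run_id)
            .replace("{output-name}", output_name))

# Sample values for illustration only.
path = resolve_destination("mypath/{run-id}/{output-name}",
                           run_id="a1b2c3", output_name="processed_data")
print(path)  # mypath/a1b2c3/processed_data
```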
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-training-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
@@ -59,16 +59,19 @@ To use either a [managed Azure Machine Learning __compute target__](concept-comp
> * If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Machine Learning compute instance or cluster. > * For compute instance Jupyter functionality to work, ensure that web socket communication is not disabled. Please ensure your network allows websocket connections to *.instances.azureml.net and *.instances.azureml.ms. > * When compute instance is deployed in a private link workspace it can be only be accessed from within virtual network. If you are using custom DNS or hosts file please add an entry for `<instance-name>.<region>.instances.azureml.ms` with private IP address of workspace private endpoint. For more information see the [custom DNS](./how-to-custom-dns.md) article.
+> * The subnet used to deploy the compute cluster/instance shouldn't be delegated to any other service, such as Azure Container Instances (ACI)
+> * Virtual network service endpoint policies do not work for compute cluster/instance system storage accounts
+ > [!TIP] > The Machine Learning compute instance or cluster automatically allocates additional networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources: > > * One network security group
-> * One public IP address
+> * One public IP address. If you have an Azure policy prohibiting public IP creation, deployment of the cluster/instance will fail
> * One load balancer > > In the case of clusters these resources are deleted (and recreated) every time the cluster scales down to 0 nodes, however for an instance the resources are held onto till the instance is completely deleted (stopping does not remove the resources).
-> These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md).
+> These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked, deletion of the compute cluster/instance will fail. The load balancer can't be deleted until the compute cluster/instance is deleted.
### <a id="mlcports"></a> Required ports
@@ -314,4 +317,4 @@ This article is part three in a four-part virtual network series. See the rest o
* [Part 1: Virtual network overview](how-to-network-security-overview.md) * [Part 2: Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Part 4: Secure the inferencing environment](how-to-secure-inferencing-vnet.md)
-* [Part 5:Enable studio functionality](how-to-enable-studio-virtual-network.md)
+* [Part 5: Enable studio functionality](how-to-enable-studio-virtual-network.md)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-tune-hyperparameters.md
@@ -2,13 +2,13 @@
Title: Hyperparameter tuning a model description: Automate hyperparameter tuning for deep learning and machine learning models using Azure Machine Learning.--++ Previously updated : 03/30/2020 Last updated : 01/29/2021
@@ -16,7 +16,6 @@
# Hyperparameter tuning a model with Azure Machine Learning - Automate efficient hyperparameter tuning by using Azure Machine Learning [HyperDrive package](/python/api/azureml-train-core/azureml.train.hyperdrive?preserve-view=true&view=azure-ml-py). Learn how to complete the steps required to tune hyperparameters with the [Azure Machine Learning SDK](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py): 1. Define the parameter search space
@@ -379,6 +378,30 @@ hd_config = HyperDriveConfig(run_config=src,
## Visualize hyperparameter tuning runs
+You can visualize your hyperparameter tuning runs in the Azure Machine Learning studio, or you can use a notebook widget.
+
+### Studio
+
+You can visualize all of your hyperparameter tuning runs in the [Azure Machine Learning studio](https://ml.azure.com). For more information on how to view an experiment in the portal, see [View run records in the studio](how-to-monitor-view-training-logs.md#view-the-experiment-in-the-web-portal).
+
+- **Metrics chart**: This visualization tracks the metrics logged for each hyperdrive child run over the duration of hyperparameter tuning. Each line represents a child run, and each point measures the primary metric value at that iteration of runtime.
+
+ :::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-metrics.png" alt-text="Hyperparameter tuning metrics chart":::
+
+- **Parallel Coordinates Chart**: This visualization shows the correlation between primary metric performance and individual hyperparameter values. The chart is interactive via movement of axes (click and drag by the axis label), and by highlighting values across a single axis (click and drag vertically along a single axis to highlight a range of desired values).
+
+ :::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates.png" alt-text="Hyperparameter tuning parallel coordinates chart":::
+
+- **2-Dimensional Scatter Chart**: This visualization shows the correlation between any two individual hyperparameters along with their associated primary metric value.
+
+ :::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-2-dimensional-scatter.png" alt-text="Hyparameter tuning 2-dimensional scatter chart":::
+
+- **3-Dimensional Scatter Chart**: This visualization is the same as 2D but allows for three hyperparameter dimensions of correlation with the primary metric value. You can also click and drag to reorient the chart to view different correlations in 3D space.
+
+ :::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-3-dimensional-scatter.png" alt-text="Hyparameter tuning 3-dimensional scatter chart":::
+
+### Notebook widget
+ Use the [Notebook widget](/python/api/azureml-widgets/azureml.widgets.rundetails?preserve-view=true&view=azure-ml-py) to visualize the progress of your training runs. The following snippet visualizes all your hyperparameter tuning runs in one place in a Jupyter notebook: ```Python
@@ -388,17 +411,9 @@ RunDetails(hyperdrive_run).show()
This code displays a table with details about the training runs for each of the hyperparameter configurations.
-![hyperparameter tuning table](./media/how-to-tune-hyperparameters/hyperparameter-tuning-table.png)
-
-You can also visualize the performance of each of the runs as training progresses.
-
-![hyperparameter tuning plot](./media/how-to-tune-hyperparameters/hyperparameter-tuning-plot.png)
-
-You can visually identify the correlation between performance and values of individual hyperparameters by using a Parallel Coordinates Plot.
-
-[![hyperparameter tuning parallel coordinates](./media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates.png)](media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates-expanded.png)
-You can also visualize all of your hyperparameter tuning runs in the Azure web portal. For more information on how to view an experiment in the portal, see [how to track experiments](how-to-monitor-view-training-logs.md#view-the-experiment-in-the-web-portal).
+You can also visualize the performance of each of the runs as training progresses.
## Find the best model
@@ -425,4 +440,4 @@ Refer to train-hyperparameter-* notebooks in this folder:
## Next steps * [Track an experiment](how-to-track-experiments.md)
-* [Deploy a trained model](how-to-deploy-and-where.md)
+* [Deploy a trained model](how-to-deploy-and-where.md)
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/concept-trusted-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concept-trusted-storage.md
@@ -13,23 +13,21 @@
# Trusted storage for Media Services
-When you create a Media Services account, you must associate it with a storage account. Media Services can access that storage account using system authentication. Media Services validates that the Media Services account and the storage account are in the same subscription and it validates that the user adding the association has access the storage account with Azure Resource Manager RBAC.
+When you create a Media Services account, you must associate it with a storage account. Media Services can access that storage account using system authentication or Managed Identity authentication. Media Services validates that the Media Services account and the storage account are in the same subscription, and that the user adding the association has access to the storage account with Azure Resource Manager RBAC.
-However, if you want to use a firewall to secure your storage account and enable trusted storage, you must use [Managed Identities](concept-managed-identities.md) authentication. It allows Media Services to access the storage account that has been configured with a firewall or a VNet restriction through trusted storage access.
+## Trusted storage with a firewall
-To understand the methods of creating trusted storage with Managed Identities, read [Managed Identities and Media Services](concept-managed-identities.md).
-
-For more information about customer managed keys and Key Vault, see [Bring your own key (customer-managed keys) with Media Services](concept-use-customer-managed-keys-byok.md)
+If you want to use a firewall to secure your storage account and enable trusted storage, [Managed Identities](concept-managed-identities.md) authentication is the preferred option. It allows Media Services to access a storage account that has been configured with a firewall or a VNet restriction through trusted storage access.
-For more information about Trusted Microsoft Services, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md#trusted-microsoft-services).
+> [!NOTE]
+> You need to grant the AMS Managed Identity Storage Blob Data Contributor access in order for Media Services to be able to read and write to the storage account. Granting the generic Contributor role won't work as it doesn't enable the correct permissions on the data plane.
-## Tutorials
+## Further reading
-These tutorials include both of the scenarios mentioned above.
+To understand the methods of creating trusted storage with Managed Identities, read [Managed Identities and Media Services](concept-managed-identities.md).
-- [Use the Azure portal to use customer-managed keys or BYOK with Media Services](tutorial-byok-portal.md)-- [Use customer-managed keys or BYOK with Media Services REST API](tutorial-byok-postman.md).
+For more information about Trusted Microsoft Services, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md#trusted-microsoft-services).
## Next steps
-To learn more about what managed identities can do for you and your Azure applications, see [Azure AD Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
+To learn more about what managed identities can do for you and your Azure applications, see [Azure AD Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/concept-use-customer-managed-keys-byok https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concept-use-customer-managed-keys-byok.md
@@ -34,9 +34,10 @@ You can specify a key name and key version, or just a key name. When you use onl
## Double encryption
-Media Services supports double encryption. To learn more about double encryption, see [Azure double encryption](../../security/fundamentals/double-encryption.md).
+Media Services automatically supports double encryption. For data at rest, the first layer of encryption uses a customer managed key or a Microsoft managed key depending on the `AccountEncryption` setting on the account. The second layer of encryption for data at rest is provided automatically using a separate Microsoft managed key. To learn more about double encryption, see [Azure double encryption](../../security/fundamentals/double-encryption.md).
-Double encryption is enabled automatically on the Media Services account. However, you need to configure the customer managed key and double encryption on your storage account separately.
+> [!NOTE]
+> Double encryption is enabled automatically on the Media Services account. However, you need to configure the customer managed key and double encryption on your storage account separately. See [Storage encryption](https://docs.microsoft.com/azure/storage/common/storage-service-encryption).
## Tutorials
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/continuous-video-recording-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/continuous-video-recording-tutorial.md
@@ -158,6 +158,12 @@ When you use the Live Video Analytics on IoT Edge module to record the live vide
![Start Monitoring Built-in Event Endpoint](./media/quickstarts/start-monitoring-iothub-events.png)
+> [!NOTE]
+> You might be asked to provide the built-in endpoint information for the IoT Hub. To get that information, in the Azure portal, navigate to your IoT Hub and look for the **Built-in endpoints** option in the left navigation pane. Select it and find the **Event Hub-compatible endpoint** in the **Event Hub compatible endpoint** section. Copy and use the text in the box. The endpoint will look something like this:
+ ```
+ Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
+ ```
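A connection string of this shape is a semicolon-separated list of `Key=Value` pairs. As a sketch only (assuming that format; this is not an Azure SDK API, and the sample values below are placeholders), it can be split into its parts like this:

```python
# Parse an Event Hub-compatible connection string of the shape shown above
# into its key/value parts. A sketch assuming the semicolon-separated
# "Key=Value" format; not an Azure SDK API.
def parse_connection_string(conn):
    parts = {}
    for segment in conn.strip().split(";"):
        if not segment:
            continue
        # Split on the first "=" only, since values may contain "=".
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

# Placeholder sample values for illustration.
sample = ("Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;"
          "SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;"
          "EntityPath=my-iot-hub")
fields = parse_connection_string(sample)
print(fields["SharedAccessKeyName"])  # iothubowner
```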
+ ## Run the program 1. In Visual Studio Code, open the **Extensions** tab (or press Ctrl+Shift+X) and search for Azure IoT Hub.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/deploy-iot-edge-device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/deploy-iot-edge-device.md
@@ -236,7 +236,7 @@ Next, lets test the sample by invoking a direct method. Read [direct methods for
``` {
- "@apiVersion" : "1.0"
+ "@apiVersion" : "2.0"
} ``` 1. Click the "Invoke Method" option at the top of the page
@@ -251,4 +251,4 @@ Next, lets test the sample by invoking a direct method. Read [direct methods for
Try [Quickstart: Get started - Live Video Analytics on IoT Edge](get-started-detect-motion-emit-events-quickstart.md#deploy-modules-on-your-edge-device) > [!TIP]
-> If you proceed with the above quickstart, when invoking the direct methods using Visual Studio Code, you will use the device that was added to the IoT Hub via this article, instead of the default `lva-sample-device`.
+> If you proceed with the above quickstart, when invoking the direct methods using Visual Studio Code, you will use the device that was added to the IoT Hub via this article, instead of the default `lva-sample-device`.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/event-based-video-recording-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/event-based-video-recording-tutorial.md
@@ -72,7 +72,7 @@ The diagram is a pictorial representation of a [media graph](media-graph-concept
As the diagram shows, you'll use an [RTSP source](media-graph-concept.md#rtsp-source) node in the media graph to capture the simulated live video of traffic on a highway and send that video to two paths:
-* The first path is to a an HTTP extension node. The node samples the video frames to a value set by you using the `samplingOptions` field and then relays the frames, as images, to the AI module YOLOv3, which is an object detector. The node receives the results, which are the objects (vehicles in traffic) detected by the model. The HTTP extension node then publishes the results via the IoT Hub message sink node to the IoT Edge hub.
+* The first path is to an HTTP extension node. The node samples the video frames to a value set by you using the `samplingOptions` field and then relays the frames, as images, to the AI module YOLOv3, which is an object detector. The node receives the results, which are the objects (vehicles in traffic) detected by the model. The HTTP extension node then publishes the results via the IoT Hub message sink node to the IoT Edge hub.
* The objectCounter module is set up to receive messages from the IoT Edge hub, which include the object detection results (vehicles in traffic). The module checks these messages and looks for objects of a certain type, which were configured via a setting. When such an object is found, this module sends a message to the IoT Edge hub. Those "object found" messages are then routed to the IoT Hub source node of the media graph. Upon receiving such a message, the IoT Hub source node in the media graph triggers the [signal gate processor](media-graph-concept.md#signal-gate-processor) node. The signal gate processor node then opens for a configured amount of time. Video flows through the gate to the asset sink node for that duration. That portion of the live stream is then recorded via the [asset sink](media-graph-concept.md#asset-sink) node to an [asset](terminology.md#asset) in your Azure Media Services account. ## Set up your development environment
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/playback-multi-day-recordings-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/playback-multi-day-recordings-tutorial.md
@@ -53,7 +53,7 @@ As part of the [CVR tutorial](continuous-video-recording-tutorial.md), you would
} ```
-Next, in Visual Studio code, open src/ams-asset-player. This folder contains the necessary files for this tutorial. Open the appsettings.json file, and copy its contents into a new file, appsettings.development.json. Make the following edits to the latter file:
+Next, in Visual Studio code, open src/ams-asset-player. This folder contains the necessary files for this tutorial. Open the appsettings.json file, and copy its contents into a new file, appsettings.development.json. Make the following edits to the newly created appsettings.development.json:
``` "AMS" : {
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/playback-recordings-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/playback-recordings-how-to.md
@@ -47,7 +47,7 @@ Where the precision value can be one of: year, month, day, or full (as shown bel
|||||| |Query|`/availableMedia?precision=year&startTime=2018&endTime=2019`|`/availableMedia?precision=month& startTime=2018-01& endTime=2019-02`|`/availableMedia?precision=day& startTime=2018-01-15& endTime=2019-02-02`|`/availableMedia?precision=full& startTime=2018-01-15T10:08:11.123& endTime=2019-01-015T12:00:01.123`| |Response|`{ "timeRanges":[{ "start":"2018", "end":"2019" }]}`|`{ "timeRanges":[{ "start":"2018-03", "end":"2019-01" }]}`|`{ "timeRanges":[ { "start":"2018-03-01", "end":"2018-03-07" }, { "start":"2018-03-09", "end":"2018-03-31" } ]}`|Full fidelity response. If there were no gaps at all, the start would be startTime, and end would be endTime.|
-|Constrains|&#x2022;startTime <= endTime<br/>&#x2022;Both should be in YYYY format, otherwise return error.<br/>&#x2022;Values can be any number of years apart.<br/>&#x2022;Values are inclusive.|&#x2022;startTime <= endTime<br/>&#x2022;Both should be in YYYY-MM format, otherwise return error.<br/>&#x2022;Values can be at most 12 months apart.<br/>&#x2022;Values are inclusive.|&#x2022;startTime <= endTime<br/>&#x2022;Both should be in YYYY-MM-DD format, otherwise return error.<br/>&#x2022;Values can be at most 31 days apart.<br/>Values are inclusive.|&#x2022;startTime < endTime<br/>&#x2022;Values can be at most 25 hours apart.<br/>&#x2022;Values are inclusive.|
+|Constraints|&#x2022;startTime <= endTime<br/>&#x2022;Both should be in YYYY format, otherwise return error.<br/>&#x2022;Values can be any number of years apart.<br/>&#x2022;Values are inclusive.|&#x2022;startTime <= endTime<br/>&#x2022;Both should be in YYYY-MM format, otherwise return error.<br/>&#x2022;Values can be at most 12 months apart.<br/>&#x2022;Values are inclusive.|&#x2022;startTime <= endTime<br/>&#x2022;Both should be in YYYY-MM-DD format, otherwise return error.<br/>&#x2022;Values can be at most 31 days apart.<br/>&#x2022;Values are inclusive.|&#x2022;startTime < endTime<br/>&#x2022;Values can be at most 25 hours apart.<br/>&#x2022;Values are inclusive.|
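The constraints in the table can be checked client-side before issuing the query. A minimal sketch, assuming the documented formats and span limits (the helper name is hypothetical, and the `full` precision is left out because it allows fractional seconds):

```python
from datetime import datetime

# Expected timestamp format per precision level, as documented above.
FORMATS = {
    "year": "%Y",
    "month": "%Y-%m",
    "day": "%Y-%m-%d",
}

def is_valid_range(precision, start_time, end_time):
    """Return True if startTime/endTime satisfy the documented constraints."""
    fmt = FORMATS.get(precision)
    if fmt is None:  # "full" precision is not validated by this sketch
        return False
    try:
        start = datetime.strptime(start_time, fmt)
        end = datetime.strptime(end_time, fmt)
    except ValueError:
        return False  # wrong format: the service would return an error
    if start > end:
        return False
    # Span limits: any number of years, at most 12 months, at most 31 days.
    if precision == "month":
        months = (end.year - start.year) * 12 + (end.month - start.month)
        return months <= 12
    if precision == "day":
        return (end - start).days <= 31
    return True
```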
#### Additional request format considerations
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/use-azure-portal-to-invoke-direct-methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/use-azure-portal-to-invoke-direct-methods.md
@@ -50,7 +50,7 @@ Use the `GraphTopologyList` method call to retrieve a list of all the graph topo
1. Copy and paste the JSON below in the **Payload** field. ```json {
- "@apiVersion":
+ "@apiVersion": "2.0"
} ``` 1. Select the **Invoke Method** button at the top of the page.<br><br>
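The same call can also be prepared programmatically. This sketch only builds the method name and JSON payload shown above; actually invoking the direct method requires an IoT Hub connection (for example, via the Azure CLI or the azure-iot-hub SDK), which is omitted here, and the helper name is hypothetical:

```python
import json

def build_direct_method(method_name, api_version="2.0", **params):
    """Build a (name, payload) pair for a Live Video Analytics direct method.

    Every direct method payload includes @apiVersion; extra keyword
    arguments become additional payload properties.
    """
    payload = {"@apiVersion": api_version, **params}
    return method_name, json.dumps(payload)

# The GraphTopologyList call shown above takes no extra parameters.
name, body = build_direct_method("GraphTopologyList")
```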
@@ -73,4 +73,4 @@ More direct methods can be found on the [direct methods](./direct-methods.md) pa
> [!NOTE] > A graph instance instantiates a specific topology, so please ensure you have the right topology set before creating a graph instance.
-[Quickstart: Detect motion emit events](./get-started-detect-motion-emit-events-quickstart.md) is a good reference for understanding the exact sequence of direct method calls to be made.
+[Quickstart: Detect motion emit events](./get-started-detect-motion-emit-events-quickstart.md) is a good reference for understanding the exact sequence of direct method calls to be made.
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/release-notes.md
@@ -11,7 +11,7 @@
Previously updated : 01/06/2021 Last updated : 02/01/2021
@@ -26,7 +26,25 @@ To stay up-to-date with the most recent developments, this article provides you
* Bug fixes * Deprecated functionality
-## December 2020
+## January 2021
+
+### Video Indexer is deployed on US Government cloud
+
+You can now create a Video Indexer paid account on the US government cloud, in the Virginia and Arizona regions.
+The Video Indexer free trial offering isn't available in these regions. For more information, see the Video Indexer documentation.
+
+### Video Indexer deployed in the India Central region
+
+You can now create a Video Indexer paid account in the India Central region.
+
+### New Dark Mode for the Video Indexer website experience
+
+The Video Indexer website experience is now available in dark mode.
+To enable dark mode, open the settings panel and toggle on the **Dark Mode** option.
+## December 2020
### Video Indexer deployed in the Switzerland West and Switzerland North regions
migrate https://docs.microsoft.com/en-us/azure/migrate/concepts-dependency-visualization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/concepts-dependency-visualization.md
@@ -10,7 +10,7 @@ Last updated 09/15/2020
# Dependency analysis
-This article describes dependency analysis in Azure Migrate:Server Assessment.
+This article describes dependency analysis in Azure Migrate: Server Assessment.
Dependency analysis identifies dependencies between discovered on-premises machines. It provides these advantages:
purview https://docs.microsoft.com/en-us/azure/purview/register-scan-azure-sql-database-managed-instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database-managed-instance.md
@@ -101,20 +101,6 @@ It is required to get the service principal's application ID and secret:
1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account) 1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the Service Principal to setup your scan
-### Firewall settings
-
-Your database server must allow Azure connections to be enabled. This will allow Azure Purview to reach and connect the server. You can follow the How-to guide for [Connections from inside Azure](../azure-sql/database/firewall-configure.md#connections-from-inside-azure).
-
-1. Navigate to your database account
-1. Select the server name in the **Overview** page
-1. Select **Security > Firewalls and virtual networks**
-1. Select **Yes** for **Allow Azure services and resources to access this server**
-
- :::image type="content" source="media/register-scan-azure-sql-database/sql-firewall.png" alt-text="register sources options" border="true":::
-
-> [!Note]
-> Currently Azure Purview does not support VNET configuration. Therefore you cannot do IP-based firewall settings.
- ## Register an Azure SQL Database Managed Instance data source 1. Navigate to your Purview account
role-based-access-control https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
@@ -77,6 +77,7 @@ Click the resource provider name in the following table to see the list of opera
| [Microsoft.PowerBIDedicated](#microsoftpowerbidedicated) | | [Microsoft.Purview](#microsoftpurview) | | [Microsoft.StreamAnalytics](#microsoftstreamanalytics) |
+| [Microsoft.Synapse](#microsoftsynapse) |
| **Blockchain** | | [Microsoft.Blockchain](#microsoftblockchain) | | **AI + machine learning** |
@@ -4761,6 +4762,169 @@ Azure service: [Stream Analytics](../stream-analytics/index.yml)
> | Microsoft.StreamAnalytics/streamingjobs/transformations/Read | Read Stream Analytics Job Transformation | > | Microsoft.StreamAnalytics/streamingjobs/transformations/Write | Write Stream Analytics Job Transformation |
+### Microsoft.Synapse
+
+Azure service: [Synapse Analytics](../synapse-analytics/index.yml)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.Synapse/checkNameAvailability/action | Checks Workspace name availability. |
+> | Microsoft.Synapse/register/action | Registers the Azure Synapse Analytics (workspaces) Resource Provider and enables the creation of Workspaces. |
+> | Microsoft.Synapse/unregister/action | Unregisters the Azure Synapse Analytics (workspaces) Resource Provider and disables the creation of Workspaces. |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/read | Get any Integration Runtime. |
+> | Microsoft.Synapse/workspaces/integrationruntimes/write | Create or Update any Integration Runtimes. |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/delete | Delete any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/getStatus/action | Get any Integration Runtime's Status |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/createExpressSHIRInstallLink/action | Create an Integration Runtime Install Link |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/start/action | Start any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/stop/action | Stop any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/getConnectionInfo/action | Get Connection Info of any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/regenerateAuthKey/action | Regenerate auth key of any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/listAuthKeys/action | List Auth Keys of any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/removeNode/action | Remove any Integration Runtime node |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/monitoringData/action | Get any Integration Runtime's monitoring data |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/syncCredentials/action | Sync credential on any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/upgrade/action | Upgrade any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/removeLinks/action | Remove any Integration Runtime link |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/enableInteractiveQuery/action | Enable Interactive query on any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/disableInteractiveQuery/action | Disable Interactive query on any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/refreshObjectMetadata/action | Refresh Object metadata on any Integration Runtime |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/getObjectMetadata/action | Get Object metadata on any Integration Runtime |
+> | Microsoft.Synapse/workspaces/managedIdentitySqlControlSettings/write | Update Managed Identity SQL Control Settings on the workspace |
+> | Microsoft.Synapse/workspaces/managedIdentitySqlControlSettings/read | Get Managed Identity SQL Control Settings |
+> | Microsoft.Synapse/workspaces/scopePools/write | Create or Update any Scope pools. |
+> | Microsoft.Synapse/workspaces/scopePools/read | Read any Scope pools. |
+> | Microsoft.Synapse/workspaces/scopePools/delete | Delete any Scope pools. |
+> | Microsoft.Synapse/operations/read | Read Available Operations from the Azure Synapse Analytics Resource Provider. |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/nodes/read | Get any Integration Runtime Node. |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/nodes/delete | Delete any Integration Runtime Node. |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/nodes/write | Patch any Integration Runtime Node. |
+> | Microsoft.Synapse/workspaces/integrationRuntimes/nodes/ipAddress/action | Get Integration Runtime Ip Address |
+> | Microsoft.Synapse/workspaces/firewallRules/write | Create or update any IP Firewall Rule. |
+> | Microsoft.Synapse/workspaces/firewallRules/read | Read IP Firewall Rule |
+> | Microsoft.Synapse/workspaces/firewallRules/delete | Delete any IP Firewall Rule. |
+> | Microsoft.Synapse/workspaces/replaceAllIpFirewallRules/action | Replaces all Ip Firewall Rules for the Workspace. |
+> | Microsoft.Synapse/workspaces/write | Create or Update any Workspaces. |
+> | Microsoft.Synapse/workspaces/read | Read any Workspaces. |
+> | Microsoft.Synapse/workspaces/delete | Delete any Workspaces. |
+> | Microsoft.Synapse/workspaces/checkDefaultStorageAccountStatus/action | Checks Default Storage Account Status. |
+> | Microsoft.Synapse/workspaces/sqlPools/write | Create or Update any SQL Analytics pools. |
+> | Microsoft.Synapse/workspaces/sqlPools/read | Read any SQL Analytics pools. |
+> | Microsoft.Synapse/workspaces/sqlPools/delete | Delete any SQL Analytics pools. |
+> | Microsoft.Synapse/workspaces/sqlPools/pause/action | Pause any SQL Analytics pools. |
+> | Microsoft.Synapse/workspaces/sqlPools/resume/action | Resume any SQL Analytics pools. |
+> | Microsoft.Synapse/workspaces/sqlPools/restorePoints/action | Create a SQL Analytics pool Restore Point. |
+> | Microsoft.Synapse/workspaces/sqlPools/move/action | Rename any SQL Analytics pools. |
+> | Microsoft.Synapse/workspaces/sqlPools/dataWarehouseQueries/read | Read any SQL Analytics pool Queries. |
+> | Microsoft.Synapse/workspaces/sqlPools/geoBackupPolicies/read | Read any SQL Analytics pool Geo Backup Policies. |
+> | Microsoft.Synapse/workspaces/sqlPools/dataWarehouseUserActivities/read | Read any SQL Analytics pool User Activities. |
+> | Microsoft.Synapse/workspaces/sqlPools/restorePoints/read | Read any SQL Analytics pool Restore Points. |
+> | Microsoft.Synapse/workspaces/sqlPools/restorePoints/delete | Deletes a restore point. |
+> | Microsoft.Synapse/workspaces/sqlPools/dataWarehouseQueries/dataWarehouseQuerySteps/read | Read any SQL Analytics pool Query Steps. |
+> | Microsoft.Synapse/workspaces/sqlPools/maintenanceWindows/read | Read any SQL Analytics pool Maintenance Windows. |
+> | Microsoft.Synapse/workspaces/sqlPools/maintenanceWindows/write | Create or Update any SQL Analytics pool Maintenance Windows. |
+> | Microsoft.Synapse/workspaces/sqlPools/maintenanceWindowOptions/read | Read any SQL Analytics pool Maintenance Window Options. |
+> | Microsoft.Synapse/workspaces/sqlPools/replicationLinks/read | Read any SQL Analytics pool Replication Links. |
+> | Microsoft.Synapse/workspaces/sqlPools/transparentDataEncryption/read | Read any SQL Analytics pool Transparent Data Encryption Configuration. |
+> | Microsoft.Synapse/workspaces/sqlPools/transparentDataEncryption/write | Create or Update any SQL Analytics pool Transparent Data Encryption Configuration. |
+> | Microsoft.Synapse/workspaces/sqlPools/transparentDataEncryption/operationResults/read | Read any SQL Analytics pool Transparent Data Encryption Configuration Operation Results. |
+> | Microsoft.Synapse/workspaces/sqlPools/auditingSettings/read | Read any SQL Analytics pool Auditing Settings. |
+> | Microsoft.Synapse/workspaces/sqlPools/auditingSettings/write | Create or Update any SQL Analytics pool Auditing Settings. |
+> | Microsoft.Synapse/workspaces/sqlPools/operations/read | Read any SQL Analytics pool Operations. |
+> | Microsoft.Synapse/workspaces/sqlPools/usages/read | Read any SQL Analytics pool Usages. |
+> | Microsoft.Synapse/workspaces/sqlPools/currentSensitivityLabels/read | Read any SQL Analytics pool Current Sensitivity Labels. |
+> | Microsoft.Synapse/workspaces/sqlPools/currentSensitivityLabels/write | Batch update current sensitivity labels |
+> | Microsoft.Synapse/workspaces/sqlPools/recommendedSensitivityLabels/read | Read any SQL Analytics pool Recommended Sensitivity Labels. |
+> | Microsoft.Synapse/workspaces/sqlPools/recommendedSensitivityLabels/write | Batch update recommended sensitivity labels |
+> | Microsoft.Synapse/workspaces/sqlPools/schemas/read | Read any SQL Analytics pool Schemas. |
+> | Microsoft.Synapse/workspaces/sqlPools/schemas/tables/read | Read any SQL Analytics pool Schema Tables. |
+> | Microsoft.Synapse/workspaces/sqlPools/schemas/tables/columns/read | Read any SQL Analytics pool Schema Table Columns. |
+> | Microsoft.Synapse/workspaces/sqlPools/connectionPolicies/read | Read any SQL Analytics pool Connection Policies. |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/read | Read any SQL Analytics pool Vulnerability Assessment. |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/write | Creates or updates the Sql pool vulnerability assessment |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/delete | Delete any SQL Analytics pool Vulnerability Assessment. |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/scans/read | Read any SQL Analytics pool Vulnerability Assessment Scan Records. |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/scans/initiateScan/action | Initiate any SQL Analytics pool Vulnerability Assessment Scan Records. |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/scans/export/action | Export any SQL Analytics pool Vulnerability Assessment Scan Records. |
+> | Microsoft.Synapse/workspaces/sqlPools/securityAlertPolicies/read | Read any Sql Analytics pool Threat Detection Policies. |
+> | Microsoft.Synapse/workspaces/sqlPools/securityAlertPolicies/write | Create or Update any SQL Analytics pool Threat Detection Policies. |
+> | Microsoft.Synapse/workspaces/sqlPools/schemas/tables/columns/sensitivityLabels/read | Gets the sensitivity label of a given column. |
+> | Microsoft.Synapse/workspaces/sqlPools/schemas/tables/columns/sensitivityLabels/enable/action | Enable any SQL Analytics pool Sensitivity Labels. |
+> | Microsoft.Synapse/workspaces/sqlPools/schemas/tables/columns/sensitivityLabels/disable/action | Disable any SQL Analytics pool Sensitivity Labels. |
+> | Microsoft.Synapse/workspaces/sqlPools/schemas/tables/columns/sensitivityLabels/write | Create or Update any SQL Analytics pool Sensitivity Labels. |
+> | Microsoft.Synapse/workspaces/sqlPools/schemas/tables/columns/sensitivityLabels/delete | Delete any SQL Analytics pool Sensitivity Labels. |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/rules/baselines/read | Get a SQL Analytics pool Vulnerability Assessment Rule Baseline. |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/rules/baselines/write | Create or Update any SQL Analytics pool Vulnerability Assessment Rule Baseline. |
+> | Microsoft.Synapse/workspaces/sqlPools/vulnerabilityAssessments/rules/baselines/delete | Delete any SQL Analytics pool Vulnerability Assessment Rule Baseline. |
+> | Microsoft.Synapse/workspaces/operationStatuses/read | Read any Async Operation Status. |
+> | Microsoft.Synapse/workspaces/operationResults/read | Read any Async Operation Result. |
+> | Microsoft.Synapse/workspaces/sqlPools/operationResults/read | Read any Async Operation Result. |
+> | Microsoft.Synapse/workspaces/bigDataPools/write | Create or Update any Spark pools. |
+> | Microsoft.Synapse/workspaces/bigDataPools/read | Read any Spark pools. |
+> | Microsoft.Synapse/workspaces/bigDataPools/delete | Delete any Spark pools. |
+> | Microsoft.Synapse/workspaces/sqlPools/metadataSync/write | Create or Update SQL Analytics pool Metadata Sync Config |
+> | Microsoft.Synapse/workspaces/sqlPools/metadataSync/read | Read SQL Analytics pool Metadata Sync Config |
+> | Microsoft.Synapse/workspaces/recoverableSqlpools/read | Gets recoverable SQL Analytics Pools, which are the resources representing geo backups of SQL Analytics Pools |
+> | Microsoft.Synapse/workspaces/administrators/write | Set Active Directory Administrator on the Workspace |
+> | Microsoft.Synapse/workspaces/administrators/read | Get Workspace Active Directory Administrator |
+> | Microsoft.Synapse/workspaces/administrators/delete | Delete Workspace Active Directory Administrator |
+> | Microsoft.Synapse/workspaces/privateEndpointConnections/write | Create or Update Private Endpoint Connection |
+> | Microsoft.Synapse/workspaces/privateEndpointConnections/read | Read any Private Endpoint Connection |
+> | Microsoft.Synapse/workspaces/privateEndpointConnections/delete | Delete Private Endpoint Connection |
+> | Microsoft.Synapse/workspaces/privateLinkResources/read | Get a list of Private Link Resources |
+> | Microsoft.Synapse/workspaces/sqlPools/extensions/read | Get SQL Analytics Pool extension |
+> | Microsoft.Synapse/workspaces/sqlPools/extensions/write | Change the extension for a given SQL Analytics Pool |
+> | Microsoft.Synapse/privateLinkHubs/write | Create any PrivateLinkHubs. |
+> | Microsoft.Synapse/privateLinkHubs/read | Read any PrivateLinkHubs. |
+> | Microsoft.Synapse/privateLinkHubs/delete | Delete PrivateLinkHubs. |
+> | Microsoft.Synapse/locations/operationStatuses/read | Read any Async Operation Status. |
+> | Microsoft.Synapse/locations/operationResults/read | Read any Async Operation Result. |
+> | Microsoft.Synapse/privateLinkHubs/privateLinkResources/read | Get a list of Private Link Resources |
+> | Microsoft.Synapse/privateLinkHubs/privateEndpointConnections/write | Create or Update Private Endpoint Connection for PrivateLinkHub |
+> | Microsoft.Synapse/privateLinkHubs/privateEndpointConnections/read | Read any Private Endpoint Connection for PrivateLinkHub |
+> | Microsoft.Synapse/privateLinkHubs/privateEndpointConnections/delete | Delete Private Endpoint Connection for PrivateLinkHub |
+> | Microsoft.Synapse/workspaces/sqlPools/operationStatuses/read | Read any Async Operation Result. |
+> | Microsoft.Synapse/workspaces/keys/write | Create or Update Workspace Keys |
+> | Microsoft.Synapse/workspaces/keys/read | Read any Workspace Key Definition. |
+> | Microsoft.Synapse/workspaces/keys/delete | Delete any Workspace Key. |
+> | Microsoft.Synapse/workspaces/libraries/read | Read Library Artifacts |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/read | Lists the workload groups for a selected SQL pool. |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/write | Sets the properties for a specific workload group. |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/delete | Drops a specific workload group. |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/workloadClassifiers/read | Lists the workload classifiers for a selected SQL Analytics Pool. |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/workloadClassifiers/write | Sets the properties for a specific workload classifier. |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/workloadClassifiers/delete | Drops a specific workload classifier. |
+> | Microsoft.Synapse/workspaces/sqlPools/extendedAuditingSettings/read | Read any SQL Analytics pool Extended Auditing Settings. |
+> | Microsoft.Synapse/workspaces/sqlPools/extendedAuditingSettings/write | Create or Update any SQL Analytics pool Extended Auditing Settings. |
+> | Microsoft.Synapse/workspaces/sqlPools/dataMaskingPolicies/read | Return the list of SQL Analytics pool data masking policies. |
+> | Microsoft.Synapse/workspaces/sqlPools/dataMaskingPolicies/write | Creates or updates a SQL Analytics pool data masking policy |
+> | Microsoft.Synapse/workspaces/sqlPools/dataMaskingPolicies/rules/read | Gets a list of SQL Analytics pool data masking rules. |
+> | Microsoft.Synapse/workspaces/sqlPools/dataMaskingPolicies/rules/write | Creates or updates a SQL Analytics pool data masking rule. |
+> | Microsoft.Synapse/workspaces/sqlPools/columns/read | Return a list of columns for a SQL Analytics pool |
+> | Microsoft.Synapse/workspaces/sqlPools/sensitivityLabels/read | Gets the sensitivity label of a given column. |
+> | Microsoft.Synapse/workspaces/sqlPools/auditRecords/read | Get Sql pool blob audit records |
+> | Microsoft.Synapse/resourceGroups/operationStatuses/read | Read any Async Operation Status. |
+> | Microsoft.Synapse/workspaces/extendedAuditingSettings/write | Create or Update SQL server extended auditing settings. |
+> | Microsoft.Synapse/workspaces/extendedAuditingSettings/read | Read default SQL server extended auditing settings. |
+> | Microsoft.Synapse/workspaces/auditingSettings/write | Create or Update SQL server auditing settings. |
+> | Microsoft.Synapse/workspaces/auditingSettings/read | Read default SQL server auditing settings. |
+> | Microsoft.Synapse/workspaces/securityAlertPolicies/write | Create or Update SQL server security alert policies. |
+> | Microsoft.Synapse/workspaces/securityAlertPolicies/read | Read default SQL server security alert policies. |
+> | Microsoft.Synapse/workspaces/vulnerabilityAssessments/write | Create or Update SQL server vulnerability assessment report. |
+> | Microsoft.Synapse/workspaces/vulnerabilityAssessments/read | Read default SQL server vulnerability assessment report. |
+> | Microsoft.Synapse/workspaces/vulnerabilityAssessments/delete | Delete SQL server vulnerability assessment report. |
+> | Microsoft.Synapse/workspaces/restorableDroppedSqlPools/read | Gets a deleted Sql pool that can be restored |
+> | Microsoft.Synapse/workspaces/sqlPools/dataWarehouseQueries/Steps/read | Read any SQL Analytics pool Query Steps. |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/operationStatuses/read | SQL Analytics Pool workload group operation status |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/workloadClassifiers/operationStatuses/read | SQL Analytics Pool workload classifier operation status |
+> | Microsoft.Synapse/workspaces/sqlPools/workloadGroups/workloadClassifiers/operationResults/read | SQL Analytics Pool workload classifier operation result |
+> | Microsoft.Synapse/workspaces/sqlUsages/read | Gets usage limits available for SQL Analytics Pools |
+> | Microsoft.Synapse/workspaces/auditingSettings/operationResults/read | SQL Server Auditing Settings |
+> | Microsoft.Synapse/workspaces/sqlAdministrators/write | Set Active Directory Administrator on the Workspace |
+> | Microsoft.Synapse/workspaces/sqlAdministrators/read | Get Workspace Active Directory Administrator |
+> | Microsoft.Synapse/workspaces/sqlAdministrators/delete | Delete Workspace Active Directory Administrator |
+ ## Blockchain ### Microsoft.Blockchain
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-concept-troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-troubleshooting.md
@@ -95,7 +95,7 @@ Image analysis is computationally-intensive for even simple cases, so when image
Maximum run time varies by tier: several minutes on the Free tier, up to 24 hours on billable tiers. If on-demand processing fails to complete within the 24-hour period, switch to a schedule so the indexer can pick up processing where it left off.
-For scheduled indexers, indexing resumes on schedule at the last known good document. By using a recurring schedule, the indexer can work its way through the image backlog over a series of hours or days, until all un-processed images are processed. For more information on schedule syntax, see [Step 3: Create-an-indexer](search-howto-indexing-azure-blob-storage.md#step-3-create-an-indexer) or see [How to schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).
+For scheduled indexers, indexing resumes on schedule at the last known good document. By using a recurring schedule, the indexer can work its way through the image backlog over a series of hours or days, until all un-processed images are processed. For more information on schedule syntax, see [Schedule an indexer](search-howto-schedule-indexers.md).
> [!NOTE] > If an indexer is set to a certain schedule but repeatedly fails on the same document over and over again each time it runs, the indexer will begin running on a less frequent interval (up to the maximum of at least once every 24 hours) until it successfully makes progress again. If you believe you have fixed whatever the issue that was causing the indexer to be stuck at a certain point, you can perform an on demand run of the indexer, and if that successfully makes progress, the indexer will return to its set schedule interval again.
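As a reference point, a recurring schedule is a small fragment on the indexer definition; intervals are ISO 8601 durations, and the values below are illustrative:

```json
"schedule": {
  "interval": "PT2H",
  "startTime": "2021-02-01T00:00:00Z"
}
```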
@@ -104,12 +104,12 @@ For portal-based indexing (as described in the quickstart), choosing the "run on
## Tip 8: Increase indexing throughput
-For [parallel indexing](search-howto-large-index.md), place your data into multiple containers or multiple virtual folders inside the same container. Then create multiple datasource and indexer pairs. All indexers can use the same skillset and write into the same target search index, so your search app doesnΓÇÖt need to be aware of this partitioning.
+For [parallel indexing](search-howto-large-index.md), place your data into multiple containers or multiple virtual folders inside the same container. Then create multiple data source and indexer pairs. All indexers can use the same skillset and write into the same target search index, so your search app doesnΓÇÖt need to be aware of this partitioning.
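The partitioning idea above can be sketched as plain REST payloads: one indexer per container or virtual folder, all sharing one skillset and one target index. The names here are hypothetical, and the bodies follow the shape of the Cognitive Search indexer definition:

```python
def make_indexer_definitions(partitions, skillset, index):
    """Build one indexer definition per data partition.

    Each indexer reads from its own data source but writes to the same
    target index using the same skillset, so the search app never needs
    to know about the partitioning.
    """
    return [
        {
            "name": f"blob-indexer-{p}",
            "dataSourceName": f"blob-datasource-{p}",
            "targetIndexName": index,
            "skillsetName": skillset,
        }
        for p in partitions
    ]

defs = make_indexer_definitions(["part1", "part2"], "my-skillset", "my-index")
```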
## See also + [Quickstart: Create an AI enrichment pipeline in the portal](cognitive-search-quickstart-blob.md) + [Tutorial: Learn AI enrichment REST APIs](cognitive-search-tutorial-blob.md)
-+ [Specifying data source credentials](search-howto-indexing-azure-blob-storage.md#how-to-specify-credentials)
++ [How to configure blob indexers](search-howto-indexing-azure-blob-storage.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [How to map enriched fields to an index](cognitive-search-output-field-mapping.md)
++ [How to map enriched fields to an index](cognitive-search-output-field-mapping.md)
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-document-extraction.md
@@ -32,7 +32,7 @@ Parameters are case-sensitive.
| Inputs | Allowed Values | Description |
|--|-|-|
| `parsingMode` | `default` <br/> `text` <br/> `json` | Set to `default` for document extraction from files that are not pure text or json. Set to `text` to improve performance on plain text files. Set to `json` to extract structured content from json files. If `parsingMode` is not defined explicitly, it will be set to `default`. |
-| `dataToExtract` | `contentAndMetadata` <br/> `allMetadata` | Set to `contentAndMetadata` to extract all metadata and textual content from each file. Set to `allMetadata` to extract only the [content-type specific metadata](search-howto-indexing-azure-blob-storage.md#ContentSpecificMetadata) (for example, metadata unique to just .png files). If `dataToExtract` is not defined explicitly, it will be set to `contentAndMetadata`. |
+| `dataToExtract` | `contentAndMetadata` <br/> `allMetadata` | Set to `contentAndMetadata` to extract all metadata and textual content from each file. Set to `allMetadata` to extract only the [metadata properties for the content type](search-blob-metadata-properties.md) (for example, metadata unique to just .png files). If `dataToExtract` is not defined explicitly, it will be set to `contentAndMetadata`. |
| `configuration` | See below. | A dictionary of optional parameters that adjust how the document extraction is performed. See the below table for descriptions of supported configuration properties. |

| Configuration Parameter | Allowed Values | Description |
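Putting the parameters above together, a skill definition might look like the following sketch. The `inputs`/`outputs` wiring and the `configuration` key shown are illustrative assumptions, not a verified definition:

```json
{
  "@odata.type": "#Microsoft.Skills.Util.DocumentExtractionSkill",
  "parsingMode": "default",
  "dataToExtract": "contentAndMetadata",
  "configuration": { "imageAction": "generateNormalizedImages" },
  "context": "/document",
  "inputs": [ { "name": "file_data", "source": "/document/file_data" } ],
  "outputs": [ { "name": "content", "targetName": "extracted_content" } ]
}
```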
search https://docs.microsoft.com/en-us/azure/search/search-blob-metadata-properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-blob-metadata-properties.md
@@ -0,0 +1,60 @@
+
+ Title: Content metadata properties
+
+description: Metadata properties of blobs can provide content to fields in a search index, or information that informs indexing behavior at run time. This article lists metadata properties supported in Azure Cognitive Search.
+
+ Last updated : 02/03/2021
+
+# Content metadata properties used in blob indexing in Azure Cognitive Search
+
+Blobs can contain various content, and many of those content types have metadata properties that can be useful in blob indexing. Just as you can create search fields for standard blob properties like **`metadata_storage_name`**, you can create fields for metadata properties that are specific to a document format.
+
+## Supported document formats
+
+Cognitive Search supports blob indexing for the following document formats:
++
+## Properties by document format
+
+The following table summarizes processing done for each document format, and describes the metadata properties extracted by a blob indexer.
+
+| Document format / content type | Extracted metadata | Processing details |
+| | | |
+| HTML (text/html) |`metadata_content_encoding`<br/>`metadata_content_type`<br/>`metadata_language`<br/>`metadata_description`<br/>`metadata_keywords`<br/>`metadata_title` |Strip HTML markup and extract text |
+| PDF (application/pdf) |`metadata_content_type`<br/>`metadata_language`<br/>`metadata_author`<br/>`metadata_title` |Extract text, including embedded documents (excluding images) |
+| DOCX (application/vnd.openxmlformats-officedocument.wordprocessingml.document) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
+| DOC (application/msword) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
+| DOCM (application/vnd.ms-word.document.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
+| WORD XML (application/vnd.ms-word2006ml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Strip XML markup and extract text |
+| WORD 2003 XML (application/vnd.ms-wordml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date` |Strip XML markup and extract text |
+| XLSX (application/vnd.openxmlformats-officedocument.spreadsheetml.sheet) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
+| XLS (application/vnd.ms-excel) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
+| XLSM (application/vnd.ms-excel.sheet.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
+| PPTX (application/vnd.openxmlformats-officedocument.presentationml.presentation) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
+| PPT (application/vnd.ms-powerpoint) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
+| PPTM (application/vnd.ms-powerpoint.presentation.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
+| MSG (application/vnd.ms-outlook) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_from_email`<br/>`metadata_message_to`<br/>`metadata_message_to_email`<br/>`metadata_message_cc`<br/>`metadata_message_cc_email`<br/>`metadata_message_bcc`<br/>`metadata_message_bcc_email`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_subject` |Extract text, including text extracted from attachments. `metadata_message_to_email`, `metadata_message_cc_email`, and `metadata_message_bcc_email` are string collections; the rest of the fields are strings.|
+| ODT (application/vnd.oasis.opendocument.text) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
+| ODS (application/vnd.oasis.opendocument.spreadsheet) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
+| ODP (application/vnd.oasis.opendocument.presentation) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`title` |Extract text, including embedded documents |
+| ZIP (application/zip) |`metadata_content_type` |Extract text from all documents in the archive |
+| GZ (application/gzip) |`metadata_content_type` |Extract text from all documents in the archive |
+| EPUB (application/epub+zip) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_title`<br/>`metadata_description`<br/>`metadata_language`<br/>`metadata_keywords`<br/>`metadata_identifier`<br/>`metadata_publisher` |Extract text from all documents in the archive |
+| XML (application/xml) |`metadata_content_type`<br/>`metadata_content_encoding`<br/> |Strip XML markup and extract text |
+| JSON (application/json) |`metadata_content_type`<br/>`metadata_content_encoding` |Extract text<br/>NOTE: If you need to extract multiple document fields from a JSON blob, see [Indexing JSON blobs](search-howto-index-json-blobs.md) for details |
+| EML (message/rfc822) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_to`<br/>`metadata_message_cc`<br/>`metadata_creation_date`<br/>`metadata_subject` |Extract text, including attachments |
+| RTF (application/rtf) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_page_count`<br/>`metadata_word_count`<br/> | Extract text|
+| Plain text (text/plain) |`metadata_content_type`<br/>`metadata_content_encoding`<br/> | Extract text|
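A common use of the table above is deciding which metadata fields to define in an index for the content types you expect. The lookup below reproduces a small subset of the table as an illustrative Python dictionary; it is not an Azure SDK structure.

```python
# Subset of the metadata properties table above, keyed by content type.
# Illustrative lookup only; extend with the rows relevant to your content.
METADATA_BY_CONTENT_TYPE = {
    "text/html": ["metadata_content_encoding", "metadata_content_type",
                  "metadata_language", "metadata_description",
                  "metadata_keywords", "metadata_title"],
    "application/pdf": ["metadata_content_type", "metadata_language",
                        "metadata_author", "metadata_title"],
    "text/plain": ["metadata_content_type", "metadata_content_encoding"],
}

def extractable_metadata(content_type):
    """Return the metadata properties a blob indexer extracts for a
    content type, or an empty list for unlisted types."""
    return METADATA_BY_CONTENT_TYPE.get(content_type, [])

print(extractable_metadata("application/pdf"))
```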
+
+## See also
+
+* [Indexers in Azure Cognitive Search](search-indexer-overview.md)
+* [Understand blobs using AI](search-blob-ai-integration.md)
+* [Blob indexing overview](search-blob-storage-integration.md)
search https://docs.microsoft.com/en-us/azure/search/search-howto-create-indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-create-indexers.md
@@ -151,7 +151,7 @@ Given that indexers don't fix data problems, other forms of data cleansing or ma
## Know your index
-Recall that indexers pass off the search documents to the search engine for indexing. Just as indexers have properties that determine execution behavior, an index schema has properties that profoundly effect how strings are indexed (only strings are analyzed and tokenized). Depending on analyzer assignments, indexed strings might be different from what you passed in. You can evaluate the effects of analyzers using [Analyze Text (REST)](/rest/api/searchservice/test-analyzer). For more information about analyzers, see [Analyzers for text processing](search-analyzers.md).
+Recall that indexers pass off the search documents to the search engine for indexing. Just as indexers have properties that determine execution behavior, an index schema has properties that profoundly affect how strings are indexed (only strings are analyzed and tokenized). Depending on analyzer assignments, indexed strings might be different from what you passed in. You can evaluate the effects of analyzers using [Analyze Text (REST)](/rest/api/searchservice/test-analyzer). For more information about analyzers, see [Analyzers for text processing](search-analyzers.md).
In terms of how indexers interact with an index, an indexer only checks field names and types. There is no validation step that ensures incoming content is correct for the corresponding search field in the index. As a verification step, you can run queries on the populated index that return entire documents or selected fields. For more information about querying the contents of an index, see [Create a basic query](search-query-create.md).
search https://docs.microsoft.com/en-us/azure/search/search-howto-index-changed-deleted-blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-changed-deleted-blobs.md
@@ -8,34 +8,39 @@
Previously updated : 09/25/2020
Last updated : 01/29/2021

# How to set up change and deletion detection for blobs in Azure Cognitive Search indexing
-After an initial search index is created, you might want to configure subsequent indexer jobs to pick up just those documents that have been created or deleted since the initial run. For search content that originates from Azure Blob storage, change detection occurs automatically when you use a schedule to trigger indexing. By default, the service reindexes only the changed blobs, as determined by the blob's `LastModified` timestamp. In contrast with other data sources supported by search indexers, blobs always have a timestamp, which eliminates the need to set up a change detection policy manually.
+After an initial search index is created, you might want subsequent indexer jobs to only pick up new and changed documents. For search content that originates from Azure Blob storage, change detection occurs automatically when you use a schedule to trigger indexing. By default, the service reindexes only the changed blobs, as determined by the blob's `LastModified` timestamp. In contrast with other data sources supported by search indexers, blobs always have a timestamp, which eliminates the need to set up a change detection policy manually.
Although change detection is a given, deletion detection is not. If you want to detect deleted documents, make sure to use a "soft delete" approach. If you delete the blobs outright, corresponding documents will not be removed from the search index.
-There are two ways to implement the soft delete approach. Both are described below.
+There are two ways to implement the soft delete approach:
+
++ Native blob soft delete (preview), described next
++ [Soft delete using custom metadata](#soft-delete-using-custom-metadata)

## Native blob soft delete (preview)
+For this deletion detection approach, Cognitive Search depends on the [native blob soft delete](../storage/blobs/soft-delete-blob-overview.md) feature in Azure Blob storage to determine whether blobs have transitioned to a soft deleted state. When blobs are detected in this state, a search indexer uses this information to remove the corresponding document from the index.
+
> [!IMPORTANT]
> Support for native blob soft delete is in preview. Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [REST API version 2020-06-30-Preview](./search-api-preview.md) provides this feature. There is currently no portal or .NET SDK support.
-> [!NOTE]
-> When using the native blob soft delete policy the document keys for the documents in your index must either be a blob property or blob metadata.
+### Prerequisites
-In this method you will use the [native blob soft delete](../storage/blobs/soft-delete-blob-overview.md) feature offered by Azure Blob storage. If native blob soft delete is enabled on your storage account, your data source has a native soft delete policy set, and the indexer finds a blob that has been transitioned to a soft deleted state, the indexer will remove that document from the index. The native blob soft delete policy is not supported when indexing blobs from Azure Data Lake Storage Gen2.
++ [Enable soft delete for blobs](../storage/blobs/soft-delete-blob-enable.md).
++ Blobs must be in an Azure Blob storage container. The Cognitive Search native blob soft delete policy is not supported for blobs from Azure Data Lake Storage Gen2.
++ Document keys for the documents in your index must be mapped to either a blob property or blob metadata.
++ You must use the preview REST API (`api-version=2020-06-30-Preview`) to configure support for soft delete.
-Use the following steps:
+### How to configure deletion detection using native soft delete
-1. Enable [native soft delete for Azure Blob storage](../storage/blobs/soft-delete-blob-overview.md). We recommend setting the retention policy to a value that's much higher than your indexer interval schedule. This way if there's an issue running the indexer or if you have a large number of documents to index, there's plenty of time for the indexer to eventually process the soft deleted blobs. Azure Cognitive Search indexers will only delete a document from the index if it processes the blob while it's in a soft deleted state.
+1. In Blob storage, when enabling soft delete, set the retention policy to a value that's much higher than your indexer interval schedule. This way if there's an issue running the indexer or if you have a large number of documents to index, there's plenty of time for the indexer to eventually process the soft deleted blobs. Azure Cognitive Search indexers will only delete a document from the index if it processes the blob while it's in a soft deleted state.
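The guidance above, that retention should be much higher than the indexer schedule interval, can be expressed as a quick sanity check. The margin factor here is an arbitrary illustration, not an Azure recommendation:

```python
from datetime import timedelta

def retention_is_safe(retention: timedelta, indexer_interval: timedelta,
                      margin: int = 4) -> bool:
    """True if the soft delete retention period comfortably exceeds the
    indexer schedule interval, leaving the indexer time to process
    soft deleted blobs even after a failed or delayed run."""
    return retention >= indexer_interval * margin

# A 7-day retention with a daily indexer leaves ample headroom:
print(retention_is_safe(timedelta(days=7), timedelta(days=1)))   # True
print(retention_is_safe(timedelta(hours=2), timedelta(days=1)))  # False
```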
-1. Configure a native blob soft deletion detection policy on the data source. An example is shown below. Since this feature is in preview, you must use the preview REST API.
-
-1. Run the indexer or set the indexer to run on a schedule. When the indexer runs and processes the blob the document will be removed from the index.
+1. In Cognitive Search, set a native blob soft deletion detection policy on the data source. An example is shown below. Because this feature is in preview, you must use the preview REST API.
```http
PUT https://[service name].search.windows.net/datasources/blob-datasource?api-version=2020-06-30-Preview
@@ -52,27 +57,25 @@ Use the following steps:
}
```
-### Reindexing un-deleted blobs (using native soft delete policies)
+1. [Run the indexer](/rest/api/searchservice/run-indexer) or set the indexer to run [on a schedule](search-howto-schedule-indexers.md). When the indexer runs and processes a blob having a soft delete state, the corresponding search document will be removed from the index.
+
+### Reindexing undeleted blobs (using native soft delete policies)
-If you delete a blob from Azure Blob storage with native soft delete enabled on your storage account, the blob will transition to a soft deleted state, giving you the option to un-delete that blob within the retention period. If you reverse a deletion after the indexer processed it, the indexer will not always index the restored blob. This is because the indexer determines which blobs to index based on the blob's `LastModified` timestamp. When a soft deleted blob is un-deleted, its `LastModified` timestamp does not get updated, so if the indexer has already processed blobs with more recent `LastModified` timestamps, it won't reindex the un-deleted blob.
+If you restore a soft deleted blob in Blob storage, the indexer will not always reindex it. This is because the indexer uses the blob's `LastModified` timestamp to determine whether indexing is needed. When a soft deleted blob is undeleted, its `LastModified` timestamp does not get updated, so if the indexer has already processed blobs with more recent `LastModified` timestamps, it won't reindex the undeleted blob.
-To make sure that an un-deleted blob is reindexed, you will need to update the blob's `LastModified` timestamp. One way to do this is by resaving the metadata of that blob. You don't need to change the metadata, but resaving the metadata will update the blob's `LastModified` timestamp so that the indexer knows that it needs to reindex this blob.
+To make sure that an undeleted blob is reindexed, you will need to update the blob's `LastModified` timestamp. One way to do this is by resaving the metadata of that blob. You don't need to change the metadata, but resaving the metadata will update the blob's `LastModified` timestamp so that the indexer knows to pick it up.
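The `LastModified`-based behavior described above can be illustrated with a toy model. The real change detection logic is internal to the service; this sketch only shows the principle that a blob is picked up when its timestamp exceeds the indexer's high-water mark:

```python
from datetime import datetime

def blobs_to_reindex(blobs, high_water_mark):
    """Toy model of change detection: a blob is reindexed only if its
    LastModified timestamp is newer than the last run's high-water mark."""
    return [name for name, last_modified in blobs.items()
            if last_modified > high_water_mark]

high_water = datetime(2021, 1, 15)
blobs = {
    "undeleted.pdf": datetime(2021, 1, 10),  # restored; timestamp unchanged
    "resaved.pdf": datetime(2021, 1, 20),    # metadata resaved; timestamp bumped
}
print(blobs_to_reindex(blobs, high_water))  # ['resaved.pdf']
```

The undeleted blob is skipped because restoring it did not update `LastModified`; resaving its metadata would bump the timestamp past the high-water mark.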
## Soft delete using custom metadata
-In this method you will use a blob's metadata to indicate when a document should be removed from the search index. This method requires two separate actions, deleting the search document from the index, followed by blob deletion in Azure Storage.
+This method uses a blob's metadata to determine whether a search document should be removed from the index. It requires two separate actions: deleting the search document from the index, followed by blob deletion in Azure Storage.
-Use the following steps:
+There are steps to follow in both Blob storage and Cognitive Search, but there are no other feature dependencies. This capability is supported in generally available APIs.
1. Add a custom metadata key-value pair to t