Updates from: 03/03/2023 02:14:01
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md
The following locations are in the process of being added to the list. For now,
## EU Data Boundary
-The EU Data Boundary is Microsoft's commitment for our public sector and commercial customers in the EU and EFTA to process and store their customer data in the EU.
-
-### Services temporarily excluded from the EU Data Boundary
-
-Some services have work in progress to be EU Data Boundary compliant, but this work is delayed beyond January 1, 2023. The services listed will become compliant over the coming months. The following details explain the customer data that these features currently transfer out of the EU Data Boundary as part of their service operations:
-
-* **Reason for customer data egress** - These features haven't completed changes to fully process admin actions and user sign-in actions within the EU Data Boundary.
-* **Types of customer data being egressed** - User account and usage data, and service configuration such as policy.
-* **Customer data location at rest** - In the EU Data Boundary.
-* **Customer data processing** - Some processing may occur globally.
-* **Services** - Administrator actions in the Azure portal or APIs, and User Sign-In Service
+> [!IMPORTANT]
+> For comprehensive details about Microsoft's EU Data Boundary commitment, see [Microsoft's EU Data Boundary documentation](https://learn.microsoft.com/privacy/eudb/eu-data-boundary-learn).
## Remote profile solution
After sign-up, profile editing, or sign-in is complete, Azure AD B2C includes th
## Next steps -- [Create an Azure AD B2C tenant](tutorial-create-tenant.md).
+- [Create an Azure AD B2C tenant](tutorial-create-tenant.md).
active-directory Active Directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/active-directory-app-proxy-protect-ndes.md
Azure AD Application Proxy is built on Azure. It gives you a massive amount of n
1. You should see an **HTTP Error 403 – Forbidden** response.
-1. Change the NDES URL provided (via Microsoft Intune) to devices. This change could either be in Microsoft Endpoint Configuration Manager or the Microsoft Endpoint Manager admin center.
+1. Change the NDES URL provided (via Microsoft Intune) to devices. This change could either be in Microsoft Configuration Manager or the Microsoft Intune admin center.
* For Configuration Manager, go to the certificate registration point and adjust the URL. This URL is what devices call out to and present their challenge. * For Intune standalone, either edit or create a new SCEP policy and add the new URL.
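As a quick illustration of the 403 check mentioned above (not part of the tracked commit), a request to the published NDES path from outside the network should come back as 403 Forbidden once Application Proxy pre-authentication is in place. The URL below is a placeholder for your App Proxy external URL.

```powershell
# Placeholder external URL for the published NDES endpoint - replace with your own.
$ndesUrl = "https://ndes-contoso.msappproxy.net/certsrv/mscep/mscep.dll"

try {
    Invoke-WebRequest -Uri $ndesUrl -UseBasicParsing | Out-Null
}
catch {
    # An HTTP 403 (Forbidden) response here indicates the endpoint is reachable
    # and pre-authentication is blocking anonymous access, as expected.
    Write-Output $_.Exception.Response.StatusCode
}
```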
active-directory Active Directory Certificate Based Authentication Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-get-started.md
The EAS profile must contain the following information:
- The EAS endpoint (for example, outlook.office365.com)
-An EAS profile can be configured and placed on the device through the utilization of Mobile device management (MDM) such as Microsoft Endpoint Manager or by manually placing the certificate in the EAS profile on the device.
+An EAS profile can be configured and placed on the device through the utilization of Mobile device management (MDM) such as Microsoft Intune or by manually placing the certificate in the EAS profile on the device.
### Testing EAS client applications on Android
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
Password expiration policies are unchanged but they're included in this topic fo
You can also use PowerShell to remove the never-expires configuration, or to see user passwords that are set to never expire.
-The following expiration requirements apply to other providers that use Azure AD for identity and directory services, such as Microsoft Endpoint Manager and Microsoft 365.
+The following expiration requirements apply to other providers that use Azure AD for identity and directory services, such as Microsoft Intune and Microsoft 365.
| Property | Requirements | | | |
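For reference, a minimal sketch of the PowerShell approach the article mentions, using Microsoft Graph PowerShell; the UPN is a placeholder, and the property check should be verified against your tenant.

```powershell
Connect-MgGraph -Scopes "User.ReadWrite.All"

# List users whose password policy disables expiration
Get-MgUser -All -Property UserPrincipalName, PasswordPolicies |
    Where-Object { $_.PasswordPolicies -match "DisablePasswordExpiration" } |
    Select-Object UserPrincipalName

# Remove the never-expires setting for a single (placeholder) user
Update-MgUser -UserId "user@contoso.com" -PasswordPolicies "None"
```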
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
Incorporate the following access controls in your existing Conditional Access po
- Provision multiple authentication methods for each user that rely on different communication channels, for example the Microsoft Authenticator app (internet-based), OATH token (generated on-device), and SMS (telephonic). The following PowerShell script will help you identify in advance, which additional methods your users should register: [Script for Azure AD MFA authentication method analysis](/samples/azure-samples/azure-mfa-authentication-method-analysis/azure-mfa-authentication-method-analysis/). - Deploy Windows Hello for Business on Windows 10 devices to satisfy MFA requirements directly from device sign-in.-- Use trusted devices via [Azure AD Hybrid Join](../devices/overview.md) or [Microsoft Endpoint Manager](/intune/planning-guide). Trusted devices will improve user experience because the trusted device itself can satisfy the strong authentication requirements of policy without an MFA challenge to the user. MFA will then be required when enrolling a new device and when accessing apps or resources from untrusted devices.
+- Use trusted devices via [Azure AD Hybrid Join](../devices/overview.md) or [Microsoft Intune](/intune/planning-guide). Trusted devices will improve user experience because the trusted device itself can satisfy the strong authentication requirements of policy without an MFA challenge to the user. MFA will then be required when enrolling a new device and when accessing apps or resources from untrusted devices.
- Use Azure AD identity protection risk-based policies that prevent access when the user or sign-in is at risk in place of fixed MFA policies. - If you are protecting VPN access using Azure AD MFA NPS extension, consider federating your VPN solution as a [SAML app](../manage-apps/view-applications-portal.md) and determine the app category as recommended below.
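Alongside the analysis script linked above, here is a small hedged sketch of checking one user's registered methods with Microsoft Graph PowerShell; the UPN is a placeholder.

```powershell
Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All"

# Enumerate the authentication methods registered for a placeholder user
Get-MgUserAuthenticationMethod -UserId "user@contoso.com" |
    ForEach-Object { $_.AdditionalProperties["@odata.type"] }
```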
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
description: Learn how to use system-preferred multifactor authentication
Previously updated : 02/28/2023 Last updated : 03/02/2023
Content-Type: application/json
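The stray `Content-Type: application/json` line above comes from the article's request example. As a hedged sketch only, enabling system-preferred MFA through the Authentication methods policy might look like the following in Microsoft Graph PowerShell; the beta endpoint and the `systemCredentialPreferences` property name are assumptions to verify against the article.

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

# Assumed property name and beta endpoint - verify against the article's request example
$body = @{
    systemCredentialPreferences = @{
        state          = "enabled"
        includeTargets = @(@{ id = "all_users"; targetType = "group" })
    }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy" `
    -Body $body -ContentType "application/json"
```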
## Known issues -- [FIDO2 security key isn't supported on iOS mobile devices](../develop/support-fido2-authentication.md#mobile). This issue might surface when system-preferred MFA is enabled. Until a fix is available, we recommend not using FIDO2 security keys on iOS devices.
+- [FIDO2 security key isn't supported on mobile devices](../develop/support-fido2-authentication.md#mobile). This issue might surface when system-preferred MFA is enabled. Until a fix is available, we recommend not using FIDO2 security keys on mobile devices.
## Common questions
When a user signs in, the authentication process checks which authentication met
System-preferred MFA doesn't affect users who sign in by using Active Directory Federation Services (AD FS) or Network Policy Server (NPS) extension. Those users don't see any change to their sign-in experience.
-### What if the most secure MFA method isn't available?
-
-If the user doesn't have that have the most secure method available, they can sign in with another method. After sign-in, they're redirected to their Security info page to remove the registration of the authentication method that isn't available.
-
-For example, let's say an end user misplaces their FIDO2 security key. When they try to sign in without their security key, they can click **I can't use my security key right now** and continue to sign in by using another method, like a time-based one-time password (TOTP). After sign-in, their Security info page appears and they need to remove their FIDO2 security key registration. They can register the method again later if they find their FIDO2 security key.
- ### What happens for users who aren't specified in the Authentication methods policy but enabled in the legacy MFA tenant-wide policy? The system-preferred MFA also applies for users who are enabled for MFA in the legacy MFA policy.
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
There are three types of passwordless sign-in deployments available with securit
Enabling Windows 10 sign-in using FIDO2 security keys requires you to enable the credential provider functionality in Windows 10. Choose one of the following:
-* [Enable credential provider with Microsoft Endpoint Manager](howto-authentication-passwordless-security-key-windows.md)
+* [Enable credential provider with Microsoft Intune](howto-authentication-passwordless-security-key-windows.md)
- * We recommend Microsoft Endpoint Manager deployment.
+ * We recommend Microsoft Intune deployment.
* [Enable credential provider with a provisioning package](howto-authentication-passwordless-security-key-windows.md)
- * If Microsoft Endpoint Manager deployment isn't possible, administrators must deploy a package on each machine to enable the credential provider functionality. The package installation can be carried out by one of the following options:
+ * If Microsoft Intune deployment isn't possible, administrators must deploy a package on each machine to enable the credential provider functionality. The package installation can be carried out by one of the following options:
* Group Policy or Configuration Manager * Local installation on a Windows 10 machine
active-directory Howto Authentication Passwordless Security Key Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md
This document focuses on enabling FIDO2 security key based passwordless authenti
| [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) require Windows 10 version 2004 or higher | | X | | Fully patched Windows Server 2016/2019 Domain Controllers. | | X | | [Azure AD Hybrid Authentication Management module](https://www.powershellgallery.com/packages/AzureADHybridAuthenticationManagement/2.1.1.0) | | X |
-| [Microsoft Endpoint Manager](/intune/fundamentals/what-is-intune) (Optional) | X | X |
+| [Microsoft Intune](/intune/fundamentals/what-is-intune) (Optional) | X | X |
| Provisioning package (Optional) | X | X | | Group Policy (Optional) | | X |
Hybrid Azure AD joined devices must run Windows 10 version 2004 or newer.
Organizations may choose to use one or more of the following methods to enable the use of security keys for Windows sign-in based on their organization's requirements: -- [Enable with Endpoint Manager](#enable-with-endpoint-manager)-- [Targeted Endpoint Manager deployment](#targeted-endpoint-manager-deployment)
+- [Enable with Microsoft Intune](#enable-with-microsoft-intune)
+- [Targeted Intune deployment](#targeted-intune-deployment)
- [Enable with a provisioning package](#enable-with-a-provisioning-package) - [Enable with Group Policy (Hybrid Azure AD joined devices only)](#enable-with-group-policy)
Organizations may choose to use one or more of the following methods to enable t
> > Organizations with **Azure AD joined devices** must do this before their devices can authenticate to on-premises resources with FIDO2 security keys.
-### Enable with Endpoint Manager
+### Enable with Microsoft Intune
-To enable the use of security keys using Endpoint Manager, complete the following steps:
+To enable the use of security keys using Intune, complete the following steps:
-1. Sign in to the [Microsoft Endpoint Manager admin center](https://endpoint.microsoft.com).
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com).
1. Browse to **Devices** > **Enroll Devices** > **Windows enrollment** > **Windows Hello for Business**. 1. Set **Use security keys for sign-in** to **Enabled**. Configuration of security keys for sign-in isn't dependent on configuring Windows Hello for Business.
-### Targeted Endpoint Manager deployment
+### Targeted Intune deployment
-To target specific device groups to enable the credential provider, use the following custom settings via Endpoint
+To target specific device groups to enable the credential provider, use the following custom settings via Intune:
-1. Sign in to the [Microsoft Endpoint Manager admin center](https://endpoint.microsoft.com).
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com).
1. Browse to **Devices** > **Windows** > **Configuration profiles** > **Create profile**. 1. Configure the new profile with the following settings: - Platform: Windows 10 and later
To target specific device groups to enable the credential provider, use the foll
- OMA-URI: ./Device/Vendor/MSFT/PassportForWork/SecurityKey/UseSecurityKeyForSignin - Data Type: Integer - Value: 1
-1. The remainder of the policy settings include assigning to specific users, devices, or groups. For more information, see [Assign user and device profiles in Microsoft Endpoint Manager](/intune/device-profile-assign).
+1. The remainder of the policy settings include assigning to specific users, devices, or groups. For more information, see [Assign user and device profiles in Microsoft Intune](/intune/device-profile-assign).
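For illustration only, the same custom OMA-URI profile could be created with Microsoft Graph PowerShell instead of the admin center; the profile name is a placeholder and the `windows10CustomConfiguration`/`omaSettingInteger` payload shape is an assumption to double-check before use.

```powershell
Connect-MgGraph -Scopes "DeviceManagementConfiguration.ReadWrite.All"

# Assumed payload shape for a custom Windows 10 OMA-URI profile
$profileBody = @{
    "@odata.type" = "#microsoft.graph.windows10CustomConfiguration"
    displayName   = "Enable FIDO2 security key sign-in"   # placeholder name
    omaSettings   = @(
        @{
            "@odata.type" = "#microsoft.graph.omaSettingInteger"
            displayName   = "UseSecurityKeyForSignin"
            omaUri        = "./Device/Vendor/MSFT/PassportForWork/SecurityKey/UseSecurityKeyForSignin"
            value         = 1
        }
    )
}

New-MgDeviceManagementDeviceConfiguration -BodyParameter $profileBody
```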
### Enable with a provisioning package
-For devices not managed by Microsoft Endpoint Manager, a provisioning package can be installed to enable the functionality. The Windows Configuration Designer app can be installed from the [Microsoft Store](https://www.microsoft.com/p/windows-configuration-designer/9nblggh4tx22). Complete the following steps to create a provisioning package:
+For devices not managed by Microsoft Intune, a provisioning package can be installed to enable the functionality. The Windows Configuration Designer app can be installed from the [Microsoft Store](https://www.microsoft.com/p/windows-configuration-designer/9nblggh4tx22). Complete the following steps to create a provisioning package:
1. Launch the Windows Configuration Designer. 1. Select **File** > **New project**.
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
To enable your support team's success, you can create a FAQ based on questions y
| User isn't receiving a text or call on their office or cell phone| A user is trying to verify their identity via text or call but isn't receiving a text/call. | | User can't access the password reset portal| A user wants to reset their password but isn't enabled for password reset and can't access the page to update passwords. | | User can't set a new password| A user completes verification during the password reset flow but can't set a new password. |
-| User doesn't see a Reset Password link on a Windows 10 device| A user is trying to reset password from the Windows 10 lock screen, but the device is either not joined to Azure AD, or the Microsoft Endpoint Manager device policy isn't enabled |
+| User doesn't see a Reset Password link on a Windows 10 device| A user is trying to reset password from the Windows 10 lock screen, but the device is either not joined to Azure AD, or the Microsoft Intune device policy isn't enabled |
### Plan rollback
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
To configure a Windows 11 or 10 device for SSPR at the sign-in screen, review th
- Azure AD joined - Hybrid Azure AD joined
-### Enable for Windows 11 and 10 using Microsoft Endpoint Manager
+### Enable for Windows 11 and 10 using Microsoft Intune
-Deploying the configuration change to enable SSPR from the login screen using Microsoft Endpoint Manager is the most flexible method. Microsoft Endpoint Manager allows you to deploy the configuration change to a specific group of machines you define. This method requires Microsoft Endpoint Manager enrollment of the device.
+Deploying the configuration change to enable SSPR from the login screen using Microsoft Intune is the most flexible method. Microsoft Intune allows you to deploy the configuration change to a specific group of machines you define. This method requires Microsoft Intune enrollment of the device.
-#### Create a device configuration policy in Microsoft Endpoint Manager
+#### Create a device configuration policy in Microsoft Intune
-1. Sign in to the [Azure portal](https://portal.azure.com) and select **Endpoint Manager**.
+1. Sign in to the [Microsoft Intune admin center](https://go.microsoft.com/fwlink/?linkid=2109431).
1. Create a new device configuration profile by going to **Device configuration** > **Profiles**, then select **+ Create Profile** - For **Platform** choose *Windows 10 and later* - For **Profile type**, choose Templates then select the Custom template below
Deploying the configuration change to enable SSPR from the login screen using Mi
Select **Add**, then **Next**. 1. The policy can be assigned to specific users, devices, or groups. Assign the profile as desired for your environment, ideally to a test group of devices first, then select **Next**.
- For more information, see [Assign user and device profiles in Microsoft Microsoft Endpoint Manager](/mem/intune/configuration/device-profile-assign).
+ For more information, see [Assign user and device profiles in Microsoft Intune](/mem/intune/configuration/device-profile-assign).
1. Configure applicability rules as desired for your environment, such as to *Assign profile if OS edition is Windows 10 Enterprise*, then select **Next**. 1. Review your profile, then select **Create**.
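For test machines that aren't Intune-enrolled, the article family also documents a registry-based way to show the reset password link at the lock screen. A hedged sketch follows; treat the key path and value name as assumptions to confirm against the full article.

```powershell
# Assumed registry location for the lock-screen SSPR setting - verify before rollout
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft" -Name "AzureADAccount" -Force | Out-Null
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\AzureADAccount" `
    -Name "AllowPasswordReset" -Value 1 -PropertyType DWORD -Force
```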
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-cloud-sync.md
The following table provides a comparison between Azure AD Connect and Azure AD
| Groups with up to 50,000 members |● |● |
| Large groups with up to 250,000 members |● | |
| Cross domain references|● |● |
-| On-demand provisioning|● |● |
+| On-demand provisioning| |● |
| Support for US Government|● |● |
## Next steps
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
Previously updated : 07/09/2021 Last updated : 01/03/2023 # Customer intent: As an application developer, I want to learn how to use Continuous Access Evaluation for building resiliency through long-lived, refreshable tokens that can be revoked based on critical events and policy evaluation. # How to use Continuous Access Evaluation enabled APIs in your applications
-[Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) is an Azure AD feature that allows access tokens to be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation) rather than relying on token expiry based on lifetime. For some resource APIs, because risk and policy are evaluated in real time, this can increase token lifetime up to 28 hours. These long-lived tokens will be proactively refreshed by the Microsoft Authentication Library (MSAL), increasing the resiliency of your applications.
+[Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) is an Azure AD feature that allows access tokens to be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation) rather than relying on token expiry based on lifetime. For some resource APIs, because risk and policy are evaluated in real time, this can increase token lifetime up to 28 hours. These long-lived tokens are proactively refreshed by the Microsoft Authentication Library (MSAL), increasing the resiliency of your applications.
This article shows you how to use CAE-enabled APIs in your applications. Applications not using MSAL can add support for [claims challenges, claims requests, and client capabilities](claims-challenge.md) to use CAE. ## Implementation considerations
-To use Continuous Access Evaluation, both your app and the resource API it's accessing must be CAE-enabled. However, preparing your code to use a CAE enabled resource will not prevent you from using APIs that are not CAE enabled.
+To use CAE, both your app and the resource API it's accessing must be CAE-enabled. However, preparing your code to use a CAE enabled resource won't prevent you from using APIs that aren't CAE enabled.
-If a resource API implements CAE and your application declares it can handle CAE, your app will get CAE tokens for that resource. For this reason, if you declare your app CAE ready, your application must handle the CAE claim challenge for all resource APIs that accept Microsoft Identity access tokens. If you do not handle CAE responses in these API calls, your app could end up in a loop retrying an API call with a token that is still in the returned lifespan of the token but has been revoked due to CAE.
+If a resource API implements CAE and your application declares it can handle CAE, your app receives CAE tokens for that resource. For this reason, if you declare your app CAE ready, your application must handle the CAE claim challenge for all resource APIs that accept Microsoft Identity access tokens. If you don't handle CAE responses in these API calls, your app could end up in a loop retrying an API call with a token that is still in the returned lifespan of the token but has been revoked due to CAE.
## The code
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
The SSO plug-in is installed automatically by devices that have:
* Downloaded the Authenticator app on iOS or iPadOS, or downloaded the Intune Company Portal app on macOS. * Registered their device with your organization.
-Your organization likely uses the Authenticator app for scenarios like multifactor authentication (MFA), passwordless authentication, and conditional access. By using an MDM provider, you can turn on the SSO plug-in for your applications. Microsoft has made it easy to configure the plug-in inside the Microsoft Endpoint Manager in Intune. An allowlist is used to configure these applications to use the SSO plug-in.
+Your organization likely uses the Authenticator app for scenarios like multifactor authentication (MFA), passwordless authentication, and conditional access. By using an MDM provider, you can turn on the SSO plug-in for your applications. Microsoft has made it easy to configure the plug-in using Microsoft Intune. An allowlist is used to configure these applications to use the SSO plug-in.
>[!IMPORTANT] > The Microsoft Enterprise SSO plug-in supports only apps that use native Apple network technologies or webviews. It doesn't support applications that ship their own network layer implementation.
active-directory Azureadjoin Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/azureadjoin-plan.md
There are two approaches for managing Azure AD joined devices:
- **MDM-only** - A device is exclusively managed by an MDM provider like Intune. All policies are delivered as part of the MDM enrollment process. For Azure AD Premium or EMS customers, MDM enrollment is an automated step that is part of an Azure AD join. - **Co-management** - A device is managed by an MDM provider and Microsoft Endpoint Configuration Manager. In this approach, the Microsoft Endpoint Configuration Manager agent is installed on an MDM-managed device to administer certain aspects.
-If you're using Group Policies, evaluate your GPO and MDM policy parity by using [Group Policy analytics](/mem/intune/configuration/group-policy-analytics) in Microsoft Endpoint Manager.
+If you're using Group Policies, evaluate your GPO and MDM policy parity by using [Group Policy analytics](/mem/intune/configuration/group-policy-analytics) in Microsoft Intune.
Review supported and unsupported policies to determine whether you can use an MDM solution instead of Group policies. For unsupported policies, consider the following questions:
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/overview.md
There are three ways to get a device identity:
- Azure AD join - Hybrid Azure AD join
-Device identities are a prerequisite for scenarios like [device-based Conditional Access policies](../conditional-access/require-managed-devices.md) and [Mobile Device Management with Microsoft Endpoint Manager](/mem/endpoint-manager-overview).
+Device identities are a prerequisite for scenarios like [device-based Conditional Access policies](../conditional-access/require-managed-devices.md) and [Mobile Device Management with the Microsoft Intune family of products](/mem/endpoint-manager-overview).
## Modern device scenario
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md
Title: Self-service sign-up for email-verified users - Azure AD | Microsoft Docs
+ Title: Self-service sign up for email-verified users - Azure AD | Microsoft Docs
description: Use self-service sign-up in an Azure Active Directory (Azure AD) organization documentationcenter: ''
Previously updated : 06/23/2022 Last updated : 03/02/2022
Admins have two self-service controls today. They can control whether:
An admin can configure these capabilities using the following Azure AD cmdlet Set-MsolCompanySettings parameters:
-* **AllowEmailVerifiedUsers** controls whether users can join the tenant by email validation. To join, the user must have an email address in a domain which matches one of the verified domains in the tenant. This setting is applied company-wide for all domains in the tenant. If you set that parameter to $false, no email-verified user can join the tenant.
+* **AllowEmailVerifiedUsers** controls whether users can join the tenant by email validation. To join, the user must have an email address in a domain that matches one of the verified domains in the tenant. This setting is applied company-wide for all domains in the tenant. If you set that parameter to $false, no email-verified user can join the tenant.
* **AllowAdHocSubscriptions** controls the ability for users to perform self-service sign-up. If you set that parameter to $false, no user can perform self-service sign-up. AllowEmailVerifiedUsers and AllowAdHocSubscriptions are tenant-wide settings that can be applied to a managed or unmanaged tenant. Here's an example where: * You administer a tenant with a verified domain such as contoso.com
-* You use B2B collaboration from a different tenant to invite a user that does not already exist (userdoesnotexist@contoso.com) in the home tenant of contoso.com
+* You use B2B collaboration from a different tenant to invite a user that doesn't already exist (userdoesnotexist@contoso.com) in the home tenant of contoso.com
* The home tenant has the AllowEmailVerifiedUsers turned on If the preceding conditions are true, then a member user is created in the home tenant, and a B2B guest user is created in the inviting tenant.
+>[!NOTE]
+> Office 365 for Education users are currently the only ones who are added to existing managed tenants even when this toggle is enabled.
+ For more information on Flow and Power Apps trial sign-ups, see the following articles: * [How can I prevent my existing users from starting to use Power BI?](https://support.office.com/article/Power-BI-in-your-Organization-d7941332-8aec-4e5e-87e8-92073ce73dc5#bkmk_preventjoining) * [Flow in your organization Q&A](/power-automate/organization-q-and-a) ### How do the controls work together?
-These two parameters can be used in conjunction to define more precise control over self-service sign-up. For example, the following command will allow users to perform self-service sign-up, but only if those users already have an account in Azure AD (in other words, users who would need an email-verified account to be created first cannot perform self-service sign-up):
+These two parameters can be used in conjunction to define more precise control over self-service sign-up. For example, the following command allows users to perform self-service sign-up, but only if those users already have an account in Azure AD (in other words, users who would need an email-verified account to be created first can't perform self-service sign-up):
```powershell Set-MsolCompanySettings -AllowEmailVerifiedUsers $false -AllowAdHocSubscriptions $true
The following flowchart explains the different combinations for these parameters
![flowchart of self-service sign-up controls](./media/directory-self-service-signup/SelfServiceSignUpControls.png)
-The details of this setting can be retrieved by the following PowerShell cmdlet Get-MsolCompanyInformation. For more information on this, see [Get-MsolCompanyInformation](/powershell/module/msonline/get-msolcompanyinformation).
+This setting's details may be retrieved using the PowerShell cmdlet Get-MsolCompanyInformation. For more information on this, see [Get-MsolCompanyInformation](/powershell/module/msonline/get-msolcompanyinformation).
```powershell Get-MsolCompanyInformation | Select AllowEmailVerifiedUsers, AllowAdHocSubscriptions
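# Rough Microsoft Graph PowerShell counterpart to the MSOnline cmdlets shown above
# (MSOnline is being retired). The authorization policy parameter names below are
# assumptions to verify before use.
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

Update-MgPolicyAuthorizationPolicy `
    -AllowEmailVerifiedUsersToJoinOrganization:$false `
    -AllowedToSignUpEmailBasedSubscriptions:$true

Get-MgPolicyAuthorizationPolicy |
    Select-Object AllowEmailVerifiedUsersToJoinOrganization, AllowedToSignUpEmailBasedSubscriptions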
active-directory Auth Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-kcd.md
description: Architectural guidance on achieving Kerberos constrained delegation
- Previously updated : 08/19/2022 Last updated : 03/01/2023 - # Windows authentication - Kerberos constrained delegation with Azure Active Directory
-Kerberos Constrained Delegation (KCD) provides constrained delegation between resources and is based on Service Principle Names. It requires domain administrators to create the delegations and is limited to a single domain. Resource-based KCD is often used as a way of providing Kerberos authentication for a web application that has users in multiple domains within an Active Directory forest.
+Based on Service Principal Names, Kerberos Constrained Delegation (KCD) provides constrained delegation between resources. It requires domain administrators to create the delegations and is limited to a single domain. You can use resource-based KCD to provide Kerberos authentication for a web application that has users in multiple domains within an Active Directory forest.
Azure Active Directory Application Proxy can provide single sign-on (SSO) and remote access to KCD-based applications that require a Kerberos ticket for access and Kerberos Constrained Delegation (KCD).
-You enable SSO to your on-premises KCD applications that use integrated Windows authentication (IWA) by giving Application Proxy connectors permission to impersonate users in Active Directory. The Application Proxy connector uses this permission to send and receive tokens on the users' behalf.
+To enable SSO to your on-premises KCD applications that use integrated Windows authentication (IWA), give Application Proxy connectors permission to impersonate users in Active Directory. The Application Proxy connector uses this permission to send and receive tokens on the users' behalf.
-## Use when
+## When to use KCD
-There is a need to provide remote access, protect with pre-authentication, and provide SSO to on-premises IWA applications.
+Use KCD when there's a need to provide remote access, protect with pre-authentication, and provide SSO to on-premises IWA applications.
![Diagram of architecture](./media/authentication-patterns/kcd-auth.png) ## Components of system
-* **User**: Accesses legacy application served by Application Proxy.
-
+* **User**: Accesses legacy application that Application Proxy serves.
* **Web browser**: The component that the user interacts with to access the external URL of the application.- * **Azure AD**: Authenticates the user. -
-* **Application Proxy service**: Acts as reverse proxy to send request from the user to the on-premises application. It sits in Azure AD. Application Proxy can also enforce any conditional access policies.
-
-* **Application Proxy connector**: Installed on-premises on Windows servers to provide connectivity to the application. Returns the response to Azure AD. Performs KCD negotiation with Active Directory, impersonating the user to get a Kerberos token to the application.
-
+* **Application Proxy service**: Acts as reverse proxy to send requests from the user to the on-premises application. It sits in Azure AD. Application Proxy can enforce conditional access policies.
+* **Application Proxy connector**: Installed on on-premises Windows servers to provide connectivity to the application. Returns the response to Azure AD. Performs KCD negotiation with Active Directory, impersonating the user to get a Kerberos token to the application.
* **Active Directory**: Sends the Kerberos token for the application to the Application Proxy connector.- * **Legacy applications**: Applications that receive user requests from Application Proxy. The legacy applications return the response to the Application Proxy connector. ## Implement Windows authentication (KCD) with Azure AD
-* [Kerberos Constrained Delegation for single sign-on to your apps with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md)
+Explore the following resources to learn more about implementing Windows authentication (KCD) with Azure AD.
+
+* [Kerberos-based single sign-on (SSO) in Azure Active Directory with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-with-kcd.md) describes prerequisites and configuration steps.
+* The [Tutorial - Add an on-premises app - Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md) helps you to prepare your environment for use with Application Proxy.
+
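As an illustrative sketch (not from the tracked commit) of the resource-based KCD step these articles cover, the following is run on a domain-joined server with the ActiveDirectory module; the computer names are placeholders.

```powershell
# Allow the (placeholder) connector server to impersonate users to the (placeholder) web app server
$connector = Get-ADComputer -Identity "AppProxyConnectorServer"
Set-ADComputer -Identity "WebAppServer" -PrincipalsAllowedToDelegateToAccount $connector

# Confirm the delegation
Get-ADComputer -Identity "WebAppServer" -Properties PrincipalsAllowedToDelegateToAccount
```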
+## Next steps
-* [Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
+* [Azure Active Directory authentication and synchronization protocol overview](auth-sync-overview.md) describes integration with authentication and synchronization protocols. Authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. Synchronization integrations enable you to sync user and group data to Azure AD and then use Azure AD management capabilities. Some sync patterns enable automated provisioning.
+* [Understand single sign-on with an on-premises app using Application Proxy](../app-proxy/application-proxy-config-sso-how-to.md) describes how SSO allows your users to access an application without authenticating multiple times. SSO occurs in the cloud against Azure AD and allows the service or Connector to impersonate the user to complete authentication challenges from the application.
+* [SAML single sign-on for on-premises apps with Azure Active Directory Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md) describes how you can provide remote access to on-premises applications that are secured with SAML authentication through Application Proxy.
active-directory Auth Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-passwordless.md
+
+ Title: Passwordless authentication with Azure Active Directory
+description: Microsoft Azure Active Directory (Azure AD) enables integration with passwordless authentication protocols that include certificate-based authentication, passwordless security key sign-in, Windows Hello for Business, and passwordless sign-in with Microsoft Authenticator.
+++++++ Last updated : 03/01/2023+++
+# Passwordless authentication with Azure Active Directory
+
+Microsoft Azure Active Directory (Azure AD) enables integration with the following passwordless authentication protocols.
+
+- [Overview of Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md): Azure AD certificate-based authentication (CBA) enables customers to allow or require users to authenticate directly with X.509 certificates against their Azure AD for applications and browser sign-in. This feature enables customers to adopt phishing resistant authentication and authenticate with an X.509 certificate against their Public Key Infrastructure (PKI).
+- [Enable passwordless security key sign-in](../authentication/howto-authentication-passwordless-security-key.md): For enterprises that use passwords and have a shared PC environment, security keys provide a seamless way for workers to authenticate without entering a username or password. Security keys provide improved productivity for workers, and have better security. This article explains how to sign in to web-based applications with your Azure AD account using a FIDO2 security key.
+- [Windows Hello for Business Overview](/windows/security/identity-protection/hello-for-business/hello-overview.md): Windows Hello for Business replaces passwords with strong two-factor authentication on devices. This authentication consists of a type of user credential that is tied to a device and uses a biometric or PIN.
+- [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md): Microsoft Authenticator can be used to sign in to any Azure AD account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. Windows Hello for Business uses a similar technology. Microsoft Authenticator can be used on any device platform, including mobile. Microsoft Authenticator can be used with any app or website that integrates with Microsoft Authentication Libraries.
active-directory Auth Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-remote-desktop-gateway.md
description: Architectural guidance on achieving Remote Desktop Gateway Services
- Previously updated : 08/19/2022 Last updated : 03/01/2023 - # Remote Desktop Gateway Services
-A standard Remote Desktop Services (RDS) deployment includes various [Remote Desktop role services](/windows-server/remote/remote-desktop-services/Desktop-hosting-logical-architecture) running on Windows Server. The RDS deployment with Azure Active Directory (Azure AD) Application Proxy has a permanent outbound connection from the server running the connector service. Other deployments leave open inbound connections through a load balancer. This authentication pattern allows you to offer more types of applications by publishing on-premises applications through Remote Desktop Services. It also reduces the attack surface of their deployment by using Azure AD Application Proxy.
+A standard Remote Desktop Services (RDS) deployment includes various [Remote Desktop role services](/windows-server/remote/remote-desktop-services/Desktop-hosting-logical-architecture) running on Windows Server. The RDS deployment with Azure Active Directory (Azure AD) Application Proxy has a permanent outbound connection from the server that is running the connector service. Other deployments leave open inbound connections through a load balancer.
-## Use when
+This authentication pattern allows you to offer more types of applications by publishing on-premises applications through Remote Desktop Services. It reduces the attack surface of your deployment by using Azure AD Application Proxy.
-You need to provide remote access and protect your Remote Desktop Services deployment with pre-authentication.
+## When to use Remote Desktop Gateway Services
+
+Use Remote Desktop Gateway Services when you need to provide remote access and protect your Remote Desktop Services deployment with pre-authentication.
![architectural diagram](./media/authentication-patterns/rdp-auth.png)
-## Components of system
+## System components
* **User**: Accesses RDS served by Application Proxy.- * **Web browser**: The component that the user interacts with to access the external URL of the application.- * **Azure AD**: Authenticates the user.
+* **Application Proxy service**: Acts as reverse proxy to forward request from the user to RDS. Application Proxy can also enforce any Conditional Access policies.
+* **Remote Desktop Services**: Acts as a platform for individual virtualized applications, providing secure mobile and remote desktop access. It provides end users with the ability to run their applications and desktops from the cloud.
-* **Application Proxy service**: Acts as reverse proxy to forward request from the user to RDS. Application Proxy can also enforce any Conditional Access policies.
+## Implement Remote Desktop Gateway services with Azure AD
-* **Remote Desktop Services**: Acts as a platform for individual virtualized applications, providing secure mobile and remote desktop access, and providing end users the ability to run their applications and desktops from the cloud.
+Explore the following resources to learn more about implementing Remote Desktop Gateway services with Azure AD.
-## Implement Remote Desktop Gateway services with Azure AD
+* [Publish Remote Desktop with Azure Active Directory Application Proxy](../app-proxy/application-proxy-integrate-with-remote-desktop-services.md) describes how Remote Desktop Service and Azure AD Application Proxy work together to improve productivity of workers who are away from the corporate network.
+* The [Tutorial - Add an on-premises app - Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md) helps you to prepare your environment for use with Application Proxy.
-* [Publish remote desktop with Azure AD Application Proxy](../app-proxy/application-proxy-integrate-with-remote-desktop-services.md)
+## Next steps
-* [Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md)
+* [Azure Active Directory authentication and synchronization protocol overview](auth-sync-overview.md) describes integration with authentication and synchronization protocols. Authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. Synchronization integrations enable you to sync user and group data to Azure AD and then use Azure AD management capabilities. Some sync patterns enable automated provisioning.
+* [Remote Desktop Services architecture](/windows-server/remote/remote-desktop-services/Desktop-hosting-logical-architecture.md) describes configurations for deploying Remote Desktop Services to host Windows apps and desktops for end-users.
active-directory Five Steps To Full Application Integration With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad.md
Title: Five steps for integrating all your apps with Azure AD
-description: This guide explains how to integrate all your applications with Azure AD. In each step, we explain the value and provide links to resources that will explain the technical details.
+ Title: Five steps to integrate your apps with Azure Active Directory
+description: Learn to integrate your applications with Azure AD by adding apps, discovery, and integration methods.
-+ Previously updated : 08/05/2020 Last updated : 03/01/2023
-# Five steps for integrating all your apps with Azure AD
+# Five steps to integrate your apps with Azure Active Directory
-Azure Active Directory (Azure AD) is the Microsoft cloud-based identity and access management service. Azure AD provides secure authentication and authorization solutions so that customers, partners, and employees can access the applications they need. With Azure AD, [conditional access](../conditional-access/overview.md), [multi-factor authentication](../authentication/concept-mfa-howitworks.md), [single-sign on](../hybrid/how-to-connect-sso.md), and [automatic user provisioning](../app-provisioning/user-provisioning.md) make identity and access management easy and secure.
+Learn to integrate your applications with Azure Active Directory (Azure AD), which is a cloud-based identity and access management service. Organizations use Azure AD for secure authentication and authorization so customers, partners, and employees can access applications. With Azure AD, features such as Conditional Access, Azure AD Multi-Factor Authentication (MFA), single sign-on, and application provisioning make identity and access management easier to manage and more secure.
-If your company has a Microsoft 365 subscription, you likely [already use](/office365/enterprise/about-office-365-identity) Azure AD. However, Azure AD can be used for all your applications, and by [centralizing your application management](../manage-apps/common-scenarios.md) you can use the same identity management features, tools, and policies across your entire app portfolio. Doing so will provide a unified solution that improves security, reduces costs, increases productivity, and enables you to ensure compliance. And you will get remote access to on-premises apps.
+Learn more:
-This guide explains how to integrate all your applications with Azure AD. In each step, we explain the value and provide links to resources that will explain the technical details. We present these steps in an order we recommend. However, you can jump to any part of the process to get started with whatever adds the most value for you.
+* [What is Conditional Access?](../conditional-access/overview.md)
+* [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md)
+* [Azure AD seamless single sign-on](../hybrid/how-to-connect-sso.md)
+* [What is app provisioning in Azure AD?](../app-provisioning/user-provisioning.md)
-Other resources on this topic, including in-depth business process whitepapers, that can be found on our [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md) page.
+If your company has a Microsoft 365 subscription, you likely use Azure AD. However, you can use Azure AD for all your applications. When you centralize application management, you can use the same identity management features, tools, and policies across your app portfolio. The benefit is a unified solution that improves security, reduces costs, increases productivity, and enables compliance. In addition, you get remote access to on-premises apps.
-## 1. Use Azure AD for new applications
+Learn more:
-First, focus on newly acquired applications. When your business starts using a new application, [add it to your Azure AD tenant](../manage-apps/add-application-portal.md) right away. Set up a company policy so that adding new apps to Azure AD is the standard practice in your organization. This is minimally disruptive to existing business processes and allows you to investigate and prove the value you get from integrating apps without changing the way that people do business in your environment today.
+* [Deploy your identity infrastructure for Microsoft 365](/microsoft-365/enterprise/deploy-identity-solution-overview?view=o365-worldwide&preserve-view=true)
+* [What is application management in Azure AD?](../manage-apps/what-is-application-management.md)
-Azure Active Directory (Azure AD) has a gallery that contains thousands of pre-integrated applications to make it easy to get started. You can [add a gallery app to your Azure AD organization](../manage-apps/add-application-portal.md) with step-by-step [tutorials](../saas-apps/tutorial-list.md) for integrating with popular apps like:
+## Azure AD for new applications
-- [ServiceNow](../saas-apps/servicenow-tutorial.md)-- [Workday](../saas-apps/workday-tutorial.md)-- [Salesforce](../saas-apps/salesforce-tutorial.md)-- [AWS](../saas-apps/amazon-web-service-tutorial.md)-- [Slack](../saas-apps/slack-tutorial.md)
+When your business acquires new applications, add them to the Azure AD tenant. Establish a company policy of adding new apps to Azure AD.
-In addition you can [integrate applications not in the gallery](../manage-apps/view-applications-portal.md), including any application that already exists in your organization, or any third-party application from a vendor who is not already part of the Azure AD gallery. You can also [add your app to the gallery](../manage-apps/v2-howto-app-gallery-listing.md) if it is not there.
+See, [Quickstart: Add an enterprise application](../manage-apps/add-application-portal.md)
-Finally, you can also integrate the apps you develop in-house. This is covered in step five of this guide.
+Azure AD has a gallery of integrated applications to make it easy to get started. Add a gallery app to your Azure AD organization (see the previous link) and explore the tutorials for integrating software as a service (SaaS) applications.
-## 2. Determine existing application usage and prioritize work
+See, [Tutorials for integrating SaaS applications with Azure AD](../saas-apps/tutorial-list.md)
-Next, discover the applications employees are frequently using, and prioritize your work for integrating them with Azure AD.
+### Integration tutorials
-You can start by using the Microsoft Defender for Cloud Apps [cloud discovery tools](/cloud-app-security/tutorial-shadow-it) to discover and manage "shadow" IT in your network (that is, apps not managed by the IT department). You can [use Microsoft Defender Advanced Threat Protection (ATP)](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) to simplify and extend the discovery process.
+Use the following tutorials to learn to integrate common tools with Azure AD single sign-on (SSO).
-In addition, you can use the [AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md) in the Azure portal to discover all the AD FS apps in your organization, the number of unique users that have signed in to them, and compatibility for integrating them with Azure AD.
+* [Tutorial: Azure AD SSO integration with ServiceNow](../saas-apps/servicenow-tutorial.md)
+* [Tutorial: Azure AD SSO integration with Workday](../saas-apps/workday-tutorial.md)
+* [Tutorial: Azure AD SSO integration with Salesforce](../saas-apps/salesforce-tutorial.md)
+* [Tutorial: Azure AD SSO integration with AWS Single-Account Access](../saas-apps/amazon-web-service-tutorial.md)
+* [Tutorial: Azure AD SSO integration with Slack](../saas-apps/slack-tutorial.md)
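The tutorials above configure gallery apps through the portal. As a rough sketch, under the assumption that Microsoft Graph PowerShell is available, the same gallery instantiation can be scripted; the template filter and display name are placeholders.

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Find the gallery template (placeholder name) and create an instance of it
$template = Get-MgApplicationTemplate -Filter "displayName eq 'Salesforce'"
Invoke-MgInstantiateApplicationTemplate `
    -ApplicationTemplateId $template.Id `
    -DisplayName "Salesforce - production"
```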
-Once you have discovered your existing landscape, you will want to [create a plan](../manage-apps/migration-resources.md) and prioritize the highest priority apps to integrate. Some example questions you can ask to guide this process are:
+### Apps not in the gallery
-- Which apps are the most used?-- Which are the riskiest?-- Which apps will be decommissioned in the future, making a move unnecessary?-- Which apps need to stay on-premises and cannot be moved to the cloud?
+You can integrate applications that don't appear in the gallery, including applications in your organization, or third-party applications from vendors. Submit a request to publish your app in the gallery. To learn about integrating apps you develop in-house, see **Integrate apps your developers build**.
-You will see the largest benefits and cost savings once all your apps are integrated and you no longer rely on multiple identity solutions. However, you will experience easier identity management and increased security as you move stepwise towards this goal. You want to use this time to prioritize your work and decide what makes sense for your situation.
+Learn more:
-## 3. Integrate apps that rely on other identity providers
+* [Quickstart: View enterprise applications](../manage-apps/view-applications-portal.md)
+* [Submit a request to publish your application in Azure AD application gallery](../manage-apps/v2-howto-app-gallery-listing.md)
-During your discovery process, you may have found applications that are untracked by the IT department, which leave your data and resources vulnerable. You may also have applications that use alternative identity solutions, including Active Directory Federation Services (ADFS) or other identity providers. Consider how you can consolidate your identity and access management to save money and increase security. Reducing the number of identity solutions you have will:
+## Determine application usage and prioritize integration
-- Save you money by eliminating the need for on-premises user provisioning and authentication as well as licensing fees paid to other cloud identity providers for the same service.-- Reduce the administrative overhead and enable tighter security with fewer redundancies in your identity and access management process.-- Enable employees to get secure single sign-on access to ALL the applications they need via the [MyApps portal](../manage-apps/access-panel-collections.md).-- Improve the intelligence of Azure AD's [identity protection](../identity-protection/overview-identity-protection.md) related services like conditional access by increasing the amount of data it gets from your app usage, and extend its benefits to the newly added apps.
+Discover the applications employees use, and prioritize integrating the apps with Azure AD. Use the Microsoft Defender for Cloud Apps Cloud Discovery tools to discover and manage apps not managed by your IT team. Microsoft Defender for Endpoint (formerly known as Microsoft Defender Advanced Threat Protection) simplifies and extends the discovery process.
-We have published guidance for managing the business process of integrating apps with Azure AD, including a [poster](https://aka.ms/AppOnePager) and [presentation](https://aka.ms/AppGuideline) you can use to make business and application owners aware and interested. You can modify those samples with your own branding and publish them to your organization through your company portal, newsletter, or other medium as you go about completing this process.
+Learn more:
-A good place to start is by evaluating your use of Active Directory Federation Services (ADFS). Many organizations use ADFS for authentication with SaaS apps, custom Line-of-Business apps, and Microsoft 365 and Azure AD-based apps:
+* [Set up Cloud Discovery](/defender-cloud-apps/set-up-cloud-discovery)
+* [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true)
-![Diagram shows on-premises apps, line of business apps, SaaS apps, and, via Azure AD, Office 365 all connecting with dotted lines into Active Directory and AD FS.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-1.png)
+In addition, use the Active Directory Federation Services (AD FS) application activity report in the Azure portal to discover AD FS apps in your organization. See the unique users that signed in to the apps, and information about integration compatibility.
-You can upgrade this configuration by [replacing ADFS with Azure AD as the center](../manage-apps/migrate-adfs-apps-to-azure.md) of your identity management solution. Doing so enables sign-on for every app your employees want to access, and makes it easy for employees to find any business application they need via the [MyApps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510), in addition to the other benefits mentioned above.
+See, [Review the application activity report](../manage-apps/migrate-adfs-application-activity.md)
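
To complement discovery, you can also inventory the applications already integrated with Azure AD. The following is a minimal sketch with the Microsoft Graph PowerShell SDK (an assumption; any Microsoft Graph client works):

```powershell
# Minimal inventory sketch: list enterprise applications (service principals)
# already present in the tenant as input for prioritization.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgServicePrincipal -All |
    Select-Object DisplayName, AppId, ServicePrincipalType |
    Sort-Object DisplayName
```
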
-![Diagram shows on-premises apps via Active Directory and AD FS, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-2.png)
+### Application migration
-Once Azure AD becomes the central identity provider, you may be able to switch from ADFS completely, rather than using a federated solution. Apps that previously used ADFS for authentication can now use Azure AD alone.
+After you discover apps in your environment, prioritize the apps to migrate and integrate. Consider the following parameters:
-![Diagram shows on-premises, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory. Active Directory and AD FS is not present.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-3.png)
+- Apps used most frequently
+- Riskiest apps
+- Apps to be decommissioned, therefore not in migration
+- Apps that stay on-premises
-You can also migrate apps that use a different cloud-based identity provider to Azure AD. Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via M365 licenses, there is no reason to pay the added cost of another IAM solution.
+See, [Resources for migrating applications to Azure AD](../manage-apps/migration-resources.md)
-## 4. Integrate on-premises applications
+## Integrate apps and identity providers
-Traditionally, applications were kept secure by allowing access only while connected to the corporate network. However, in an increasingly connected world we want to allow access to apps for customers, partners, and/or employees, regardless of where they are in the world. [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) (AppProxy) is a feature of Azure AD that connects your existing on-premises apps to Azure AD and does not require that you maintain edge servers or other additional infrastructure to do so.
+During discovery, there might be applications not tracked by the IT team, which can create vulnerabilities. Some applications use alternative identity solutions, including AD FS, or other identity providers (IdPs). We recommend you consolidate identity and access management. Benefits include:
-![A diagram shows the Application Proxy Service in action. A user accesses "https://sales.contoso.com" and their request is redirected through "https://sales-contoso.msappproxy.net" in Azure Active Directory to the on-premises address "http://sales"](./media/five-steps-to-full-application-integration-with-azure-ad\app-proxy.png)
+* Reduce on-premises user set-up, authentication, and IdP licensing fees
+* Lower administrative overhead with streamlined identity and access management process
+* Enable single sign-on (SSO) access to applications in the My Apps portal
+ * See, [Create collections on the My Apps portal](../manage-apps/access-panel-collections.md)
+* Use Identity Protection and Conditional Access to increase data from app usage, and extend benefits to recently added apps
+ * [What is Identity Protection?](../identity-protection/overview-identity-protection.md)
+ * [What is Conditional Access?](../conditional-access/overview.md)
-You can use [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md) to enable Application Proxy and add an on-premises application to your Azure AD tenant.
+### App owner awareness
-In addition, you can integrate application delivery controllers like F5 BIG-IP APM or Zscaler Private Access. By integrating these with Azure AD, you get the modern authentication and identity management of Azure AD alongside the traffic management and security features of the partner product. We call this solution [Secure Hybrid Access](../manage-apps/secure-hybrid-access.md). If you use any of the following services today, we have tutorials that will step you through how to integrate them with Azure AD.
+To help manage app integration with Azure AD, use the following material for application owner awareness and interest. Modify the material with your branding.
-- [Akamai Enterprise Application Access (EAA)](../saas-apps/akamai-tutorial.md)
-- [Citrix Application Deliver Controller (ADC)](../saas-apps/citrix-netscaler-tutorial.md) (Formerly known as Citrix Netscaler)
-- [F5 BIG-IP APM](../manage-apps/f5-aad-integration.md)
-- [Zscaler Private Access (ZPA)](../saas-apps/zscalerprivateaccess-tutorial.md)
+You can download:
-## 5. Integrate apps your developers build
+* Zip file, [Editable Azure AD App Integration One-Pager](https://aka.ms/AppOnePager)
+* Microsoft PowerPoint presentation, [Azure AD application integration guidelines](https://aka.ms/AppGuideline)
-For apps that are built within your company, your developers can use the [Microsoft identity platform](../develop/index.yml) to implement authentication and authorization. Applications integrated with the platform with be [registered with Azure AD](../develop/quickstart-register-app.md) and managed just like any other app in your portfolio.
+### Active Directory Federation Services
-Developers can use the platform for both internal-use apps and customer facing apps, and there are other benefits that come with using the platform. [Microsoft Authentication Libraries (MSAL)](../develop/msal-overview.md), which is part of the platform, allows developers to enable modern experiences like multi-factor authentication and the use of security keys to access their apps without needing to implement it themselves. Additionally, apps integrated with the Microsoft identity platform can access [Microsoft Graph](/graph/overview) - a unified API endpoint providing the Azure AD data that describes the patterns of productivity, identity, and security in an organization. Developers can use this information to implement features that increase productivity for your users. For example, by identifying the people the user has been interacting with recently and surfacing them in the app's UI.
+Evaluate the use of AD FS for authentication with SaaS apps, line-of-business apps, and Microsoft 365 and Azure AD apps.
-We have a [video series](https://www.youtube.com/watch?v=zjezqZPPOfc&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX) that provides a comprehensive introduction to the platform as well as [many code samples](../develop/sample-v2-code.md) in supported languages and platforms.
+ ![Diagram of AD FS authenticating with SaaS apps, line-of-business apps, and Microsoft 365 and Azure AD apps.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-1.png)
-## Next steps
+Improve the configuration illustrated in the previous diagram by moving application authentication to Azure AD. Enable sign-on for apps and ease application discovery with the My Apps portal.
-- [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md)
+Learn more:
+
+* [Move application authentication to Azure AD](../manage-apps/migrate-adfs-apps-to-azure.md)
+* [Sign in and start apps from the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510)
+
+See the following diagram of app authentication simplified by Azure AD.
+
+ ![Diagram of app authentication with Azure AD.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-2.png)
+
+After Azure AD is the central IdP, you might be able to discontinue AD FS.
+
+ ![Diagram of Azure AD integration with on-premises apps, LOB apps, SaaS apps, and Office 365.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-3.png)
+
+You can migrate apps that use a different cloud-based IdP. Your organization might have multiple Identity Access Management (IAM) solutions. Migrating to one Azure AD infrastructure can reduce dependencies on IAM licenses and infrastructure costs. If you paid for Azure AD with Microsoft 365 licenses, likely you don't have to purchase another IAM solution.
+
+## Integrate on-premises applications
+
+Traditionally, application security enabled access during a connection to a corporate network. However, organizations grant access to apps for customers, partners, and/or employees, regardless of location. Application Proxy Service in Azure AD connects on-premises apps to Azure AD and doesn't require edge servers or more infrastructure.
+
+See, [Using Azure AD Application Proxy to publish on-premises apps for remote users](../app-proxy/what-is-application-proxy.md)
+
+The following diagram illustrates Application Proxy Service processing a user request.
+
+ ![Diagram of the Azure AD Application Proxy Service processing a user request.](./media/five-steps-to-full-application-integration-with-azure-ad/app-proxy.png)
+
+See, [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md)
+
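If you prefer to script the publishing step, the following is a hedged sketch with the AzureAD PowerShell module (an assumption; the tutorial above uses the Azure portal). The display name and URLs are placeholders, and an Application Proxy connector must already be installed:

```powershell
# Hedged sketch: publish an on-premises web app through Application Proxy.
Connect-AzureAD

New-AzureADApplicationProxyApplication `
    -DisplayName "Contoso Expenses" `
    -InternalUrl "http://expenses.contoso.local/" `
    -ExternalUrl "https://expenses-contoso.msappproxy.net/" `
    -ExternalAuthenticationType AadPreAuthentication   # pre-authenticate users with Azure AD
```
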
+In addition, integrate application delivery controllers like F5 BIG-IP APM, or Zscaler Private Access, with Azure AD. Benefits are modern authentication and identity management, traffic management, and security features. We call this solution secure hybrid access.
+
+See, [Secure hybrid access: Protect legacy apps with Azure AD](../manage-apps/secure-hybrid-access.md)
+
+For the following services, there are Azure AD integration tutorials.
+
+* [Tutorial: Azure AD SSO integration with Akamai](../saas-apps/akamai-tutorial.md)
+* [Tutorial: Azure AD SSO integration with Citrix ADC SAML Connector for Azure AD (Kerberos-based authentication)](../saas-apps/citrix-netscaler-tutorial.md)
+ * Formerly known as Citrix Netscaler
+* [Integrate F5 BIG-IP with Azure AD](../manage-apps/f5-aad-integration.md)
+* [Tutorial: Integrate Zscaler Private Access (ZPA) with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
+
+## Integrate apps your developers build
+
+For your developers' apps, use the Microsoft identity platform for authentication and authorization. Integrated applications are registered and managed like other apps in your portfolio.
+
+Learn more:
+
+* [Microsoft identity platform documentation](../develop/index.yml)
+* [Quickstart: Register an application with the Microsoft identity platform](../develop/quickstart-register-app.md)
+
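App registration can also be scripted. The following sketch uses the Microsoft Graph PowerShell SDK (an assumption; the quickstart above uses the Azure portal), and the display name and redirect URI are placeholders:

```powershell
# Hedged sketch: register an application and create its service principal.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$app = New-MgApplication -DisplayName "Contoso Line-of-Business App" `
    -SignInAudience "AzureADMyOrg" `
    -Web @{ RedirectUris = @("https://localhost:5001/signin-oidc") }

# The service principal represents the app in your tenant so it can be assigned and managed.
New-MgServicePrincipal -AppId $app.AppId
```
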
+Developers can use the platform for internal and customer-facing apps. For instance, use Microsoft Authentication Libraries (MSAL) to enable multi-factor authentication and the use of security keys to access apps.
+
+Learn more:
+
+* [Overview of the Microsoft Authentication Library (MSAL)](../develop/msal-overview.md)
+* [Microsoft identity platform code samples](../develop/sample-v2-code.md)
+* Video: [Overview of the Microsoft identity platform for developers](https://www.youtube.com/watch?v=zjezqZPPOfc&list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX) (33:54)
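
For example, a script or native client can let MSAL drive the sign-in experience, including multi-factor authentication prompts. A minimal sketch with the community MSAL.PS module (an assumption; the client and tenant IDs are placeholders):

```powershell
# Hedged sketch: acquire a Microsoft Graph token interactively; MSAL handles
# the sign-in UI, MFA prompts, and token caching.
$token = Get-MsalToken -ClientId "00000000-0000-0000-0000-000000000000" `
    -TenantId "contoso.onmicrosoft.com" `
    -Scopes "https://graph.microsoft.com/User.Read" `
    -Interactive

# Use $token.AccessToken as the bearer token when calling the API.
```
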
+
+## Next step
+
+[Resources for migrating applications to Azure AD](../manage-apps/migration-resources.md)
active-directory Frontline Worker Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/frontline-worker-management.md
My Staff also enables frontline managers to register their team members' phone n
![SMS sign-in](media/concept-fundamentals-frontline-worker/sms-signin.png)
-Frontline managers can also use Managed Home Screen (MHS) application to allow workers to have access to a specific set of applications on their Intune-enrolled Android dedicated devices. The dedicated devices are enrolled with [Azure AD shared device mode](../develop/msal-shared-devices.md). When configured in multi-app kiosk mode in the Microsoft Endpoint Manager (MEM) console, MHS is automatically launched as the default home screen on the device and appears to the end user as the *only* home screen. To learn more, see how to [configure the Microsoft Managed Home Screen app for Android Enterprise](/mem/intune/apps/app-configuration-managed-home-screen-app).
+Frontline managers can also use Managed Home Screen (MHS) application to allow workers to have access to a specific set of applications on their Intune-enrolled Android dedicated devices. The dedicated devices are enrolled with [Azure AD shared device mode](../develop/msal-shared-devices.md). When configured in multi-app kiosk mode in the Microsoft Intune admin center, MHS is automatically launched as the default home screen on the device and appears to the end user as the *only* home screen. To learn more, see how to [configure the Microsoft Managed Home Screen app for Android Enterprise](/mem/intune/apps/app-configuration-managed-home-screen-app).
## Secure sign-out of frontline workers from shared devices
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
Deploy Azure AD joined Windows 10 workstations with mobile device management pol
- **Use Windows 10 workstations**.
  - Deprecate machines that run Windows 8.1 and earlier.
  - Don't deploy computers that have server operating systems as workstations.
-- **Use Microsoft Endpoint Manager as the authority for all device management workloads.** See [Microsoft Endpoint Manager](https://www.microsoft.com/security/business/microsoft-endpoint-manager).
+- **Use Microsoft Intune as the authority for all device management workloads.** See [Microsoft Intune](https://www.microsoft.com/security/business/endpoint-management/microsoft-intune).
- **Deploy privileged access devices.** For more information, see [Device roles and profiles](/security/compass/privileged-access-devices#device-roles-and-profiles).

### Workloads, applications, and resources
active-directory Resilience App Development Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-app-development-overview.md
Title: Increase resilience of authentication and authorization applications you develop
-description: Overview of our resilience guidance for application development using Azure Active Directory and the Microsoft identity platform
+ Title: Increase the resilience of authentication and authorization applications you develop
+description: Resilience guidance for application development using Azure Active Directory and the Microsoft identity platform
-+ Previously updated : 11/23/2020 Last updated : 03/02/2023
-# Increase resilience of authentication and authorization applications you develop
+# Increase the resilience of authentication and authorization applications you develop
-Microsoft Identity uses modern, token-based authentication and authorization. This means that a client application acquires tokens from an Identity provider to authenticate the user and to authorize the application to call protected APIs. A service will validate tokens.
+The Microsoft identity platform helps you build applications your users and customers can sign in to using their Microsoft identities or social accounts. Microsoft identity platform uses token-based authentication and authorization. Client applications acquire tokens from an identity provider (IdP) to authenticate users and authorize applications to call protected APIs. A service validates tokens.
-A token is valid for a certain length of time before the app must acquire a new one. Rarely, a call to retrieve a token could fail due to an issue like network or infrastructure failure or authentication service outage. In this document, we outline steps a developer can take to increase resilience in their applications if a token acquisition failure occurs.
+Learn more:
-These articles provide guidance on increasing resiliency in apps using the Microsoft identity platform and Azure Active Directory. There is guidance for both for client and service applications that work on behalf of a signed in user as well as daemon applications that work on their own behalf. They contain best practices for using tokens as well as calling resources.
+* [What is the Microsoft identity platform?](../develop/v2-overview.md)
+* [Security tokens](../develop/security-tokens.md)
-- [Build resilience into applications that sign-in users](resilience-client-app.md)
-- [Build resilience into applications without users](resilience-daemon-app.md)
+A token is valid for a length of time, and then the app must acquire a new one. Rarely, a call to retrieve a token fails due to network or infrastructure issues or an authentication service outage.
+
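To reduce the impact of such failures, a client can rely on the token cache and only contact the identity provider when no valid token exists. A minimal sketch with the community MSAL.PS module (an assumption; the IDs are placeholders):

```powershell
# Hedged resilience sketch: try the local token cache first, then fall back
# to an interactive request only when silent acquisition fails.
$params = @{
    ClientId = "00000000-0000-0000-0000-000000000000"
    TenantId = "contoso.onmicrosoft.com"
    Scopes   = "https://graph.microsoft.com/.default"
}

try {
    $token = Get-MsalToken @params -Silent       # served from the cache when possible
}
catch {
    $token = Get-MsalToken @params -Interactive  # full sign-in as a fallback
}
```
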
+The following articles have guidance for client and service applications for a signed in user and daemon applications. They contain best practices for using tokens and calling resources.
+
+- [Increase the resilience of authentication and authorization in client applications you develop](resilience-client-app.md)
+- [Increase the resilience of authentication and authorization in daemon applications you develop](resilience-daemon-app.md)
- [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md)
-- [Build resilience in your CIAM systems](resilience-b2c.md)
-- [Build services that are resilient to metadata refresh](../develop/howto-build-services-resilient-to-metadata-refresh.md)
+- [Build resilience in your customer identity and access management with Azure AD B2C](resilience-b2c.md)
+- [Build services that are resilient to Azure AD's OpenID Connect metadata refresh](../develop/howto-build-services-resilient-to-metadata-refresh.md)
active-directory Secure With Azure Ad Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-multiple-tenants.md
Another approach could have been to utilize the capabilities of Azure AD Connect
## Multi-tenant resource isolation
-A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Microsoft Intune. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
This will allow users to continue to use their corporate credentials, while achieving the benefits of separation as described above.
active-directory Sync Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-directory.md
description: Architectural guidance on achieving directory synchronization with
- Previously updated : 10/10/2020 Last updated : 03/01/2023 - # Directory synchronization
-Many organizations have a hybrid infrastructure encompassing both on-premises and cloud components. Synchronizing users' identities between local and cloud directories lets users access resources with a single set of credentials.
+Many organizations have a hybrid infrastructure that encompasses both on-premises and cloud components. Synchronizing users' identities between local and cloud directories lets users access resources with a single set of credentials.
Synchronization is the process of
-* creating an object based on certain conditions
-* keeping the object updated
-* removing the object when conditions are no longer met.
+* creating an object based on certain conditions,
+* keeping the object updated, and
+* removing the object when conditions are no longer met.
-On-premises provisioning involves provisioning from on-premises sources (like Active Directory) to Azure Active Directory (Azure AD).
+On-premises provisioning involves provisioning from on-premises sources (such as Active Directory) to Azure Active Directory (Azure AD).
-## Use when
+## When to use directory synchronization
-You need to synchronize identity data from your on-premises Active Directory environments to Azure AD.
+Use directory synchronization when you need to synchronize identity data from your on premises Active Directory environments to Azure AD as illustrated in the following diagram.
![architectural diagram](./media/authentication-patterns/dir-sync-auth.png)
-## Components of system
-
-* **User**: Accesses an application using Azure AD.
-
-* **Web browser**: The component that the user interacts with to access the external URL of the application.
+## System components
-* **Application**: Web app that relies on the use of Azure AD for authentication and authorization purposes.
+* **Azure AD**: Synchronizes identity information from organization's on premises directory via Azure AD Connect.
+* **Azure AD Connect**: A tool for connecting on premises identity infrastructures to Microsoft Azure AD. The wizard and guided experiences help you to deploy and configure prerequisites and components required for the connection (including sync and sign on from Active Directories to Azure AD).
+* **Active Directory**: Active Directory is a directory service that is included in most Windows Server operating systems. Servers that run Active Directory Domain Services (AD DS) are called domain controllers. They authenticate and authorize all users and computers in the domain.
-* **Azure AD**: Synchronizes identity information from organization's on-premises directory via Azure AD Connect.
-
-* **Azure AD Connect**: A tool for connecting on premises identity infrastructures to Microsoft Azure AD. The wizard and guided experiences help you deploy and configure pre-requisites and components required for the connection, including sync and sign on from Active Directories to Azure AD.
-
-* **Active Directory**: Active Directory is a directory service included in most Windows Server operating systems. Servers running Active Directory Domain Services (AD DS) are called domain controllers. They authenticate and authorize all users and computers in the domain.
+Microsoft designed [Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md) to meet and accomplish your hybrid identity goals for synchronization of users, groups, and contacts to Azure AD. Azure AD Connect cloud sync uses the Azure AD cloud provisioning agent instead of the Azure AD Connect application.
## Implement directory synchronization with Azure AD
-* [What is identity provisioning?](../cloud-sync/what-is-provisioning.md)
+Explore the following resources to learn more about directory synchronization with Azure AD.
+
+* [What is identity provisioning with Azure AD?](../cloud-sync/what-is-provisioning.md) Provisioning is the process of creating an object based on certain conditions, keeping the object up-to-date and deleting the object when conditions are no longer met. On-premises provisioning involves provisioning from on premises sources (like Active Directory) to Azure AD.
+* [Hybrid Identity: Directory integration tools comparison](../hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md) describes differences between Azure AD Connect sync and Azure AD Connect cloud provisioning.
+* [Azure AD Connect and Azure AD Connect Health installation roadmap](../hybrid/how-to-connect-install-roadmap.md) provides detailed installation and configuration steps.
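
After Azure AD Connect is installed, you can check and trigger synchronization from the server with the ADSync module that ships with it, for example:

```powershell
# Check the scheduler and request an on-demand delta synchronization.
Import-Module ADSync

Get-ADSyncScheduler                       # current sync interval, next run, and enabled state
Start-ADSyncSyncCycle -PolicyType Delta   # sync only the changes since the last cycle
```
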
-* [Hybrid identity directory integration tools](../hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md)
+## Next steps
-* [Azure AD Connect installation roadmap](../hybrid/how-to-connect-install-roadmap.md)
+* [What is hybrid identity with Azure Active Directory?](../../active-directory/hybrid/whatis-hybrid-identity.md) Microsoft's identity solutions span on-premises and cloud-based capabilities. Hybrid identity solutions create a common user identity for authentication and authorization to all resources, regardless of location.
+* [Install the Azure AD Connect provisioning agent](../cloud-sync/how-to-install.md) walks you through the installation process for the Azure Active Directory (Azure AD) Connect provisioning agent and how to initially configure it in the Azure portal.
+* [Azure AD Connect cloud sync new agent configuration](../cloud-sync/how-to-configure.md) guides you through configuring Azure AD Connect cloud sync.
+* [Azure Active Directory authentication and synchronization protocol overview](auth-sync-overview.md) describes integration with authentication and synchronization protocols. Authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. Synchronization integrations enable you to sync user and group data to Azure AD and then use Azure AD management capabilities. Some sync patterns enable automated provisioning.
active-directory Sync Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-ldap.md
description: Architectural guidance on achieving LDAP synchronization with Azure
- Previously updated : 08/19/2022 Last updated : 03/01/2023 - # LDAP synchronization with Azure Active Directory
-The Lightweight Directory Access Protocol (LDAP) is a directory service protocol that runs on the TCP/IP stack. It provides a mechanism used to connect to, search, and modify internet directories. The LDAP directory service is based on a client-server model and its function is to enable access to an existing directory. Many companies depend on on-premises LDAP servers to store users and groups for their critical business apps.
-
-Azure Active Directory (Azure AD) can replace LDAP synchronization with Azure AD Connect. The Azure AD Connect synchronization service performs all the operations related to synchronizing identity data between your on-premises environments and Azure AD.
-
-## Use when
-
-You need to synchronize identity data between your on-premises LDAP v3 directories and Azure AD.
-
-![architectural diagram](./media/authentication-patterns/ldap-sync.png)
+Lightweight Directory Access Protocol (LDAP) is a directory service protocol that runs on the TCP/IP stack. It provides a mechanism that you can use to connect to, search, and modify internet directories. Based on a client-server model, the LDAP directory service enables access to an existing directory.
-## Components of system
+Many companies depend on on-premises LDAP servers to store users and groups for their critical business apps.
-* **User**: Accesses an application that relies on the use of a LDAP v3 directory for sorting users and passwords.
+Azure Active Directory (Azure AD) can replace LDAP synchronization with Azure AD Connect. The Azure AD Connect synchronization service performs all operations related to synchronizing identity data between your on-premises environments and Azure AD.
-* **Web browser**: The component that the user interacts with to access the external URL of the application
+## When to use LDAP synchronization
-* **Web app**: Application with dependencies on LDAP v3 directories.
+Use LDAP synchronization when you need to synchronize identity data between your on premises LDAP v3 directories and Azure AD as illustrated in the following diagram.
-* **Azure AD**: Azure AD synchronizes identity information (users, groups) from organization's on-premises LDAP directories via Azure AD Connect.
+![architectural diagram](./media/authentication-patterns/ldap-sync.png)
-* **Azure AD Connect**: is a tool for connecting on premises identity infrastructures to Microsoft Azure AD. The wizard and guided experiences help to deploy and configure pre-requisites and components required for the connection.
+## System components
+* **Azure AD**: Azure AD synchronizes identity information (users, groups) from organization's on-premises LDAP directories via Azure AD Connect.
+* **Azure AD Connect**: is a tool for connecting on premises identity infrastructures to Microsoft Azure AD. The wizard and guided experiences help to deploy and configure prerequisites and components required for the connection.
* **Custom Connector**: A Generic LDAP Connector enables you to integrate the Azure AD Connect synchronization service with an LDAP v3 server. It sits on Azure AD Connect.
-
-* **Active Directory**: Active Directory is a directory service included in most Windows Server operating systems. Servers running Active Directory Directory Services are called domain controllers and they authenticate and authorize all users and computers in a Windows domain.
-
+* **Active Directory**: Active Directory is a directory service included in most Windows Server operating systems. Servers that run Active Directory Services, referred to as domain controllers, authenticate and authorize all users and computers in a Windows domain.
* **LDAP v3 server**: LDAP protocol-compliant directory storing corporate users and passwords used for directory services authentication.

## Implement LDAP synchronization with Azure AD
-* [Hybrid Identity directory integration tools](../hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md)
+Explore the following resources to learn more about LDAP synchronization with Azure AD.
-* [Azure AD Connect installation roadmap](../hybrid/how-to-connect-install-roadmap.md)
-
-* [Overview and creation a LDAP Connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
+* [Hybrid Identity: Directory integration tools comparison](../hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md) describes differences between Azure AD Connect sync and Azure AD Connect cloud provisioning.
+* [Azure AD Connect and Azure AD Connect Health installation roadmap](../hybrid/how-to-connect-install-roadmap.md) provides detailed installation and configuration steps.
+* The [Generic LDAP Connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap) enables you to integrate the synchronization service with an LDAP v3 server.
> [!NOTE]
- > Deploying the LDAP Connector requires an advanced configuration and this connector is provided under limited support. Configuring this connector requires familiarity with Microsoft Identity Manager and the specific LDAP directory.
+ > Deploying the LDAP Connector requires an advanced configuration. Microsoft provides this connector with limited support. Configuring this connector requires familiarity with Microsoft Identity Manager and the specific LDAP directory.
>
- > Customers who require to deploy this configuration in a production environment are recommended to work with a partner such as Microsoft Consulting Services for help, guidance and support for this configuration.
+ > When you deploy this configuration in a production environment, collaborate with a partner such as Microsoft Consulting Services for help, guidance, and support.
+
+## Next steps
+
+* [What is hybrid identity with Azure Active Directory?](../../active-directory/hybrid/whatis-hybrid-identity.md) Microsoft's identity solutions span on-premises and cloud-based capabilities. Hybrid identity solutions create a common user identity for authentication and authorization to all resources, regardless of location.
+* [Azure Active Directory authentication and synchronization protocol overview](auth-sync-overview.md) describes integration with authentication and synchronization protocols. Authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. Synchronization integrations enable you to sync user and group data to Azure AD and then use Azure AD management capabilities. Some sync patterns enable automated provisioning.
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Customers can work around this requirement for testing purposes by using a featu
**Service category:** Device Registration and Management **Product capability:** Identity Security & Protection
-Azure AD and Microsoft Endpoint Manager teams have combined to bring the capability to customize, scale, and secure your frontline worker devices.
+Azure AD and Microsoft Intune teams have combined to bring the capability to customize, scale, and secure your frontline worker devices.
The following preview capabilities will allow you to:
-- Provision Android shared devices at scale with Microsoft Endpoint Manager
+- Provision Android shared devices at scale with Microsoft Intune
- Secure your access for shift workers using device-based conditional access
- Customize sign-in experiences for the shift workers with Managed Home Screen
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
This article helps you keep track of the versions that have been released and un
You can upgrade your Azure AD Connect server from all supported versions with the latest versions:
-You can download the latest version of Azure AD Connect 2.0 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594). See the [release notes for the latest V2.0 release](reference-connect-version-history.md#20280).
+You can download the latest version of Azure AD Connect 2.0 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594). See the [release notes for the latest V2.0 release](reference-connect-version-history.md#20280).\
+
+Get notified about when to revisit this page for updates by copying and pasting this URL: `https://aka.ms/aadconnectrss` into your ![RSS feed reader icon](../fundamentals/media/whats-new/feed-icon-16x16.png) feed reader.
The following table lists related topics:
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
$assignments | ForEach-Object {
The following Microsoft Graph PowerShell script revokes all permissions granted to an application.

```powershell
-Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All" "AppRoleAssignment.ReadWrite.All"
+Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"
# Get Service Principal using objectId
$sp = Get-MgServicePrincipal -ServicePrincipalID "$ServicePrincipalID"
$spOauth2PermissionsGrants |ForEach-Object {
}

# Get all application permissions for the service principal
-$spApplicationPermissions = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $Sp.Id -All | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
+$spApplicationPermissions = Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $Sp.Id -All | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
# Remove all application permissions
$spApplicationPermissions | ForEach-Object {
active-directory Migrate Okta Sign On Policies To Azure Active Directory Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md
In some scenarios, you might need more setup before you configure the Conditiona
- **Okta device trust to device-based CA**: Conditional Access offers two possible options when you evaluate a user's device:
   - [Use hybrid Azure AD join](#hybrid-azure-ad-join-configuration), which is a feature enabled within the Azure AD Connect server that synchronizes Windows current devices, such as Windows 10, Windows Server 2016, and Windows Server 2019, to Azure AD.
- - [Enroll the device in Endpoint Manager](#configure-device-compliance) and assign a compliance policy.
+ - [Enroll the device in Microsoft Intune](#configure-device-compliance) and assign a compliance policy.
### Hybrid Azure AD join configuration
To enable hybrid Azure AD join on your Azure AD Connect server, run the configur
### Configure device compliance
-Hybrid Azure AD join is a direct replacement for Okta device trust on Windows. Conditional Access policies can also look at device compliance for devices that have fully enrolled in Endpoint
+Hybrid Azure AD join is a direct replacement for Okta device trust on Windows. Conditional Access policies can also look at device compliance for devices that have fully enrolled in Microsoft Intune:
- **Compliance overview**: Refer to [device compliance policies in Intune](/mem/intune/protect/device-compliance-get-started#:~:text=Reference%20for%20non-compliance%20and%20Conditional%20Access%20on%20the,applicable%20%20...%20%203%20more%20rows).
- **Device compliance**: Create [policies in Intune](/mem/intune/protect/create-compliance-policy).
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 02/01/2023 Last updated : 03/02/2023
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## February 2023
+
+### Updated articles
+
+- [Manage custom security attributes for an application (Preview)](custom-security-attributes-apps.md)
+- [Manage app consent policies](manage-app-consent-policies.md)
+- [Configure permission classifications](configure-permission-classifications.md)
+- [Disable user sign-in for an application](disable-user-sign-in-portal.md)
+- [Configure Datawiza for Azure AD Multi-Factor Authentication and single sign-on to Oracle EBS](datawiza-azure-ad-sso-mfa-oracle-ebs.md)
+
## January 2023

### New articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md)
- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on](f5-big-ip-ldap-header-easybutton.md)
-
-## November 2022
-
-### Updated articles
-
-- [Review permissions granted to enterprise applications](manage-application-permissions.md)
-- [Assign users and groups to an application](assign-user-or-group-access-portal.md)
-- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort](silverfort-azure-ad-integration.md)
-- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
-- [Restore an enterprise application in Azure AD](restore-application.md)
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
na Previously updated : 10/27/2022 Last updated : 3/1/2023
Use Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), p
This article is for members who need to activate their Azure resource role in Privileged Identity Management.
+>[!NOTE]
+>As of March 2023, you may now activate your assignments and view your access directly from blades outside of PIM in the Azure portal. Read more [here](pim-resource-roles-activate-your-roles.md#activate-with-azure-portal).
+ ## Activate a role When you need to take on an Azure resource role, you can request activation by using the **My roles** navigation option in Privileged Identity Management.
If you do not require activation of a role that requires approval, you can cance
When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+## Activate with Azure portal
+
+Privileged Identity Management role activation has been integrated into the Billing and Access Control (IAM) extensions within the Azure portal. Shortcuts to Subscriptions (billing) and Access Control (IAM) allow you to activate PIM roles directly from these blades.
+
+From the Subscriptions blade, select "View eligible subscriptions" in the horizontal command menu to check your eligible, active, and expired assignments. From there, you can activate an eligible assignment in the same pane.
+
+ ![Screenshot of view eligible subscriptions on the Subscriptions page.](./media/pim-resource-roles-activate-your-roles/view-subscriptions-1.png)
+
+ ![Screenshot of view eligible subscriptions on the Cost Management: Integration Service page.](./media/pim-resource-roles-activate-your-roles/view-subscriptions-2.png)
+
+In Access control (IAM) for a resource, you can now select "View my access" to see your currently active and eligible role assignments and activate directly.
+
+ ![Screenshot of current role assignments on the Measurement page.](./media/pim-resource-roles-activate-your-roles/view-my-access.png)
+
+By integrating PIM capabilities into different Azure portal blades, this new feature allows you to gain temporary access to view or edit subscriptions and resources more easily.
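
Activation can also be scripted. The following is a hedged sketch using the PIM cmdlets in the Az.Resources module (an assumption; verify the cmdlet and parameter names against your module version). The subscription ID is a placeholder, and the signed-in user must already hold an eligible assignment:

```powershell
# Hedged sketch: self-activate an eligible Contributor assignment for four hours.
$scope  = "/subscriptions/00000000-0000-0000-0000-000000000000"
$userId = (Get-AzADUser -SignedIn).Id
$roleId = (Get-AzRoleDefinition -Name "Contributor").Id

New-AzRoleAssignmentScheduleRequest `
    -Name (New-Guid).ToString() `
    -Scope $scope `
    -PrincipalId $userId `
    -RoleDefinitionId "$scope/providers/Microsoft.Authorization/roleDefinitions/$roleId" `
    -RequestType SelfActivate `
    -ScheduleInfoStartDateTime (Get-Date).ToUniversalTime() `
    -ExpirationType AfterDuration `
    -ExpirationDuration "PT4H" `
    -Justification "Temporary access to review subscription resources"
```
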
+
## Troubleshoot

### Permissions are not granted after activating a role
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users in this role can manage Microsoft 365 apps' cloud settings. This includes
Assign the Organizational Messages Writer role to users who need to do the following tasks:

-- Write, publish, and delete organizational messages using Microsoft 365 admin center or Microsoft Endpoint Manager
-- Manage organizational message delivery options using Microsoft 365 admin center or Microsoft Endpoint Manager
-- Read organizational message delivery results using Microsoft 365 admin center or Microsoft Endpoint Manager
+- Write, publish, and delete organizational messages using Microsoft 365 admin center or Microsoft Intune
+- Manage organizational message delivery options using Microsoft 365 admin center or Microsoft Intune
+- Read organizational message delivery results using Microsoft 365 admin center or Microsoft Intune
- View usage reports and most settings in the Microsoft 365 admin center, but can't make changes

> [!div class="mx-tableFixed"]
This role can create and manage security groups, but does not have administrator
Assign the Windows 365 Administrator role to users who need to do the following tasks:

-- Manage Windows 365 Cloud PCs in Microsoft Endpoint Manager
+- Manage Windows 365 Cloud PCs in Microsoft Intune
- Enroll and manage devices in Azure AD, including assigning users and policies - Create and manage security groups, but not role-assignable groups - View basic properties in the Microsoft 365 admin center
active-directory Akamai Enterprise Application Access Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/akamai-enterprise-application-access-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Akamai Enterprise Application Access for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Akamai Enterprise Application Access.
++
+writer: twimmers
+
+ms.assetid: e4eb183a-192f-49e0-8724-549b2f360b8e
++++ Last updated : 02/27/2023+++
+# Tutorial: Configure Akamai Enterprise Application Access for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Akamai Enterprise Application Access and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Akamai Enterprise Application Access](https://www.akamai.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Akamai Enterprise Application Access.
+> * Remove users in Akamai Enterprise Application Access when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Akamai Enterprise Application Access.
+> * Provision groups and group memberships in Akamai Enterprise Application Access
+> * [Single sign-on](akamai-tutorial.md) to Akamai Enterprise Application Access (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* An administrator account with Akamai [Enterprise Application Access](https://www.akamai.com/products/enterprise-application-access).
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Akamai Enterprise Application Access](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Akamai Enterprise Application Access to support provisioning with Azure AD
+
+Configure a SCIM directory of type Azure in Akamai Enterprise Center and save the SCIM base URL and the Provisioning key.
+
+1. Sign in to [Akamai Enterprise Center](https://control.akamai.com/apps/zt-ui/#/identity/directories).
+
+1. In the menu, navigate to **Application Access > Identity & Users > Directories**.
+1. Select **Add New Directory** (+).
+1. Enter a name and description for the directory.
+1. In **Directory Type** select **SCIM**, and in **SCIM Schema** select **Azure**.
+1. Select **Add New Directory**.
+1. Open your new directory **Settings** > **General** and copy **SCIM base URL**. Save it for Azure SCIM provisioning in STEP 5.
+1. In **Settings** > **General** select **Create Provisioning Key**.
+1. Enter a name and description for the key.
+1. Copy **Provisioning key** by clicking on the copy to clipboard icon. Save it for Azure SCIM provisioning in STEP 5.
+1. In **Login preference Attributes**, select either **User principal name** (default) or **Email** to choose how users sign in.
+1. Select **Save**.
+ The new SCIM directory appears in the directories list in **Identity & Users** > **Directories**.
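
Before moving to the Azure AD side, you can optionally sanity-check the endpoint and key. The following sketch assumes the endpoint follows standard SCIM 2.0 conventions; the URL and key are the values you saved above (placeholders shown here):

```powershell
# Hedged sketch: call the SCIM /Users endpoint with the provisioning key as a bearer token.
$scimBaseUrl     = "https://<your-scim-base-url>"
$provisioningKey = "<your-provisioning-key>"

Invoke-RestMethod -Method Get `
    -Uri "$scimBaseUrl/Users?startIndex=1&count=1" `
    -Headers @{ Authorization = "Bearer $provisioningKey" }
```
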
++
+## Step 3. Add Akamai Enterprise Application Access from the Azure AD application gallery
+
+Add Akamai Enterprise Application Access from the Azure AD application gallery to start managing provisioning to Akamai Enterprise Application Access. If you have previously set up Akamai Enterprise Application Access for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
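
If you prefer to script the assignment, the following sketch uses the Microsoft Graph PowerShell SDK (an assumption). The user and app names are placeholders, and the all-zeros GUID is the default access role for apps that don't define specific roles:

```powershell
# Hedged sketch: assign one test user to the gallery app so only that user is in scope.
Connect-MgGraph -Scopes "Application.Read.All", "AppRoleAssignment.ReadWrite.All"

$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Akamai Enterprise Application Access'"
$user = Get-MgUser -UserId "test.user@contoso.com"

New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id `
    -PrincipalId $user.Id `
    -ResourceId $sp.Id `
    -AppRoleId "00000000-0000-0000-0000-000000000000"
```
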
++
+## Step 5. Configure automatic user provisioning to Akamai Enterprise Application Access
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Akamai Enterprise Application Access based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Akamai Enterprise Application Access in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Akamai Enterprise Application Access**.
+
+ ![Screenshot of the Akamai Enterprise Application Access link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Akamai Enterprise Application Access Tenant URL and Secret Token (the SCIM base URL and provisioning key you saved in Step 2). Click **Test Connection** to ensure Azure AD can connect to Akamai Enterprise Application Access. If the connection fails, ensure your Akamai Enterprise Application Access account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Akamai Enterprise Application Access**.
+
+1. Review the user attributes that are synchronized from Azure AD to Akamai Enterprise Application Access in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Akamai Enterprise Application Access for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Akamai Enterprise Application Access API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute| Type |Supported for filtering|Required by Akamai Enterprise Application Access|
+ |||||
+ |userName| String |✓|✓
+ |active| Boolean |||
+ |displayName| String |||
+ |emails[type eq "work"].value| String ||✓
+ |name.givenName| String |||
+ |name.familyName| String |||
+ |phoneNumbers[type eq "mobile"].value| String|||
+ |externalId| String |||
++
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Akamai Enterprise Application Access**.
+
+1. Review the group attributes that are synchronized from Azure AD to Akamai Enterprise Application Access in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Akamai Enterprise Application Access for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Akamai Enterprise Application Access|
+ |||||
+ |displayName|String|✓|✓
+ |externalId|String|||
+ |members|Reference|||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Akamai Enterprise Application Access, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Akamai Enterprise Application Access by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
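
You can also inspect the provisioning job from a script. The following is a hedged sketch with the Microsoft Graph PowerShell SDK (an assumption; cmdlet and property names can vary by SDK version):

```powershell
# Hedged sketch: look up the synchronization (provisioning) job behind this app.
Connect-MgGraph -Scopes "Synchronization.Read.All"

$sp  = Get-MgServicePrincipal -Filter "displayName eq 'Akamai Enterprise Application Access'"
$job = Get-MgServicePrincipalSynchronizationJob -ServicePrincipalId $sp.Id

$job.Schedule.State   # for example Active, Disabled, or Paused
$job.Status.Code      # overall health, including Quarantine
```
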
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Akamai Enterprise Application Access - Getting Started](https://techdocs.akamai.com/eaa/docs/welcome-guide)
+* [Configuring Custom Attributes in EAA](https://techdocs.akamai.com/eaa/docs/scim-provisioning-with-azure#step-7-optional-add-a-custom-attribute-in--and-map-it-to-the-scim-attribute-in-your--scim-directory)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Fleet Management System Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fleet-management-system-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Fleet Management System
+description: Learn how to configure single sign-on between Azure Active Directory and Fleet Management System.
+ Last updated: 03/02/2023
+# Azure Active Directory SSO integration with Fleet Management System
+
+In this article, you learn how to integrate Fleet Management System with Azure Active Directory (Azure AD). Fleet Management System manages and monitors a fleet of surface-level vehicles and subterranean tugs and carts that Microsoft uses. When you integrate Fleet Management System with Azure AD, you can:
+
+* Control in Azure AD who has access to Fleet Management System.
+* Enable your users to be automatically signed-in to Fleet Management System with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Fleet Management System in a test environment. Fleet Management System supports **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Fleet Management System, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Fleet Management System single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Fleet Management System application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Fleet Management System from the Azure AD gallery
+
+Add Fleet Management System from the Azure AD application gallery to configure single sign-on with Fleet Management System. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Fleet Management System** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. An optional way to inspect the downloaded certificate is sketched after these steps.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Fleet Management System** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
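+
+Before sending the downloaded **Certificate (Base64)** to the support team, you can optionally inspect it locally, for example to confirm its expiration date. The following sketch assumes Python with the `cryptography` package installed; the file name is a placeholder for wherever you saved the download.
+
+```python
+# Optional sanity check, not a required step: inspect the downloaded
+# Certificate (Base64) file before sending it to the support team.
+from cryptography import x509
+
+CERT_PATH = "Fleet Management System.cer"  # hypothetical file name
+
+with open(CERT_PATH, "rb") as f:
+    cert = x509.load_pem_x509_certificate(f.read())
+
+print("Subject:   ", cert.subject.rfc4514_string())
+print("Not before:", cert.not_valid_before)
+print("Not after: ", cert.not_valid_after)
+```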
+
+## Configure Fleet Management System SSO
+
+To configure single sign-on on the **Fleet Management System** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Fleet Management System support team](mailto:fms-datashare@navagis.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Fleet Management System test user
+
+In this section, you create a user called Britta Simon at Fleet Management System. Work with [Fleet Management System support team](mailto:fms-datashare@navagis.com) to add the users in the Fleet Management System platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Fleet Management System for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Fleet Management System tile in the My Apps, you should be automatically signed in to the Fleet Management System for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Fleet Management System, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Illumio Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/illumio-sso-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Illumio SSO
+description: Learn how to configure single sign-on between Azure Active Directory and Illumio SSO.
+ Last updated: 03/02/2023
+# Azure Active Directory SSO integration with Illumio SSO
+
+In this article, you learn how to integrate Illumio SSO with Azure Active Directory (Azure AD). The Illumio SSO app provides a simple, convenient, and secure way for organizations to manage user access to the Illumio PCE. When you integrate Illumio SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to Illumio SSO.
+* Enable your users to be automatically signed-in to Illumio SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Illumio SSO in a test environment. Illumio SSO supports both **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Illumio SSO, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Illumio SSO single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Illumio SSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Illumio SSO from the Azure AD gallery
+
+Add Illumio SSO from the Azure AD application gallery to configure single sign-on with Illumio SSO. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Illumio SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<DOMAIN>/login`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<DOMAIN>/login/acs/<ID>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<DOMAIN>/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Illumio SSO Client support team](mailto:support@illumio.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your Illumio SSO application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Illumio SSO expects this to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list or use the appropriate attribute value based on your organization's configuration.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Illumio SSO application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+   | Name | Source Attribute |
+   | --- | --- |
+   | User.MemberOf | user.assignedroles |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Illumio SSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
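+
+If you'd rather gather the same values from a script, the following sketch downloads the application's federation metadata, which contains the Azure AD Identifier, Login URL, and signing certificate shown in the **Set up Illumio SSO** section. The tenant ID and application (client) ID are placeholders you replace with your own values.
+
+```python
+# Minimal sketch, assuming the placeholders below are replaced with your own
+# tenant ID and application (client) ID. It downloads the app's federation
+# metadata and prints the values you would otherwise copy from the portal.
+import urllib.request
+import xml.etree.ElementTree as ET
+
+TENANT_ID = "<tenant-id>"        # placeholder
+APP_ID = "<application-id>"      # placeholder
+
+url = (
+    f"https://login.microsoftonline.com/{TENANT_ID}"
+    f"/federationmetadata/2007-06/federationmetadata.xml?appid={APP_ID}"
+)
+ns = {
+    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
+    "ds": "http://www.w3.org/2000/09/xmldsig#",
+}
+
+root = ET.fromstring(urllib.request.urlopen(url).read())
+print("Azure AD Identifier:", root.attrib.get("entityID"))
+for sso in root.findall(".//md:IDPSSODescriptor/md:SingleSignOnService", ns):
+    print("Login URL:", sso.attrib.get("Location"), "(", sso.attrib.get("Binding"), ")")
+cert = root.find(".//ds:X509Certificate", ns)
+if cert is not None:
+    print("Signing certificate (Base64), first 40 chars:", cert.text.strip()[:40])
+```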
+
+## Configure Illumio SSO
+
+To configure single sign-on on the **Illumio SSO** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Illumio SSO support team](mailto:support@illumio.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Illumio SSO test user
+
+In this section, you create a user called Britta Simon at Illumio SSO. Work with [Illumio SSO support team](mailto:support@illumio.com) to add the users in the Illumio SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+1. Click on **Test this application** in Azure portal. This will redirect to Illumio SSO Sign-on URL where you can initiate the login flow.
+
+1. Go to Illumio SSO Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+1. Click on **Test this application** in Azure portal and you should be automatically signed in to the Illumio SSO for which you set up the SSO.
+
+1. You can also use Microsoft My Apps to test the application in any mode. When you click the Illumio SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Illumio SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Illumio SSO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Karlsgate Identity Exchange Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/karlsgate-identity-exchange-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Karlsgate Identity Exchange (KIE)
+description: Learn how to configure single sign-on between Azure Active Directory and Karlsgate Identity Exchange (KIE).
+ Last updated: 03/02/2023
+# Azure Active Directory SSO integration with Karlsgate Identity Exchange (KIE)
+
+In this article, you learn how to integrate the Karlsgate Identity Exchange (KIE) with Azure Active Directory (Azure AD). Karlsgate provides Privacy Enhancing Technology for protecting data at rest, in transit, and in use. Karlsgate's zero-trust approach allows the free flow of insights while maintaining custody of sensitive data. When you integrate Karlsgate Identity Exchange (KIE) with Azure AD, you can:
+
+* Control in Azure AD who has access to Karlsgate Identity Exchange (KIE).
+* Enable your users to be automatically signed-in to Karlsgate Identity Exchange (KIE) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Karlsgate Identity Exchange (KIE) in a test environment. Karlsgate Identity Exchange (KIE) supports **SP** and **IDP** initiated single sign-on.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Karlsgate Identity Exchange (KIE), you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* An existing Karlsgate Identity Exchange (KIE) single sign-on (SSO) eligible account.
+
+* At least one (1) user created in your Karlsgate Identity Exchange (KIE) account.
+
+> [!NOTE]
+> To be eligible for single sign-on (SSO) access for your Karlsgate Identity Exchange (KIE) account, your KIE account must have an SSO eligible subscription. If you have questions, please contact the [Karlsgate Identity Exchange (KIE) support team](mailto:help@karlsgate.com).
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Karlsgate Identity Exchange (KIE) application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Karlsgate Identity Exchange (KIE) from the Azure AD gallery
+
+Add Karlsgate Identity Exchange (KIE) from the Azure AD application gallery to configure single sign-on with Karlsgate Identity Exchange (KIE). For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Karlsgate Identity Exchange (KIE)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://portal.karlsgate.com/Identity/Account/Login`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Karlsgate Identity Exchange (KIE) SSO
+
+To configure single sign-on on the **Karlsgate Identity Exchange (KIE)** side, you must send the following information to the [Karlsgate Identity Exchange (KIE) support team](mailto:help@karlsgate.com).
+
+1. Your KIE account's configured, non-blank **Public name plate** (as a secondary confirmation of your KIE account). This value is available at `https://portal.karlsgate.com/Profile/Edit`.
+
+1. Your configured **Identifier (Entity ID)** (to confirm your configured value).
+
+1. Your configured **App Federation Metadata Url**. (A lightweight pre-flight check of these values is sketched after this list.)
+
+1. A list of one or more **email domains** for users who will access the Karlsgate Identity Exchange (KIE), for example: northwind.com, de.contoso.com, fr.contoso.com. (See the note below.)
+
+ > [!NOTE]
+ > Many organizations have one (1) email domain configured for their users. For example, at the fictitious company Northwind, the user "jane.smith@northwind.com" has an email domain of "northwind.com". Some organizations have multiple (2+) email domains configured for their users. For example, at the fictitious company Contoso, the user "erika.mustermann@de.contoso.com" has an email domain of "de.contoso.com", while the user "jean.dupont@fr.contoso.com" has an email domain of "fr.contoso.com".
+
+1. The Karlsgate Identity Exchange (KIE) support team will use these settings to configure the Karlsgate Identity Exchange (KIE) application for SAML SSO access.
+
+> [!NOTE]
+> You must have an existing KIE account with an SSO eligible subscription to configure SAML SSO access. For SSO access, a KIE user's email address must match their Azure AD email address.
+
+If you have questions, please contact the [Karlsgate Identity Exchange (KIE) support team](mailto:help@karlsgate.com).
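+
+The following sketch is a lightweight, optional pre-flight check of the values you plan to email to the Karlsgate Identity Exchange (KIE) support team. All values shown are placeholders; it only verifies that the **App Federation Metadata Url** is reachable and that each email domain looks like a bare domain name.
+
+```python
+# Illustrative sketch only: a lightweight check of the values you plan to send.
+# Every value below is a placeholder to replace with your own.
+import re
+import urllib.request
+
+public_name_plate = "<your public name plate>"
+entity_id = "<your configured Identifier (Entity ID)>"
+metadata_url = "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"
+email_domains = ["northwind.com"]  # one or more domains, as described above
+
+# The metadata URL should be reachable and return a response.
+with urllib.request.urlopen(metadata_url) as resp:
+    assert resp.status == 200, "App Federation Metadata Url is not reachable"
+
+# Each email domain should look like a bare domain (no scheme, no user part).
+domain_pattern = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)(\.[A-Za-z0-9-]{1,63})+$")
+for domain in email_domains:
+    assert domain_pattern.match(domain), f"Unexpected domain format: {domain}"
+
+print("Checks passed; values are ready to send:", public_name_plate, entity_id)
+```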
+
+### Create Karlsgate Identity Exchange (KIE) test user
+
+Work with [Karlsgate Identity Exchange (KIE) support team](mailto:help@karlsgate.com) to create a KIE account and add users to your KIE account.
+
+> [!NOTE]
+> For SSO access, a KIE user's email address must match their Azure AD email address.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+1. Click on **Test this application** in Azure portal. This will redirect to Karlsgate Identity Exchange (KIE) Sign-on URL where you can initiate the login flow.
+
+1. Go to Karlsgate Identity Exchange (KIE) Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+1. Click on **Test this application** in Azure portal and you should be automatically signed in to the Karlsgate Identity Exchange (KIE) for which you set up the SSO.
+
+1. You can also use Microsoft My Apps to test the application in any mode. When you click the Karlsgate Identity Exchange (KIE) tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Karlsgate Identity Exchange (KIE) for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Karlsgate Identity Exchange (KIE), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Ledgy Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ledgy-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Ledgy
+description: Learn how to configure single sign-on between Azure Active Directory and Ledgy.
+ Last updated: 03/02/2023
+# Azure Active Directory SSO integration with Ledgy
+
+In this article, you learn how to integrate Ledgy with Azure Active Directory (Azure AD). Ledgy automates your equity: grant shares and options to employees around the world, integrate equity into all your key systems, and help your team understand their ownership stakes. When you integrate Ledgy with Azure AD, you can:
+
+* Control in Azure AD who has access to Ledgy.
+* Enable your users to be automatically signed-in to Ledgy with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Ledgy in a test environment. Ledgy supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Ledgy, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Ledgy single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Ledgy application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Ledgy from the Azure AD gallery
+
+Add Ledgy from the Azure AD application gallery to configure single sign-on with Ledgy. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Ledgy** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://app.ledgy.com/auth/saml/<orgSlug>/metadata.xml`
+
+    b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://app.ledgy.com/auth/saml/<orgSlug>/acs`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://app.ledgy.com/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Ledgy Client support team](mailto:support@ledgy.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. The Ledgy application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Ledgy application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements. A short sketch that restates these values appears after these steps.
+
+   | Name | Source Attribute |
+   | --- | --- |
+   | email | user.mail |
+   | ID | user.userprincipalname |
+   | firstName | user.givenname |
+   | lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
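+
+As a convenience, the following sketch restates the Ledgy-specific values from the steps above in one place so you can double-check them before sending the **App Federation Metadata Url** to Ledgy. The `orgSlug` value is a placeholder supplied by Ledgy for your organization.
+
+```python
+# Minimal sketch that restates the values above in one place. The orgSlug value
+# is a placeholder; Ledgy support provides the real one for your organization.
+ORG_SLUG = "<orgSlug>"  # placeholder supplied by Ledgy
+
+basic_saml_configuration = {
+    "Identifier (Entity ID)": f"https://app.ledgy.com/auth/saml/{ORG_SLUG}/metadata.xml",
+    "Reply URL (ACS URL)": f"https://app.ledgy.com/auth/saml/{ORG_SLUG}/acs",
+    "Sign on URL (SP-initiated only)": "https://app.ledgy.com/login",
+}
+
+# Claims Ledgy expects in the SAML response, mapped to Azure AD source attributes.
+expected_claims = {
+    "email": "user.mail",
+    "ID": "user.userprincipalname",
+    "firstName": "user.givenname",
+    "lastName": "user.surname",
+}
+
+for name, value in {**basic_saml_configuration, **expected_claims}.items():
+    print(f"{name}: {value}")
+```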
+
+## Configure Ledgy SSO
+
+To configure single sign-on on the **Ledgy** side, you need to send the **App Federation Metadata Url** to the [Ledgy support team](mailto:support@ledgy.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Ledgy test user
+
+In this section, a user called B.Simon is created in Ledgy. Ledgy supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Ledgy, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated
+
+* Click on **Test this application** in Azure portal. This will redirect to Ledgy Sign-on URL where you can initiate the login flow.
+
+* Go to Ledgy Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Ledgy for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Ledgy tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Ledgy for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Ledgy, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Testim Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/testim-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Testim
+description: Learn how to configure single sign-on between Azure Active Directory and Testim.
+ Last updated: 03/02/2023
+# Azure Active Directory SSO integration with Testim
+
+In this article, you'll learn how to integrate Testim with Azure Active Directory (Azure AD). Testim is the fastest way to create your most resilient end-to-end tests. The AI-based platform fits your workflow and accelerates software releases. When you integrate Testim with Azure AD, you can:
+
+* Control in Azure AD who has access to Testim.
+* Enable your users to be automatically signed-in to Testim with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Testim in a test environment. Testim supports both **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Testim, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Testim single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Testim application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Testim from the Azure AD gallery
+
+Add Testim from the Azure AD application gallery to configure single sign-on with Testim. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Testim** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://services.testim.io/auth/sso/<ID>/metadata`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://services.testim.io/auth/sso/<ID>/callback`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://app.testim.io/#/azure-signin`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Testim Client support team](mailto:support@testim.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the federation metadata file and save it on your computer. An optional sketch for inspecting the downloaded metadata appears after these steps.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Testim** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
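+
+Optionally, you can inspect the downloaded **Federation Metadata XML** locally before sending it to the Testim support team. The following sketch uses Python's standard library to print the entity ID, SSO endpoints, and signing certificate contained in the file; the file name is a placeholder for wherever you saved the download.
+
+```python
+# Optional sketch: parse the downloaded Federation Metadata XML to see the
+# entity ID, SSO endpoints, and signing certificate. The file name is a
+# placeholder for wherever you saved the download.
+import xml.etree.ElementTree as ET
+
+METADATA_PATH = "Testim.xml"  # hypothetical file name
+
+NS = {
+    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
+    "ds": "http://www.w3.org/2000/09/xmldsig#",
+}
+
+root = ET.parse(METADATA_PATH).getroot()
+print("Entity ID:", root.attrib.get("entityID"))
+
+for sso in root.findall(".//md:IDPSSODescriptor/md:SingleSignOnService", NS):
+    print("SSO endpoint:", sso.attrib.get("Binding"), "->", sso.attrib.get("Location"))
+
+cert = root.find(".//ds:X509Certificate", NS)
+if cert is not None:
+    print("Signing cert (first 40 chars):", cert.text.strip()[:40])
+```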
+
+## Configure Testim SSO
+
+To configure single sign-on on the **Testim** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Testim support team](mailto:support@testim.io). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Testim test user
+
+In this section, you create a user called Britta Simon at Testim. Work with [Testim support team](mailto:support@testim.io) to add the users in the Testim platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Testim Sign-on URL where you can initiate the login flow.
+
+* Go to Testim Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Testim for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Testim tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Testim for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Testim, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Configure Cmmc Level 2 Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md
The following table provides a list of practice statement and objectives, and Az
| AC.L2-3.1.6<br><br>**Practice statement:** Use non-privileged accounts or roles when accessing non security functions.<br><br>**Objectives:**<br>Determine if:<br>[a.] non security functions are identified; and <br>[b.] users are required to use non-privileged accounts or roles when accessing non security functions.<br><br>AC.L2-3.1.7<br><br>**Practice statement:** Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged functions are defined;<br>[b.] non-privileged users are defined;<br>[c.] non-privileged users are prevented from executing privileged functions; and<br>[d.] the execution of privileged functions is captured in audit logs. |Requirements in AC.L2-3.1.6 and AC.L2-3.1.7 complement each other. Require separate accounts for privilege and non-privileged use. Configure Privileged Identity Management (PIM) to bring just-in-time(JIT) privileged access and remove standing access. Configure role based conditional access policies to limit access to productivity application for privileged users. For highly privileged users, secure devices as part of the privileged access story. All privileged actions are captured in the Azure AD Audit logs.<br>[Securing privileged access overview](/security/compass/overview)<br>[Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md)<br>[Users and groups in Conditional Access policy](../conditional-access/concept-conditional-access-users-groups.md)<br>[Why are privileged access devices important](/security/compass/privileged-access-devices) | | AC.L2-3.1.8<br><br>**Practice statement:** Limit unsuccessful sign-on attempts.<br><br>**Objectives:**<br>Determine if:<br>[a.] the means of limiting unsuccessful sign-on attempts is defined; and<br>[b.] the defined means of limiting unsuccessful sign-on attempts is implemented. | Enable custom smart lock-out settings. Configure lock-out threshold and lock-out duration in seconds to implement these requirements.<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md)<br>[Manage Azure AD smart lockout values](../authentication/howto-password-smart-lockout.md) | | AC.L2-3.1.9<br><br>**Practice statement:** Provide privacy and security notices consistent with applicable CUI rules.<br><br>**Objectives:**<br>Determine if:<br>[a.] privacy and security notices required by CUI-specified rules are identified, consistent, and associated with the specific CUI category; and<br>[b.] privacy and security notices are displayed. | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access** <br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) |
-| AC.L2-3.1.10<br><br>**Practice statement:** Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] the period of inactivity after which the system initiates a session lock is defined;<br>[b.] access to the system and viewing of data is prevented by initiating a session lock after the defined period of inactivity; and<br>[c.] previously visible information is concealed via a pattern-hiding display after the defined period of inactivity. | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)).|
+| AC.L2-3.1.10<br><br>**Practice statement:** Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] the period of inactivity after which the system initiates a session lock is defined;<br>[b.] access to the system and viewing of data is prevented by initiating a session lock after the defined period of inactivity; and<br>[c.] previously visible information is concealed via a pattern-hiding display after the defined period of inactivity. | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Microsoft Intune, Configuration Manager, or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)).|
| AC.L2-3.1.11<br><br>**Practice statement:** Terminate (automatically) a user session after a defined condition.<br><br>**Objectives:**<br>Determine if:<br>[a.] conditions requiring a user session to terminate are defined; and<br>[b.] a user session is automatically terminated after any of the defined conditions occur. | Enable Continuous Access Evaluation (CAE) for all supported applications. For applications that don't support CAE, or for conditions not applicable to CAE, implement policies in Microsoft Defender for Cloud Apps to automatically terminate sessions when conditions occur. Additionally, configure Azure Active Directory Identity Protection to evaluate user and sign-in risk. Use conditional access with Identity Protection to allow users to automatically remediate risk.<br>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<br>[Control cloud app usage by creating policies](/defender-cloud-apps/control-cloud-apps-with-policies)<br>[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md) |AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. To secure this pattern of access, it's critical to adopt Zero Trust principles. To meet these control requirements in a modern cloud world, we must verify each access request explicitly, implement least privilege and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps. Configure Defender for Cloud Apps to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts) | | AC.L2-3.1.13<br><br>**Practice statement:** Employ cryptographic mechanisms to protect the confidentiality of remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] cryptographic mechanisms to protect the confidentiality of remote access sessions are identified; and<br>[b.] cryptographic mechanisms to protect the confidentiality of remote access sessions are implemented. | All Azure AD customer-facing web services are secured with the Transport Layer Security (TLS) protocol and are implemented using FIPS-validated cryptography.<br>[Azure Active Directory Data Security Considerations (microsoft.com)](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) | | AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] 
managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps. Configure Defender for Cloud Apps to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](../conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview) | | AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md) |
-| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management) |
+| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management) |
| AC.L2-3.1.19<br><br>**Practice statement:** Encrypt CUI on mobile devices and mobile computing platforms.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices and mobile computing platforms that process, store, or transmit CUI are identified; and<br>[b.] encryption is employed to protect CUI on identified mobile devices and mobile computing platforms. | **Managed Device**<br>Configure conditional access policies to enforce compliant or HAADJ device and to ensure managed devices are configured appropriately via device management solution to encrypt CUI.<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) |
-| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad)
+| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices, where you may be unable to granularly control access to portable storage, block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad)
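+
+Several rows in this table recommend Conditional Access policies that require a compliant or hybrid Azure AD joined device. As a supplement to the portal guidance linked above, the following hedged sketch shows one way such a policy could be created programmatically through Microsoft Graph. It assumes an access token with `Policy.ReadWrite.ConditionalAccess` consent and starts the policy in report-only mode; validate the payload against the current Graph reference and exclude your emergency access accounts before enforcing it.
+
+```python
+# Hedged sketch, not an official sample: creates a report-only Conditional Access
+# policy that requires a compliant or hybrid Azure AD joined device via Microsoft
+# Graph. The access token is a hypothetical placeholder.
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+ACCESS_TOKEN = "<graph-access-token>"  # hypothetical placeholder
+
+policy = {
+    "displayName": "CMMC AC.L2 - require compliant or hybrid joined device",
+    "state": "enabledForReportingButNotEnforced",  # report-only while you validate
+    "conditions": {
+        "users": {"includeUsers": ["All"], "excludeUsers": []},  # exclude break-glass accounts here
+        "applications": {"includeApplications": ["All"]},
+    },
+    "grantControls": {
+        "operator": "OR",
+        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
+    },
+}
+
+resp = requests.post(
+    f"{GRAPH}/identity/conditionalAccess/policies",
+    json=policy,
+    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
+    timeout=30,
+)
+resp.raise_for_status()
+print("Created policy:", resp.json().get("id"))
+```
+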
### Next steps
active-directory Configure Cmmc Level 2 Additional Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-additional-controls.md
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - |
-| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager(MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access require hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](../conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview) |
+| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings with MDM solutions such as Microsoft Intune. Microsoft Configuration Manager or group policy objects can also be considered in hybrid deployments, combined with Conditional Access policies that require a hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](../conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<br>[Microsoft endpoint management solutions](/mem/endpoint-manager-overview) |
| CM.L2-3.4.5<br><br>**Practice statement:** Define, document, approve, and enforce physical and logical access restrictions associated with changes to organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] physical access restrictions associated with changes to the system are defined;<br>[b.] physical access restrictions associated with changes to the system are documented;<br>[c.] physical access restrictions associated with changes to the system are approved;<br>[d.] physical access restrictions associated with changes to the system are enforced;<br>[e.] logical access restrictions associated with changes to the system are defined;<br>[f.] logical access restrictions associated with changes to the system are documented;<br>[g.] logical access restrictions associated with changes to the system are approved; and<br>[h.] logical access restrictions associated with changes to the system are enforced. | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](../roles/custom-overview.md)<br>[What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<br>[Approve or deny requests for Azure AD roles in PIM](../privileged-identity-management/azure-ad-pim-approval-workflow.md) | | CM.L2-3.4.6<br><br>**Practice statement:** Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential system capabilities are defined based on the principle of least functionality; and<br>[b.] the system is configured to provide only the defined essential capabilities. | Configure device management solutions (Such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure conditional access to restrict access to compliant or hybrid Azure AD joined devices. <br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) | | CM.L2-3.4.7<br><br>**Practice statement:** Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.<br><br>**Objectives:**<br>Determine if:<br>[a.]essential programs are defined;<br>[b.] the use of nonessential programs is defined;<br>[c.] the use of nonessential programs is restricted, disabled, or prevented as defined;<br>[d.] essential functions are defined;<br>[e.] the use of nonessential functions is defined;<br>[f.] the use of nonessential functions is restricted, disabled, or prevented as defined;<br>[g.] essential ports are defined;<br>[h.] the use of nonessential ports is defined;<br>[i.] the use of nonessential ports is restricted, disabled, or prevented as defined;<br>[j.] 
essential protocols are defined;<br>[k.] the use of nonessential protocols is defined;<br>[l.] the use of nonessential protocols is restricted, disabled, or prevented as defined;<br>[m.] essential services are defined;<br>[n.] the use of nonessential services is defined; and<br>[o.] the use of nonessential services is restricted, disabled, or prevented as defined. | Use Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](../roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. Groups ](../develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](../manage-apps/configure-user-consent.md?tabs=azure-portal.md)<br>[Configure group owner consent to apps accessing group data](../manage-apps/configure-user-consent-groups.md?tabs=azure-portal.md)<br>[Configure the admin consent workflow](../manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it) |
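The Conditional Access guidance in the rows above can be expressed directly against the Microsoft Graph conditional access API. The following is a minimal sketch, not the article's own sample: it assumes the signed-in Azure CLI session has the Policy.ReadWrite.ConditionalAccess permission, and it creates the policy in report-only mode so impact can be reviewed before enforcement.

```azurecli
# Sketch: create a report-only Conditional Access policy that requires a
# compliant or hybrid Azure AD joined device for all users and all apps.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --headers "Content-Type=application/json" \
  --body '{
    "displayName": "CM.L2-3.4.2 - Require compliant or hybrid joined device",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": { "includeUsers": ["All"] },
      "applications": { "includeApplications": ["All"] }
    },
    "grantControls": {
      "operator": "OR",
      "builtInControls": ["compliantDevice", "domainJoinedDevice"]
    }
  }'
```

Report-only (`enabledForReportingButNotEnforced`) lets you evaluate the policy in sign-in logs before switching `state` to `enabled`.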
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - | | MA.L2-3.7.5<br><br>**Practice statement:** Require multifactor authentication to establish nonlocal maintenance sessions via external network connections and terminate such connections when nonlocal maintenance is complete.<br><br>**Objectives:**<br>Determine if:<br>[a.] multifactor authentication is used to establish nonlocal maintenance sessions via external network connections; and<br>[b.] nonlocal maintenance sessions established via external network connections are terminated when nonlocal maintenance is complete.| Accounts assigned administrative rights are targeted by attackers, including accounts used to establish non-local maintenance sessions. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.<br>[Conditional Access - Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md) |
-| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
+| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune, Configuration Manager, or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
## Personnel Security (PS)
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - | | SC.L2-3.13.3<br><br>**Practice statement:** Separate user functionality from system management functionality. <br><br>**Objectives:**<br>Determine if:<br>[a.] user functionality is identified;<br>[b.] system management functionality is identified; and<br>[c.] user functionality is separated from system management functionality. | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktops.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
-| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started) |
-| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required monitor the use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started) |
+| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required, monitor its use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
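The SC.L2-3.13.3 row earlier in this table points to Conditional Access device filters for restricting administrative applications to privileged access workstations (PAWs). A minimal sketch of that filter via the Microsoft Graph conditional access API follows; the `<admin-app-id>` placeholder and the use of `extensionAttribute1` to tag PAW devices are illustrative assumptions, and the policy is created in report-only mode.

```azurecli
# Sketch: block access to an administrative app unless the request comes from
# a device tagged as a PAW (device.extensionAttribute1 = "PAW").
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --headers "Content-Type=application/json" \
  --body '{
    "displayName": "SC.L2-3.13.3 - Admin app access from PAWs only",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": { "includeUsers": ["All"] },
      "applications": { "includeApplications": ["<admin-app-id>"] },
      "devices": {
        "deviceFilter": {
          "mode": "exclude",
          "rule": "device.extensionAttribute1 -eq \"PAW\""
        }
      }
    },
    "grantControls": {
      "operator": "OR",
      "builtInControls": ["block"]
    }
  }'
```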
## System and Information Integrity (SI)
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - |
-| SI.L2-3.14.7<br><br>**Practice statement:**<br><br>**Objectives:** Identify unauthorized use of organizational systems.<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: Azure AD logs to stream to SIEM, such as Azure Sentinel Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require Intrusion Detection/Protection (IDS/IPS) such as Microsoft Defender for Endpoint is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SI.L2-3.14.7<br><br>**Practice statement:** Identify unauthorized use of organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: stream Azure AD logs to a SIEM, such as Azure Sentinel. Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to require that Intrusion Detection/Prevention (IDS/IPS) software such as Microsoft Defender for Endpoint is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
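As a quick way to inspect the sign-in telemetry referenced above before (or alongside) streaming it to a SIEM, the Microsoft Graph sign-in log can be queried from the Azure CLI. This is a hedged sketch: it assumes the signed-in account has the AuditLog.Read.All permission and a license tier that exposes sign-in logs.

```azurecli
# Sketch: pull the 25 most recent Azure AD sign-in events for review.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$top=25" \
  --query "value[].{user:userPrincipalName, app:appDisplayName, time:createdDateTime, error:status.errorCode}" \
  --output table
```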
### Next steps
active-directory Fedramp Access Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-access-controls.md
Each row in the following table provides prescriptive guidance to help you devel
| AC-02(1)| **Employ automated mechanisms to support management of customer-controlled accounts.**<p>Configure automated provisioning of customer-controlled accounts from external HR systems or on-premises Active Directory. For applications that support application provisioning, configure Azure AD to automatically create user identities and roles in cloud software as a solution (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. To ease monitoring of account usage, you can stream Azure AD Identity Protection logs, which show risky users, risky sign-ins, and risk detections, and audit logs directly into Microsoft Sentinel or Event Hubs.<p>Provision<br><li>[Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md)<br><li>[Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)<br><li>[What is automated SaaS app user provisioning in Azure AD?](../app-provisioning/user-provisioning.md)<br><li>[SaaS app integration tutorials for use with Azure AD](../saas-apps/tutorial-list.md)<p>Monitor and audit<br><li>[Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)<br><li>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br><li>[What is Microsoft Sentinel?](../../sentinel/overview.md)<br><li>[Microsoft Sentinel: Connect data from Azure Active Directory](../../sentinel/connect-azure-active-directory.md)<br><li>[Tutorial: Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)ΓÇÄ| | AC-02(2)<br>AC-02(3)| **Employ automated mechanisms to support automatically removing or disabling temporary and emergency accounts after 24 hours from last use and all customer-controlled accounts after 35 days of inactivity.**<p>Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame. 
<p>Determine inactivity<br><li>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br><li>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<p>Remove or disable accounts<br><li>[Working with users in Microsoft Graph](/graph/api/resources/users)<br><li>[Get a user](/graph/api/user-get?tabs=http)<br><li>[Update user](/graph/api/user-update?tabs=http)<br><li>[Delete a user](/graph/api/user-delete?tabs=http)<p>Work with devices in Microsoft Graph<br><li>[Get device](/graph/api/device-get?tabs=http)<br><li>[Update device](/graph/api/device-update?tabs=http)<br><li>[Delete device](/graph/api/device-delete?tabs=http)<p>Use [Azure AD PowerShell](/powershell/module/azuread/)<br><li>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser)<br><li>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser)<br><li>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice)<br><li>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice) | | AC-02(4)| **Implement an automated audit and notification system for the lifecycle of managing customer-controlled accounts.**<p>All account lifecycle operations, such as account creation, modification, enabling, disabling, and removal actions, are audited within the Azure audit logs. You can stream the logs directly into Microsoft Sentinel or Event Hubs to help with notification.<p>Audit<br><li>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br><li>[Microsoft Sentinel: Connect data from Azure Active Directory](../../sentinel/connect-azure-active-directory.md)<P>Notification<br><li>[What is Microsoft Sentinel?](../../sentinel/overview.md)<br><li>[Tutorial: Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
-| AC-02(5)| **Implement device log-out after a 15-minute period of inactivity.**<p>Implement device lock by using a conditional access policy that restricts access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with mobile device management (MDM) solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<P>Conditional access<br><li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br><li>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<p>MDM policy<br><li>Configure devices for maximum minutes of inactivity until the screen locks and requires a password to unlock ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)). |
+| AC-02(5)| **Implement device log-out after a 15-minute period of inactivity.**<p>Implement device lock by using a conditional access policy that restricts access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with mobile device management (MDM) solutions such as Intune. Configuration Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<P>Conditional access<br><li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br><li>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<p>MDM policy<br><li>Configure devices for maximum minutes of inactivity until the screen locks and requires a password to unlock ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)). |
| AC-02(7)| **Administer and monitor privileged role assignments by following a role-based access scheme for customer-controlled accounts. Disable or revoke privilege access for accounts when no longer appropriate.**<p>Implement Azure AD Privileged Identity Management with access reviews for privileged roles in Azure AD to monitor role assignments and remove role assignments when no longer appropriate. You can stream audit logs directly into Microsoft Sentinel or Event Hubs to help with monitoring.<p>Administer<br><li>[What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<br><li>[Activation maximum duration](../privileged-identity-management/pim-how-to-change-default-settings.md?tabs=new)<p>Monitor<br><li>[Create an access review of Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)<br><li>[View audit history for Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-how-to-use-audit-log.md?tabs=new)<br><li>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br><li>[What is Microsoft Sentinel?](../../sentinel/overview.md)<br><li>[Connect data from Azure Active Directory](../../sentinel/connect-azure-active-directory.md)<br><li>[Tutorial: Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) | | AC-02(11)| **Enforce usage of customer-controlled accounts to meet customer-defined conditions or circumstances.**<p>Create conditional access policies to enforce access control decisions across users and devices.<p>Conditional access<br><li>[Create a conditional access policy](../authentication/tutorial-enable-azure-mfa.md?bc=%2fazure%2factive-directory%2fconditional-access%2fbreadcrumb%2ftoc.json&toc=%2fazure%2factive-directory%2fconditional-access%2ftoc.json)<br><li>[What is conditional access?](../conditional-access/overview.md) | | AC-02(12)| **Monitor and report customer-controlled accounts with privileged access for atypical usage.**<p>For help with monitoring of atypical usage, you can stream Identity Protection logs, which show risky users, risky sign-ins, and risk detections, and audit logs, which help with correlation with privilege assignment, directly into a SIEM solution such as Microsoft Sentinel. You can also use Event Hubs to integrate logs with third-party SIEM solutions.<p>Identity protection<br><li>[What is Azure AD Identity Protection?](../identity-protection/overview-identity-protection.md)<br><li>[Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)<br><li>[Azure Active Directory Identity Protection notifications](../identity-protection/howto-identity-protection-configure-notifications.md)<p>Monitor accounts<br><li>[What is Microsoft Sentinel?](../../sentinel/overview.md)<br><li>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br><li>[Connect Azure Active Directory data to Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md) <br><li>[Tutorial: Stream logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
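The AC-02(2)/AC-02(3) row above calls out Microsoft Graph and PowerShell for detecting and disabling inactive accounts. A minimal Azure CLI sketch of the same flow is shown below; it assumes the AuditLog.Read.All and User.ReadWrite.All permissions, and the user principal name is a hypothetical example.

```azurecli
# Sketch: list users with their last sign-in time to find accounts that have
# exceeded the 35-day inactivity threshold.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/users?\$select=displayName,userPrincipalName,accountEnabled,signInActivity" \
  --query "value[].{upn:userPrincipalName, lastSignIn:signInActivity.lastSignInDateTime, enabled:accountEnabled}" \
  --output table

# Sketch: disable an account identified as inactive.
az ad user update --id inactive.user@contoso.com --account-enabled false
```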
Each row in the following table provides prescriptive guidance to help you devel
| AC-06(7)| **Review and validate all users with privileged access every year. Ensure privileges are reassigned (or removed if necessary) to align with organizational mission and business requirements.**<p>Use Azure AD entitlement management with access reviews for privileged users to verify if privileged access is required. <p>Access reviews<br><li>[What is Azure AD entitlement management?](../governance/entitlement-management-overview.md)<br><li>[Create an access review of Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md)<br><li>[Review access of an access package in Azure AD entitlement management](../governance/entitlement-management-access-reviews-review-access.md) | | AC-07| **Enforce a limit of no more than three consecutive failed login attempts on customer-deployed resources within a 15-minute period. Lock the account for a minimum of three hours or until unlocked by an administrator.**<p>Enable custom smart lockout settings. Configure lockout threshold and lockout duration in seconds to implement these requirements. <p>Smart lockout<br><li>[Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md)<br><li>[Manage Azure AD smart lockout values](../authentication/howto-password-smart-lockout.md) | | AC-08| **Display and require user acknowledgment of privacy and security notices before granting access to information systems.**<p>With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<p>Terms of use<br><li>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br><li>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) |
-| AC-10|**Limit concurrent sessions to three sessions for privileged access and two for nonprivileged access.** <p>Nowadays, users connect from multiple devices, sometimes simultaneously. Limiting concurrent sessions leads to a degraded user experience and provides limited security value. A better approach to address the intent behind this control is to adopt a zero-trust security posture. Conditions are explicitly validated before a session is created and continually validated throughout the life of a session. <p>In addition, use the following compensating controls. <p>Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce user sign-in restrictions at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments.<p> Use Privileged Identity Management to further restrict and control privileged accounts. <p> Configure smart account lockout for invalid sign-in attempts.<p>**Implementation guidance** <p>Zero trust<br><li> [Securing identity with Zero Trust](/security/zero-trust/identity)<br><li>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<p>Conditional access<br><li>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br><li>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<p>Device policies<br><li>[Use PowerShell scripts on Windows 10 devices in Intune](/mem/intune/apps/intune-management-extension)<br><li>[Other smart card Group Policy settings and registry keys](/windows/security/identity-protection/smart-cards/smart-card-group-policy-and-registry-settings)<br><li>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview)<p>Resources<br><li>[What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<br><li>[Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md)<p>See AC-12 for more session reevaluation and risk mitigation guidance. |
-| AC-11<br>AC-11(1)| **Implement a session lock after a 15-minute period of inactivity or upon receiving a request from a user. Retain the session lock until the user reauthenticates. Conceal previously visible information when a session lock is initiated.**<p> Implement device lock by using a conditional access policy to restrict access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<p>Conditional access<br><li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br><li>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<p>MDM policy<br><li>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)). |
+| AC-10|**Limit concurrent sessions to three sessions for privileged access and two for nonprivileged access.** <p>Nowadays, users connect from multiple devices, sometimes simultaneously. Limiting concurrent sessions leads to a degraded user experience and provides limited security value. A better approach to address the intent behind this control is to adopt a zero-trust security posture. Conditions are explicitly validated before a session is created and continually validated throughout the life of a session. <p>In addition, use the following compensating controls. <p>Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce user sign-in restrictions at the OS level with MDM solutions such as Intune. Configuration Manager or group policy objects can also be considered in hybrid deployments.<p> Use Privileged Identity Management to further restrict and control privileged accounts. <p> Configure smart account lockout for invalid sign-in attempts.<p>**Implementation guidance** <p>Zero trust<br><li> [Securing identity with Zero Trust](/security/zero-trust/identity)<br><li>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<p>Conditional access<br><li>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br><li>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<p>Device policies<br><li>[Use PowerShell scripts on Windows 10 devices in Intune](/mem/intune/apps/intune-management-extension)<br><li>[Other smart card Group Policy settings and registry keys](/windows/security/identity-protection/smart-cards/smart-card-group-policy-and-registry-settings)<br><li>[Microsoft Intune overview](/mem/endpoint-manager-overview)<p>Resources<br><li>[What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<br><li>[Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md)<p>See AC-12 for more session reevaluation and risk mitigation guidance. |
+| AC-11<br>AC-11(1)| **Implement a session lock after a 15-minute period of inactivity or upon receiving a request from a user. Retain the session lock until the user reauthenticates. Conceal previously visible information when a session lock is initiated.**<p> Implement device lock by using a conditional access policy to restrict access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Configuration Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<p>Conditional access<br><li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br><li>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<p>MDM policy<br><li>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)). |
| AC-12| **Automatically terminate user sessions when organizational defined conditions or trigger events occur.**<p>Implement automatic user session reevaluation with Azure AD features such as risk-based conditional access and continuous access evaluation. You can implement inactivity conditions at a device level as described in AC-11.<p>Resources<br><li>[Sign-in risk-based conditional access](../conditional-access/howto-conditional-access-policy-risk.md)<br><li>[User risk-based conditional access](../conditional-access/howto-conditional-access-policy-risk-user.md)<br><li>[Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md) | AC-12(1)| **Provide a logout capability for all sessions and display an explicit logout message.** <p>All Azure AD surfaced web interfaces provide a logout capability for user-initiated communications sessions. When SAML applications are integrated with Azure AD, implement single sign-out. <p>Logout capability<br><li>When the user selects [Sign-out everywhere](https://aka.ms/mysignins), all current issued tokens are revoked. <p>Display message<br>Azure AD automatically displays a message after user-initiated logout.<br><p>![Screenshot that shows an access control message.](medi) | | AC-20<br>AC-20(1)| **Establish terms and conditions that allow authorized individuals to access the customer-deployed resources from external information systems such as unmanaged devices and external networks.**<p>Require terms of use acceptance for authorized users who access resources from external systems. Implement conditional access policies to restrict access from external systems. Conditional access policies might also be integrated with Defender for Cloud Apps to provide controls for cloud and on-premises applications from external systems. Mobile application management in Intune can protect organization data at the application level, including custom apps and store apps, from managed devices that interact with external systems. An example would be accessing cloud services. You can use app management on organization-owned devices and personal devices.<P>Terms and conditions<br><li>[Terms of use: Azure Active Directory](../conditional-access/terms-of-use.md)<p>Conditional access<br><li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br><li>[Conditions in conditional access policy: Device state (preview)](../conditional-access/concept-conditional-access-conditions.md)<br><li>[Protect with Microsoft Defender for Cloud Apps Conditional Access App Control](/cloud-app-security/proxy-intro-aad)<br><li>[Location condition in Azure Active Directory conditional access](../conditional-access/location-condition.md)<p>MDM<br><li>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br><li>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br><li>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<p>Resource<br><li>[Integrate on-premises apps with Defender for Cloud Apps](../app-proxy/application-proxy-integrate-with-microsoft-cloud-application-security.md) |
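For the reauthentication behavior described in AC-02(5), AC-10, and AC-11 for unmanaged devices, the sign-in frequency session control can also be set through the Microsoft Graph conditional access API. The following is a sketch only, created in report-only mode; the 12-hour value is an illustrative assumption, not a FedRAMP-mandated number, and the signed-in session is assumed to have the Policy.ReadWrite.ConditionalAccess permission.

```azurecli
# Sketch: require browser sessions to reauthenticate every 12 hours.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --headers "Content-Type=application/json" \
  --body '{
    "displayName": "AC-11 - Periodic reauthentication for browser sessions",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": { "includeUsers": ["All"] },
      "applications": { "includeApplications": ["All"] },
      "clientAppTypes": ["browser"]
    },
    "sessionControls": {
      "signInFrequency": {
        "isEnabled": true,
        "type": "hours",
        "value": 12
      }
    }
  }'
```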
active-directory Fedramp Identification And Authentication Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-identification-and-authentication-controls.md
Each row in the following table provides prescriptive guidance to help you devel
| - | - | | IA-02| **Uniquely identify and authenticate users or processes acting for users.**<p> Azure AD uniquely identifies user and service principal objects directly. Azure AD provides multiple authentication methods, and you can configure methods that adhere to National Institute of Standards and Technology (NIST) authentication assurance level (AAL) 3.<p>Identifiers <br> <li>Users: [Working with users in Microsoft Graph: ID property](/graph/api/resources/users)<br><li>Service principals: [ServicePrincipal resource type : ID property](/graph/api/resources/serviceprincipal)<p>Authentication and multifactor authentication<br> <li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md) | | IA-02(1)<br>IA-02(3)| **Multifactor authentication for all access to privileged accounts.** <p>Configure the following elements for a complete solution to ensure all access to privileged accounts requires multifactor authentication.<p>Configure conditional access policies to require multifactor authentication for all users.<br> Implement Azure AD Privileged Identity Management to require multifactor authentication for activation of privileged role assignment prior to use.<p>With Privileged Identity Management activation requirement in place, privilege account activation isn't possible without network access, so local access is never privileged.<p>Multifactor authentication and Privileged Identity Management<br> <li>[Conditional access: Require multifactor authentication for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<br> <li>[Configure Azure AD role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md?tabs=new) |
-| IA-02(2)<br>IA-02(4)| **Implement multi-factor authentication for all access to non-privileged accounts**<p>Configure the following elements as an overall solution to ensure all access to non-privileged accounts requires MFA.<p> Configure Conditional Access policies to require MFA for all users.<br> Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce use of specific authentication methods.<br> Configure Conditional Access policies to enforce device compliance.<p>Microsoft recommends using a multi-factor cryptographic hardware authenticator (e.g., FIDO2 security keys, Windows Hello for Business (with hardware TPM), or smart card) to achieve AAL3. If your organization is completely cloud-based, we recommend using FIDO2 security keys or Windows Hello for Business.<p>Windows Hello for Business has not been validated at the required FIPS 140 Security Level and as such federal customers would need to conduct risk assessment and evaluation before accepting it as AAL3. For additional details regarding Windows Hello for Business FIPS 140 validation please refer to [Microsoft NIST AALs](nist-overview.md).<p>Guidance regarding MDM polices differ slightly based on authentication methods, they are broken out below. <p>Smart Card / Windows Hello for Business<br> [Passwordless Strategy - Require Windows Hello for Business or smart card](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<p> Hybrid Only<br> [Passwordless Strategy - Configure user accounts to disallow password authentication](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<p> Smart Card Only<br>[Create a Rule to Send an Authentication Method Claim](/windows-server/identity/ad-fs/operations/create-a-rule-to-send-an-authentication-method-claim)<br>[Configure Authentication Policies](/windows-server/identity/ad-fs/operations/configure-authentication-policies)<p>FIDO2 Security Key<br> [Passwordless Strategy - Excluding the password credential provider](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<p>Authentication Methods<br> [Azure Active Directory passwordless sign-in (preview) | FIDO2 security keys](../authentication/concept-authentication-passwordless.md)<br> [Passwordless security key sign-in Windows - Azure Active Directory](../authentication/howto-authentication-passwordless-security-key-windows.md)<br> [ADFS: Certificate Authentication with Azure AD & Office 365](/archive/blogs/samueld/adfs-certauth-aad-o365)<br> [How Smart Card Sign-in Works in Windows (Windows 10)](/windows/security/identity-protection/smart-cards/smart-card-how-smart-card-sign-in-works-in-windows)<br> [Windows Hello for Business Overview (Windows 10)](/windows/security/identity-protection/hello-for-business/hello-overview)<p>Additional Resources:<br> [Policy CSP - Windows Client Management](/windows/client-management/mdm/policy-configuration-service-provider)<br> [Use PowerShell scripts on Windows 10 devices in 
Intune](/mem/intune/apps/intune-management-extension)<br> [Plan a passwordless authentication deployment with Azure AD](../authentication/howto-authentication-passwordless-deployment.md)<br> |
+| IA-02(2)<br>IA-02(4)| **Implement multi-factor authentication for all access to non-privileged accounts**<p>Configure the following elements as an overall solution to ensure all access to non-privileged accounts requires MFA.<p> Configure Conditional Access policies to require MFA for all users.<br> Configure device management policies via MDM (such as Microsoft Intune), Configuration Manager, or group policy objects (GPO) to enforce use of specific authentication methods.<br> Configure Conditional Access policies to enforce device compliance.<p>Microsoft recommends using a multi-factor cryptographic hardware authenticator (e.g., FIDO2 security keys, Windows Hello for Business (with hardware TPM), or smart card) to achieve AAL3. If your organization is completely cloud-based, we recommend using FIDO2 security keys or Windows Hello for Business.<p>Windows Hello for Business has not been validated at the required FIPS 140 Security Level, and as such federal customers would need to conduct risk assessment and evaluation before accepting it as AAL3. For additional details regarding Windows Hello for Business FIPS 140 validation, please refer to [Microsoft NIST AALs](nist-overview.md).<p>Guidance regarding MDM policies differs slightly based on authentication method; they are broken out below. <p>Smart Card / Windows Hello for Business<br> [Passwordless Strategy - Require Windows Hello for Business or smart card](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<p> Hybrid Only<br> [Passwordless Strategy - Configure user accounts to disallow password authentication](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<p> Smart Card Only<br>[Create a Rule to Send an Authentication Method Claim](/windows-server/identity/ad-fs/operations/create-a-rule-to-send-an-authentication-method-claim)<br>[Configure Authentication Policies](/windows-server/identity/ad-fs/operations/configure-authentication-policies)<p>FIDO2 Security Key<br> [Passwordless Strategy - Excluding the password credential provider](/windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br> [Conditional Access - Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<p>Authentication Methods<br> [Azure Active Directory passwordless sign-in (preview) | FIDO2 security keys](../authentication/concept-authentication-passwordless.md)<br> [Passwordless security key sign-in Windows - Azure Active Directory](../authentication/howto-authentication-passwordless-security-key-windows.md)<br> [ADFS: Certificate Authentication with Azure AD & Office 365](/archive/blogs/samueld/adfs-certauth-aad-o365)<br> [How Smart Card Sign-in Works in Windows (Windows 10)](/windows/security/identity-protection/smart-cards/smart-card-how-smart-card-sign-in-works-in-windows)<br> [Windows Hello for Business Overview (Windows 10)](/windows/security/identity-protection/hello-for-business/hello-overview)<p>Additional Resources:<br> [Policy CSP - Windows Client Management](/windows/client-management/mdm/policy-configuration-service-provider)<br> [Use PowerShell scripts on Windows 10 devices in
Intune](/mem/intune/apps/intune-management-extension)<br> [Plan a passwordless authentication deployment with Azure AD](../authentication/howto-authentication-passwordless-deployment.md)<br> |
| IA-02(5)| **When multiple users have access to a shared or group account password, require each user to first authenticate by using an individual authenticator.**<p>Use an individual account per user. If a shared account is required, Azure AD permits binding of multiple authenticators to an account so that each user has an individual authenticator. <p>Resources<br><li>[How it works: Azure AD multifactor authentication](../authentication/concept-mfa-howitworks.md)<br> <li>[Manage authentication methods for Azure AD multifactor authentication](../authentication/howto-mfa-userdevicesettings.md) | | IA-02(8)| **Implement replay-resistant authentication mechanisms for network access to privileged accounts.**<p>Configure conditional access policies to require multifactor authentication for all users. All Azure AD authentication methods at authentication assurance level 2 and 3 use either nonce or challenges and are resistant to replay attacks.<p>References<br> <li>[Conditional access: Require multifactor authentication for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)<br> <li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md) | | IA-02(11)| **Implement Azure AD multifactor authentication to access customer-deployed resources remotely so that one of the factors is provided by a device separate from the system gaining access where the device meets FIPS-140-2, NIAP certification, or NSA approval.**<p>See guidance for IA-02(1-4). Azure AD authentication methods to consider at AAL3 meeting the separate device requirements are:<p> FIDO2 security keys<br> <li>Windows Hello for Business with hardware TPM (TPM is recognized as a valid "something you have" factor by NIST 800-63B Section 5.1.7.1.)<br> <li>Smart card<p>References<br><li>[Achieving NIST authenticator assurance levels with the Microsoft identity platform](nist-overview.md)<br> <li>[NIST 800-63B Section 5.1.7.1](https://pages.nist.gov/800-63-3/sp800-63b.html) |
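The IA-02(2)/(4) row above recommends FIDO2 security keys as an AAL3-capable method. As a hedged sketch of how the tenant-wide authentication method policy could be inspected and enabled through Microsoft Graph (assuming the Policy.ReadWrite.AuthenticationMethod permission; verify the endpoint and payload against current Graph documentation before use):

```azurecli
# Sketch: review the current FIDO2 security key configuration.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/fido2"

# Sketch: enable FIDO2 security keys as an allowed authentication method.
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/fido2" \
  --headers "Content-Type=application/json" \
  --body '{
    "@odata.type": "#microsoft.graph.fido2AuthenticationMethodConfiguration",
    "state": "enabled"
  }'
```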
aks Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-vulnerability-management.md
Title: Vulnerability management for Azure Kubernetes Service
description: Learn how Microsoft manages security vulnerabilities for Azure Kubernetes Service (AKS) clusters. Previously updated : 02/24/2023 Last updated : 03/02/2023
Each evening, Linux nodes in AKS receive security patches through their distribu
Nightly, we apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node receives all the security and kernel updates available during the automatic assessment performed every night, but remains unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more information on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][aks-node-image-upgrade].
-For AKS clusters on auto upgrade channel, a *node-image* doesn't pull security updates through the unattended upgrade process. They receive security updates through the weekly node image upgrade.
+For AKS clusters on the [OS auto upgrade][aks-node-image-upgrade] channel, the unattended upgrade process is disabled, and nodes instead receive security updates through the weekly node image upgrade.
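A hedged Azure CLI sketch of both paths mentioned above, using illustrative resource names (`myResourceGroup`, `myAKSCluster`, `nodepool1`):

```azurecli
# Check whether a newer node image is available for a node pool.
az aks nodepool get-upgrades \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --nodepool-name nodepool1

# Upgrade only the node image, without changing the Kubernetes version.
az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --node-image-only

# Or let the cluster pick up new node images automatically.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --auto-upgrade-channel node-image
```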
### Windows Server nodes
Microsoft's goal is to mitigate detected vulnerabilities within a time period ap
## How vulnerabilities and updates are communicated
-In general, Microsoft doesn't broadly communicate the release of new patch versions for AKS. However, Microsoft constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, Microsoft [notifies you to upgrade to the newly available patch][aks-cve-feed].
+In general, Microsoft doesn't broadly communicate the release of new patch versions for AKS. However, Microsoft constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, Microsoft [posts and updates CVE issue details on GitHub][aks-cve-feed].
## Security Reporting
See the overview about [Upgrading Azure Kubernetes Service clusters and node poo
[apply-security-kernel-updates-to-aks-nodes]: node-updates-kured.md [aks-node-image-upgrade]: node-image-upgrade.md [upgrade-node-pool-in-aks]: use-multiple-node-pools.md#upgrade-a-node-pool
+[aks-node-image-upgrade]: auto-upgrade-node-image.md
<!-- LINKS - external --> [microsoft-bug-bounty-program-overview]: https://aka.ms/opensource/security/bounty
See the overview about [Upgrading Azure Kubernetes Service clusters and node poo
[mrc-create-report]: https://aka.ms/opensource/security/create-report [msrc-pgp-key-page]: https://aka.ms/opensource/security/pgpkey [microsoft-security-response-center]: https://aka.ms/opensource/security/msrc
-[azure-bounty-program-overview]: https://www.microsoft.com/msrc/bounty-microsoft-azure
+[azure-bounty-program-overview]: https://www.microsoft.com/msrc/bounty-microsoft-azure
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
You can delete a purchased plan for an Azure container offer by deleting the ext
az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ```
+## Troubleshooting
+
+If you experience issues, see the [troubleshooting checklist for failed deployments of a Kubernetes offer][marketplace-troubleshoot].
+ ## Next steps - Learn more about [exploring and analyzing costs][billing].
az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --r
[azure-marketplace]: /marketplace/azure-marketplace-overview [cluster-extensions]: ./cluster-extensions.md [billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md
+[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
description: Learn how to use Image Cleaner to clean up stale images on Azure Ku
Previously updated : 02/07/2023 Last updated : 03/02/2023 # Use Image Cleaner to clean up stale images on your Azure Kubernetes Service cluster (preview)
Image Cleaner does not support the following:
## How Image Cleaner works
-When enabled, an `eraser-controller-manager` pod is deployed on each agent node, which will use an `ImageList` CRD to determine unreferenced and vulnerable images. Vulnerability is determined based on a [trivy][trivy] scan, after which images with a `LOW`, `MEDIUM`, `HIGH`, or `CRITICAL` classification are flagged. An updated `ImageList` will be automatically generated by Image Cleaner based on a set time interval, and can also be supplied manually.
+When enabled, an `eraser-controller-manager` pod is deployed, which generates an `ImageList` CRD. The eraser pods running on each node then clean up unreferenced and vulnerable images according to the `ImageList`. Vulnerability is determined based on a [trivy][trivy] scan, after which images with a `LOW`, `MEDIUM`, `HIGH`, or `CRITICAL` classification are flagged. An updated `ImageList` is automatically generated by Image Cleaner based on a set time interval, and can also be supplied manually.
++ Once an `ImageList` is generated, Image Cleaner will remove all the images in the list from node VMs.
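As a rough sketch of enabling Image Cleaner on an existing cluster with the Azure CLI (the feature is in preview and typically requires the `aks-preview` extension; the names and interval below are placeholders):

```azurecli
# Enable Image Cleaner and set the scan interval (preview feature; placeholder names)
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-image-cleaner \
  --image-cleaner-interval-hours 24
```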
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
This article assumes that you have an existing AKS cluster. If you need an AKS c
### Limitations - Any node pool or cluster created from a snapshot must use a VM from the same virtual machine family as the snapshot, for example, you can't create a new N-Series node pool based on a snapshot captured from a D-Series node pool because the node images in those cases are structurally different.-- Snapshots must be created and used in the same region as the source node pool.
+- Snapshots must be created in the same region as the source node pool.
## Take a node pool snapshot
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
Title: Best practices for cluster security
description: Learn the cluster operator best practices for how to manage cluster security and upgrades in Azure Kubernetes Service (AKS) Previously updated : 04/07/2021 Last updated : 03/02/2023
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
Title: Pod Sandboxing (preview) with Azure Kubernetes Service (AKS) description: Learn about and deploy Pod Sandboxing (preview), also referred to as Kernel Isolation, on an Azure Kubernetes Service (AKS) cluster. Previously updated : 02/23/2023 Last updated : 03/01/2023
To demonstrate the deployed application on the AKS cluster isn't isolated and is
```output root@untrusted:/# uname -r
- 5.15.48.1-8.cm2
+ 5.15.80.mshv2-hvl1.m2
``` 3. Start a shell session to the container of the *trusted* pod to verify the kernel output:
To demonstrate the deployed application on the AKS cluster isn't isolated and is
The following example resembles output from the VM that is running the *trusted* pod, which is a different kernel than the *untrusted* pod running within the pod sandbox: ```output
- 5.15.80.mshv2-hvl1.m2
+ 5.15.48.1-8.cm2
## Cleanup
kubectl delete pod pod-name
[csi-secret-store driver]: csi-secrets-store-driver.md [az-aks-update]: /cli/azure/aks#az-aks-update [mariner-cluster-config]: cluster-configuration.md#mariner-os
-[register-the-katavmisolationpreview-feature-flag]: #register-the-katavmisolationpreview-feature-flag
+[register-the-katavmisolationpreview-feature-flag]: #register-the-katavmisolationpreview-feature-flag
app-service Configure Ssl Certificate In Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate-in-code.md
Title: Use a TLS/SSL certificate in code description: Learn how to use client certificates in your code. Authenticate with remote resources with a client certificate, or run cryptographic tasks with them. Previously updated : 09/22/2020 Last updated : 02/15/2023
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
To make all your certificates accessible, set the value to `*`. > [!NOTE]
-> If your are using `*` for the App Setting, you will need to restart your web app after adding a new certificate to your web app to ensure that new certificate becomes accessible to your app.
+> When `WEBSITE_LOAD_CERTIFICATES` is set to `*`, all previously added certificates are accessible to application code. If you add a certificate to your app later, restart the app to make the new certificate accessible to your app. For more information, see [When updating (renewing) a certificate](#when-updating-renewing-a-certificate).
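For example, a minimal Azure CLI sketch (app and resource group names are placeholders) that sets the app setting to `*` and then restarts the app so a newly added certificate becomes accessible:

```azurecli
# Make all certificates added to the app accessible to application code
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group-name> \
  --settings WEBSITE_LOAD_CERTIFICATES='*'

# Restart the app after adding a new certificate so it becomes accessible
az webapp restart --name <app-name> --resource-group <resource-group-name>
```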
## Load certificate in Windows apps
var cert = new X509Certificate2(bytes);
To see how to load a TLS/SSL certificate from a file in Node.js, PHP, Python, Java, or Ruby, see the documentation for the respective language or web platform.
+## When updating (renewing) a certificate
+
+When you renew a certificate and add it to your app, it gets a new thumbprint, which also needs to be [made accessible](#make-the-certificate-accessible). How it works depends on your certificate type.
+
+If you manually upload the [public](configure-ssl-certificate.md#upload-a-public-certificate) or [private](configure-ssl-certificate.md#upload-a-private-certificate) certificate:
+
+- If you list thumbprints explicitly in `WEBSITE_LOAD_CERTIFICATES`, add the new thumbprint to the app setting.
+- If `WEBSITE_LOAD_CERTIFICATES` is set to `*`, restart the app to make the new certificate accessible.
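For the case where thumbprints are listed explicitly, a hedged sketch of adding the renewed certificate's thumbprint to the app setting (all values are placeholders):

```azurecli
# Keep the old thumbprint listed until the renewed certificate is fully rolled out
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group-name> \
  --settings WEBSITE_LOAD_CERTIFICATES='<old-thumbprint>,<new-thumbprint>'
```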
+
+If you renew a certificate [in Key Vault](configure-ssl-certificate.md#renew-a-certificate-imported-from-key-vault), such as with an [App Service certificate](configure-ssl-certificate.md#renew-app-service-certificate), the daily sync from Key Vault makes the necessary update automatically when synchronizing your app with the renewed certificate.
+
+- If `WEBSITE_LOAD_CERTIFICATES` contains the old thumbprint of your renewed certificate, the daily sync updates the old thumbprint to the new thumbprint automatically.
+- If `WEBSITE_LOAD_CERTIFICATES` is set to `*`, the daily sync makes the new certificate accessible automatically.
+ ## More resources * [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
Subnet Size /24 = 256 IP addresses - 5 reserved from the platform = 251 availabl
> It is possible to change the subnet of an existing Application Gateway within the same virtual network. You can do this using Azure PowerShell or Azure CLI. For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway) ### Virtual network permission
-Since application gateway resources are deployed within a virtual network, Application Gateway performs a check to verify the permission on the provided virtual network resource. This validation is performed during both creation and management operations.
+Since the application gateway resource is deployed inside a virtual network, Application Gateway also performs a check to verify the permissions on the provided virtual network resource. This validation is performed during both creation and management operations. Check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify that the users and service principals that operate application gateways also have at least **Microsoft.Network/virtualNetworks/subnets/join/action** permission on the virtual network or subnet.
-You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify the users or service principals that operate application gateways have at least **Microsoft.Network/virtualNetworks/subnets/join/action** permission. Use built-in roles, such as [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor), which already support this permission. If a built-in role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md). Learn more about [managing subnet permissions](../virtual-network/virtual-network-manage-subnet.md#permissions). You may have to allow sufficient time for [Azure Resource Manager cache refresh](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after role assignment changes.
+You may use the built-in roles, such as [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor), which already support this permission. If a built-in role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md). Learn more about [managing subnet permissions](../virtual-network/virtual-network-manage-subnet.md#permissions). You may have to allow sufficient time for [Azure Resource Manager cache refresh](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after role assignment changes.
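As a sketch of verifying and granting the required permission with the Azure CLI (the subscription, resource, and principal values are placeholders):

```azurecli
# List existing role assignments scoped to the Application Gateway subnet
az role assignment list \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>" \
  --output table

# Assign the built-in Network Contributor role, which includes subnets/join/action
az role assignment create \
  --assignee <user-or-service-principal-object-id> \
  --role "Network Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"
```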
#### Identifying affected users or service principals for your subscription By visiting Azure Advisor for your account, you can verify if your subscription has any users or service principals with insufficient permission. The details of that recommendation are as follows:
By visiting Azure Advisor for your account, you can verify if your subscription
**Category**: Reliability </br> **Impact**: High </br>
-#### Using temporary Azure Feature Exposure Control (AFEC) flag
-
-As a temporary extension, we have introduced a subscription-level [Azure Feature Exposure Control (AFEC)](../azure-resource-manager/management/preview-features.md?tabs=azure-portal) that you can register for, until you fix the permissions for all your users and/or service principals. [Set up this flag](../azure-resource-manager/management/preview-features.md?#required-access) for your Azure subscription.
-
-**Name**: Microsoft.Network/DisableApplicationGatewaySubnetPermissionCheck </br>
-**Description**: Disable Application Gateway Subnet Permission Check </br>
-**ProviderNamespace**: Microsoft.Network </br>
-**EnrollmentType**: AutoApprove </br>
-
-> [!NOTE]
-> The provision to circumvent the virtual network permission check by using this feature control (AFEC) is available only for a limited period, **until 6th April 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions. Set up this flag in your Azure subscription.
- ## Network security groups Network security groups (NSGs) are supported on Application Gateway. But there are some restrictions:
azure-app-configuration Quickstart Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-container-apps.md
+
+ Title: "Quickstart: Use Azure App Configuration in Azure Container Apps"
+description: Learn how to connect a containerized application to Azure App Configuration, using Service Connector.
+++++ Last updated : 03/02/2023++++
+# Quickstart: Use Azure App Configuration in Azure Container Apps
+
+In this quickstart, you use Azure App Configuration in an app running in Azure Container Apps, so that you can centralize the storage and management of your app configuration. This quickstart builds on the ASP.NET Core app created in [Quickstart: Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md): you containerize that app and deploy it to Azure Container Apps. Complete that quickstart before you continue.
+
+> [!TIP]
+> While following this quickstart, register all new resources in a single resource group so that you can find them in one place and delete them easily later if you no longer need them.
+
+## Prerequisites
+
+- An application using an App Configuration store. If you don't have one, create an instance using the [Quickstart: Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md).
+- An Azure Container Apps instance. If you don't have one, create an instance using the [Azure portal](/azure/container-apps/quickstart-portal) or [the CLI](/azure/container-apps/get-started).
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- The [Azure CLI](/cli/azure/install-azure-cli)
++
+## Connect Azure App Configuration to the container app
+
+In the Azure portal, navigate to your Container App instance. Follow the [Service Connector quickstart for Azure Container Apps](../service-connector/quickstart-portal-container-apps.md) to create a service connection with your App Configuration store using the settings below.
+
+- In the **Basics** tab:
+ - select **App Configuration** for **Service type**
+  - pick your App Configuration store for **App Configuration**
+
+ :::image type="content" border="true" source="media\connect-container-app\use-service-connector.png" alt-text="Screenshot the Azure platform showing a form in the Service Connector menu in a Container App." lightbox="media\connect-container-app\use-service-connector.png":::
+
+- In the **Authentication** tab:
+  - pick the **Connection string** authentication type and **Read-Only** for **Permissions for the connection string**
+  - expand the **Advanced** menu. In the Configuration information, there should already be an environment variable called "AZURE_APPCONFIGURATION_CONNECTIONSTRING". Select the edit icon on the right and change the name to *ConnectionStrings__AppConfig*. This change is needed because *ConnectionStrings__AppConfig* is the name of the environment variable that the application built in the [ASP.NET Core quickstart](./quickstart-aspnet-core-app.md) looks for, and it holds the connection string for App Configuration. If you used another application to follow this quickstart, use the corresponding environment variable name. Then select **Done**.
+- Use default values for everything else.
+
+Once done, an environment variable named **ConnectionStrings__AppConfig** is added to the container of your Container App. Its value is a reference to a Container Apps secret that holds the connection string of your App Configuration store.
+
+## Build a container
+
+1. Run the [dotnet publish](/dotnet/core/tools/dotnet-publish) command to build the app in release mode and create the assets in the *published* folder.
+
+ ```dotnet
+ dotnet publish -c Release -o published
+ ```
+
+1. Create a file named *Dockerfile* in the directory containing your .csproj file, open it in a text editor, and enter the following content. A Dockerfile is a text file that doesn't have an extension and that is used to create a container image.
+
+ ```docker
+ FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
+ WORKDIR /app
+ COPY published/ ./
+ ENTRYPOINT ["dotnet", "TestAppConfig.dll"]
+ ```
+
+1. Build the container by running the following command.
+
+ ```docker
+ docker build --tag aspnetapp .
+ ```
+
+## Create an Azure Container Registry instance
+
+Create an Azure Container Registry (ACR). ACR enables you to build, store, and manage container images.
+
+#### [Portal](#tab/azure-portal)
+
+1. To create the container registry, follow the [Azure Container Registry quickstart](/azure/container-registry/container-registry-get-started-portal).
+1. Once the deployment is complete, open your ACR instance and from the left menu, select **Settings > Access keys**.
+1. Take note of the **Login server** value listed on this page. You'll use this information in a later step.
+1. Switch **Admin user** to *Enabled*. This option lets you connect the ACR to Azure Container Apps using admin user credentials. Alternatively, you can leave it disabled and configure the container app to [pull images from the registry with a managed identity](../container-apps/managed-identity-image-pull.md).
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Create an ACR instance using the following command. It creates a basic tier registry named *myregistry* with admin user enabled that allows the container app to connect to the registry using admin user credentials. For more information, see [Azure Container Registry quickstart](/azure/container-registry/container-registry-get-started-azure-cli).
+
+ ```azurecli
+    az acr create \
+ --resource-group AppConfigTestResources \
+ --name myregistry \
+ --admin-enabled true \
+ --sku Basic
+ ```
+1. In the command output, take note of the ACR login server value listed after `loginServer`.
+1. Retrieve the ACR username and password by running `az acr credential show --name myregistry`. You'll need these values later.
+++
+## Push the image to Azure Container Registry
+
+Push the Docker image to the ACR created earlier.
+
+1. Run the [az acr login](/cli/azure/acr#az-acr-login) command to log in to the registry.
+
+ ```azurecli
+ az acr login --name myregistry
+ ```
+
+ The command returns `Login Succeeded` once login is successful.
+
+1. Use [docker tag](https://docs.docker.com/engine/reference/commandline/tag/) to tag the image with the appropriate registry details.
+
+ ```docker
+ docker tag aspnetapp myregistry.azurecr.io/aspnetapp:v1
+ ```
+
+ > [!TIP]
+ > To review the list of your existing docker images and tags, run `docker image ls`. In this scenario, you should see at least two images: `aspnetapp` and `myregistry.azurecr.io/aspnetapp`.
+
+1. Use [docker push](https://docs.docker.com/engine/reference/commandline/push/) to push the image to the container registry. This example creates the *aspnetapp* repository in ACR containing the `aspnetapp` image. In the example below, replace the placeholders `<login-server>`, `<image-name>`, and `<tag>` with the ACR login server value, the image name, and the image tag.
+
+ Method:
+
+ ```docker
+ docker push <login-server>/<image-name>:<tag>
+ ```
+
+ Example:
+
+ ```docker
+ docker push myregistry.azurecr.io/aspnetapp:v1
+ ```
+
+1. Open your Azure Container Registry in the Azure portal and confirm that under **Repositories**, you can see your new repository.
+
+ :::image type="content" border="true" source="media\connect-container-app\container-registry-repository.png" alt-text="Screenshot of the Azure platform showing a repository in Azure Container Registries.":::
+
+## Add your container image to Azure Container Apps
+
+Update your Container App to load the container image from your ACR.
+
+1. In the Azure portal, open your Azure Container Apps instance.
+1. In the left menu, under **Application**, select **Containers**.
+1. Select **Edit and deploy**.
+1. Under **Container image**, click on the name of the existing container image.
+1. Update the following settings:
+
+ | Setting | Suggested value | Description |
+ |-|-|-|
+ | Image source | *Azure Container Registry* | Select Azure Container Registry as your image source. |
+ | Authentication | *Admin Credentials* | Use the admin user credential option that was enabled earlier in the container registry. If you didn't enable the admin user but configured to [use a managed identity](../container-apps/managed-identity-image-pull.md?tabs=azure-cli&pivots=azure-portal), you would need to manually enter the image and tag in the form. |
+ | Registry | *myregistry.azurecr.io* | Select the Azure Container Registry you created earlier. |
+ | Image | *aspnetapp* | Select the docker image you created and pushed to ACR earlier. |
+ | Image tag | *v1* | Select your image tag from the list. |
+
+1. Select **Save** and then **Create** to deploy the update to your Azure Container App.
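If you prefer the Azure CLI over the portal, a hedged sketch of the same update (it assumes the `containerapp` CLI commands are available and uses placeholder names plus the admin credentials noted earlier):

```azurecli
# Attach the registry to the container app using the admin credentials (placeholders)
az containerapp registry set \
  --name <container-app-name> \
  --resource-group <resource-group-name> \
  --server myregistry.azurecr.io \
  --username <acr-username> \
  --password <acr-password>

# Point the container app at the image pushed earlier
az containerapp update \
  --name <container-app-name> \
  --resource-group <resource-group-name> \
  --image myregistry.azurecr.io/aspnetapp:v1
```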
+
+## Browse to the URL of the Azure Container App
+
+In the Azure portal, in the Azure Container Apps instance, go to the **Overview** tab and open the **Application Url**.
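Alternatively, a minimal Azure CLI sketch (placeholder names) to retrieve the application URL:

```azurecli
# Print the fully qualified domain name of the container app's ingress
az containerapp show \
  --name <container-app-name> \
  --resource-group <resource-group-name> \
  --query properties.configuration.ingress.fqdn \
  --output tsv
```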
+
+The web page looks like this:
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you:
+
+- Connected Azure App Configuration to Azure Container Apps
+- Used Docker to build a container image from an ASP.NET Core app with App Configuration settings
+- Created an Azure Container Registry instance
+- Pushed the image to the Azure Container Registry instance
+- Added the container image to Azure Container Apps
+- Browsed to the URL of the Azure Container Apps instance updated with the settings you configured in your App Configuration store.
+
+The managed identity enables one Azure resource to access another without you maintaining secrets. You can streamline access from Container Apps to other Azure resources. For more information, see how to [access App Configuration using the managed identity](howto-integrate-azure-managed-service-identity.md) and how to [access Container Registry using the managed identity](../container-registry/container-registry-authentication-managed-identity.md).
+
+To learn how to configure your ASP.NET Core web app to dynamically refresh configuration settings, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Enable dynamic configuration](./enable-dynamic-configuration-aspnet-core.md)
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|
-|HPE Superdome Flex 280 | 1.26.0 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04)|
+|HPE Superdome Flex 280 | 1.23.5 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04)|
|HPE Apollo 4200 Gen10 Plus | 1.22.6 | 1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)| ### Kublr
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
In order to use Arc resource bridge in a region, Arc resource bridge and the pri
Arc resource bridge supports the following Azure regions: * East US
+* East US 2
+* West US 2
+* West US 3
+* South Central US
* West Europe
+* North Europe
* UK South * Canada Central * Australia East * Southeast Asia + ### Regional resiliency While Azure has a number of redundancy features at every level of failure, if a service impacting event occurs, this preview release of Azure Arc resource bridge does not support cross-region failover or other resiliency capabilities. In the event of the service becoming unavailable, the on-premises VMs continue to operate unaffected. Management from Azure is unavailable during that service outage.
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
For Management and governance services availability in Azure Government, see [Pr
### [Azure Managed Applications](../azure-resource-manager/managed-applications/index.yml) -- You can store your managed application definition in a storage account that you provide when you create the application. Doing so allows you to manage its location and access for your regulatory needs, including [storage encryption with customer-managed keys](#storage-encryption-with-key-vault-managed-keys). For more information, see [Bring your own storage](../azure-resource-manager/managed-applications/publish-service-catalog-app.md#bring-your-own-storage-for-the-managed-application-definition).
+- You can store your managed application definition in a storage account that you provide when you create the application. Doing so allows you to manage its location and access for your regulatory needs, including [storage encryption with customer-managed keys](#storage-encryption-with-key-vault-managed-keys). For more information, see [Bring your own storage](../azure-resource-manager/managed-applications/publish-service-catalog-bring-your-own-storage.md).
### [Azure Monitor](../azure-monitor/index.yml)
azure-monitor Alerts Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-logic-apps.md
This example creates a logic app that uses the [common alerts schema](./alerts-c
} ```
- :::image type="content" source="./media/alerts-logic-apps/configure-http-request-received.png" alt-text="Screenshot that shows the Parameters tab for the When a HTTP request is received pane.":::
+ :::image type="content" source="./media/alerts-logic-apps/configure-http-request-received.png" alt-text="Screenshot that shows the Parameters tab for the When an HTTP request is received pane.":::
1. (Optional). You can customize the alert notification by extracting information about the affected resource on which the alert fired, for example, the resource's tags. You can then include those resource tags in the alert payload and use the information in your logical expressions for sending the notifications. To do this step, we will: - Create a variable for the affected resource IDs.
This example creates a logic app that uses the [common alerts schema](./alerts-c
|Subscription|`variables('AffectedResource')[2]`| |Resource Group|`variables('AffectedResource')[4]`| |Resource Provider|`variables('AffectedResource')[6]`|
- |Short Resource Id|`concat(variables('AffectedResource')[7], '/', variables('AffectedResource')[8]`)|
- |Client Api Version|2021-06-01|
+    |Short Resource ID|`concat(variables('AffectedResource')[7], '/', variables('AffectedResource')[8])`|
+    |Client Api Version|The affected resource type's API version|
+
+    To find your resource type's API version, select the **JSON view** link in the upper-right corner of the resource overview page.
+    The **Resource JSON** page is displayed, with the **Resource ID** and **API version** at the top of the page.
The dynamic content now includes tags from the affected resource. You can use those tags when you configure your notifications as described in the following steps. 1. Send an email or post a Teams message. 1. Select **+** > **Add an action** to insert a new step.
- :::image type="content" source="./media/alerts-logic-apps/configure-http-request-received.png" alt-text="Screenshot that shows the parameters for When a HTTP request is received.":::
+ :::image type="content" source="./media/alerts-logic-apps/configure-http-request-received.png" alt-text="Screenshot that shows the parameters for When an HTTP request is received.":::
## [Send an email](#tab/send-email)
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Several other community-supported Application Insights SDKs exist. However, Azur
Review [frequently asked questions](../faq.yml). ### Microsoft Q&A questions forum
-Post questions to the Microsoft Q&A [answers forum](/answers/topics/24223/azure-monitor.html).
+Post general questions to the Microsoft Q&A [answers forum](/answers/topics/24223/azure-monitor.html).
### Stack Overflow
-Post coding questions to [Stack Overflow]() using an Application Insights tag.
+Post coding questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-application-insights) using an Application Insights tag.
### User Voice
azure-monitor Resource Manager Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-function-app.md
- Title: Resource Manager template samples for Azure Function App + Application Insights Resources
-description: Sample Azure Resource Manager templates to deploy an Azure Function App with an Application Insights resource.
- Previously updated : 04/27/2022--
-# Resource Manager template sample for creating Azure Function apps with Application Insights monitoring
-
-This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to deploy and configure [classic Application Insights resources](../app/create-new-resource.md) in conjunction with an Azure Function app. The sample includes a template file and a parameters file with sample values to provide to the template.
--
-## Azure Function App
-
-The following sample creates a .NET Core 3.1 Azure Function app running on a Windows App Service plan and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
-
-### Template file
-
-# [Bicep](#tab/bicep)
-
-```bicep
-param subscriptionId string
-param name string
-param location string
-param hostingPlanName string
-param serverFarmResourceGroup string
-param alwaysOn bool
-param storageAccountName string
-
-resource site 'Microsoft.Web/sites@2021-03-01' = {
- name: name
- kind: 'functionapp'
- location: location
- properties: {
- siteConfig: {
- appSettings: [
- {
- name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~3'
- }
- {
- name: 'FUNCTIONS_WORKER_RUNTIME'
- value: 'dotnet'
- }
- {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: reference('microsoft.insights/components/function-app-01', '2015-05-01').InstrumentationKey
- }
- {
- name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
- value: reference('microsoft.insights/components/function-app-01', '2015-05-01').ConnectionString
- }
- {
- name: 'AzureWebJobsStorage'
- value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value};EndpointSuffix=core.windows.net'
- }
- ]
- alwaysOn: alwaysOn
- }
- serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${hostingPlanName}'
- clientAffinityEnabled: true
- }
- dependsOn: [
- functionApp
- ]
-}
-
-resource functionApp 'microsoft.insights/components@2015-05-01' = {
- name: 'function-app-01'
- location: location
- kind: 'web'
- properties: {
- Application_Type: 'web'
- Request_Source: 'rest'
- }
-}
-
-resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = {
- name: storageAccountName
- location: location
- sku: {
- name: 'Standard_LRS'
- }
- kind: 'Storage'
- properties: {
- supportsHttpsTrafficOnly: true
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "type": "string"
- },
- "name": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "hostingPlanName": {
- "type": "string"
- },
- "serverFarmResourceGroup": {
- "type": "string"
- },
- "alwaysOn": {
- "type": "bool"
- },
- "storageAccountName": {
- "type": "string"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-03-01",
- "name": "[parameters('name')]",
- "kind": "functionapp",
- "location": "[parameters('location')]",
- "properties": {
- "siteConfig": {
- "appSettings": [
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "dotnet"
- },
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference('microsoft.insights/components/function-app-01', '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
- "value": "[reference('microsoft.insights/components/function-app-01', '2015-05-01').ConnectionString]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1};EndpointSuffix=core.windows.net', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-08-01').keys[0])]"
- }
- ],
- "alwaysOn": "[parameters('alwaysOn')]"
- },
- "serverFarmId": "[format('/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.Web/serverfarms/{2}', parameters('subscriptionId'), parameters('serverFarmResourceGroup'), parameters('hostingPlanName'))]",
- "clientAffinityEnabled": true
- },
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', 'function-app-01')]",
- "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
- ]
- },
- {
- "type": "Microsoft.Insights/components",
- "apiVersion": "2015-05-01",
- "name": "function-app-01",
- "location": "[parameters('location')]",
- "kind": "web",
- "properties": {
- "Application_Type": "web",
- "Request_Source": "rest"
- }
- },
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-08-01",
- "name": "[parameters('storageAccountName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {
- "supportsHttpsTrafficOnly": true
- }
- }
- ]
-}
-```
---
-### Parameters file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "value": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- },
- "name": {
- "value": "function-app-01"
- },
- "location": {
- "value": "Central US"
- },
- "hostingPlanName": {
- "value": "WebApplication2720191003010920Plan"
- },
- "serverFarmResourceGroup": {
- "value": "Testwebapp01"
- },
- "alwaysOn": {
- "value": true
- },
- "storageAccountName": {
- "value": "storageaccountest93"
- }
- }
-}
-```
-
-## Next steps
-
-* [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about classic Application Insights resources](../app/create-new-resource.md).
azure-monitor Resource Manager Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-web-app.md
- Title: Resource Manager template samples for Azure App Service + Application Insights Resources
-description: Sample Azure Resource Manager templates to deploy an Azure App Service with an Application Insights resource.
-- Previously updated : 11/15/2022---
-# Resource Manager template samples for creating Azure App Services web apps with Application Insights monitoring
-
-This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to deploy and configure [classic Application Insights resources](../app/create-new-resource.md) in conjunction with an Azure App Services web app. Each sample includes a template file and a parameters file with sample values to provide to the template.
--
-## .NET Core runtime
-
-The following sample creates a basic Azure App Service web app with the .NET Core runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
-
-### Template file
-
-# [Bicep](#tab/bicep)
-
-```bicep
-param subscriptionId string
-param name string
-param location string
-param hostingPlanName string
-param serverFarmResourceGroup string
-param alwaysOn bool
-param phpVersion string
-param errorLink string
-
-resource site 'Microsoft.Web/sites@2021-03-01' = {
- name: name
- location: location
- properties: {
- siteConfig: {
- appSettings: [
- {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey
- }
- {
- name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
- value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString
- }
- {
- name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
- value: '~2'
- }
- {
- name: 'XDT_MicrosoftApplicationInsights_Mode'
- value: 'default'
- }
- {
- name: 'ANCM_ADDITIONAL_ERROR_PAGE_LINK'
- value: errorLink
- }
- ]
- phpVersion: phpVersion
- alwaysOn: alwaysOn
- }
- serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${hostingPlanName}'
- clientAffinityEnabled: true
- }
- dependsOn: [
- webApp
- ]
-}
-
-resource webApp 'Microsoft.Insights/components@2020-02-02' = {
- name: 'web-app-name-01'
- location: location
- kind: 'web'
- properties: {
- Application_Type: 'web'
- Request_Source: 'rest'
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "type": "string"
- },
- "name": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "hostingPlanName": {
- "type": "string"
- },
- "serverFarmResourceGroup": {
- "type": "string"
- },
- "alwaysOn": {
- "type": "bool"
- },
- "phpVersion": {
- "type": "string"
- },
- "errorLink": {
- "type": "string"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-03-01",
- "name": "[parameters('name')]",
- "location": "[parameters('location')]",
- "properties": {
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
- },
- {
- "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
- "value": "~2"
- },
- {
- "name": "XDT_MicrosoftApplicationInsights_Mode",
- "value": "default"
- },
- {
- "name": "ANCM_ADDITIONAL_ERROR_PAGE_LINK",
- "value": "[parameters('errorLink')]"
- }
- ],
- "phpVersion": "[parameters('phpVersion')]",
- "alwaysOn": "[parameters('alwaysOn')]"
- },
- "serverFarmId": "[format('/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.Web/serverfarms/{2}', parameters('subscriptionId'), parameters('serverFarmResourceGroup'), parameters('hostingPlanName'))]",
- "clientAffinityEnabled": true
- },
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', 'web-app-name-01')]"
- ]
- },
- {
- "type": "Microsoft.Insights/components",
- "apiVersion": "2020-02-02",
- "name": "web-app-name-01",
- "location": "[parameters('location')]",
- "kind": "web",
- "properties": {
- "Application_Type": "web",
- "Request_Source": "rest"
- }
- }
- ]
-}
-```
---
-### Parameter file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "value": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- },
- "name": {
- "value": "web-app-name-01"
- },
- "location": {
- "value": "Central US"
- },
- "hostingPlanName": {
- "value": "WebApplication2720191003010920Plan"
- },
- "serverFarmResourceGroup": {
- "value": "Testwebapp01"
- },
- "alwaysOn": {
- "value": true
- },
- "phpVersion": {
- "value": "OFF"
- },
- "errorLink": {
- "value": "https://web-app-name-01.scm.azurewebsites.net/detectors?type=tools&name=eventviewer"
- }
- }
-}
-
-```
-
-## ASP.NET runtime
-
-The following sample creates a basic Azure App Service web app with the ASP.NET runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
-
-### Template file
-
-# [Bicep](#tab/bicep)
-
-```bicep
-param subscriptionId string
-param name string
-param location string
-param hostingPlanName string
-param serverFarmResourceGroup string
-param alwaysOn bool
-param phpVersion string
-param netFrameworkVersion string
-
-resource sites 'Microsoft.Web/sites@2021-03-01' = {
- name: name
- location: location
- properties: {
- siteConfig: {
- appSettings: [
- {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey
- }
- {
- name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
- value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString
- }
- {
- name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
- value: '~2'
- }
- {
- name: 'XDT_MicrosoftApplicationInsights_Mode'
- value: 'default'
- }
- ]
- phpVersion: phpVersion
- netFrameworkVersion: netFrameworkVersion
- alwaysOn: alwaysOn
- }
- serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${hostingPlanName}'
- clientAffinityEnabled: true
- }
- dependsOn: [
- webApp
- ]
-}
-
-resource webApp 'Microsoft.Insights/components@2020-02-02' = {
- name: 'web-app-name-01'
- location: location
- kind: 'web'
- properties: {
- Application_Type: 'web'
- Request_Source: 'rest'
- }
-}
-
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "type": "string"
- },
- "name": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "hostingPlanName": {
- "type": "string"
- },
- "serverFarmResourceGroup": {
- "type": "string"
- },
- "alwaysOn": {
- "type": "bool"
- },
- "phpVersion": {
- "type": "string"
- },
- "netFrameworkVersion": {
- "type": "string"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-03-01",
- "name": "[parameters('name')]",
- "location": "[parameters('location')]",
- "properties": {
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
- },
- {
- "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
- "value": "~2"
- },
- {
- "name": "XDT_MicrosoftApplicationInsights_Mode",
- "value": "default"
- }
- ],
- "phpVersion": "[parameters('phpVersion')]",
- "netFrameworkVersion": "[parameters('netFrameworkVersion')]",
- "alwaysOn": "[parameters('alwaysOn')]"
- },
- "serverFarmId": "[format('/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.Web/serverfarms/{2}', parameters('subscriptionId'), parameters('serverFarmResourceGroup'), parameters('hostingPlanName'))]",
- "clientAffinityEnabled": true
- },
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', 'web-app-name-01')]"
- ]
- },
- {
- "type": "Microsoft.Insights/components",
- "apiVersion": "2020-02-02",
- "name": "web-app-name-01",
- "location": "[parameters('location')]",
- "kind": "web",
- "properties": {
- "Application_Type": "web",
- "Request_Source": "rest"
- }
- }
- ]
-}
-```
---
-### Parameters file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "value": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- },
- "name": {
- "value": "web-app-name-01"
- },
- "location": {
- "value": "Central US"
- },
- "hostingPlanName": {
- "value": "WebApplication2720191003010920Plan"
- },
- "serverFarmResourceGroup": {
- "value": "Testwebapp01"
- },
- "alwaysOn": {
- "value": true
- },
- "phpVersion": {
- "value": "OFF"
- },
- "netFrameworkVersion": {
- "value": "v4.0"
- }
- }
-}
-```
-
-## Node.js runtime (Linux)
-
-The following sample creates a basic Azure App Service Linux web app with the Node.js runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
-
-### Template file
-
-# [Bicep](#tab/bicep)
-
-```bicep
-param subscriptionId string
-param name string
-param location string
-param hostingPlanName string
-param serverFarmResourceGroup string
-param alwaysOn bool
-param linuxFxVersion string
-
-resource site 'Microsoft.Web/sites@2021-03-01' = {
- name: name
- location: location
- properties: {
- siteConfig: {
- appSettings: [
- {
- name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
- value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey
- }
- {
- name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
- value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString
- }
- {
- name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
- value: '~2'
- }
- {
- name: 'XDT_MicrosoftApplicationInsights_Mode'
- value: 'default'
- }
- ]
- linuxFxVersion: linuxFxVersion
- alwaysOn: alwaysOn
- }
- serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${hostingPlanName}'
- clientAffinityEnabled: false
- }
- dependsOn: [
- webApp
- ]
-}
-
-resource webApp 'Microsoft.Insights/components@2020-02-02' = {
- name: 'web-app-name-01'
- location: location
- kind: 'web'
- properties: {
- Application_Type: 'web'
- Request_Source: 'rest'
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "type": "string"
- },
- "name": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "hostingPlanName": {
- "type": "string"
- },
- "serverFarmResourceGroup": {
- "type": "string"
- },
- "alwaysOn": {
- "type": "bool"
- },
- "linuxFxVersion": {
- "type": "string"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Web/sites",
- "apiVersion": "2021-03-01",
- "name": "[parameters('name')]",
- "location": "[parameters('location')]",
- "properties": {
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
- },
- {
- "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
- "value": "~2"
- },
- {
- "name": "XDT_MicrosoftApplicationInsights_Mode",
- "value": "default"
- }
- ],
- "linuxFxVersion": "[parameters('linuxFxVersion')]",
- "alwaysOn": "[parameters('alwaysOn')]"
- },
- "serverFarmId": "[format('/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.Web/serverfarms/{2}', parameters('subscriptionId'), parameters('serverFarmResourceGroup'), parameters('hostingPlanName'))]",
- "clientAffinityEnabled": false
- },
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', 'web-app-name-01')]"
- ]
- },
- {
- "type": "Microsoft.Insights/components",
- "apiVersion": "2020-02-02",
- "name": "web-app-name-01",
- "location": "[parameters('location')]",
- "kind": "web",
- "properties": {
- "Application_Type": "web",
- "Request_Source": "rest"
- }
- }
- ]
-}
-```
---
-### Parameter file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "value": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- },
- "name": {
- "value": "web-app-name-01"
- },
- "location": {
- "value": "Central US"
- },
- "hostingPlanName": {
- "value": "App-planTest01-916e"
- },
- "serverFarmResourceGroup": {
- "value": "app_resource_group_custom"
- },
- "alwaysOn": {
- "value": true
- },
- "linuxFxVersion": {
- "value": "NODE|12-lts"
- }
- }
-}
-```
-
-## Next steps
-
-* [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about classic Application Insights resources](../app/create-new-resource.md).
azure-monitor Best Practices Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md
Typically, you'll want to alert on issues for all your critical Azure applicatio
## Next steps
-[Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md)
+[Optimize cost in Azure Monitor](best-practices-cost.md)
azure-monitor Best Practices Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md
While the operational data stored in Azure Monitor might be useful for investiga
- [Microsoft Sentinel](../sentinel/overview.md) is a security information event management (SIEM) and security orchestration automated response (SOAR) solution. Sentinel collects security data from a wide range of Microsoft and third-party sources to provide alerting, visualization, and automation. This solution focuses on consolidating as many security logs as possible, including Windows Security Events. Microsoft Sentinel can also collect Windows Security Event Logs and commonly shares a Log Analytics workspace with Defender for Cloud. Security events can only be collected from Microsoft Sentinel or Defender for Cloud when they share the same workspace. Unlike Defender for Cloud, security events are a key component of alerting and analysis in Microsoft Sentinel. -- [Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. It was designed with a primary focus on protecting Windows user devices. Defender for Endpoint monitors workstations, servers, tablets, and cellphones with various operating systems for security issues and vulnerabilities. Defender for Endpoint is closely aligned with Microsoft Endpoint Manager to collect data and provide security assessments. Data collection is primarily based on ETW trace logs and is stored in an isolated workspace.
+- [Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. It was designed with a primary focus on protecting Windows user devices. Defender for Endpoint monitors workstations, servers, tablets, and cellphones with various operating systems for security issues and vulnerabilities. Defender for Endpoint is closely aligned with Microsoft Intune to collect data and provide security assessments. Data collection is primarily based on ETW trace logs and is stored in an isolated workspace.
### System Center Operations Manager
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, then a custo
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.logsettings.custommountpath=/home/data/docker ```
+### Option 5 - With Azure Monitor Private Link Scope (AMPLS) + Proxy
+
+If the cluster is configured with a forward proxy, the proxy settings are automatically applied to the extension. For a cluster that uses Azure Monitor Private Link Scope (AMPLS) together with a proxy, the proxy configuration should be ignored. In that case, onboard the extension with the configuration setting `amalogs.ignoreExtensionProxySettings=true`.
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.ignoreExtensionProxySettings=true
+```
>[!NOTE] > If you are explicitly specifying the version of the extension to be installed in the create command, then ensure that the version specified is >= 2.8.2.
azure-monitor Container Insights Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus.md
Title: Collect Prometheus metrics with Container insights
description: Describes different methods for configuring the Container insights agent to scrape Prometheus metrics from your Kubernetes cluster. Previously updated : 09/28/2022 Last updated : 03/01/2023
Active scraping of metrics from Prometheus is performed from one of two perspect
| Endpoint | Scope | Example | |-|-|| | Pod annotation | Cluster-wide | `prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
-| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics`ΓÇï |
+| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`http://metrics-server.kube-system.svc.cluster.local/metrics`ΓÇï |
| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` | When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
When a URL is specified, Container insights only scrapes the endpoint. When Kube
||--|--|-|-| | Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. | | | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
-| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`|
+| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["http://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`|
| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` | | | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
-| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
+| | `prometheus.io/scheme` | String | http | Defaults to scraping over HTTP. |
| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Title: Overview of Azure Monitor Managed Service for Prometheus (preview) description: Overview of Azure Monitor managed service for Prometheus, which provides a Prometheus-compatible interface for storing and retrieving metric data.-+++ Last updated 09/28/2022 # Azure Monitor managed service for Prometheus (preview)
-Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Compute Foundation. This fully managed service allows you to use the [Prometheus query language (PromQL)](https://aka.ms/azureprometheus-promio-promql) to analyze and alert on the performance of monitored infrastructure and workloads without having to operate the underlying infrastructure.
Azure Monitor managed service for Prometheus is a component of [Azure Monitor Metrics](data-platform-metrics.md), providing additional flexibility in the types of metric data that you can collect and analyze with Azure Monitor. Prometheus metrics share some features with platform and custom metrics, but use some different features to better support open source tools such as [PromQL](https://aka.ms/azureprometheus-promio-promql) and [Grafana](../../managed-grafan).
+Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Compute Foundation. This fully managed service allows you to use the [Prometheus query language (PromQL)](https://aka.ms/azureprometheus-promio-promql) to analyze and alert on the performance of monitored infrastructure and workloads without having to operate the underlying infrastructure.
+ > [!IMPORTANT]
> Azure Monitor managed service for Prometheus is intended for storing information about service health of customer machines and applications. It is not intended for storing any data classified as Personal Identifiable Information (PII) or End User Identifiable Information (EUII). We strongly recommend that you do not send any sensitive information (usernames, credit card numbers etc.) into Azure Monitor managed service for Prometheus fields like metric names, label names, or label values.
azure-monitor Prometheus Remote Write https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write.md
Last updated 11/01/2022
# Azure Monitor managed service for Prometheus remote write (preview)
-Azure Monitor managed service for Prometheus is intended to be a replacement for self managed Prometheus so you don't need to manage a Prometheus server in your Kubernetes clusters. You may also choose to use the managed service to centralize data from self-managed Prometheus clusters for long term data retention and to create a centralized view across your clusters. In this case, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus into our managed service.
+Azure Monitor managed service for Prometheus is intended to be a replacement for self-managed Prometheus so you don't need to manage a Prometheus server in your Kubernetes clusters. You may also choose to use the managed service to centralize data from self-managed Prometheus clusters for long-term data retention and to create a centralized view across your clusters. In this case, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus into the Azure managed service.
## Architecture
-Azure Monitor provides a reverse proxy container (Azure Monitor side car container) that provides an abstraction for ingesting Prometheus remote write metrics and helps in authenticating packets. The Azure Monitor side car container currently supports User Assigned Identity and Azure Active Directory (Azure AD) based authentication to ingest Prometheus remote write metrics to Azure Monitor workspace.
+Azure Monitor provides a reverse proxy container (Azure Monitor [side car container](https://learn.microsoft.com/azure/architecture/patterns/sidecar)) that provides an abstraction for ingesting Prometheus remote write metrics and helps in authenticating packets. The Azure Monitor side car container currently supports User Assigned Identity and Azure Active Directory (Azure AD) based authentication to ingest Prometheus remote write metrics to Azure Monitor workspace.
## Prerequisites
azure-monitor Surface Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/surface-hubs.md
You'll need the workspace ID and workspace key for the Log Analytics workspace t
Intune is a Microsoft product that allows you to centrally manage the Log Analytics workspace configuration settings that are applied to one or more of your devices. Follow these steps to configure your devices through Intune:
-1. Sign in to [Microsoft Endpoint Manager Admin Center](https://endpoint.microsoft.com/).
+1. Sign in to [Microsoft Intune admin center](https://endpoint.microsoft.com/).
2. Go to **Devices** > **Configuration profiles**.
3. Create a new Windows 10 profile, and then select **templates**.
4. In the list of templates, select **Device restrictions (Windows 10 Team)**.
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
na Previously updated : 02/24/2023 Last updated : 03/02/2023
AzAcSnap 7 is being released with the following fixes and improvements:
- Fixes and Improvements:
  - Backup (`-c backup`) changes:
- - Shorten suffix added to the snapshot name. The previous 26 character suffix of "YYYY-MM-DDThhhhss-nnnnnnnZ" (for example, 2022-11-17T030002-7299835Z) was too long, this is replaced with an 11 character hex-decimal equivalent based on the ten-thousandths of a second since the Unix epoch to avoid naming collisions (for example, F2D212540D5).
+ - Shorten suffix added to the snapshot name. The previous 26 character suffix of "YYYY-MM-DDThhhhss-nnnnnnnZ" was too long. The suffix is now an 11 character hexadecimal value based on the ten-thousandths of a second since the Unix epoch to avoid naming collisions (for example, F2D212540D5).
- Increased validation when creating snapshots to avoid failures on snapshot creation retry.
- Time out when executing AzAcSnap mechanism to disable/enable backint (`autoDisableEnableBackint=true`) now aligns with other SAP HANA related operation timeout values.
- - Azure Backup now allows third party snapshot-based backups without impact to streaming backups (also known as 'backint'). Therefore, AzAcSnap 'backint' detection logic has been reordered to allow for future deprecation of this feature. By default this setting is disabled (`autoDisableEnableBackint=false`). For customers who have relied on this feature to take snapshots with AzAcSnap and use Azure Backup, keeping this value as true means AzAcSnap 7 will continue to disable/enable backint. As this setting is no longer necessary for Azure Backup, we recommend testing AzAcSnap backups with the value of `autoDisableEnableBackint=false`, and then if successful make the same change in your production deployment.
+ - Azure Backup now allows third party snapshot-based backups without impact to streaming backups (also known as 'backint'). Therefore, AzAcSnap 'backint' detection logic has been reordered to allow for future deprecation of this feature. By default this setting is disabled (`autoDisableEnableBackint=false`). For customers who have relied on this feature to take snapshots with AzAcSnap and use Azure Backup, keeping this value as true means AzAcSnap 7 continues to disable/enable backint. As this setting is no longer necessary for Azure Backup, we recommend testing AzAcSnap backups with the value of `autoDisableEnableBackint=false`, and then if successful make the same change in your production deployment.
- Restore (`-c restore`) changes:
  - Ability to create a custom suffix for Volume clones created when using `-c restore --restore snaptovol` either:
    - via the command-line with `--clonesuffix <custom suffix>`.
    - interactively when running the command without the `--force` option.
- - When doing a `--restore snaptovol` on ANF, then Volume Clone will also inherit the new 'NetworkFeatures' setting from the Source Volume.
- - Can now do a restore if there are no Data Volumes configured. It will only do a restore of the Other Volumes using the Other Volumes latest snapshot (the `--snapshotfilter` option only applies to Data Volumes).
+ - When doing a `--restore snaptovol` on ANF, then Volume Clone inherits the new 'NetworkFeatures' setting from the Source Volume.
+ - Can now do a restore if there are no Data Volumes configured. It will only restore the Other Volumes using the Other Volumes latest snapshot (the `--snapshotfilter` option only applies to Data Volumes).
- Extra logging for `-c restore` command to help with user debugging.
- Test (`-c test`) changes:
  - Now tests managing snapshots for all otherVolume(s) and all dataVolume(s).
AzAcSnap 7 is being released with the following fixes and improvements:
- Features added to [Preview](azacsnap-preview.md):
  - Preliminary support for Azure NetApp Files Backup.
  - Db2 database support adding options to configure, test, and snapshot backup IBM Db2 in an application consistent manner.
-
-Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md). For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page.
## Jul-2022
Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer
> [!IMPORTANT] > AzAcSnap 6 brings a new release model for AzAcSnap and includes fully supported GA features and Preview features in a single release.
-Since AzAcSnap v5.0 was released as GA in April 2021, there have been eight releases of AzAcSnap across two branches. Our goal with the new release model is to align with how Azure components are released. This change allows moving features from Preview to GA (without having to move an entire branch), and introduce new Preview features (without having to create a new branch). From AzAcSnap 6, we'll have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). ItΓÇÖs important to note customers can't accidentally use Preview features, and must enable them with the `--preview` command line option. This means the next release will be AzAcSnap 7, which could include; patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features.
+Since AzAcSnap v5.0 was released as GA in April 2021, there have been eight releases of AzAcSnap across two branches. Our goal with the new release model is to align with how Azure components are released. This change allows moving features from Preview to GA (without having to move an entire branch), and introduce new Preview features (without having to create a new branch). From AzAcSnap 6, we have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). It's important to note customers can't accidentally use Preview features, and must enable them with the `--preview` command line option. Therefore the next release will be AzAcSnap 7, which could include: patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features.
AzAcSnap 6 is being released with the following fixes and improvements:
AzAcSnap v5.0.3 (Build: 20220524.14204) is provided as a patch update to the v5.
- Fix for handling delimited identifiers when querying SAP HANA. This issue only impacted SAP HANA in an HSR-HA setup when there's a Secondary node configured with 'logreplay_readaccess', and has been resolved.
-Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md).
- ### AzAcSnap v5.1 Preview (Build: 20220524.15550)

AzAcSnap v5.1 Preview (Build: 20220524.15550) is an updated build to extend the preview expiry date for 90 days. This update contains the fix for handling delimited identifiers when querying SAP HANA as provided in v5.0.3.
-Read about the [AzAcSnap Preview](azacsnap-preview.md).
-Download the [latest release of the Preview installer](https://aka.ms/azacsnap-preview-installer).
- ## Mar-2022

### AzAcSnap v5.1 Preview (Build: 20220302.81795)
AzAcSnap v5.1 Preview (Build: 20220125.85030) has been released with the followi
AzAcSnap v5.0.2 (Build: 20210827.19086) is provided as a patch update to the v5.0 branch with the following fixes and improvements:

- Ignore `ssh` 255 exit codes. In some cases the `ssh` command, which is used to communicate with storage on Azure Large Instance, would emit an exit code of 255 when there were no errors or execution failures (refer to `man ssh` "EXIT STATUS"), and AzAcSnap would trap this exit code as a failure and abort. With this update, extra verification is done to validate correct execution, which includes parsing `ssh` STDOUT and STDERR for errors in addition to traditional exit code checks.
-- Fix the installer's check for the location of the hdbuserstore. The installer would check for the existence of an incorrect source directory for the hdbuserstore for the user running the install - this is fixed to check for `~/.hdb`. This fix is applicable to systems (for example, Azure Large Instance) where the hdbuserstore was pre-configured for the `root` user before installing `azacsnap`.
+- Fix the installer's check for the location of the hdbuserstore. The installer would search the filesystem for an incorrect source directory for the hdbuserstore location for the user running the install - the installer now searches for `~/.hdb`. This fix is applicable to systems (for example, Azure Large Instance) where the hdbuserstore was pre-configured for the `root` user before installing `azacsnap`.
- Installer now shows the version it will install/extract (if the installer is run without any arguments).

## May-2021
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Customer-managed keys in Azure NetApp Files volume encryption enable you to use your own keys rather than a Microsoft-managed key when creating a new volume. With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys.
+The following diagram demonstrates how customer-managed keys work with Azure NetApp Files:
++
+1. Azure NetApp Files grants permissions to encryption keys to a managed identity. The managed identity is either a user-assigned managed identity that you create and manage or a system-assigned managed identity associated with the NetApp account.
+2. You configure encryption with a customer-managed key for the NetApp account.
+3. You use the managed identity to which the Azure Key Vault admin granted permissions in step one to authenticate access to Azure Key Vault via Azure Active Directory.
+4. Azure NetApp Files wraps the account encryption key with the customer-managed key in Azure Key Vault.
+5. For read/write operations, Azure NetApp Files sends requests to Azure Key Vault to unwrap the account encryption key to perform encryption and decryption operations.
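As an illustration of steps 1 through 3 above, the following Azure CLI sketch creates a user-assigned managed identity, a key vault, and a key, and then grants the identity wrap/unwrap permissions. All names are placeholders, the key vault here uses the access policy permission model, and the NetApp account configuration itself (step 2) isn't shown:

```azurecli
# Placeholder names; adjust for your environment.
az identity create --name anf-cmk-identity --resource-group anfResourceGroup

# Purge protection is typically required for customer-managed key scenarios.
az keyvault create --name anfcmkvault --resource-group anfResourceGroup --enable-purge-protection true

# Create the key that will wrap the account encryption key.
az keyvault key create --vault-name anfcmkvault --name anf-volume-key

# Allow the managed identity to get, wrap, and unwrap the key.
principalId=$(az identity show --name anf-cmk-identity --resource-group anfResourceGroup --query principalId --output tsv)
az keyvault set-policy --name anfcmkvault --object-id $principalId --key-permissions get wrapKey unwrapKey
```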
+ ## Considerations

> [!IMPORTANT]
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
na Previously updated : 02/23/2023 Last updated : 03/02/2023 # Requirements and considerations for large volumes (preview)
To enroll in the preview for large volumes, use the [large volumes preview sign-
## Requirements and considerations
-* Existing regular volumes can't be resized over 100 TiB. You can't convert regular Azure NetApp Files volumes to large volumes.
+* Existing regular volumes can't be resized over 100 TiB.
+ * You cannot convert regular Azure NetApp Files volumes to large volumes.
* You must create a large volume at a size greater than 100 TiB. A single volume can't exceed 500 TiB.
-* You can't resize a large volume to less than 100 TiB. You can only resize a large volume can up to 30% of lowest provisioned size.
-* Large volumes are currently not supported with Azure NetApp Files backup.
-* Large volumes are not currently supported with cross-region replication.
+* You can't resize a large volume to less than 100 TiB.
+ * You can only resize a large volume up to 30% of lowest provisioned size.
+* Large volumes aren't currently supported with Azure NetApp Files backup.
+* Large volumes aren't currently supported with cross-region replication.
* You can't create a large volume with application volume groups.
* Large volumes aren't currently supported with cross-zone replication.
* The SDK for large volumes isn't currently available.
-* Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You'll be able to grow to 500 TiB with the throughput ceiling as per the table below.
+* Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You're able to grow to 500 TiB with the throughput ceiling per the following table:
| Capacity tier | Volume size (TiB) | Throughput (MiB/s) |
| --- | --- | --- |
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
Title: Use Azure portal to deploy service catalog managed application
-description: Shows consumers of Azure Managed Applications how to deploy a service catalog managed application from the Azure portal.
+ Title: Deploy a service catalog managed application
+description: Describes how to deploy a service catalog's managed application for an Azure Managed Application.
Previously updated : 08/17/2022 Last updated : 03/01/2023
-# Quickstart: Deploy service catalog managed application from Azure portal
+# Quickstart: Deploy a service catalog managed application
-In the quickstart article to [publish the definition](publish-service-catalog-app.md), you published an Azure managed application definition. In this quickstart, you use that definition to deploy a service catalog managed application. The deployment creates two resource groups. One resource group contains the managed application and the other is a managed resource group for the deployed resource. In this article, the managed application definition deploys a managed storage account.
+In this quickstart, you use the definition you created in the quickstarts to [publish an application definition](publish-service-catalog-app.md) or [publish a definition with bring your own storage](publish-service-catalog-bring-your-own-storage.md) to deploy a service catalog managed application. The deployment creates two resource groups. One resource group contains the managed application and the other is a managed resource group for the deployed resource. The managed application definition deploys an App Service plan, App Service, and storage account.
## Prerequisites
-To complete this quickstart, you need an Azure account with an active subscription. If you completed the quickstart to publish a definition, you should already have an account. Otherwise, [create a free account](https://azure.microsoft.com/free/) before you begin.
+To complete this quickstart, you need an Azure account with an active subscription. If you completed a quickstart to publish a definition, you should already have an account. Otherwise, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Create service catalog managed application
In the Azure portal, use the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Create a resource**.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/create-resource.png" alt-text="Create a resource":::
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/create-resource.png" alt-text="Screenshot of Azure home page with Create a resource highlighted.":::
1. Search for _Service Catalog Managed Application_ and select it from the available options.
1. **Service Catalog Managed Application** is displayed. Select **Create**.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/create-service-catalog-managed-application.png" alt-text="Select create":::
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/create-service-catalog-managed-application.png" alt-text="Screenshot of search result for Service Catalog Managed Application with create button highlighted.":::
-1. The portal shows the managed application definitions that you can access. From the available definitions, select the one you want to deploy. In this quickstart, use the **Managed Storage Account** definition that you created in the preceding quickstart. Select **Create**.
+1. Select **Sample managed application** and then select **Create**.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/select-service-catalog-managed-application.png" alt-text="Screenshot that shows managed application definitions that you can select and deploy.":::
+ The portal displays the managed application definitions that you created with the quickstart articles to publish an application definition.
-1. Provide values for the **Basics** tab and select **Next: Storage settings**.
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/select-service-catalog-managed-application.png" alt-text="Screenshot that shows managed application definitions that you can deploy.":::
- :::image type="content" source="./media/deploy-service-catalog-quickstart/basics-info.png" alt-text="Screenshot that highlights the information needed on the basics tab.":::
+1. Provide values for the **Basics** tab and select **Next: Web App settings**.
+
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/basics-info.png" alt-text="Screenshot that highlights the required information on the basics tab.":::
- **Subscription**: Select the subscription where you want to deploy the managed application.
- **Resource group**: Select the resource group. For this example, create a resource group named _applicationGroup_.
- **Region**: Select the location where you want to deploy the resource.
- **Application Name**: Enter a name for your application. For this example, use _demoManagedApplication_.
- - **Managed Resource Group**: Uses a default name in the format `mrg-{definitionName}-{dateTime}` like the example _mrg-ManagedStorage-20220817085240_. You can change the name.
+ - **Application resources Resource group name**: The name of the managed resource group that contains the resources that are deployed for the managed application. The default name is in the format `rg-{definitionName}-{dateTime}` but you can change the name.
+
+1. Provide values for the **Web App settings** tab and select **Next: Storage settings**.
+
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/web-app-settings.png" alt-text="Screenshot that highlights the required information on the Web App settings tab.":::
+
+ - **App Service plan name**: Create a plan name. Maximum of 40 alphanumeric characters and hyphens. For example, _demoAppServicePlan_. App Service plan names must be unique within a resource group in your subscription.
+ - **App Service name prefix**: Create a prefix for the App Service name. Maximum of 47 alphanumeric characters or hyphens. For example, _demoApp_. During deployment, the prefix is concatenated with a unique string to create a name that's globally unique across Azure.
1. Enter a prefix for the storage account name and select the storage account type. Select **Next: Review + create**.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/storage-info.png" alt-text="Screenshot that shows the information needed to create a storage account.":::
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/storage-settings.png" alt-text="Screenshot that shows the information needed to create a storage account.":::
- - **Storage account name prefix**: Use only lowercase letters and numbers and a maximum of 11 characters. During deployment, the prefix is concatenated with a unique string to create the storage account name.
+ - **Storage account name prefix**: Use only lowercase letters and numbers and a maximum of 11 characters. For example, _demostg1234_. During deployment, the prefix is concatenated with a unique string to create a name globally unique across Azure. Although you're creating a prefix, the control checks for existing names in Azure and might post a validation message that the name already exists. If so, choose a different prefix.
- **Storage account type**: Select **Change type** to choose a storage account type. The default is Standard LRS.
-1. Review the summary of the values you selected and verify **Validation Passed** is displayed. Select **Create** to begin the deployment.
+1. Review the summary of the values you selected and verify **Validation Passed** is displayed. Select **Create** to deploy the managed application.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/summary-validation.png" alt-text="Screenshot that summarizes the values you selected and shows the validation status.":::
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/summary-validation.png" alt-text="Screenshot that summarizes the values you selected and shows the status of validation passed.":::
## View results
After the service catalog managed application is deployed, you have two new reso
### Managed application
-Go to the resource group named **applicationGroup**. The resource group contains your managed application named _demoManagedApplication_.
+Go to the resource group named **applicationGroup** and select **Overview**. The resource group contains your managed application named _demoManagedApplication_.
:::image type="content" source="./media/deploy-service-catalog-quickstart/view-application-group.png" alt-text="Screenshot that shows the resource group that contains the managed application.":::
+Select the managed application's name to get more information like the link to the managed resource group.
+
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/view-managed-application.png" alt-text="Screenshot that shows the managed application's details and highlights the link to the managed resource group.":::
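If you prefer the command line, a quick way to see the same details is with Azure CLI. This sketch assumes the application and resource group names used in this quickstart:

```azurecli
# Show the managed application and the ID of its managed resource group.
az managedapp show \
  --name demoManagedApplication \
  --resource-group applicationGroup \
  --query "{name:name, managedResourceGroupId:managedResourceGroupId}"
```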
+ ### Managed resource
-Go to the managed resource group with the name prefix **mrg-ManagedStorage** to see the resource that was deployed. The resource group contains the managed storage account that uses the prefix you specified. In this example, the storage account prefix is _demoappstg_.
+Go to the managed resource group with the name prefix **rg-sampleManagedApplication** and select **Overview** to display the resources that were deployed. The resource group contains an App Service, App Service plan, and storage account.
- :::image type="content" source="./media/deploy-service-catalog-quickstart/view-managed-resource-group.png" alt-text="Screenshot that shows the managed resource group that contains the resource deployed by the managed application.":::
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/view-managed-resource-group.png" alt-text="Screenshot that shows the managed resource group that contains the resources deployed by the managed application definition.":::
-The storage account that's created by the managed application has a role assignment. In the [publish the definition](publish-service-catalog-app.md#create-an-azure-active-directory-user-group-or-application) article, you created an Azure Active Directory group. That group was used in the managed application definition. When you deployed the managed application, a role assignment for that group was added to the managed storage account.
+The managed resource group and each resource created by the managed application has a role assignment. When you used a quickstart article to create the definition, you created an Azure Active Directory group. That group was used in the managed application definition. When you deployed the managed application, a role assignment for that group was added to the managed resources.
To see the role assignment from the Azure portal:
-1. Go to the **mrg-ManagedStorage** resource group.
+1. Go to your **rg-sampleManagedApplication** resource group.
1. Select **Access Control (IAM)** > **Role assignments**. You can also view the resource's **Deny assignments**.
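You can also list the assignments from the command line. A minimal sketch, where the placeholder is the full name of your managed resource group (it starts with _rg-sampleManagedApplication_):

```azurecli
# List role assignments on the managed resource group.
az role assignment list --resource-group <managed-resource-group-name> --output table
```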
The role assignment gives the application's publisher access to manage the stora
## Clean up resources
-When your finished with the managed application, you can delete the resource groups and that will remove all the resources you created. For example, in this quickstart you created the resource groups _applicationGroup_ and a managed resource group with the prefix _mrg-ManagedStorage_.
+When you're finished with the managed application, you can delete the resource groups, which removes all the resources you created. For example, in this quickstart you created the resource groups _applicationGroup_ and a managed resource group with the prefix _rg-sampleManagedApplication_.
1. From Azure portal **Home**, in the search field, enter _resource groups_.
1. Select **Resource groups**.
1. Select **applicationGroup** and **Delete resource group**.
1. To confirm the deletion, enter the resource group name and select **Delete**.
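If you prefer the command line, the same cleanup can be done with Azure CLI, using the resource group name from this quickstart:

```azurecli
# Deleting applicationGroup also removes the managed application; its managed resource group is deleted with it.
az group delete --name applicationGroup --yes
```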
-When the resource group that contains the managed application is deleted, the managed resource group is also deleted. In this example, when _applicationGroup_ is deleted the _mrg-ManagedStorage_ resource group is also deleted.
+When the resource group that contains the managed application is deleted, the managed resource group is also deleted. In this example, when _applicationGroup_ is deleted the _rg-sampleManagedApplication_ resource group is also deleted.
+
+If you want to delete the managed application definition, delete the resource groups you created in the quickstart articles.
-If you want to delete the managed application definition, you can delete the resource groups you created in the quickstart to [publish the definition](publish-service-catalog-app.md).
+- **Publish application definition**: _packageStorageGroup_ and _appDefinitionGroup_.
+- **Publish definition with bring your own storage**: _packageStorageGroup_, _byosDefinitionStorageGroup_, and _byosAppDefinitionGroup_.
## Next steps

-- To learn how to create the definition files for a managed application, see [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
-- For Azure CLI, see [Deploy managed application with Azure CLI](./scripts/managed-application-cli-sample-create-application.md).
-- For PowerShell, see [Deploy managed application with PowerShell](./scripts/managed-application-poweshell-sample-create-application.md).
+- To learn how to create and publish the definition files for a managed application, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
+- To use your own storage to create and publish the definition files for a managed application, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md).
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Title: Publish Azure Managed Application in service catalog
-description: Describes how to publish an Azure Managed Application in your service catalog that's intended for members of your organization.
+ Title: Create and publish Azure Managed Application in service catalog
+description: Describes how to create and publish an Azure Managed Application in your service catalog that's intended for members of your organization.
Previously updated : 09/29/2022 Last updated : 03/01/2023 # Quickstart: Create and publish an Azure Managed Application definition
-This quickstart provides an introduction to working with [Azure Managed Applications](overview.md). You create and publish a managed application that's stored in your service catalog and is intended for members of your organization.
+This quickstart provides an introduction to working with [Azure Managed Applications](overview.md). You create and publish a managed application definition that's stored in your service catalog and is intended for members of your organization.
To publish a managed application to your service catalog, do the following tasks:

- Create an Azure Resource Manager template (ARM template) that defines the resources to deploy with the managed application.
- Define the user interface elements for the portal when deploying the managed application.
-- Create a _.zip_ package that contains the required template files. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition.
-- Decide which user, group, or application needs access to the resource group in the user's subscription.
-- Create the managed application definition that points to the _.zip_ package and requests access for the identity.
+- Create a _.zip_ package that contains the required JSON files. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition.
+- Deploy the managed application definition so it's available in your service catalog.
-**Optional**: If you want to deploy your managed application definition with an ARM template in your own storage account, see [bring your own storage](#bring-your-own-storage-for-the-managed-application-definition).
+If your managed application definition is more than 120 MB or if you want to use your own storage account for your organization's compliance reasons, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md).
> [!NOTE]
-> Bicep files can't be used in a managed application. You must convert a Bicep file to ARM template JSON with the Bicep [build](../bicep/bicep-cli.md#build) command.
+> You can use Bicep to develop a managed application definition but it must be converted to ARM template JSON before you can publish the definition in Azure. To convert Bicep to JSON, use the Bicep [build](../bicep/bicep-cli.md#build) command. After the file is converted to JSON it's recommended to verify the code for accuracy.
+>
+> Bicep files can be used to deploy an existing managed application definition.
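For example, a Bicep file named _mainTemplate.bicep_ (a placeholder name) can be converted to _mainTemplate.json_ with the Azure CLI wrapper for the Bicep CLI:

```azurecli
# Emits mainTemplate.json next to the Bicep file.
az bicep build --file mainTemplate.bicep
```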
## Prerequisites

To complete this quickstart, you need the following items:

-- An Azure account with an active subscription. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin.
-- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
+- An Azure account with an active subscription and permissions to Azure Active Directory resources like users, groups, or service principals. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). For Bicep files, install the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).

## Create the ARM template
Every managed application definition includes a file named _mainTemplate.json_.
Open Visual Studio Code, create a file with the case-sensitive name _mainTemplate.json_ and save it.
-Add the following JSON and save the file. It defines the parameters for creating a storage account, and specifies the properties for the storage account.
+Add the following JSON and save the file. It defines the resources to deploy an App Service, App Service plan, and storage account for the application. This storage account isn't used to store the managed application definition.
```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "appServicePlanName": {
+ "type": "string",
+ "maxLength": 40,
+ "metadata": {
+ "description": "App Service plan name."
+ }
+ },
+ "appServiceNamePrefix": {
+ "type": "string",
+ "maxLength": 47,
+ "metadata": {
+ "description": "App Service name prefix."
+ }
+ },
"storageAccountNamePrefix": { "type": "string", "maxLength": 11, "metadata": {
- "description": "Storage prefix must be maximum of 11 characters with only lowercase letters or numbers."
+ "description": "Storage account name prefix."
} }, "storageAccountType": {
- "type": "string"
- },
- "location": {
"type": "string",
- "defaultValue": "[resourceGroup().location]"
+ "allowedValues": [
+ "Premium_LRS",
+ "Standard_LRS",
+ "Standard_GRS"
+ ],
+ "metadata": {
+ "description": "Storage account type allowed values"
+ }
} }, "variables": {
- "storageAccountName": "[concat(parameters('storageAccountNamePrefix'), uniqueString(resourceGroup().id))]"
+ "appServicePlanSku": "F1",
+ "appServicePlanCapacity": 1,
+ "appServiceName": "[format('{0}{1}', parameters('appServiceNamePrefix'), uniqueString(resourceGroup().id))]",
+ "storageAccountName": "[format('{0}{1}', parameters('storageAccountNamePrefix'), uniqueString(resourceGroup().id))]"
}, "resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('appServicePlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "[variables('appServicePlanSku')]",
+ "capacity": "[variables('appServicePlanCapacity')]"
+ }
+ },
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[variables('appServiceName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]",
+ "httpsOnly": true,
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "AppServiceStorageConnectionString",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};Key={2}', variables('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2022-09-01').keys[0].value)]"
+ }
+ ]
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
{ "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-09-01",
+ "apiVersion": "2022-09-01",
"name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "sku": { "name": "[parameters('storageAccountType')]" }, "kind": "StorageV2",
- "properties": {}
+ "properties": {
+ "accessTier": "Hot"
+ }
} ], "outputs": {
- "storageEndpoint": {
+ "appServicePlan": {
+ "type": "string",
+ "value": "[parameters('appServicePlanName')]"
+ },
+ "appServiceApp": {
+ "type": "string",
+ "value": "[reference(resourceId('Microsoft.Web/sites', variables('appServiceName')), '2022-03-01').defaultHostName]"
+ },
+ "storageAccount": {
"type": "string",
- "value": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2021-09-01').primaryEndpoints.blob]"
+ "value": "[reference(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2022-09-01').primaryEndpoints.blob]"
} } } ```
-## Define your create experience
+## Define your portal experience
-As a publisher, you define the portal experience for creating the managed application. The _createUiDefinition.json_ file generates the portal interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) including drop-downs, text boxes, and password boxes.
+As a publisher, you define the portal experience to create the managed application. The _createUiDefinition.json_ file generates the portal's user interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) like drop-downs and text boxes.
-Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it.
+Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it. The user interface allows the user to input the App Service name prefix, App Service plan's name, storage account prefix, and storage account type. During deployment, the variables in _mainTemplate.json_ use the `uniqueString` function to append a 13-character string to the name prefixes so the names are globally unique across Azure.
Add the following JSON to the file and save it.
Add the following JSON to the file and save it.
{} ], "steps": [
+ {
+ "name": "webAppSettings",
+ "label": "Web App settings",
+ "subLabel": {
+ "preValidation": "Configure the web app settings",
+ "postValidation": "Completed"
+ },
+ "elements": [
+ {
+ "name": "appServicePlanName",
+ "type": "Microsoft.Common.TextBox",
+ "label": "App Service plan name",
+ "placeholder": "App Service plan name",
+ "defaultValue": "",
+ "toolTip": "Use alphanumeric characters or hyphens with a maximum of 40 characters.",
+ "constraints": {
+ "required": true,
+ "regex": "^[a-z0-9A-Z-]{1,40}$",
+ "validationMessage": "Only alphanumeric characters or hyphens are allowed, with a maximum of 40 characters."
+ },
+ "visible": true
+ },
+ {
+ "name": "appServiceName",
+ "type": "Microsoft.Common.TextBox",
+ "label": "App Service name prefix",
+ "placeholder": "App Service name prefix",
+ "defaultValue": "",
+ "toolTip": "Use alphanumeric characters or hyphens with minimum of 2 characters and maximum of 47 characters.",
+ "constraints": {
+ "required": true,
+ "regex": "^[a-z0-9A-Z-]{2,47}$",
+ "validationMessage": "Only alphanumeric characters or hyphens are allowed, with a minimum of 2 characters and maximum of 47 characters."
+ },
+ "visible": true
+ }
+ ]
+ },
{ "name": "storageConfig", "label": "Storage settings", "subLabel": {
- "preValidation": "Configure the infrastructure settings",
- "postValidation": "Done"
+ "preValidation": "Configure the storage settings",
+ "postValidation": "Completed"
},
- "bladeTitle": "Storage settings",
"elements": [ { "name": "storageAccounts",
Add the following JSON to the file and save it.
"prefix": "Storage account name prefix", "type": "Storage account type" },
+ "toolTip": {
+ "prefix": "Enter maximum of 11 lowercase letters or numbers.",
+ "type": "Available choices are Standard_LRS, Standard_GRS, and Premium_LRS."
+ },
"defaultValue": { "type": "Standard_LRS" },
Add the following JSON to the file and save it.
"Standard_LRS", "Standard_GRS" ]
- }
+ },
+ "visible": true
} ] } ], "outputs": {
+ "location": "[location()]",
+ "appServicePlanName": "[steps('webAppSettings').appServicePlanName]",
+ "appServiceNamePrefix": "[steps('webAppSettings').appServiceName]",
"storageAccountNamePrefix": "[steps('storageConfig').storageAccounts.prefix]",
- "storageAccountType": "[steps('storageConfig').storageAccounts.type]",
- "location": "[location()]"
+ "storageAccountType": "[steps('storageConfig').storageAccounts.type]"
} } }
To learn more, see [Get started with CreateUiDefinition](create-uidefinition-ove
## Package the files
-Add the two files to a file named _app.zip_. The two files must be at the root level of the _.zip_ file. If you put the files in a folder, you receive an error that states the required files aren't present when you create the managed application definition.
+Add the two files to a package file named _app.zip_. The two files must be at the root level of the _.zip_ file. If the files are in a folder, when you create the managed application definition, you receive an error that states the required files aren't present.
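For example, from the directory that contains the two files, a package with both files at the root level can be created like this (assuming the `zip` utility is installed):

```bash
# mainTemplate.json and createUiDefinition.json end up at the root of app.zip, not inside a folder.
zip app.zip mainTemplate.json createUiDefinition.json
```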
-Upload the package to an accessible location from where it can be consumed. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+Upload _app.zip_ to an Azure storage account so you can use it when you deploy the managed application's definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
# [PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
-New-AzResourceGroup -Name storageGroup -Location eastus
+```azurepowershell
+New-AzResourceGroup -Name packageStorageGroup -Location westus3
$storageAccount = New-AzStorageAccount `
- -ResourceGroupName storageGroup `
+ -ResourceGroupName packageStorageGroup `
-Name "demostorageaccount" `
- -Location eastus `
+ -Location westus3 `
-SkuName Standard_LRS ` -Kind StorageV2
$ctx = $storageAccount.Context
New-AzStorageContainer -Name appcontainer -Context $ctx -Permission blob Set-AzStorageBlobContent `
- -File "D:\myapplications\app.zip" `
+ -File "app.zip" `
-Container appcontainer ` -Blob "app.zip" ` -Context $ctx
Set-AzStorageBlobContent `
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
-az group create --name storageGroup --location eastus
+```azurecli
+az group create --name packageStorageGroup --location westus3
az storage account create \ --name demostorageaccount \
- --resource-group storageGroup \
- --location eastus \
+ --resource-group packageStorageGroup \
+ --location westus3 \
--sku Standard_LRS \ --kind StorageV2 ```
After you create the storage account, add the role assignment _Storage Blob Data
After you add the role to the storage account, it takes a few minutes to become active in Azure. You can then use the parameter `--auth-mode login` in the commands to create the container and upload the file.
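A sketch of that role assignment with Azure CLI, assuming the _Storage Blob Data Contributor_ role, your signed-in user as the assignee, and the `demostorageaccount` placeholder from the earlier commands:

```azurecli
# Grant your signed-in user data-plane access to upload blobs to the storage account.
userId=$(az ad signed-in-user show --query id --output tsv)
storageId=$(az storage account show --resource-group packageStorageGroup --name demostorageaccount --query id --output tsv)
az role assignment create --assignee $userId --role "Storage Blob Data Contributor" --scope $storageId
```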
-```azurecli-interactive
+```azurecli
az storage container create \ --account-name demostorageaccount \ --name appcontainer \
az storage blob upload \
--container-name appcontainer \ --auth-mode login \ --name "app.zip" \
- --file "./app.zip"
+ --file "app.zip"
``` For more information about storage authentication, see [Choose how to authorize access to blob data with Azure CLI](../../storage/blobs/authorize-data-operations-cli.md).
For more information about storage authentication, see [Choose how to authorize
## Create the managed application definition
-In this section you'll get identity information from Azure Active Directory, create a resource group, and create the managed application definition.
+In this section you get identity information from Azure Active Directory, create a resource group, and create the managed application definition.
-### Create an Azure Active Directory user group or application
+### Get group ID and role definition ID
-The next step is to select a user group, user, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the role that's assigned. The role can be any Azure built-in role like Owner or Contributor. To create a new Active Directory user group, see [Create a group and add members in Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+The next step is to select a user, security group, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the assigned role. The role can be any Azure built-in role like Owner or Contributor.
-This example uses a user group, so you need the object ID of the user group to use for managing the resources. Replace the placeholder `mygroup` with your group's name.
+This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `managedAppDemo` with your group's name. You use this variable's value when you deploy the managed application definition.
+
+To create a new Azure Active Directory group, go to [Manage Azure Active Directory groups and group membership](../../active-directory/fundamentals/how-to-manage-groups.md).
# [PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
-$groupID=(Get-AzADGroup -DisplayName mygroup).Id
+```azurepowershell
+$principalid=(Get-AzADGroup -DisplayName managedAppDemo).Id
``` # [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
-groupid=$(az ad group show --group mygroup --query id --output tsv)
+```azurecli
+principalid=$(az ad group show --group managedAppDemo --query id --output tsv)
```
-### Get the role definition ID
-
-Next, you need the role definition ID of the Azure built-in role you want to grant access to the user, user group, or application. Typically, you use the Owner, Contributor, or Reader role. The following command shows how to get the role definition ID for the Owner role:
+Next, get the role definition ID of the Azure built-in role you want to grant access to the user, group, or application. You use this variable's value when you deploy the managed application definition.
# [PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
+```azurepowershell
$roleid=(Get-AzRoleDefinition -Name Owner).Id ``` # [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
roleid=$(az role definition list --name Owner --query [].name --output tsv) ```
roleid=$(az role definition list --name Owner --query [].name --output tsv)
### Create the managed application definition
-If you don't already have a resource group for storing your managed application definition, create a new resource group.
-
-**Optional**: If you want to deploy your managed application definition with an ARM template in your own storage account, see [bring your own storage](#bring-your-own-storage-for-the-managed-application-definition).
+Create a resource group for your managed application definition.
# [PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
-New-AzResourceGroup -Name appDefinitionGroup -Location westcentralus
+```azurepowershell
+New-AzResourceGroup -Name appDefinitionGroup -Location westus3
``` # [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
-az group create --name appDefinitionGroup --location westcentralus
+```azurecli
+az group create --name appDefinitionGroup --location westus3
```
-Create the managed application definition resource. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+Create the managed application definition in the resource group.
The `blob` command that's run from Azure PowerShell or Azure CLI creates a variable that's used to get the URL for the package _.zip_ file. That variable is used in the command that creates the managed application definition.

# [PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
+```azurepowershell
$blob = Get-AzStorageBlob -Container appcontainer -Blob app.zip -Context $ctx New-AzManagedApplicationDefinition `
- -Name "ManagedStorage" `
- -Location "westcentralus" `
+ -Name "sampleManagedApplication" `
+ -Location "westus3" `
-ResourceGroupName appDefinitionGroup ` -LockLevel ReadOnly `
- -DisplayName "Managed Storage Account" `
- -Description "Managed Azure Storage Account" `
- -Authorization "${groupID}:$roleid" `
+ -DisplayName "Sample managed application" `
+ -Description "Sample managed application that deploys web resources" `
+ -Authorization "${principalid}:$roleid" `
-PackageFileUri $blob.ICloudBlob.StorageUri.PrimaryUri.AbsoluteUri ``` # [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+In the `blob` command's `account-name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+
+```azurecli
blob=$(az storage blob url \ --account-name demostorageaccount \ --container-name appcontainer \
blob=$(az storage blob url \
--name app.zip --output tsv) az managedapp definition create \
- --name "ManagedStorage" \
- --location "westcentralus" \
+ --name "sampleManagedApplication" \
+ --location "westus3" \
--resource-group appDefinitionGroup \ --lock-level ReadOnly \
- --display-name "Managed Storage Account" \
- --description "Managed Azure Storage Account" \
- --authorizations "$groupid:$roleid" \
+ --display-name "Sample managed application" \
+ --description "Sample managed application that deploys web resources" \
+ --authorizations "$principalid:$roleid" \
--package-file-uri "$blob" ```
When the command completes, you have a managed application definition in your re
Some of the parameters used in the preceding example are:

- **resource group**: The name of the resource group where the managed application definition is created.
-- **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. When `ReadOnly` is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock.
+- **lock level**: The `lockLevel` on the managed resource group prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. `ReadOnly` specifies that the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock level.
- **authorizations**: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group.
- - **Azure PowerShell**: `"${groupid}:$roleid"` or you can use curly braces for each variable `"${groupid}:${roleid}"`. Use a comma to separate multiple values: `"${groupid1}:$roleid1", "${groupid2}:$roleid2"`.
- - **Azure CLI**: `"$groupid:$roleid"` or you can use curly braces as shown in PowerShell. Use a space to separate multiple values: `"$groupid1:$roleid1" "$groupid2:$roleid2"`.
+ - **Azure PowerShell**: `"${principalid}:$roleid"` or you can use curly braces for each variable `"${principalid}:${roleid}"`. Use a comma to separate multiple values: `"${principalid1}:$roleid1", "${principalid2}:$roleid2"`.
+ - **Azure CLI**: `"$principalid:$roleid"` or you can use curly braces as shown in PowerShell. Use a space to separate multiple values: `"$principalid1:$roleid1" "$principalid2:$roleid2"`.
- **package file URI**: The location of a _.zip_ package file that contains the required files.
-## Bring your own storage for the managed application definition
-
-This section is optional. You can store your managed application definition in your own storage account so that its location and access can be managed by you for your regulatory needs. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition.
-
-> [!NOTE]
-> Bring your own storage is only supported with ARM template or REST API deployments of the managed application definition.
-
-### Create your storage account
-
-You must create a storage account that will contain your managed application definition for use with a service catalog. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers.
-
-This example creates a new resource group named `byosStorageRG`. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name byosStorageRG -Location eastus
-
-New-AzStorageAccount `
- -ResourceGroupName byosStorageRG `
- -Name "definitionstorage" `
- -Location eastus `
- -SkuName Standard_LRS `
- -Kind StorageV2
-```
-
-Use the following command to store the storage account's resource ID in a variable named `storageId`. You'll use this variable when you deploy the managed application definition.
-
-```azurepowershell-interactive
-$storageId = (Get-AzStorageAccount -ResourceGroupName byosStorageRG -Name definitionstorage).Id
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az group create --name byosStorageRG --location eastus
-
-az storage account create \
- --name definitionstorage \
- --resource-group byosStorageRG \
- --location eastus \
- --sku Standard_LRS \
- --kind StorageV2
-```
-
-Use the following command to store the storage account's resource ID in a variable named `storageId`. You'll use the variable's value when you deploy the managed application definition.
-
-```azurecli-interactive
-storageId=$(az storage account show --resource-group byosStorageRG --name definitionstorage --query id)
-```
---
-### Set the role assignment for your storage account
-
-Before your managed application definition can be deployed to your storage account, assign the **Contributor** role to the **Appliance Resource Provider** user at the storage account scope. This assignment lets the identity write definition files to your storage account's container.
-
-# [PowerShell](#tab/azure-powershell)
-
-In PowerShell, you can use variables for the role assignment. This example uses the `$storageId` you created in a previous step and creates the `$arpId` variable.
-
-```azurepowershell-interactive
-$arpId = (Get-AzADServicePrincipal -SearchString "Appliance Resource Provider").Id
-
-New-AzRoleAssignment -ObjectId $arpId `
--RoleDefinitionName Contributor `
--Scope $storageId
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-In Azure CLI, you need to use the string values to create the role assignment. This example gets string values from the `storageId` variable you created in a previous step and gets the object ID value for the Appliance Resource Provider. The command has placeholders for those values `arpGuid` and `storageId`. Replace the placeholders with the string values and use the quotes as shown.
-
-```azurecli-interactive
-echo $storageId
-az ad sp list --display-name "Appliance Resource Provider" --query [].id --output tsv
-
-az role assignment create --assignee "arpGuid" \
role "Contributor" \scope "storageId"
-```
-
-If you're running CLI commands with Git Bash for Windows, you might get an `InvalidSchema` error because of the `scope` parameter's string. To fix the error, run `export MSYS_NO_PATHCONV=1` and then rerun your command to create the role assignment.
---
-The **Appliance Resource Provider** is a service principal in your Azure Active Directory's tenant. From the Azure portal, you can see if it's registered by going to **Azure Active Directory** > **Enterprise applications** and change the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
-
-### Deploy the managed application definition with an ARM template
-
-Use the following ARM template to deploy your packaged managed application as a new managed application definition in your service catalog. The definition files are stored and maintained in your storage account.
-
-Open Visual Studio Code, create a file with the name _azuredeploy.json_ and save it.
-
-Add the following JSON and save the file.
-
-```json
-{
- "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- },
- "applicationName": {
- "type": "string",
- "metadata": {
- "description": "Managed Application name."
- }
- },
- "definitionStorageResourceID": {
- "type": "string",
- "metadata": {
- "description": "Storage account's resource ID where you're storing your managed application definition."
- }
- },
- "packageFileUri": {
- "type": "string",
- "metadata": {
- "description": "The URI where the .zip package file is located."
- }
- }
- },
- "variables": {
- "lockLevel": "None",
- "description": "Sample Managed application definition",
- "displayName": "Sample Managed application definition",
- "managedApplicationDefinitionName": "[parameters('applicationName')]",
- "packageFileUri": "[parameters('packageFileUri')]",
- "defLocation": "[parameters('definitionStorageResourceID')]"
- },
- "resources": [
- {
- "type": "Microsoft.Solutions/applicationDefinitions",
- "apiVersion": "2021-07-01",
- "name": "[variables('managedApplicationDefinitionName')]",
- "location": "[parameters('location')]",
- "properties": {
- "lockLevel": "[variables('lockLevel')]",
- "description": "[variables('description')]",
- "displayName": "[variables('displayName')]",
- "packageFileUri": "[variables('packageFileUri')]",
- "storageAccountId": "[variables('defLocation')]"
- }
- }
- ],
- "outputs": {}
-}
-```
-
-For more information about the ARM template's properties, see [Microsoft.Solutions/applicationDefinitions](/azure/templates/microsoft.solutions/applicationdefinitions?pivots=deployment-language-arm-template). Managed applications only use ARM template JSON.
-
-### Deploy the definition
-
-Create a resource group named _byosDefinitionRG_ and deploy the managed application definition to your storage account.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name byosDefinitionRG -Location eastus
-
-$storageId
-
-New-AzResourceGroupDeployment `
- -ResourceGroupName byosDefinitionRG `
- -TemplateFile .\azuredeploy.json
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az group create --name byosDefinitionRG --location eastus
-
-echo $storageId
-
-az deployment group create \
- --resource-group byosDefinitionRG \
- --template-file ./azuredeploy.json
-```
---
-You'll be prompted for three parameters to deploy the definition.
-
-| Parameter | Value |
-| - | - |
-| `applicationName` | Choose a name for your managed application definition. For this example, use _sampleManagedAppDefintion_.|
-| `definitionStorageResourceID` | Enter your storage account's resource ID. You created the `storageId` variable with this value in an earlier step. Don't wrap the resource ID with quotes. |
-| `packageFileUri` | Enter the URI to your _.zip_ package file. Use the URI for the _.zip_ [package file](#package-the-files) you created in an earlier step. The format is `https://yourStorageAccountName.blob.core.windows.net/appcontainer/app.zip`. |
-
-### Verify definition files storage
-
-During deployment, the template's `storageAccountId` property uses your storage account's resource ID and creates a new container with the case-sensitive name `applicationdefinitions`. The files from the _.zip_ package you specified during the deployment are stored in the new container.
-
-You can use the following commands to verify that the managed application definition files are saved in your storage account's container. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell-interactive
-Get-AzStorageAccount -ResourceGroupName byosStorageRG -Name definitionstorage |
-Get-AzStorageContainer -Name applicationdefinitions |
-Get-AzStorageBlob | Select-Object -Property *
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az storage blob list \
- --container-name applicationdefinitions \
- --account-name definitionstorage \
- --query "[].{container:container, name:name}"
-```
-
-When you run the Azure CLI command, you might see a warning message similar to the CLI command in [package the files](#package-the-files).
---
-> [!NOTE]
-> For added security, you can create a managed applications definition and store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in your service catalog.
-
## Make sure users can see your definition

You have access to the managed application definition, but you want to make sure other users in your organization can access it. Grant them at least the Reader role on the definition. They may have inherited this level of access from the subscription or resource group. To check who has access to the definition and add users or groups, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).

## Next steps

You've published the managed application definition. Now, learn how to deploy an instance of that definition.

> [!div class="nextstepaction"]
-> [Quickstart: Deploy service catalog app](deploy-service-catalog-quickstart.md)
+> [Quickstart: Deploy a service catalog managed application](deploy-service-catalog-quickstart.md)
azure-resource-manager Publish Service Catalog Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-bring-your-own-storage.md
+
+ Title: Bring your own storage to create and publish an Azure Managed Application definition
+description: Describes how to bring your own storage to create and publish an Azure Managed Application definition in your service catalog.
+ Last updated : 03/01/2023
+# Quickstart: Bring your own storage to create and publish an Azure Managed Application definition
+
+This quickstart provides an introduction to bring your own storage (BYOS) for an [Azure Managed Application](overview.md). You create and publish a managed application definition in your service catalog for members of your organization. When you use your own storage account, your managed application definition can exceed the service catalog's 120-MB limit.
+
+To publish a managed application definition to your service catalog, do the following tasks:
+
+- Create an Azure Resource Manager template (ARM template) that defines the Azure resources deployed by the managed application.
+- Define the user interface elements for the portal when deploying the managed application.
+- Create a _.zip_ package that contains the required JSON files.
+- Create a storage account where you store the managed application definition.
+- Deploy the managed application definition to your own storage account so it's available in your service catalog.
+
+If your managed application definition is less than 120 MB and you don't want to use your own storage account, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
+
+> [!NOTE]
+> You can use Bicep to develop a managed application definition, but it must be converted to ARM template JSON before you can publish the definition in Azure. To convert Bicep to JSON, use the Bicep [build](../bicep/bicep-cli.md#build) command. After the file is converted to JSON, we recommend that you verify the code for accuracy.
+>
+> Bicep files can be used to deploy an existing managed application definition.
+
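+For example, if you author the main template as a hypothetical _mainTemplate.bicep_ file, a minimal sketch of the conversion with the Azure CLI might look like the following; the file names are assumptions for illustration.
+
+```azurecli
+# Convert a Bicep file to the ARM template JSON that the managed application definition requires
+az bicep build --file mainTemplate.bicep --outfile mainTemplate.json
+```
+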
+## Prerequisites
+
+To complete this quickstart, you need the following items:
+
+- An Azure account with an active subscription and permissions to Azure Active Directory resources like users, groups, or service principals. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). For Bicep files, install the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+
+## Create the ARM template
+
+Every managed application definition includes a file named _mainTemplate.json_. The template defines the Azure resources to deploy and is no different than a regular ARM template.
+
+Open Visual Studio Code, create a file with the case-sensitive name _mainTemplate.json_ and save it.
+
+Add the following JSON and save the file. It defines the managed application's resources to deploy an App Service, App Service plan, and a storage account.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "appServicePlanName": {
+ "type": "string",
+ "maxLength": 40,
+ "metadata": {
+ "description": "App Service plan name."
+ }
+ },
+ "appServiceNamePrefix": {
+ "type": "string",
+ "maxLength": 47,
+ "metadata": {
+ "description": "App Service name prefix."
+ }
+ },
+ "storageAccountNamePrefix": {
+ "type": "string",
+ "maxLength": 11,
+ "metadata": {
+ "description": "Storage account name prefix."
+ }
+ },
+ "storageAccountType": {
+ "type": "string",
+ "allowedValues": [
+ "Premium_LRS",
+ "Standard_LRS",
+ "Standard_GRS"
+ ],
+ "metadata": {
+ "description": "Storage account type allowed values"
+ }
+ }
+ },
+ "variables": {
+ "appServicePlanSku": "F1",
+ "appServicePlanCapacity": 1,
+ "appServiceName": "[format('{0}{1}', parameters('appServiceNamePrefix'), uniqueString(resourceGroup().id))]",
+ "storageAccountName": "[format('{0}{1}', parameters('storageAccountNamePrefix'), uniqueString(resourceGroup().id))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('appServicePlanName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "[variables('appServicePlanSku')]",
+ "capacity": "[variables('appServicePlanCapacity')]"
+ }
+ },
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[variables('appServiceName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]",
+ "httpsOnly": true,
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "AppServiceStorageConnectionString",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};Key={2}', variables('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2022-09-01').keys[0].value)]"
+ }
+ ]
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-09-01",
+ "name": "[variables('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "[parameters('storageAccountType')]"
+ },
+ "kind": "StorageV2",
+ "properties": {
+ "accessTier": "Hot"
+ }
+ }
+ ],
+ "outputs": {
+ "appServicePlan": {
+ "type": "string",
+ "value": "[parameters('appServicePlanName')]"
+ },
+ "appServiceApp": {
+ "type": "string",
+ "value": "[reference(resourceId('Microsoft.Web/sites', variables('appServiceName')), '2022-03-01').defaultHostName]"
+ },
+ "storageAccount": {
+ "type": "string",
+ "value": "[reference(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2022-09-01').primaryEndpoints.blob]"
+ }
+ }
+}
+```
+
+## Define your portal experience
+
+As a publisher, you define the portal experience to create the managed application. The _createUiDefinition.json_ file generates the portal's user interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) like drop-downs and text boxes.
+
+Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it. The user interface allows the user to input the App Service name prefix, App Service plan's name, storage account prefix, and storage account type. During deployment, the variables in _mainTemplate.json_ use the `uniqueString` function to append a 13-character string to the name prefixes so the names are globally unique across Azure.
+
+Add the following JSON to the file and save it.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#",
+ "handler": "Microsoft.Azure.CreateUIDef",
+ "version": "0.1.2-preview",
+ "parameters": {
+ "basics": [
+ {}
+ ],
+ "steps": [
+ {
+ "name": "webAppSettings",
+ "label": "Web App settings",
+ "subLabel": {
+ "preValidation": "Configure the web app settings",
+ "postValidation": "Completed"
+ },
+ "elements": [
+ {
+ "name": "appServicePlanName",
+ "type": "Microsoft.Common.TextBox",
+ "label": "App Service plan name",
+ "placeholder": "App Service plan name",
+ "defaultValue": "",
+ "toolTip": "Use alphanumeric characters or hyphens with a maximum of 40 characters.",
+ "constraints": {
+ "required": true,
+ "regex": "^[a-z0-9A-Z-]{1,40}$",
+ "validationMessage": "Only alphanumeric characters or hyphens are allowed, with a maximum of 40 characters."
+ },
+ "visible": true
+ },
+ {
+ "name": "appServiceName",
+ "type": "Microsoft.Common.TextBox",
+ "label": "App Service name prefix",
+ "placeholder": "App Service name prefix",
+ "defaultValue": "",
+ "toolTip": "Use alphanumeric characters or hyphens with minimum of 2 characters and maximum of 47 characters.",
+ "constraints": {
+ "required": true,
+ "regex": "^[a-z0-9A-Z-]{2,47}$",
+ "validationMessage": "Only alphanumeric characters or hyphens are allowed, with a minimum of 2 characters and maximum of 47 characters."
+ },
+ "visible": true
+ }
+ ]
+ },
+ {
+ "name": "storageConfig",
+ "label": "Storage settings",
+ "subLabel": {
+ "preValidation": "Configure the storage settings",
+ "postValidation": "Completed"
+ },
+ "elements": [
+ {
+ "name": "storageAccounts",
+ "type": "Microsoft.Storage.MultiStorageAccountCombo",
+ "label": {
+ "prefix": "Storage account name prefix",
+ "type": "Storage account type"
+ },
+ "toolTip": {
+ "prefix": "Enter maximum of 11 lowercase letters or numbers.",
+ "type": "Available choices are Standard_LRS, Standard_GRS, and Premium_LRS."
+ },
+ "defaultValue": {
+ "type": "Standard_LRS"
+ },
+ "constraints": {
+ "allowedTypes": [
+ "Premium_LRS",
+ "Standard_LRS",
+ "Standard_GRS"
+ ]
+ },
+ "visible": true
+ }
+ ]
+ }
+ ],
+ "outputs": {
+ "location": "[location()]",
+ "appServicePlanName": "[steps('webAppSettings').appServicePlanName]",
+ "appServiceNamePrefix": "[steps('webAppSettings').appServiceName]",
+ "storageAccountNamePrefix": "[steps('storageConfig').storageAccounts.prefix]",
+ "storageAccountType": "[steps('storageConfig').storageAccounts.type]"
+ }
+ }
+}
+```
+
+To learn more, go to [Get started with CreateUiDefinition](create-uidefinition-overview.md).
+
+## Package the files
+
+Add the two files to a package file named _app.zip_. The two files must be at the root level of the _.zip_ file. If the files are in a folder, when you create the managed application definition, you receive an error that states the required files aren't present.
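+
+A minimal sketch of creating the package from a shell, assuming both files are in your current directory and the `zip` utility is available (it is in Azure Cloud Shell):
+
+```azurecli
+# Both files go at the root of the archive, not inside a folder
+zip app.zip mainTemplate.json createUiDefinition.json
+```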
+
+Upload _app.zip_ to an Azure storage account so you can use it when you deploy the managed application's definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup -Name packageStorageGroup -Location westus3
+
+$storageAccount = New-AzStorageAccount `
+ -ResourceGroupName packageStorageGroup `
+ -Name "demostorageaccount" `
+ -Location westus3 `
+ -SkuName Standard_LRS `
+ -Kind StorageV2
+
+$ctx = $storageAccount.Context
+
+New-AzStorageContainer -Name appcontainer -Context $ctx -Permission blob
+
+Set-AzStorageBlobContent `
+ -File "app.zip" `
+ -Container appcontainer `
+ -Blob "app.zip" `
+ -Context $ctx
+```
+
+Use the following command to store the package file's URI in a variable named `packageuri`. You use the variable's value when you deploy the managed application definition.
+
+```azurepowershell
+$packageuri=(Get-AzStorageBlob -Container appcontainer -Blob app.zip -Context $ctx).ICloudBlob.StorageUri.PrimaryUri.AbsoluteUri
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az group create --name packageStorageGroup --location westus3
+
+az storage account create \
+ --name demostorageaccount \
+ --resource-group packageStorageGroup \
+ --location westus3 \
+ --sku Standard_LRS \
+ --kind StorageV2
+```
+
+After you create the storage account, add the role assignment _Storage Blob Data Contributor_ to the storage account scope. Assign access to your Azure Active Directory user account. Depending on your access level in Azure, you might need other permissions assigned by your administrator. For more information, go to [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md).
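+
+A minimal sketch of that role assignment from the Azure CLI follows; the sign-in name `user@contoso.com` is a placeholder for your own account, and the scope is the storage account's resource ID.
+
+```azurecli
+az role assignment create \
+  --role "Storage Blob Data Contributor" \
+  --assignee "user@contoso.com" \
+  --scope $(az storage account show --resource-group packageStorageGroup --name demostorageaccount --query id --output tsv)
+```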
+
+After you add the role to the storage account, it takes a few minutes to become active in Azure. You can then use the parameter `--auth-mode login` in the commands to create the container and upload the file.
+
+```azurecli
+az storage container create \
+ --account-name demostorageaccount \
+ --name appcontainer \
+ --auth-mode login \
+ --public-access blob
+
+az storage blob upload \
+ --account-name demostorageaccount \
+ --container-name appcontainer \
+ --auth-mode login \
+ --name "app.zip" \
+ --file "app.zip"
+```
+
+For more information about storage authentication, go to [Choose how to authorize access to blob data with Azure CLI](../../storage/blobs/authorize-data-operations-cli.md).
+
+Use the following command to store the package file's URI in a variable named `packageuri`. You use the variable's value when you deploy the managed application definition.
+
+```azurecli
+packageuri=$(az storage blob url \
+ --account-name demostorageaccount \
+ --container-name appcontainer \
+ --auth-mode login \
+ --name app.zip --output tsv)
+```
+++
+## Bring your own storage for the managed application definition
+
+You store your managed application definition in your own storage account so that you can manage its location and access to meet your organization's regulatory needs. Using your own storage account also allows your application definition to exceed the 120-MB limit for a service catalog's managed application definition.
+
+> [!NOTE]
+> Bring your own storage is only supported with ARM template or REST API deployments of the managed application definition.
+
+### Create the storage account
+
+Create the storage account for your managed application definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers.
+
+This example creates a new resource group named `byosDefinitionStorageGroup`. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup -Name byosDefinitionStorageGroup -Location westus3
+
+New-AzStorageAccount `
+ -ResourceGroupName byosDefinitionStorageGroup `
+ -Name "definitionstorage" `
+ -Location westus3 `
+ -SkuName Standard_LRS `
+ -Kind StorageV2
+```
+
+Use the following command to store the storage account's resource ID in a variable named `storageid`. You use the variable's value when you deploy the managed application definition.
+
+```azurepowershell
+$storageid = (Get-AzStorageAccount -ResourceGroupName byosDefinitionStorageGroup -Name definitionstorage).Id
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az group create --name byosDefinitionStorageGroup --location westus3
+
+az storage account create \
+ --name definitionstorage \
+ --resource-group byosDefinitionStorageGroup \
+ --location westus3 \
+ --sku Standard_LRS \
+ --kind StorageV2
+```
+
+Use the following command to store the storage account's resource ID in a variable named `storageid`. You use the variable's value to set up the storage account's role assignment and when you deploy the managed application definition.
+
+```azurecli
+storageid=$(az storage account show --resource-group byosDefinitionStorageGroup --name definitionstorage --query id --output tsv)
+```
+++
+### Set the role assignment for your storage account
+
+Before you deploy your managed application definition to your storage account, assign the **Contributor** role to the **Appliance Resource Provider** user at the storage account scope. This assignment lets the identity write definition files to your storage account's container.
+
+# [PowerShell](#tab/azure-powershell)
+
+You can use variables to set up the role assignment. This example uses the `$storageid` variable you created in the previous step and creates the `$arpid` variable.
+
+```azurepowershell
+$arpid = (Get-AzADServicePrincipal -SearchString "Appliance Resource Provider").Id
+
+New-AzRoleAssignment -ObjectId $arpid `
+-RoleDefinitionName Contributor `
+-Scope $storageid
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+You can use variables to set up the role assignment. This example uses the `$storageid` variable you created in the previous step and creates the `$arpid` variable.
+
+```azurecli
+arpid=$(az ad sp list --display-name "Appliance Resource Provider" --query [].id --output tsv)
+
+az role assignment create --assignee $arpid \
+--role "Contributor" \
+--scope $storageid
+```
+
+If you're running CLI commands with Git Bash for Windows, you might get an `InvalidSchema` error because of the `scope` parameter's string. To fix the error, run `export MSYS_NO_PATHCONV=1` and then rerun your command to create the role assignment.
+++
+The _Appliance Resource Provider_ is a service principal in your Azure Active Directory's tenant. From the Azure portal, you can verify if it's registered by going to **Azure Active Directory** > **Enterprise applications** and change the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it isn't found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
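+
+If the provider isn't registered, a minimal sketch of registering it from the Azure CLI:
+
+```azurecli
+# Register the Microsoft.Solutions resource provider in the current subscription
+az provider register --namespace Microsoft.Solutions
+```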
+
+## Get group ID and role definition ID
+
+The next step is to select a user, security group, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the assigned role. The role can be any Azure built-in role like Owner or Contributor.
+
+This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `managedAppDemo` with your group's name. You use the variable's value when you deploy the managed application definition.
+
+To create a new Azure Active Directory group, go to [Manage Azure Active Directory groups and group membership](../../active-directory/fundamentals/how-to-manage-groups.md).
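+
+If you don't have a group yet, a minimal sketch of creating one from the Azure CLI; the display name `managedAppDemo` matches this article, and the mail nickname is an assumption.
+
+```azurecli
+az ad group create --display-name managedAppDemo --mail-nickname managedAppDemo
+```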
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$principalid=(Get-AzADGroup -DisplayName managedAppDemo).Id
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+principalid=$(az ad group show --group managedAppDemo --query id --output tsv)
+```
+++
+Next, get the role definition ID of the Azure built-in role you want to grant access to the user, group, or application. You use the variable's value when you deploy the managed application definition.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$roleid=(Get-AzRoleDefinition -Name Owner).Id
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+roleid=$(az role definition list --name Owner --query [].name --output tsv)
+```
+++
+## Create the definition deployment template
+
+Use a Bicep file to deploy the managed application definition in your service catalog. After the deployment, the definition files are stored in your own storage account.
+
+Open Visual Studio Code, create a file with the name _deployDefinition.bicep_ and save it.
+
+Add the following Bicep code and save the file.
+
+```bicep
+param location string = resourceGroup().location
+
+@description('Name of the managed application definition.')
+param managedApplicationDefinitionName string
+
+@description('Resource ID for the bring your own storage account where the definition is stored.')
+param definitionStorageResourceID string
+
+@description('The URI of the .zip package file.')
+param packageFileUri string
+
+@description('Publishers Principal ID that needs permissions to manage resources in the managed resource group.')
+param principalId string
+
+@description('Role ID for permissions to the managed resource group.')
+param roleId string
+
+var definitionLockLevel = 'ReadOnly'
+var definitionDisplayName = 'Sample BYOS managed application'
+var definitionDescription = 'Sample BYOS managed application that deploys web resources'
+
+resource managedApplicationDefinition 'Microsoft.Solutions/applicationDefinitions@2021-07-01' = {
+ name: managedApplicationDefinitionName
+ location: location
+ properties: {
+ lockLevel: definitionLockLevel
+ description: definitionDescription
+ displayName: definitionDisplayName
+ packageFileUri: packageFileUri
+ storageAccountId: definitionStorageResourceID
+ authorizations: [
+ {
+ principalId: principalId
+ roleDefinitionId: roleId
+ }
+ ]
+ }
+}
+```
+
+For more information about the template's properties, go to [Microsoft.Solutions/applicationDefinitions](/azure/templates/microsoft.solutions/applicationdefinitions).
+
+The `lockLevel` on the managed resource group prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. `ReadOnly` specifies that the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock level.
+
+## Create the parameter file
+
+The managed application definition's deployment template needs input for several parameters. The deployment command prompts you for the values or you can create a parameter file for the values. In this example, we use a parameter file to pass the parameter values to the deployment command.
+
+In Visual Studio Code, create a new file named _deployDefinition.parameters.json_ and save it.
+
+Add the following to your parameter file and save it. Then, replace the `{placeholder values}` including the curly braces, with your values.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "managedApplicationDefinitionName": {
+ "value": "{placeholder for managed application name}"
+ },
+ "definitionStorageResourceID": {
+ "value": "{placeholder for you storage account ID}"
+ },
+ "packageFileUri": {
+ "value": "{placeholder for the packageFileUri}"
+ },
+ "principalId": {
+ "value": "{placeholder for principalid value}"
+ },
+ "roleId": {
+ "value": "{placeholder for roleid value}"
+ }
+ }
+}
+```
+
+The following table describes the parameter values for the managed application definition.
+
+| Parameter | Value |
+| - | - |
+| `managedApplicationDefinitionName` | Name of the managed application definition. For this example, use _sampleByosManagedApplication_.|
+| `definitionStorageResourceID` | Resource ID for the storage account where the definition is stored. Use your `storageid` variable's value. |
+| `packageFileUri` | Enter the URI for your _.zip_ package file. Use your `packageuri` variable's value. The format is `https://yourStorageAccountName.blob.core.windows.net/appcontainer/app.zip`. |
+| `principalId` | The publisher's principal ID that needs permissions to manage resources in the managed resource group. Use your `principalid` variable's value. |
+| `roleId` | Role ID for permissions to the managed resource group, for example, Owner, Contributor, or Reader. Use your `roleid` variable's value. |
+
+To get your variable values from the command prompt:
+- Azure PowerShell: type `$variableName` to display the value.
+- Azure CLI: type `echo $variableName` to display the value.
+
+## Deploy the definition
+
+When you deploy the managed application's definition, it becomes available in your service catalog. This process doesn't deploy the managed application's resources.
+
+Create a resource group named _byosAppDefinitionGroup_ and deploy the managed application definition to your storage account.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup -Name byosAppDefinitionGroup -Location westus3
+
+New-AzResourceGroupDeployment `
+ -ResourceGroupName byosAppDefinitionGroup `
+ -TemplateFile deployDefinition.bicep `
+ -TemplateParameterFile deployDefinition.parameters.json
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az group create --name byosAppDefinitionGroup --location westus3
+
+az deployment group create \
+ --resource-group byosAppDefinitionGroup \
+ --template-file deployDefinition.bicep \
+ --parameters @deployDefinition.parameters.json
+```
+++
+## Verify definition files storage
+
+During deployment, the template's `storageAccountId` property uses your storage account's resource ID and creates a new container with the case-sensitive name `applicationdefinitions`. The files from the _.zip_ package you specified during the deployment are stored in the new container.
+
+You can use the following commands to verify that the managed application definition files are saved in your storage account's container. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzStorageAccount -ResourceGroupName byosDefinitionStorageGroup -Name definitionstorage |
+Get-AzStorageContainer -Name applicationdefinitions |
+Get-AzStorageBlob | Select-Object -Property Name | Format-List
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az storage blob list \
+ --container-name applicationdefinitions \
+ --account-name definitionstorage \
+ --query "[].{Name:name}"
+```
+
+When you run the Azure CLI command, a credentials warning message might be displayed similar to the CLI command in [package the files](#package-the-files). To clear the warning message, you can assign yourself _Storage Blob Data Contributor_ or _Storage Blob Data Reader_ to the storage account's scope, and then include the `--auth-mode login` parameter in the command.
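+
+A sketch of the same query using Azure Active Directory credentials after the role assignment is in place:
+
+```azurecli
+az storage blob list \
+  --container-name applicationdefinitions \
+  --account-name definitionstorage \
+  --auth-mode login \
+  --query "[].{Name:name}"
+```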
+++
+> [!NOTE]
+> For added security, you can create a managed applications definition and store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can access the definition in your service catalog.
+
+## Make sure users can access your definition
+
+You have access to the managed application definition, but you want to make sure other users in your organization can access it. Grant them at least the Reader role on the definition. They may have inherited this level of access from the subscription or resource group. To check who has access to the definition and add users or groups, go to [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
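+
+If you prefer the command line, a minimal sketch of granting Reader on the definition; the sign-in name is a placeholder, and the definition name and resource group are the ones used in this article.
+
+```azurecli
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Reader" \
+  --scope $(az managedapp definition show --resource-group byosAppDefinitionGroup --name sampleByosManagedApplication --query id --output tsv)
+```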
+
+## Next steps
+
+You've published the managed application definition. Now, learn how to deploy an instance of that definition.
+
+> [!div class="nextstepaction"]
+> [Quickstart: Deploy a service catalog managed application](deploy-service-catalog-quickstart.md)
azure-resource-manager Request Limits And Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/request-limits-and-throttling.md
Title: Request limits and throttling
description: Describes how to use throttling with Azure Resource Manager requests when subscription limits have been reached.
Previously updated : 12/16/2022
Last updated : 03/02/2023

# Throttling Resource Manager requests
The remaining requests are returned in the [response header values](#remaining-r
## Resource provider limits
-Resource providers apply their own throttling limits. The resource provider throttles per region of the resource in the request and per principal ID. Because Resource Manager throttles by instance of Resource Manager, and there are several instances of Resource Manager in each region, the resource provider might receive more requests than the default limits in the previous section.
+Resource providers apply their own throttling limits. Within each subscription, the resource provider throttles per region of the resource in the request. Because Resource Manager throttles by instance of Resource Manager, and there are several instances of Resource Manager in each region, the resource provider might receive more requests than the default limits in the previous section.
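+
+To see how close you are to the Resource Manager limits, you can inspect the `x-ms-ratelimit-remaining-*` response headers on any management request. A minimal sketch with curl and the Azure CLI; the subscription ID is a placeholder.
+
+```azurecli
+curl -s -D - -o /dev/null \
+  -H "Authorization: Bearer $(az account get-access-token --query accessToken --output tsv)" \
+  "https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups?api-version=2021-04-01" \
+  | grep -i x-ms-ratelimit-remaining
+```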
This section discusses the throttling limits of some widely used resource providers.
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | sqlServerRegistrations | Yes | Yes |
> | sqlServerRegistrations / sqlServers | No | No |
-## Microsoft.AzurePercept
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | accounts | Yes | Yes |
-> | accounts / devices | No | No |
-> | accounts / devices / sensors | No | No |
-> | accounts / sensors | No | No |
-> | accounts / solutioninstances | No | No |
-> | accounts / solutions | No | No |
-> | accounts / targets | No | No |
- ## Microsoft.AzureScan > [!div class="mx-tableFixed"]
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
The resources are listed by resource provider namespace. To match a resource pro
> | sqlServerRegistrations | Yes |
> | sqlServerRegistrations / sqlServers | No |
-## Microsoft.AzurePercept
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | accounts | Yes |
-> | accounts / devices | No |
-> | accounts / devices / sensors | No |
-> | accounts / sensors | No |
-> | accounts / solutioninstances | No |
-> | accounts / solutions | No |
-> | accounts / targets | No |
- ## Microsoft.AzureScan > [!div class="mx-tableFixed"]
azure-video-analyzer Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/access-policies.md
- Title: Access policies
-description: This article explains how Azure Video Analyzer uses JWT tokens in access policies to secure videos.
- Previously updated : 11/04/2021--
-
-# Access policies
--
-Access policies define the permissions and duration of access to a given Video Analyzer video resource. These access policies allow for greater control and flexibility by allowing 3rd party (non-AAD client) JWT tokens to provide authorization to client APIs that enable:
-- access to Video Metadata.
-- access to Video streaming.
-
-## Access Policy definition
-
-```json
-"name": "accesspolicyname1",
-"properties": {
- "role": "Reader",
- "authentication": {
- "@type": "#Microsoft.VideoAnalyzer.JwtAuthentication",
- "issuers": [
- "issuer1",
- "issuer2"
- ],
- "audiences": [
- "audience1"
- ],
- "claims": [
- {
- "name":"claimname1",
- "value":"claimvalue1"
- },
- {
- "name":"claimname2",
- "value":"claimvalue2"
- }
- ],
- "keys": [
- {
- "@type": "#Microsoft.VideoAnalyzer.RsaTokenKey",
- "alg": "RS256",
- "kid": "123",
- "n": "YmFzZTY0IQ==",
- "e": "ZLFzZTY0IQ=="
- },
- {
- "@type": "#Microsoft.VideoAnalyzer.EccTokenKey",
- "alg": "ES256",
- "kid": "124",
- "x": "XX==",
- "y": "YY=="
- }
- ]
- }
-}
-```
-
-> [!NOTE]
-> Only one key type is required.
-
-### Roles
-
-Currently only reader role is supported.
-
-### Issuer Matching Rules
-
-Multiple issuers can be specified in the policy, and a single issuer can be specified in the token. The issuer matches if the token issuer is among the issuers specified in the policy.
-
-### Audience Matching Rules
-
-If the audience value is ${System.Runtime.BaseResourceUrlPattern} for the video resource, then the audience that is provided in the JWT token must match the base resource URL. If not, then the token audience must match the audience from the access policy.
-
-### Claims Matching Rules
-
-Multiple claims can be specified in the access policy and in the JWT token. All the claims from an access policy must be provided in the token to pass validation; however, the JWT token can have additional claims that are not listed in the access policy.
-
-### Keys
-
-Two types of keys are supported: the RSA and ECC types.
-
-[RSA](https://wikipedia.org/wiki/RSA_(cryptosystem))
-
-* @type- \#Microsoft.VideoAnalyzer.RsaTokenKey
-* alg - Algorithm. Can be 256, 384 or 512
-* kid - Key ID
-* n - Modulus
-* e - Public Exponent
-
-[ECC](https://wikipedia.org/wiki/Elliptic-curve_cryptography)
-
-* @type- \#Microsoft.VideoAnalyzer.EccTokenKey
-* alg - Algorithm. Can be 256, 384 or 512
-* kid - Key ID
-* x - Coordinate value.
-* y - Coordinate value.
-
-### Token validation Process
-
-Customers must create their own JWT tokens, which will be validated using the following method:
--- From the list of policies that match the Key ID we validate:
- - Token signature
- - Token expiration
- - Issuer
- - Audience
- - Additional claims
-
-### Policy Audience and Token Matching Examples:
-
-| **Policy Audience** | Requested URL | Token URL | Result |
-| - | - | | |
-| (Any literal) | (ANY) | (Match) | Grant |
-| (Any Literal) | (ANY) | (Not Match) | Deny |
-| ${System.Runtime.BaseResourceUrlPattern} | https://fqdn/videos | https://fqdn/videos/* | Grant |
-| ${System.Runtime.BaseResourceUrlPattern} | https://fqdn/videos | https://fqdn/videos/{videoName} | Deny |
-| ${System.Runtime.BaseResourceUrlPattern} | https://fqdn/videos/{videoName} | https://fqdn/vid* | Grant |
-| ${System.Runtime.BaseResourceUrlPattern} | https://fqdn/videos/{videoName} | https://fqdn/videos/* | Grant |
-| ${System.Runtime.BaseResourceUrlPattern} | https://fqdn/videos/{videoName} | https://fqdn/videos/{baseVideoName}* | Grant |
-| ${System.Runtime.BaseResourceUrlPattern} | https://fqdn/videos/{videoName} | https://fqdn/videos/{videoName} | Grant |
-| ${System.Runtime.BaseResourceUrlPattern} | https://fqdn/videos/{videoName}Suffix | https://fqdn/videos/{videoName} | Deny |
-| ${System.Runtime.BaseResourceUrlPattern} | https://fqdn/videos/{otherVideoName} | https://fqdn/videos/{videoName} | Deny |
-
-> [!NOTE]
-> Video Analyzer supports a maximum of 20 policies. ${System.Runtime.BaseResourceUrlPattern} allows for greater flexibility to access specific resources by using one access policy and multiple tokens. These tokens then allow access to different Video Analyzer resources based on the audience.
-
-## Creating a token
-
-In this section, we will create a JWT token that we will use later in the article. We will use a sample application that will generate the JWT token and provide you with all the fields required to create the access policy.
-
-> [!NOTE]
-> If you are familiar with how to generate a JWT token based on either an RSA or ECC certificate, you can skip this section.
-
-1. Clone the [AVA C# samples repository](https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp). Then, go to the JWTTokenIssuer application folder *src/jwt-token-issuer* and find the JWTTokenIssuer application.
-2. Open Visual Studio Code, and then go to the folder where you downloaded the JWTTokenIssuer application. This folder should contain the *\*.csproj* file.
-3. In the explorer pane, go to the *program.cs* file.
-4. On line 77, change the audience to your Video Analyzer endpoint, followed by /videos/\*, so it looks like:
-
- ```
- https://{Azure Video Analyzer Account ID}.api.{Azure Long Region Code}.videoanalyzer.azure.net/videos/*
- ```
-
- > [!NOTE]
- > The Video Analyzer endpoint can be found in overview section of the Video Analyzer resource in the Azure portal.
-
- :::image type="content" source="media/player-widget/client-api-url.png" alt-text="Screenshot that shows the player widget endpoint.":::
-
-5. On line 78, change the issuer to the issuer value of your certificate. Example: `https://contoso.com`
-6. Save the file.
-
- > [!NOTE]
- > You might be prompted with the message `Required assets to build and debug are missing from 'jwt token issuer'. Add them?` Select `Yes`.
-
- :::image type="content" source="media/player-widget/visual-studio-code-required-assets.png" alt-text="Screenshot that shows the required asset prompt in Visual Studio Code.":::
-
-7. Open a Command Prompt window and go to the folder with the JWTTokenIssuer files. Run the following two commands: `dotnet build`, followed by `dotnet run`. If you have the C# extension on Visual Studio Code, you also can select F5 to run the JWTTokenIssuer application.
-
-The application builds and then executes. After it builds, it creates a self-signed certificate and generates the JWT token information from that certificate. You also can run the JWTTokenIssuer.exe file that's located in the debug folder of the directory where JWTTokenIssuer was built. The advantage of running the application is that you can specify input options as follows:
--- `JwtTokenIssuer [--audience=<audience>] [--issuer=<issuer>] [--expiration=<expiration>] [--certificatePath=<filepath> --certificatePassword=<password>]`-
-JWTTokenIssuer creates the JWT token and the following needed components:
--- `Issuer`, `Audience`, `Key Type`, `Algorithm`, `Key Id`, `RSA Key Modulus`, `RSA Key Exponent`, `Token`-
-Be sure to copy these values for later use.
--
-## Creating an Access Policy
-
-There are two ways to create an access policy.
-
-### In the Azure portal
-
-1. Sign in to the Azure portal and go to your resource group where your Video Analyzer account is located.
-1. Select the Video Analyzer resource.
-1. Under **Video Analyzer**, select **Access Policies**.
-
- :::image type="content" source="./media/player-widget/portal-access-policies.png" alt-text="Player widget - portal access policies.":::
-
-1. Select **New** and enter the following information:
-
- > [!NOTE]
- > These values come from the JWTTokenIssuer application created in the previous step.
-
- - Access policy name - any name
-
- - Issuer - must match the JWT Token Issuer
-
- - Audience - Audience for the JWT Token -- `${System.Runtime.BaseResourceUrlPattern}` is the default.
-
- - Key Type - RSA
-
- - Algorithm - supported values are RS256, RS384, RS512
-
- - Key ID - generated from your certificate. For more information, see [Create a token](#creating-a-token).
-
- - RSA Key Modulus - generated from your certificate. For more information, see [Create a token](#creating-a-token).
-
- - RSA Key Exponent - generated from your certificate. For more information, see [Create a token](#creating-a-token).
-
- :::image type="content" source="./media/player-widget/access-policies-portal.png" alt-text="Player widget - access policies portal":::
-
-1. Select **Save**.
-### Create Access Policy via API
-
-See Azure Resource Manager (ARM) API
-
-## Next steps
-
-[Overview](overview.md)
azure-video-analyzer Access Public Endpoints Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/access-public-endpoints-networking.md
- Title: Public endpoints and networking
-description: Azure Video Analyzer exposes a set of public network endpoints which enable different product scenarios, including management, ingestion, and playback. This article explains how to access public endpoints and networking.
- Previously updated : 11/04/2021--
-# Public endpoints and networking
--
-Azure Video Analyzer exposes a set of public network endpoints that enable different product scenarios, including management, ingestion, and playback. This article describes those endpoints, and provides some details about how they are used. The diagram below depicts those endpoints, in addition to some key endpoints exposed by associated Azure Services.
--
-## Video Analyzer endpoints
-
-This section provides a list of Video Analyzer endpoints.
-
-### Streaming
-
-* **Purpose**: exposes audio, video and inference data, which can be consumed by [Video Analyzer player widget](player-widget.md) or compatible DASH/HLS players.
-* **Authentication & Authorization**: endpoint authorization is enforced through tokens issued by Video Analyzer service. The tokens are constrained to a single video and are issued implicitly based on the authorization rules applied on the Client and Management APIs on a per video basis. Authorization flow is automatically handled by the Video Analyzer player widget.
-* **Requirement**: access to this set of endpoints is required for content playback through the cloud.
-
-### Client APIs
-
-* **Purpose**: exposes metadata (Title, Description, etc.) for the [Video Analyzer video resource](terminology.md#video). This enables the display of rich video objects on customer developed client applications. This metadata is leveraged by the Video Analyzer player widget and can also be leveraged directly by customer applications.
-* **Authentication and Authorization**: endpoint authorization is enforced through a combination of customer defined [Access Policies](access-policies.md) plus customer issued JWT Tokens. One or more Access Policies can be defined through the Video Analyzer management APIs. Such policies describe the scope of access and the required claims to be validated on the tokens. Access is denied if there are no access policies created in the Video Analyzer account.
-* **Requirement**: access to this endpoint is required for Video Analyzer player widget and similar customer-developed client applications to retrieve video metadata and for playback authorization.
-
-### Edge Service Integration
-
-* **Purpose**:
-
- * Exposes policies that are periodically downloaded by the Video Analyzer edge module. Such policies control basic behaviors of the edge module, such as billing and connectivity requirements.
- * Orchestration of video publishing to the cloud, including the retrieval of Azure Storage SAS URLs that allow the Video Analyzer edge module to record video data into the customer's storage account.
-* **Authentication and Authorization**: initial authentication is done through a short-lived Provisioning Token issued by the Video Analyzer management APIs. Once the initial handshake is completed, the module and service exchange a set of auto-rotating authorization keys that are used from this point forward.
-* **Requirement**: access to this endpoint is required for the correct functioning of the Video Analyzer Edge module. The edge module will stop functioning if this endpoint cannot be reached within a period of 36 hours.
-
-## Telemetry
-
-* **Purpose**: optional periodic submission of telemetry data which enables Microsoft to better understand how the Video Analyzer edge module is used and proactively identify future improvements that can be done on compatibility, performance, and other product areas.
-* **Authentication and Authorization**: authorization is based on a pre-established key.
-* **Requirement**: access to this endpoint is optional and does not interfere with the product functionality. Data collection and submission can be disabled through the module twin properties.
-
-## Associated Azure endpoints
-
-> [!NOTE]
-> The list of endpoints described in this article, is not meant to be a comprehensive list of the associated service endpoints. It is an informative list of the endpoints which are required for the normal operation of Video Analyzer. Refer to each individual Azure service documentation for a complete list of the endpoints exposed by each respective service.
-
-## Azure Storage
-
-* **Purpose**: to record audio, video, and inference data when [pipelines](pipeline.md) are configured to store video on the cloud via the [video sink](pipeline.md#video-sink) node.
-* **Authentication and Authorization**: authorization is performed by standard Azure Storage service authentication and authorization enforcement. In this case, storage is accessed through container specific SAS URLs.
-* **Requirement**: access to this endpoint is only required when a Video Analyzer edge pipeline is configured to archive the video to the cloud.
-
-## IoT Hub
-
-* **Purpose**: control and data plane for Azure IoT Hub and Edge Devices.
-* **Authentication and Authorization**: please refer to the Azure IoT Hub documentation.
-* **Requirement**: Properly configured and functioning edge device with Azure IoT Edge Runtime is required to ensure that the Azure Video Analyzer edge module operates correctly.
-
-## TLS encryption
-
-* **Encryption and Server Authentication**: all Video Analyzer endpoints are exposed through TLS 1.2 compliant endpoints.
-
-## References
-
-Public:
-
-* [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md)
-* [Understand Azure IoT Hub endpoints](../../iot-hub/iot-hub-devguide-endpoints.md)
-* [What is Azure Private Link?](../../private-link/private-link-overview.md)
-* [Azure service tags overview](../../virtual-network/service-tags-overview.md)
-
-## Next steps
-
-[Access Policies](access-policies.md)
azure-video-analyzer Ai Composition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/ai-composition-overview.md
- Title: AI compositions
-description: This article gives a high-level overview of Azure Video Analyzer support for three kinds of AI composition. The topic also provides scenario explanation for each kind of AI composition.
- Previously updated : 11/04/2021---
-# AI composition
--
-This article gives a high-level overview of Azure Video Analyzer support for three kinds of AI composition.
-
-* [Sequential](#sequential-ai-composition)
-* [Parallel](#parallel-ai-composition)
-* [Combined](#combined-ai-composition)
-
-## Sequential AI composition
-
-AI nodes can be sequentially composed. This allows a downstream node to augment inferences generated by an upstream node.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/ai-composition/sequential.svg" alt-text="Sequential AI composition":::
-
-### Key aspects
-
-* Pipeline extensions act as media passthrough nodes and can be configured such that external AI servers receive frames at different rates, formats, and resolutions. Additionally, configuration can be specified such that external AI servers receive either all frames or only frames that already contain inferences.
-* Inferences are added to the frames as they go through the different extension nodes, an unlimited number of such nodes can be added in sequence.
-* Other scenarios such as continuous video recording or event-based video recording can be combined with sequential AI composition.
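-
-To make the chaining concrete, the fragment below sketches how two pipeline extension processors might be composed sequentially, with the second node taking the first node's output as its input. This is an illustrative fragment only, not a complete topology; the node types, property names, and parameter names are assumptions based on the published sample topologies, so use those samples as the source of truth for the exact schema.
-
-```json
-"processors": [
-  {
-    "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
-    "name": "vehicleDetector",
-    "inputs": [ { "nodeName": "rtspSource" } ],
-    "endpoint": {
-      "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
-      "url": "${vehicleDetectorUrl}"
-    }
-  },
-  {
-    "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
-    "name": "licensePlateReader",
-    "inputs": [ { "nodeName": "vehicleDetector" } ],
-    "endpoint": {
-      "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
-      "url": "${licensePlateReaderUrl}"
-    }
-  }
-]
-```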
-
-
-## Parallel AI composition
-
-AI nodes can also be composed in parallel instead of in sequence. This allows independent inferences to be performed on the ingested video stream, saving ingest bandwidth on the edge.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/ai-composition/parallel.svg" alt-text="Parallel AI composition":::
-
-### Key aspects
-
-* Video can be split into an arbitrary number of parallel branches, and such a split can happen at any point after the following nodes.
-
- * RTSP source
- * Motion Detector
- * Pipeline extension
-
-## Combined AI composition
-
-Both sequential and parallel composition constructs can be combined to develop complex composable AI pipelines. This is possible because Video Analyzer pipelines allow extension nodes to be combined sequentially and/or in parallel, without limit, alongside other supported nodes.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/ai-composition/complex.svg" alt-text="Combined AI composition":::
-
--
-## Next steps
-
-[Analyze live video streams with multiple AI models using AI composition](analyze-ai-composition.md)
azure-video-analyzer Analyze Live Video Without Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/analyze-live-video-without-recording.md
- Title: Analyzing live video without recording
-description: A pipeline topology can be used to just extract analytics from a live video stream, without having to record it on the edge or in the cloud. This article discusses this concept.
- Previously updated: 11/04/2021
-# Analyzing live videos without recording
--
-## Suggested pre-reading
-
-* [Pipeline concept](pipeline.md)
-* [Pipeline extension concept](pipeline-extension.md)
-* [Event-based video recording concept](event-based-video-recording-concept.md)
-
-## Overview
-
-You can use a pipeline topology to analyze live video, without recording any portions of the video to a file or an asset. The pipeline topologies shown below are similar to the ones in the article on [Event-based video recording](event-based-video-recording-concept.md), but without a video sink node or file sink node.
-
-> [!NOTE]
-> Analyzing live videos is currently available only for the edge module and not for the cloud.
-
-### Motion detection
-
-The pipeline topology shown below consists of an [RTSP source](pipeline.md#rtsp-source) node, a [motion detection processor](pipeline.md#motion-detection-processor) node, and an [IoT Hub message sink](pipeline.md#iot-hub-message-sink) node - you can see the settings used in its [JSON representation](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/motion-detection/topology.json). This topology enables you to detect motion in the incoming live video stream and relay the motion events to other apps and services via the IoT Hub message sink node. The external apps or services can trigger an alert or send a notification to appropriate personnel.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/analyze-live-video-without-recording/motion-detection.svg" alt-text="Detecting motion in live video":::
-
-### Analyzing video using a custom vision model
-
-The pipeline topology shown below enables you to analyze a live video stream using a custom vision model packaged in a separate module. You can see the settings used in its [JSON representation](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtension/topology.json). There are other [examples](https://github.com/Azure/video-analyzer/tree/main/edge-modules/extensions) available for wrapping models into IoT Edge modules that run as an inference service.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/analyze-live-video-without-recording/motion-detected-frames.svg" alt-text="Analyzing live video using a custom vision module":::
-
-In this pipeline topology, the video input from the RTSP source is sent to an [HTTP extension processor](pipeline.md#http-extension-processor) node, which sends image frames (in JPEG, BMP, or PNG formats) to an external inference service over REST. The results from the external inference service are retrieved by the HTTP extension node, and relayed to the IoT Edge hub via IoT Hub message sink node. This type of pipeline topology can be used to build solutions for a variety of scenarios, such as understanding the time-series distribution of vehicles at an intersection, understanding the consumer traffic pattern in a retail store, and so on.
-
-> [!TIP]
-> You can use the `samplingOptions` field on the HTTP extension processor node to manage the rate at which frames are sent to the inference server (see the illustrative fragment below).
-
-An enhancement to this example is to use a motion detector processor ahead of the HTTP extension processor node. This will reduce the load on the inference service, since it is used only when there is motion activity in the video.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/analyze-live-video-without-recording/custom-model.svg" alt-text="Analyzing live video using a custom vision module on frames with motion":::
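-
-As a concrete illustration of the tip above, the fragment below shows how a `samplingOptions` block might look on the HTTP extension processor node. It is a sketch only; the field names are taken from the published sample topologies and the values are placeholders, so confirm both against the sample topology linked earlier. When a motion detector precedes the extension node, `skipSamplesWithoutAnnotation` can be set to `"true"` so that only annotated frames are forwarded.
-
-```json
-"samplingOptions": {
-  "skipSamplesWithoutAnnotation": "false",
-  "maximumSamplesPerSecond": "2"
-}
-```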
-
-## Next steps
-
-[Quickstart: Analyze a live video feed from a (simulated) IP camera using your own HTTP model](analyze-live-video-use-your-model-http.md)
azure-video-analyzer Connect Cameras To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/connect-cameras-to-cloud.md
- Title: Connecting cameras to the service
-description: This article discusses ways to connect cameras directly to Azure Video Analyzer service.
- Previously updated: 11/04/2021
-# Connect cameras to the cloud
---
-Azure Video Analyzer service allows users to connect RTSP cameras directly to the cloud in order to capture and record video, using [live pipelines](../pipeline.md). This will either reduce the computational load on an edge device or eliminate the need for an edge device completely. Video Analyzer service currently supports three different methods for connecting cameras to the cloud: connecting via a remote device adapter, connecting from behind a firewall using an IoT PnP command, and connecting over the internet without a firewall.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/connect-cameras-to-cloud/connect-cameras-to-cloud.svg" alt-text="3 different methods for connecting cameras to the cloud":::
-
-## Connect via a remote device adapter
-
-You can deploy the Video Analyzer edge module to an IoT Edge device on the same (private) network as the RTSP cameras, and connect the edge device to the internet. The edge module can now be set up as an *adapter* that enables Video Analyzer service to connect to the *remote devices* (cameras). The edge module acts as a [transparent gateway](../../../iot-edge/iot-edge-as-gateway.md) for video traffic between the RTSP cameras and the Video Analyzer service. This approach is useful in the following scenarios:
-
-* When cameras/devices need to be shielded from exposure to the internet
-* When cameras/devices do not have the functionality to connect to IoT Hub independently
-* When power, space, or other considerations permit only a lightweight edge device to be deployed on-premises
-
-The Video Analyzer edge module does not act as a transparent gateway for messaging and telemetry from the camera to IoT Hub, but only as a transparent gateway for video.
-
-## Connect behind a firewall using an IoT PnP device implementation
-
-This method allows RTSP cameras or devices to connect to Video Analyzer from behind a firewall using the [IoT Plug and Play command interface](../../../iot-develop/overview-iot-plug-and-play.md) and eliminates the need for an edge device. This method requires that a suitable IoT PnP device implementation be installed and run on the cameras or devices. Information about how to connect compatible devices from any manufacturer is available [here](connect-devices.md).
-
-## Connect over the internet (no firewall)
-
-This method should only be used for supervised proof-of-concept exercises, where it may be acceptable to allow the Video Analyzer service to access the device over the internet without a firewall.
-
-A related use case is when a module is deployed to an Azure VM to simulate an RTSP camera, as described in this [quickstart](get-started-livepipelines-portal.md).
--
-## Next Steps
-- Follow [this how-to guide](use-remote-device-adapter.md) to connect cameras via a remote device adapter
-- Follow [this how-to guide](connect-devices.md) for information on connecting devices using an IoT PnP implementation
-- Follow [this quickstart](get-started-livepipelines-portal.md) to connect cameras over the internet
azure-video-analyzer Connect Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/connect-devices.md
- Title: Connect devices to the service
-description: This article describes how to connect devices to Azure Video Analyzer
- Previously updated: 11/04/2021
-# Connect devices to Azure Video Analyzer
---
-In order to capture and record video from a device, Azure Video Analyzer service needs to establish an [RTSP](../terminology.md#rtsp) connection to it. If the device is behind a firewall, such connections are blocked, and it may not always be possible to create rules to allow inbound connections from Azure. To support such devices, you can build and install an [Azure IoT Plug and Play](../../../iot-develop/overview-iot-plug-and-play.md) device implementation, which listens to commands sent via IoT Hub from Video Analyzer and then opens a secure websocket tunnel to the service. Once such a tunnel is established, Video Analyzer can then connect to the RTSP server.
-
-## Overview
-
-This article provides high-level concepts about building an Azure IoT PnP device implementation that can enable Video Analyzer to capture and record video from a device.
-
-The application will need to:
-
-1. Run as an IoT device
-1. Implement the [IoT PnP](../../../iot-develop/overview-iot-plug-and-play.md) interface with a specific command (`tunnelOpen`)
-1. Upon receiving such a command:
- * Validate the arguments received
- * Open a secure websocket connection to the URL provided using the token provided
- * Forward the websocket bytes to the camera's RTSP server TCP connection
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/connect-devices/connect-devices.svg" alt-text="Connect devices to the cloud":::
-
-## Run as an IoT Device
-
-The Video Analyzer application will be deployed as a Video Analyzer PnP plugin. This requires using one of the [Azure IoT device SDKs](../../../iot-develop/libraries-sdks.md#device-sdks) to build your IoT PnP device implementation. Register the IoT device with your IoT Hub to get the IoT Hub Device ID and Device Connection String.
-
-### IoT Device Client Configuration
-
-* Set OPTION_MODEL_ID to `"dtmi:azure:videoanalyzer:WebSocketTunneling;1"` to support PnP queries
-* Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub
- * Connect to IoT Hub over an HTTPS proxy if configured on the IoT device
-* Register callback for `tunnelOpen` direct method
-
-## Implement the IoT PnP Interface for Video Analyzer
-
-The following [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) model describes a device that can connect to Video Analyzer.
-
-```json
-{
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:azure:videoanalyzer:WebSocketTunneling;1",
- "@type": "Interface",
- "displayName": "Azure Video Analyzer Web Socket Tunneling",
- "description": "This interface enables media publishing to Azure Video Analyzer service from a RTSP compatible device which is located behind a firewall or NAT device.",
- "contents": [
- {
- "@type": "Command",
- "displayName": "Tunnel Open",
- "name": "tunnelOpen",
- "request": {
- "@type": "CommandPayload",
- "displayName": "Parameters",
- "name": "parameters",
- "schema": {
- "@type": "Object",
- "fields": [
- {
- "displayName": "Remote Endpoint",
- "description": "The remote endpoint for the web socket tunnel.",
- "name": "remoteEndpoint",
- "schema": "string"
- },
- {
- "displayName": "Remote Authorization Token",
- "description": "The bearer token for the web socket authentication.",
- "name": "remoteAuthorizationToken",
- "schema": "string"
- },
- {
- "displayName": "Local Port",
- "description": "The local port where web socket data should be tunneled to.",
- "name": "localPort",
- "schema": "integer"
- }
- ]
- }
- }
- }
- ]
-}
-```
-
-The IoT device registers a direct method `tunnelOpen`, where the body of the request will have the parameters `remoteEndpoint`, `remoteAuthorizationToken`, and `localPort` as shown above.
-
-## Implement the direct method `tunnelOpen`
-When the `tunnelOpen` direct method is invoked by Video Analyzer service, the application needs to do the following:
-
-1. Get the available RTSP port(s) of the device
-1. Compare the `localPort` value specified in the direct method call with the available ports
- * Return **BadRequest** if no match is found (see Error Responses section below)
-1. Open a TCP connection to "(camera IP or hostname):`localPort`"
- * Return **BadRequest** if the connection fails
- * NOTE: hostname is typically **localhost**
-1. Open a web socket connection to the `remoteEndpoint` (through a proxy if configured on the device)
- * Set the HTTP "Authorization" header as "Bearer (remoteAuthorizationToken)"
- * Set the header "TunnelConnectionSource" with value "PnpDevice"
- * Set User-Agent to a suitable value that would help you identify your implementation.
- * For example, you may want to capture the architecture of the CPU, the OS, the model/make of the device.
- * Return 200 OK if the web socket connection was successful, otherwise return the appropriate error code
-1. Return response (do not block)
-1. IoT PnP device implementation starts sending TCP data bi-directionally between the websocket and RTSP server TCP connection
-
-Video Analyzer service will retry `tunnelOpen` requests on failure, so retries are not needed in the application.
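-
-The sketch below shows one possible shape of such a handler. It is a minimal illustration of the steps above under stated assumptions, not a production or official sample implementation: it assumes the `azure-iot-device` and `websocket-client` Python packages, the connection string, RTSP port list, and User-Agent value are placeholders, and the SDK keyword used to announce the model ID can vary by SDK version.
-
-```python
-# Illustrative sketch only (not Microsoft sample code).
-import socket
-import threading
-
-import websocket  # pip install websocket-client
-from azure.iot.device import IoTHubDeviceClient, MethodResponse
-
-CONNECTION_STRING = "<IoT Hub device connection string>"  # placeholder
-MODEL_ID = "dtmi:azure:videoanalyzer:WebSocketTunneling;1"
-RTSP_PORTS = {554}  # ports served by the on-device RTSP server (assumption)
-
-def pump(read, write):
-    # Copy bytes in one direction until either side closes.
-    while True:
-        data = read()
-        if not data:
-            break
-        write(data)
-
-def handle_tunnel_open(client, request):
-    params = request.payload or {}
-    port = params.get("localPort")
-    if port not in RTSP_PORTS:
-        body = {"code": "400", "target": f"localhost:{port}", "message": "Local port is not available"}
-        client.send_method_response(MethodResponse.create_from_method_request(request, 400, body))
-        return
-    try:
-        tcp = socket.create_connection(("localhost", port), timeout=10)
-    except OSError:
-        body = {"code": "400", "target": f"localhost:{port}", "message": "Could not connect to RTSP endpoint"}
-        client.send_method_response(MethodResponse.create_from_method_request(request, 400, body))
-        return
-    ws = websocket.WebSocket()
-    ws.connect(
-        params["remoteEndpoint"],
-        header={
-            "Authorization": "Bearer " + params["remoteAuthorizationToken"],
-            "TunnelConnectionSource": "PnpDevice",
-            "User-Agent": "contoso-camera/1.0 (linux; arm64)",  # placeholder
-        },
-    )
-    # Respond before pumping data; the method response must not block.
-    client.send_method_response(MethodResponse.create_from_method_request(request, 200, {}))
-    # Forward bytes in both directions; RTSP data is carried in binary web socket frames.
-    threading.Thread(target=pump, args=(ws.recv, tcp.sendall), daemon=True).start()
-    threading.Thread(target=pump, args=(lambda: tcp.recv(4096), ws.send_binary), daemon=True).start()
-
-def main():
-    # The keyword used to announce the model ID may differ across SDK versions (assumption).
-    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING, product_info=MODEL_ID)
-    client.connect()
-    # Only the tunnelOpen method is handled in this sketch; other methods are ignored.
-    client.on_method_request_received = (
-        lambda req: handle_tunnel_open(client, req) if req.name == "tunnelOpen" else None
-    )
-    input("Listening for tunnelOpen; press Enter to exit\n")
-
-if __name__ == "__main__":
-    main()
-```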
-
-### Error responses
-If the `tunnelOpen` request fails then the response body should be as follows
-
-```
-{
- "code": "<errorCode>", // Use HTTP status error codes
- "target": "<uri>", // The target URI experiencing the issue
- "message": "<Error message>", // Short error message describing issue. Do not include end user identifiable information.
-}
-```
-Examples of such error responses are:
-
-* Local port is not available as an RTSP or RTSPS port:
-  `{ "code": "400", "target": "(camera IP or hostname):{localPort}", "message": "Local port is not available" }`
-* Timeout or failure to connect to the RTSP endpoint:
-  `{ "code": "400", "target": "(camera IP or hostname):{localPort}", "message": "Could not connect to RTSP endpoint" }`
-* Timeout or error response from the web socket connect attempt:
-  `{ "code": "{WebSocket response code}", "target": "{remoteEndpoint}", "message": "{Web socket response error message}" }`
--
-## Ingestion to Video Analyzer
-In order to capture and record video to Video Analyzer, a pipeline topology with tunneling enabled must be created. From that topology, a live pipeline must be created and activated. [Instructions for this process are outlined here.](use-remote-device-adapter.md#create-pipeline-topology-in-the-video-analyzer-service)
-
-
-## Example implementation
-Contact videoanalyzerhelp@microsoft.com if you would like to implement an application on your device to connect it to Video Analyzer.
-
-## See Also
-
-[What is IoT Plug and Play?](../../../iot-develop/overview-iot-plug-and-play.md)
azure-video-analyzer Export Portion Of Video As Mp4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/export-portion-of-video-as-mp4.md
- Title: Export a portion of an Azure Video Analyzer recorded video to an MP4 file
-description: In this tutorial, you learn how to export a portion of a Video Analyzer recorded video as an MP4 file, which is stored in a Video Analyzer account. The video can be downloaded and consumed outside of the Video Analyzer account ecosystem.
- Previously updated: 11/04/2021
-# Tutorial: Export portion of recorded video as an MP4 file
---
-In this tutorial, you learn how to export a portion of video that has been recorded in an Azure Video Analyzer account. This exported portion of video is saved as an MP4 file, which can be downloaded and consumed outside of the Video Analyzer account.
-
-The topic demonstrates how to export a portion of the video using the Azure portal and a C# SDK code sample.
-
-## Suggested pre-reading
-
-Read these articles before you begin:
-
-* [Azure Video Analyzer overview](../overview.md)
-* [Video Analyzer Pipeline concepts](../pipeline.md)
-
-## Prerequisites
-
-Prerequisites for this tutorial are:
-
-* An Azure account that includes an active subscription. [Create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) for free if you don't already have one.
-* [Video Analyzer account](../create-video-analyzer-account.md).
-* Have completed [Quickstart: Detect motion in a (simulated) live video, record the video to the Video Analyzer account](../edge/detect-motion-record-video-clips-cloud.md) or any Video Analyzer pipeline that records to a video sink.
-
-## Overview
-
-Video Analyzer can record videos from an RTSP source. These videos are recorded in a segmented archive and stored in the Video Analyzer account. The segmented archive format allows unbounded duration of video recording. However, in some cases it is necessary to save a portion of video as an MP4 so that it can be individually archived, downloaded, or played outside of the Video Analyzer ecosystem.
-
-In this tutorial, you will learn:
-
-* About batch pipeline topologies and batch pipeline jobs.
-* How to create a batch topology.
-* How to create a batch pipeline job from a Video Analyzer video archive. The job produces an MP4 file that covers a specified time range (up to 24 hours).
-
-## Pipeline topology of **batch** kind
-
-A pipeline topology of batch kind enables you to describe how recorded video should be processed for export. You tailor the topology to your needs by using three interconnected nodes. A pipeline topology of batch kind is the base that is used for pipeline jobs. A pipeline job is an individual instance of a pipeline topology of batch kind. The pipeline job imports the recorded Video Analyzer video and saves it to the Video Analyzer's storage account as a downloadable MP4 file. The pipeline topology of batch kind uses a [video source node](../pipeline.md#video-source) that connects to an [encoder processor node](../pipeline.md#encoder-processor), which then connects to a [video sink node](../pipeline.md#video-sink), as illustrated in the sketch after the note below.
-
-> [!NOTE]
-> For more information about sources, processors, and sinks, see [sources, processors, and sinks](../pipeline.md#sources-processors-and-sinks). For more information on pipeline jobs, see [pipeline jobs](../pipeline.md#batch-pipeline)
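-
-For orientation, the fragment below sketches the three-node chain that a topology of batch kind describes, using the parameter names you supply in the portal steps that follow (`sourceVideoName`, `videoSourceTimeSequenceParameter`, `exportedVideoName`). It is illustrative only; the node types and property names are recalled from the published "Video export" sample topology and may not match it exactly, so treat the sample loaded in the portal as the source of truth.
-
-```json
-"sources": [
-  {
-    "@type": "#Microsoft.VideoAnalyzer.VideoSource",
-    "name": "videoSource",
-    "videoName": "${sourceVideoName}",
-    "timeSequences": {
-      "@type": "#Microsoft.VideoAnalyzer.VideoSequenceAbsoluteTimeMarkers",
-      "ranges": "${videoSourceTimeSequenceParameter}"
-    }
-  }
-],
-"processors": [
-  {
-    "@type": "#Microsoft.VideoAnalyzer.EncoderProcessor",
-    "name": "encoderProcessor",
-    "inputs": [ { "nodeName": "videoSource" } ],
-    "preset": { "@type": "#Microsoft.VideoAnalyzer.EncoderSystemPreset", "name": "SingleLayer_1080p_H264_AAC" }
-  }
-],
-"sinks": [
-  {
-    "@type": "#Microsoft.VideoAnalyzer.VideoSink",
-    "name": "videoSink",
-    "inputs": [ { "nodeName": "encoderProcessor" } ],
-    "videoName": "${exportedVideoName}"
-  }
-]
-```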
-
-## [Azure portal](#tab/azure-portal)
-
-### Create a pipeline job (from Videos)
-
-1. In the Azure portal navigate to your Video Analyzer account.
-1. Select **Videos** under the **Video Analyzer** section. Then select the video from which you want to export a portion.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/export-portion-of-video-as-mp4/video-analyzer-video.png" alt-text="Image of Azure Video Analyzer's Video Analyzer menu section highlighting the Videos selection.":::
-1. On the Video widget player blade click **Create Job** at the top.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/export-portion-of-video-as-mp4/create-job.png" alt-text="Image of Azure Video Analyzer's video widget blade highlighting the create a job selection.":::
-1. In the **Create Job** fly-out blade, select the following:
-
- 1. Select **Create from sample** for the `Batch topology`.
- 1. Select the **Video export** sample topology in the `Batch topology name` drop-down list.
- 1. Enter a name in the **Batch topology name** field to save the topology as.
-
- > [!NOTE]
- > The sample topology will be saved as the name entered above. You can re-use the name next time a video is to be exported.
-1. In the **Name your job** section enter a job name and a description (optional).
-1. In the **Define parameters** section specify the following:
-
- 1. In the **sourceVideoName** field, enter the name of the Video Analyzer recorded video.
- 1. In the **videoSourceTimeSequenceParameter** field, select the start and end dates by clicking on the calendar icon and selecting the dates for each value. In the time fields, enter the start and end times that are used by the pipeline job for creating the video clip.
-
- > [!NOTE]
- > The time value for a given recorded video is displayed in the upper right-hand side of the video widget player. This time value is shown in the image below with a red box around it. The calendar icon is also shown in the image below with a green box around it.
- 1. Enter a name for the exported MP4 file in the **exportedVideoName** field.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/export-portion-of-video-as-mp4/video-widget-job-creation.png" alt-text="Image of Video Analyzer's video player widget and pipeline jobs fly-out blade highlighting the video time stamp in a red box and a green box around the calendar icon.":::
-1. Click **Create** at the bottom of the **Create a job** fly-out blade.
-
- To monitor the Pipeline Job, navigate to the **Batch Jobs tab**.
-1. Under the **Video Analyzer** section select **Batch**.
-1. Click on the **Jobs** tab at the top of the Batch blade.
-
- The Batch Job will enter a processing state, then upon successful completion it will change state to `Completed`. To view the associated MP4 video file:
-1. Click on **Videos** under the **Video Analyzer** section.
-1. Click on the video name that matches the Batch Jobs name used previously in step 5.
-
-The Video Widget player should start playing the MP4 file. To download the MP4 file, click **Download video** at the top of the blade. This opens the MP4 file in a new browser tab. Right-click on the video and click **Save as**.
-
-### Cancel a pipeline job
-
-Once a pipeline job has entered the processing state the pipeline job can be canceled. To cancel a pipeline job:
-
-1. Navigate to the Video Analyzer account and select **Batch** under **Video Analyzer** section.
-1. In the Batch blade select the **Jobs** tab at the top.
-1. Under the jobs tab you will find a list of jobs that are in different states. Find the job you wish to cancel in the processing state and select **Cancel** on the right-hand side of the Batch pipeline Jobs tab and then click **Yes**.
-
- > [!NOTE]
- > A failed pipeline job cannot be canceled.
-
-### Delete a pipeline job
-
-Once a pipeline job has entered the completed or failed state the pipeline job can be deleted. To delete a pipeline job:
-
-1. Navigate to the Video Analyzer account and select **Batch** under **Video Analyzer** section.
-1. In the Batch blade select the **Jobs** tab at the top.
-1. Under the jobs tab, you will find a list of jobs in different states. Find the job you wish to delete (in the canceled, completed, or failed state), select **Delete** on the right-hand side of the Jobs tab, and then click **Delete**.
-
-### Delete a pipeline topology of batch kind
-
-In order to delete a pipeline topology of batch kind all pipeline jobs that are associated with the pipeline topology must be deleted. To delete a pipeline topology of batch kind:
-
-1. Navigate to the Video Analyzer account.
-2. Click on **Batch** under the **Video Analyzer** section.
-3. Under the topologies tab locate your pipeline topology of batch kind to delete.
-4. Click '**...**' at the right-hand side of the pipeline topology of batch kind.
-5. Click **Delete topology**.
-
- > [!NOTE]
- > All pipeline jobs must be deleted from a pipeline topology of batch kind before a pipeline topology of batch kind can be deleted.
-
-### Clean up resources
-
-If you want to try other quickstarts or tutorials, keep the resources that you created. Otherwise, go to the Azure portal, go to your resource groups, select the resource group where you ran this quickstart, and delete all the resources.
-
-## [C# SDK sample](#tab/csharp-sdk-sample)
-
-In this tab, learn how to export a portion of recorded video as an MP4 file using Video Analyzer's [C# SDK sample code](https://github.com/Azure-Samples/video-analyzer-csharp).
-
-### Additional prerequisites
-
-Complete the following prerequisites to run the [C# SDK sample code](https://github.com/Azure-Samples/video-analyzer-csharp).
-
-1. Get your Azure Active Directory [Tenant ID](../../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
-1. Register an application with Microsoft identity platform to get app registration [Client ID](../../../active-directory/develop/quickstart-register-app.md#register-an-application) and [Client secret](../../../active-directory/develop/quickstart-register-app.md#add-a-client-secret).
-1. [Visual Studio Code](https://code.visualstudio.com/) on your development machine with following extensions -
- * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
- * [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp).
-1. [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1) on your development machine.
-1. A recorded video in the Video Analyzer account, or an [RTSP camera](../quotas-limitations.md#supported-cameras-1) accessible over the internet. Alternatively, you can deploy an [RTSP camera simulator](get-started-livepipelines-portal.md#deploy-rtsp-camera-simulator).
-
-### Get the sample code
-
-* Clone the Video Analyzer [C# samples repository](https://github.com/Azure-Samples/video-analyzer-csharp).
-* Open your local clone of this git repository in Visual Studio Code.
-* The `src\video-export` folder contains a .NET Core console app to export a portion of a recorded video as an MP4 file.
-* Navigate to `src\video-export\Program.cs`. Provide values for the following variables & save the changes.
-
-| Variable | Description |
-|-|--|
-| SubscriptionId | Provide Azure subscription ID |
-| ResourceGroup | Provide resource group name |
-| AccountName | Provide Video Analyzer account name |
-| TenantId | Provide tenant ID |
-| ClientId | Provide app registration client ID |
-| Secret | Provide app registration client secret |
-| AuthenticationEndpoint | Provide the authentication endpoint (example: https://login.microsoftonline.com) |
-| ArmEndPoint | Provide the ARM endpoint (example: https://management.azure.com) |
-| TokenAudience | Provide the token audience (example: https://management.core.windows.net) |
-| PublicCameraSourceRTSPURL *(optional)* | Provide RTSP source url |
-| PublicCameraSourceRTSPUserName *(optional)* | Provide RTSP source username |
-| PublicCameraSourceRTSPPassword *(optional)* | Provide RTSP source password |
-| SourceVideoName | Provide source video name for export |
-
-> [!NOTE]
-> The sample code will first activate a live pipeline to create a video recording. If you already have a video recording in your Video Analyzer account (should be of type `archive`), refer to [Use existing video section](https://github.com/Azure-Samples/video-analyzer-csharp/tree/main/src/video-export#use-existing-video) for the necessary code changes.
-
-### Run the sample program to create a batch pipeline
-
-* Start a debugging session in VS Code (If this is the default project, hit F5 key or see instructions to [run the sample](https://github.com/Azure-Samples/video-analyzer-csharp/tree/main/src/video-export#running-the-sample)). The TERMINAL window will start displaying messages as you run the program.
-* If the program runs successfully, a batch pipeline is created and activated. You will start seeing some messages printed in the TERMINAL window regarding creation of the topology and pipeline.
-* If the job is successful, log in to the [Azure portal](https://portal.azure.com). Go to the Video Analyzer account used for this article.
-* Click on the Videos blade and choose the video resource created by the pipeline job. The default video name in the sample code is `batch-pipeline-exported-video`. Click on the video, and it will trigger a download and playback in the browser window. Alternatively, you can download the file.
-
-> [!NOTE]
-> For detailed instructions on customizing pipeline parameters such as exported video name, refer to **[readme.md file for video export](https://github.com/Azure-Samples/video-analyzer-csharp/tree/main/src/video-export)**.
-
-### Clean-up resources
-
-In the VS Code TERMINAL window, press Enter to deactivate the pipeline and clean up the pipeline and topology created in this article.
-
-## Next steps
-
-* [Connect cameras directly to the cloud](./connect-cameras-to-cloud.md) in order to capture and record video, using [cloud pipelines](../pipeline.md).
-* Connect cameras to Video Analyzer's service via the [Video Analyzer edge module acting as a transparent gateway](./use-remote-device-adapter.md) for video packets via RTSP protocol.
azure-video-analyzer Get Started Livepipelines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/get-started-livepipelines-portal.md
- Title: Get started with live pipelines using the portal
-description: This quickstart walks you through the steps to capture and record video from an RTSP camera using live pipelines in Azure Video Analyzer service.
- Previously updated: 12/07/2021
-# Quickstart: Get started with Video Analyzer live pipelines in the Azure portal
---
-This quickstart walks you through the steps to capture and record video from a Real Time Streaming Protocol (RTSP) camera using live pipelines in Azure Video Analyzer service.
-You will create a Video Analyzer account and its accompanying resources by using the Azure portal. You will deploy an RTSP camera simulator if you don't have access to an actual RTSP camera (one that can be made accessible over the internet). You'll then deploy the relevant Video Analyzer resources to record video to your Video Analyzer account.
-
-The steps outlined in this document apply to cameras that are made accessible over the internet and not shielded behind a firewall. The following diagram graphically represents the live [pipeline](../pipeline.md) that you will deploy to your Video Analyzer account.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/live-pipeline.svg" alt-text="Representation of a live pipeline on the cloud":::
-
-## Prerequisites
-- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
- [!INCLUDE [the video analyzer account and storage account must be in the same subscription and region](../includes/note-account-storage-same-subscription.md)]
-- Either an RTSP camera accessible over the internet, or an Azure Linux VM (with admin privileges) to host an RTSP camera simulator
-## Sample Architecture - Recording video from a camera over the internet
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/public-camera-to-cloud-live-pipeline-arch.png" alt-text="Diagram of a sample architecture of a public camera video feed integrating with Video Analyzer's live pipeline that captures videos on the cloud.":::
-
-## RTSP camera
-
-You will need access to an RTSP-capable camera (see [supported cameras](../quotas-limitations.md)). The camera should be configured to encode video with a maximum bitrate under 3 Mbps. Make a note of this maximum bitrate setting. Further, the RTSP server on this camera needs to be accessible over the public internet. If you are able to use such a camera, then you can skip to the Create Azure resources section. Alternatively, you can deploy an RTSP camera simulator as described in the section below.
-
-## Deploy RTSP camera simulator
-
-This section shows you how to deploy an RTSP camera simulator on Azure Linux VM, running 'Ubuntu Server 18.04' operating system. This simulator makes use of the [Live555 Media Server](http://www.live555.com/mediaServer/).
-
-> [!NOTE]
-> References to third-party software are for informational and convenience purposes only. Microsoft does not endorse nor provide rights for the third-party software. For more information, see [Live555 Media Server](http://www.live555.com/mediaServer/).
-
-> [!WARNING]
-> Please note that this RTSP camera simulator endpoint is exposed over the internet and hence will be accessible to anyone who knows the RTSP URL.
-
-**Deployment steps:**
-1. Deploy a Standard_D2s_v3 series Azure Linux VM running the 'Ubuntu Server 18.04' operating system. See [here](../../../virtual-machines/linux/quick-create-portal.md) for VM creation steps; you don't have to install the web server mentioned in the linked article. Also allow the SSH port in the deployment wizard so that you can connect to the VM using an SSH connection.
-1. Enable inbound connections for RTSP protocol. In the Azure portal, open the management pane for the Linux VM you created above.
-
- 1. Click on Networking - you will see the blade open to the inbound port rules for the network security group (NSG) that was created for you to support inbound SSH connections.
- 1. Click on Add inbound port rule to add a new one
- 1. In the pane that opens up, change Destination port ranges to 554. Choose a Name for the rule, such as "RTSP". Keep all other values as default. See [here](../../../virtual-machines/windows/nsg-quickstart-portal.md) for more details.
-1. Install Docker on the VM using the instructions [here](https://docs.docker.com/engine/install/ubuntu/); only follow the steps up to verifying the Docker installation by running the 'hello-world' image.
-1. Connect to your VM, for example using SSH. From the terminal window, create a local folder such as 'localmedia' to host media files. This local folder on the VM will be mapped into the RTSP media server container.
-1. Copy an MKV file used to simulate the camera feed as follows:
-
- ```
- cd localmedia
- wget https://avamedia.blob.core.windows.net/public/camera-1800s.mkv
- ```
-1. Start the RTSP server on the VM using the pre-built container image as follows:
-
- ```
- sudo docker run -d -p 554:554 -v ${PWD}:/live/mediaServer/media mcr.microsoft.com/ava-utilities/rtspsim-live555:1.2
- ```
-1. Once the RTSP server is running, clients can now connect to it via an RTSP URL:
-
- - Go to 'Overview' page of your VM in Azure portal and note down the value of 'Public IP address'
-
-   - The RTSP URL is `rtsp://{Public IP address}:554/media/camera-1800s.mkv`. It can be tested with a desktop player such as VLC, or from the command line as shown below.
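-
-As an optional quick check, you can probe the stream from any machine that has FFmpeg installed. This is a sketch only: it assumes the default port and file name used above, and requires that port 554 on the VM is reachable from the machine running the command.
-
-```
-ffprobe -rtsp_transport tcp "rtsp://{Public IP address}:554/media/camera-1800s.mkv"
-```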
-
-## Create Azure resources
-
-The next step is to create the required Azure resources (Video Analyzer account, storage account and user-assigned managed identity).
-
-### Create a Video Analyzer account in the Azure portal
-
-1. Sign in at the [Azure portal](https://portal.azure.com/).
-1. On the search bar at the top, enter **Video Analyzer**.
-1. Select **Video Analyzers** under **Services**.
-1. Select **Add**.
-1. In the **Create Video Analyzer account** section, enter these required values:
-
- - **Subscription**: Choose the subscription that you're using to create the Video Analyzer account.
- - **Resource group**: Choose a resource group where you're creating the Video Analyzer account, or select **Create new** to create a resource group.
- - **Video Analyzer account name**: Enter a name for your Video Analyzer account. The name must be all lowercase letters or numbers with no spaces and 3 to 24 characters in length.
- - **Location**: Choose a location to deploy your Video Analyzer account (for example, **West US 2**).
- - **Storage account**: Create a storage account. We recommend that you select a [standard general-purpose v2](../../../storage/common/storage-account-overview.md#types-of-storage-accounts) storage account.
- - **User identity**: Create and name a new user-assigned managed identity.
-1. Select **Review + create** at the bottom of the form.
-
-### Deploy a live pipeline
-
-### [Azure portal](#tab/portal)
-Once the Video Analyzer account is created, you can go ahead with next steps to create a live pipeline topology and a live pipeline.
-1. Go to the Video Analyzer account, locate the **Live** menu item at the bottom left, and select it.
-1. In the Topologies pane, select the **Create topology** option at the top to create a live topology. Follow the portal wizard steps to create a live pipeline topology.
-
- - **Create a pipeline topology** wizard will appear on the portal
- - Select **Try sample topologies**-> select **Live capture, record, and stream from RTSP camera** topology-> Select 'Proceed' on **Load sample topology** dialog box.
- - The wizard to create the live pipeline topology will be displayed, showing RTSP source node connected to a Video sink node.
- - Enter the required fields to create topology:
-
- **Topology name** - Enter the name for the topology
- **Description** (optional) - Brief description of the topology
- **Kind** (prepopulated as 'Live')
- - Select the **RTSP source** node, then set **Transport** property value as TCP
- - Select **Save** with default configuration for rest of the properties
-1. Next step is to create a live pipeline using the topology created in previous step.
-
- Select **Pipelines** -> Select **Create pipeline** -> then select the live pipeline topology created in the previous step to create a pipeline. After selecting the topology, click **Create**.
- - **Create a live pipeline** wizard will appear on the portal. Enter the required fields:
-
- **Live pipeline name** - Use a unique name; alphanumeric characters and dashes are allowed
- **Bitrate** - The maximum capacity in Kbps that is reserved for the live pipeline; the allowed range is 500 Kbps to 3000 Kbps. Use the default of 1000 for the RTSP camera simulator's camera-1800s.mkv file (this value should match the sample video file used).
- **rtspUserNameParameter**, **rtspPasswordParameter** - Set dummy values for these fields if you are using the RTSP camera simulator; otherwise, enter the authentication credentials for the actual RTSP camera stream
- **rtspUrlParameter** - Use `rtsp://<VMpublicIP>:554/media/camera-1800s.mkv` for the RTSP camera simulator; otherwise, use the actual RTSP camera stream URL
- - **videoNameParameter** - Unique name for the target video resource to be recorded. Note: use a unique video resource for each camera (or MKV file)
- - Select **Create** and you will see a pipeline is created in the pipeline grid on the portal.
- Select the live pipeline created in the grid, then select the **Activate** option toward the right of the pane to activate the live pipeline. This will start your live pipeline and begin recording the video.
-1. Now you will be able to see the video resource under the Video Analyzer account -> **Videos** pane in the portal. Its status will indicate **Recording** because the pipeline is active and recording the live video stream.
-1. After a few seconds, select the video and you will be able to see the [low latency stream](../viewing-videos-how-to.md).
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/camera-1800s-mkv.png" alt-text="Diagram of the recorded video captured by live pipeline on the cloud.":::
-
- > [!NOTE]
-> If you are using an RTSP camera simulator, it's not possible to accurately determine end-to-end latency. Further, after the RTSP camera simulator reaches the end of the MKV file, it will stop. The live pipeline will attempt to reconnect and after a while, the simulator will restart the stream from the beginning of the file. If you let this live pipeline run for many hours, you will see gaps in the video recording whenever the simulator stops and restarts.
-1. If necessary, refer to the Activity log to quickly verify your deployment operations. Refer [here](./monitor-log-cloud.md) for monitoring and event logs.
-1. To deactivate the pipeline recording, go to your Video Analyzer account. On the left panel, select **Live** -> **Pipelines**, select the pipeline to be deactivated, and then select **Deactivate** in the pipeline grid; this stops the recording.
-1. You can also continue to delete the pipeline & topology if they are not needed.
-
-**Clean up resources**
-
-If you want to try other quickstarts or tutorials, keep the resources that you created. Otherwise, go to the Azure portal, go to your resource groups, select the resource group where you ran this quickstart and delete all the resources.
-
-### [C# SampleCode](#tab/SampleCode)
-In this tab, learn how to deploy a live pipeline using Video Analyzer's [C# SDK sample code](https://github.com/Azure-Samples/video-analyzer-csharp).
-
-### Prerequisites
-- Retrieve your Azure Active Directory [Tenant ID](../../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
- - Register an application with Microsoft identity platform to get app registration [Client ID](../../../active-directory/develop/quickstart-register-app.md#register-an-application) and [Client secret](../../../active-directory/develop/quickstart-register-app.md#add-a-client-secret).
-- [Visual Studio Code](https://code.visualstudio.com/) on your development machine with following extensions:
- * [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp).
-- [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1) on your development machine.
-### Get the sample code
-- Clone the Video Analyzer [C# samples repository](https://github.com/Azure-Samples/video-analyzer-csharp).
-- Open your local clone of this git repository in Visual Studio Code.
-- The src\cloud-video-processing\capture-from-rtsp-camera folder contains a C# console app for capturing and recording live video from an RTSP-capable camera accessible over the internet.
-- Navigate to `src\video-export\Program.cs`. Provide values for the following variables & save the changes.
-| Variable | Description |
-|-|--|
-| SubscriptionId | Provide Azure subscription ID |
-| ResourceGroup | Provide resource group name |
-| AccountName | Provide Video Analyzer account name |
-| TenantId | Provide tenant ID |
-| ClientId | Provide app registration client ID |
-| Secret | Provide app registration client secret |
-| AuthenticationEndpoint | Provide the authentication endpoint (example: https://login.microsoftonline.com) |
-| ArmEndPoint | Provide the ARM endpoint (example: https://management.azure.com) |
-| TokenAudience | Provide the token audience (example: https://management.core.windows.net) |
-| PublicCameraSourceRTSPURL | Provide RTSP source url. For RTSP camera simulator, use rtsp://[VMpublicIP]:554/media/camera-1800s.mkv |
-| PublicCameraSourceRTSPUserName | Provide RTSP source username |
-| PublicCameraSourceRTSPPassword | Provide RTSP source password |
-| PublicCameraVideoName | Provide unique video name to capture live video from this RTSP source|
-
-### Run the sample program
-- Start a debugging session in VS Code. If this project is not set as the default, you can set it as the default project to run when you press F5 by modifying the files in the .vscode folder:
- - launch.json - Update the "program" and "cwd" to launch PublicCameraPipelineSampleCode.
- - tasks.json - Update "args" to point to PublicCameraPipelineSampleCode.csproj.
-- Alternatively, go to the TERMINAL window in Visual Studio Code and navigate (using cd 'path') to src\cloud-video-processing\ingest-from-rtsp-camera. Type the commands **dotnet build** and **dotnet run** to compile and run the program, respectively.
-- You will start seeing some messages printed in the TERMINAL window regarding creation of the topologies and pipelines. If the console app runs successfully, a live pipeline is created and activated. A code walkthrough is available [here](https://github.com/Azure-Samples/video-analyzer-csharp/tree/main/src/cloud-video-processing/capture-from-rtsp-camera).
-- Now you can go to the Azure portal to play the recorded video under the Video Analyzer account -> Videos pane. Its status will indicate Recording because the pipeline is active and recording the live video stream.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/camera-1800s-mkv.png" alt-text="Diagram of the recorded video captured by live pipeline on the cloud.":::
-- The console TERMINAL window pauses after this step so that you can examine the program's output and see the recorded video in the portal; it waits for user input before proceeding.
-> [!NOTE]
-> If you are using an RTSP camera simulator, it's not possible to accurately determine end-to-end latency. Further, after the RTSP camera simulator reaches the end of the MKV file, it will stop. The live pipeline will attempt to reconnect and after a while, the simulator will restart the stream from the beginning of the file. If you let this live pipeline run for many hours, you will see gaps in the video recording whenever the simulator stops and restarts.
--- **Clean up resources**
-In the terminal window, pressing Enter will deactivate the pipeline, then delete the pipeline and the topology deployed earlier. The program calls the CleanUpResourcesAsync() method to clean up the deployed resources.
---
-## Next steps
-- Learn more about managing a video's [retention policy](../manage-retention-policy.md)
-- Try out different MKV sample files for the media simulator from [here](https://github.com/Azure/video-analyzer/tree/main/media); the bitrate of the sample file should match the pipeline setup.
-- Learn more about [Monitoring & logging for cloud pipelines](./monitor-log-cloud.md).
azure-video-analyzer Monitor Log Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/monitor-log-cloud.md
- Title: Monitoring and logging the service
-description: This article provides an overview of monitoring and logging in Azure Video Analyzer service.
- Previously updated: 11/04/2021
-# Monitor and log
-
-![cloud icon](media/env-icon/cloud.png)
-Alternatively, check out [monitor and log on the edge](../edge/monitor-log-edge.md).
----
-In this article, you'll learn about events and logs generated by Azure Video Analyzer service. You'll also learn how to consume the logs that the service generates and how to monitor the service events.
-
-## Taxonomy of events
-
-The diagram below represents the common taxonomy used for the events or telemetry data emitted by the Video Analyzer service:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/event-taxonomy-cloud.png" alt-text="Diagram that shows the taxonomy of events.":::
-
-## Event schema
-
- Events generated by Video Analyzer service consist of system properties, application properties and a body.
-
-### Common properties
-
-Every event has a set of common properties:
-
-| Property | Property type | Data type | Description |
-| - | - | | |
-| `trace-id` | system | guid | Unique event ID. |
-| `resourceId` | applicationProperty | string | ARM path for the Azure Video Analyzer account. |
-| `subject` | applicationProperty | string | Subpath of the entity emitting the event. |
-| `time` | applicationProperty | string | Time the event was generated. |
-| `category` | system | string | Audit, Operational, Diagnostics. |
-| `operationName`| applicationProperty| string | Event type identifier (See the following section). |
-| `level` | system | string | Event level (Informational, Warning, Error, Critical). |
-| `body` | body | object | Particular event data. |
-| `operationVersion` | system | string | Event Data version {Major}.{Minor} |
-
-## Event types
-Video Analyzer service emits following types of event data:
-
-**Operational:** Events generated by the actions of a user or during the execution of a [pipeline](../pipeline.md).
-* Volume: Expected to be low (a few times a minute, or even less). Examples:
-
-| Event | Level | Short Description |
-| - | - | |
-|RecordingStarted |Informational| Media recording started|
-|RecordingAvailable |Informational| Media recording available|
-|RecordingStopped |Informational| Media recording stopped|
-|PipelineStateChanged |Informational| State of pipeline changed|
-
- *Sample operational event*
-
-```json
-{
- "time": "2021-10-06T21:19:36.0988630Z",
- "resourceId": "/SUBSCRIPTIONS/{SUBID}/RESOURCEGROUPS/{RGNAME}/PROVIDERS/MICROSOFT.MEDIA/VIDEOANALYZERS/{NAME}",
- "region": "westcentralus",
- "category": "Operational",
- "operationName": "Microsoft.VideoAnalyzer.Operational.RecordingStarted",
- "operationVersion": "1.0",
- "level": "Informational",
- "correlationId": "c7887efd-0043-4ada-aa3d-9a411e612497",
- "traceContext": "{\n \"traceId\": \"e74a9d4c-c4b9-4024-9acf-06be76d978ad\"\n}",
- "properties": {
- "subject": "/livePipelines/livepipeline/sinks/videoSink",
- "body": {
- "type": "video",
- "location": "/videos/livetest1",
- "startTime": "2021-10-06T21:19:32.520Z"
- }
- }
-}
-```
-
-**Diagnostics:** Events that help to diagnose problems and/or performance.
- * Volume: Can be high (several times a minute), which can also impact storage cost. It is recommended to enable these events only when diagnosis or troubleshooting is needed. Examples:
-
-| Event | Level | Short Description |
-| -- | -| |
-|AuthenticationError|Error|Server/client authentication error|
-|AuthorizationError | Error| Server/client authorization error|
-|FormatError | Error| Packaging, format, or encoding issues|
-|MediaSessionEstablished| Informational| SDP or other session information|
-|NetworkError | Error| DNS, network error|
-|ProtocolError | Error| RTSP or any other protocol error|
-|StorageError | Error| Storage read/write error|
-|RtspPlaybackSessionEstablished| Informational| RTSP playback session is established|
-|RtspPlaybackSessionClosed| Informational| RTSP playback session is closed|
-
-*Sample diagnostic event*
-
-```json
-{
- "time": "2021-10-06T21:19:34.1290000Z",
- "resourceId": "/SUBSCRIPTIONS/{SUBID}/RESOURCEGROUPS/{RGNAME}/PROVIDERS/MICROSOFT.MEDIA/VIDEOANALYZERS/{NAME}",
- "region": "westcentralus",
- "category": "Diagnostics",
- "operationName": "Microsoft.VideoAnalyzer.Diagnostics.MediaSessionEstablished",
- "operationVersion": "1.0",
- "level": "Informational",
- "traceContext": "{\n \"traceId\": \"b03cabe2-b9d1-4be7-9770-4ac9e4fdc012\"\n}",
- "properties": {
- "subject": "/livePipelines/livepipesample/sources/rtspSource",
- "body": {
- "sdp": "SDP:\nv=0\r\no=- 0 0 IN IP4 127.0.0.1\r\ns=No Name\r\nt=0 0\r\na=tool:libavformat 58.76.100\r\nm=video 0 RTP/AVP 96\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1; sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX; profile-level-id=4D0029\r\na=control:streamid=0\r\n"
- }
- }
-}
-
-```
-**Audit:** Events used to log API access.
- * Volume: Low; it is recommended to enable these events only when auditing is needed.
-
-*Sample audit event*
-
-```json
-{
- "time": "2021-10-07T23:53:31.6792370Z",
- "resourceId": "/SUBSCRIPTIONS/{SUBID}/RESOURCEGROUPS/{RGNAME}/PROVIDERS/MICROSOFT.MEDIA/VIDEOANALYZERS/{NAME}",
- "region": "westcentralus",
- "category": "Audit",
- "operationName": "Microsoft.VideoAnalyzer.Audit.ResourceGet",
- "level": "Warning",
- "uri": "https://{GUID}.api.{region}.videoanalyzer.azure.net/videos/batchjobsinknode",
- "resultType": "Failed",
- "resultSignature": "403",
- "identity": [
- {
- "alg": "RS256",
- "kid": "{KID}",
- "typ": "JWT"
- },
- {
- "sub": "livetest1",
- "aud": [ "https://{GUID}.streaming.{region}.videoanalyzer.azure.net/{GUID}", "wss://{GUID}.rtsp-tunnel.{region}.videoanalyzer.azure.net/{GUID}" ],
- "exp": 1633410145,
- "iss": "https://{region}.videoanalyzer.azure.net/"
- }
- ],
- "traceContext": "{\n \"traceId\": \"4bb0dcf5-5c6d-4aa3-8c03-3f3d7e2c6210\"\n}",
- "properties": { "subject": "/videos/batchjobsinknodesample" }
-}
-
-```
-## Metrics
-
-The Video Analyzer service generates metrics for [pipelines](../pipeline.md). You can access the metrics using the Azure portal, by navigating to the Monitoring section from the management pane of your Video Analyzer account.
-
-| Metric name | Type | Dimension | Description |
-| - | - | -- | |
-| Ingress Bytes | Counter | Pipeline Kind, State, Topology Name | The total number of bytes received by a pipeline. Only supported for RTSP sources. |
-| Pipelines | Counter | Pipeline Kind, State, Topology Name | Provides the pipelines in different states. Emitted at a regular interval. |
-
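-If you want to pull these metrics programmatically instead of viewing the portal charts, the Azure CLI sketch below shows one way to query them. The resource ID is a placeholder, and the metric name passed to the CLI (`IngressBytes` here) is an assumption that should be confirmed against the metric names shown in the portal.
-
-```
-# List ingress bytes for a Video Analyzer account, aggregated per minute (illustrative).
-az monitor metrics list \
-  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Media/videoAnalyzers/<account-name>" \
-  --metric "IngressBytes" \
-  --interval PT1M \
-  --aggregation Total
-```
-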
-## Monitoring of events
-
-You can save the events generated by the Video Analyzer service to your storage account and consume them using Azure monitor.
-
-**Azure Monitor events collection**
-
-Video Analyzer service currently supports writing telemetry events to a storage account and using Azure Monitor to consume those events. By using [Azure Monitor](../../../azure-monitor/overview.md) to consume the events generated by the service, customers get a built-in monitoring experience.
-
-Follow the steps to enable monitoring and events collection with Azure Monitor:
-* On Azure portal navigate to **Monitoring** section of your Video Analyzer account then select **Diagnostic settings**.
-* Click on **Add Diagnostic setting** to enable the collection of the following logs:
- * Operational
- * Diagnostics
- * Audit
-* Select the log categories that you would like to enable and the corresponding retention period.
-* For destination details, select **Archive to a storage account** and specify the storage account where these event logs will be stored.
- > [!IMPORTANT]
- > Video Analyzer service currently does not support sending diagnostics to destinations other than Azure storage.
-
-* Click **Save** after configuring the diagnostic settings
-* To access the diagnostic logs, navigate to Storage Explorer, then expand your storage account and you will see blob containers. The 'insights-logs-category' container will have logs in a JSON file format.
-* Download the desired log file; the content of the downloaded log file will look similar to the sample below:
-
-```json
-{
- "time": "2021-10-06T21:19:36.0988630Z",
- "resourceId": "/SUBSCRIPTIONS/{SUBID}/RESOURCEGROUPS/{RGNAME}/PROVIDERS/MICROSOFT.MEDIA/VIDEOANALYZERS/{NAME}",
- "region": "westcentralus",
- "category": "Operational",
- "operationName": "Microsoft.VideoAnalyzer.Operational.RecordingStarted",
- "operationVersion": "1.0",
- "level": "Informational",
- "correlationId": "c7887efd-0043-4ada-aa3d-9a411e612497",
- "traceContext": "{\n \"traceId\": \"e74a9d4c-c4b9-4024-9acf-06be76d978ad\"\n}",
- "properties": {
- "subject": "/livePipelines/livepipesample/sinks/videoSink",
- "body": {
- "type": "video",
- "location": "/videos/livetest1",
- "startTime": "2021-10-06T21:19:32.520Z"
- }
- }
-}
-{
- "time": "2021-10-06T21:19:34.1290000Z",
- "resourceId": "/SUBSCRIPTIONS/{SUBID}/RESOURCEGROUPS/{RGNAME}/PROVIDERS/MICROSOFT.MEDIA/VIDEOANALYZERS/{NAME}",
- "region": "westcentralus",
- "category": "Diagnostics",
- "operationName": "Microsoft.VideoAnalyzer.Diagnostics.MediaSessionEstablished",
- "operationVersion": "1.0",
- "level": "Informational",
- "traceContext": "{\n \"traceId\": \"b03cabe2-b9d1-4be7-9770-4ac9e4fdc012\"\n}",
- "properties": {
- "subject": "/livePipelines/livepipe/sources/rtspSource",
- "body": {
- "sdp": "SDP:\nv=0\r\no=- 0 0 IN IP4 127.0.0.1\r\ns=No Name\r\nt=0 0\r\na=tool:libavformat 58.76.100\r\nm=video 0 RTP/AVP 96\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1; sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX; profile-level-id=4D0029\r\na=control:streamid=0\r\n"
- }
- }
-```
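-
-If you prefer to configure the same diagnostic settings from the command line instead of the portal, the Azure CLI sketch below shows one way to do it. The setting name and resource IDs are placeholders; the log categories match the ones listed in the steps above.
-
-```
-az monitor diagnostic-settings create \
-  --name "video-analyzer-logs" \
-  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Media/videoAnalyzers/<account-name>" \
-  --storage-account "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>" \
-  --logs '[{"category": "Operational", "enabled": true}, {"category": "Diagnostics", "enabled": true}, {"category": "Audit", "enabled": true}]'
-```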
-
-An **activity log** is also generated automatically for pipeline operations and can be accessed by navigating to the 'Activity log' section of the Video Analyzer account in the Azure portal. You can see the history of ARM API calls made to your account, and the relevant details.
-
-![Activity log sample](./media/activity-log.png)
--
-## Next steps
-
-[Troubleshoot Azure Video Analyzer service](./troubleshoot.md)
azure-video-analyzer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/troubleshoot.md
- Title: Troubleshooting the service
-description: This article covers troubleshooting steps for Azure Video Analyzer service.
- Previously updated: 11/04/2021
-# Troubleshoot Azure Video Analyzer service
-
-![cloud icon](media/env-icon/cloud.png)
-Alternatively, check out [troubleshoot on the edge](../edge/troubleshoot.md).
----
-This article covers troubleshooting steps for common error scenarios you might see while using the service.
-
-## Enable diagnostics
-
-[Monitoring and logging](./monitor-log-cloud.md) explains the Video Analyzer service event taxonomy and how to generate the logs that will help with debugging issues.
-
-- On the Azure portal, navigate to the **Monitoring** section of your Video Analyzer account and select **Diagnostic settings**.
-- Click on **Add diagnostic setting** to enable the logs of the desired event types: `Diagnostics`, `Audit`, `Operational`. For more details, see [Monitoring and logging](./monitor-log-cloud.md).
-
-## View diagnostics
-
-Once you have enabled diagnostics, you can access the logs as follows:
-
-> [!TIP]
-> We recommend that at least one live pipeline is active, so that events are emitted.
-
-- In the portal, locate the storage account to which the logs were written.
-- Open the management blade for that storage account.
-- Navigate to Storage Explorer and expand your storage account; you will see a blob container named "insights-logs-<category>". This container holds the logs in JSON format. You can download these logs to investigate the issue, either in the portal or with the Azure CLI as sketched below.
-
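-The sketch below is illustrative rather than taken from this article; the storage account, container, and blob names are placeholders:
-
-```bash
-# List archived Operational logs, then download one for inspection (names are placeholders)
-az storage blob list --account-name <storage-account> --container-name insights-logs-operational --output table --auth-mode login
-az storage blob download --account-name <storage-account> --container-name insights-logs-operational \
-  --name "<path/to/PT1H.json>" --file operational-log.json --auth-mode login
-```
-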
-## View metrics
-
-Video Analyzer also emits metrics for ingestion and pipelines, which can help in identifying issues:
-
-- IngressBytes - The total number of bytes received by a pipeline. A steadily increasing value indicates that the pipeline is healthy and is receiving video data from the RTSP camera.
-- Pipelines - Helps with checking pipeline status and counts.
-
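-As an illustration (not from the original article), the `IngressBytes` metric can also be pulled with the Azure CLI; the resource ID is a placeholder:
-
-```bash
-# Check whether the service is receiving video data from the camera (resource ID is a placeholder)
-az monitor metrics list \
-  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Media/videoAnalyzers/<account>" \
-  --metric IngressBytes --aggregation Total --interval PT1M --output table
-```
-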
-## View activity logs
-
-An **activity log** is generated automatically and can be accessed from the 'Activity log' section of the Video Analyzer account management blade in the Azure portal. It shows the history of ARM API calls made to your account, along with relevant details.
-
-## Common error scenarios
-
-Some of the common errors that you'll encounter with the Video Analyzer service are described below.
-
-### Unable to play video after activating live pipeline
-- If you are using the Video Analyzer widget to play the video, try using the Azure portal if you have access to it. If the video plays in the Azure portal, but not in the widget, then proceed to the widget section [below](#playback-error-with-the-widget).
-- If the video does not play back in the Azure portal, check whether the Video Analyzer service is receiving video data from the RTSP camera. Select the "Ingress Bytes" metric by navigating to the Monitoring -> Metrics section of the portal. If the aggregation is increasing, then the connection between the RTSP camera and the service is healthy; the Ingress Bytes sum is shown below the graph.
-- If the service is not receiving video data from the RTSP camera, the next step is to [view the relevant diagnostic logs](#view-diagnostics). You are likely to see an error such as a [ProtocolError](#diagnostic-logs-have-a-protocolerror-with-code-401), and you can troubleshoot further as discussed below.
-
-### Diagnostic logs have a ProtocolError with code 401
-- If you see a Microsoft.VideoAnalyzer.Diagnostics.ProtocolError event in the diagnostic logs with the code set to 401, as shown below, then the first step is to recheck the RTSP credentials. A sample event looks like this:
-
- ```
- {
- "time": "2021-10-15T02:56:18.7890000Z",
- "resourceId": "/SUBSCRIPTIONS/{GUID}/RESOURCEGROUPS/8AVA/PROVIDERS/MICROSOFT.MEDIA/VIDEOANALYZERS/AVASAMPLEZ2OHI3VBIRQPC",
- "region": "westcentralus",
- "category": "Diagnostics",
- "operationName": "Microsoft.VideoAnalyzer.Diagnostics.ProtocolError",
- "operationVersion": "1.0",
- "level": "Error",
- "traceContext": "{\n \"traceId\": \"f728d155-b4fd-4aec-8307-bbe2a324f4c3\"\n}",
- "properties": {
- "subject": "/livePipelines/your-pipeline/sources/rtspSource",
- "body": {
- "code": "401",
- "target": "rtsp://127.0.0.1:33643/some-path",
- "protocol": "rtsp"
- }
- }
- }
-
- ```
-- If you are using a live pipeline to connect to a camera that is accessible over the internet, then check the RTSP URL, username, and password that were used when creating the live pipeline. You can call GET on the live pipeline to view the URL and username, but the password will not be echoed back, so check the code that was used to create the live pipeline. A quick way to test the credentials directly against the camera is sketched after this list.
-- If you are using a [remote device adapter](./use-remote-device-adapter.md), then try the following steps.
- - Verify that your [IoT hub is attached to your Video Analyzer account](../create-video-analyzer-account.md#post-deployment-steps). It is required for using a remote device adapter.
- - Run `remoteDeviceAdapterList` direct method on the edge module and verify IP address. Sample request and response are shown [here](../edge/direct-methods.md)
- - Examine the response for the remote device adapter that you are using in the live pipeline that is experiencing the issue, and compare with the example in [this article](use-remote-device-adapter.md). Check that the IP address of the camera is correct
- - Go to Azure portal->Video Analyzer account -> Live -> Pipelines -> Edit live pipeline -> reenter the RTSP user name and password. Check that the RTSP URL you provide begins with `rtsp://localhost:554/…`. Here, the use of `localhost` is required.
-- If the above steps do not resolve the issue and video playback still does not work, then log into the Azure portal and open a support ticket. You may need to attach logs from the Video Analyzer edge module; refer to the 'Use support-bundle command' section of the [edge troubleshooting doc](../edge/troubleshoot.md#common-error-resolutions).
-
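-As a quick, hedged way to verify the RTSP URL and credentials outside of Video Analyzer, you can probe the stream with `ffprobe` from a machine on the camera's network (the URL below is a placeholder); a 401 error here confirms that the credentials themselves are the problem:
-
-```bash
-# Probe the RTSP stream directly; replace the URL, username, and password with your camera's values
-ffprobe -rtsp_transport tcp -i "rtsp://<username>:<password>@<camera-ip>:554/<path>"
-```
-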
-### Unable to record to a video resource
-
-With Video Analyzer, you should use a distinct video resource for each distinct RTSP camera that you record from. You also need to switch to a new video resource if you change the camera settings (for example, its resolution). Some of the sample pipeline topologies have hard-coded names for the video resource in the video sink node properties. If you use these topologies directly with different cameras, you will encounter this issue. Modify the `videoName` property in the video sink node to ensure uniqueness.
-
-### Interrupted recording or playback
-
-When you create a live pipeline, you are required to specify the maximum bitrate (`bitrateKbps`) at which the RTSP camera would send video to the service.
-If the camera exceeds that limit, the Video Analyzer service will briefly disconnect from the camera. It re-attempts the connection, in case the spike in bitrate was temporary. You can identify this situation by looking for a Microsoft.VideoAnalyzer.Diagnostics.RtspIngestionSessionEnded event in the diagnostic logs with a `SourceBitrateExceeded` error code.
-To resolve this, either reduce the bitrate setting on the camera, or increase the `bitrateKbps` value of the live pipeline to match the camera setting.
-
-### Playback error with the widget
-
-If you get an 'Access Forbidden' error in the Video Analyzer widget, then you should see a warning event in the Audit log.
-- Make sure you have generated the Client API JWT token and the corresponding access policy; refer to the documentation for [creating a token](../access-policies.md) and [creating an access policy](../access-policies.md#creating-an-access-policy). The player will not work if the access policy has not been set up correctly or the JWT token does not match the policy.
-- If the issue is not resolved, gather the widget logs and file a support ticket using the Azure portal.
-- Gathering widget logs:
- - Hit F12 to enable Browser Developer tools, go to the Console TAB, enable "All levels" logging.
- - From the Settings Icon , select Preferences --> Console --> Show timestamps. Save the logs.
-
-## Collect logs for submitting a support ticket
-
-When the self-guided troubleshooting steps don't resolve your problem, or you have other issues that you need help with, please open a support ticket using the Azure portal with the relevant details about the issue, and attach the [diagnostic](#view-diagnostics) JSON log files downloaded from your storage account. You can also reach us by sending an email to videoanalyzerhelp@microsoft.com.
-
-> [!WARNING]
-> The logs may contain personally identifiable information (PII) such as your IP address. All local copies of the logs will be deleted as soon as we complete examining them and close the support ticket.
-
-## Next steps
-
-[FAQ](./faq.yml)
azure-video-analyzer Use Remote Device Adapter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/use-remote-device-adapter.md
- Title: Connect camera to cloud using a remote device adapter
-description: This article explains how to connect a camera to Azure Video Analyzer service using a remote device adapter
-Previously updated: 11/04/2021
-# Connect cameras to the cloud using a remote device adapter
---
-Azure Video Analyzer service allows users to capture and record video from RTSP cameras that are connected to the cloud. This requires that such cameras be accessible over the internet. In cases where this may not be permissible, the Video Analyzer edge module can instead be deployed to a lightweight edge device with internet connectivity. With the lightweight edge device on the same (private) network as the RTSP cameras, the edge module can now be set up as an *adapter* that enables the Video Analyzer service to connect to the *remote devices* (cameras). The edge module enables the edge device to act as a [transparent gateway](../../../iot-edge/iot-edge-as-gateway.md) for video traffic between the RTSP cameras and the Video Analyzer service.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/use-remote-device-adapter/use-remote-device-adapter.svg" alt-text="Connect cameras to the cloud with a remote device adapter":::
-
-## Pre-reading
-
-* [Connect camera to the cloud](connect-cameras-to-cloud.md)
-* [Quickstart: Get started with Video Analyzer live pipelines in the Azure portal](get-started-livepipelines-portal.md)
-
-## Prerequisites
-
-* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
-* [Deploy Azure Video Analyzer to an IoT Edge device](../edge/deploy-iot-edge-device.md)
-* [IoT Hub must be attached to Video Analyzer account](../create-video-analyzer-account.md#post-deployment-steps)
-* [RTSP cameras](../quotas-limitations.md#supported-cameras)
- * Ensure that camera(s) are on the same network as the edge device
- * Ensure that you can configure the camera to send video at or below a maximum bandwidth (measured in kBps or kilobits/second)
-
-## Overview
-In order to connect a camera to the Video Analyzer service using a remote device adapter, you need to:
-
-1. Create an IoT device in the IoT Hub to represent the RTSP camera
-1. Create a device adapter on the Video Analyzer edge module to act as the transparent gateway for the above device
-1. Use the IoT device and the device adapter when creating a live pipeline in the Video Analyzer service to capture and record video from the camera
---
-## Create an IoT device
-
-Create an IoT device to represent each RTSP camera that needs to be connected to the Video Analyzer service. In the Azure portal:
-
-1. Navigate to the IoT Hub
-1. Select the **Devices** pane under **Device management**
-1. Select **+Add device**
-1. Enter a **Device ID** using a unique string (for example, building404-camera1)
-1. **Authentication type** can be left as **Symmetric key**
-1. All other properties can be left as default
-1. Select **Save** to create the IoT device
-1. Select the IoT device, and record the **Primary key** or **Secondary key**, as it will be needed below. (An equivalent Azure CLI sketch follows these steps.)
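-
-If you prefer the Azure CLI, the following hedged sketch creates the same IoT device and reads back its primary key (the `azure-iot` CLI extension is assumed; the hub name is a placeholder and the device ID matches the example above):
-
-```bash
-# Create an IoT device to represent the camera, then read its primary symmetric key
-az iot hub device-identity create --hub-name <iot-hub-name> --device-id building404-camera1
-az iot hub device-identity show --hub-name <iot-hub-name> --device-id building404-camera1 \
-  --query "authentication.symmetricKey.primaryKey" --output tsv
-```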
-
-## Create a remote device adapter
-
-To enable the Video Analyzer edge module to act as a transparent gateway for video between the camera and the Video Analyzer service, you must create a remote device adapter for each camera. Invoke the [**remoteDeviceAdapterSet** direct method](../edge/direct-methods.md) that requires the following values:
-
-* Device ID for the IoT device
-* Primary key for the IoT device
-* Camera's IP address
-
-In the Azure portal:
-
-1. Navigate to the IoT Hub
-1. Select the **IoT Edge** pane under **Device management**
-1. Select the IoT Edge device (such as **ava-sample-device**) to which Video Analyzer edge module has been deployed
-1. Under modules, select the Video Analyzer edge module (such as **avaedge**)
-1. Select **</> Direct Method**
-1. Enter **remoteDeviceAdapterSet** for the Method Name
-1. Enter the following for **Payload**:
-
-```
- {
- "@apiVersion" : "1.1",
- "name": "<name of remote device adapter such as remoteDeviceAdapterCamera1>",
- "properties": {
- "target": {
- "host": "<Camera's IP address>"
- },
- "iotHubDeviceConnection": {
- "deviceId": "<IoT Hub Device ID>",
- "credentials": {
- "@type": "#Microsoft.VideoAnalyzer.SymmetricKeyCredentials",
- "key": "<Primary or Secondary Key>"
- }
- }
- }
- }
-
-```
-
-If successful, you will receive a response with a status code 201.
-
-To list all of the remote device adapters that are set, invoke the **remoteDeviceAdapterList** direct method with the following payload:
-```
- {
- "@apiVersion" : "1.1"
- }
-```
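-
-Both direct methods can also be invoked from the Azure CLI instead of the portal. The following is a hedged sketch (the `azure-iot` CLI extension is assumed; hub and device names are placeholders, and `avaedge` is the module name used above):
-
-```bash
-# Invoke the remoteDeviceAdapterList direct method on the Video Analyzer edge module
-az iot hub invoke-module-method --hub-name <iot-hub-name> --device-id <edge-device-id> \
-  --module-id avaedge --method-name remoteDeviceAdapterList --method-payload '{"@apiVersion":"1.1"}'
-```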
-
-## [Azure portal](#tab/azure-portal)
-### Create pipeline topology in the Video Analyzer service
-
-When creating a cloud pipeline topology to ingest from a camera behind a firewall, tunneling must be enabled on the RTSP source node of the pipeline topology. See an example of such a [pipeline topology](https://github.com/Azure/video-analyzer/tree/main/pipelines/live/cloud-topologies/cloud-record-camera-behind-firewall).
--
-The following values, based on the IoT device created in the previous instructions, are required to enable tunneling on the RTSP source node:
-
-* IoT Hub Name
-* IoT Hub Device ID
-
-```
- {
- "@type": "#Microsoft.VideoAnalyzer.RtspSource",
- "name": "rtspSource",
- "transport": "tcp",
- "endpoint": {
- "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
- "url": "${rtspUrlParameter}",
- "credentials": {
- "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
- "username": "${rtspUsernameParameter}",
- "password": "${rtspPasswordParameter}"
- },
- "tunnel": {
- "@type": "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
- "iotHubName" : "<IoT Hub Name>",
- "deviceId": "${ioTHubDeviceIdParameter}"
- }
- }
- }
-```
-
-Ensure that:
-* `Transport` is set to `tcp`
-* `Endpoint` is set to `UnsecuredEndpoint`
-* `Tunnel` is set to `SecureIotDeviceRemoteTunnel`
-
-[This quickstart](get-started-livepipelines-portal.md#deploy-a-live-pipeline) can be used as a reference, as it outlines the steps for creating a pipeline topology and a live pipeline in the Azure portal. Use the sample topology `Live capture, record, and stream from RTSP camera behind firewall`.
-
-### Create and activate a live pipeline
-
-When creating the live pipeline, the RTSP URL, RTSP username, RTSP password, and IoT Hub Device ID must be defined. A sample payload is below.
-
-```
- {
- "name": "record-from-building404-camera1",
- "properties": {
- "topologyName": "record-camera-behind-firewall",
- "description": "Capture, record and stream video from building404-camera1 via a remote device adapter",
- "bitrateKbps": 1500,
- "parameters": [
- {
- "name": "rtspUrlParameter",
- "value": "<RTSP URL for building404-camera1 such as rtsp://localhost:554/media/video>"
- },
- {
- "name": "rtspUsernameParameter",
- "value": "<User name for building404-camera1>"
- },
- {
- "name": "rtspPasswordParameter",
- "value": "<Password for building404-camera1>"
- },
- {
- "name": "ioTHubDeviceIdParameter",
- "value": "<IoT Hub Device ID such as building404-camera1>"
- },
- {
- "name": "videoName",
- "value": "video-from-building404-camera1"
- }
- ]
- }
- }
-```
-The RTSP URL IP address must be **localhost**. Ensure that the `bitrateKbps` value matches the maximum bitrate setting for the video from the RTSP camera.
-
-After creating the live pipeline, the pipeline can be activated to start recording to the Video Analyzer video resource. [The quickstart](get-started-livepipelines-portal.md#deploy-a-live-pipeline) mentioned in the previous step also outlines how to activate a live pipeline in the Azure portal.
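-
-If you are scripting this step instead, the live pipeline can also be created and activated through the ARM API, for example with `az rest`. This is a hedged sketch: the payload above is assumed to be saved as livepipeline.json, and the API version shown is an assumption to verify against the current REST reference.
-
-```bash
-# Create the live pipeline from the JSON payload above, then activate it (IDs and api-version are placeholders/assumptions)
-BASE="https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Media/videoAnalyzers/<account>"
-az rest --method put  --uri "$BASE/livePipelines/record-from-building404-camera1?api-version=2021-11-01-preview" --body @livepipeline.json
-az rest --method post --uri "$BASE/livePipelines/record-from-building404-camera1/activate?api-version=2021-11-01-preview"
-```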
--
-### Playback recorded video in the Azure portal
-1. After activating the live pipeline, the video resource will be available under the Video Analyzer account **Videos** pane in the Azure portal. The status will indicate **Is in use** while the pipeline is active and recording.
-1. Select the video resource that was defined in the live pipeline to view the video.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/camera-1800s-mkv.png" alt-text="Screenshot of the live video captured by live pipeline in the cloud.":::
-
-If you encounter errors while attempting to play back the video, follow the steps in [this troubleshooting guide](troubleshoot.md#unable-to-play-video-after-activating-live-pipeline).
--
-To deactivate the pipeline, go to your Video Analyzer account, and select **Live** > **Pipelines** on the left panel. Select the pipeline and then select `Deactivate` in the pipeline grid to stop the recording.
-
-### Next steps
-Now that a video exists in your Video Analyzer account, you can export a clip of this recorded video to MP4 format using [this tutorial](export-portion-of-video-as-mp4.md).
-
-## [C# SDK sample](#tab/csharp-sdk-sample)
-### Prerequisites
-- Retrieve your Azure Active Directory [Tenant ID](../../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
- - Register an application with Microsoft identity platform to get app registration [Client ID](../../../active-directory/develop/quickstart-register-app.md#register-an-application) and [Client secret](../../../active-directory/develop/quickstart-register-app.md#add-a-client-secret).
-    - [Give the application "Owner" access to the subscription you are using](../../../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application)
-- [Visual Studio Code](https://code.visualstudio.com/) on your development machine with following extensions:
- * [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp).
-- [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1) on your development machine.-
-### Get the sample code
-- Clone the Video Analyzer [C# samples repository](https://github.com/Azure-Samples/video-analyzer-csharp).
-- Open your local clone of this git repository in Visual Studio Code.
-- The `src\cloud-video-processing\capture-from-rtsp-camera-behind-firewall` folder contains the C# console app for capturing and recording live video from an RTSP-capable camera behind a firewall.
-- Navigate to `src\cloud-video-processing\capture-from-rtsp-camera-behind-firewall\Program.cs`. Provide values for the following variables and save the changes.
-
-| Variable | Description |
-|-|--|
-| SubscriptionId | Provide Azure subscription ID |
-| ResourceGroup | Provide resource group name |
-| AccountName | Provide Video Analyzer account name |
-| TenantId | Provide tenant ID |
-| ClientId | Provide app registration client ID |
-| Secret | Provide app registration client secret |
-| AuthenticationEndpoint | Provide authentication end point (example: https://login.microsoftonline.com) |
-| ArmEndPoint | Provide ARM end point (example: https://management.azure.com) |
-| TokenAudience | Provide token audience (example: https://management.core.windows.net) |
-| PrivateCameraTunnelingDeviceId | Provide IoT device ID |
-| IotHubNameForPrivateCamera | Provide IoT Hub name |
-| PrivateCameraSourceRTSPURL | Provide RTSP source url (localhost) |
-| PrivateCameraSourceRTSPUserName | Provide RTSP source username |
-| PrivateCameraSourceRTSPPassword | Provide RTSP source password |
-| PrivateCameraVideoName | Provide unique video name to capture live video from this RTSP source|
-
-Bitrate is set to 1500 kbps by default but can be edited on `line 282 of Program.cs`. This represents the maximum bitrate for the RTSP camera, and it must be between 500 and 3000 Kbps. If bitrate of the live video from the camera exceeds this threshold, then the service will keep disconnecting from the camera, and retrying later - with exponential backoff.
-
-### Run the sample program
-- Start a debugging session in VS Code. If this project is not set as the default, you can set it as the default project to run on pressing F5 by modifying the files in the .vscode folder:
- - launch.json - Update the "program" and "cwd" to launch PrivateCameraPipelineSampleCode.
- ```
- "program": "${workspaceFolder}/src/cloud-video-processing/capture-from-rtsp-camera-behind-firewall/bin/Debug/netcoreapp3.1/PrivateCameraPipelineSampleCode.dll",
- "args": [],
- "cwd": "${workspaceFolder}/src/cloud-video-processing/capture-from-rtsp-camera-behind-firewall",
- ```
- - tasks.json - Update "args" to point to PrivateCameraPipelineSampleCode.csproj.
- ```
- "${workspaceFolder}/src/cloud-video-processing/ingest-from-rtsp-camera-behind-firewall/PrivateCameraPipelineSampleCode.csproj"
- ```
-- Alternatively, go to the TERMINAL window in Visual Studio Code and navigate, using `cd <path>`, to `src\cloud-video-processing\ingest-from-rtsp-camera-behind-firewall`. Type the commands **dotnet build** and **dotnet run** to compile and run the program, respectively.
-- You will start seeing messages printed in the TERMINAL window regarding the creation of the topologies and pipelines. If the console app runs successfully, a live pipeline is created and activated. A code walkthrough is available [here](https://github.com/Azure-Samples/video-analyzer-csharp/tree/main/src/cloud-video-processing/capture-from-rtsp-camera-behind-firewall).
-- The terminal window pauses after this step so that you can examine the program's output, see the recorded video in the portal, and it waits for user input to proceed.
-
-### Playback recorded video in the Azure portal
-
-1. After activating the live pipeline, the video resource will be available under the Video Analyzer account **Videos** pane in the Azure portal. The status will indicate **Is in use** while the pipeline is active and recording.
-1. Select the video resource that was defined in the live pipeline to view the video.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/camera-1800s-mkv.png" alt-text="Screenshot of the live video captured by live pipeline in the cloud.":::
-
-If you encounter errors while attempting to play back the video, follow the steps in [this troubleshooting guide](troubleshoot.md#unable-to-play-video-after-activating-live-pipeline).
--
-In the terminal window, pressing `Enter` will deactivate the pipeline, delete the pipeline, and delete the topology. Program.cs calls the CleanUpResourcesAsync() method to clean up the deployed resources.
-
-### Next steps
-
-Now that a video exists in your Video Analyzer account, you can export a clip of this recorded video to MP4 format using [this tutorial](export-portion-of-video-as-mp4.md).
azure-video-analyzer Continuous Video Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/continuous-video-recording.md
- Title: Continuous video recording from the edge
-description: Continuous video recording (CVR) refers to the process of continuously recording from a live video source. This topic discusses what CVR is and how to use it with Azure Video Analyzer.
-Previously updated: 11/04/2021
-# Continuous video recording
--
-Continuous video recording (CVR) refers to the process of continuously recording the video from a video source. Azure Video Analyzer supports recording video continuously, on a 24x7 basis, from a CCTV camera via a video processing [pipeline topology](pipeline.md) consisting of an RTSP source node and a video sink node. The diagram below shows a graphical representation of such a pipeline. The JSON representation of the topology can be found in this [document](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/cvr-video-sink/topology.json). You can use such a topology to create arbitrarily long recordings (years worth of content). The timestamps for the recordings are stored in UTC.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/continuous-video-recording/continuous-video-recording-overview.svg" alt-text="Continuous video recording":::
-
-An instance of the pipeline topology depicted above can be run on an edge device in the Video Analyzer service, with the video sink recording to a [video resource](terminology.md#video). The video will be recorded for as long as the pipeline stays in the activated state. Recorded video can be played back using the streaming capabilities of Video Analyzer. See [Recorded and live videos](viewing-videos-how-to.md) for more details.
-
-## Suggested pre-reading
-
-It is recommended to read the following articles before proceeding.
-
-* [Pipeline topology concept](pipeline.md)
-* [Video recording concept](video-recording.md)
-
-## Resilient recording
-
-Video Analyzer edge module supports operating under conditions where the edge device may occasionally lose connectivity with the cloud or experience a drop in available bandwidth. To account for this, the video from the source is recorded locally into a cache and is automatically synced with the video resource on a periodic basis. If you examine the [pipeline topology](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/cvr-video-sink/topology.json), you will see it has the following properties defined:
-
-```
-"segmentLength": "PT30S",
-"localMediaCacheMaximumSizeMiB": "2048",
-"localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
-```
-
-The latter two properties are relevant to resilient recording (both are also required properties for a video sink node). The `localMediaCachePath` property tells the video sink to use that folder path to cache media data before uploading to the cloud. You can see [this](../../iot-edge/how-to-access-host-storage-from-module.md) article to understand how the edge module can make use of your device's local storage. The `localMediaCacheMaximumSizeMiB` property defines how much disk space the video sink can use as a cache (1 MiB = 1024 * 1024 bytes).
-
-If your edge module loses connectivity for a long time and the content stored in the cache folder reaches the `localMediaCacheMaximumSizeMiB` value, the video sink will start discarding data from the cache, starting from the oldest data. For example, if the device lost connectivity at 10AM and the cache hits the maximum limit at 6PM, then the video sink starts to delete data recorded at 10AM.
-
-When network connectivity is restored, the video sink will begin uploading from the cache, again starting from the oldest data. In the above example, suppose 5 minutes worth of video had to be discarded from cache by the time connectivity was restored (say at 6:02PM), then the video sink will start uploading from the 10:05AM mark.
-
-Gaps in recordings can also occur, for example, if you restart pipelines for whatever reason. You can also stop a pipeline and restart at a later time - as long as the camera settings do not change, you can continue to record to the same Video Analyzer video resource.
-
-## Segmented recording
-
-The `segmentLength` property, shown above, helps you control the write transaction costs associated with writing data to the storage account where the video resource is being recorded. For example, if you increase the value from 30 seconds to 5 minutes, then the number of storage transactions will drop by a factor of 10 (5*60/30).
-
-The `segmentLength` property ensures that video is written to the storage account at most once per `segmentLength` seconds. This property has a minimum value of 30 seconds (also the default), and can be increased by 30-second increments to a maximum of 5 minutes.
-
-This property applies to both the Video Analyzer edge module and the Video Analyzer service. See the [Recorded and live videos](viewing-videos-how-to.md) article for the effect that `segmentLength` has on playback.
-
-## See also
-
-* [Event-based video recording](event-based-video-recording-concept.md)
-* [Recorded and live videos](viewing-videos-how-to.md)
-
-## Next steps
-
-[Tutorial: continuous video recording](edge/use-continuous-video-recording.md)
azure-video-analyzer Create Video Analyzer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/create-video-analyzer-account.md
- Title: Create a Video Analyzer account
-description: This topic explains how to create an account for Azure Video Analyzer.
-Previously updated: 11/04/2021
-# Create a Video Analyzer account
--
-To start using Azure Video Analyzer, you will need to create a Video Analyzer account. The account needs to be associated with a storage account and at least one [user-assigned managed identity][docs-uami] (UAMI). The UAMI will need to have the permissions of the [Storage Blob Data Contributor][docs-storage-access] role and the [Reader][docs-role-reader] role on your storage account. You can optionally associate an IoT Hub with your Video Analyzer account; this is needed if you use the Video Analyzer edge module as a [transparent gateway](./cloud/use-remote-device-adapter.md). If you do so, then you will need to add a UAMI which has [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role permissions. You can use the same UAMI for both the storage account and the IoT Hub, or separate UAMIs.
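-
-As an illustration of the role requirements above (not part of the original article), you could pre-create the UAMI and role assignments with the Azure CLI; all names and IDs below are placeholders:
-
-```bash
-# Create a user-assigned managed identity and grant it the required roles on the storage account (placeholders throughout)
-az identity create --resource-group <rg> --name avasample-storage-access-identity
-PRINCIPAL_ID=$(az identity show --resource-group <rg> --name avasample-storage-access-identity --query principalId --output tsv)
-STORAGE_ID=$(az storage account show --resource-group <rg> --name <storageaccount> --query id --output tsv)
-az role assignment create --assignee-object-id "$PRINCIPAL_ID" --assignee-principal-type ServicePrincipal \
-  --role "Storage Blob Data Contributor" --scope "$STORAGE_ID"
-az role assignment create --assignee-object-id "$PRINCIPAL_ID" --assignee-principal-type ServicePrincipal \
-  --role "Reader" --scope "$STORAGE_ID"
-```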
-
-This article describes the steps for creating a new Video Analyzer account. You can use the Azure portal or an [Azure Resource Manager (ARM) template][docs-arm-template]. Choose the tab for the method you would like to use.
--
-## [Portal](#tab/portal/)
--
-### Create a Video Analyzer account in the Azure portal
-
-1. Sign in at the [Azure portal](https://portal.azure.com/).
-1. Using the search bar at the top, enter **Video Analyzer**.
-1. Click on *Video Analyzers* under *Services*.
-1. Click **Create**.
-1. In the **Create Video Analyzer account** section enter required values.
-
- | Name | Description |
- | ||
- |**Subscription**|If you have more than one subscription, select one from the list of Azure subscriptions that you have access to.|
- |**Resource Group**|Select an existing resource or create a new one. A resource group is a collection of resources that share lifecycle, permissions, and policies. Learn more [here](../../azure-resource-manager/management/overview.md#resource-groups).|
- |**Video Analyzer account name**|Enter the name of the new Video Analyzer account. A Video Analyzer account name is all lowercase letters or numbers with no spaces, and is 3 to 24 characters in length.|
- |**Location**|Select the geographic region that will be used to store the video and metadata records for your Video Analyzer account. Only the available Video Analyzer regions appear in the drop-down list box. |
- |**Storage account**|Select a storage account to provide blob storage of the video content for your Video Analyzer account. You can select an existing storage account in the same geographic region as your Video Analyzer account, or you can create a new storage account. A new storage account is created in the same region. The rules for storage account names are the same as for Video Analyzer accounts.<br/>|
- |**Managed identity**|Select a user-assigned managed identity that the new Video Analyzer account will use to access the storage account. You can select an existing user-assigned managed identity or you can create a new one. The user-assignment managed identity will be assigned the roles of [Storage Blob Data Contributor][docs-storage-access] and [Reader][docs-role-reader] for the storage account.
-
-1. Click **Review + create** at the bottom of the form.
-
-### Post deployment steps
-You can choose to attach an existing IoT Hub to the Video Analyzer account and this will require an existing UAMI.
-
-1. Click **Go to resource** after the resource has finished deploying.
-1. Under settings select **IoT Hub**, then click on **Attach**. In the **Attach IoT Hub** configuration fly-out blade enter the required values:
- - Subscription - Select the Azure subscription name under which the IoT Hub has been created
- - IoT Hub - Select the desired IoT Hub
- - Managed Identity - Select the existing UAMI to be used to access the IoT Hub
-1. Click on **Save** to link IoT Hub to your Video Analyzer account.
-
-## [Template](#tab/template/)
--
-### Create a Video Analyzer account using a template
-
-The following resources are defined in the template:
-- [**Microsoft.Media/videoAnalyzers**](/azure/templates/Microsoft.Media/videoAnalyzers): the account resource for Video Analyzer.
-- [**Microsoft.Storage/storageAccounts**](/azure/templates/Microsoft.Storage/storageAccounts): the storage account that will be used by Video Analyzer for storing videos and metadata.
-- [**Microsoft.ManagedIdentity/userAssignedIdentities**](/azure/templates/Microsoft.ManagedIdentity/userAssignedIdentities): the user-assigned managed identity that Video Analyzer will use to access storage.
-- [**Microsoft.Storage/storageAccounts/providers/roleAssignments**](/azure/templates/microsoft.authorization/roleassignments): the role assignments that enable Video Analyzer to access the storage account.
-
-```json
-
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "namePrefix": {
- "metadata": {
- "description": "Used to qualify the names of all of the resources created in this template."
- },
- "defaultValue": "avasample",
- "type": "string",
- "minLength": 3,
- "maxLength": 13
- }
- },
- "variables": {
- "storageAccountName": "[concat(parameters('namePrefix'),uniqueString(resourceGroup().id))]",
- "accountName": "[concat(parameters('namePrefix'),uniqueString(resourceGroup().id))]",
- "managedIdentityName": "[concat(parameters('namePrefix'),'-',resourceGroup().name,'-storage-access-identity')]"
- },
- "resources": [
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-10-01",
- "name": "deploy-storage-and-identity",
- "properties": {
- "mode": "Incremental",
- "expressionEvaluationOptions": {
- "scope": "Inner"
- },
- "parameters": {
- "namePrefix": {
- "value": "[parameters('namePrefix')]"
- },
- "managedIdentityName": {
- "value": "[variables('managedIdentityName')]"
- }
- },
- "template": {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "namePrefix": {
- "type": "string"
- },
- "managedIdentityName": {
- "type": "string"
- }
- },
- "variables": {
- "storageAccountName": "[concat(parameters('namePrefix'),uniqueString(resourceGroup().id))]",
- "managedIdentityName": "[parameters('managedIdentityName')]",
- "storageBlobDataContributorAssignment": "[guid('Storage Blob Data Contributor',variables('managedIdentityName'))]",
- "storageBlobDataContributorDefinitionId": "[concat(resourceGroup().id, '/providers/Microsoft.Authorization/roleDefinitions/', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')]",
- "readerAssignment": "[guid('Reader',variables('managedIdentityName'))]",
- "readerDefinitionId": "[concat(resourceGroup().id, '/providers/Microsoft.Authorization/roleDefinitions/', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')]"
- },
- "resources": [
- {
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
- "name": "[variables('managedIdentityName')]",
- "apiVersion": "2015-08-31-preview",
- "location": "[resourceGroup().location]"
- },
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[variables('storageAccountName')]",
- "location": "[resourceGroup().location]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "StorageV2",
- "properties": {
- "accessTier": "Hot"
- }
- },
- {
- "name": "[concat(variables('storageAccountName'), '/Microsoft.Authorization/', variables('storageBlobDataContributorAssignment'))]",
- "type": "Microsoft.Storage/storageAccounts/providers/roleAssignments",
- "apiVersion": "2021-04-01-preview",
- "dependsOn": [
- "[variables('managedIdentityName')]",
- "[variables('storageAccountName')]"
- ],
- "properties": {
- "roleDefinitionId": "[variables('storageBlobDataContributorDefinitionId')]",
- "principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName')), '2018-11-30').principalId]",
- "principalType": "ServicePrincipal"
- }
- },
- {
- "name": "[concat(variables('storageAccountName'), '/Microsoft.Authorization/', variables('readerAssignment'))]",
- "type": "Microsoft.Storage/storageAccounts/providers/roleAssignments",
- "apiVersion": "2021-04-01-preview",
- "dependsOn": [
- "[variables('managedIdentityName')]",
- "[variables('storageAccountName')]"
- ],
- "properties": {
- "roleDefinitionId": "[variables('readerDefinitionId')]",
- "principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName')), '2018-11-30').principalId]",
- "principalType": "ServicePrincipal"
- }
- }
- ],
- "outputs": {}
- }
- }
- },
- {
- "type": "Microsoft.Media/videoAnalyzers",
- "comments": "The Azure Video Analyzer account",
- "apiVersion": "2021-05-01-preview",
- "name": "[variables('accountName')]",
- "location": "[resourceGroup().location]",
- "dependsOn": [
- "deploy-storage-and-identity"
- ],
- "properties": {
- "storageAccounts": [
- {
- "id": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "identity": {
- "userAssignedIdentity": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName'))]"
- }
- }
- ]
- },
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName'))]": {}
- }
- }
- }
- ],
- "outputs": { }
-}
-```
-
-> [!NOTE]
-> The template uses a nested deployment for the role assignments to ensure that it is available before deploying the Video Analyzer account resource.
-
-### Deploy the template
-
-1. Click on the *Deploy to Azure* button to sign in to Azure and open a template.
-
- [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/ava-click-to-deploy)
-
-1. Select or enter the following values. Use the default values, when available.
-
- - **Subscription**: select an Azure subscription.
- - **Resource group**: select an existing resource group from the drop-down, or select **Create new**, enter a unique name for the resource group, and then click **OK**.
- - **Location**: select a location. For example, **West US 2**.
- - **Name Prefix**: provide a string that is used to prefix the name of the resources (the default values are recommended).
-
-1. Select **Review + create**. After validation completes, select **Create** to create and deploy the template.
-
-In the above, the Azure portal is used to deploy the template. In addition to the Azure portal, you can also use the Azure CLI, Azure PowerShell, and the REST API. To learn about other deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-cli.md).
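-
-For example, a hedged Azure CLI sketch of the same deployment (the template file name is a placeholder for wherever you saved the template above):
-
-```bash
-# Deploy the ARM template above into an existing resource group (file name and prefix value are placeholders)
-az deployment group create --resource-group <rg> --template-file azuredeploy.json --parameters namePrefix=avasample
-```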
-
-### Post deployment steps
-You can choose to attach an existing IoT Hub to the Video Analyzer account and this will require an existing UAMI.
-
-1. Click **Go to resource** after the resource has finished deploying.
-1. Under settings select **IoT Hub**, then click on **Attach**. In the **Attach IoT Hub** configuration fly-out blade enter the required values:
- - Subscription - Select the Azure subscription name under which the IoT Hub has been created
- - IoT Hub - Select the desired IoT Hub
- - Managed Identity - Select the existing UAMI to be used to access the IoT Hub
-1. Click on **Save** to link IoT Hub to your Video Analyzer account.
-
-### Review deployed resources
-
-You can use the Azure portal to check on the account and other resource that were created. After the deployment is finished, select **Go to resource group** to see the account and other resources.
-
-### Clean up resources
-
-When no longer needed, delete the resource group, which deletes the account and all of the resources in the resource group.
-
-1. Select the **Resource group**.
-1. On the page for the resource group, select **Delete**.
-1. When prompted, type the name of the resource group and then select **Delete**.
---
-### Next steps
-
-* Learn how to [deploy Video Analyzer on an IoT Edge device][docs-deploy-on-edge].
-* Learn how to [capture and record video directly to the cloud](cloud/get-started-livepipelines-portal.md).
-
-<!-- links -->
-[docs-uami]: ../../active-directory/managed-identities-azure-resources/overview.md
-[docs-storage-access]: ../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor
-[docs-role-reader]: ../../role-based-access-control/built-in-roles.md#reader
-[docs-arm-template]: ../../azure-resource-manager/templates/overview.md
-[docs-deploy-on-edge]: ./edge/deploy-iot-edge-device.md
azure-video-analyzer Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/customer-managed-keys.md
- Title: Customer managed keys
-description: You can use a customer managed key (that is, bring your own key) with Azure Video Analyzer.
-Previously updated: 11/04/2021
-# Customer managed keys with Azure Video Analyzer
--
-Bring Your Own Key (BYOK) is an Azure-wide initiative to help customers move their workloads to the cloud. Customer managed keys allow customers to adhere to industry compliance regulations and improve tenant isolation of a service. Giving customers control of encryption keys is a way to minimize unnecessary access and control, and to build confidence in Microsoft services.
-
-## Keys and key management
-
-An account key is created for all Video Analyzer accounts. By default, this account key is encrypted by a system key owned by Video Analyzer (that is, a system-managed key). Instead, you can use your own key with Azure Video Analyzer. In that case, your account key is encrypted with your key. Access policies and video resource metadata are encrypted using the account key.
-
-Video Analyzer uses a User Assigned Managed Identity to read your key from a Key Vault owned by you. You must provide the User Assigned Managed Identity when creating or updating the Video Analyzer account and assign appropriate [Azure role-based access control](../../role-based-access-control/overview.md) to the Key Vault. Video Analyzer requires that the Key Vault is in the same region as the account, and that it has soft-delete and purge protection enabled.
-
-Your key can be a 2048, 3072, or a 4096 RSA key, and both HSM and software keys are supported.
-
-> [!NOTE]
-> EC keys are not supported.
-
-You can specify a key name and key version, or just a key name. When you use only a key name, Video Analyzer will use the latest key version. New versions of customer keys are automatically detected, and the account key is re-encrypted.
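-
-To illustrate the Key Vault requirements described above, here is a hedged Azure CLI sketch (not from the original article) for creating a compliant vault and a supported RSA key; all names and the region are placeholders:
-
-```bash
-# Create a Key Vault in the same region as the Video Analyzer account, then add a 2048-bit RSA key (names/region are placeholders)
-az keyvault create --resource-group <rg> --name <vault-name> --location <same-region-as-account> --enable-purge-protection true
-az keyvault key create --vault-name <vault-name> --name video-analyzer-account-key --kty RSA --size 2048
-```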
-
-> [!WARNING]
-> Video Analyzer monitors access to the customer key. If the customer key becomes inaccessible (for example, the key has been deleted or the Key Vault has been deleted or the access grant has been removed), Video Analyzer will transition the account to the Customer Key Inaccessible State (effectively disabling the account). You can either delete the account, or restore the account key to resume access.
-
-## Double encryption
-
-Video Analyzer protects your sensitive data using double encryption as per Azure standard practice - see [Azure double encryption](../../security/fundamentals/double-encryption.md). For data at rest, the first layer of encryption uses a customer managed key or a Microsoft managed key depending on the `encryption` setting on the account. The second layer of encryption for data at rest is provided automatically using a separate Microsoft managed key.
-
-> [!NOTE]
-> Double encryption is enabled automatically on the Video Analyzer account. However, you need to configure the customer managed key and double encryption on your storage account separately. See [Storage encryption](../../storage/common/storage-service-encryption.md).
--
-## Next steps
-
-Review [Managed identity](managed-identity.md)
azure-video-analyzer Analyze Ai Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-ai-composition.md
- Title: Analyze live video streams with multiple AI models using AI composition
-description: This article provides guidance on how to analyze live video streams with multiple AI models using AI composition feature of Azure Video Analyzer.
-Previously updated: 11/04/2021
-# Analyze live video streams with multiple AI models using AI composition
---
-Certain customer scenarios require that video be analyzed with multiple AI models. Such models can be [augmenting each other](../ai-composition-overview.md#sequential-ai-composition), [working independently in parallel](../ai-composition-overview.md#parallel-ai-composition) on the same video stream, or [a combination](../ai-composition-overview.md#combined-ai-composition) of such augmented and independently parallel models acting on the same video stream to derive actionable insights.
-
-Azure Video Analyzer supports such scenarios via a feature called [AI Composition](../ai-composition-overview.md). This guide shows you how you can apply multiple models in an augmented fashion on the same video stream. It uses a Tiny (Light) YOLO model and a regular YOLO model to detect an object of interest. The Tiny YOLO model is computationally lighter but less accurate than the regular YOLO model, and is called first. If the detected object passes a specific confidence threshold, then the sequentially staged regular YOLO model is not invoked, thus utilizing the underlying resources efficiently.
-
-After completing the steps in this guide, you'll be able to run a simulated live video stream through a pipeline with AI composability and extend it to your specific scenarios. The following diagram graphically represents that pipeline.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/analyze-ai-composition/motion-with-object-detection-using-ai-composition.svg" alt-text="AI composition overview":::
-
-## Prerequisites
-
-* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
-
- > [!NOTE]
- > You will need an Azure subscription with permissions for creating service principals (owner role provides this). If you do not have the right permissions, please reach out to your account administrator to grant you the right permissions.
-* [Visual Studio Code](https://code.visualstudio.com/) on your development machine. Make sure you have the [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
-* Make sure the network that your development machine is connected to permits Advanced Message Queueing Protocol (AMQP) over port 5671 for outbound traffic. This setup enables Azure IoT Tools to communicate with Azure IoT Hub.
-* Complete [Quickstart: Analyze a live video feed from a (simulated) IP camera using your own gRPC model](analyze-live-video-use-your-model-grpc.md). Do not skip this step, as it is a strict requirement for this how-to guide.
-
-> [!TIP]
-> You might be prompted to install Docker while you're installing the Azure IoT Tools extension. Feel free to ignore the prompt.
->
-> If you run into issues with Azure resources that get created, please view our [troubleshooting guide](troubleshoot.md#common-error-resolutions) to resolve some commonly encountered issues.
-
-## Review the video sample
-
-Since you have already completed the quickstart specified in the prerequisites section, you will have an edge device already created. This edge device has an input folder, /home/localedgeuser/samples/input, that includes certain video files. Log into the IoT Edge device, change to the /home/localedgeuser/samples/input/ directory, and run the following command to get the input file we will be using for this how-to guide.
-
-wget https://avamedia.blob.core.windows.net/public/co-final.mkv
-
-Additionally, if you like, on a machine that has [VLC media player](https://www.videolan.org/vlc/) installed, select Ctrl+N and then paste the link to the [sample video (.mkv)](https://avamedia.blob.core.windows.net/public/co-final.mkv) to start playback. You'll see footage of cars on a freeway.
-
-## Create and deploy the pipeline
-
-Similar to the steps in the quickstart that you completed in the prerequisites, you can follow the steps here but with minor adjustments.
-
-1. Follow the guidelines in [Create and deploy the pipeline](analyze-live-video-use-your-model-grpc.md#create-and-deploy-the-pipeline) section of the quickstart you just finished. Be sure to make the following adjustments as you continue with the steps. These steps help to ensure that the correct body for the direct method calls are used.
-
- Edit the *operations.json* file:
-
- * Change the link to the pipeline topology:
- `"pipelineTopologyUrl" : "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/ai-composition/topology.json"`
- * Under `livePipelineSet`,
- 1. ensure : `"topologyName" : "AIComposition"` and
- 2. Change the `rtspUrl` parameter value to `"rtsp://rtspsim:554/media/co-final.mkv"`.
-
- * Under `pipelineTopologyDelete`, edit the name:
- `"name" : "AIComposition"`
-2. Follow the guidelines in [Generate and deploy the IoT Edge deployment manifest](analyze-live-video-use-your-model-grpc.md#generate-and-deploy-the-iot-edge-deployment-manifest) section but use the following deployment manifest instead - src/edge/deployment.composite.template.json
-3. Follow the guidelines in [Run the sample program](analyze-live-video-use-your-model-grpc.md#run-the-sample-program) section.
-4. For result details, see the [interpret the results](analyze-live-video-use-your-model-grpc.md#interpret-results) section. In addition to the analytics events on the hub and the diagnostic events, the topology that you have used also creates a relevant video clip on the cloud that is triggered by the AI signal-based activation of the signal gate. This clip is also accompanied with [operational events](record-event-based-live-video.md#operational-events) on the hub for downstream workflows to take. You can [examine and play](record-event-based-live-video.md#playing-back-the-recording) the video clip by logging into the Azure portal.
-
-## Clean up
-
-If you're not going to continue to use this application, delete the resources you created in this quickstart.
-
-## Next steps
-
-Learn more about [diagnostic messages](monitor-log-edge.md).
azure-video-analyzer Analyze Live Video Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-live-video-custom-vision.md
- Title: Analyze live video with Video Analyzer on IoT Edge and Azure Custom Vision
-description: This tutorial walks you through the steps to analyze live video with Azure Video Analyzer on IoT Edge and Azure Custom Vision.
-Previously updated: 06/01/2021
-zone_pivot_groups: video-analyzer-programming-languages
---
-# Tutorial: Analyze live video with Azure Video Analyzer on IoT Edge and Azure Custom Vision
---
-In this tutorial, you'll learn how to use Azure [Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/) to build a containerized model that can detect a toy truck and use the [AI extensibility capability](../analyze-live-video-without-recording.md#analyzing-video-using-a-custom-vision-model) of Azure Video Analyzer on Azure IoT Edge to deploy the model on the edge for detecting toy trucks from a live video stream.
-
-We'll show you how to bring together the power of Custom Vision to build and train a computer vision model by uploading and labeling a few images. You don't need any knowledge of data science, machine learning, or AI. You'll also learn about the capabilities of Video Analyzer and how to easily deploy a custom model as a container on the edge and analyze a simulated live video feed.
---
-The tutorial shows you how to:
-- Set up the relevant resources.
-- Build a Custom Vision model in the cloud to detect toy trucks and deploy it on the edge.
-- Create and deploy a pipeline with an HTTP extension to a Custom Vision model.
-- Run the sample code.
-- Examine and interpret the results.
-
-If you don't have an [Azure subscription](../../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-
-## Suggested pre-reading
-
-Read through the following articles before you begin:
-- [Video Analyzer on IoT Edge overview](../overview.md)
-- [Azure Custom Vision overview](../../../cognitive-services/custom-vision-service/overview.md)
-- [Video Analyzer on IoT Edge terminology](../terminology.md)
-- [Pipeline concept](../pipeline.md)
-- [Video Analyzer without video recording](../analyze-live-video-without-recording.md)
-- [Tutorial: Developing an IoT Edge module](../../../iot-edge/tutorial-develop-for-linux.md)
-- [How to edit deployment.*.template.json](https://github.com/microsoft/vscode-azure-iot-edge/wiki/How-to-edit-deployment.*.template.json)
-
-## Prerequisites
-
-* [Install Docker](https://docs.docker.com/desktop/#download-and-install) on your machine.
----
-## Review the sample video
-
-This tutorial uses a [toy car inference video](https://avamedia.blob.core.windows.net/public/t2.mkv) file to simulate a live stream. You can examine the video via an application such as [VLC media player](https://www.videolan.org/vlc/). Select **Ctrl+N**, and then paste a link to the [toy car inference video](https://avamedia.blob.core.windows.net/public/t2.mkv) to start playback. As you watch the video, note that at the 36-second marker a toy truck appears in the video. The custom model has been trained to detect this specific toy truck.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4LPwK]
-
-In this tutorial, you'll use Video Analyzer on IoT Edge to detect such toy trucks and publish associated inference events to the IoT Edge hub.
-
-## Overview
-
-![Diagram that shows a Custom Vision overview.](./media/custom-vision/topology-custom-vision.svg)
-
-This diagram shows how the signals flow in this tutorial. An [edge module](https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/rtspsim-live555) simulates an IP camera hosting a Real-Time Streaming Protocol (RTSP) server. An [RTSP source](../pipeline.md#rtsp-source) node pulls the video feed from this server and sends video frames to the [HTTP extension processor](../pipeline.md#http-extension-processor) node.
-
-The HTTP extension node plays the role of a proxy. It samples the incoming video frames set by you using the `samplingOptions` field and also converts the video frames to the specified image type. Then it relays the image to the toy truck detector model built by using Custom Vision. The HTTP extension processor node gathers the detection results and publishes events to the [Azure IoT Hub message sink](../pipeline.md#iot-hub-message-sink) node, which sends those events to the [IoT Edge hub](../../../iot-fundamentals/iot-glossary.md#iot-edge-hub).
-
-## Build and deploy a Custom Vision toy detection model
-
-As the name Custom Vision suggests, you can use it to build your own custom object detector or classifier in the cloud. It provides a simple, easy-to-use, and intuitive interface to build Custom Vision models that can be deployed in the cloud or on the edge via containers.
-
-To build a toy truck detector, follow the steps in [Quickstart: Build an object detector with the Custom Vision website](../../../cognitive-services/custom-vision-service/get-started-build-detector.md).
-
-> [!IMPORTANT]
-> This Custom Vision module supports **Intel x86 and amd64** architectures only. Check the architecture of your edge device before continuing.
-
-Additional notes:
-
-- For this tutorial, don't use the sample images provided in the quickstart article's [Prerequisites section](../../../cognitive-services/custom-vision-service/get-started-build-detector.md#prerequisites). Instead, we've used a specific image set to build a toy detector Custom Vision model. Use [these images](https://avamedia.blob.core.windows.net/public/ToyCarTrainingImages.zip) when you're asked to [choose your training images](../../../cognitive-services/custom-vision-service/get-started-build-detector.md#choose-training-images) in the [quickstart](../../../cognitive-services/custom-vision-service/get-started-build-detector.md).
-- In the image-tagging section of the quickstart, ensure that you tag the toy truck seen in the picture with the tag "delivery truck."
-- Ensure that you select **General (compact)** as the option for **Domains** when you create the Custom Vision project.
-
-After you're finished, you can export the model to a Docker container by using the **Export** button on the **Performance** tab. Ensure you choose Linux as the container platform type. This is the platform on which the container will run. The machine you download the container on could be either Windows or Linux. The instructions that follow were based on the container file downloaded onto a Windows machine.
-
-![Screen that shows Dockerfile selected.](./media/custom-vision/docker-file.png)
-
-1. You should have a zip file downloaded onto your local machine named `<projectname>.DockerFile.Linux.zip`.
-2. Check if you have Docker installed. If not, install [Docker](https://docs.docker.com/get-docker/) for your Windows desktop.
-3. Unzip the downloaded file in a location of your choice. Use the command line to go to the unzipped folder directory. You should see the following two files - app\labels.txt and app\model.pb
-4. Clone the [Video Analyzer repository](https://github.com/Azure/video-analyzer) and use the command line to go to the edge-modules\extensions\customvision\avaextension folder
-5. Copy the labels.txt and model.pb files from Step 3 into the edge-modules\extensions\customvision\avaextension folder.
-
-   In the same folder, run the following commands:
-
- 1. `docker build -t cvtruck .`
-
- This command downloads many packages, builds the Docker image, and tags it as `cvtruck:latest`.
-
- > [!NOTE]
- > If successful, you should see the following messages: `Successfully built <docker image id>` and `Successfully tagged cvtruck:latest`. If the build command fails, try again. Sometimes dependency packages don't download the first time around.
- 2. `docker image ls`
-
- This command checks if the new image is in your local registry.
-
-## Set up your development environment
---
-## Examine the sample files
---
-## Generate and deploy the deployment manifest
-
-1. In Visual Studio Code, go to src/cloud-to-device-console-app/operations.json.
-2. Under `pipelineTopologySet`, ensure the following is true:<br/>
- `"pipelineTopologyUrl" : "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/httpExtension/topology.json"`
-3. Under `livePipelineSet`, ensure:
-
- 1. `"topologyName" : "InferencingWithHttpExtension"`
- 2. Add the following to the top of the parameters array: `{"name": "inferencingUrl","value": "http://cv/score"},`
- 3. Change the `rtspUrl` parameter value to `"rtsp://rtspsim:554/media/t2.mkv"`.
-4. Under `pipelineTopologyDelete`, ensure `"name": "InferencingWithHttpExtension"`.
-5. Right-click the src/edge/ deployment.customvision.template.json file, and select **Generate IoT Edge Deployment Manifest**.
-
- ![Screenshot that shows Generate IoT Edge Deployment Manifest.](./media/custom-vision/deployment-template-json.png)
-
- This action should create a manifest file in the src/edge/config folder named deployment.customvision.amd64.json.
-6. Open the src/edge/ deployment.customvision.template.json file, and find the `registryCredentials` JSON block (a sketch of this block appears after this procedure). In this block, you'll find the address of your Azure container registry along with its username and password.
-7. Push the local Custom Vision container into your Azure Container Registry instance by following these steps on the command line:
-
- 1. Sign in to the registry by executing the following command:
-
- `docker login <address>`
-
- Enter the username and password when asked for authentication.
-
- > [!NOTE]
- > The password isn't visible on the command line.
- 2. Tag your image by using this command:<br/>
- `docker tag cvtruck <address>/cvtruck`.
- 3. Push your image by using this command:<br/>
- `docker push <address>/cvtruck`.
-
- If successful, you should see `Pushed` on the command line along with the SHA for the image.
- 4. You can also confirm by checking your Azure Container Registry instance in the Azure portal. Here you'll see the name of the repository along with the tag.
-8. Set the IoT Hub connection string by selecting the **More actions** icon next to the **AZURE IOT HUB** pane in the lower-left corner. You can copy the string from the appsettings.json file. (Here's another recommended approach to ensure you have the proper IoT hub configured within Visual Studio Code via the [Select IoT Hub command](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Select-IoT-Hub).)
-
- ![Screenshot that shows Set IoT Hub Connection String.](./media/custom-vision/connection-string.png)
-9. Next, right-click src/edge/config/deployment.customvision.amd64.json, and select **Create Deployment for Single Device**.
-
- ![Screenshot that shows Create Deployment for Single Device.](./medi64-json.png)
-10. You'll then be asked to select an IoT Hub device. Select **ava-sample-iot-edge-device** from the drop-down list.
-11. In about 30 seconds, refresh the Azure IoT hub in the lower-left section. You should have the edge device with the following modules deployed:
-
- - Edge Hub (module name **edgeHub**)
- - Edge Agent (module name **edgeAgent**)
- - Video Analyzer (module name **avaedge**)
- - RTSP simulator (module name **rtspsim**, which simulates an RTSP server that acts as the source of a live video feed)
- - Custom Vision (module named **cv**, which is based on the toy truck detection model)
-
-From these steps, the Custom Vision module has now been added.
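-
-In step 6 above, the `registryCredentials` block follows the standard IoT Edge deployment manifest shape. A minimal sketch, assuming a registry named `myacr` and the environment-variable placeholders that the template tooling typically generates (your template may use different names):
-
-```json
-"registryCredentials": {
-  "myacr": {
-    "username": "$CONTAINER_REGISTRY_USERNAME_myacr",
-    "password": "$CONTAINER_REGISTRY_PASSWORD_myacr",
-    "address": "myacr.azurecr.io"
-  }
-}
-```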
-
-## Run the sample program
-
-If you open the topology for this tutorial in a browser, you'll see that the value of `inferencingUrl` has been set to `http://cv/score`. This setting means the inference server will return results after detecting toy trucks, if any, in the live video.
-
-1. In Visual Studio Code, open the **Extensions** tab (or select **Ctrl+Shift+X**) and search for Azure IoT Hub.
-2. Right-click and select **Extension Settings**.
-
- ![Screenshot that shows Extension Settings.](./media/custom-vision/extensions-tab.png)
-3. Search and enable **Show Verbose Message**.
-
- ![Screenshot that shows Show Verbose Message.](./media/custom-vision/show-verbose-message.png)
-4. ::: zone pivot="programming-language-csharp"
- [!INCLUDE [header](includes/common-includes/csharp-run-program.md)]
- ::: zone-end
-
- ::: zone pivot="programming-language-python"
- [!INCLUDE [header](includes/common-includes/python-run-program.md)]
- ::: zone-end
-
-5. The operations.json code starts off with calls to the direct methods `pipelineTopologyList` and `livePipelineList`. If you cleaned up resources after you completed previous quickstarts, this process will return empty lists and then pause. To continue, select the **Enter** key.
-
- The **TERMINAL** window shows the next set of direct method calls:
-
- - A call to `pipelineTopologySet` that uses the preceding `pipelineTopologyUrl`.
- - A call to `livePipelineSet` that uses the following body:
-
- ```
- {
- "@apiVersion": "1.1",
- "name": "Sample-Pipeline-1",
- "properties": {
- "topologyName": "InferencingWithHttpExtension",
- "description": "Sample pipeline description",
- "parameters": [
- {
- "name": "inferencingUrl",
- "value": "http://cv/score"
- },
- {
- "name": "rtspUrl",
- "value": "rtsp://rtspsim:554/media/t2.mkv"
- },
- {
- "name": "rtspUserName",
- "value": "testuser"
- },
- {
- "name": "rtspPassword",
- "value": "testpassword"
- }
- ]
- }
- }
- ```
-
- - A call to `livePipelineActivate` that activates the pipeline and the flow of video.
-    - A second call to `livePipelineList` that shows that the pipeline is active.
-
-6. The output in the **TERMINAL** window pauses at a **Press Enter to continue** prompt. Don't select **Enter** yet. Scroll up to see the JSON response payloads for the direct methods you invoked.
-7. Switch to the **OUTPUT** window in Visual Studio Code. You see messages that the Video Analyzer on IoT Edge module is sending to the IoT hub. The following section of this tutorial discusses these messages.
-8. The pipeline continues to run and print results. The RTSP simulator keeps looping the source video. To stop the pipeline, return to the **TERMINAL** window and select **Enter**. The next series of calls cleans up resources:
-
- - A call to `livePipelineDeactivate` deactivates the pipeline.
- - A call to `livePipelineDelete` deletes the pipeline.
- - A call to `pipelineTopologyDelete` deletes the topology.
- - A final call to `pipelineTopologyList` shows that the list is empty.
-
-## Interpret the results
-
-When you run the pipeline, the results from the HTTP extension processor node pass through the IoT Hub message sink node to the IoT hub. The messages you see in the **OUTPUT** window contain a body section and an `applicationProperties` section. For more information, see [Create and read IoT Hub messages](../../../iot-hub/iot-hub-devguide-messages-construct.md).
-
-In the following messages, the Video Analyzer module defines the application properties and the content of the body.
-
-### MediaSessionEstablished event
-
-When a pipeline is instantiated, the RTSP source node attempts to connect to the RTSP server that runs on the rtspsim-live555 container. If the connection succeeds, the following event is printed.
--
-```
-[IoTHubMonitor] [9:42:18 AM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "body": {
- "sdp": "SDP:\nv=0\r\no=- 1586450538111534 1 IN IP4 XXX.XX.XX.XX\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/camera-300s.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.03.06\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-300.000\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/camera-300s.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=4D0029;sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXX\r\na=control:track1\r\n"
- },
- "applicationProperties": {
- "dataVersion": "1.0",
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoanalyzers/{ava-account-name}",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sources/rtspSource",
- "eventType": "Microsoft.VideoAnalyzers.Diagnostics.MediaSessionEstablished",
- "eventTime": "2021-04-09T09:42:18.1280000Z"
- }
-}
-```
-
-In this message, notice these details:
-
-- The message is a diagnostics event. `MediaSessionEstablished` indicates that the RTSP source node (the subject) connected with the RTSP simulator and has begun to receive a simulated live feed.
-- In `applicationProperties`, `subject` indicates that the message was generated from the RTSP source node in the pipeline.
-- In `applicationProperties`, the event type indicates that this event is a diagnostics event.
-- The event time indicates the time when the event occurred.
-- The body contains data about the diagnostics event. In this case, the data comprises the [Session Description Protocol (SDP)](https://en.wikipedia.org/wiki/Session_Description_Protocol) details.
-
-### Inference event
-
-The HTTP extension processor node receives inference results from the Custom Vision container and emits the results through the IoT Hub message sink node as inference events.
--
-```
-{
- "body": {
- "timestamp": 145892470449324,
- "inferences": [
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "delivery truck",
- "confidence": 0.20541823
- },
- "box": {
- "l": 0.6826309,
- "t": -0.01415127,
- "w": 0.3135161,
- "h": 0.94683206
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "delivery truck",
- "confidence": 0.14967085
- },
- "box": {
- "l": 0.33310884,
- "t": 0.03174839,
- "w": 0.13532706,
- "h": 0.54967254
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "delivery truck",
- "confidence": 0.1352181
- },
- "box": {
- "l": 0.48884687,
- "t": 0.44746214,
- "w": 0.025887,
- "h": 0.05414263
- }
- }
- }
- ]
- },
- "properties": {
- "topic": "/subscriptions/...",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/processors/httpExtension",
- "eventType": "Microsoft.VideoAnalyzer.Analytics.Inference",
- "eventTime": "2021-05-14T21:24:09.436Z",
- "dataVersion": "1.0"
- },
- "systemProperties": {
- "iothub-connection-device-id": "avasample-iot-edge-device",
- "iothub-connection-module-id": "avaedge",
- "iothub-connection-auth-method": "{\"scope\":\"module\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
- "iothub-connection-auth-generation-id": "637563926153483223",
- "iothub-enqueuedtime": 1621027452077,
- "iothub-message-source": "Telemetry",
- "messageId": "96f7f0b5-728d-4e3e-a7bb-4e3198c58726",
- "contentType": "application/json",
- "contentEncoding": "utf-8"
- }
-```
-
-Note the following information in the preceding messages:
-
-- The subject in `properties` references the node in the pipeline from which the message was generated. In this case, the message originates from the HTTP extension processor.
-- The event type in `properties` indicates that this is an analytics inference event.
-- The event time indicates the time when the event occurred.
-- The body contains data about the analytics event. In this case, the event is an inference event, so the body contains an array of inferences called predictions.
-- The inferences section contains a list of predictions where a toy delivery truck (tag is "delivery truck") is found in the frame. As you recall, "delivery truck" is the custom tag that you provided to your custom trained model for the toy truck. The model detects and identifies the toy truck in the input video with different probability confidence scores.
-
-## Clean up resources
-
-If you intend to try the other tutorials or quickstarts, hold on to the resources you created. Otherwise, go to the Azure portal, browse to your resource groups, select the resource group under which you ran this tutorial, and delete all the resources.
-
-## Next steps
-
-Review additional challenges for advanced users:
-
-- Use an [IP camera](https://en.wikipedia.org/wiki/IP_camera) that has support for RTSP instead of using the RTSP simulator. You can search for IP cameras that support RTSP on the [ONVIF conformant](https://www.onvif.org/conformant-products/) products page. Look for devices that conform with profiles G, S, or T.
-- Use an AMD64 or x64 Linux device instead of an Azure Linux VM. This device must be in the same network as the IP camera. You can follow the instructions in [Install Azure IoT Edge runtime on Linux](../../../iot-edge/how-to-install-iot-edge.md).
-
-Then register the device with Azure IoT Hub by following instructions in [Deploy your first IoT Edge module to a virtual Linux device](../../../iot-edge/quickstart-linux.md).
azure-video-analyzer Analyze Live Video Use Your Model Grpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-live-video-use-your-model-grpc.md
- Title: Analyze live video with your own gRPC model
-description: This quickstart describes how to analyze live video with your own gRPC model with Video Analyzer.
-- Previously updated : 06/01/2021
-zone_pivot_groups: video-analyzer-programming-languages
---
-# Quickstart: Analyze a live video feed from a (simulated) IP camera using your own gRPC model
---
-This quickstart shows you how to use Azure Video Analyzer to analyze a live video feed from a (simulated) IP camera. You'll see how to apply a computer vision model to detect objects. A subset of the frames in the live video feed is sent to an inference service. The results are sent to IoT Edge Hub.
-
-This quickstart uses an Azure VM as an IoT Edge device, and it uses a simulated live video stream. It's based on sample code written in C#, and it builds on the [Detect motion and emit events quickstart](detect-motion-emit-events-quickstart.md).
-
-## Prerequisites
---
-## Review the sample video
-
-When you set up the Azure resources, a short video of highway traffic is copied to the Linux VM in Azure that you're using as the IoT Edge device. This quickstart uses the video file to simulate a live stream.
-
-Open an application such as [VLC media player](https://www.videolan.org/vlc/). Select Ctrl+N and then paste a link to [the highway intersection sample video](https://avamedia.blob.core.windows.net/public/camera-300s.mkv) to start playback. You see the footage of many vehicles moving in highway traffic.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4LTY4]
-
-In this quickstart, you'll use Video Analyzer to detect objects such as vehicles and persons. You'll publish associated inference events to IoT Edge Hub.
-
-## Create and deploy the pipeline
-
-### Examine and edit the sample files
---
-## Generate and deploy the IoT Edge deployment manifest
-
-1. Right-click the _src/edge/_ deployment.grpcyolov3icpu.template.json file and then select **Generate IoT Edge Deployment Manifest**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/analyze-live-video-use-your-model-grpc/generate-deployment-manifest.png" alt-text="Generate IoT Edge Deployment Manifest":::
-
-1. The _deployment.grpcyolov3icpu.amd64.json_ manifest file is created in the src/edge/config folder.
-
-1. Right-click src/edge/config/ **deployment.grpcyolov3icpu.amd64.json** and select **Create Deployment for Single Device**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/analyze-live-video-use-your-model-grpc/deployment-single-device.png" alt-text= "Create Deployment for Single Device":::
-
-1. When you're prompted to select an IoT Hub device, select avasample-iot-edge-device.
-1. After about 30 seconds, in the lower-left corner of the window, refresh Azure IoT Hub. The edge device now shows the following deployed modules:
-
- - The Video Analyzer module, named **avaedge**.
- - The **rtspsim** module, which simulates an RTSP server and acts as the source of a live video feed.
-   - The **avaextension** module, which is the YOLOv3 object detection model. It uses gRPC as the communication method, applies computer vision to the images, and returns multiple classes of object types. (A rough sketch of the corresponding gRPC extension node in the pipeline topology follows this list.)
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/vscode-common-screenshots/avaextension.png" alt-text= "YoloV3 object detection model":::
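-
-For reference, the `avaextension` endpoint is wired into the pipeline through a gRPC extension processor node in the topology. The following is only a rough sketch of that node's shape (names and values are illustrative; the exact definition, including image and credential settings, comes from the sample topology used in this quickstart):
-
-```json
-{
-  "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
-  "name": "grpcExtension",
-  "endpoint": {
-    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
-    "url": "${grpcExtensionAddress}"
-  },
-  "dataTransfer": {
-    "mode": "sharedMemory",
-    "SharedMemorySizeMiB": "75"
-  },
-  "inputs": [ { "nodeName": "rtspSource" } ]
-}
-```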
-
-## Run the sample program
-
-1. ::: zone pivot="programming-language-csharp"
- [!INCLUDE [header](includes/common-includes/csharp-run-program.md)]
- ::: zone-end
-
- ::: zone pivot="programming-language-python"
- [!INCLUDE [header](includes/common-includes/python-run-program.md)]
- ::: zone-end
-1. The **operations.json** code starts off with calls to the direct methods pipelineTopologyList and livePipelineList. If you cleaned up resources after you completed previous quickstarts, then this process will return empty lists and then pause. To continue, select the Enter key.
-
- ```
- --
- Executing operation pipelineTopologyList
- -- Request: pipelineTopologyList --
- {
- "@apiVersion": "1.1"
- }
- Response: pipelineTopologyList - Status: 200
- {
- "value": []
- }
- --
- Executing operation WaitForInput
-
- Press Enter to continue
- ```
-
-1. The TERMINAL window shows the next set of direct method calls:
-
- - A call to pipelineTopologySet that uses the preceding pipelineTopologyUrl
- - A call to livePipelineSet that uses the following body:
-
- ```
- {
- "@apiVersion": "1.1",
- "name": "Sample-Pipeline-1",
- "properties": {
- "topologyName": "InferencingWithGrpcExtension",
- "description": "Sample pipeline description",
- "parameters": [
- {
- "name": "rtspUrl",
- "value": "rtsp://rtspsim:554/media/camera-300s.mkv"
- },
- {
- "name": "rtspUserName",
- "value": "testuser"
- },
- {
- "name": "rtspPassword",
- "value": "testpassword"
- },
- {
- "name": "grpcExtensionAddress",
- "value": "tcp://avaextension:44000"
- }
- ]
- }
- }
- ```
-
- - A call to livePipelineActivate that starts the live pipeline and the flow of video.
- - A second call to livePipelineList that shows that the live pipeline is in the running state.
-
-1. The output in the TERMINAL window pauses at a Press Enter to continue prompt. Don't select Enter yet. Scroll up to see the JSON response payloads for the direct methods you invoked.
-1. Switch to the OUTPUT window in Visual Studio Code. You see messages that the Video Analyzer module is sending to the IoT hub. The following section of this quickstart discusses these messages.
-1. The pipeline continues to run and print results. The RTSP simulator keeps looping the source video. To stop the pipeline, return to the TERMINAL window and select Enter.
-
- The next series of calls cleans up resources:
-
- - A call to `livePipelineDeactivate` deactivates the live pipeline
- - A call to `livePipelineDelete` deletes the live pipeline.
- - A call to `pipelineTopologyDelete` deletes the topology.
- - A final call to `pipelineTopologyList` shows that the list is empty.
-
-## Interpret results
-
-When you run the pipeline topology, the results from the gRPC extension processor node pass through the IoT Hub message sink node to the IoT hub. The messages you see in the OUTPUT window contain a body section and an applicationProperties section. For more information, see [Create and read IoT Hub messages](../../../iot-hub/iot-hub-devguide-messages-construct.md).
-
-In the following messages, the Video Analyzer module defines the application properties and the content of the body.
-
-### MediaSessionEstablished event
-
-When a pipeline is instantiated, the RTSP source node attempts to connect to the RTSP server that runs on the rtspsim-live555 container. If the connection succeeds, then the following event is printed. The event type is Microsoft.VideoAnalyzers.Diagnostics.MediaSessionEstablished.
-
-```
-[IoTHubMonitor] [9:42:18 AM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "body": {
- "sdp": "SDP:\nv=0\r\no=- 1586450538111534 1 IN IP4 XXX.XX.XX.XX\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/camera-300s.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.03.06\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-300.000\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/camera-300s.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=4D0029;sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXX\r\na=control:track1\r\n"
- },
- "applicationProperties": {
- "dataVersion": "1.0",
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoanalyzers/{ava-account-name}",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sources/rtspSource",
- "eventType": "Microsoft.VideoAnalyzers.Diagnostics.MediaSessionEstablished",
- "eventTime": "2021-04-09T09:42:18.1280000Z"
- }
-}
-```
-
-In this message, notice these details:
-
-- The message is a diagnostics event. MediaSessionEstablished indicates that the RTSP source node (the subject) connected with the RTSP simulator and has begun to receive a (simulated) live feed.
-- In applicationProperties, subject indicates that the message was generated from the RTSP source node in the pipeline.
-- In applicationProperties, eventType indicates that this event is a diagnostics event.
-- The eventTime indicates the time when the event occurred.
-- The body contains data about the diagnostics event. In this case, the data comprises the Session Description Protocol (SDP) details.
-
-### Inference event
-
-The gRPC extension processor node receives inference results from the avaextension module. It then emits the results through the IoT Hub message sink node as inference events. In these events, the type is set to entity to indicate an entity, such as a car or truck. The eventTime value is the UTC time when the object was detected. In the following example, five cars were detected in the same video frame, with varying levels of confidence.
-
-```json
-[IoTHubMonitor] [1:48:04 PM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "timestamp": 145589011404622,
- "inferences": [
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.97052866
- },
- "box": {
- "l": 0.40896654,
- "t": 0.60390747,
- "w": 0.045092657,
- "h": 0.029998193
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.9547283
- },
- "box": {
- "l": 0.20050547,
- "t": 0.6094412,
- "w": 0.043425046,
- "h": 0.037724357
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.94567955
- },
- "box": {
- "l": 0.55363107,
- "t": 0.5320657,
- "w": 0.037418623,
- "h": 0.027014252
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.8916893
- },
- "box": {
- "l": 0.6642384,
- "t": 0.581689,
- "w": 0.034349587,
- "h": 0.027812533
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.8547814
- },
- "box": {
- "l": 0.584758,
- "t": 0.60079926,
- "w": 0.07082855,
- "h": 0.034121
- }
- }
- }
- ]
-}
-```
-
-In the messages, notice the following details:
-
-- The body section contains data about the analytics event. In this case, the event is an inference event, so the body contains inferences data.
-- The inferences section indicates that the type is entity. This section includes additional data about the entity.
-
-## Clean up resources
--
-## Next steps
-
-- Try running different pipeline topologies using the gRPC protocol.
-
-- Review additional challenges for advanced users:
-
- - Use an [IP camera](https://en.wikipedia.org/wiki/IP_camera) that has support for RTSP instead of using the RTSP simulator. You can search for IP cameras that support RTSP on the [ONVIF conformant](https://www.onvif.org/conformant-products/) products page. Look for devices that conform with profiles G, S, or T.
- - Use an AMD64 or x64 Linux device instead of an Azure Linux VM. This device must be in the same network as the IP camera. You can follow the instructions in [Install Azure IoT Edge runtime on Linux](../../../iot-edge/how-to-install-iot-edge.md?view=iotedge-2018-06&preserve-view=true). Then register the device with Azure IoT Hub by following instructions in [Deploy your first IoT Edge module to a virtual Linux device](../../../iot-edge/quickstart-linux.md?view=iotedge-2018-06&preserve-view=true).
azure-video-analyzer Analyze Live Video Use Your Model Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-live-video-use-your-model-http.md
- Title: Analyze live video with your own HTTP model
-description: Analyze a live video feed from a (simulated) IP camera; apply a computer vision model to detect objects. Some frames in the feed are sent to an inference service; results are sent to IoT Edge Hub.
-- Previously updated : 11/04/2021
-zone_pivot_groups: video-analyzer-programming-languages
---
-# Quickstart: Analyze a live video feed from a (simulated) IP camera using your own HTTP model
---
-This quickstart shows you how to use Azure Video Analyzer to analyze a live video feed from a (simulated) IP camera. You'll see how to apply a computer vision model to detect objects. A subset of the frames in the live video feed is sent to an inference service. The results are sent to IoT Edge Hub.
-
-The quickstart uses an Azure VM as an IoT Edge device, and it uses a simulated live video stream. It builds on the [Detect motion and emit events](detect-motion-emit-events-quickstart.md) quickstart.
-
-## Prerequisites
---
-## Review the sample video
-
-When you set up the Azure resources, a short video of highway traffic is copied to the Linux VM in Azure that you're using as the IoT Edge device. This quickstart uses the video file to simulate a live stream.
-
-Open an application such as [VLC media player](https://www.videolan.org/vlc/). Select Ctrl+N and then paste a link to [the highway intersection sample video](https://avamedia.blob.core.windows.net/public/camera-300s.mkv) to start playback. You see the footage of many vehicles moving in highway traffic.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4LTY4]
-
-## Create and deploy the livePipeline
-
-### Examine and edit the sample files
---
-## Generate and deploy the IoT Edge deployment manifest
-
-1. Right-click the _src/edge/deployment.yolov3.template.json_ file and then select **Generate IoT Edge Deployment Manifest**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/analyze-live-video-use-your-model-http/generate-deployment-manifest.png" alt-text="Screenshot of Generate IoT Edge Deployment Manifest":::
-
-1. The _deployment.yolov3.amd64.json_ manifest file is created in the _src/edge/config_ folder.
-1. Right-click _src/edge/config/deployment.yolov3.amd64.json_ and select **Create Deployment for Single Device**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/analyze-live-video-use-your-model-http/deployment-single-device.png" alt-text= "Screenshot of Create Deployment for Single Device":::
-
-1. When you're prompted to select an IoT Hub device, select **ava-sample-iot-edge-device**.
-1. After about 30 seconds, in the lower-left corner of the window, refresh Azure IoT Hub. The edge device now shows the following deployed modules:
-
- - The Video Analyzer module, named **avaedge**.
- - The **rtspsim** module, which simulates an RTSP server and acts as the source of a live video feed.
- - The **avaextension** module, which is the YoloV3 object detection model that applies computer vision to the images and returns multiple classes of object types
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/vscode-common-screenshots/avaextension.png" alt-text= "Screenshot of YoloV3 object detection model":::
-
-## Run the sample program
-
-1. ::: zone pivot="programming-language-csharp"
- [!INCLUDE [header](includes/common-includes/csharp-run-program.md)]
- ::: zone-end
-
- ::: zone pivot="programming-language-python"
- [!INCLUDE [header](includes/common-includes/python-run-program.md)]
- ::: zone-end
-1. The operations.json code starts off with calls to the direct methods `pipelineTopologyList` and `livePipelineList`. If you cleaned up resources after you completed previous quickstarts, then this process will return empty lists and then pause. To continue, select the Enter key.
-
- ```
- --
- Executing operation pipelineTopologyList
- -- Request: pipelineTopologyList --
- {
- "@apiVersion": "1.1"
- }
- Response: pipelineTopologyList - Status: 200
- {
- "value": []
- }
- --
- Executing operation WaitForInput
-
- Press Enter to continue
- ```
-
-1. The **TERMINAL** window shows the next set of direct method calls:
-
- - A call to `pipelineTopologySet` that uses the preceding pipelineTopologyUrl.
- - A call to `livePipelineSet` that uses the following body:
-
- ```
- {
- "@apiVersion": "1.1",
- "name": "Sample-Pipeline-1",
- "properties": {
- "topologyName": "InferencingWithHttpExtension",
- "description": "Sample pipeline description",
- "parameters": [
- {
- "name": "rtspUrl",
- "value": "rtsp://rtspsim:554/media/camera-300s.mkv"
- },
- {
- "name": "rtspUserName",
- "value": "testuser"
- },
- {
- "name": "rtspPassword",
- "value": "testpassword"
- }
- ]
- }
- }
- ```
-
- - A call to livePipelineActivate that starts the live pipeline and the flow of video.
- - A second call to `livePipelineList` that shows that the live pipeline is in the running state.
-
-1. The output in the **TERMINAL** window pauses at a **Press Enter to continue** prompt. Don't select Enter yet. Scroll up to see the JSON response payloads for the direct methods you invoked.
-1. Switch to the **OUTPUT** window in Visual Studio Code. You see messages that the Video Analyzer module is sending to the IoT hub. The following section of this quickstart discusses these messages.
-1. The pipeline continues to run and print results. The RTSP simulator keeps looping the source video. To stop the pipeline, return to the **TERMINAL** window and select Enter.
-
- The next series of calls cleans up resources:
-
- - A call to `livePipelineDeactivate` deactivates the live pipeline.
- - A call to `livePipelineDelete` deletes the live pipeline.
- - A call to `pipelineTopologyDelete` deletes the topology.
- - A final call to `pipelineTopologyList` shows that the list is empty.
-
-## Interpret results
-
-When you run the live pipeline, the results from the HTTP extension processor node pass through the IoT Hub message sink node to the IoT hub. The messages you see in the **OUTPUT** window contain a body section and an applicationProperties section. For more information, see [Create and read IoT Hub messages](../../../iot-hub/iot-hub-devguide-messages-construct.md).
-
-In the following messages, the Video Analyzer module defines the application properties and the content of the body.
-
-**MediaSessionEstablished event**
-
-When a pipeline is instantiated, the RTSP source node attempts to connect to the RTSP server that runs on the rtspsim-live555 container. If the connection succeeds, then the following event is printed. The event type is **Microsoft.VideoAnalyzer.Diagnostics.MediaSessionEstablished**.
-
-```
-[IoTHubMonitor] [9:42:18 AM] Message received from [avasampleiot-edge-device/avaedge]:
-{
- "body": {
- "sdp": "SDP:\nv=0\r\no=- 1586450538111534 1 IN IP4 XXX.XX.XX.XX\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/camera-300s.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.03.06\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-300.000\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/camera-300s.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=4D0029;sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXX\r\na=control:track1\r\n"
- },
- "applicationProperties": {
- "dataVersion": "1.0",
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoanalyzers/{ava-account-name}",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sources/rtspSource",
- "eventType": "Microsoft.VideoAnalyzers.Diagnostics.MediaSessionEstablished",
- "eventTime": "2021-04-09T09:42:18.1280000Z"
- }
-}
-```
-
-In this message, notice these details:
-
-- The message is a diagnostics event. MediaSessionEstablished indicates that the RTSP source node (the subject) connected with the RTSP simulator and has begun to receive a (simulated) live feed.
-- In applicationProperties, subject indicates that the message was generated from the RTSP source node in the pipeline.
-- In applicationProperties, eventType indicates that this event is a diagnostics event.
-- The eventTime indicates the time when the event occurred.
-- The body contains data about the diagnostics event. In this case, the data comprises the [Session Description Protocol (SDP)](https://en.wikipedia.org/wiki/Session_Description_Protocol) details.
-
-### Inference event
-
-The HTTP extension processor node receives inference results from the yolov3 module. It then emits the results through the IoT Hub message sink node as inference events.
-
-In these events, the type is set to entity to indicate it's an entity, such as a car or truck. The eventTime value is the UTC time when the object was detected.
-
-In the following example, five cars were detected in the same video frame, with varying levels of confidence.
-
-```
-[IoTHubMonitor] [1:48:04 PM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "timestamp": 145589011404622,
- "inferences": [
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.97052866
- },
- "box": {
- "l": 0.40896654,
- "t": 0.60390747,
- "w": 0.045092657,
- "h": 0.029998193
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.9547283
- },
- "box": {
- "l": 0.20050547,
- "t": 0.6094412,
- "w": 0.043425046,
- "h": 0.037724357
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.94567955
- },
- "box": {
- "l": 0.55363107,
- "t": 0.5320657,
- "w": 0.037418623,
- "h": 0.027014252
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.8916893
- },
- "box": {
- "l": 0.6642384,
- "t": 0.581689,
- "w": 0.034349587,
- "h": 0.027812533
- }
- }
- },
- {
- "type": "entity",
- "entity": {
- "tag": {
- "value": "car",
- "confidence": 0.8547814
- },
- "box": {
- "l": 0.584758,
- "t": 0.60079926,
- "w": 0.07082855,
- "h": 0.034121
- }
- }
- }
- ]
-}
-```
-
-In the messages, notice the following details:
-
-- The body section contains data about the analytics event. In this case, the event is an inference event, so the body contains inferences data.
-- The inferences section indicates that the type is entity. This section includes additional data about the entity.
-- The timestamp value indicates the time when the event was received.
-
-## Clean up resources
--
-## Next steps
-
-Review additional challenges for advanced users:
-
-- Use an [IP camera](https://en.wikipedia.org/wiki/IP_camera) that has support for RTSP instead of using the RTSP simulator. You can search for IP cameras that support RTSP on the [ONVIF conformant](https://www.onvif.org/conformant-products/) products page. Look for devices that conform with profiles G, S, or T.
-- Use an AMD64 or x64 Linux device instead of an Azure Linux VM. This device must be in the same network as the IP camera. You can follow the instructions in [Install Azure IoT Edge runtime on Linux](../../../iot-edge/how-to-install-iot-edge.md?view=iotedge-2018-06&preserve-view=true). Then register the device with Azure IoT Hub by following instructions in [Deploy your first IoT Edge module to a virtual Linux device](../../../iot-edge/quickstart-linux.md?view=iotedge-2018-06&preserve-view=true).
azure-video-analyzer Camera Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/camera-discovery.md
- Title: Discovering ONVIF-capable cameras in the local subnet
-description: This how-to shows you how you can use Video Analyzer edge module to discover ONVIF-capable cameras in your local subnet.
- Previously updated : 11/04/2021---
-# Discovering ONVIF-capable cameras in the local subnet
---
-This how-to guide walks you through how to use the Azure Video Analyzer edge module to discover ONVIF-compliant cameras on the same subnet as the IoT Edge device. Open Network Video Interface Forum (ONVIF) is an open standard through which discrete IP-based physical devices, such as surveillance cameras, can communicate with additional networked devices and software. For more information about ONVIF, visit the [ONVIF](https://www.onvif.org/about/mission/) website.
-
-## Prerequisites
-
-To complete the steps in this article, you need to:
-
-- Have an active Azure subscription.
-- Install the [Azure CLI extension for IoT](https://github.com/Azure/azure-iot-cli-extension#installation)
-- Complete one of the following:
- - [Quickstart: Get started with Azure Video Analyzer](get-started-detect-motion-emit-events.md)
- - [Quickstart: Get started with Azure Video Analyzer in the Azure portal](get-started-detect-motion-emit-events-portal.md)
-- Have the Video Analyzer edge module version 1.1 (or newer) deployed to your IoT Edge device
-- If you use a [Hyper-V](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/mt169373(v=ws.11)) Virtual Machine or [EFLOW](../../../iot-edge/how-to-install-iot-edge-on-windows.md?view=iotedge-2018-06&tabs=windowsadmincenter&preserve-view=true), an [external switch](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines) is required to perform the onvifDeviceDiscover direct method call. This requirement is due to a [multicast](https://en.wikipedia.org/wiki/Multicast) call used for ONVIF device discovery.
-
-The ONVIF feature of the Video Analyzer edge module requires specific container create options, as described in [Enable ONVIF discovery feature](#enable-onvif-discovery-feature). The ONVIF discovery feature also requires that port 3702 be available.
-
-> [!NOTE]
-> If you have a new deployment of version 1.1 of the Video Analyzer edge module, you can skip to the section for [Use direct method calls](#use-direct-method-calls). Otherwise, follow the below sections to upgrade your existing Video Analyzer edge module to enable the ONVIF discovery feature.
-
-## Check the version
-
-From a command prompt run the following commands:
-
-```CLI
-az account set --subscription <YOUR_SUBSCRIPTION_NAME>
-az iot hub module-twin show -m <VIDEO_ANALYZER_IOT_EDGE_MODULE_NAME> -n <IOT_HUB_NAME> -d <IOT_EDGE_DEVICE_NAME> --query 'properties.reported.ProductInfo' -o tsv
-```
-
-If the result of the above command is **Azure Video Analyzer:1.0.1** then run the following steps to update the Video Analyzer edge module to version 1.1.
-
-1. In the Azure portal navigate to the IoT Hub (`<IOT_HUB_NAME>` above) that is used with your Video Analyzer edge module deployment
-1. Click on **IoT Edge** under Automatic Device Management and select the IoT Edge device (`<IOT_EDGE_DEVICE_NAME>` above) to which the Video Analyzer edge module has been deployed
-1. Click on **Set modules** and click on **Review + create**
-1. Click on **Create**
-1. After a few minutes the Video Analyzer edge module will update and you can run the above command again to verify
-
-### Enable ONVIF discovery feature
-
-Next, add additional options into the container create options for the Video Analyzer edge module. This allows it to communicate through the host network to discover ONVIF enabled cameras on the same subnet.
-
-1. In the Azure portal navigate to the IoT Hub
-1. Click on **IoT Edge** under Automatic Device Management and select the IoT Edge device to which the Video Analyzer edge module has been deployed
-1. Click on **Set modules** and select the Video Analyzer edge module
-1. Go to the **Container Create Options** tab and add the following JSON entries to the respective sections:
-
- - In the JSON entry box, after the first `{` enter the following:
-
- ```JSON
- "NetworkingConfig": {
- "EndpointsConfig": {
- "host": {}
- }
- },
- ```
-    - In the `HostConfig` JSON object (a combined sketch of these settings follows this procedure):
-
- ```JSON
-    "NetworkMode": "host",
- ```
-1. Click **Update** at the bottom
-1. Click **Review + create**
-1. Click **Create**
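-
-Putting the two JSON snippets from the container create options together, the Video Analyzer edge module's create options end up containing entries shaped like the following sketch (any settings that were already present in the create options stay as they are):
-
-```JSON
-{
-  "NetworkingConfig": {
-    "EndpointsConfig": {
-      "host": {}
-    }
-  },
-  "HostConfig": {
-    "NetworkMode": "host"
-  }
-}
-```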
-
-## Use direct method calls
-
-The Video Analyzer edge module provides direct method calls for ONVIF discovery of network attached cameras on the same subnet. The following steps apply to both the `onvifDeviceDiscover` and the `onvifDeviceGet` sections below:
-
-1. In the Azure portal navigate to the IoT Hub
-1. Click on **IoT Edge** under Automatic Device Management and select the IoT Edge device to which the Video Analyzer edge module has been deployed
-1. Click on the Video Analyzer edge module and click on **Direct method** in the menu bar at the top
-
-### onvifDeviceDiscover
-
-This direct method lists all the discoverable ONVIF devices on the same subnet as the Video Analyzer edge module.
--
-1. In the method name enter:
-
- ```JSON
- onvifDeviceDiscover
- ```
-1. In the payload enter:
-
- ```JSON
- {
- "@apiVersion":"1.1",
- "discoveryDuration":"PT8S"
- }
- ```
-
- The discovery duration is the amount of time that the Video Analyzer edge module waits to receive responses from ONVIF discoverable devices. It might be necessary in a large environment to adjust this value.
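-
-    For example, in a larger environment you might use a longer window (the value `PT15S` here is just illustrative):
-
-    ```JSON
-    {
-        "@apiVersion":"1.1",
-        "discoveryDuration":"PT15S"
-    }
-    ```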
-
-1. Within a few seconds you should see a `result` such as the following:
-
- ```JSON
- {
- "status": 200,
- "payload": {
- "value": [
- {
- "serviceIdentifier": "{urn:uuid}",
- "remoteIPAddress": "{IP_ADDRESS}",
- "scopes": [
- "onvif://www.onvif.org/type/Network_Video_Transmitter",
- "onvif://www.onvif.org/name/{CAMERA_MANUFACTURE}",
- "onvif://www.onvif.org/location/",
- "onvif://www.onvif.org/hardware/{CAMERA_MODEL}",
- "onvif://www.onvif.org/Profile/Streaming",
- "onvif://www.onvif.org/Profile/G",
- "onvif://www.onvif.org/Profile/T"
- ],
- "endpoints": [
- "http://<IP_ADDRESS>/onvif/device_service",
- "https://<IP_ADDRESS>/onvif/device_service"
-            ]
- }
- ]
- }
- }
- ```
- The return status of 200 indicates that the direct method call was handled successfully.
-
-### onvifDeviceGet
-
-The onvifDeviceGet call supports both unsecured and TLS enabled endpoints. This direct method call retrieves detailed information about a specific ONVIF device.
-
-# [UnsecuredEndpoint](#tab/unsecuredendpoint)
-
-> [!NOTE]
-> When the onvifDeviceGet call is made to an unsecured endpoint on an ONVIF enabled camera, you need to set the Video Analyzer edge module's module twin property `"allowUnsecuredEndpoints"` to `true`. For more information, see the article [Module twin properties](./module-twin-configuration-schema.md).
-
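-For reference, a minimal sketch of that change in the module twin's desired properties is shown below (the exact placement of the property is described in the module twin article linked above; treat this shape as illustrative):
-
-```JSON
-{
-  "properties.desired": {
-    "allowUnsecuredEndpoints": true
-  }
-}
-```
-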
-1. In the method name enter:
-
- ```JSON
- onvifDeviceGet
- ```
-1. In the payload enter:
-
- ```JSON
- {
- "endpoint": {
- "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
- "credentials": {
- "username": "<USER_NAME>",
- "password": "<PASSWORD>",
- "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials"
- },
- "url": "http://<IP_ADDRESS>/onvif/device_service"
- },
- "@apiVersion": "1.1"
- }
- ```
-
-# [TlsEndpoint](#tab/tlsendpoint)
-
-1. In the method name enter:
-
- ```JSON
- onvifDeviceGet
- ```
-1. In the payload enter:
-
- ```JSON
- {
- "endpoint": {
- "@type": "#Microsoft.VideoAnalyzer.TlsEndpoint",
- "credentials": {
- "username": "<USER_NAME>",
- "password": "<PASSWORD>",
- "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials"
- },
- "url": "https://<IP_ADDRESS>/onvif/device_service"
- },
- "@apiVersion": "1.1"
- }
-    ```
---
-In the above payload:
-
-- `url` is the IP address of the ONVIF device from which you wish to get additional details
-- `username` is the user name that is used to authenticate with the device, which has permissions to the ONVIF features. Some devices require specific accounts to be created for this purpose.
-- `password` is the password for that user name
-
- Within a few seconds you should see a result such as:
-
- ```JSON
- {
- "status": 200,
- "payload": {
- "hostname": {
- "fromDhcp": true,
- "hostname": "{NAME_OF_THE_ONVIF_DEVICE}"
- },
- "systemDateTime": {
- "type": "ntp",
- "time": "2021-09-28T03:05:05.000Z",
- "timeZone": "GMT"
- },
- "dns": {
- "fromDhcp": true,
- "ipv4Address": [
- "{IP_ADDRESS}"
- ],
- "ipv6Address": []
- },
- "mediaProfiles": [
- {
- "name": "{Profile1}",
- "mediaUri": {
- "uri": "{RTSP_URI}"
- },
- "videoEncoderConfiguration": {
- "encoding": "h264",
- "resolution": {
- "width": 3840,
- "height": 2160
- },
- "rateControl": {
- "bitRateLimit": 15600,
- "encodingInterval": 1,
- "frameRateLimit": 30,
- "guaranteedFrameRate": false
- },
- "quality": 50,
- "h264": {
- "govLength": 255,
- "profile": "main"
- }
- }
- },
- {
- "name": "{Profile2}",
- "mediaUri": {
- "uri": "{RTSP_URI}"
- },
- "videoEncoderConfiguration": {
- "encoding": "h264",
- "resolution": {
- "width": 1280,
- "height": 720
- },
- "rateControl": {
- "bitRateLimit": 1900,
- "encodingInterval": 1,
- "frameRateLimit": 30,
- "guaranteedFrameRate": false
- },
- "quality": 50,
- "h264": {
- "govLength": 255,
- "profile": "main"
- }
- }
- }
- ]
- }
- }
- ```
-
-### Return status of onvifDeviceGet
-
-| Status | Code | Meaning / solution |
-| | | -- |
-| 200 | Success | The direct method call completed successfully. |
-| 400 | Bad Request | The request was malformed |
-| 403 | Forbidden | The direct method call could not successfully retrieve the requested information from the ONVIF device due to an authentication failure. Check to ensure that the username and/or password in the message body is correct. |
-| 500 | Error | An unknown error occurred. |
-| 502 | Bad Gateway | Received an invalid response from an upstream service. |
-| 504 | Timeout | The direct method call expired before the response of the ONVIF device was received. |
-
-## Troubleshooting
-
-This section covers some troubleshooting steps:
-
-- In the Azure portal, on the IoT Edge module direct method blade, if you receive the error "An error prevented the operation from successfully completing. The request failed with status code 504." (see the image below):
-
- :::image type="content" source="./media/camera-discovery/five-zero-four-error.png" alt-text="Screenshot that shows the 504 error.":::
-
- Check to ensure that the setting for `"discoveryDuration":"PT8S"` in the above direct method call is shorter than the `Connection Timeout` or `Method Timeout` values.
-
- :::image type="content" source="./media/camera-discovery/five-zero-four-error-fix.png" alt-text="Screenshot that shows the 504 error fix.":::
-- If the direct method `Results` field displays `{"status":200,"payload":{"value":[]}}`, you may need a longer discovery duration.
-
- :::image type="content" source="./media/camera-discovery/result-status-two-hundred-null.png" alt-text="The message return is displayed in the direct method `Results` field":::
-
- Adjust the time value (x) in `"discoveryDuration":"PTxS"` to a larger number. Also adjust the `Connection Timeout` and/or `Method Timeout` values accordingly.
-- The `onvifDeviceGet` direct method call will not display any media profiles for H.265 encoded media streams.
-- A return status of 403 can be returned in the event that the user account used to connect to the ONVIF device does not have permissions to the ONVIF camera features. Some ONVIF-compliant cameras require that a user is added to the ONVIF security settings to retrieve the ONVIF device information.
-- Currently the Video Analyzer edge module will return up to 200 ONVIF enabled cameras that are reachable on the same subnet via multicast. This also requires that port 3702 is available.
-- If onvifDeviceGet is called with a TLS Endpoint, the ONVIF device name and IP address must be added to the hosts (/etc/hosts) file on the edge device. If a self-signed certificate is used for TLS encryption, the certificate also needs to be added to the IoT Edge device's certificate store.
-- For the onvifDeviceDiscover direct method call from an EFLOW virtual machine, the following steps must be performed:
- - Connect to the EFLOW virtual machine and run `sudo iptables -I INPUT -p udp -j ACCEPT`
-
-## Next steps
-
-- Try [Quickstart: Analyze a live video feed from a (simulated) IP camera using your own HTTP model](analyze-live-video-use-your-model-http.md) with the discovered ONVIF device
azure-video-analyzer Computer Vision For Spatial Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/computer-vision-for-spatial-analysis.md
- Title: Analyze Live Video with Computer Vision for Spatial Analysis
-description: This tutorial shows you how to use Azure Video Analyzer together with Computer Vision spatial analysis AI feature from Azure Cognitive Services to analyze a live video feed from a (simulated) IP camera.
---- Previously updated : 11/04/2021---
-# Tutorial: Live Video with Computer Vision for Spatial Analysis (preview)
---
-This tutorial shows you how to use Azure Video Analyzer together with the [Computer Vision for spatial analysis AI service from Azure Cognitive Services](../../../cognitive-services/computer-vision/intro-to-spatial-analysis-public-preview.md) to analyze a live video feed from a (simulated) IP camera. You'll see how this inference server enables you to analyze the streaming video to understand spatial relationships between people and movement in physical space. A subset of the frames in the video feed is sent to this inference server, and the results are sent to IoT Edge Hub. When certain conditions are met, video clips are recorded and stored as videos in the Video Analyzer account.
-
-In this tutorial you will:
-
-> [!div class="checklist"]
->
-> - Set up resources
-> - Examine the code
-> - Run the sample code
-> - Monitor events
--
-## Suggested pre-reading
-
-Read these articles before you begin:
-
-- [Video Analyzer overview](../overview.md)
-- [Video Analyzer terminology](../terminology.md)
-- [Pipeline concepts](../pipeline.md)
-- [Event-based video recording](record-event-based-live-video.md)
-- [Azure Cognitive Service Computer Vision container](../../../cognitive-services/computer-vision/intro-to-spatial-analysis-public-preview.md) for spatial analysis.
-
-## Prerequisites
-
-The following are prerequisites for connecting the spatial-analysis module to Azure Video Analyzer module.
--
- > [!Note]
- > Make sure the network that your development machine is connected to permits Advanced Message Queueing Protocol over port 5671. This setup enables Azure IoT Tools to communicate with Azure IoT Hub.
-
-## Set up Azure resources
-
-1. **Choose a compute device**
-
- To run the Spatial Analysis container, you need a compute device with a [NVIDIA Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/). We recommend that you use **[Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/)** with GPU acceleration, however the container runs on any other **desktop machine** or **Azure VM** that has [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) installed on the host computer.
-
- #### [Azure Stack Edge device](#tab/azure-stack-edge)
-
- Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge computing device with network data transfer capabilities. For detailed preparation and setup instructions, see the [Azure Stack Edge documentation](../../../databox-online/azure-stack-edge-deploy-prep.md).
-
- #### [Desktop machine](#tab/desktop-machine)
-
- #### Minimum hardware requirements
-
- - 4 GB system RAM
- - 4 GB of GPU RAM
- - 8 core CPU
- - 1 NVIDIA Tesla T4 GPU
- - 20 GB of HDD space
-
- #### Recommended hardware
-
- - 32 GB system RAM
- - 16 GB of GPU RAM
- - 8 core CPU
- - 2 NVIDIA Tesla T4 GPUs
- - 50 GB of SSD space
-
- #### [Azure VM with GPU](#tab/virtual-machine)
-
- You can utilize an [NC series VM](../../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) that has one K80 GPU.
-
-1. **Set up the edge device**
-
- #### [Azure Stack Edge device](#tab/azure-stack-edge)
- [Configure compute on the Azure Stack Edge portal](../../../cognitive-services/computer-vision/spatial-analysis-container.md#configure-compute-on-the-azure-stack-edge-portal)
- #### [Desktop machine](#tab/desktop-machine)
- [Follow these instructions if your host computer isn't an Azure Stack Edge device.](../../../cognitive-services/computer-vision/spatial-analysis-container.md#install-nvidia-cuda-toolkit-and-nvidia-graphics-drivers-on-the-host-computer)
- #### [Azure VM with GPU](#tab/virtual-machine)
- 1. [Create the VM](../../../cognitive-services/computer-vision/spatial-analysis-container.md?tabs=virtual-machine#create-the-vm) and install the necessary dockers on the VM
-
- > [!Important]
- > Please **skip the IoT Deployment manifest** step mentioned in that document. You will be using our own **[deployment manifest](#configure-deployment-template)** file to deploy the required containers.
-
- 1. Connect to your VM and in the terminal type in the following command:
- ```bash
- bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"
- ```
- Azure Video Analyzer module runs on the edge device with non-privileged local user accounts. Additionally, it needs certain local folders for storing application configuration data. Finally, for this how-to guide you are using a [RTSP simulator](https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/rtspsim-live555) that relays a video feed in real time to AVA module for analysis. This simulator takes as input pre-recorded video files from an input directory.
-
-    The prep-device script used above automates these tasks, so you can run one command and have all the relevant input and configuration folders, video input files, and user accounts with the required privileges created for you. Once the command finishes successfully, you should see the following folders created on your edge device.
-
- * `/home/localedgeuser/samples`
- * `/home/localedgeuser/samples/input`
- * `/var/lib/videoanalyzer`
- * `/var/media`
-
- Note the video files (*.mkv) in the /home/localedgeuser/samples/input folder, which serve as input files to be analyzed.
-
-1. Next, deploy the other Azure resources.
-
- [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/ava-click-to-deploy)
-
- > [!NOTE]
-    > The button above creates and uses the default virtual machine, which does NOT have an NVIDIA GPU. Use the "Use existing edge device" option when asked in the Azure Resource Manager (ARM) template, and use the IoT Hub and device information from the step above.
- > :::image type="content" source="./media/spatial-analysis/use-existing-device.png" alt-text="Use existing device":::
-
- [!INCLUDE [resources](./includes/common-includes/azure-resources.md)]
-
-## Overview
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/spatial-analysis/overview.png" alt-text="Spatial Analysis overview":::
-
-This diagram shows how the signals flow in this tutorial. An [edge module](https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/rtspsim-live555) simulates an IP camera hosting a Real-Time Streaming Protocol (RTSP) server. An [RTSP source](../pipeline.md#rtsp-source) node pulls the video feed from this server and sends video frames to the `CognitiveServicesVisionProcessor` node.
-
-The `CognitiveServicesVisionProcessor` node plays the role of a proxy. It converts the video frames to the specified image type. Then it relays the image over **shared memory** to another edge module that runs AI operations behind a gRPC endpoint. In this example, that edge module is the spatial-analysis module. The `CognitiveServicesVisionProcessor` node does two things:
-
-- It gathers the results and publishes events to the [IoT Hub sink](../pipeline.md#iot-hub-message-sink) node. The node then sends those events to [IoT Edge Hub](../../../iot-fundamentals/iot-glossary.md#iot-edge-hub).
-- It also captures a 30-second video clip from the RTSP source using a [signal gate processor](../pipeline.md#signal-gate-processor) and stores it as a Video sink.
-
-## Create the Computer Vision resource
-
-You need to create an Azure resource of type [Computer Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) for the **Standard S1 tier** either on [Azure portal](../../../iot-edge/how-to-deploy-modules-portal.md) or via Azure CLI.
-
-## Set up your development environment
--
-## Configure deployment template
-#### [Azure Stack Edge device](#tab/azure-stack-edge)
-Look for the deployment file in **/src/edge/deployment.spatialAnalysis.ase.template.json**. The template includes the `avaedge` module, the `rtspsim` module, and our `spatialanalysis` module.
-
-In your deployment template file:
-
-1. The deployment manifest uses port 50051 for communication between the `avaedge` and `spatialanalysis` modules. If the port is being used by any other application, set the port binding in the `spatialanalysis` module to an open port.
-
- ```
- "PortBindings": {
- "50051/tcp": [
- {
- "HostPort": "50051"
- }
- ]
- }
- ```
-
-1. `IpcMode` in the `avaedge` and `spatialanalysis` module createOptions should be the same and set to **host** (a minimal fragment is shown after these steps).
-1. For the RTSP simulator to work, ensure that you have set up the Volume Bounds when using an Azure Stack Edge device.
-
- 1. [Connect to the SMB share](../../../databox-online/azure-stack-edge-deploy-add-shares.md#connect-to-an-smb-share) and copy the [sample retail shop video file](https://avamedia.blob.core.windows.net/public/retailshop-15fps.mkv) to the Local share.
-
- > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWMIPP]
-
- 1. See that the rtspsim module has the following configuration:
- ```
- "createOptions": {
- "HostConfig": {
- "Mounts": [
- {
- "Target": "/live/mediaServer/media",
- "Source": "<your Local Docker Volume Mount name>",
- "Type": "volume"
- }
- ],
- "PortBindings": {
- "554/tcp": [
- {
- "HostPort": "554"
- }
- ]
- }
- }
- }
- ```
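-
-For reference, a minimal sketch of the `IpcMode` setting inside the `createOptions` of the `avaedge` and `spatialanalysis` modules is shown below; all other `HostConfig` options are omitted here:
-
-```
-"createOptions": {
-    "HostConfig": {
-        "IpcMode": "host"
-    }
-}
-```
-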
-#### [Desktop machine](#tab/desktop-machine)
-Look for the deployment file in **/src/edge/deployment.spatialAnalysis.generic.template.json**. The template includes the `avaedge` module, the `rtspsim` module, and our `spatialanalysis` module.
-
-#### [Azure VM with GPU](#tab/virtual-machine)
-Look for the deployment file in **/src/edge/deployment.spatialAnalysis.generic.template.json**. The template includes the `avaedge` module, the `rtspsim` module, and our `spatialanalysis` module.
---
-The following table shows the environment variables used by the IoT Edge module. You can also set them in the deployment manifest mentioned above by using the `env` attribute of the `spatialanalysis` module (a sketch follows the table):
-
-| Setting Name | Value | Description|
-|--|--|--|
-| DISPLAY | :1 | This value needs to be the same as the output of `echo $DISPLAY` on the host computer. Azure Stack Edge devices don't have a display, so this setting isn't applicable to them.|
-| ARCHON_SHARED_BUFFER_LIMIT | 377487360 | **Do not modify**|
-| ARCHON_LOG_LEVEL | Info; Verbose | Logging level, select one of the two values|
-| QT_X11_NO_MITSHM | 1 | **Do not modify**|
-| OMP_WAIT_POLICY | PASSIVE | **Do not modify**|
-| EULA | accept | This value needs to be set to *accept* for the container to run |
-| ARCHON_TELEMETRY_IOTHUB | true | Set this value to true to send the telemetry events to IoT Hub |
-| BILLING | your Endpoint URI| Collect this value from Azure portal from your Computer Vision resource. You can find it in the **Keys and Endpoint** blade for your resource.|
-| APIKEY | your API Key| Collect this value from Azure portal from your Computer Vision resource. You can find it in the **Keys and Endpoint** blade for your resource. |
-| LAUNCHER_TYPE | avaBackend | **Do not modify** |
-| ARCHON_GRAPH_READY_TIMEOUT | 600 | Add this environment variable if your GPU is **not** T4 or NVIDIA 2080 Ti|
-
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start.
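-
-For reference, here is a minimal sketch of how a few of these variables could be set with the `env` attribute of the `spatialanalysis` module in the deployment manifest, assuming the standard IoT Edge module `env` format. The surrounding module properties (such as `settings` and `status`) are omitted, and the placeholder values are only illustrative:
-
-```
-"spatialanalysis": {
-    "env": {
-        "EULA": { "value": "accept" },
-        "ARCHON_TELEMETRY_IOTHUB": { "value": "true" },
-        "BILLING": { "value": "<your Computer Vision Endpoint URI>" },
-        "APIKEY": { "value": "<your Computer Vision API key>" }
-    }
-}
-```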
-
-### Gathering Keys and Endpoint URI
-
-An API key is used to start the spatial-analysis container, and is available on the Azure portal's `Keys and Endpoint` page of your Computer Vision resource. Navigate to that page, and find the key and the endpoint URI that are needed by the `spatialAnalysis` container.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/spatial-analysis/keys-endpoint.png" alt-text="Endpoint URI":::
-
-## Generate and deploy the deployment manifest
-
-The deployment manifest defines what modules are deployed to an edge device. It also defines configuration settings for those modules.
-
-Follow these steps to generate the manifest from the template file and then deploy it to the edge device.
-
-1. Open Visual Studio Code.
-1. Next to the `AZURE IOT HUB` pane, select the More actions icon to set the IoT Hub connection string. You can copy the string from the `src/cloud-to-device-console-app/appsettings.json` file.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/vscode-common-screenshots/set-connection-string.png" alt-text="Spatial Analysis: connection string":::
-
-1. In your Folder explorer, right click on your deployment template file and select Generate IoT Edge Deployment Manifest.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./medi64 json":::
-
- This action should create a manifest file in the **src/edge/config** folder.
-
-1. Right-click on the generated manifest file and select **Create Deployment for Single Device**, and then select the name of your edge device.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/spatial-analysis/deployment-single-device.png" alt-text="Spatial Analysis: deploy to single device":::
-
-1. At the top of the page, you're prompted to select an IoT Hub device; choose your edge device name from the drop-down menu.
-1. After about 30 seconds, refresh the **AZURE IOT HUB** pane in the lower-left corner of the window. The edge device now shows the following deployed modules:
-
- - Azure Video Analyzer (module name **avaedge**).
- - Real-Time Streaming Protocol (RTSP) simulator (module name **rtspsim**).
- - Spatial Analysis (module name **spatialanalysis**).
-
-Upon successful deployment, a message like this appears in the **OUTPUT** window:
-
-```
-[Edge] Start deployment to device [<edge device name>]
-[Edge] Deployment succeeded.
-```
-
-Then you can find the `avaedge`, `spatialanalysis`, and `rtspsim` modules under Devices/Modules, and their status should be "**running**".
-
-## Prepare to monitor events
-
-To see these events, follow these steps:
-
-1. In Visual Studio Code, open the **Extensions** tab (or press Ctrl+Shift+X) and search for Azure IoT Hub.
-1. Right-click and select **Extension Settings**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/vscode-common-screenshots/extension-settings.png" alt-text="Extension Settings":::
-
-1. Search for and enable "Show Verbose Message".
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/vscode-common-screenshots/verbose-message.png" alt-text="Show Verbose Message":::
-
-1. Open the Explorer pane, look for **AZURE IOT HUB** in the lower-left corner, right-click, and select **Start Monitoring Built-in Event Endpoint**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/vscode-common-screenshots/start-monitoring.png" alt-text="Spatial Analysis: start monitoring":::
-
-## Run the program
-
-The program.cs file invokes the direct methods listed in `src/cloud-to-device-console-app/operations.json`. You need to edit the `operations.json` file to update the pipeline topology URL, the topology name, and the RTSP URL.
-
-In operations.json:
-- Set the pipeline topology like this:
-
- ```json
- {
- "opName": "pipelineTopologySet",
- "opParams": {
- "pipelineTopologyUrl": "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-zone-crossing-operation-topology.json"
- }
- },
- ```
-* Under `livePipelineSet`, edit the name of the topology to match the value in the preceding link:
- * `"topologyName" : "PersonZoneCrossingTopology"`
-* Under `pipelineTopologyDelete`, edit the name (a sketch follows the note below):
- * `"name" : "PersonZoneCrossingTopology"`
-
-> [!Important]
-> The topology used above has a hard-coded name for the VideoSink resource `videoSink`. If you decide to choose a different video source, remember to change this value.
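-
-For reference, the corresponding `pipelineTopologyDelete` entry in operations.json follows the same pattern as the `pipelineTopologySet` entry above; a sketch, assuming the topology name used in this tutorial, looks like this:
-
-```json
-{
-    "opName": "pipelineTopologyDelete",
-    "opParams": {
-        "name": "PersonZoneCrossingTopology"
-    }
-},
-```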
-
-- Create a live pipeline like this, and set the parameters for the pipeline topology here:
-
- ```json
- {
- "opName": "livePipelineSet",
- "opParams": {
- "name": "Sample-Pipeline-1",
- "properties": {
- "topologyName": "PersonZoneCrossingTopology",
- "description": "Sample pipeline description",
- "parameters": [
- {
- "name": "rtspUrl",
- "value": "rtsp://rtspsim:554/media/retailshop-15fps.mkv"
- },
- {
- "name": "rtspUserName",
- "value": "testuser"
- },
- {
- "name": "rtspPassword",
- "value": "testpassword"
- }
- ]
- }
- }
- },
- ```
-
-Run a debug session by selecting F5 and follow the **TERMINAL** instructions. The program sets the pipelineTopology, sets the live pipeline, activates the live pipeline, and finally deletes the resources.
-
-> [!Note]
-> The program pauses at the activate live pipeline step. Open the **TERMINAL** tab and press **Enter** to continue, which starts the steps that deactivate the live pipeline and delete the resources.
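-
-If you want to inspect or edit those remaining steps, the corresponding entries in operations.json typically follow the same pattern as the entries shown above. For example, a sketch of the activation entry, assuming the `Sample-Pipeline-1` name from the earlier snippet:
-
-```json
-{
-    "opName": "livePipelineActivate",
-    "opParams": {
-        "name": "Sample-Pipeline-1"
-    }
-},
-```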
-
-## Interpret results
-
-The `spatialanalysis` module is a large container, and its startup can take up to 30 seconds. Once the spatialanalysis container is up and running, it starts to send inference events. You'll see events such as:
-
-```JSON
-[IoTHubMonitor] [3:37:28 PM] Message received from [ase03-edge/avaedge]:
-{
- "sdp": "SDP:\nv=0\r\no=- 1620671848135494 1 IN IP4 172.27.86.122\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/cafeteria.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.08.19\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-300.066\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/retailshop-15fps.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=640028;sprop-parameter-sets=Z2QAKKzZQHgCHoQAAAMABAAAAwDwPGDGWA==,aOvssiw=\r\na=control:track1\r\n"
-}
-[IoTHubMonitor] [3:37:30 PM] Message received from [ase03-edge/avaedge]:
-{
- "type": "video",
- "location": "/videos/<your video name>",
- "startTime": "2021-05-10T18:37:27.931Z"
-}
-[IoTHubMonitor] [3:37:40 PM] Message received from [ase03-edge/avaedge]:
-{
- "state": "initializing"
-}
-[IoTHubMonitor] [3:37:50 PM] Message received from [ase03-edge/avaedge]:
-{
- "state": "initializing"
-}
-[IoTHubMonitor] [3:38:18 PM] Message received from [ase03-edge/avaedge]:
-{
- "type": "video",
- "location": "/videos/<your video name>",
- "startTime": "2021-05-10T18:37:27.931Z"
-}
-
-```
-> [!NOTE]
-> You will see **"initializing"** messages. These messages show up while the spatialAnalysis module is starting up, which can take up to 60 seconds to reach a running state. Be patient, and you should see the inference events flow through.
-
-When a pipeline topology is instantiated, you should see a "MediaSessionEstablished" event; here is a [sample MediaSessionEstablished event](detect-motion-emit-events-quickstart.md#mediasessionestablished-event).
-
-The spatialanalysis module also sends AI Insight events to Azure Video Analyzer and then to IoT Hub; they also appear in the **OUTPUT** window. These AI insights are recorded along with the video via the video sink node. You can use Video Analyzer to view them, as discussed below.
-
-## Supported Spatial Analysis Operations
-
-Here are the operations that the `spatialAnalysis` module offers and that are supported by Azure Video Analyzer:
-
-- **personZoneCrossing**
-- **personCrossingLine**
-- **personDistance**
-- **personCount**
-- **customOperation**
-
-
-Read our **[Spatial Analysis operations](../spatial-analysis-operations.md)** reference document to learn more about the supported operations and the properties they support.
-
-## Playing back the recording
-
-You can examine the Video Analyzer video resource that was created by the live pipeline by logging in to the Azure portal and viewing the video.
-
-1. Open your web browser, and go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
-1. Locate your Video Analyzer account among the resources you have in your subscription, and open the account pane.
-1. Select **Videos** in the **Video Analyzers** list.
-1. You'll find a video listed with the name `personzonecrossing`. This is the name chosen in your pipeline topology file.
-1. Select the video.
-1. On the video details page, click the **Play** icon
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/spatial-analysis/sa-video-playback.png" alt-text="Screenshot of video playback":::
-
-1. To view the inference metadata on the video, click the **Metadata rendering** icon
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/record-stream-inference-data-with-video/bounding-box.png" alt-text="Metadata rendering icon":::
-
-    You will find three options to view as overlays on the video:
-    - **Bounding boxes**: Display a bounding box around each person, along with a unique ID
-    - **Attributes**: Display person attributes such as speed (in ft/s) and orientation (shown with an arrow), when available
-    - **Object path**: Display a short trail of each person's movement, when available
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/spatial-analysis/sa-video-playback-bounding-boxes.png" alt-text="Screenshot of video playback with bounding boxes":::
--
-## Next steps
-
-To try different operations that the `spatialAnalysis` module offers, refer to the following pipelineTopologies:
-
-- [personCount](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-count-operation-topology.json)
-- [personDistance](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-distance-operation-topology.json)
-- [personCrossingLine](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-line-crossing-operation-topology.json)
-- [customOperation](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/custom-operation-topology.json)
azure-video-analyzer Configure Signal Gate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/configure-signal-gate.md
- Title: Configuring a signal gate for event-based video recording
-description: This article provides guidance about how to configure a signal gate in a pipeline.
- Previously updated : 11/04/2021---
-# Configuring a signal gate for event-based video recording
---
-Within a pipeline, a [signal gate processor node](../pipeline.md#signal-gate-processor) allows you to forward media from one node to another when the gate is triggered by an event. When it's triggered, the gate opens and lets media flow through for a specified duration. In the absence of events to trigger the gate, the gate closes, and media stops flowing. You can use the signal gate processor for event-based video recording.
-
-> [!NOTE]
-> A signal gate processor node must be immediately followed by a video sink or file sink.
-
-In this article, you'll learn how to configure a signal gate processor.
-
-## Suggested prereading
-
-- [Pipeline topology](../pipeline.md)
-- [Event-based video recording](../event-based-video-recording-concept.md)
-
-## Problem
-
-A user might want to start recording at a particular time before or after the gate is triggered by an event. The user knows the acceptable latency within their system. So they want to specify the latency of the signal gate processor. They also want to specify the minimum and maximum duration of their recording, no matter how many new events are received.
-
-### Use case scenario
-
-Suppose you want to record video every time the front door of your building opens. You want the recording to:
-
-- Include the *X* seconds before the door opens.
-- Last at least *Y* seconds if the door isn't opened again.
-- Last at most *Z* seconds if the door is repeatedly opened.
-
-You know that your door sensor has a latency of *K* seconds. To reduce the chance of events being disregarded as late arrivals, you want to allow at least *K* seconds for the events to arrive.
-
-## Solution
-
-To address the problem, modify your signal gate processor parameters.
-
-To configure a signal gate processor, use these four parameters:
-
-- Activation evaluation window
-- Activation signal offset
-- Minimum activation window
-- Maximum activation window
-
-When the signal gate processor is triggered, it stays open for the minimum activation time. The activation event begins at the time stamp for the earliest event, plus the activation signal offset.
-
-If the signal gate processor is triggered again while it's open, the timer resets and the gate stays open for at least the minimum activation time. The signal gate processor never stays open longer than the maximum activation time.
-
-An event (event 1) that occurs before another event (event 2), based on media time stamps, could be disregarded if the system lags and event 1 arrives at the signal gate processor after event 2. If event 1 doesn't arrive between the arrival of event 2 and the activation evaluation window, event 1 is disregarded. It isn't passed through the signal gate processor.
-
-Correlation IDs are set for every event. These IDs are set from the initial event. They're sequential for each following event.
-
-> [!IMPORTANT]
-> Media time is based on the media time stamp of when an event occurs in the media. The sequence of events that arrive at the signal gate might not reflect the sequence of events that arrive in media time.
-
-### Parameters, based on the physical time that events arrive at the signal gate
-
-* **minimumActivationTime (shortest possible duration of a recording)**: The minimum number of seconds that the signal gate processor remains open after it's triggered to receive new events, unless it's interrupted by the maximumActivationTime.
-* **maximumActivationTime (longest possible duration of a recording)**: The maximum number of seconds from the initial event that the signal gate processor remains open after being triggered to receive new events, regardless of what events are received.
-* **activationSignalOffset**: The number of seconds between the activation of the signal gate processor and the start of the video recording. Typically, this value is negative because it starts the recording before the triggering event.
-* **activationEvaluationWindow**: Starting from the initial triggering event, the number of seconds in which an event that occurred before the initial event, in media time, must arrive at the signal gate processor before it's disregarded and considered a late arrival.
-
-> [!NOTE]
-> A *late arrival* is any event that arrives after the activation evaluation window has passed but that arrives before the initial event in media time.
-
-### Limits of parameters
-
-* **activationEvaluationWindow**: 0 seconds to 10 seconds
-* **activationSignalOffset**: -1 minute to 1 minute
-* **minimumActivationTime**: 10 seconds to 1 hour
-* **maximumActivationTime**: 10 seconds to 1 hour
-
-In the use case, you would set the parameters as follows:
-
-* **activationEvaluationWindow**: *K* seconds
-* **activationSignalOffset**: *-X* seconds
-* **minimumActivationTime**: *Y* seconds
-* **maximumActivationTime**: *Z* seconds
-
-Here's an example of how the **Signal Gate Processor** node section would look in a pipeline topology for the following parameter values:
-
-* **activationEvaluationWindow**: 1 second
-* **activationSignalOffset**: -5 seconds
-* **minimumActivationTime**: 20 seconds
-* **maximumActivationTime**: 40 seconds
-
-> [!IMPORTANT]
-> [ISO 8601 duration format](https://en.wikipedia.org/wiki/ISO_8601#Durations) is expected for each parameter value. For example, PT1S = 1 second.
-
-```
-"processors":
-[
- {
- "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
- "name": "signalGateProcessor",
- "inputs": [
- {
- "nodeName": "iotMessageSource"
- },
- {
- "nodeName": "rtspSource"
- }
- ],
- "activationEvaluationWindow": "PT1S",
- "activationSignalOffset": "-PT5S",
- "minimumActivationTime": "PT20S",
- "maximumActivationTime": "PT40S"
- }
-]
-```
-
-Now consider how this signal gate processor configuration will behave in different recording scenarios.
-
-### Recording scenarios
-
-**One event from one source (*normal activation*)**
-
-A signal gate processor that receives one event results in a recording that starts 5 seconds before the event arrives at the gate (activation signal offset = -5 seconds). The rest of the recording is 20 seconds (minimum activation time = 20 seconds) because no other events arrive before the end of the minimum activation time to retrigger the gate.
-
-Example diagram:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/configure-signal-gate/normal-activation.svg" alt-text="Diagram showing the normal activation of one event from one source.":::
-
-* Duration of recording = -offset + minimumActivationTime; the recording spans [E1 + offset, E1 + minimumActivationTime]
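-* With the example values (offset = -5 seconds, minimum activation time = 20 seconds), that's 5 + 20 = 25 seconds of recorded video.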
-
-**Two events from one source (*retriggered activation*)**
-
-A signal gate processor that receives two events results in a recording that starts 5 seconds before the first event arrives at the gate (activation signal offset = -5 seconds). Event 2 arrives 5 seconds after event 1. Because event 2 arrives before the end of event 1's minimum activation time (20 seconds), the gate is retriggered. The rest of the recording is 20 seconds (minimum activation time = 20 seconds) because no other events arrive before the end of the minimum activation time from event 2 to retrigger the gate.
-
-Example diagram:
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/configure-signal-gate/retriggering-activation.svg" alt-text="Diagram showing the retriggered activation of two events from one source.":::
-
-* Duration of recording = -offset + (arrival of event 2 - arrival of event 1) + minimumActivationTime
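-* With the example values (offset = -5 seconds, 5 seconds between the two events, minimum activation time = 20 seconds), that's 5 + 5 + 20 = 30 seconds of recorded video.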
-
-***N* events from one source (*maximum activation*)**
-
-A signal gate processor that receives *N* events results in a recording that starts 5 seconds before the first event arrives at the gate (activation signal offset = -5 seconds). As each event arrives before the end of the minimum activation time of 20 seconds from the previous event, the gate is continuously retriggered. It remains open until the maximum activation time of 40 seconds after the first event. Then the gate closes and no longer accepts any new events.
-
-Example diagram:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/configure-signal-gate/maximum-activation.svg" alt-text="Diagram showing the maximum activation of N events from one source.":::
-
-* Duration of recording = -offset + maximumActivationTime
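-* With the example values (offset = -5 seconds, maximum activation time = 40 seconds), that's 5 + 40 = 45 seconds of recorded video.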
-
-> [!IMPORTANT]
-> The preceding diagrams assume that every event arrives at the same instant in physical time and media time. That is, they assume that there are no late arrivals.
-
-### Naming video or files
-
-Pipelines allow you to record videos to the cloud or as MP4 files on the edge device. These recordings can be generated by [continuous video recording](use-continuous-video-recording.md) or by [event-based video recording](record-event-based-live-video.md).
-
-The recommended naming structure for recording to the cloud is to name the video resource as `<anytext>-${System.TopologyName}-${System.PipelineName}`. A given live pipeline can only connect to one RTSP-capable IP camera, and you should record the input from that camera to one video resource. As an example, you can set the `VideoName` on the Video Sink as follows:
-
-```
-"VideoName": "sampleVideo-${System.TopologyName}-${System.PipelineName}"
-```
-Note that the substitution pattern is defined by the `$` sign followed by braces: **${variableName}**.
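-For example, with the sample values from the table below, a topology named `IngestAndRecord` and a live pipeline named `camera001`, the video resource above would be named `sampleVideo-IngestAndRecord-camera001`.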
-
-When recording to MP4 files on the edge device using event-based recording, you can use:
-
-```
-"fileNamePattern": "sampleFilesFromEVR-${System.TopologyName}-${System.PipelineName}-${fileSinkOutputName}-${System.Runtime.DateTime}"
-```
-
-> [!Note]
-> In the example above, the variable **fileSinkOutputName** is a sample variable name that you define when creating the live pipeline. This is **not** a system variable. Note how the use of **DateTime** ensures a unique MP4 file name for each event.
-
-#### System variables
-
-Some system defined variables that you can use are:
-
-| System Variable | Description | Example |
-| :-- | :-- | :-- |
-| System.Runtime.DateTime | UTC date time in ISO8601 file compliant format (basic representation YYYYMMDDThhmmss). | 20200222T173200Z |
-| System.Runtime.PreciseDateTime | UTC date time in ISO8601 file compliant format with milliseconds (basic representation YYYYMMDDThhmmss.sss). | 20200222T173200.123Z |
-| System.TopologyName | User provided name of the executing pipeline topology. | IngestAndRecord |
-| System.PipelineName | User provided name of the executing live pipeline. | camera001 |
-
-> [!Tip]
-> System.Runtime.PreciseDateTime and System.Runtime.DateTime cannot be used when naming videos in the cloud.
-
-## Next steps
-
-Try out the [Event-based video recording tutorial](record-event-based-live-video.md). Start by editing the [topology.json](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/evr-hubMessage-video-sink/topology.json). Modify the parameters for the signalgateProcessor node, and then follow the rest of the tutorial. Review the video recordings to analyze the effect of the parameters.
azure-video-analyzer Deploy Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-iot-edge-device.md
- Title: Deploy Video Analyzer to an IoT Edge device
-description: This article lists the steps that will help you deploy Azure Video Analyzer to your IoT Edge device. You would do this, for example, if you have access to a local Linux machine.
- Previously updated : 11/04/2021--
-# Deploy Azure Video Analyzer to an IoT Edge device
---
-This article describes how you can deploy the Azure Video Analyzer edge module on an IoT Edge device that has no other modules installed. When you finish the steps in this article, you'll have a Video Analyzer account created and the Video Analyzer module deployed to your IoT Edge device, along with a module that simulates an RTSP-capable IP camera. The process is intended for use with the quickstarts and tutorials for Video Analyzer. Review the [production readiness and best practices](production-readiness.md) article if you intend to deploy the Video Analyzer module for use in production.
-
-> [!NOTE]
-> The process outlined in this article will uninstall edge modules, if any, that are installed on your IoT Edge device.
-
-## Prerequisites
-
-* An x86-64 or an ARM64 device running one of the [supported Linux operating systems](../../../iot-edge/support.md#operating-systems)
-* An Azure account that has an active subscription
-* [Create and setup IoT Hub](../../../iot-hub/iot-hub-create-through-portal.md)
-* [Register IoT Edge device](../../../iot-edge/how-to-register-device.md)
-* [Install the Azure IoT Edge runtime on Debian-based Linux systems](../../../iot-edge/how-to-install-iot-edge.md)
--
-## Create resources on IoT Edge device
-
-The Azure Video Analyzer module should be configured to run on the IoT Edge device with a non-privileged local user account. The module needs certain local folders for storing application configuration data. For this how-to guide, we use an [RTSP simulator](https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/rtspsim-live555) that relays a video feed in real time to the AVA module for analysis. This simulator takes as input pre-recorded video files from an input directory. The following script prepares your device to be used with our quickstarts and tutorials.
-
-https://aka.ms/ava/prepare-device
-
-`bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"`
-
-The prep-device script used above automates the task of creating input and configuration folders, downloading video input files, and creating user accounts with correct privileges. Once the command finishes successfully, you should see the following folders created on your edge device.
-
-* `/home/localedgeuser/samples`
-* `/home/localedgeuser/samples/input`
-* `/var/lib/videoanalyzer`
-* `/var/media`
-
- Note the video files ("*.mkv") in the /home/localedgeuser/samples/input folder, which are used to simulate live video.
-
-## Creating Azure resources and deploying edge modules
-The next step is to create the required Azure resources (Video Analyzer account, storage account, user-assigned managed identity), register a Video Analyzer edge module with the Video Analyzer account, and deploy the Video Analyzer edge module and the RTSP simulator module to the IoT Edge device.
-
-Click the **Deploy to Azure** button
-
-> [!WARNING]
-> Do not use this with IoT Edge devices that already have edge modules installed, such as a Percept DK. This deployment is also not supported on Azure Stack Edge.
-
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/ava/click-to-deploy/form)
-
-1. Select your **subscription**
-2. Select your preferred **region**
-3. Select the **resource group** to which your IoT Hub and IoT Edge device belong
-4. In the dropdown menu for **Do you need an edge device?**, select the ***Use an existing edge device*** option
-5. Click **Next**
-![Screenshot of initial deployment form](./media/deploy-iot-edge-device/project-details.png)
-
-1. Select the **Existing IoT Hub Name** that your IoT Edge device is connected to
-1. Click **Next**
-![Screenshot of second deployment form](./media/deploy-iot-edge-device/iot-hub-name.png)
-
-1. On the final page, click **Create**
-
-It may take a few moments for the Azure resources to be created and the edge modules to be deployed.
-
-### Verify your deployment
-
-After creating the deployment, in the Azure portal navigate to the IoT Edge device page of your IoT hub.
-
-1. Select the IoT Edge device that you targeted with the deployment to open its details.
-2. In the device details, verify that the modules are listed as both **Specified in deployment and Reported by device**.
-
-It may take a few moments for the modules to be started on the device and then reported back to IoT Hub. Refresh the page to see an updated status.
-Status code 200 (OK) means that [the IoT Edge runtime](../../../iot-edge/iot-edge-runtime.md) is healthy and is operating fine.
-
-![Screenshot shows a status value for an IoT Edge runtime.](./media/deploy-iot-edge-device/status.png)
-
-#### Invoke a direct method
-
-Next, let's test the sample by invoking a direct method. Read [direct methods for Azure Video Analyzer](direct-methods.md) to understand all the direct methods provided by our avaEdge module.
-
-1. Clicking the edge module you created takes you to its configuration page.
-
- ![Screenshot shows the configuration page of an edge module.](./media/deploy-iot-edge-device/modules.png)
-1. Click on the **Direct Method** menu option.
-
- > [!NOTE]
-    > You will need to add a value in the **Connection string** section, as you can see on the current page. You don't need to hide or change anything in the **Setting name** section. It's OK to leave it public.
-
- ![Direct method](./media/deploy-iot-edge-device/module-details.png)
-1. Next, enter "pipelineTopologyList" in the `Method Name` box.
-1. Next, copy and paste the following JSON payload into the payload box.
-
- ```
- {
- "@apiVersion": "1.1"
- }
- ```
-1. Click the **Invoke Method** option at the top of the page
-1. You should see a status 200 message in the `Result` box (a rough sketch of a typical result follows these steps)
-
- ![The status 200 message](./media/deploy-iot-edge-device/connection-timeout.png)
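-
-    The exact result depends on what's already deployed. On a fresh deployment, the payload returned by `pipelineTopologyList` is typically an empty list; a rough sketch (the exact shape may differ):
-
-    ```json
-    {
-        "value": []
-    }
-    ```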
-
-## Next steps
-
-Try [Quickstart: Get started - Azure Video Analyzer](get-started-detect-motion-emit-events.md)
-
-> [!TIP]
-> If you proceed with the above quickstart, when invoking the direct methods using Visual Studio Code, you will use the device that was added to the IoT Hub via this article, instead of the default `avasample-iot-edge-device`.
azure-video-analyzer Deploy Iot Edge Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-iot-edge-linux-on-windows.md
- Title: Deploy Azure Video Analyzer to a Windows device using EFLOW
-description: This article provides guidance on how to deploy to an IoT Edge for Linux on Windows device.
- Previously updated : 11/04/2021---
-# Deploy Azure Video Analyzer to a Windows device using EFLOW
---
-In this article, you'll learn how to deploy Azure Video Analyzer on an edge device that has [IoT Edge for Linux on Windows (EFLOW)](../../../iot-edge/iot-edge-for-linux-on-windows.md). Once you have finished following the steps in this document, you will be able to run a [pipeline](../pipeline.md) that detects motion in a video and emits such events to the IoT Hub. You can then switch out the pipeline for advanced scenarios and bring the power of Azure Video Analyzer to your Windows-based IoT Edge device.
-
-## Prerequisites
-
-* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
-
-* [Visual Studio Code](https://code.visualstudio.com/) on your development machine. Make sure you have the [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
-* Read [What is EFLOW](../../../iot-edge/iot-edge-for-linux-on-windows.md).
-
-## Deployment steps
-
-The following diagram depicts the overall flow of the document. In five simple steps, you should be all set up to run Azure Video Analyzer on a Windows device that has EFLOW:
-
-![Diagram of IoT Edge for Linux on Windows (E FLOW).](./media/deploy-iot-edge-linux-on-windows/eflow.png)
-
-1. [Install EFLOW](../../../iot-edge/how-to-install-iot-edge-on-windows.md) on your Windows device using PowerShell.
-
- > [!NOTE]
- > There are two ways to deploy EFLOW (PowerShell and Windows Admin Center) and two ways to provision the virtual machine (manual provisioning using the connection string and manual provisioning using X.509 certificates). Please follow the [PowerShell deployment](../../../iot-edge/how-to-install-iot-edge-on-windows.md#create-a-new-deployment) and [provision the machine using the connection string from the IoT Hub](../../../iot-edge/how-to-install-iot-edge-on-windows.md#manual-provisioning-using-the-connection-string).
-
-1. Once EFLOW is set up, type the command `Connect-EflowVm` into PowerShell (with administrative privilege) to connect. This will bring up a bash terminal within PowerShell to control the EFLOW VM, where you can run Linux commands including utilities like Top and Nano.
-
- > [!TIP]
- > To exit the EFLOW VM, type `exit` within the terminal.
-
-1. Log into the EFLOW VM via PowerShell and type in the following commands:
-
- `bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"`
-
- `sudo iptables -I INPUT -p udp -j ACCEPT`
-
-    Video Analyzer needs certain local folders for storing application configuration data. For this how-to guide, we use an [RTSP simulator](https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/rtspsim-live555) that relays a video feed in real time to the Video Analyzer module for analysis. This simulator takes as input pre-recorded video files from an input directory.
-
-    The prep-device script used above automates these tasks, so you can run one command and have all the relevant input and configuration folders, video input files, and user accounts with the required privileges created for you. Once the command finishes successfully, you should see the following folders created on your edge device.
-
- * `/home/localedgeuser/samples`
- * `/home/localedgeuser/samples/input`
- * `/var/lib/videoanalyzer`
- * `/var/media`
-
- Note the video files (*.mkv) in the /home/localedgeuser/samples/input folder, which serve as input files to be analyzed.
-
-1. Now that you have the edge device set up, registered to the hub, and running successfully with the correct folder structures created, the next step is to set up the following additional Azure resources and deploy the Video Analyzer module. The following deployment template will take care of the resource creation:
-
- [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/ava-click-to-deploy)
-
- The deployment process will take about 20 minutes. Upon completion, you will have certain Azure resources deployed in the Azure subscription, including:
-
- * Video Analyzer account - This cloud service is used to register the Video Analyzer edge module, and for playing back recorded video and video analytics.
- * Storage account - For storing recorded video and video analytics.
- * Managed Identity - This is the user assigned managed identity used to manage access to the above storage account.
- * IoT Hub - This acts as a central message hub for bi-directional communication between your IoT application, IoT Edge modules and the devices it manages.
-
-    In the template, when asked if you need an edge device, choose the "Use an existing edge device" option since you created both the device and the IoT Hub earlier. You will also be prompted for your IoT Hub name and IoT Edge device ID in the subsequent steps.
-
- ![Use Existing Device](./media/deploy-iot-edge-linux-on-windows/use-existing-device.png)
-
- Once finished, you can log back onto the EFLOW VM and run the following command.
-
- **`sudo iotedge list`**
-
- You should see the following four modules deployed and running on your edge device. Please note that the resource creation script deploys the AVA module along with IoT Edge modules (edgeAgent and edgeHub) and an RTSP simulator module to provide the simulated RTSP video feed.
-
- ![Deployed Modules](./media/vscode-common-screenshots/avaedge-module.png)
-
-1. With the modules deployed and set up, you are ready to run your first AVA pipeline on EFLOW. You can run a simple motion detection pipeline as below and visualize the results by executing the following steps:
-
- ![Video Analyzer based on motion detection](./media/get-started-detect-motion-emit-events/motion-detection.svg)
-
- 1. [Configure](get-started-detect-motion-emit-events.md#prepare-to-monitor-the-modules) the Azure IoT Tools extension.
- 1. Set the pipelineTopology, instantiate a livePipeline and activate it via these [direct method calls](get-started-detect-motion-emit-events.md#use-direct-method-calls).
- 1. [Observe the results](get-started-detect-motion-emit-events.md#observe-results) on the Hub.
- 1. Invoke [clean up methods](get-started-detect-motion-emit-events.md#deactivate-the-live-pipeline).
- 1. Delete your resources if not needed further.
-
- > [!IMPORTANT]
- > Undeleted resources can still be active and incur Azure costs. Please ensure that you delete the resources you do not intend to use.
-
-## Next steps
-
-* Try motion detection along with recording relevant videos in the Cloud. Follow the steps from the [detect motion and record video clips](detect-motion-record-video-edge-devices.md) quickstart.
-* Use our [VS Code extension](https://marketplace.visualstudio.com/vscode) to view additional pipelines.
-* Use an [IP camera](https://en.wikipedia.org/wiki/IP_camera) that supports RTSP instead of using the RTSP simulator. You can find IP cameras that support RTSP on the [ONVIF conformant products](https://www.onvif.org/conformant-products/) page. Look for devices that conform with profiles G, S, or T.
-* Run [AI on Live Video](analyze-live-video-use-your-model-http.md#overview) (you can skip the prerequisite setup as it has already been done above).
-
- > [!WARNING]
- > For advanced users who wish to run memory-intensive AI models like YOLO, you may have to increase the resources allotted to the EFLOW VM. First, exit the EFLOW VM and return to the Windows PowerShell terminal by typing `exit`. Then, run the command `Set-EflowVM` on PowerShell with elevated privilege. After running the command, input your desired [parameters](../../../iot-edge/reference-iot-edge-for-linux-on-windows-functions.md#set-eflowvm) by following the prompts in PowerShell, for example `cpuCount: 2`, `memoryInMB: 2048`. After a few minutes, redeploy the Edge module(s) and reactivate the live pipeline to view inferences. If you are encountering connection issues (e.g., error 137 or 255 listed on IoT Hub), you may have to rerun this step.
azure-video-analyzer Deploy On Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-on-stack-edge.md
- Title: Deploy Video Analyzer on Azure Stack Edge
-description: This article discusses how to deploy Azure Video Analyzer on Azure Stack Edge.
- Previously updated : 11/04/2021--
-# Deploy Azure Video Analyzer on Azure Stack Edge
--
-This article provides full instructions for deploying Azure Video Analyzer on your Azure Stack Edge device. After you've set up and activated the device, it's ready for Video Analyzer deployment.
-
-In the article, we'll deploy Video Analyzer by using Azure IoT Hub, but the Azure Stack Edge resources expose a Kubernetes API, with which you can deploy additional non-IoT Hub-aware solutions that can interface with Video Analyzer.
-
-> [!TIP]
-> Using the Kubernetes API for custom deployment is an advanced case. We recommend that you create edge modules and deploy them via IoT Hub to each Azure Stack Edge resource instead of using the Kubernetes API. This article shows you how to deploy the Video Analyzer module by using IoT Hub.
-
-## Prerequisites
-
-* An Azure Video Analyzer account
-
- This [cloud service](../overview.md) is used to register the Video Analyzer edge module, and for playing back recorded video and video analytics.
-* A managed identity
-
- This is the user-assigned [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md) that you use to manage access to your storage account.
-* An [Azure Stack Edge](../../../databox-online/azure-stack-edge-gpu-deploy-prep.md) resource
-* An [IoT hub](../../../iot-hub/iot-hub-create-through-portal.md)
-* A storage account
-
- We recommend that you use a [general-purpose v2 storage account](../../../storage/common/storage-account-upgrade.md?tabs=azure-portal).
-* [Visual Studio Code](https://code.visualstudio.com/), installed on your development machine
-* The [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit), installed in Visual Studio Code
-* Make sure the network that your development machine is connected to permits Advanced Message Queueing Protocol over port 5671. This setup enables Azure IoT Tools to communicate with your Azure IoT hub.
-
-## Configure Azure Stack Edge to use Video Analyzer
-
-Azure Stack Edge is a hardware-as-a-service solution and an AI-enabled edge computing device with network data transfer capabilities. For more information, see [Azure Stack Edge and detailed setup instructions](../../../databox-online/azure-stack-edge-gpu-deploy-prep.md).
-
-To get started, do the following:
-
-1. [Create an Azure Stack Edge or Azure Data Box Gateway resource](../../../databox-online/azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal#create-a-new-resource).
-1. [Install and set up Azure Stack Edge Pro with GPU](../../../databox-online/azure-stack-edge-gpu-deploy-install.md).
-1. Connect and activate the resource by doing the following:
-
- a. [Connect to the local web UI setup](../../../databox-online/azure-stack-edge-gpu-deploy-connect.md).
- b. [Configure the network](../../../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
- c. [Configure the device](../../../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md).
- d. [Configure the certificates](../../../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md).
- e. [Activate the device](../../../databox-online/azure-stack-edge-gpu-deploy-activate.md).
-
-1. [Attach an IoT hub to Azure Stack Edge](../../../databox-online/azure-stack-edge-gpu-deploy-configure-compute.md#configure-compute).
-
-### Meet the compute prerequisites on the Azure Stack Edge local UI
-
-Before you continue, make sure that you've completed the following:
-
-* You've activated your Azure Stack Edge resource.
-* You have access to a Windows client system that's running PowerShell 5.0 or later to access the Azure Stack Edge resource.
-* To deploy Kubernetes clusters, you've configured your Azure Stack Edge resource on its [local web UI](../../../databox-online/azure-stack-edge-deploy-connect-setup-activate.md#connect-to-the-local-web-ui-setup).
-
- 1. Connect and configure the resource by doing the following:
- a. [Connect to the local web UI setup](../../../databox-online/azure-stack-edge-gpu-deploy-connect.md).
- b. [Configure the network](../../../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
- c. [Configure the device](../../../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md)
- d. [Configure the certificates](../../../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md).
- e. [Activate the device](../../../databox-online/azure-stack-edge-gpu-deploy-activate.md).
-
- 1. To enable the compute, on the local web UI of your device, go to the **Compute** page.
-
- a. Select a network interface that you want to enable for compute, and then select **Enable**. Enabling compute creates a virtual switch on your device on that network interface.
- b. Leave the Kubernetes test node IPs and the Kubernetes external services IPs blank.
- c. Select **Apply**. The operation should take about two minutes.
-
- > [!div class="mx-imgBorder"]
-
- > :::image type="content" source="../../../databox-online/media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/compute-network-2.png" alt-text="Screenshot of compute prerequisites on the Azure Stack Edge local UI.":::
-
- If Azure DNS isn't configured for the Kubernetes API and Azure Stack Edge resource, you can update your Windows host file by doing the following:
-
- a. Open a text editor as Administrator.
- b. Open the *hosts* file at *C:\Windows\System32\drivers\etc\\*.
- c. Add the Kubernetes API device name's Internet Protocol version 4 (IPv4) and hostname to the file. You can find this information in the Azure Stack Edge portal, under **Devices**.
- d. Save and close the file.
-
-### Deploy Video Analyzer Edge modules by using the Azure portal
-
-In the Azure portal, you can create a deployment manifest and push the deployment to an IoT Edge device.
-
-#### Select your device and set modules
-
-1. Sign in to the [Azure portal](https://portal.azure.com/), and then go to your IoT hub.
-1. On the left pane, select **IoT Edge**.
-1. In the list of devices, select the ID of the target device.
-1. Select **Set Modules**.
-
-#### Configure a deployment manifest
-
-A deployment manifest is a JSON document that describes which modules to deploy, how data flows between the modules, and the desired properties of the module twins. The Azure portal has a wizard that walks you through creating a deployment manifest. Its three steps are organized into **Modules**, **Routes**, and **Review + Create** tabs.
-
-#### Add modules
-
-1. In the **IoT Edge Modules** section, in the **Add** dropdown list, select **IoT Edge Module** to display the **Add IoT Edge Module** page.
-1. Select the **Module Settings** tab, provide a name for the module, and then specify the container image URI. Example values are shown in the following image:
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/add-module.png" alt-text="Screenshot of the Module Settings pane on the Add IoT Edge Module page.":::
-
- > [!TIP]
- > Don't select **Add** until you've specified values on the **Module Settings**, **Container Create Options**, and **Module Twin Settings** tabs, as described in this procedure.
-
- > [!IMPORTANT]
- > Azure IoT Edge values are case-sensitive when you make calls to modules. Make note of the exact string you're using as the module name.
-1. Select the **Environment Variables** tab, and then enter the values, as shown in the following image:
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/environment-variables.png" alt-text="Screenshot of the 'Environment Variables' pane on the 'Add IoT Edge Module' page.":::
-1. Select the **Container Create Options** tab.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/container-create-options.png" alt-text="Screenshot of the Container Create Options pane on the Add IoT Edge Module page.":::
-
- In the box on the **Container Create Options** pane, paste the following JSON code. This action limits the size of the log files that are produced by the module.
-
- ```
- {
- "HostConfig": {
- "LogConfig": {
- "Type": "",
- "Config": {
- "max-size": "10m",
- "max-file": "10"
- }
- },
- "Binds": [
- "/var/lib/videoanalyzer/:/var/lib/videoanalyzer",
- "/var/media:/var/media"
- ],
- "IpcMode": "host",
- "ShmSize": 1536870912
- }
- }
-    ```
-
- The "Binds" section in the JSON has two entries:
- * **"/var/lib/videoanalyzer:/var/lib/videoanalyzer"** is used to bind the persistent application configuration data from the container and store it on the edge device.
- * **"/var/media:/var/media"** binds the media folders between the edge device and the container. It's used to store the video recordings when you run a pipelineTopology that supports storing video clips on the edge device.
-
-1. Select the **Module Twin Settings** tab.
-
-    To run, the Video Analyzer edge module requires a set of mandatory twin properties, as listed in [Module Twin configuration schema](module-twin-configuration-schema.md).
-1. In the box on the **Module Twin Settings** pane, paste the following JSON code:
- ```
- {
- "applicationDataDirectory": "/var/lib/videoanalyzer",
- "ProvisioningToken": "{provisioning-token}",
- ...
- }
- ```
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/twin-settings.png" alt-text="Screenshot of the 'Module Twin Settings' pane on the 'Add IoT Edge Module' page.":::
-
- To help with monitoring the module, you can add the following *recommended* properties to the JSON code. For more information, see [Monitoring and logging](monitor-log-edge.md).
-
- ```
- "diagnosticsEventsOutputName": "diagnostics",
- "OperationalEventsOutputName": "operational",
- "logLevel": "Information",
- "logCategories": "Application,Events",
- "allowUnsecuredEndpoints": true,
- "telemetryOptOut": false
- ```
-1. Select **Add**.
-
-#### Add the Real-Time Streaming Protocol (RTSP) simulator edge module
-
-1. In the **IoT Edge Modules** section, in the **Add** dropdown list, select **IoT Edge Module** to display the **Add IoT Edge Module** page.
-1. Select the **Module Settings** tab, provide a name for the module, and then specify the container image URI. For example:
-
- * **IoT Edge Module Name**: rtspsim
- * **Image URI**: mcr.microsoft.com/ava-utilities/rtspsim-live555:1.2
-
-1. Select the **Container Create Options** tab and then, in the box, paste the following JSON code:
-
- ```
- {
- "HostConfig": {
- "Binds": [
- "/home/localedgeuser/samples/input/:/live/mediaServer/media/"
- ],
- "PortBindings": {
- "554/tcp": [
- {
- "HostPort": "554"
- }
- ]
- }
- }
- }
- ```
-1. Select **Add**.
-1. Select **Next: Routes** to continue to the routes section.
-1. To specify routes, under **Name**, enter **AVAToHub** and then, under **Value**, enter **FROM /messages/modules/avaedge/outputs/* INTO $upstream** (a sketch of the resulting route section appears after these steps).
-1. Select **Next: Review + create** to continue to the review section.
-1. Review your deployment information, and then select **Create** to deploy the module.
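-
-For reference, in the generated deployment manifest this route ends up under the `$edgeHub` desired properties, roughly as in the following sketch; the trailing `*` routes every output of the `avaedge` module:
-
-```json
-"routes": {
-    "AVAToHub": "FROM /messages/modules/avaedge/outputs/* INTO $upstream"
-}
-```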
-
-#### Generate the provisioning token
-
-1. In the Azure portal, go to Video Analyzer.
-1. On the left pane, select **Edge modules**.
-1. Select the edge module, and then select **Generate token**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/generate-provisioning-token.png" alt-text="Screenshot of the 'Add edge modules' pane for generating a token." lightbox="./media/deploy-on-stack-edge/generate-provisioning-token.png":::
-1. Copy the provisioning token, as shown in the following image:
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/copy-provisioning-token.png" alt-text="Screenshot of the 'Copy provisioning token' page.":::
---
-#### (Optional) Set up Docker volume mounts
-
-If you want to view the data in the working directories, set up Docker volume mounts before you deploy it.
-
-This section covers how to create a gateway user and set up file shares to view the contents of the Video Analyzer working directory and Video Analyzer media folder.
-
-> [!NOTE]
-> Bind mounts are supported, but volume mounts allow the data to be viewable and, if you choose, remotely copied. It's possible to use both bind and volume mounts, but they can't point to the same container path.
-
-1. In the Azure portal, go to the Azure Stack Edge resource.
-1. Create a gateway user that can access shares by doing the following:
-
- a. On the left pane, select **Cloud storage gateway**.
- b. On the left pane, select **Users**.
- c. Select **Add User** to set the username (for example, we recommend *avauser*) and password.
- d. Select **Add**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/add-user.png" alt-text="Screenshot of the Azure Stack Edge resource 'Add user' page.":::
-1. Create a *local share* for Video Analyzer persistence by doing the following:
-
- a. Select **Cloud storage gateway** > **Shares**.
- b. Select **Add share**.
-   c. Set a share name (for example, *ava*).
- d. Keep the share type as **SMB**.
- e. Ensure that the **Use the share with Edge compute** checkbox is selected.
- f. Ensure that the **Configure as Edge local share** checkbox is selected.
- g. Under **User details**, give access to the share to the recently created user by selecting **Use existing**.
- h. Select **Create**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/local-share.png" alt-text="Screenshot of the 'Add share' page for creating a local share.":::
-
- > [!TIP]
- > With your Windows client connected to your Azure Stack Edge device, follow the instructions in the "Connect to an SMB share" section of [Transfer data with Azure Stack Edge Pro FPGA](../../../databox-online/azure-stack-edge-deploy-add-shares.md#connect-to-an-smb-share).
-1. Create a *remote share* for file sync storage by doing the following:
-
- a. Create an Azure Blob Storage account in the same region by selecting **Cloud storage gateway** > **Storage accounts**.
- b. Select **Cloud storage gateway** > **Shares**.
- c. Select **Add Shares**.
-   d. In the **Name** box, enter a share name (for example, *media*).
- e. For **Type**, keep the share type as **SMB**.
- f. Ensure that the **Use the share with Edge compute** checkbox is selected.
- g. Ensure that the **Configure as Edge local share** checkbox is cleared.
- h. In the **Storage account** dropdown list, select the recently created storage account.
- i. In the **Storage service** dropdown list, select **Block Blob**.
- j. In the **Select blob container** box, enter the container name.
- k. Under **User Details**, select **Use existing** to give access to the share to the recently created user.
- l. Select **Create**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/remote-share.png" alt-text="Screenshot of the 'Add share' page for creating a remote share.":::
-
-1. To use volume mounts, update the settings on the **Container Create Options** pane for the RTSP simulator module by doing the following:
-
- a. Select the **Set modules** button.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/set-modules.png" alt-text="Screenshot showing the 'Set modules' button on the edge device settings pane." lightbox="./media/deploy-on-stack-edge/set-modules.png":::
-
- b. In the **Name** list, select the **rtspsim** module:
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/select-module.png" alt-text="Screenshot of the 'rtspsim' module under 'IoT Edge Modules' on the edge device settings pane.":::
-
- c. On the **Update IoT Edge Module** pane, select the **Container Create Options** tab, and then add the mounts as shown in the following JSON code:
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/update-module.png" alt-text="Screenshot of the JSON mounts code on the 'Container Create Options' pane.":::
-
- ```json
- "createOptions":
- {
- "HostConfig":
- {
- "Mounts":
- [
- {
- "Target": "/live/mediaServer/media",
- "Source": "media",
- "Type": "volume"
- }
- ],
- "PortBindings": {
- "554/tcp": [
- {
- "HostPort": "554"
- }
- ]
- }
- }
- }
- ```
- d. Select **Update**.
- e. To update the module, select **Review and create**, and then select **Create**.
-
-### Verify that the module is running
-
-Finally, ensure that your IoT Edge device module is connected and running as expected. To check the module's runtime status, do the following:
-
-1. In the Azure portal, return to your Azure Stack Edge resource.
-1. On the left pane, select **Modules**.
-1. On the **Modules** pane, in the **Name** list, select the module you deployed. In the **Runtime status** column, the module's status should be *running*.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-on-stack-edge/running-module.png" alt-text="Screenshot of the 'Module' pane, showing the selected module's runtime status as 'running'." lightbox="./media/deploy-on-stack-edge/running-module.png":::
-
-### Configure the Azure IoT Tools extension
-
-To connect to your IoT hub by using the Azure IoT Tools extension, do the following:
-
-1. In Visual Studio Code, select **View** > **Explorer**.
-1. On the **Explorer** pane, at the lower left, select **Azure IoT Hub**.
-1. Select the **More Options** icon to show the context menu, and then select **Set IoT Hub Connection String**.
-
- An input box appears, into which you'll enter your IoT hub connection string. To get the connection string, do the following:
-
- a. In the Azure portal, go to your IoT hub.
- b. On the left pane, select **Shared access policies**.
-   c. Select the **iothubowner** policy to get the shared access keys.
- d. Copy the connection string primary key, and then paste it in the input box.
-
- > [!NOTE]
- > The connection string is written in the following format:
- >
- > `HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=xxx`
-
- When the connection succeeds, a list of edge devices is displayed, including your Azure Stack Edge device. You can now manage your IoT Edge devices and interact with your Azure IoT hub through the context menu.
-
-   To view the modules that are deployed on the edge device, under the Azure Stack device, expand the **Modules** node. A programmatic alternative is sketched after this procedure.
-
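-If you'd rather check the deployed modules from code than from Visual Studio Code, the following is a minimal sketch using the IoT Hub service SDK for Python. It assumes the `azure-iot-hub` package, an `IOTHUB_CONNECTION_STRING` environment variable with the iothubowner connection string, and a placeholder device ID.
-
-```python
-# Sketch: list the IoT Edge modules on the device and their connection state.
-import os
-
-from azure.iot.hub import IoTHubRegistryManager
-
-registry_manager = IoTHubRegistryManager(os.environ["IOTHUB_CONNECTION_STRING"])
-
-# "my-ase-device" is a placeholder; use your Azure Stack Edge device's IoT Edge device ID.
-for module in registry_manager.get_modules("my-ase-device"):
-    print(f"{module.module_id}: {module.connection_state}")
-```
-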
-## Troubleshooting
-
-* **Kubernetes API access (kubectl)**
-
- * Configure your machine for access to the Kubernetes cluster by following the instructions in [Create and manage a Kubernetes cluster on Azure Stack Edge Pro GPU device](../../../databox-online/azure-stack-edge-gpu-create-kubernetes-cluster.md).
- * All deployed IoT Edge modules use the *iotedge* namespace. Be sure to include that name when you're using kubectl.
-* **Module logs**
-
- If the *iotedge* tool is inaccessible for obtaining logs, use [kubectl logs](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs) to view the logs or pipe to a file. For example: <br/> `kubectl logs deployments/mediaedge -n iotedge --all-containers`
-* **Pod and node metrics**
-
- To view pod and node metrics, use [kubectl top](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top). For example:
- <br/>`kubectl top pods -n iotedge`
-* **Module networking**
-
- For module discovery on Azure Stack Edge, the module must have the host port binding in createOptions. The module will then be addressable over `moduleName:hostport`.
-
- ```json
- "createOptions": {
- "HostConfig": {
- "PortBindings": {
- "8554/tcp": [ { "HostPort": "8554" } ]
- }
- }
- }
- ```
-* **Volume mounting**
-
- A module will fail to start if the container is trying to mount a volume to an existing and non-empty directory.
-* **Shared memory when gRPC is used**
-
- Shared memory on Azure Stack Edge resources is supported across pods in any namespace when you use Host IPC.
-
- Configure shared memory on an edge module for deployment via IoT Hub by using the following code:
-
- ```
- ...
- "createOptions": {
- "HostConfig": {
- "IpcMode": "host"
- }
- ...
-
- //(Advanced) Configuring shared memory on a Kubernetes pod or deployment manifest for deployment via the Kubernetes API spec:
- ...
- template:
- spec:
- hostIPC: true
- ...
- ```
-* **(Advanced) Pod co-location**
-
- When you use Kubernetes to deploy custom inference solutions that communicate with Video Analyzer via gRPC, ensure that the pods are deployed on the same nodes as Video Analyzer modules.
-
- * **Option 1**: Use *node affinity* and built-in node labels for co-location.
-
-    Currently, NodeSelector custom configuration doesn't appear to be an option, because users don't have access to set labels on the nodes. However, depending on the users' topology and naming conventions, they might be able to use [built-in node labels](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels). To achieve co-location, you can add a nodeAffinity section to the inference pod manifest that targets the Azure Stack Edge nodes running the Video Analyzer module.
- * **Option 2**: (Recommended) Use *pod affinity* for co-location.
-
-    Kubernetes supports [pod affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity), which can schedule pods on the same node. To achieve co-location, you can add a podAffinity section to the inference pod manifest that references the Video Analyzer module.
-
-    ```yaml
-    # Example Video Analyzer module deployment match labels
-    selector:
-      matchLabels:
-        net.azure-devices.edge.deviceid: dev-ase-1-edge
-        net.azure-devices.edge.module: mediaedge
-
-    # Example inference deployment manifest pod affinity
-    spec:
-      affinity:
-        podAffinity:
-          requiredDuringSchedulingIgnoredDuringExecution:
-          - labelSelector:
-              matchExpressions:
-              - key: net.azure-devices.edge.module
-                operator: In
-                values:
-                - mediaedge
-            topologyKey: "kubernetes.io/hostname"
-    ```
-* **You get a 404 error code when you use the *rtspsim* module**
-
- The container reads videos from exactly one folder within the container. If you map/bind an external folder into a folder that already exists within the container image, Docker hides the files present in the container image.
-
- For example, with no bindings, the container might have these files:
-
- ```
- root@rtspsim# ls /live/mediaServer/media
- /live/mediaServer/media/camera-300s.mkv
- /live/mediaServer/media/win10.mkv
- ```
-
- And your host might have these files:
-
- ```
- C:\MyTestVideos> dir
- Test1.mkv
- Test2.mkv
- ```
-
-  But when the following binding is added in the deployment manifest file, Docker mounts the host folder over /live/mediaServer/media, so the container sees only the files on the host.
-
- `C:\MyTestVideos:/live/mediaServer/media`
-
- ```
- root@rtspsim# ls /live/mediaServer/media
- /live/mediaServer/media/Test1.mkv
- /live/mediaServer/media/Test2.mkv
- ```
-
-## Next steps
-
-Analyze video with Computer Vision and Spatial Analysis [using Azure Stack Edge](computer-vision-for-spatial-analysis.md)
azure-video-analyzer Detect Motion Emit Events Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-emit-events-quickstart.md
- Title: Detect motion and emit events from the edge - Azure
-description: This quickstart shows you how to use Azure Video Analyzer to detect motion and emit events, by programmatically calling direct methods.
- Previously updated : 11/04/2021
-zone_pivot_groups: video-analyzer-programming-languages
---
-# Quickstart: Detect motion and emit events
---
-This quickstart walks you through the steps to get started with Azure Video Analyzer. It uses an Azure VM as an IoT Edge device and a simulated live video stream. After completing the setup steps, you'll be able to run a simulated live video stream through a video pipeline that detects and reports any motion in that stream. The following diagram shows a graphical representation of that pipeline.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/get-started-detect-motion-emit-events/motion-detection.svg" alt-text="Detect motion":::
---
-## Prerequisites
---
-## Set up Azure resources
-
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/ava-click-to-deploy)
-
-## Overview
-
-![Azure Video Analyzer based on motion detection](./media/analyze-live-video/detect-motion.png)
-
-This diagram shows you how the signal flows in this quickstart. An [edge module](https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/rtspsim-live555) simulates an IP camera hosting a Real-Time Streaming Protocol (RTSP) server. An [RTSP source](../pipeline.md#rtsp-source) node pulls the video feed from this server and sends video frames to the [motion detection processor](../pipeline.md#motion-detection-processor) node. The motion detection processor node enables you to detect motion in live video. It examines incoming video frames and determines if there is movement in the video. If motion is detected, it passes on the video frame to the next node in the pipeline, and emits an event. Finally, any emitted events are sent to the IoT hub message sink where they are published to IoT Hub.
-
-## Set up your development environment
---
-## Review the sample video
---
-## Examine the sample files
---
-## Generate and deploy the deployment manifest
---
-## Prepare to monitor events
---
-## Run the sample program
---
-## Interpret results
---
-## Clean up resources
-
-If you intend to try the other quickstarts, then you should keep the resources you created. Otherwise, in the Azure portal, go to your resource groups, select the resource group where you ran this quickstart, and then delete all of the resources.
-
-## Next steps
-
-- Follow [Quickstart: Analyze a live video feed from a (simulated) IP camera using your own HTTP model](analyze-live-video-use-your-model-http.md) to apply AI to live video feeds.
-- Review additional challenges for advanced users:
-
- - Use an [IP camera](https://en.wikipedia.org/wiki/IP_camera) that supports RTSP instead of using the RTSP simulator. You can find IP cameras that support RTSP on the [ONVIF conformant products](https://www.onvif.org/conformant-products/) page. Look for devices that conform with profiles G, S, or T.
-  - Use an AMD64 or x64 Linux device rather than using a Linux VM in Azure. This device must be in the same network as the IP camera. Follow the instructions in [Install Azure IoT Edge runtime on Linux](../../../iot-edge/how-to-install-iot-edge.md?preserve-view=true&view=iotedge-2020-11). Then follow the instructions in [Deploy your first IoT Edge module to a virtual Linux device](../../../iot-edge/quickstart-linux.md?preserve-view=true&view=iotedge-2020-11) to register the device with Azure IoT Hub.
azure-video-analyzer Detect Motion Record Video Clips Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-record-video-clips-cloud.md
- Title: Detect motion in a video, record the video
-description: This quickstart shows how to use Azure Video Analyzer edge module to detect motion in a live video stream and record the video to the Video Analyzer account.
- Previously updated : 11/04/2021--
-# Quickstart: Detect motion in a (simulated) live video, record the video to the Video Analyzer account
---
-This article walks you through the steps to use the Azure Video Analyzer edge module for [event-based recording](../event-based-video-recording-concept.md). It uses a Linux VM in Azure as an IoT Edge device and a simulated live video stream. This video stream is analyzed for the presence of moving objects. When motion is detected, events are sent to Azure IoT Hub, and the relevant part of the video stream is recorded as a [video resource](../terminology.md#video) in your Video Analyzer account.
-
-## Prerequisites
-
-* An Azure account that includes an active subscription. [Create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) for free if you don't already have one.
-
- [!INCLUDE [azure-subscription-permissions](./includes/common-includes/azure-subscription-permissions.md)]
-* [Visual Studio Code](https://code.visualstudio.com/), with the following extensions:
- * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
-
-### Set up Azure resources
-
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/ava-click-to-deploy)
-
-The deployment process will take about **20 minutes**. Upon completion, you will have certain Azure resources deployed in the Azure subscription, including:
-1. **Video Analyzer account** - This [cloud service](../overview.md) is used to register the Video Analyzer edge module, and for playing back recorded video and video analytics.
-1. **Storage account** - For storing recorded video and video analytics.
-1. **Managed Identity** - This is the user-assigned [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md) used to manage access to the above storage account.
-1. **Virtual machine** - This is a virtual machine that will serve as your simulated edge device.
-1. **IoT Hub** - This acts as a central message hub for bi-directional communication between your IoT application, IoT Edge modules and the devices it manages.
-
-For more details about the deployed resources, see the [Video Analyzer setup repository](https://github.com/Azure/video-analyzer/tree/main/setup).
-
-## Review the sample video
-
-The virtual machine created by the above deployment contains several MKV files. One of these files is called `lots_015.mkv`. In the following steps, we will use this video file to simulate a live stream for this tutorial.
-
-To preview the video, open an application like [VLC media player](https://www.videolan.org/vlc/), select `Ctrl+N`, and paste the [parking lot video sample](https://avamedia.blob.core.windows.net/public/lots_015.mkv) link to start playback. At about the 5-second mark, a white car moves through the parking lot.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4LUbN]
-
-When you complete the steps below, you will have used the Video Analyzer edge module to detect the motion of the car and record a video clip starting at around that 5-second mark.
-
-The diagram below is the visual representation of the overall flow.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/detect-motion-record-video-clips-cloud/topology.svg" alt-text="Event-based video recording to a video resource based on motion events":::
-
-## Set up your development environment
-
-### Obtain your IoT Hub connection string
-
-1. In the Azure portal, navigate to the IoT hub you created as part of the above setup step.
-1. On the left pane, select **Shared access policies**.
-1. Select the policy named **iothubowner**.
-1. Copy the **Primary connection string** - it will look like `HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX`
-
-### Connect to the IoT Hub
-
-1. Open Visual Studio Code, select **View** > **Explorer**. Or, select Ctrl+Shift+E.
-1. In the lower-left corner of the **Explorer** tab, select **Azure IoT Hub**.
-1. Select the **More Options** icon to see the context menu. Then select **Set IoT Hub Connection String**.
-1. When an input box appears, enter your IoT Hub connection string.
-1. After about 30 seconds, refresh Azure IoT Hub in the lower-left section. You should see the edge device `avasample-iot-edge-device`, which should have the following modules deployed:
- * Video Analyzer edge module (module name **avaedge**)
- * RTSP simulator (module name **rtspsim**)
--
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/get-started-detect-motion-emit-events/modules-node.png" alt-text="Expand the Modules node":::
-
-> [!TIP]
-> If you have [manually deployed Video Analyzer](deploy-iot-edge-device.md) yourself on an edge device (such as an ARM64 device), then you will see the module show up under that device, under the Azure IoT Hub. You can select that module, and follow the rest of the steps below.
-
-### Prepare to monitor the modules
-
-When you run this quickstart, events will be sent to the IoT Hub. To see these events, follow these steps (a programmatic alternative is sketched after this procedure):
-
-1. In Visual Studio Code, open the **Extensions** tab (or press Ctrl+Shift+X) and search for **Azure IoT Hub**.
-1. Right-click and select **Extension Settings**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/get-started-detect-motion-emit-events/extension-settings.png" alt-text="Select Extension Settings":::
-1. Search for and enable "Show Verbose Message".
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/get-started-detect-motion-emit-events/verbose-message.png" alt-text="Show Verbose Message":::
-1. Open the Explorer pane in Visual Studio Code, and look for **Azure IoT Hub** in the lower-left corner.
-1. Expand the **Devices** node.
-1. Right-click on `avasample-iot-edge-device`, and select **Start Monitoring Built-in Event Endpoint**.
-
- [!INCLUDE [provide-builtin-endpoint](./includes/common-includes/provide-builtin-endpoint.md)]
-
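-As an alternative to monitoring in Visual Studio Code, you can read the same events from the IoT Hub built-in (Event Hubs-compatible) endpoint. This is a minimal sketch, not part of the quickstart steps; it assumes the `azure-eventhub` Python package and an `EVENTHUB_COMPATIBLE_CONNECTION_STRING` environment variable that holds the Event Hubs-compatible connection string (including its entity path) from your IoT Hub's **Built-in endpoints** page.
-
-```python
-# Sketch: print telemetry from the IoT Hub built-in endpoint (assumed connection string).
-import json
-import os
-
-from azure.eventhub import EventHubConsumerClient
-
-def on_event(partition_context, event):
-    # Each event is one telemetry message from a module on the edge device.
-    print(json.dumps(event.body_as_json(), indent=2))
-
-client = EventHubConsumerClient.from_connection_string(
-    os.environ["EVENTHUB_COMPATIBLE_CONNECTION_STRING"],
-    consumer_group="$Default",
-)
-
-with client:
-    # "@latest" waits for new events only; use "-1" to read from the start of the retained stream.
-    client.receive(on_event=on_event, starting_position="@latest")
-```
-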
-## Use direct method calls to analyze live video
-
-You can now analyze live video streams by invoking direct methods exposed by the Video Analyzer edge module. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods provided by the module.
-
-### Enumerate pipeline topologies
-
-This step enumerates all the [pipeline topologies](../pipeline.md) in the module.
-
-1. Right-click the "avaedge" module and select **Invoke Module Direct Method** from the context menu.
-1. An edit box appears at the top of the Visual Studio Code window. Enter "pipelineTopologyList" in the edit box and press Enter.
-1. Next, copy and paste the following JSON payload into the edit box and press Enter.
-
-```json
-{
- "@apiVersion" : "1.1"
-}
-```
-
-Within a few seconds, you will see the following response in the OUTPUT window:
-
-```
-[DirectMethod] Invoking Direct Method [pipelineTopologyList] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 200,
- "payload": {
- "value": []
- }
-}
-```
-
-The above response is expected, as no pipeline topologies have been created.
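-
-If you prefer to invoke the direct methods from code instead of the VS Code context menu, the following is a minimal sketch using the IoT Hub service SDK for Python. It assumes the `azure-iot-hub` package and an `IOTHUB_CONNECTION_STRING` environment variable that holds the iothubowner connection string you copied earlier; the device and module names match this quickstart's deployment.
-
-```python
-# Sketch: call the pipelineTopologyList direct method on the avaedge module.
-import json
-import os
-
-from azure.iot.hub import IoTHubRegistryManager
-from azure.iot.hub.models import CloudToDeviceMethod
-
-registry_manager = IoTHubRegistryManager(os.environ["IOTHUB_CONNECTION_STRING"])
-
-def invoke(method_name, payload):
-    """Invoke a Video Analyzer direct method on the avaedge module and return the result."""
-    method = CloudToDeviceMethod(
-        method_name=method_name, payload=payload, response_timeout_in_seconds=30
-    )
-    return registry_manager.invoke_device_module_method(
-        "avasample-iot-edge-device", "avaedge", method
-    )
-
-result = invoke("pipelineTopologyList", {"@apiVersion": "1.1"})
-print(result.status, json.dumps(result.payload, indent=2))
-```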
-
-### Set a pipeline topology
-
-Using the same steps as above, you can invoke `pipelineTopologySet` to set a pipeline topology using the following JSON as the payload. You will be creating a pipeline topology named "EVRtoVideoSinkOnMotionDetection".
-
-> [!NOTE]
-> In the payload below, the `videoName` property is set to "sample-motion-video-camera001", which will be the name of the video resource that is created in your Video Analyzer account. This resource name must be unique for each live video source you record. You should edit the `videoName` property below as needed to ensure uniqueness.
-
-```
-{
- "@apiVersion": "1.1",
- "name": "EVRtoVideoSinkOnMotionDetection",
- "properties": {
- "description": "Event-based video recording to Video Sink based on motion events",
- "parameters": [
- {
- "name": "rtspUserName",
- "type": "String",
- "description": "rtsp source user name.",
- "default": "dummyUserName"
- },
- {
- "name": "rtspPassword",
- "type": "String",
- "description": "rtsp source password.",
- "default": "dummyPassword"
- },
- {
- "name": "rtspUrl",
- "type": "String",
- "description": "rtsp Url"
- },
- {
- "name": "motionSensitivity",
- "type": "String",
- "description": "motion detection sensitivity",
- "default": "medium"
- },
- {
- "name": "hubSinkOutputName",
- "type": "String",
- "description": "hub sink output name",
- "default": "inferenceOutput"
- }
- ],
- "sources": [
- {
- "@type": "#Microsoft.VideoAnalyzer.RtspSource",
- "name": "rtspSource",
- "endpoint": {
- "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
- "url": "${rtspUrl}",
- "credentials": {
- "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
- "username": "${rtspUserName}",
- "password": "${rtspPassword}"
- }
- }
- }
- ],
- "processors": [
- {
- "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
- "name": "motionDetection",
- "sensitivity": "${motionSensitivity}",
- "inputs": [
- {
- "nodeName": "rtspSource",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ]
- },
- {
- "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
- "name": "signalGateProcessor",
- "inputs": [
- {
- "nodeName": "motionDetection"
- },
- {
- "nodeName": "rtspSource",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ],
- "activationEvaluationWindow": "PT1S",
- "activationSignalOffset": "PT0S",
- "minimumActivationTime": "PT30S",
- "maximumActivationTime": "PT30S"
- }
- ],
- "sinks": [
- {
- "@type": "#Microsoft.VideoAnalyzer.VideoSink",
- "name": "videoSink",
- "videoName": "sample-motion-video-camera001",
- "inputs": [
- {
- "nodeName": "signalGateProcessor",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ],
- "videoCreationProperties": {
- "title": "sample-motion-video-camera001",
- "description": "Motion-detection based recording of clips to a video resource",
- "segmentLength": "PT30S"
- },
- "localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
- "localMediaCacheMaximumSizeMiB": "2048"
- },
- {
- "@type": "#Microsoft.VideoAnalyzer.IoTHubMessageSink",
- "name": "hubSink",
- "hubOutputName": "${hubSinkOutputName}",
- "inputs": [
- {
- "nodeName": "motionDetection"
- }
- ]
- }
- ]
- }
-}
-```
-
-The above JSON payload results in the creation of a pipeline topology that defines five parameters (four of which have default values). The topology has one source node ([RTSP source](../pipeline.md#rtsp-source)), two processor nodes ([motion detection processor](../pipeline.md#motion-detection-processor) and [signal gate processor](../pipeline.md#signal-gate-processor)), and two sink nodes (IoT Hub sink and [video sink](../pipeline.md#video-sink)). The visual representation of the topology is shown above.
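-
-To make the parameter behavior concrete, here is a small, purely illustrative sketch of how defaults and live pipeline values combine; the actual substitution of `${parameter}` placeholders is performed by the Video Analyzer edge module, not by your code.
-
-```python
-# Illustration only: live pipeline parameter values override topology defaults,
-# and the merged values replace ${name} placeholders in the topology.
-topology_defaults = {
-    "rtspUserName": "dummyUserName",
-    "rtspPassword": "dummyPassword",
-    "motionSensitivity": "medium",
-    "hubSinkOutputName": "inferenceOutput",
-    # rtspUrl has no default, so every live pipeline must supply it.
-}
-
-live_pipeline_parameters = {"rtspUrl": "rtsp://rtspsim:554/media/lots_015.mkv"}
-
-resolved = {**topology_defaults, **live_pipeline_parameters}
-
-endpoint_url = "${rtspUrl}"
-for name, value in resolved.items():
-    endpoint_url = endpoint_url.replace("${" + name + "}", value)
-
-print(endpoint_url)  # rtsp://rtspsim:554/media/lots_015.mkv
-```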
-
-Within a few seconds, you will see the following response in the **OUTPUT** window.
-
-```
-[DirectMethod] Invoking Direct Method [pipelineTopologySet] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 201,
- "payload": {
- "systemData": {
- "createdAt": "2021-05-03T15:17:53.483Z",
- "lastModifiedAt": "2021-05-03T15:17:53.483Z"
- },
- "name": "EVRtoVideoSinkOnMotionDetection",
- "properties": {
- "description": "Event-based video recording to Video Sink based on motion events",
- "parameters": [
- {
- "name": "hubSinkOutputName",
- "type": "string",
- "description": "hub sink output name",
- "default": "inferenceOutput"
- },
- {
- "name": "motionSensitivity",
- "type": "string",
- "description": "motion detection sensitivity",
- "default": "medium"
- },
- {
- "name": "rtspPassword",
- "type": "string",
- "description": "rtsp source password.",
- "default": "dummyPassword"
- },
- {
- "name": "rtspUrl",
- "type": "string",
- "description": "rtsp Url"
- },
- {
- "name": "rtspUserName",
- "type": "string",
- "description": "rtsp source user name.",
- "default": "dummyUserName"
- }
- ],
- "sources": [
- {
- "@type": "#Microsoft.VideoAnalyzer.RtspSource",
- "name": "rtspSource",
- "transport": "tcp",
- "endpoint": {
- "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
- "url": "${rtspUrl}",
- "credentials": {
- "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
- "username": "${rtspUserName}",
- "password": "${rtspPassword}"
- }
- }
- }
- ],
- "processors": [
- {
- "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
- "sensitivity": "${motionSensitivity}",
- "eventAggregationWindow": "PT1S",
- "name": "motionDetection",
- "inputs": [
- {
- "nodeName": "rtspSource",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ]
- },
- {
- "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
- "activationEvaluationWindow": "PT1S",
- "activationSignalOffset": "PT0S",
- "minimumActivationTime": "PT30S",
- "maximumActivationTime": "PT30S",
- "name": "signalGateProcessor",
- "inputs": [
- {
- "nodeName": "motionDetection",
- "outputSelectors": []
- },
- {
- "nodeName": "rtspSource",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ]
- }
- ],
- "sinks": [
- {
- "@type": "#Microsoft.VideoAnalyzer.VideoSink",
- "localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
- "localMediaCacheMaximumSizeMiB": "2048",
- "videoName": "sample-motion-video-camera001",
- "videoCreationProperties": {
- "title": "sample-motion-video-camera001",
- "description": "Motion-detection based recording of clips to a video resource",
- "segmentLength": "PT30S"
- },
- "name": "videoSink",
- "inputs": [
- {
- "nodeName": "signalGateProcessor",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ]
- },
- {
- "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
- "hubOutputName": "${hubSinkOutputName}",
- "name": "hubSink",
- "inputs": [
- {
- "nodeName": "motionDetection",
- "outputSelectors": []
- }
- ]
- }
- ]
- }
- }
-}
-```
---
-The status returned is 201, indicating that a new pipeline topology was created. Try the following direct methods as next steps:
-
-* Invoke `pipelineTopologySet` again and check that the status code returned is 200. Status code 200 indicates that an existing pipeline topology was successfully updated.
-* Invoke `pipelineTopologySet` again but change the description string. Check that the status code in the response is 200 and the description is updated to the new value.
-* Invoke `pipelineTopologyList` as outlined in the previous step and check that now you can see the "EVRtoVideoSinkOnMotionDetection" topology listed in the returned payload.
-
-### Read the pipeline topology
-
-Now invoke `pipelineTopologyGet` with the following payload:
-
-```
-{
- "@apiVersion" : "1.1",
- "name" : "EVRtoVideoSinkOnMotionDetection"
-}
-```
-
-Within a few seconds, you should see the following response in the OUTPUT window:
-
-```
-[DirectMethod] Invoking Direct Method [pipelineTopologyGet] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 200,
- "payload": {
- "systemData": {
- "createdAt": "2021-05-03T15:17:53.483Z",
- "lastModifiedAt": "2021-05-03T15:17:53.483Z"
- },
- "name": "EVRtoVideoSinkOnMotionDetection",
- "properties": {
- "description": "Event-based video recording to Video Sink based on motion events",
- "parameters": [
- {
- "name": "hubSinkOutputName",
- "type": "string",
- "description": "hub sink output name",
- "default": "inferenceOutput"
- },
- {
- "name": "motionSensitivity",
- "type": "string",
- "description": "motion detection sensitivity",
- "default": "medium"
- },
- {
- "name": "rtspPassword",
- "type": "string",
- "description": "rtsp source password.",
- "default": "dummyPassword"
- },
- {
- "name": "rtspUrl",
- "type": "string",
- "description": "rtsp Url"
- },
- {
- "name": "rtspUserName",
- "type": "string",
- "description": "rtsp source user name.",
- "default": "dummyUserName"
- }
- ],
- "sources": [
- {
- "@type": "#Microsoft.VideoAnalyzer.RtspSource",
- "name": "rtspSource",
- "transport": "tcp",
- "endpoint": {
- "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
- "url": "${rtspUrl}",
- "credentials": {
- "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
- "username": "${rtspUserName}",
- "password": "${rtspPassword}"
- }
- }
- }
- ],
- "processors": [
- {
- "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
- "sensitivity": "${motionSensitivity}",
- "eventAggregationWindow": "PT1S",
- "name": "motionDetection",
- "inputs": [
- {
- "nodeName": "rtspSource",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ]
- },
- {
- "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
- "activationEvaluationWindow": "PT1S",
- "activationSignalOffset": "PT0S",
- "minimumActivationTime": "PT30S",
- "maximumActivationTime": "PT30S",
- "name": "signalGateProcessor",
- "inputs": [
- {
- "nodeName": "motionDetection",
- "outputSelectors": []
- },
- {
- "nodeName": "rtspSource",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ]
- }
- ],
- "sinks": [
- {
- "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
- "hubOutputName": "${hubSinkOutputName}",
- "name": "hubSink",
- "inputs": [
- {
- "nodeName": "motionDetection",
- "outputSelectors": []
- }
- ]
- },
- {
- "@type": "#Microsoft.VideoAnalyzer.VideoSink",
- "localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
- "localMediaCacheMaximumSizeMiB": "2048",
- "videoName": "sample-motion-video-camera001",
- "videoCreationProperties": {
- "title": "sample-motion-video-camera001",
- "description": "Motion-detection based recording of clips to a video resource",
- "segmentLength": "PT30S"
- },
- "name": "videoSink",
- "inputs": [
- {
- "nodeName": "signalGateProcessor",
- "outputSelectors": [
- {
- "property": "mediaType",
- "operator": "is",
- "value": "video"
- }
- ]
- }
- ]
- }
- ]
- }
- }
-}
-```
-
-Note the following properties in the response payload:
-
-* Status code is 200, indicating success.
-* The payload includes the `createdAt` time stamp and the `lastModifiedAt` time stamp.
-
-### Create a live pipeline using the topology
-
-Next, create a live pipeline that references the above pipeline topology. Invoke the `livePipelineSet` direct method with the following payload:
-
-```
-{
- "@apiVersion" : "1.1",
- "name" : "Sample-Pipeline-1",
- "properties" : {
- "topologyName" : "EVRtoVideoSinkOnMotionDetection",
- "description" : "Sample pipeline description",
- "parameters" : [
- { "name" : "rtspUrl", "value" : "rtsp://rtspsim:554/media/lots_015.mkv" }
- ]
- }
-}
-```
-
-Note the following:
-
-* The payload above specifies the topology ("EVRtoVideoSinkOnMotionDetection") to be used by the live pipeline.
-* The payload contains a parameter value for `rtspUrl`, which did not have a default value in the topology payload.
-
-Within a few seconds, you will see the following response in the OUTPUT window:
-
-```
-[DirectMethod] Invoking Direct Method [livePipelineSet] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 201,
- "payload": {
- "systemData": {
- "createdAt": "2021-05-03T15:20:29.023Z",
- "lastModifiedAt": "2021-05-03T15:20:29.023Z"
- },
- "name": "Sample-Pipeline-1",
- "properties": {
- "state": "Inactive",
- "description": "Sample pipeline description",
- "topologyName": "EVRtoVideoSinkOnMotionDetection",
- "parameters": [
- {
- "name": "rtspUrl",
- "value": "rtsp://rtspsim:554/media/lots_015.mkv"
- }
- ]
- }
- }
-}
-```
-
-Note the following properties in the response payload:
-
-* Status code is 201, indicating a new live pipeline was created.
-* State is "Inactive", indicating that the live pipeline was created but not activated. For more information, see [pipeline states](../pipeline.md#pipeline-states).
-
-Try the following direct methods as next steps:
-
-* Invoke `livePipelineSet` again with the same payload and note that the returned status code is now 200.
-* Invoke `livePipelineSet` again but with a different description and note the updated description in the response payload, indicating that the live pipeline was successfully updated.
-
- > [!NOTE]
- > As explained in [Pipeline topologies](../pipeline.md#pipeline-topologies), you can create multiple live pipelines, to analyze live video streams from many cameras using the same pipeline topology. However, this particular topology hard-codes the value of `videoName`. Since only one live video source should be recorded to a Video Analyzer video resource, you must not create additional live pipelines with this particular topology.
-
-### Activate the live pipeline
-
-Next, you can activate the live pipeline - which starts the flow of (simulated) live video through the pipeline. Invoke the direct method `livePipelineActivate` with the following payload:
-
-```
-{
- "@apiVersion" : "1.1",
- "name" : "Sample-Pipeline-1"
-}
-```
-
-Within a few seconds, you should see the following response in the OUTPUT window:
-
-```
-[DirectMethod] Invoking Direct Method [livePipelineActivate] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 200,
- "payload": null
-}
-```
-
-Status code of 200 in the response payload indicates that the live pipeline was successfully activated.
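-
-When you drive these calls from code, it's convenient to poll `livePipelineGet` (described in the next section) until the state reports "Active". The following is a minimal sketch using the same assumed `azure-iot-hub` setup as the earlier `pipelineTopologyList` sketch.
-
-```python
-# Sketch: activate the live pipeline, then poll until its state is "Active".
-import os
-import time
-
-from azure.iot.hub import IoTHubRegistryManager
-from azure.iot.hub.models import CloudToDeviceMethod
-
-registry_manager = IoTHubRegistryManager(os.environ["IOTHUB_CONNECTION_STRING"])
-
-def invoke(method_name, payload):
-    method = CloudToDeviceMethod(
-        method_name=method_name, payload=payload, response_timeout_in_seconds=30
-    )
-    return registry_manager.invoke_device_module_method(
-        "avasample-iot-edge-device", "avaedge", method
-    )
-
-invoke("livePipelineActivate", {"@apiVersion": "1.1", "name": "Sample-Pipeline-1"})
-
-for _ in range(10):
-    result = invoke("livePipelineGet", {"@apiVersion": "1.1", "name": "Sample-Pipeline-1"})
-    state = result.payload["properties"]["state"]
-    print("Live pipeline state:", state)
-    if state == "Active":
-        break
-    time.sleep(5)
-```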
-
-### Check the state of the live pipeline
-
-Now invoke the `livePipelineGet` direct method with the following payload:
-
-```
-{
- "@apiVersion" : "1.1",
- "name" : "Sample-Pipeline-1"
-}
-```
-
-Within a few seconds, you should see the following response in the OUTPUT window:
-
-```
-[DirectMethod] Invoking Direct Method [livePipelineGet] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 200,
- "payload": {
- "systemData": {
- "createdAt": "2021-05-03T14:21:21.750Z",
- "lastModifiedAt": "2021-05-03T14:21:21.750Z"
- },
- "name": "Sample-Pipeline-1",
- "properties": {
- "state": "Active",
- "description": "Sample pipeline description",
- "topologyName": "EVRtoVideoSinkOnMotionDetection",
- "parameters": [
- {
- "name": "rtspUrl",
- "value": "rtsp://rtspsim:554/media/lots_015.mkv"
- }
- ]
- }
- }
-}
-```
-
-Note the following properties in the response payload:
-
-* Status code is 200, indicating success.
-* State is "Active", indicating the live pipeline is now in "Active" state.
-
-## Observe results
-
-The live pipeline that you created and activated above uses the motion detection processor node to detect motion in the incoming live video stream and sends events to the IoT Hub message sink. These events are then relayed to your IoT Hub as messages, which you can now observe. You will see messages like the following in the OUTPUT window:
-
-```
-[IoTHubMonitor] [1:22:53 PM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "body": {
- "sdp": "SDP:\nv=0\r\no=- 1620066173760872 1 IN IP4 172.18.0.4\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/lots_015.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.08.19\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-73.000\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/lots_015.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=640028;sprop-parameter-sets=Z2QAKKzZQHgCJoQAAAMABAAAAwDwPGDGWA==,aOvhEsiw\r\na=control:track1\r\n"
- },
- "properties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoAnalyzers/{ava-account-name}",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sources/rtspSource",
- "eventType": "Microsoft.VideoAnalyzer.Diagnostics.MediaSessionEstablished",
- "eventTime": "2021-05-03T18:22:53.761Z",
- "dataVersion": "1.0"
- },
- "systemProperties": {
- "iothub-connection-device-id": "avasample-iot-edge-device",
- "iothub-connection-module-id": "avaedge",
- "iothub-connection-auth-method": "{\"scope\":\"module\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
- "iothub-connection-auth-generation-id": "637556611958535962",
- "iothub-enqueuedtime": 1620066173816,
- "iothub-message-source": "Telemetry",
- "messageId": "c2de6a40-1e0a-45ef-9449-599fc5680d05",
- "contentType": "application/json",
- "contentEncoding": "utf-8"
- }
-}
-[IoTHubMonitor] [1:22:59 PM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "body": {
- "timestamp": 145805956115743,
- "inferences": [
- {
- "type": "motion",
- "motion": {
- "box": {
- "l": 0.48954,
- "t": 0.140741,
- "w": 0.075,
- "h": 0.058824
- }
- }
- }
- ]
- },
- "properties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoAnalyzers/{ava-account-name}",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/processors/motionDetection",
- "eventType": "Microsoft.VideoAnalyzer.Analytics.Inference",
- "eventTime": "2021-05-03T18:22:59.063Z",
- "dataVersion": "1.0"
- },
- "systemProperties": {
- "iothub-connection-device-id": "avasample-iot-edge-device",
- "iothub-connection-module-id": "avaedge",
- "iothub-connection-auth-method": "{\"scope\":\"module\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
- "iothub-connection-auth-generation-id": "637556611958535962",
- "iothub-enqueuedtime": 1620066179359,
- "iothub-message-source": "Telemetry",
- "messageId": "9ccfab80-2993-42c7-9452-92e21df96413",
- "contentType": "application/json",
- "contentEncoding": "utf-8"
- }
-}
-
-```
-
-Note the following properties in the above messages:
-
-* Each message contains a `body` section and a `properties` section. To understand what these sections represent, read the article [Create and Read IoT Hub message](../../../iot-hub/iot-hub-devguide-messages-construct.md).
-* The first message is **MediaSessionEstablished**, indicating that the RTSP source node (the subject) was able to establish a connection with the RTSP simulator and begin receiving a (simulated) live feed.
-* The `subject` references the node in the live pipeline from which the message was generated. In this case, the message is originating from the RTSP source node.
-* The `eventType` indicates that this is a Diagnostics event.
-* The `eventTime` indicates the time when the event occurred.
-* The `body` contains data about the event - it's the [SDP](https://en.wikipedia.org/wiki/Session_Description_Protocol) message.
-* The second message is an **Inference** event. You can check that it is sent roughly 5 seconds after the **MediaSessionEstablished** message, which corresponds to the delay between the start of the video, and when the car drives through the parking lot.
-* The `subject` references the motion detection processor node in the pipeline, which generated this message.
-* This is an Analytics event and hence the `body` contains `timestamp` and `inferences` data.
-* The `inferences` section indicates that the `type` is "motion" and has additional data about the "motion" event.
-
-Notice that the `messageId` in the system properties is "9ccfab80-2993-42c7-9452-92e21df96413". It shows up as the `correlationId` in the following operational event:
-
-```
-[IoTHubMonitor] [1:23:31 PM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "body": {
- "outputType": "video",
- "outputLocation": "sample-motion-video-camera001"
- },
- "properties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoAnalyzers/{ava-account-name}",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sinks/videoSink",
- "eventType": "Microsoft.VideoAnalyzer.Operational.RecordingStarted",
- "eventTime": "2021-05-03T18:23:31.319Z",
- "dataVersion": "1.0"
- },
- "systemProperties": {
- "iothub-connection-device-id": "avasample-iot-edge-device",
- "iothub-connection-module-id": "avaedge",
- "iothub-connection-auth-method": "{\"scope\":\"module\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
- "iothub-connection-auth-generation-id": "637556611958535962",
- "iothub-enqueuedtime": 1620066211373,
- "iothub-message-source": "Telemetry",
- "messageId": "c7cbb363-7cc7-4169-936f-55de5fae111c",
- "correlationId": "9ccfab80-2993-42c7-9452-92e21df96413",
- "contentType": "application/json",
- "contentEncoding": "utf-8"
- }
-}
-```
-
-This **RecordingStarted** message indicates that video recording started. Notice that the `correlationId` value of "9ccfab80-2993-42c7-9452-92e21df96413" matches the `messageId` of the **Inference** message, which allows you to track the event that triggered the recording. The next operational event is the following:
-
-```
-[IoTHubMonitor] [1:24:00 PM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "body": {
- "outputType": "video",
- "outputLocation": "sample-motion"
- },
- "properties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoAnalyzers/{ava-account-name}",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sinks/videoSink",
- "eventType": "Microsoft.VideoAnalyzer.Operational.RecordingAvailable",
- "eventTime": "2021-05-03T18:24:00.686Z",
- "dataVersion": "1.0"
- },
- "systemProperties": {
- "iothub-connection-device-id": "avasample-iot-edge-device",
- "iothub-connection-module-id": "avaedge",
- "iothub-connection-auth-method": "{\"scope\":\"module\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
- "iothub-connection-auth-generation-id": "637556611958535962",
- "iothub-enqueuedtime": 1620066240741,
- "iothub-message-source": "Telemetry",
- "messageId": "5b26aa88-e037-4834-af34-a6a4df3c42c2",
- "correlationId": "9ccfab80-2993-42c7-9452-92e21df96413",
- "contentType": "application/json",
- "contentEncoding": "utf-8"
- }
-}
-[IoTHubMonitor] [1:24:00 PM] Message received from [avasample-iot-edge-device/avaedge]:
-{
-```
-
-This **RecordingAvailable** event indicates that the media data has now been recorded to the video resource. Notice that the `correlationId` is the same: "9ccfab80-2993-42c7-9452-92e21df96413". The last operational event for this chain of messages (with the same `correlationId`) is as follows:
-
-```
-[IoTHubMonitor] [1:24:00 PM] Message received from [avasample-iot-edge-device/avaedge]:
-{
- "body": {
- "outputType": "video",
- "outputLocation": "sample-motion"
- },
- "properties": {
- "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoAnalyzers/{ava-account-name}",
- "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sinks/videoSink",
- "eventType": "Microsoft.VideoAnalyzer.Operational.RecordingStopped",
- "eventTime": "2021-05-03T18:24:00.710Z",
- "dataVersion": "1.0"
- },
- "systemProperties": {
- "iothub-connection-device-id": "avasample-iot-edge-device",
- "iothub-connection-module-id": "avaedge",
- "iothub-connection-auth-method": "{\"scope\":\"module\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
- "iothub-connection-auth-generation-id": "637556611958535962",
- "iothub-enqueuedtime": 1620066240766,
- "iothub-message-source": "Telemetry",
- "messageId": "f3dbd5d5-3176-4d5b-80d8-c67de85bc619",
- "correlationId": "9ccfab80-2993-42c7-9452-92e21df96413",
- "contentType": "application/json",
- "contentEncoding": "utf-8"
- }
-}
-```
-This **RecordingStopped** event indicates that the signal gate has closed, and the relevant portion of the incoming live video has been recorded. Notice that the `correlationId` is the same: "9ccfab80-2993-42c7-9452-92e21df96413".
-
-In the topology, the signal gate processor node was configured with activation times of 30 seconds, which means that the pipeline will record roughly 30 seconds of video. While video is being recorded, the motion detection processor node will continue to emit **Inference** events, which will show up in the OUTPUT window in between the **RecordingAvailable** and **RecordingStopped** events.
-
-If you let the live pipeline continue to run, the RTSP simulator will reach the end of the video file and stop/disconnect. The RTSP source node will then reconnect to the simulator, and the process will repeat.
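-
-If you captured the monitored messages to a file, you can reconstruct these chains programmatically. The following is a minimal sketch; it assumes you saved the messages shown in the OUTPUT window as a JSON array in a file named `events.json` (the file name is a placeholder).
-
-```python
-# Sketch: group events by correlationId to follow each recording chain
-# (Inference -> RecordingStarted -> RecordingAvailable -> RecordingStopped).
-import json
-from collections import defaultdict
-
-with open("events.json") as f:
-    events = json.load(f)
-
-chains = defaultdict(list)
-for event in events:
-    system = event.get("systemProperties", {})
-    # Inference messages carry a messageId; operational events reference it as correlationId.
-    key = system.get("correlationId") or system.get("messageId")
-    chains[key].append(event.get("properties", {}).get("eventType"))
-
-for correlation_id, event_types in chains.items():
-    print(correlation_id, "->", event_types)
-```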
-
-## Invoke additional direct method calls to clean up
-
-Next, you can invoke direct methods to deactivate and delete the live pipeline (in that order).
-
-### Deactivate the live pipeline
-
-Invoke the `livePipelineDeactivate` direct method with the following payload:
-
-```
-{
- "@apiVersion" : "1.1",
- "name" : "Sample-Pipeline-1"
-}
-```
-
-Within a few seconds, you should see the following response in the OUTPUT window.
-
-```
-[DirectMethod] Invoking Direct Method [livePipelineDeactivate] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 200,
- "payload": null
-}
-```
-
-Status code of 200 indicates that the live pipeline was successfully deactivated.
--
-### Delete the live pipeline
-
-Invoke the direct method `livePipelineDelete` with the following payload:
-
-```
-{
- "@apiVersion" : "1.1",
- "name" : "Sample-Pipeline-1"
-}
-```
-
-Within a few seconds, you should see the following response in the OUTPUT window:
-
-```
-[DirectMethod] Invoking Direct Method [livePipelineDelete] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 200,
- "payload": null
-}
-```
-
-Status code of 200 in the response indicates that the live pipeline was successfully deleted.
--
-### Delete the pipeline topology
-
-Invoke the `pipelineTopologyDelete` direct method with the following payload:
-
-```
-{
- "@apiVersion" : "1.1",
- "name" : "EVRtoVideoSinkOnMotionDetection"
-}
-```
-
-Within a few seconds, you should see the following response in the OUTPUT window:
-
-```
-[DirectMethod] Invoking Direct Method [pipelineTopologyDelete] to [avasample-iot-edge-device/avaedge] ...
-[DirectMethod] Response from [avasample-iot-edge-device/avaedge]:
-{
- "status": 200,
- "payload": null
-}
-```
-
-Status code of 200 indicates that the pipeline topology was successfully deleted.
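-
-Because the clean-up calls must happen in this order (deactivate the live pipeline, delete it, then delete the topology), you may find it convenient to script them. The following is a minimal sketch using the same assumed `azure-iot-hub` setup as the earlier direct-method sketches.
-
-```python
-# Sketch: run the clean-up direct methods in the required order.
-import os
-
-from azure.iot.hub import IoTHubRegistryManager
-from azure.iot.hub.models import CloudToDeviceMethod
-
-registry_manager = IoTHubRegistryManager(os.environ["IOTHUB_CONNECTION_STRING"])
-
-def invoke(method_name, payload):
-    method = CloudToDeviceMethod(
-        method_name=method_name, payload=payload, response_timeout_in_seconds=30
-    )
-    return registry_manager.invoke_device_module_method(
-        "avasample-iot-edge-device", "avaedge", method
-    )
-
-for method_name, name in [
-    ("livePipelineDeactivate", "Sample-Pipeline-1"),
-    ("livePipelineDelete", "Sample-Pipeline-1"),
-    ("pipelineTopologyDelete", "EVRtoVideoSinkOnMotionDetection"),
-]:
-    result = invoke(method_name, {"@apiVersion": "1.1", "name": name})
-    print(method_name, "->", result.status)
-```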
-
-## Play back the recording
-
-You can examine the Video Analyzer video resource that was created by the live pipeline by logging in to the Azure portal and viewing the video.
-1. Open your web browser, and go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
-1. Locate your Video Analyzer account among the resources you have in your subscription, and open the account pane.
-1. Select **Videos** in the **Video Analyzers** list.
-1. You'll find a video listed with the name `sample-motion-video-camera001`. This is the name chosen in your pipeline topology file.
-1. Select the video.
-1. The video details page will open and the playback should start automatically.
-
---
-## Clean up resources
--
-## Next steps
-
-* Check out [Recorded and live videos](../viewing-videos-how-to.md)
-* Try [Quickstart: Analyze a live video feed from a (simulated) IP camera using your own HTTP model](analyze-live-video-use-your-model-http.md)
azure-video-analyzer Detect Motion Record Video Edge Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-record-video-edge-devices.md
- Title: Detect motion and record video on edge devices
-description: Use Azure Video Analyzer to analyze the live video feed from a (simulated) IP camera. It shows how to detect if any motion is present, and if so, record an MP4 video clip to the local file system on the edge device. The quickstart uses an Azure VM as an IoT Edge device and also uses a simulated live video stream.
- Previously updated : 06/01/2021
-zone_pivot_groups: video-analyzer-programming-languages
---
-# Quickstart: Detect motion and