Updates from: 01/03/2023 02:06:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-configure.md
To configure provisioning, follow these steps.
7. Enter a **Notification email**. Notifications are sent to this email when provisioning isn't healthy. It's recommended that you keep **Prevent accidental deletion** enabled and set the **Accidental deletion threshold** to the number of deletions at which you want to be notified. For more information, see [accidental deletes](#accidental-deletions) below.
8. Move the selector to **Enable**, and select **Save**.
+ >[!NOTE]
+ > During the configuration process, the synchronization service account is created with the format **ADToAADSyncServiceAccount@[TenantID].onmicrosoft.com**. You may get an error if multi-factor authentication or other interactive authentication policies are accidentally enabled for the synchronization service account. Removing multi-factor authentication and any interactive authentication policies for the synchronization service account should resolve the error so that you can complete the configuration.
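To check that the account isn't targeted by such policies, a minimal sketch, assuming the AzureAD PowerShell module is installed (the account name follows the documented format; your tenant's value differs):

```powershell
# Sketch, assuming the AzureAD PowerShell module. Locate the cloud sync service
# account so you can verify no multi-factor authentication or other interactive
# authentication policies apply to it.
Connect-AzureAD
Get-AzureADUser -SearchString "ADToAADSyncServiceAccount"
```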
++ ## Scope provisioning to specific users and groups

You can scope the agent to synchronize specific users and groups by using on-premises Active Directory groups or organizational units. You can't configure groups and organizational units within a configuration.

>[!NOTE]
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
Title: Change resource roles for an access package in entitlement management - Microsoft Entra
+ Title: Change resource roles for an access package in entitlement management - Azure AD
description: Learn how to change the resource roles for an existing access package in entitlement management.
documentationCenter: ''
active-directory Entitlement Management Access Package Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-settings.md
Title: Share link to request an access package in entitlement management - Microsoft Entra
+ Title: Share link to request an access package in entitlement management - Azure AD
description: Learn how to share link to request an access package in entitlement management.
documentationCenter: ''
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
Title: Create and manage a catalog of resources in entitlement management - Microsoft Entra
+ Title: Create and manage a catalog of resources in entitlement management - Azure AD
description: Learn how to create a new container of resources and access packages in entitlement management.
documentationCenter: ''
active-directory Entitlement Management Delegate Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-catalog.md
Title: Delegate access governance to catalog creators in entitlement management - Microsoft Entra
+ Title: Delegate access governance to catalog creators in entitlement management - Azure AD
description: Learn how to delegate access governance from IT administrators to catalog creators and project managers so that they can manage access themselves.
documentationCenter: ''
active-directory Entitlement Management Delegate Managers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-managers.md
Title: Delegate access governance to access package managers in entitlement management - Microsoft Entra
+ Title: Delegate access governance to access package managers in entitlement management - Azure AD
description: Learn how to delegate access governance from IT administrators to access package managers and project managers so that they can manage access themselves.
documentationCenter: ''
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
Title: Govern access for external users in entitlement management - Microsoft Entra
+ Title: Govern access for external users in entitlement management - Azure AD
description: Learn about the settings you can specify to govern access for external users in entitlement management.
documentationCenter: ''
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
Title: Archive & report with Azure Monitor - Microsoft Entra entitlement management
+ Title: Archive & report with Azure Monitor - entitlement management
description: Learn how to archive logs and create reports with Azure Monitor in entitlement management.
documentationCenter: ''
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
Title: What is entitlement management? - Microsoft Entra
+ Title: What is entitlement management? - Azure AD
description: Get an overview of entitlement management and how you can use it to manage access to groups, applications, and SharePoint Online sites for internal and external users.
documentationCenter: ''
active-directory Entitlement Management Reprocess Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md
Title: Reprocess assignments for an access package in entitlement management - Microsoft Entra
+ Title: Reprocess assignments for an access package in entitlement management - Azure AD
description: Learn how to reprocess assignments for an access package in entitlement management.
documentationCenter: ''
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Previously updated : 02/09/2022 Last updated : 01/02/2023
zone_pivot_groups: home-realm-discovery
# Configure sign-in behavior using Home Realm Discovery
-This article provides an introduction to configuring Azure Active Directory(Azure AD) authentication behavior for federated users using Home Realm Discovery (HRD) policy. It covers using auto-acceleration to skip the username entry screen and automatically forward users to federated login endpoints. To learn more about HRD policy, see [Home Realm Discovery](home-realm-discovery-policy.md)
+This article provides an introduction to configuring Azure Active Directory (Azure AD) authentication behavior for federated users using Home Realm Discovery (HRD) policy. It covers using auto-acceleration sign-in to skip the username entry screen and automatically forward users to federated login endpoints. To learn more about HRD policy, check out the [Home Realm Discovery](home-realm-discovery-policy.md) article.
+
+## Auto-acceleration sign-in
+
+Some organizations configure domains in their Azure AD tenant to federate with another identity provider (IDP), such as AD FS, for user authentication. When a user signs into an application, they're first presented with an Azure AD sign-in page. After they've typed their UPN, if they're in a federated domain, they're then taken to the sign-in page of the IDP serving that domain. Under certain circumstances, administrators might want to direct users straight to the federated IDP's sign-in page when they're signing in to specific applications, so that users can skip the initial Azure AD page. This process is referred to as "sign-in auto-acceleration."
For federated users with cloud-enabled credentials, such as SMS sign-in or FIDO keys, you should prevent sign-in auto-acceleration. See [Disable auto-acceleration sign-in](prevent-domain-hints-with-home-realm-discovery.md) to learn how to prevent domain hints with HRD.
If nothing is returned, it means you have no policies created in your tenant.
In this example, you create a policy that when it's assigned to an application either:
-- Auto-accelerates users to an federated identity provider sign-in screen when they are signing in to an application when there is a single domain in your tenant.
-- Auto-accelerates users to an federated identity provider sign-in screen if there is more than one federated domain in your tenant.
+- Auto-accelerates users to a federated identity provider sign-in screen when they're signing in to an application when there's a single domain in your tenant.
+- Auto-accelerates users to a federated identity provider sign-in screen if there's more than one federated domain in your tenant.
- Enables non-interactive username/password sign-in directly to Azure AD for federated users for the applications the policy is assigned to (see the sketch after this list).
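For that last variant, a minimal sketch, assuming the AzureAD PowerShell module; `AllowCloudPasswordValidation` is the HRD policy property that permits direct password validation against Azure AD, and the display name is illustrative:

```powershell
# Sketch: enable non-interactive username/password sign-in directly to Azure AD
# for federated users. The display name is illustrative.
New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"AllowCloudPasswordValidation`":true}}") `
    -DisplayName "EnableDirectAuthPolicy" -Type "HomeRealmDiscoveryPolicy"
```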
-The following policy auto-accelerates users to an federated identity provider sign-in screen when they're signing in to an application when there's a single domain in your tenant.
+The following policy auto-accelerates users to a federated identity provider sign-in screen when they're signing in to an application when there's a single domain in your tenant.
::: zone pivot="powershell-hrd"
New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"AccelerateToFederatedDomain`":true}}") -DisplayName BasicAutoAccelerationPolicy -Type HomeRealmDiscoveryPolicy
```
::: zone-end
-The following policy auto-accelerates users to an federated identity provider sign-in screen when there is more than one federated domain in your tenant. If you have more than one federated domain that authenticates users for applications, you need to specify the domain to auto-accelerate.
+The following policy auto-accelerates users to a federated identity provider sign-in screen when there's more than one federated domain in your tenant. If you have more than one federated domain that authenticates users for applications, you need to specify the domain to auto-accelerate.
::: zone pivot="powershell-hrd"
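A sketch of such a policy, assuming the AzureAD PowerShell module; `PreferredDomain` names the domain to accelerate to, and `federated.example.edu` is a placeholder:

```powershell
# Sketch: auto-accelerate to one specific federated domain when the tenant
# contains several. Replace the placeholder domain with your own.
New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"AccelerateToFederatedDomain`":true,`"PreferredDomain`":`"federated.example.edu`"}}") `
    -DisplayName "MultiDomainAutoAccelerationPolicy" -Type "HomeRealmDiscoveryPolicy"
```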
To see your new policy and get its **ObjectID**, run the following command:
```powershell
Get-AzureADPolicy
```
-To apply the HRD policy after you have created it, you can assign it to multiple application service principals.
+To apply the HRD policy after you've created it, you can assign it to multiple application service principals.
## Locate the service principal to which to assign the policy
You need the **ObjectID** of the service principals to which you want to assign the policy.
You can use the [Azure portal](https://portal.azure.com), or you can query [Microsoft Graph](/graph/api/resources/serviceprincipal). You can also go to the [Graph Explorer Tool](https://developer.microsoft.com/graph/graph-explorer) and sign in to your Azure AD account to see all your organization's service principals.
-Because you are using PowerShell, you can use the following cmdlet to list the service principals and their IDs.
+Because you're using PowerShell, you can use the following cmdlet to list the service principals and their IDs.
```powershell
Get-AzureADServicePrincipal
```
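With both ObjectIDs in hand, a minimal sketch of the assignment step, assuming the AzureAD PowerShell module; the variables are placeholders for the IDs returned by the queries above:

```powershell
# Sketch: attach the HRD policy to one application's service principal.
# $spObjectId comes from Get-AzureADServicePrincipal; $policyObjectId from Get-AzureADPolicy.
Add-AzureADServicePrincipalPolicy -Id $spObjectId -RefObjectId $policyObjectId
```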
From the Microsoft Graph explorer window:
## Next steps
-[Prevent sign-in auto-acceleration](prevent-domain-hints-with-home-realm-discovery.md).
+- [Prevent sign-in auto-acceleration](prevent-domain-hints-with-home-realm-discovery.md)
+- [Home Realm Discovery for an application](./home-realm-discovery-policy.md)
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Previously updated : 08/24/2022 Last updated : 01/02/2023
# Home Realm Discovery for an application
-Home Realm Discovery (HRD) is the process that allows Azure Active directory (Azure AD) to determine which identity provider ("IdP") a user needs to authenticate with at sign-in time. When a user signs in to an Azure AD tenant to access a resource, or to the Azure AD common sign-in page, they type a user name (UPN). Azure AD uses that to discover where the user needs to sign in.
+Home Realm Discovery (HRD) is the process that allows Azure Active Directory (Azure AD) to determine which identity provider (IDP) a user needs to authenticate with at sign-in time. When a user signs in to an Azure AD tenant to access a resource, or to the Azure AD common sign-in page, they type a user name (UPN). Azure AD uses that to discover where the user needs to sign in.
The user will be taken to one of the following identity providers to be authenticated:
For example, the application "largeapp.com" might enable their customers to acce
Domain hint syntax varies depending on the protocol that's used, and it's typically configured in the application in the following ways:
-- For applications that use the**WS-Federation**: whr=contoso.com in the query string.
+- For applications that use **WS-Federation**: the `whr` query string parameter. For example, `whr=contoso.com`.
- For applications that use **SAML**: Either a SAML authentication request that contains a domain hint, or a query string whr=contoso.com.
-- For applications that use the **Open ID Connect**: A query string domain_hint=contoso.com.
+- For applications that use **OpenID Connect**: the `domain_hint` query string parameter. For example, `domain_hint=contoso.com`.
-By default, Azure AD attempts to redirect sign-in to the IdP that's configured for a domain if **both** of the following are true:
+By default, Azure AD attempts to redirect sign-in to the IDP that's configured for a domain if **both** of the following are true:
- A domain hint is included in the authentication request from the application **and**
- The tenant is federated with that domain.
active-directory Configure Cmmc Level 2 Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| AC.L2-3.1.9 | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access** <br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) |
| AC.L2-3.1.10 | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)). |
| AC.L2-3.1.11 | Enable Continuous Access Evaluation (CAE) for all supported applications. For applications that don't support CAE, or for conditions not applicable to CAE, implement policies in Microsoft Defender for Cloud Apps to automatically terminate sessions when conditions occur. Additionally, configure Azure Active Directory Identity Protection to evaluate user and sign-in risk. Use conditional access with Identity Protection to allow users to automatically remediate risk.<br>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<br>[Control cloud app usage by creating policies](/defender-cloud-apps/control-cloud-apps-with-policies)<br>[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md) |
|AC.L2-3.1.12 | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. It's critical to securing this pattern of access to adopt zero trust principals. To meet these controls requirements in a modern cloud world we must verify each access request explicitly, implement least privilege and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad.md)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts.md) |
+|AC.L2-3.1.12 | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. To secure this pattern of access, it's critical to adopt Zero Trust principles. To meet these control requirements in a modern cloud world, we must verify each access request explicitly, implement least privilege, and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts) |
| AC.L2-3.1.13 | All Azure AD customer-facing web services are secured with the Transport Layer Security (TLS) protocol and are implemented using FIPS-validated cryptography.<br>[Azure Active Directory Data Security Considerations (microsoft.com)](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) |
-| AC.L2-3.1.14 | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview.md) |
-| AC.L2-3.1.15 | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview.md)<br>[Filter for devices as a condition in Conditional Access policy](/azure/active-directory/conditional-access/concept-condition-filters-for-devices.md) |
-| AC.L2-3.1.18 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md) |
+| AC.L2-3.1.14 | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition)<br>[Session controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-session)<br>[Securing privileged access overview](/security/compass/overview) |
+| AC.L2-3.1.15 | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps)<br>[Securing privileged access overview](/security/compass/overview)<br>[Filter for devices as a condition in Conditional Access policy](/azure/active-directory/conditional-access/concept-condition-filters-for-devices) |
+| AC.L2-3.1.18 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management) |
| AC.L2-3.1.19 | **Managed Device**<br>Configure conditional access policies to enforce a compliant or hybrid Azure AD joined (HAADJ) device and to ensure managed devices are configured appropriately via a device management solution to encrypt CUI<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) |
-| AC.L2-3.1.21 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb.md)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad.md)
+| AC.L2-3.1.21 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices, where you may be unable to granularly control access to portable storage, block downloads entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Configure authentication session management - Azure Active Directory](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad) |
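Several of the controls above rely on Conditional Access grant controls. As a hedged sketch, assuming the Microsoft Graph PowerShell SDK, a report-only policy requiring a compliant or hybrid Azure AD joined device might look like the following; the display name and all-users/all-apps scoping are illustrative:

```powershell
# Sketch, assuming the Microsoft Graph PowerShell SDK (Microsoft.Graph module).
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$params = @{
    displayName   = "Require compliant or hybrid joined device"  # illustrative name
    state         = "enabledForReportingButNotEnforced"          # report-only to start
    conditions    = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantDevice", "domainJoinedDevice")
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```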
### Next steps
active-directory Configure Cmmc Level 2 Additional Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-additional-controls.md
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| *Control* | *Guidance* |
| - | - |
-| AU.L2-3.3.1<br><br>AU.L2-3.3.2 | All operations are audited within the Azure AD audit logs. Each audit log entry contains a user's immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
-| AU.L2-3.3.4 | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory. <br>[What is Azure Service Health?](/azure/service-health/overview.md)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
-| AU.L2-3.3.6 | Ensure Azure AD events are included in event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure compliance status of accounts. <br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory.md)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
-| AU.L2-3.3.8<br><br>AU.L2-3.3.9 | Azure AD logs are retained by default for 30 days. These logs are unable to modified or deleted and are only accessible to limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-sign-ins.md)<br>[Audit logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-audit-logs.md)
+| AU.L2-3.3.1<br><br>AU.L2-3.3.2 | All operations are audited within the Azure AD audit logs. Each audit log entry contains a user's immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AU.L2-3.3.4 | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory. <br>[What is Azure Service Health?](/azure/service-health/overview)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
+| AU.L2-3.3.6 | Ensure Azure AD events are included in the event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to verify the compliance status of accounts. <br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AU.L2-3.3.8<br><br>AU.L2-3.3.9 | Azure AD logs are retained by default for 30 days. These logs can't be modified or deleted and are only accessible to a limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-sign-ins)<br>[Audit logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-audit-logs) |
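For AU.L2-3.3.1 and AU.L2-3.3.2, a minimal sketch of pulling recent audit events along with the initiating user's immutable objectID, assuming the Microsoft Graph PowerShell SDK:

```powershell
# Sketch, assuming the Microsoft Graph PowerShell SDK (Microsoft.Graph.Reports).
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Recent directory audit entries, traced to the initiating user's objectId.
Get-MgAuditLogDirectoryAudit -Top 10 |
    Select-Object ActivityDateTime, ActivityDisplayName,
        @{ Name = 'InitiatedByObjectId'; Expression = { $_.InitiatedBy.User.Id } }
```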
## Configuration Management (CM)
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| *Control* | *Guidance* |
| - | - |
-| CM.L2-3.4.2 | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager(MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access require hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity.md)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](/azure/active-directory/conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune.md)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview.md) |
-| CM.L2-3.4.5 | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction above is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](/azure/active-directory/roles/custom-overview.md)<br>[What is Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure.md)<br>[Approve or deny requests for Azure AD roles in PIM](/azure/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md) |
-| CM.L2-3.4.6 | Configure device management solutions (Such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure conditional access to restrict access to compliant or hybrid Azure AD joined devices. <br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) |
-| CM.L2-3.4.7 | Use Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](/azure/active-directory/roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. Groups ](/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](/azure/active-directory/manage-apps/configure-user-consent?tabs=azure-portal.md)<br>[Configure group owner consent to apps accessing group data](/azure/active-directory/manage-apps/configure-user-consent-groups?tabs=azure-portal.md)<br>[Configure the admin consent workflow](/azure/active-directory/manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps.d)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it.md) |
-| CM.L2-3.4.8 <br><br>CM.L2-3.4.9 | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Conditional Access - Require compliant or hybrid joined devices](/azure/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md) |
+| CM.L2-3.4.2 | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager (MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access policies that require a hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](/azure/active-directory/conditional-access/overview)<br>[Grant controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview) |
+| CM.L2-3.4.5 | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction above is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role-based access control. Eliminate standing privileged access and provide just-in-time access with approval workflows by using Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](/azure/active-directory/roles/custom-overview)<br>[What is Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure)<br>[Approve or deny requests for Azure AD roles in PIM](/azure/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow) |
+| CM.L2-3.4.6 | Configure device management solutions (such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure conditional access to restrict access to compliant or hybrid Azure AD joined devices. <br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) |
+| CM.L2-3.4.7 | Use the Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within the application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](/azure/active-directory/roles/permissions-reference)<br>[Azure AD App Roles - App Roles vs. Groups](/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps)<br>[Configure how users consent to applications](/azure/active-directory/manage-apps/configure-user-consent?tabs=azure-portal)<br>[Configure group owner consent to apps accessing group data](/azure/active-directory/manage-apps/configure-user-consent-groups?tabs=azure-portal)<br>[Configure the admin consent workflow](/azure/active-directory/manage-apps/configure-admin-consent-workflow)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it) |
+| CM.L2-3.4.8 <br><br>CM.L2-3.4.9 | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Conditional Access - Require compliant or hybrid joined devices](/azure/active-directory/conditional-access/howto-conditional-access-policy-compliant-device) |
## Incident Response (IR)
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| *Control* | *Guidance* |
| - | - |
-| IR.L2-3.6.1 | Implement incident handling and monitoring capabilities. The audit logs record all configuration changes. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. You can stream each of these logs directly into a SIEM solution, such as Microsoft Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<br><br>**Audit events**<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Sign-in activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-sign-ins.md)<br>[How To: Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk.md)<br><br>**SIEM integrations**<br>[Microsoft Sentinel : Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory.md)[Stream to Azure event hub and other SIEMs](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| IR.L2-3.6.1 | Implement incident handling and monitoring capabilities. The audit logs record all configuration changes. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. You can stream each of these logs directly into a SIEM solution, such as Microsoft Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<br><br>**Audit events**<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Sign-in activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-sign-ins)<br>[How To: Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<br><br>**SIEM integrations**<br>[Microsoft Sentinel: Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory)<br>[Stream to Azure event hub and other SIEMs](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
## Maintenance (MA)
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| *Control* | *Guidance* |
| - | - |
| MA.L2-3.7.5 | Accounts assigned administrative rights are targeted by attackers, including accounts used to establish non-local maintenance sessions. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.<br>[Conditional Access - Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md) |
-| MP.L2-3.8.7 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant#require-device-to-be-marked-as-compliant.md)<br>[Require hybrid Azure AD joined device](/conditional-access/concept-conditional-access-grant#require-hybrid-azure-ad-joined-device.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
+| MP.L2-3.8.7 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant#require-device-to-be-marked-as-compliant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant#require-hybrid-azure-ad-joined-device)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
## Personnel Security (PS)
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| *Control* | *Guidance* |
| - | - |
-| PS.L2-3.9.2 | Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions.<br><br>**Account provisioning**<br>[What is identity provisioning with Azure AD?](/azure/active-directory/cloud-sync/what-is-provisioning.md)<br>[Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis.md)<br>[What is Azure AD Connect cloud sync?](/azure/active-directory/cloud-sync/what-is-cloud-sync.md)<br><br>**Revoke all associated authenticators**<br>[Revoke user access in an emergency in Azure Active Directory](/azure/active-directory/enterprise-users/users-revoke-access.md) |
+| PS.L2-3.9.2 | Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions.<br><br>**Account provisioning**<br>[What is identity provisioning with Azure AD?](/azure/active-directory/cloud-sync/what-is-provisioning)<br>[Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis)<br>[What is Azure AD Connect cloud sync?](/azure/active-directory/cloud-sync/what-is-cloud-sync)<br><br>**Revoke all associated authenticators**<br>[Revoke user access in an emergency in Azure Active Directory](/azure/active-directory/enterprise-users/users-revoke-access) |
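For PS.L2-3.9.2, a hedged sketch of the disable-and-revoke step at termination, assuming the AzureAD PowerShell module; the UPN is a placeholder:

```powershell
# Sketch: disable a terminated user's account, then revoke existing sessions by
# invalidating refresh tokens. The UPN below is a placeholder.
Set-AzureADUser -ObjectId "leaver@contoso.com" -AccountEnabled $false
Revoke-AzureADUserAllRefreshToken -ObjectId "leaver@contoso.com"
```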
## System and Communications Protection (SC)
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| *Control* | *Guidance* |
| - | - |
-| SC.L2-3.13.3 | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktops.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices.md)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices.md)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
-| SC.L2-3.13.4 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>9-20 check split tunneling language. |
-| SC.L2-3.13.13 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required monitor the use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SC.L2-3.13.3 | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security-hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktop.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
+| SC.L2-3.13.4 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started) |
+| SC.L2-3.13.13 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required, monitor its use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
## System and Information Integrity (SI)
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| *Control* | *Guidance* |
| - | - |
-| SI.L2-3.14.7 | Consolidate telemetry: Azure AD logs to stream to SIEM, such as Azure Sentinel Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require Intrusion Detection/Protection (IDS/IPS) such as Microsoft Defender for Endpoint is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SI.L2-3.14.7 | Consolidate telemetry: stream Azure AD logs to a SIEM, such as Azure Sentinel. Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require that intrusion detection/prevention software (IDS/IPS), such as Microsoft Defender for Endpoint, is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
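The rows above repeatedly call for Conditional Access policies that enforce device compliance. As a hedged illustration only (not part of the official guidance), such a policy can also be created programmatically with the Microsoft Graph PowerShell SDK; the display name and the report-only state below are assumptions for the sketch:

```azurepowershell
# Sketch: create a report-only Conditional Access policy that requires a
# compliant or hybrid Azure AD joined device. Requires the Microsoft Graph
# PowerShell SDK and the Policy.ReadWrite.ConditionalAccess permission scope.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$policy = @{
    displayName = 'Require compliant or hybrid-joined device (report-only)'
    state       = 'enabledForReportingButNotEnforced'   # evaluate impact before enforcing
    conditions  = @{
        users        = @{ includeUsers = @('All') }
        applications = @{ includeApplications = @('All') }
    }
    grantControls = @{
        operator        = 'OR'
        builtInControls = @('compliantDevice', 'domainJoinedDevice')
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```

Running in report-only mode first lets you review the sign-in log impact before switching the policy's `state` to `enabled`.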
### Next steps
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
The following table provides a list of control IDs and associated customer responsibilities and guidance.
| *Control* | *Guidance* |
| - | - |
-| IA.L2-3.5.3 | The following are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the above requirement means:<li>All users are required MFA for network/remote access.<li>Only privileged users are required MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a "privileged account" and don't require MFA for local access.<br><br> You're responsible for configuring Conditional Access to require multifactor authentication. Enable Azure AD Authentication methods that meet AAL2 and above.<br>[Grant controls in Conditional Access policy - Azure Active Directory](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview.md)<br>[Authentication methods and features - Azure Active Directory](/azure/active-directory/authentication/concept-authentication-methods.md) |
-| IA.L2-3.5.4 | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview.md) |
+| IA.L2-3.5.3 | The following are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the requirement:<li>All users require MFA for network/remote access.<li>Only privileged users require MFA for local access. If regular user accounts have administrative rights only on their own computers, they're not a "privileged account" and don't require MFA for local access.<br><br>You're responsible for configuring Conditional Access to require multifactor authentication. Enable Azure AD authentication methods that meet AAL2 and above.<br>[Grant controls in Conditional Access policy - Azure Active Directory](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview)<br>[Authentication methods and features - Azure Active Directory](/azure/active-directory/authentication/concept-authentication-methods) |
+| IA.L2-3.5.4 | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview) |
| IA.L2-3.5.5 | All user, group, and device object globally unique identifiers (GUIDs) are guaranteed unique and non-reusable for the lifetime of the Azure AD tenant.<br>[user resource type - Microsoft Graph v1.0](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br>[group resource type - Microsoft Graph v1.0](/graph/api/resources/group?view=graph-rest-1.0&preserve-view=true)<br>[device resource type - Microsoft Graph v1.0](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
-| IA.L2-3.5.6 | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](/azure/active-directory/devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users.md)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser.md)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser.md)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice.md)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice.md) |
+| IA.L2-3.5.6 | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts)<br>[Manage stale devices in Azure AD](/azure/active-directory/devices/manage-stale-devices)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/user)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice) |
| IA.L2-3.5.7 <br><br>IA.L2-3.5.8 | We **strongly encourage** passwordless strategies. This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<br><br>Per NIST SP 800-63 B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<br><br>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<br>For customers that require strict password character change, password reuse, and complexity requirements, use hybrid accounts configured with password hash sync. This action ensures that the passwords synchronized to Azure AD inherit the restrictions configured in Active Directory password policies. Further protect on-premises passwords by configuring on-premises Azure AD Password Protection for Active Directory Domain Services.<br>[NIST Special Publication 800-63 B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5 (IA-5) Control enhancement (1)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf)<br>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md)<br>[What is password hash synchronization with Azure AD?](../hybrid/whatis-phs.md) |
-| IA.L2-3.5.9 | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](/azure/active-directory/fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](/azure/active-directory/authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/security/business/solutions/passwordless-authentication?ef_id=369464fc2ba818d0bd6507de2cde3d58:G:s&OCID=AIDcmmdamuj0pc_SEM_369464fc2ba818d0bd6507de2cde3d58:G:s&msclkid=369464fc2ba818d0bd6507de2cde3d58) |
+| IA.L2-3.5.9 | An Azure AD user's initial password is a temporary, single-use password that, once successfully used, must immediately be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap passwordless authentication methods using a Temporary Access Pass (TAP). A TAP is a time-limited and use-limited passcode issued by an admin that satisfies strong authentication requirements. Using passwordless authentication along with the time- and use-limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](/azure/active-directory/fundamentals/add-users-azure-active-directory)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](/azure/active-directory/authentication/howto-authentication-temporary-access-pass)<br>[Passwordless authentication](/azure/active-directory/authentication/concept-authentication-passwordless) |
| IA.L2-3.5.10 | **Secret Encryption at Rest**:<br>In addition to disk level encryption, when at rest, secrets stored in the directory are encrypted using the Distributed Key Manager (DKM). The encryption keys are stored in the Azure AD core store and in turn are encrypted with a scale unit key. The key is stored in a container that is protected with directory ACLs, for the highest-privileged users and specific services. The symmetric key is typically rotated every six months. Access to the environment is further protected with operational controls and physical security.<br><br>**Encryption in Transit**:<br>To assure data security, Directory Data in Azure AD is signed and encrypted while in transit between data centers within a scale unit. The data is encrypted and decrypted by the Azure AD core store tier, which resides inside secured server hosting areas of the associated Microsoft data centers.<br><br>Customer-facing web services are secured with the Transport Layer Security (TLS) protocol.<br>For more information, [download](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) *Data Protection Considerations - Data Security*. See page 15 for more details.<br>[Demystifying Password Hash Sync (microsoft.com)](https://www.microsoft.com/security/blog/2019/05/30/demystifying-password-hash-sync/)<br>[Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper) |
| IA.L2-3.5.11 | By default, Azure AD obscures all authenticator feedback. |
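As a rough sketch of the IA.L2-3.5.6 automation described above (assumptions: the AzureAD module is installed, and `$staleUserIds`/`$staleDeviceIds` were already built from the inactivity reports linked in the table):

```azurepowershell
# Sketch: disable (rather than delete) accounts and devices identified as
# inactive, so they can be reviewed before permanent removal.
Connect-AzureAD

foreach ($userId in $staleUserIds) {
    # Disabling keeps the object and its GUID intact for audit purposes
    Set-AzureADUser -ObjectId $userId -AccountEnabled $false
}

foreach ($deviceId in $staleDeviceIds) {
    Set-AzureADDevice -ObjectId $deviceId -AccountEnabled $false
}
```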
api-management Api Management Api Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-templates.md
# API templates in Azure API Management
-Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](http://dotliquidmarkup.org/) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
+Azure API Management provides you with the ability to customize the content of developer portal pages using a set of templates. Using [DotLiquid](https://github.com/dotliquid) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit.
The templates in this section allow you to customize the content of the API pages in the developer portal.
api-management Api Management Developer Portal Templates Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-developer-portal-templates-reference.md
# Developer portal templates
-Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](http://dotliquidmarkup.org/) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
+Azure API Management provides you with the ability to customize the content of developer portal pages using a set of templates. Using [DotLiquid](https://github.com/dotliquid) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit.
For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
For more information about working with templates, see [How to customize the API Management developer portal using templates](api-management-developer-portal-templates.md).
+ [Template reference](api-management-developer-portal-templates-reference.md)
+ [Data model reference](api-management-template-data-model-reference.md)
+ [Page controls](api-management-page-controls.md)
-+ [Template resources](api-management-template-resources.md)
++ [Template resources](api-management-template-resources.md)
api-management Api Management Developer Portal Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-developer-portal-templates.md
There are three fundamental ways to customize the developer portal in Azure API Management:
* [Update the styles used for page elements across the developer portal][customize-styles]
* [Modify the templates used for pages generated by the portal][portal-templates] (explained in this guide)
-Templates are used to customize the content of system-generated developer portal pages (for example, API docs, products, user authentication, etc.). Using [DotLiquid](http://dotliquidmarkup.org/) syntax, and a provided set of localized string resources, icons, and page controls, you have great flexibility to configure the content of the pages as you see fit.
+Templates are used to customize the content of system-generated developer portal pages (for example, API docs, products, and user authentication). Using [DotLiquid](https://github.com/dotliquid) syntax, and a provided set of localized string resources, icons, and page controls, you have great flexibility to configure the content of the pages as you see fit.
[!INCLUDE [api-management-portal-legacy.md](../../includes/api-management-portal-legacy.md)]
Some templates, like the **User Profile** templates, customize different parts of the developer portal.
The editor for each developer portal template has two sections displayed at the bottom of the page. The left-hand side displays the editing pane for the template, and the right-hand side displays the data model for the template.
-The template editing pane contains the markup that controls the appearance and behavior of the corresponding page in the developer portal. The markup in the template uses the [DotLiquid](http://dotliquidmarkup.org/) syntax. One popular editor for DotLiquid is [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers). Any changes made to the template during editing are displayed in real-time in the browser, but are not visible to your customers until you [save](#to-save-a-template) and [publish](#to-publish-a-template) the template.
+The template editing pane contains the markup that controls the appearance and behavior of the corresponding page in the developer portal. The markup in the template uses the [DotLiquid](https://github.com/dotliquid) syntax. One popular editor for DotLiquid is [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers). Any changes made to the template during editing are displayed in real time in the browser, but are not visible to your customers until you [save](#to-save-a-template) and [publish](#to-publish-a-template) the template.
![Template markup][api-management-template]
api-management Api Management Issue Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-issue-templates.md
Last updated 11/04/2019
# Issue templates in Azure API Management
-Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](http://dotliquidmarkup.org/) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
+Azure API Management provides you with the ability to customize the content of developer portal pages using a set of templates. Using [DotLiquid](https://github.com/dotliquid) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit.
The templates in this section allow you to customize the content of the Issue pages in the developer portal.
api-management Api Management User Profile Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-user-profile-templates.md
Last updated 11/04/2019
# User profile templates in Azure API Management
-Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](http://dotliquidmarkup.org/) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
+Azure API Management provides you with the ability to customize the content of developer portal pages using a set of templates. Using [DotLiquid](https://github.com/dotliquid) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit.
The templates in this section allow you to customize the content of the User profile pages in the developer portal.
app-service Configure Vnet Integration Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-vnet-integration-enable.md
The subnet must be delegated to Microsoft.Web/serverFarms. If the delegation isn
:::image type="content" source="./media/configure-vnet-integration-enable/vnetint-app.png" alt-text="Screenshot that shows selecting VNet integration.":::
-1. The dropdown list contains all the virtual networks in your subscription in the same region. Select an empty preexisting subnet or create a new subnet.
+1. The dropdown list contains all the virtual networks in your subscription in the same region. Select an empty pre-existing subnet or create a new subnet.
:::image type="content" source="./media/configure-vnet-integration-enable/vnetint-add-vnet.png" alt-text="Screenshot that shows selecting the virtual network.":::
az webapp vnet-integration add --resource-group <group-name> --name <app-name> -
## Configure with Azure PowerShell
+Prepare parameters.
+ ```azurepowershell
-# Parameters
$siteName = '<app-name>'
-$resourceGroupName = '<group-name>'
+$vNetResourceGroupName = '<group-name>'
+$webAppResourceGroupName = '<group-name>'
$vNetName = '<vnet-name>'
$integrationSubnetName = '<subnet-name>'
-$subscriptionId = '<subscription-guid>'
+$vNetSubscriptionId = '<subscription-guid>'
+```
+
+> [!NOTE]
+> If the virtual network is in a different subscription than the web app, use the *Set-AzContext -Subscription "xxxx-xxxx-xxxx-xxxx"* command to set the current subscription context to the subscription where the virtual network is deployed.
+
+Check if the subnet is delegated to Microsoft.Web/serverFarms.
+
+```azurepowershell
+$vnet = Get-AzVirtualNetwork -Name $vNetName -ResourceGroupName $vNetResourceGroupName
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name $integrationSubnetName -VirtualNetwork $vnet
+Get-AzDelegation -Subnet $subnet
+```
-# Configure VNet Integration
-$subnetResourceId = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Network/virtualNetworks/$vNetName/subnets/$integrationSubnetName"
-$webApp = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName $resourceGroupName -ResourceName $siteName
+If your subnet isn't delegated to Microsoft.Web/serverFarms, add the delegation by using the following commands.
+
+```azurepowershell
+$subnet = Add-AzDelegation -Name "myDelegation" -ServiceName "Microsoft.Web/serverFarms" -Subnet $subnet
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```
+
+Configure VNet Integration.
+
+> [!NOTE]
+> If the web app is in a different subscription than the virtual network, use the *Set-AzContext -Subscription "xxxx-xxxx-xxxx-xxxx"* command to set the current subscription context to the subscription where the web app is deployed.
+
+```azurepowershell
+$subnetResourceId = "/subscriptions/$vNetSubscriptionId/resourceGroups/$vNetResourceGroupName/providers/Microsoft.Network/virtualNetworks/$vNetName/subnets/$integrationSubnetName"
+$webApp = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName $webAppResourceGroupName -ResourceName $siteName
$webApp.Properties.virtualNetworkSubnetId = $subnetResourceId
+$webApp.Properties.vnetRouteAllEnabled = 'true'
$webApp | Set-AzResource -Force
```
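After the update, you can read the property back to confirm the integration took effect; a quick sketch reusing the variables defined above:

```azurepowershell
# Sketch: verify the web app now points at the integration subnet
$webApp = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName $webAppResourceGroupName -ResourceName $siteName
$webApp.Properties.virtualNetworkSubnetId
```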
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
At the root of the project, there's a shared [`host.json`](functions-host-json.md) file that can be used to configure the function app.
Certain bindings require the presence of an `extensions.csproj` file. Binding extensions, required in [version 2.x and later versions](functions-versions.md) of the Functions runtime, are defined in the `extensions.csproj` file, with the actual library files in the `bin` folder. When developing locally, you must [register binding extensions](functions-bindings-register.md#extension-bundles). When developing functions in the Azure portal, this registration is done for you.
-In PowerShell Function Apps, you may optionally have a `profile.ps1` which runs when a function app starts to run (otherwise know as a *[cold start](#cold-start)*. For more information, see [PowerShell profile](#powershell-profile).
+In PowerShell Function Apps, you may optionally have a `profile.ps1` which runs when a function app starts to run (otherwise known as a *[cold start](#cold-start)*). For more information, see [PowerShell profile](#powershell-profile).
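For reference, a minimal `profile.ps1` sketch in the spirit of the default one generated for new PowerShell function apps (the managed-identity check is an assumption about your app's configuration):

```powershell
# profile.ps1 runs once per cold start, before any function executes.
# If a managed identity is enabled (MSI_SECRET is set), sign in with it.
if ($env:MSI_SECRET) {
    Disable-AzContextAutosave -Scope Process | Out-Null
    Connect-AzAccount -Identity
}
```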
## Defining a PowerShell script as a function
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
This error indicates that the Linux diagnostic extension (LAD) is installed side
sudo systemctl start cron
```
- **RHEL/CeonOS**
+ **RHEL/CentOS**
```
# To install the service binaries
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
<sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br>
<sup>2</sup> Requires Python 2 to be installed on the machine and aliased to the `python` command.<br>
-<sup>3</sup> Also supported on Arm64-based machines.
+<sup>3</sup> Also supported on Arm64-based machines.<br>
<sup>4</sup> Requires at least 4 GB of disk space allocated (not provided by default).

> [!NOTE]
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
To complete this procedure, you need:
- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
- [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
-- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server with IIS logs.
-
- - The log file must be stored on a local drive of the machine on which Azure Monitor Agent is running.
+- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that runs IIS.
+ - An IIS log file in W3C format must be stored on the local drive of the machine on which Azure Monitor Agent is running.
- Each entry in the log file must be delineated with an end of line.
- - The log file must not allow circular logging, log rotation where the file is overwritten with new entries or renaming where a file is moved and a new file with the same name is opened.
+ - The log file must not allow circular logging, log rotation where the file is overwritten with new entries, or renaming where a file is moved and a new file with the same name is opened.
+
## Create data collection rule to collect IIS logs
The [data collection rule](../essentials/data-collection-rule-overview.md) defines:
Heartbeat
### Verify that IIS logs are being created
Look at the timestamps of the log files and open the latest to see that the latest timestamps are present in the log files. The default location for IIS log files is C:\\inetpub\\LogFiles\\W3SVC1.
### Verify that you specified the correct log location in the data collection rule
The data collection rule will have a section similar to the following. The `logDirectories` element specifies the path to the log file to collect from the agent computer. Check the agent computer to verify that this is correct.
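For the first check, a quick sketch that finds the most recently written log file and shows its newest lines (the path is the default mentioned above and may differ on your server):

```powershell
# Sketch: locate the latest IIS log and print its last few entries
Get-ChildItem 'C:\inetpub\LogFiles\W3SVC1' -Filter '*.log' |
    Sort-Object LastWriteTime |
    Select-Object -Last 1 |
    ForEach-Object { Get-Content $_.FullName -Tail 5 }
```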
Open IIS Manager and verify that the logs are being written in W3C format.
:::image type="content" source="media/data-collection-text-log/iis-log-format-setting.png" lightbox="media/data-collection-text-log/iis-log-format-setting.png" alt-text="Screenshot of IIS logging configuration dialog box on agent machine.":::
-Open the IIS log on the agent machine to verify logs are in W3C format.
+Open the IIS log file on the agent machine to verify that logs are in W3C format.
### Share logs with Microsoft
If everything is configured properly, but you're still not collecting log data, use the following procedure to collect diagnostics logs for Azure Monitor agent to share with the Azure Monitor group.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Title: Monitor data from virtual machines with Azure Monitor Agent
+ Title: Collect events and performance counters from virtual machines with Azure Monitor Agent
description: Describes how to collect events and performance data from virtual machines by using Azure Monitor Agent.
Last updated 12/11/2022
-# Collect data from virtual machines with Azure Monitor Agent
+# Collect events and performance counters from virtual machines with Azure Monitor Agent
This article describes how to collect events and performance counters from virtual machines by using [Azure Monitor Agent](azure-monitor-agent-overview.md).
This article describes how to collect events and performance counters from virtual machines by using [Azure Monitor Agent](azure-monitor-agent-overview.md).
To complete this procedure, you need:
- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
-- [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
-- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server with the logs you want to collect.
-
- - The log file must be stored on a local drive of the machine on which Azure Monitor Agent is running.
- - Each entry in the log file must be delineated with an end of line.
- - The log file must not allow circular logging, log rotation where the file is overwritten with new entries or renaming where a file is moved and a new file with the same name is opened.
## Create a data collection rule
For sample templates, see [Azure Resource Manager template samples for data coll
## Filter events using XPath queries
-Since you're charged for any data you collect in a Log Analytics workspace, you should limit data collection from your agent to only the event data that you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+You're charged for any data you collect in a Log Analytics workspace. Therefore, you should only collect the event data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
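When you move beyond the basic configuration, you can test a candidate XPath expression locally before putting it in a data collection rule; a quick sketch (the event ID is only an example):

```powershell
# Sketch: preview which events an XPath query would match on this machine
$XPath = '*[System[EventID=1035]]'
Get-WinEvent -LogName 'Application' -FilterXPath $XPath -MaxEvents 5
```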
[!INCLUDE [azure-monitor-cost-optimization](../../../includes/azure-monitor-cost-optimization.md)]
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).
- [Custom table](../logs/create-custom-table.md) to send your logs to.
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
-- A machine that write logs to a text file.
+- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file.
+ - The log file must be stored on the local drive of the machine on which Azure Monitor Agent is running.
+ - Each entry in the log file must be delineated with an end of line (see the sketch after this list).
+ - The log file must not allow circular logging, log rotation where the file is overwritten with new entries, or renaming where a file is moved and a new file with the same name is opened.
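As promised in the list above, a minimal sketch of appending a well-formed, newline-delimited entry (the path and message are hypothetical):

```powershell
# Sketch: append one newline-delimited, timestamped entry to the watched file
Add-Content -Path 'C:\logs\mytestapp.log' -Value "$(Get-Date -Format o) INFO test entry"
```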
## Create data collection rule to collect text logs
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
When a log alert rule is created, the query is validated for correct syntax. But sometimes the query provided in the log alert rule can start to fail. Some common reasons are:
- Rules were created via the API, and validation was skipped by the user.
- The query [runs on multiple resources](../logs/cross-workspace-query.md), and one or more of the resources was deleted or moved.
-- The [query fails](https://dev.loganalytics.io/documentation/Using-the-API/Errors) because:
+- The [query fails](/azure/azure-monitor/logs/api/errors) because:
  - The logging solution wasn't [deployed to the workspace](../insights/solutions.md#install-a-monitoring-solution), so tables aren't created.
  - Data stopped flowing to a table in the query for more than 30 days.
  - [Custom logs tables](../agents/data-sources-custom-logs.md) aren't yet created, because the data flow hasn't started.
Try the following steps to resolve the problem:
- Learn about [log alerts in Azure](./alerts-unified-log.md).
- Learn more about [configuring log alerts](../logs/log-query-overview.md).
-- Learn more about [log queries](../logs/log-query-overview.md).
+- Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
After you've configured data export rules in a Log Analytics workspace, new data for the tables in those rules is exported from the Azure Monitor pipeline to your destinations as it arrives.
[![Diagram that shows a data export flow.](media/logs-data-export/data-export-overview.png "Diagram that shows a data export flow.")](media/logs-data-export/data-export-overview.png#lightbox)
-Data is exported without a filter. For example, when you configure a data export rule for a *SecurityEvent* table, all data sent to the *SecurityEvent* table is exported starting from the configuration time.
+Data is exported without a filter. For example, when you configure a data export rule for a *SecurityEvent* table, all data sent to the *SecurityEvent* table is exported starting from the configuration time. Alternatively, you can filter or modify exported data by configuring [transformations](./../essentials/data-collection-transformations.md) in your workspace, which apply to incoming data before it's sent to your Log Analytics workspaces and to export destinations.
## Other export options
Log Analytics workspace data export continuously exports data that's sent to your Log Analytics workspace. There are other options to export data for particular scenarios:
Log Analytics workspace data export continuously exports data that's sent to your Log Analytics workspace.
## Limitations
-- All tables will be supported in export, but tables are currently limited to those specified in the [supported tables](#supported-tables) section.
-- Legacy custom logs by using the [HTTP Data Collector API](./data-collector-api.md) won't be supported in export. Data for [data collection rule-based custom logs](./logs-ingestion-api-overview.md) can be exported.
-- You can define up to 10 enabled rules in your workspace. More rules are allowed when disabled.
+- Custom logs created via the [HTTP Data Collector API](./data-collector-api.md) or the 'dataSources' API aren't supported in export. Custom logs created using a [data collection rule](./logs-ingestion-api-overview.md) can be exported.
+- We're gradually adding support for more tables in data export, but export is currently limited to the tables specified in the [supported tables](#supported-tables) section.
+- You can define up to 10 enabled rules in your workspace, and each rule can include multiple tables. You can create more rules in the workspace in a disabled state.
- Destinations must be in the same region as the Log Analytics workspace.
- The storage account must be unique across rules in the workspace.
- Table names can be 60 characters long when you're exporting to a storage account. They can be 47 characters when you're exporting to event hubs. Tables with longer names won't be exported.
backup Disk Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-overview.md
Azure Backup uses [incremental snapshots](../virtual-machines/disks-incremental-
Incremental snapshots are always stored on standard storage, irrespective of the storage type of parent-managed disks, and are charged based on the pricing of standard storage. For example, incremental snapshots of a Premium SSD-Managed Disk are stored on standard storage. By default, they are stored on ZRS in regions that support ZRS. Otherwise, they are stored on locally redundant storage (LRS). The per GiB pricing of both the options, LRS and ZRS, is the same.
-The snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur Snapshot Storage charges. ForTo more details about the snapshot pricing, see [Managed Disk Pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Because the snapshots aren't copied to the Backup Vault, Azure Backup doesn't charge a Protected Instance fee and Backup Storage cost doesn't apply.
+The snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur Snapshot Storage charges. For more details about the snapshot pricing, see [Managed Disk Pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Because the snapshots aren't copied to the Backup Vault, Azure Backup doesn't charge a Protected Instance fee and Backup Storage cost doesn't apply.
The number of recovery points is determined by the Backup policy used to configure backups of the disk backup instances. Older block blobs are deleted according to the garbage collection process as the corresponding older recovery points are pruned.
cognitive-services How To Get Speech Session Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-get-speech-session-id.md
The example below is the Response body of a `Create Transcription` request. GUID
> Use the same technique to determine different IDs required for debugging issues related to [Custom Speech](custom-speech-overview.md), like uploading a dataset using a [Datasets_Create](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) request.

> [!NOTE]
-> You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [GetTranscriptions](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/GetTranscriptions) request.
+> You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [Transcriptions_Get](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) request.
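As a hedged sketch of that request (assumptions: an `eastus` resource and `$speechKey` holding its key; adjust the region to match your resource):

```powershell
# Sketch: list transcriptions, including their IDs, via the v3.1 REST API
$headers = @{ 'Ocp-Apim-Subscription-Key' = $speechKey }
Invoke-RestMethod -Uri 'https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions' -Headers $headers
```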
cognitive-services Migrate V3 0 To V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v3-0-to-v3-1.md
In the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/s
The `filter` property is added to the [Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List), [Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles), and [Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListTranscriptions) operations. The `filter` expression can be used to select a subset of the available resources. You can filter by `displayName`, `description`, `createdDateTime`, `lastActionDateTime`, `status`, and `locale`. For example: `filter=createdDateTime gt 2022-02-01T11:00:00Z`
+If you use a webhook to receive notifications about transcription status, note that webhooks created via the V3.0 API can't receive notifications for V3.1 transcription requests. You need to create a new webhook endpoint via the V3.1 API to receive notifications for V3.1 transcription requests.
+
## Custom Speech
### Datasets
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
Follow these steps to assess your pronunciation of the reference text:
1. Go to **Pronunciation Assessment** in the [Speech Studio](https://aka.ms/speechstudio/pronunciationassessment).
+ :::image type="content" source="media/pronunciation-assessment/pa.png" alt-text="Screenshot of how to go to Pronunciation Assessment on Speech Studio.":::
1. Choose a supported [language](language-support.md?tabs=pronunciation-assessment) for which you want to evaluate pronunciation.
+ :::image type="content" source="media/pronunciation-assessment/pa-language.png" alt-text="Screenshot of choosing a supported language that you want to evaluate the pronunciation.":::
1. Choose from the provisioned text samples, or under the **Enter your own script** label, enter your own reference text. When reading the text, you should be close to the microphone to make sure the recorded voice isn't too low.
You can also check the pronunciation assessment result in JSON. The word-level,
### Overall scores
-Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score**. The **Pronunciation score** is overall score indicating the pronunciation quality of the given speech. This overall score is aggregated from **Accuracy score**, **Fluency score**, and **Completeness score** with weight.
+Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score**. The **Accuracy score** and the **Fluency score** will vary over time throughout the recording process. The **Completeness score** is only calculated at the end of the evaluation. The **Pronunciation score** is an overall score indicating the pronunciation quality of the given speech. During recording, the **Pronunciation score** is aggregated from the **Accuracy score** and **Fluency score** with weight. Once recording is complete, this overall score is aggregated from the **Accuracy score**, **Fluency score**, and **Completeness score** with weight.
+
+**During recording**
++
+**Completing recording**
### Scores within words
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/quickstart.md
zone_pivot_groups: usage-custom-language-features
-# Quickstart: Orchestration workflow (preview)
+# Quickstart: Orchestration workflow
Use this article to get started with Orchestration workflow projects using Language Studio and the REST API. Follow these steps to try out an example.
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Calling and screen-sharing services are charged on a per-minute, per-participant basis.
Each participant of the call will count in billing for each minute they're connected to the call. This holds true regardless of whether the user is video calling, voice calling, or screen-sharing.
+Calls are charged with millisecond precision. For example, if a call lasts 30 seconds, the charge will be $0.02.
### Pricing example: Group audio/video call using JS and iOS SDKs
Alice made a group call with her colleagues, Bob and Charlie. Alice and Bob used the JS SDKs; Charlie used the iOS SDK.
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
When you observe that the activity is running much longer than your normal runs
> [!TIP] > Actually, both [Binary format in Azure Data Factory and Synapse Analytics](format-binary.md) and [Delimited text format in Azure Data Factory and Azure Synapse Analytics](format-delimited-text.md) clearly state that the "deflate64" format is not supported in Azure Data Factory.
+### Execute Pipeline passes array parameter as string to the child pipeline
+
+**Error message:** `Operation on target ForEach1 failed: The execution of template action 'MainForEach1' failed: the result of the evaluation of 'foreach' expression '@pipeline().parameters.<parameterName>' is of type 'String'. The result must be a valid array.`
+
+**Cause:** Even if you create the parameter of type array in the Execute Pipeline activity, as shown in the image below, the pipeline will fail.
++
+This is because the payload is passed from the parent pipeline to the child pipeline as a string. We can see this when we check the input passed to the child pipeline.
++
+**Recommendation:** To solve the issue, use the `createArray` function, as shown in the image below.
++
+Then the pipeline will succeed, and we can see in the input box that the parameter passed is an array.
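For reference, a sketch of how the Execute Pipeline activity JSON might look with the expression in place (the activity, pipeline, and parameter names are hypothetical):

```json
{
  "name": "ExecuteChildPipeline",
  "type": "ExecutePipeline",
  "typeProperties": {
    "pipeline": { "referenceName": "ChildPipeline", "type": "PipelineReference" },
    "waitOnCompletion": true,
    "parameters": {
      "myArrayParam": "@createArray('item1', 'item2', 'item3')"
    }
  }
}
```

Because `@createArray(...)` is evaluated as an expression, the child pipeline receives a real array rather than the string it would get from a typed-in literal like `["item1","item2"]`.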
++
## Next steps
For more troubleshooting help, try these resources:
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
Last updated 09/11/2022
# OT sensor cloud connection methods
-This article describes the architectures and methods supported for connecting your Microsoft Defender for IoT OT sensors to the cloud. An integral part of the Microsoft Defender for IoT service is the managed cloud service in Azure that acts as the central security monitoring portal for aggregating security information collected from network monitoring sensors and security agents. In order to ensure the security of IoT/OT at a global scale, the service supports millions of concurrent telemetry sources securely and reliably.
--
+This article describes the architectures and methods supported for connecting your Microsoft Defender for IoT OT sensors to the Azure portal in the cloud.
+OT network sensors connect to Azure to provide data about detected devices, alerts, and sensor health, to access threat intelligence packages, and more. For example, connected Azure services include IoT Hub, Blob Storage, Event Hubs, Aria, and the Microsoft Download Center.
The cloud connection methods described in this article are supported only for OT sensor version 22.x and later. All methods provide:
-- **Simple deployment**, requiring no extra installations in your private Azure environment, such as for an IoT Hub
-
-- **Improved security**, without needing to configure or lock down any resource security settings in the Azure VNET
+- **Improved security**, without additional security configurations. Connect to Azure using [specific and secure firewall rules](how-to-set-up-your-network.md#sensor-access-to-azure-portal), without the need for any wildcards.
- **Encryption**, Transport Layer Security (TLS1.2/AES-256) provides encrypted communication between the sensor and Azure resources.
- **Scalability** for new features supported only in the cloud
-- **Flexible connectivity** using any of the connection methods described in this article
-
-For more information, see [Choose a sensor connection method](connect-sensors.md#choose-a-sensor-connection-method).
+For more information, see [Choose a sensor connection method](connect-sensors.md#choose-a-sensor-connection-method) and [Download endpoint details](how-to-manage-sensors-on-the-cloud.md#endpoint).
> [!IMPORTANT]
-> To ensure that your network is ready, we recommend that you first run the migration in a lab or testing environment so that you can safely validate your Azure service configurations.
+> To ensure that your network is ready, we recommend that you first run your connections in a lab or testing environment so that you can safely validate your Azure service configurations.
>

## Proxy connections with an Azure proxy
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
+
+ Title: CLI command reference from OT network sensors - Microsoft Defender for IoT
+description: Learn about the CLI commands available from Microsoft Defender for IoT OT network sensors.
Last updated: 12/29/2022
+# CLI command reference from OT network sensors
+
+This article lists the CLI commands available from Defender for IoT OT network sensors.
+
+## Prerequisites
+
+Before you can run any of the following CLI commands, you'll need access to the CLI on your OT network sensor as a privileged user.
+
+Each activity listed in this article is accessible by a different set of privileged users, including the *cyberx*, *support*, or *cyberx_host* users. Command syntax is listed only for the users supported for a specific activity.
+
+>[!IMPORTANT]
+> We recommend that customers using the Defender for IoT CLI use the *support* user whenever possible.
+
+For more information, see [Access the CLI](../references-work-with-defender-for-iot-cli-commands.md#access-the-cli) and [Privileged user access for OT monitoring](../references-work-with-defender-for-iot-cli-commands.md#privileged-user-access-for-ot-monitoring).
+
+## Appliance maintenance
+
+### Check OT monitoring services health
+
+Use the following commands to verify that the Defender for IoT application on the OT sensor is working correctly, including the web console and traffic analysis processes.
+
+Health checks are also available from the OT sensor console. For more information, see [Troubleshoot the sensor and on-premises management console](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
+
+|User |Command |Full command syntax |
+| - | - | - |
+|**support** | `system sanity` | No attributes |
+|**cyberx** | `cyberx-xsense-sanity` | No attributes |
++
+The following example shows the command syntax and response for the *support* user:
+
+```bash
+root@xsense: system sanity
+[+] C-Cabra Engine | Running for 17:26:30.191945
+[+] Cache Layer | Running for 17:26:32.352745
+[+] Core API | Running for 17:26:28
+[+] Health Monitor | Running for 17:26:28
+[+] Horizon Agent 1 | Running for 17:26:27
+[+] Horizon Parser | Running for 17:26:30.183145
+[+] Network Processor | Running for 17:26:27
+[+] Persistence Layer | Running for 17:26:33.577045
+[+] Profiling Service | Running for 17:26:34.105745
+[+] Traffic Monitor | Running for 17:26:30.345145
+[+] Upload Manager Service | Running for 17:26:31.514645
+[+] Watch Dog | Running for 17:26:30
+[+] Web Apps | Running for 17:26:30
+
+System is UP! (medium)
+```
++
+### Restart and shutdown
+#### Restart an appliance
+
+Use the following commands to restart the OT sensor appliance.
+
+|User |Command |Full command syntax |
+| - | - | - |
+|**support** | `system reboot` | No attributes |
+|**cyberx** | `sudo reboot` | No attributes |
+|**cyberx_host** | `sudo reboot` | No attributes |
++
+For example, for the *support* user:
+
+```bash
+root@xsense: system reboot
+```
+
+#### Shut down an appliance
+
+Use the following commands to shut down the OT sensor appliance.
+
+|User |Command |Full command syntax |
+| - | - | - |
+|**support** | `system shutdown` | No attributes |
+|**cyberx** | `sudo shutdown now` | No attributes |
+|**cyberx_host** | `sudo shutdown now` | No attributes |
++
+For example, for the *support* user:
+
+```bash
+root@xsense: system shutdown
+```
+
+### Software versions
+#### Show installed software version
+
+Use the following commands to list the Defender for IoT software version installed on your OT sensor.
+
+|User |Command |Full command syntax |
+| - | - | - |
+|**support** | `system version` | No attributes |
+|**cyberx** | `cyberx-xsense-version` | No attributes |
++
+For example, for the *support* user:
+
+```bash
+root@xsense: system version
+Version: 22.2.5.9-r-2121448
+```
+
+#### Update sensor software from CLI
+
+For more information, see [Update your sensors](update-ot-software.md#update-your-sensors).
+
+### Date, time, and NTP
+#### Show current system date/time
+
+Use the following commands to show the current system date and time on your OT network sensor, in GMT format.
+
+|User |Command |Full command syntax |
+| - | - | - |
+|**support** | `date` | No attributes |
+|**cyberx** | `date` | No attributes |
+|**cyberx_host** | `date` | No attributes |
++
+For example, for the *support* user:
+
+```bash
+root@xsense: date
+Thu Sep 29 18:38:23 UTC 2022
+root@xsense:
+```
+
+#### Turn on NTP time sync
+
+Use the following commands to turn on synchronization for the appliance time with an NTP server.
+
+To use these commands, make sure that:
+
+- The NTP server can be reached from the appliance management port
+- You use the same NTP server to synchronize all sensor appliances and the on-premises management console
+
+|User |Command |Full command syntax |
+| - | - | - |
+|**support** | `ntp enable <IP address>` | No attributes |
+|**cyberx** | `cyberx-xsense-ntp-enable <IP address>` | No attributes |
+
+In these commands, `<IP address>` is the IP address of a valid IPv4 NTP server using port 123.
+
+For example, for the *support* user:
+
+```bash
+root@xsense: ntp enable 129.6.15.28
+root@xsense:
+```
+
+#### Turn off NTP time sync
+
+Use the following commands to turn off the synchronization for the appliance time with an NTP server.
+
+|User |Command |Full command syntax |
+| - | - | - |
+|**support** | `ntp disable <IP address>` | No attributes |
+|**cyberx** | `cyberx-xsense-ntp-disable <IP address>` | No attributes |
+
+In these commands, `<IP address>` is the IP address of a valid IPv4 NTP server using port 123.
+
+For example, for the *support* user:
+
+```bash
+root@xsense: ntp disable 129.6.15.28
+root@xsense:
+```
+
+## Backup and restore
+
+The following sections describe the CLI commands supported for backing up and restoring a system snapshot of your OT network sensor.
+
+Backup files include a full snapshot of the sensor state, including configuration settings, baseline values, inventory data, and logs.
+
+>[!CAUTION]
+> Do not interrupt a system backup or restore operation as this may cause the system to become unusable.
+
+### List current backup files
+
+Use the following commands to list the backup files currently stored on your OT network sensor.
+
+|User |Command |Full command syntax |
+||||
+|**support** | `system backup-list` | No attributes |
+|**cyberx** | `cyberx-xsense-system-backup-list` | No attributes |
++
+For example, for the *support* user:
+
+```bash
+root@xsense: system backup-list
+backup files:
+ e2e-xsense-1664469968212-backup-version-22.3.0.318-r-71e6295-2022-09-29_18:30:20.tar
+ e2e-xsense-1664469968212-backup-version-22.3.0.318-r-71e6295-2022-09-29_18:29:55.tar
+root@xsense:
+```
++
+### Start an immediate, unscheduled backup
+
+Use the following commands to start an immediate, unscheduled backup of the data on your OT sensor. For more information, see [Set up backup and restore files](../how-to-manage-individual-sensors.md#set-up-backup-and-restore-files).
+
+> [!CAUTION]
+> Make sure not to stop or power off the appliance while backing up data.
+
+|User |Command |Full command syntax |
+||||
+|**support** | `system backup` | No attributes |
+|**cyberx** | `cyberx-xsense-system-backup` | No attributes |
++
+For example, for the *support* user:
+
+```bash
+root@xsense: system backup
+Backing up DATA_KEY
+...
+...
+Finished backup. Backup is stored at /var/cyberx/backups/e2e-xsense-1664469968212-backup-version-22.2.6.318-r-71e6295-2022-09-29_18:29:55.tar
+Setting backup status 'SUCCESS' in redis
+root@xsense:
+```
+
+### Restore data from the most recent backup
+
+Use the following commands to restore data on your OT network sensor using the most recent backup file. When prompted, confirm that you want to proceed.
+
+> [!CAUTION]
+> Make sure not to stop or power off the appliance while restoring data.
+
+|User |Command |Full command syntax |
+||||
+|**support** | `system restore` | No attributes |
+|**cyberx** | `cyberx-xsense-system-restore` | No attributes |
++
+For example, for the *support* user:
+
+```bash
+root@xsense: system restore
+Waiting for redis to start...
+Redis is up
+Use backup file as "/var/cyberx/backups/e2e-xsense-1664469968212-backup-version-22.2.6.318-r-71e6295-2022-09-29_18:30:20.tar" ? [Y/n]: y
+WARNING - the following procedure will restore data. do not stop or power off the server machine while this procedure is running. Are you sure you wish to proceed? [Y/n]: y
+...
+...
+watchdog started
+starting components
+root@xsense:
+```
++
+### Display backup disk space allocation
+
+The following command lists the current backup disk space allocation, including the following details:
+
+- Backup folder location
+- Backup folder size
+- Backup folder limitations
+- Last backup operation time
+- Free disk space available for backups
+
+|User |Command |Full command syntax |
+||||
+|**cyberx** | `cyberx-backup-memory-check` | No attributes |
+
+For example, for the *cyberx* user:
+
+```bash
+root@xsense:/# cyberx-backup-memory-check
+2.1M /var/cyberx/backups
+Backup limit is: 20Gb
+root@xsense:/#
+```
++
+## TLS/SSL certificates
++
+### Import TLS/SSL certificates to your OT sensor
+
+Use the following command to import TLS/SSL certificates to the sensor from the CLI.
+
+To use this command:
+
+- Verify that the certificate file you want to import is readable on the appliance. Upload certificate files to the appliance using tools such as WinSCP or Wget.
+- Confirm with your IT office that the appliance domain as it appears in the certificate is correct for your DNS server and the corresponding IP address.
+
+For more information, see [Certificates for appliance encryption and authentication (OT appliances)](how-to-deploy-certificates.md).
+
+|User |Command |Full command syntax |
+||||
+| **cyberx** | `cyberx-xsense-certificate-import` | `cyberx-xsense-certificate-import [-h] [--crt <PATH>] [--key <FILE NAME>] [--chain <PATH>] [--pass <PASSPHRASE>] [--passphrase-set <VALUE>]` |
+
+In this command:
+
+- `-h`: Shows the full command help syntax
+- `--crt`: The path to the certificate file you want to upload, with a `.crt` extension
+- `--key`: The path to the `*.key` file you want to use for the certificate. Key length must be a minimum of 2,048 bits
+- `--chain`: The path to a certificate chain file. Optional.
+- `--pass`: A passphrase used to encrypt the certificate. Optional.
+- `--passphrase-set`: Unused and set to *False* by default. Set to *True* to use the passphrase supplied with the previous certificate. Optional.
+
+For example, for the *cyberx* user:
+
+```bash
+root@xsense:/# cyberx-xsense-certificate-import
+```
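+
+You can also pass the documented attributes directly. The following is a minimal sketch, assuming the certificate files were already uploaded to the appliance; the file paths are hypothetical:
+
+```bash
+root@xsense:/# cyberx-xsense-certificate-import --crt /tmp/certs/sensor.crt --key /tmp/certs/sensor.key --chain /tmp/certs/chain.pem
+```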
+
+### Restore the default self-signed certificate
+
+Use the following command to restore the default self-signed certificates on your sensor appliance. We recommend using this command for troubleshooting only, and not in production environments.
+
+|User |Command |Full command syntax |
+||||
+|**cyberx** | `cyberx-xsense-create-self-signed-certificate` | No attributes |
+
+For example, for the *cyberx* user:
+
+```bash
+root@xsense:/# cyberx-xsense-create-self-signed-certificate
+Creating a self-signed certificate for Apache2...
+random directory name for the new certificate is 348
+Generating a RSA private key
+................+++++
+....................................+++++
+writing new private key to '/var/cyberx/keys/certificates/348/apache.key'
+--
+executing a query to add the certificate to db
+finished
+root@xsense:/#
+```
++
+## Local user management
+
+### Change local user passwords
+
+Use the following commands to change passwords for local users on your OT sensor.
+
+When you change the password for the *cyberx*, *support*, or *cyberx_host* user, the password is changed for both SSH and web access.
++
+|User |Command |Full command syntax |
+||||
+|**cyberx** | `cyberx-users-password-reset` | `cyberx-users-password-reset -u <user> -p <password>` |
+|**cyberx_host** | `passwd` | No attributes |
++
+The following example shows the *cyberx* user resetting the *support* user's password to `jI8iD9kE6hB8qN0h`:
+
+```bash
+root@xsense:/# cyberx-users-password-reset -u support -p jI8iD9kE6hB8qN0h
+resetting the password of OS user "support"
+Sending USER_PASSWORD request to OS manager
+Open UDS connection with /var/cyberx/system/os_manager.sock
+Received data: b'ack'
+resetting the password of UI user "support"
+root@xsense:/#
+```
+
+The following example shows the *cyberx_host* user changing the *cyberx_host* user's password.
+
+```bash
+cyberx_host@xsense:/# passwd
+Changing password for user cyberx_host.
+(current) UNIX password:
+New password:
+Retype new password:
+passwd: all authentication tokens updated successfully.
+cyberx_host@xsense:/#
+```
++
+### Control user session timeouts
+
+Define the time after which users are automatically signed out of the OT sensor. Define this value in a properties file saved on the sensor.
+
+For more information, see [Control user session timeouts](manage-users-sensor.md#control-user-session-timeouts).
+
+### Define maximum number of failed sign-ins
+
+Define the maximum number of failed sign-ins allowed before an OT sensor prevents the user from signing in again from the same IP address. Define this value in a properties file saved on the sensor.
+
+For more information, see [Define maximum number of failed sign-ins](manage-users-sensor.md#define-maximum-number-of-failed-sign-ins).
+
+## Network configuration
+
+### Network settings
+#### Change networking configuration or reassign network interface roles
+
+Use the following command to rerun the OT monitoring software configuration wizard, which helps you define or reconfigure the following OT sensor settings:
+
+- Enable/disable SPAN monitoring interfaces
+- Configure network settings for the management interface (IP, subnet, default gateway, DNS)
+- Set up [ERSPAN monitoring](traffic-mirroring/configure-mirror-erspan.md)
+- Assign a backup directory
+
+|User |Command |Full command syntax |
+||||
+|**cyberx_host** | `sudo dpkg-reconfigure iot-sensor` | No attributes |
+
+For example, for the *cyberx_host* user:
+
+```bash
+cyberx_host@xsense:/# sudo dpkg-reconfigure iot-sensor
+```
+
+The configuration wizard starts automatically after you run this command.
+For more information, see [Install OT monitoring software](../how-to-install-software.md#install-ot-monitoring-software).
++
+#### Validate and show network interface configuration
+
+Use the following commands to validate and show the current network interface configuration on the OT sensor.
+
+|User |Command |Full command syntax |
+||||
+|**support** | `network validate` | No attributes |
+
+For example, for the *support* user:
+
+```bash
+root@xsense: network validate
+Success! (Appliance configuration matches the network settings)
+Current Network Settings:
+interface: eth0
+ip: 172.20.248.69
+subnet: 255.255.192.0
+default gateway: 10.1.0.1
+dns: 168.63.129.16
+monitor interfaces mapping: local_listener=adiot0
+root@xsense:
+```
+
+### Network connectivity
+#### Check network connectivity from the OT sensor
+
+Use the following commands to send a ping message from the OT sensor.
+
+|User |Command |Full command syntax |
+||||
+|**support** | `ping <IP address>` | No attributes|
+|**cyberx** | `ping <IP address>` | No attributes |
+
+In these commands, `<IP address>` is the IP address of a valid IPv4 network host accessible from the management port on your OT sensor.
+
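+For example, for the *support* user, pinging the default gateway address from the earlier `network validate` output (the output shown is representative):
+
+```bash
+root@xsense: ping 10.1.0.1
+PING 10.1.0.1 (10.1.0.1) 56(84) bytes of data.
+64 bytes from 10.1.0.1: icmp_seq=1 ttl=64 time=0.580 ms
+64 bytes from 10.1.0.1: icmp_seq=2 ttl=64 time=0.542 ms
+```
+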
+#### Check network interface current load
+
+Use the following command to display network traffic and bandwidth using a six-second test.
+
+|User |Command |Full command syntax |
+||||
+|**cyberx** | `cyberx-nload` | No attributes |
+
+```bash
+root@xsense:/# cyberx-nload
+eth0:
+ Received: 66.95 KBit/s Sent: 87.94 KBit/s
+ Received: 58.95 KBit/s Sent: 107.25 KBit/s
+ Received: 43.67 KBit/s Sent: 107.86 KBit/s
+ Received: 87.00 KBit/s Sent: 191.47 KBit/s
+ Received: 79.71 KBit/s Sent: 85.45 KBit/s
+ Received: 54.68 KBit/s Sent: 48.77 KBit/s
+local_listener (virtual adiot0):
+ Received: 0.0 Bit Sent: 0.0 Bit
+ Received: 0.0 Bit Sent: 0.0 Bit
+ Received: 0.0 Bit Sent: 0.0 Bit
+ Received: 0.0 Bit Sent: 0.0 Bit
+ Received: 0.0 Bit Sent: 0.0 Bit
+ Received: 0.0 Bit Sent: 0.0 Bit
+root@xsense:/#
+```
+
+#### Check internet connection
+
+Use the following command to check the internet connectivity on your appliance.
+
+|User |Command |Full command syntax |
+||||
+|**cyberx** | `cyberx-xsense-internet-connectivity` | No attributes |
+
+```bash
+root@xsense:/# cyberx-xsense-internet-connectivity
+Checking internet connectivity...
+The machine was successfully able to connect the internet.
+root@xsense:/#
+```
++
+### Set bandwidth limit for the management network interface
+
+Use the following command to set the outbound bandwidth limit for uploads from the OT sensor's management interface to the Azure portal or an on-premises management console.
+
+Setting outbound bandwidth limits can be helpful in maintaining networking quality of service (QoS). This command is intended for bandwidth-constrained environments, such as over a satellite or serial link.
+
+|User |Command |Full command syntax |
+||||
+|**cyberx** | `cyberx-xsense-limit-interface` | `cyberx-xsense-limit-interface [-h] --interface <INTERFACE VALUE> [--limit <LIMIT VALUE>] [--clear]` |
+
+In this command:
+
+- `-h` or `--help`: Shows the command help syntax
+
+- `--interface <INTERFACE VALUE>`: The interface you want to limit, such as `eth0`
+
+- `--limit <LIMIT VALUE>`: The limit you want to set, such as `30kbit`. Use one of the following units:
+
+ - `kbps`: Kilobytes per second
+ - `mbps`: Megabytes per second
+ - `kbit`: Kilobits per second
+ - `mbit`: Megabits per second
+ - `bps` or a bare number: Bytes per second
+
+- `--clear`: Clears all settings for the specified interface
++
+For example, for the *cyberx* user:
+
+```bash
+root@xsense:/# cyberx-xsense-limit-interface -h
+usage: cyberx-xsense-limit-interface [-h] --interface INTERFACE [--limit LIMIT] [--clear]
+
+optional arguments:
+ -h, --help show this help message and exit
+ --interface INTERFACE
+ interface (e.g. eth0)
+ --limit LIMIT limit value (e.g. 30kbit). kbps - Kilobytes per second, mbps - Megabytes per second, kbit -
+ Kilobits per second, mbit - Megabits per second, bps or a bare number - Bytes per second
+ --clear flag, will clear settings for the given interface
+root@xsense:/#
+root@xsense:/# cyberx-xsense-limit-interface --interface eth0 --limit 1000mbps
+setting the bandwidth limit of interface "eth0" to 1000mbps
+```
++++
+### Physical interfaces
+#### Locate a physical port by blinking interface lights
+
+Use the following command to locate a specific physical interface by causing the interface lights to blink.
+
+|User |Command |Full command syntax |
+||||
+|**support** | `network blink <INT>` | No attributes |
+
+In this command, `<INT>` is a physical Ethernet port on the appliance.
+
+The following example shows the *support* user blinking the *eth0* interface:
+
+```bash
+root@xsense: network blink eth0
+Blinking interface for 20 seconds ...
+```
+
+#### List connected physical interfaces
+
+Use the following commands to list the connected physical interfaces on your OT sensor.
+
+|User |Command |Full command syntax |
+||||
+|**support** | `network list` | No attributes |
+|**cyberx** | `ifconfig` | No attributes |
+
+For example, for the *support* user:
+
+```bash
+root@xsense: network list
+adiot0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST> mtu 4096
+ ether be:b1:01:1f:91:88 txqueuelen 1000 (Ethernet)
+ RX packets 2589575 bytes 740011013 (740.0 MB)
+ RX errors 0 dropped 0 overruns 0 frame 0
+ TX packets 1 bytes 90 (90.0 B)
+ TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+
+eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
+ inet 172.18.0.2 netmask 255.255.0.0 broadcast 172.18.255.255
+ ether 02:42:ac:12:00:02 txqueuelen 0 (Ethernet)
+ RX packets 22419372 bytes 5757035946 (5.7 GB)
+ RX errors 0 dropped 0 overruns 0 frame 0
+ TX packets 23078301 bytes 2544599581 (2.5 GB)
+ TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+
+lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
+ inet 127.0.0.1 netmask 255.0.0.0
+ loop txqueuelen 1000 (Local Loopback)
+ RX packets 837196 bytes 259542408 (259.5 MB)
+ RX errors 0 dropped 0 overruns 0 frame 0
+ TX packets 837196 bytes 259542408 (259.5 MB)
+ TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+
+root@xsense:
+```
+
+## Traffic capture filters
++
+To reduce alert fatigue and focus your network monitoring on high priority traffic, you may decide to filter the traffic that streams into Defender for IoT at the source. Capture filters allow you to block high-bandwidth traffic at the hardware layer, optimizing both appliance performance and resource usage.
+
+Use include and/or exclude lists to create and configure capture filters on your OT network sensors, making sure that you don't block any of the traffic that you want to monitor.
+
+The basic use case for capture filters uses the same filter for all Defender for IoT components. However, for advanced use cases, you may want to configure separate filters for each of the following Defender for IoT components:
+
+- `horizon`: Captures deep packet inspection (DPI) data
+- `collector`: Captures PCAP data
+- `traffic-monitor`: Captures communication statistics
+
+> [!NOTE]
+> Capture filters don't apply to [Defender for IoT malware alerts](../alert-engine-messages.md#malware-engine-alerts), which are triggered on all detected network traffic.
+>
+
+### Create a basic filter for all components
+
+The method used to configure a basic capture filter differs, depending on the user performing the command:
+
+- **cyberx** user: Run the specified command with specific attributes to configure your capture filter.
+- **support** user: Run the specified command, and then enter values as [prompted by the CLI](#create-a-basic-capture-filter-using-the-support-user), editing your include and exclude lists in a nano editor.
+
+Use the following commands to create a new capture filter:
+
+|User |Command |Full command syntax |
+||||
+| **support** | `network capture-filter` | No attributes.|
+| **cyberx** | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -m MODE [-S]` |
+
+Supported attributes for the *cyberx* user are defined as follows:
+
+|Attribute |Description |
+|||
+|`-h`, `--help` | Shows the help message and exits. |
+|`-i <INCLUDE>`, `--include <INCLUDE>` | The path to a file that contains the devices and subnet masks you want to include, where `<INCLUDE>` is the path to the file. |
+|`-x <EXCLUDE>`, `--exclude <EXCLUDE>` | The path to a file that contains the devices and subnet masks you want to exclude, where `<EXCLUDE>` is the path to the file. |
+|`-etp <EXCLUDE_TCP_PORT>`, `--exclude-tcp-port <EXCLUDE_TCP_PORT>` | Excludes TCP traffic on any specified ports, where `<EXCLUDE_TCP_PORT>` defines the port or ports you want to exclude. Delimit multiple ports by commas, with no spaces. |
+|`-eup <EXCLUDE_UDP_PORT>`, `--exclude-udp-port <EXCLUDE_UDP_PORT>` | Excludes UDP traffic on any specified ports, where `<EXCLUDE_UDP_PORT>` defines the port or ports you want to exclude. Delimit multiple ports by commas, with no spaces. |
+|`-itp <INCLUDE_TCP_PORT>`, `--include-tcp-port <INCLUDE_TCP_PORT>` | Includes TCP traffic on any specified ports, where `<INCLUDE_TCP_PORT>` defines the port or ports you want to include. Delimit multiple ports by commas, with no spaces. |
+|`-iup <INCLUDE_UDP_PORT>`, `--include-udp-port <INCLUDE_UDP_PORT>` | Includes UDP traffic on any specified ports, where `<INCLUDE_UDP_PORT>` defines the port or ports you want to include. Delimit multiple ports by commas, with no spaces. |
+|`-vlan <INCLUDE_VLAN_IDS>`, `--include-vlan-ids <INCLUDE_VLAN_IDS>` | Includes VLAN traffic by specified VLAN IDs, where `<INCLUDE_VLAN_IDS>` defines the VLAN ID or IDs you want to include. Delimit multiple VLAN IDs by commas, with no spaces. |
+|`-p <PROGRAM>`, `--program <PROGRAM>` | Defines the component for which you want to configure a capture filter. Use `all` for basic use cases, to create a single capture filter for all components. <br><br>For advanced use cases, create separate capture filters for each component. For more information, see [Create an advanced filter for specific components](#create-an-advanced-filter-for-specific-components).|
+|`-m <MODE>`, `--mode <MODE>` | Defines an include list mode, and is relevant only when an include list is used. Use one of the following values: <br><br>- `internal`: Includes all communication between the specified source and destination <br>- `all-connected`: Includes all communication between either of the specified endpoints and external endpoints. <br><br>For example, for endpoints A and B, if you use the `internal` mode, included traffic will only include communications between endpoints **A** and **B**. <br>However, if you use the `all-connected` mode, included traffic will include all communications between A *or* B and other, external endpoints. |
+
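+For example, for the *cyberx* user, the following sketch mirrors the interactive example shown later in this section, applying an exclude file and excluding TCP and UDP port 9000 for all components:
+
+```bash
+root@xsense:/# cyberx-xsense-capture-filter --exclude /var/cyberx/media/capture-filter/exclude --exclude-tcp-port 9000 --exclude-udp-port 9000 --program all --mode internal
+```
+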
+#### Create a basic capture filter using the support user
+
+If you're creating a basic capture filter as the *support* user, no attributes are passed in the [original command](#create-a-basic-filter-for-all-components). Instead, a series of prompts is displayed to help you create the capture filter interactively.
+
+Reply to the prompts displayed as follows:
+
+1. `Would you like to supply devices and subnet masks you wish to include in the capture filter? [Y/N]:`
+
+ Select `Y` to open a new include file, where you can add a device, channel, and/or subnet that you want to include in monitored traffic. Any other traffic, not listed in your include file, isn't ingested to Defender for IoT.
+
+ The include file is opened in the [Nano](https://www.nano-editor.org/dist/latest/cheatsheet.html) text editor. In the include file, define devices, channels, and subnets as follows:
+
+ |Type |Description |Example |
+ ||||
+ |**Device** | Define a device by its IP address. | `1.1.1.1` includes all traffic for this device. |
+ |**Channel** | Define a channel by the IP addresses of its source and destination devices, separated by a comma. | `1.1.1.1,2.2.2.2` includes all of the traffic for this channel. |
+ |**Subnet** | Define a subnet by its network address. | `1.1.1` includes all traffic for this subnet. |
+ |**Subnet channel** | Define subnet channel network addresses for the source and destination subnets. | `1.1.1,2.2.2` includes all of the traffic between these subnets. |
+
+ List multiple arguments in separate rows.
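+
+    For example, an include file that combines the entry types from the table might contain the following rows (values taken from the table's examples):
+
+    ```txt
+    1.1.1.1
+    1.1.1.1,2.2.2.2
+    1.1.1
+    1.1.1,2.2.2
+    ```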
+
+1. `Would you like to supply devices and subnet masks you wish to exclude from the capture filter? [Y/N]:`
+
+ Select `Y` to open a new exclude file where you can add a device, channel, and/or subnet that you want to exclude from monitored traffic. Any other traffic, not listed in your exclude file, is ingested to Defender for IoT.
+
+ The exclude file is opened in the [Nano](https://www.nano-editor.org/dist/latest/cheatsheet.html) text editor. In the exclude file, define devices, channels, and subnets as follows:
+
+ |Type |Description |Example |
+ ||||
+ | **Device** | Define a device by its IP address. | `1.1.1.1` excludes all traffic for this device. |
+ | **Channel** | Define a channel by the IP addresses of its source and destination devices, separated by a comma. | `1.1.1.1,2.2.2.2` excludes all of the traffic between these devices. |
+ | **Channel by port** | Define a channel by the IP addresses of its source and destination devices, and the traffic port. | `1.1.1.1,2.2.2.2,443` excludes all of the traffic between these devices and using the specified port.|
+ | **Subnet** | Define a subnet by its network address. | `1.1.1` excludes all traffic for this subnet. |
+ | **Subnet channel** | Define subnet channel network addresses for the source and destination subnets. | `1.1.1,2.2.2` excludes all of the traffic between these subnets. |
+
+ List multiple arguments in separate rows.
+
+1. Reply to the following prompts to define any TCP or UDP ports to include or exclude. Separate multiple ports with commas, and press ENTER to skip any specific prompt.
+
+ - `Enter tcp ports to include (delimited by comma or Enter to skip):`
+ - `Enter udp ports to include (delimited by comma or Enter to skip):`
+ - `Enter tcp ports to exclude (delimited by comma or Enter to skip):`
+ - `Enter udp ports to exclude (delimited by comma or Enter to skip):`
+ - `Enter VLAN ids to include (delimited by comma or Enter to skip):`
+
+ For example, enter multiple ports as follows: `502,443`
+
+1. `In which component do you wish to apply this capture filter?`
+
+ Enter `all` for a basic capture filter. For [advanced use cases](#create-an-advanced-capture-filter-using-the-support-user), create capture filters for each Defender for IoT component separately.
+
+1. `Type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/N]:`
+
+ This prompt allows you to configure which traffic is in scope. Define whether you want to collect traffic where both endpoints are in scope, or only one of them is in the specified subnet. Supported values include:
+
+ - `internal`: Includes all communication between the specified source and destination
+ - `all-connected`: Includes all communication between either of the specified endpoints and external endpoints.
+
+ For example, for endpoints A and B, if you use the `internal` mode, included traffic will only include communications between endpoints **A** and **B**. <br>However, if you use the `all-connected` mode, included traffic will include all communications between A *or* B and other, external endpoints.
+
+    The default mode is `internal`. To use the `all-connected` mode, enter `N` at the prompt.
+
+The following example shows a series of prompts that creates a capture filter to exclude subnet `192.168.x.x` and port `9000`:
+
+```bash
+root@xsense: network capture-filter
+Would you like to supply devices and subnet masks you wish to include in the capture filter? [y/N]: n
+Would you like to supply devices and subnet masks you wish to exclude from the capture filter? [y/N]: y
+You've exited the editor. Would you like to apply your modifications? [y/N]: y
+Enter tcp ports to include (delimited by comma or Enter to skip):
+Enter udp ports to include (delimited by comma or Enter to skip):
+Enter tcp ports to exclude (delimited by comma or Enter to skip):9000
+Enter udp ports to exclude (delimited by comma or Enter to skip):9000
+Enter VLAN ids to include (delimited by comma or Enter to skip):
+In which component do you wish to apply this capture filter?all
+Would you like to supply a custom base capture filter for the collector component? [y/N]: n
+Would you like to supply a custom base capture filter for the traffic_monitor component? [y/N]: n
+Would you like to supply a custom base capture filter for the horizon component? [y/N]: n
+type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/n]: internal
+Please respond with 'yes' or 'no' (or 'y' or 'n').
+type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/n]: y
+starting "/usr/local/bin/cyberx-xsense-capture-filter --exclude /var/cyberx/media/capture-filter/exclude --exclude-tcp-port 9000 --exclude-udp-port 9000 --program all --mode internal --from-shell"
+No include file given
+Loaded 1 unique channels
+(000) ret #262144
+(000) ldh [12]
+......
+......
+......
+debug: set new filter for horizon '(((not (net 192.168))) and (not (tcp port 9000)) and (not (udp port 9000))) or (vlan and ((not (net 192.168))) and (not (tcp port 9000)) and (not (udp port 9000)))'
+root@xsense:
+```
+
+### Create an advanced filter for specific components
+
+When configuring advanced capture filters for specific components, you can use your initial include and exclude files as a base, or template, capture filter. Then, configure extra filters for each component on top of the base as needed.
+
+To create a capture filter for *each* component, make sure to repeat the entire process for each component.
+
+> [!NOTE]
+> If you've created different capture filters for different components, the mode selection is used for all components. Defining the capture filter for one component as `internal` and the capture filter for another component as `all-connected` isn't supported.
+
+|User |Command |Full command syntax |
+||||
+| **support** | `network capture-filter` | No attributes.|
+| **cyberx** | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -p PROGRAM [-o BASE_HORIZON] [-s BASE_TRAFFIC_MONITOR] [-c BASE_COLLECTOR] -m MODE [-S]` |
+
+The following extra attributes are used for the *cyberx* user to create capture filters for each component separately:
+
+|Attribute |Description |
+|||
+|`-p <PROGRAM>`, `--program <PROGRAM>` | Defines the component for which you want to configure a capture filter, where `<PROGRAM>` has the following supported values: <br>- `traffic-monitor` <br>- `collector` <br>- `horizon` <br>- `all`: Creates a single capture filter for all components. For more information, see [Create a basic filter for all components](#create-a-basic-filter-for-all-components).|
+|`-o <BASE_HORIZON>`, `--base-horizon <BASE_HORIZON>` | Defines a base capture filter for the `horizon` component, where `<BASE_HORIZON>` is the filter you want to use. <br> Default value = `""` |
+|`-s <BASE_TRAFFIC_MONITOR>`, `--base-traffic-monitor <BASE_TRAFFIC_MONITOR>` | Defines a base capture filter for the `traffic-monitor` component. <br> Default value = `""` |
+|`-c <BASE_COLLECTOR>`, `--base-collector <BASE_COLLECTOR>` | Defines a base capture filter for the `collector` component. <br> Default value = `""` |
+
+Other attribute values have the same descriptions as in the basic use case, described [earlier](#create-a-basic-filter-for-all-components).
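+
+For example, for the *cyberx* user, a hypothetical sketch that configures a capture filter for the `horizon` component only, supplying a BPF-style base filter string similar to those shown in the debug output earlier:
+
+```bash
+root@xsense:/# cyberx-xsense-capture-filter -p horizon -o "not (tcp port 9000)" -m internal
+```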
+
+#### Create an advanced capture filter using the support user
+
+If you're creating a capture filter for each component separately as the *support* user, no attributes are passed in the [original command](#create-an-advanced-filter-for-specific-components). Instead, a series of prompts is displayed to help you create the capture filter interactively.
+
+Most of the prompts are identical to [basic use case](#create-a-basic-capture-filter-using-the-support-user). Reply to the following extra prompts as follows:
+
+1. `In which component do you wish to apply this capture filter?`
+
+ Enter one of the following values, depending on the component you want to filter:
+
+ - `horizon`
+ - `traffic-monitor`
+ - `collector`
+
+1. You're prompted to configure a custom base capture filter for the selected component. This option uses the capture filter you configured in the previous steps as a base, or template, where you can add extra configurations on top of the base.
+
+ For example, if you'd selected to configure a capture filter for the `collector` component in the previous step, you're prompted: `Would you like to supply a custom base capture filter for the collector component? [Y/N]:`
+
+ Enter `Y` to customize the template for the specified component, or `N` to use the capture filter you'd configured earlier as it is.
+
+Continue with the remaining prompts as in the [basic use case](#create-a-basic-capture-filter-using-the-support-user).
+
+### List current capture filters for specific components
+
+Use the following commands to show details about the current capture filters configured for your sensor.
+
+|User |Command |Full command syntax |
+||||
+| **support** | Use the following commands to view the capture filters for each component: <br><br>- **horizon**: `edit-config horizon_parser/horizon.properties` <br>- **traffic-monitor**: `edit-config traffic_monitor/traffic-monitor` <br>- **collector**: `edit-config dumpark.properties` | No attributes |
+| **cyberx** | Use the following commands to view the capture filters for each component: <br><br>- **horizon**: `nano /var/cyberx/properties/horizon_parser/horizon.properties` <br>- **traffic-monitor**: `nano /var/cyberx/properties/traffic_monitor/traffic-monitor.properties` <br>- **collector**: `nano /var/cyberx/properties/dumpark.properties` | No attributes |
+
+These commands open the following files, which list the capture filters configured for each component:
+
+|Name |File |Property |
+||||
+|**horizon** | `/var/cyberx/properties/horizon.properties` | `horizon.processor.filter` |
+|**traffic-monitor** | `/var/cyberx/properties/traffic-monitor.properties` | `horizon.processor.filter` |
+|**collector** | `/var/cyberx/properties/dumpark.properties` | `dumpark.network.filter` |
+
+For example, for the *support* user, with a capture filter defined for the *collector* component that excludes subnet 192.168.x.x and port 9000:
+
+```bash
+
+root@xsense: edit-config dumpark.properties
+ GNU nano 2.9.3 /tmp/tmpevt4igo7/tmpevt4igo7
+
+dumpark.network.filter=(((not (net 192.168))) and (not (tcp port 9000)) and (not
+dumpark.network.snaplen=4096
+dumpark.packet.filter.data.transfer=false
+dumpark.infinite=true
+dumpark.output.session=false
+dumpark.output.single=false
+dumpark.output.raw=true
+dumpark.output.rotate=true
+dumpark.output.rotate.history=300
+dumpark.output.size=20M
+dumpark.output.time=30S
+```
+
+### Reset all capture filters
+
+Use the following command to reset your sensor to the default capture configuration with the *cyberx* user, removing all capture filters.
+
+|User |Command |Full command syntax |
+||||
+| **cyberx** | `cyberx-xsense-capture-filter -p all -m all-connected` | No attributes |
+
+If you want to modify the existing capture filters, run the [earlier](#create-a-basic-filter-for-all-components) command again, with new attribute values.
+
+To reset all capture filters using the *support* user, run the [earlier](#create-a-basic-filter-for-all-components) command again, and respond `N` to all [prompts](#create-a-basic-capture-filter-using-the-support-user) to reset all capture filters.
+
+The following example shows the command syntax and response for the *cyberx* user:
+
+```bash
+root@xsense:/# cyberx-xsense-capture-filter -p all -m all-connected
+starting "/usr/local/bin/cyberx-xsense-capture-filter -p all -m all-connected"
+No include file given
+No exclude file given
+(000) ret #262144
+(000) ret #262144
+debug: set new filter for dumpark ''
+No include file given
+No exclude file given
+(000) ret #262144
+(000) ret #262144
+debug: set new filter for traffic-monitor ''
+No include file given
+No exclude file given
+(000) ret #262144
+(000) ret #262144
+debug: set new filter for horizon ''
+root@xsense:/#
+```
+
+## Alerts
+### Trigger a test alert
+
+Use the following command to test connectivity and alert forwarding from the sensor to management consoles, including the Azure portal, a Defender for IoT on-premises management console, or a third-party SIEM.
+
+|User |Command |Full command syntax |
+||||
+| **cyberx** | `cyberx-xsense-trigger-test-alert` | No attributes |
+
+The following example shows the command syntax and response for the *cyberx* user:
+
+```bash
+root@xsense:/# cyberx-xsense-trigger-test-alert
+Triggering Test Alert...
+Test Alert was successfully triggered.
+```
+
+### Alert exclusion rules from an OT sensor
+
+The following commands support alert exclusion features on your OT sensor, including showing current exclusion rules, adding and editing rules, and deleting rules.
+
+> [!NOTE]
+> Alert exclusion rules defined on an OT sensor can be overwritten by alert exclusion rules defined on your on-premises management console.
+
+#### Show current alert exclusion rules
+
+Use the following command to display a list of currently configured exclusion rules.
+
+|User |Command |Full command syntax |
+||||
+|**support** | `alerts exclusion-rule-list` | `alerts exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+|**cyberx** | `alerts cyberx-xsense-exclusion-rule-list` | `alerts cyberx-xsense-exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+
+The following example shows the command syntax and response for the *support* user:
+
+```bash
+root@xsense: alerts exclusion-rule-list
+starting "/usr/local/bin/cyberx-xsense-exclusion-rule-list"
+root@xsense:
+```
+
+#### Create a new alert exclusion rule
+
+Use the following commands to create a local alert exclusion rule on your sensor.
+
+|User |Command |Full command syntax |
+||||
+| **support** | `alerts exclusion-rule-create` | `alerts exclusion-rule-create [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
+| **cyberx** |`cyberx-xsense-exclusion-rule-create` |`cyberx-xsense-exclusion-rule-create [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+
+Supported attributes are defined as follows:
+
+|Attribute |Description |
+|||
+|`-h`, `--help` | Shows the help message and exits. |
+|`[-n <NAME>]`, `[--name <NAME>]` | Defines the rule's name.|
+|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `xx:yy-xx:yy, xx:yy-xx:yy` |
+|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`|
+|`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`|
+| `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
+
+The following example shows the command syntax and response for the *support* user:
+
+```bash
+alerts exclusion-rule-create [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
+[-dev DEVICES] [-a ALERTS]
+```
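+
+For instance, a hypothetical rule that excludes alerts for a single device during a nightly maintenance window might look like the following (the rule name, time span, and device address are illustrative):
+
+```bash
+root@xsense: alerts exclusion-rule-create -n maintenance-window -ts 02:00-04:00 -dir both -dev ip-1.1.1.1
+```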
+
+#### Modify an alert exclusion rule
+
+Use the following commands to modify an existing local alert exclusion rule on your sensor.
+
+|User |Command |Full command syntax |
+||||
+| **support** | `alerts exclusion-rule-append` | `alerts exclusion-rule-append [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
+| **cyberx** |`exclusion-rule-append` |`exclusion-rule-append [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+
+Supported attributes are defined as follows:
+
+|Attribute |Description |
+|||
+|`-h`, `--help` | Shows the help message and exits. |
+|`[-n <NAME>]`, `[--name <NAME>]` | The name of the rule you want to modify.|
+|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `xx:yy-xx:yy, xx:yy-xx:yy` |
+|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`|
+|`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`|
+| `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
+
+Use the following command syntax with the *support* user:
+
+```bash
+alerts exclusion-rule-append [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
+[-dev DEVICES] [-a ALERTS]
+```
+
+#### Delete an alert exclusion rule
+
+Use the following commands to delete an existing local alert exclusion rule on your sensor.
+
+|User |Command |Full command syntax |
+||||
+| **support** | `alerts exclusion-rule-remove` | `alerts exclusion-rule-remove [-h] -n NAME [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]`|
+| **cyberx** |`exclusion-rule-remove` |`exclusion-rule-remove [-h] [-n NAME] [-ts TIMES] [-dir DIRECTION] [-dev DEVICES] [-a ALERTS]` |
+
+Supported attributes are defined as follows:
+
+|Attribute |Description |
+|||
+|`-h`, `--help` | Shows the help message and exits. |
+|`[-n <NAME>]`, `[--name <NAME>]` | The name of the rule you want to delete.|
+|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `xx:yy-xx:yy, xx:yy-xx:yy` |
+|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`|
+|`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`|
+| `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
+
+The following example shows the command syntax and response for the *support* user:
+
+```bash
+alerts exclusion-rule-remove [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
+[-dev DEVICES] [-a ALERTS]
+```
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md)
defender-for-iot Faqs Ot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md
This article provides a list of frequently asked questions and answers about OT
## Our organization uses proprietary non-standard industrial protocols. Are they supported?
-Microsoft Defender for IoT provides comprehensive protocol support. In addition to embedded protocol support, you can secure IoT and OT devices running proprietary and custom protocols, or protocols that deviate from any standard. Using the Horizon Open Development Environment (ODE) SDK, developers can create dissector plugins that decode network traffic based on defined protocols. Traffic is analyzed by services to provide complete monitoring, alerting, and reporting. Use Horizon to:
+Microsoft Defender for IoT provides comprehensive protocol support. In addition to embedded protocol support, you can secure IoT and OT devices running proprietary and custom protocols, or protocols that deviate from any standard. Use the Horizon Open Development Environment (ODE) SDK to create dissector plugins that decode network traffic based on defined protocols. Traffic is analyzed by services to provide complete monitoring, alerting, and reporting. Use Horizon to:
+ - Expand visibility and control without the need to upgrade to new versions. - Secure proprietary information by developing on-site as an external plugin. - Localize text for alerts, events, and protocol parameters.
Certified hardware has been tested in our labs for driver stability, packet drop
## Regulation doesn't allow us to connect our system to the Internet. Can we still utilize Defender for IoT?
-Yes you can! The Microsoft Defender for IoT platform on-premises solution is deployed as a physical or virtual sensor appliance that passively ingests network traffic (via SPAN, RSPAN, or TAP) to analyze, discover, and continuously monitor IT, OT, and IoT networks. For larger enterprises, multiple sensors can aggregate their data to an on-premises management console.
+Yes you can! The Microsoft Defender for IoT platform on-premises solution is deployed as a physical or virtual sensor appliance that passively ingests network traffic, such as via SPAN, RSPAN, or TAP, to analyze, discover, and continuously monitor IT, OT, and IoT networks. For larger enterprises, multiple sensors can aggregate their data to an on-premises management console.
## Where in the network should I connect monitoring ports?
-The Microsoft Defender for IoT sensor connects to a SPAN port or network TAP and immediately begins collecting ICS network traffic via passive (agentless) monitoring. It has zero impact on OT networks since it isn't placed in the data path and doesn't actively scan OT devices.
+The Microsoft Defender for IoT sensor connects to a SPAN port or network TAP and immediately begins collecting ICS network traffic via passive (agentless) monitoring. It has zero effect on OT networks since it isn't placed in the data path and doesn't actively scan OT devices.
For example: - A single appliance (virtual or physical) can be in the Shop Floor DMZ layer, having all Shop Floor cell traffic routed to this layer.
For information on how to activate your on-premises management console, see [Act
## How to change the network configuration
-You can update your sensor network configuration before or after activation. For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
+Change network configuration settings before or after you activate your sensor using either of the following options:
+
+- **From the sensor UI**: [Update the sensor network configuration](how-to-manage-individual-sensors.md#update-the-sensor-network-configuration)
+- **From the sensor CLI**: [Network configuration](cli-ot-sensor.md#network-configuration)
-You can also [update the sensor network configuration](how-to-manage-individual-sensors.md#update-the-sensor-network-configuration) after activation.
+For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md) and [Getting started with advanced CLI commands](references-work-with-defender-for-iot-cli-commands.md).
-You can work with CLI [commands](references-work-with-defender-for-iot-cli-commands.md#network-configuration) to [change network configurations](references-work-with-defender-for-iot-cli-commands.md#network-configuration).
## How do I check the sanity of my deployment
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
A unique activation file is uploaded to each sensor that you deploy. For more in
Locally connected sensors are associated with an Azure subscription. The activation file for your locally connected sensors contains an expiration date. One month before this date, a warning message appears in the System Messages window in the top-right corner of the console. The warning remains until after you've updated the activation file.
-You can continue to work with Defender for IoT features even if the activation file has expired.
+You can continue to work with Defender for IoT features even if the activation file has expired.
### About activation files for cloud-connected sensors
Sensors that are cloud connected aren't limited by time periods for their activa
You might need to upload a new activation file for an onboarded sensor when: -- An activation file expires on a locally connected sensor.
+- An activation file expires on a locally connected sensor.
-- You want to work in a different sensor management mode.
+- You want to work in a different sensor management mode.
- For sensors connected via an IoT Hub ([legacy](architecture-connections.md)), you want to assign a new Defender for IoT hub to a cloud-connected sensor.
You'll receive an error message if the activation file couldn't be uploaded. The
## Manage certificates
-Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate.
+Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate.
Sensor Administrators may be required to update certificates that were uploaded after initial login. This may happen, for example, if a certificate expired.
This section describes how to ensure connection between the sensor and the on-pr
3. In the **Sensor Setup ΓÇô Connection String** section, copy the automatically generated connection string.
- :::image type="content" source="media/how-to-manage-individual-sensors/connection-string-screen.png" alt-text="Copy the connection string from this screen.":::
+ :::image type="content" source="media/how-to-manage-individual-sensors/connection-string-screen.png" alt-text="Copy the connection string from this screen.":::
4. Sign in to the sensor console.
Continue with additional settings, such as [adding users](how-to-create-and-mana
## Change the name of a sensor You can change the name of your sensor console. The new name will appear in:+ - The sensor console web browser - Various console windows - Troubleshooting logs
System backup is performed automatically at 3:00 AM daily. The data is saved on
You can automatically transfer this file to the internal network. > [!NOTE]
+>
> - The backup and restore procedure can be performed between the same versions only. > - In some architectures, the backup is disabled. You can enable it in the `/var/cyberx/properties/backup.properties` file.
Sensor backup files are automatically named through the following format: `<sens
4. Edit and create credentials to share for the SMB server:
- `sudo nano /etc/samba/user`
+ `sudo nano /etc/samba/user`
5. Add:
You can restore a sensor from a backup file using the sensor console or the CLI.
To restore a backup from the sensor console, the backup file must be accessible from the sensor. - **To download a backup file:**
-
+ 1. Access the sensor using an SFTP client.
-
+ 1. Sign in to an administrative account and enter the sensor IP address.
-
+ 1. Download the backup file from your chosen location and save it. The default location for system backup files is `/var/cyberx/backups`.
-
+ - **To restore the sensor**:
-
+ 1. Sign in to the sensor console and go to **System settings** > **Sensor management** > **Backup & restore** > **Restore**. For example:
-
+ :::image type="content" source="media/how-to-manage-individual-sensors/restore-sensor-screen.png" alt-text="Screenshot of Restore tab in sensor console.":::
-
- 1. Select **Browse** to select your downloaded backup file. The sensor will start to restore from the selected backup file.
-
- 1. When the restore process is complete, select **Close**.
+
+ 1. Select **Browse** to select your downloaded backup file. The sensor will start to restore from the selected backup file.
+
+ 1. When the restore process is complete, select **Close**.
**To restore the latest backup file by using the CLI:**
To restore a backup from the sensor console, the backup file must be accessible
## Configure SMTP settings
-Define SMTP mail server settings for the sensor so that you configure the sensor to send data to other servers.
+Define SMTP mail server settings for the sensor so that you configure the sensor to send data to other servers.
You'll need an SMTP mail server configured to enable email alerts about disconnected sensors, failed sensor backup retrievals, and SPAN monitoring port failures from the on-premises management console, and to set up mail forwarding and configure [forwarding alert rules](how-to-forward-alert-information-to-partners.md). **Prerequisites**:
-Make sure you can reach the SMTP server from the [sensor's management port](/best-practices/understand-network-architecture).
+Make sure you can reach the SMTP server from the [sensor's management port](/azure/defender-for-iot/organizations/best-practices/understand-network-architecture).
**To configure an SMTP server on your sensor**:
Make sure you can reach the SMTP server from the [sensor's management port](/bes
|**SSL** | Toggle on for secure connections from your sensor. | |**Authentication** | Toggle on and then enter a username and password for your email account. | |**Use NTLM** | Toggle on to enable [NTLM](/windows-server/security/kerberos/ntlm-overview). This option only appears when you have the **Authentication** option toggled on. |
-
+ 1. Select **Save** when you're done. ## Forward sensor failure alerts
To access system properties:
## Download a diagnostics log for support
-This procedure describes how to download a diagnostics log to send to support in connection with a specific support ticket.
+This procedure describes how to download a diagnostics log to send to support in connection with a specific support ticket.
This feature is supported for the following sensor versions:
This feature is supported for the following sensor versions:
1. For a locally managed sensor, version 22.1.3 or higher, continue with [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview).
+## Retrieve forensics data stored on the sensor
+
+Use Defender for IoT data mining reports on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data are stored locally on OT sensors, for devices detected by that sensor:
+
+- Device data
+- Alert data
+- Alert PCAP files
+- Event timeline data
+- Log files
+
+Each type of data has a different retention period and maximum capacity. For more information, see [Create data mining queries](how-to-create-data-mining-queries.md).
+ ## Clearing sensor data In cases where the sensor needs to be relocated or erased, the sensor can be reset.
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md
You can define the following sensor system settings from the management console:
1. Select **Save**. -- ## Update threat intelligence packages The data package for threat intelligence is provided with each new Defender for IoT version, or if needed between releases. The package contains signatures (including malware signatures), CVEs, and other security content.
You can manually upload this file in the Azure portal and automatically update i
[!INCLUDE [root-of-trust](includes/root-of-trust.md)] - **To update the threat intelligence data:**
-1. Go to the Defender for IoT **Updates** page.
+1. Go to the Defender for IoT **Updates** page.
1. Download and save the file.
-1. Sign in to the management console.
+1. Sign in to the management console.
-1. On the side menu, select **System Settings**.
+1. On the side menu, select **System Settings**.
1. Select the sensors that should receive the update in the **Sensor Engine Configuration** section.
-1. In the **Select Threat Intelligence Data** section, select the plus sign (**+**).
+1. In the **Select Threat Intelligence Data** section, select the plus sign (**+**).
1. Upload the package that you downloaded from the Defender for IoT **Updates** page.
Sensors are protected by Defender for IoT engines. You can enable or disable the
1. In the console's left pane, select **System Settings**. 1. In the **Sensor Engine Configuration** section, select **Enable** or **Disable** for the engines.
-
+ 1. Select **SAVE CHANGES**. A red exclamation mark appears if there's a mismatch of enabled engines on one of your enterprise sensors. The engine might have been disabled directly from the sensor.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/red-exclamation-example.png" alt-text="Mismatch of enabled engines.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/red-exclamation-example.png" alt-text="Mismatch of enabled engines.":::
+
+## Retrieve forensics data stored on the sensor
+
+Use Defender for IoT data mining reports on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data are stored locally on OT sensors, for devices detected by that sensor:
+
+- Device data
+- Alert data
+- Alert PCAP files
+- Event timeline data
+- Log files
+
+Each type of data has a different retention period and maximum capacity. For more information, see [Create data mining queries](how-to-create-data-mining-queries.md).
## Define sensor backup schedules
By default, sensors are automatically backed up at 3:00 AM daily. The backup sch
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/sensor-backup-schedule-screen.png" alt-text="A view of the sensor backup screen.":::
-When the default sensor backup location is changed, the on-premises management console automatically retrieves the files from the new location on the sensor or an external location, provided that the console has permission to access the location.
+When the default sensor backup location is changed, the on-premises management console automatically retrieves the files from the new location on the sensor or an external location, provided that the console has permission to access the location.
When the sensors aren't registered with the on-premises management console, the **Sensor Backup Schedule** dialog box indicates that no sensors are managed.
The restore process is the same regardless of where the files are stored. For mo
### Backup storage for sensors
-You can use the on-premises management console to maintain up to nine backups for each managed sensor, provided that the backed-up files don't exceed the maximum backup space that's allocated.
+You can use the on-premises management console to maintain up to nine backups for each managed sensor, provided that the backed-up files don't exceed the maximum backup space that's allocated.
-The available space is calculated based on the management console model you're working with:
+The available space is calculated based on the management console model you're working with:
-- **Production model**: Default storage is 40 GB; limit is 100 GB.
+- **Production model**: Default storage is 40 GB; limit is 100 GB.
-- **Medium model**: Default storage is 20 GB; limit is 50 GB.
+- **Medium model**: Default storage is 20 GB; limit is 50 GB.
-- **Laptop model**: Default storage is 10 GB; limit is 25 GB.
+- **Laptop model**: Default storage is 10 GB; limit is 25 GB.
-- **Thin model**: Default storage is 2 GB; limit is 4 GB.
+- **Thin model**: Default storage is 2 GB; limit is 4 GB.
-- **Rugged model**: Default storage is 10 GB; limit is 25 GB.
+- **Rugged model**: Default storage is 10 GB; limit is 25 GB.
-The default allocation is displayed in the **Sensor Backup Schedule** dialog box.
+The default allocation is displayed in the **Sensor Backup Schedule** dialog box.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/edit-mail-server-configuration.png" alt-text="The Edit Mail Server Configuration screen.":::
-There's no storage limit when you're backing up to an external server. You must, however, define an upper allocation limit in the **Sensor Backup Schedule** > **Custom Path** field. The following numbers and characters are supported: `/, a-z, A-Z, 0-9, and _`.
+There's no storage limit when you're backing up to an external server. You must, however, define an upper allocation limit in the **Sensor Backup Schedule** > **Custom Path** field. The following numbers and characters are supported: `/, a-z, A-Z, 0-9, and _`.
Here's information about exceeding allocation storage limits: -- If you exceed the allocated storage space, the sensor isn't backed up.
+- If you exceed the allocated storage space, the sensor isn't backed up.
- If you're backing up more than one sensor, the management console tries to retrieve sensor files for the managed sensors. -- If the retrieval from one sensor exceeds the limit, the management console tries to retrieve backup information from the next sensor.
+- If the retrieval from one sensor exceeds the limit, the management console tries to retrieve backup information from the next sensor.
When you exceed the retained number of backups defined, the oldest backed-up file is deleted to accommodate the new one.
-Sensor backup files are automatically named in the following format: `<sensor name>-backup-version-<version>-<date>.tar`. For example: `Sensor_1-backup-version-2.6.0.102-2019-06-24_09:24:55.tar`.
+Sensor backup files are automatically named in the following format: `<sensor name>-backup-version-<version>-<date>.tar`. For example: `Sensor_1-backup-version-2.6.0.102-2019-06-24_09:24:55.tar`.
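To see which backups are currently retained, a quick listing like the following can help. This is a minimal sketch that assumes the default on-premises backup location described later in this procedure:

```bash
# List retained sensor backup archives, newest first.
# The default path and nine-backup retention limit are described above;
# adjust the path if you back up to a custom location.
ls -1t /var/cyberx/sensor-backups/*-backup-version-*.tar | head -n 9
```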
**To back up sensors:**
Sensor backup files are automatically named in the following format: `<sensor na
1. Enable the **Collect Backups** toggle.
-1. Select a calendar interval, date, and time zone. The time format is based on a 24-hour clock. For example, enter 6:00 PM as **18:00**.
+1. Select a calendar interval, date, and time zone. The time format is based on a 24-hour clock. For example, enter 6:00 PM as **18:00**.
1. In the **Backup Storage Allocation** field, enter the storage that you want to allocate for your backups. You're notified if you exceed the maximum space.
Sensor backup files are automatically named in the following format: `<sensor na
- To back up to the on-premises management console, disable the **Custom Path** toggle. The default location is `/var/cyberx/sensor-backups`.
- - To back up to an external server, enable the **Custom Path** toggle and enter a location. The following numbers and characters are supported: `/, a-z, A-Z, 0-9, and, _`.
+ - To back up to an external server, enable the **Custom Path** toggle and enter a location. The following numbers and characters are supported: `/, a-z, A-Z, 0-9, and, _`.
-1. Select **Save**.
+1. Select **Save**.
**To back up immediately:** -- Select **Back Up Now**. The on-premises management console creates and collects sensor backup files.
+- Select **Back Up Now**. The on-premises management console creates and collects sensor backup files.
-### Receiving backup notifications for sensors
+### Receiving backup notifications for sensors
The **Sensor Backup Schedule** dialog box and the backup log automatically list information about backup successes and failures. :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/sensor-location.png" alt-text="View your sensors and where they're located and all relevant information.":::
-Failures might occur because:
+Failures might occur because:
-- No backup file is found.
+- No backup file is found.
- A file was found but can't be retrieved. -- There's a network connection failure.
+- There's a network connection failure.
- There's not enough room allocated to the on-premises management console to complete the backup.
You can send an email notification, syslog updates, and system notifications whe
**To set up an SMB server so you can save a sensor backup to an external drive:**
-1. Create a shared folder in the external SMB server.
+1. Create a shared folder in the external SMB server.
-1. Get the folder path, username, and password required to access the SMB server.
+1. Get the folder path, username, and password required to access the SMB server.
-1. In Defender for IoT, make a directory for the backups:
+1. In Defender for IoT, make a directory for the backups:
```bash sudo mkdir /<backup_folder_name_on_server> sudo chmod 777 /<backup_folder_name_on_server>/
- ```
+ ```
1. Edit fstab:
You can send an email notification, syslog updates, and system notifications whe
add - //<server_IP>/<folder_path> /<backup_folder_name_on_cyberx_server> cifs rw,credentials=/etc/samba/user,vers=3.0,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0 ```
-
-1. Edit or create credentials to share. These are the credentials for the SMB server:
+1. Edit or create credentials to share. These are the credentials for the SMB server:
```bash sudo nano /etc/samba/user ```
-
1. Add:
You can send an email notification, syslog updates, and system notifications whe
password=<password> ```
-
-1. Mount the directory:
+1. Mount the directory:
```bash sudo mount -a ```
-
1. Configure a backup directory to the shared folder on the Defender for IoT sensor: ```bash sudo nano /var/cyberx/properties/backup.properties ```
-
1. Set `Backup.shared_location` to `<backup_folder_name_on_cyberx_server>`.
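After completing these steps, it can help to confirm that the share is mounted and writable before relying on scheduled backups. The following verification commands are a sketch only; the folder name placeholder matches the one used in the steps above:

```bash
# Confirm the CIFS share is mounted
mount | grep cifs

# Confirm the backup directory is writable, then clean up the test file
touch /<backup_folder_name_on_cyberx_server>/.write_test && \
  rm /<backup_folder_name_on_cyberx_server>/.write_test && \
  echo "Backup share is mounted and writable"
```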
For more information, see:
- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md) - [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)-
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
To edit a site's details, select the site's name on the **Sites and sensors** pa
- **Tags**: (Optional) Enter values for the **Key** and **Value** fields for each new tag you want to add to your site. Select **+ Add** to add a new tag. -- **Owner**: For sites with OT sensors only. Enter one or more email addresses for the user you want to designate as the owner of the devices at this site. The site owner is inherited by all devices at the site, and is shown on the IoT device entity pages and in incident details in Microsoft Sentinel.
+- **Owner**: For sites with OT sensors only. Enter one or more email addresses for the user you want to designate as the owner of the devices at this site. The site owner is inherited by all devices at the site, and is shown on the IoT device entity pages and in incident details in Microsoft Sentinel.
In Microsoft Sentinel, use the **AD4IoT-SendEmailtoIoTOwner** and **AD4IoT-CVEAutoWorkflow** playbooks to automatically notify device owners about important alerts or incidents. For more information, see [Investigate and detect threats for IoT devices](../../sentinel/iot-advanced-threat-monitoring.md).
When you're done, select **Save** to save your changes.
Sensors that you've on-boarded to Defender for IoT are listed on the Defender for IoT **Sites and sensors** page. Select a specific sensor name to drill down to more details for that sensor.
-Use the options on the **Sites and sensor** page and a sensor details page to do any of the following tasks. If you're on the **Sites and sensors** page, select multiple sensors to apply your actions in bulk using toolbar options. For individual sensors, use the **Sites and sensors** toolbar options, the **...** options menu at the right of a sensor row, or the options on a sensor details page.
+Use the options on the **Sites and sensors** page and a sensor details page to do any of the following tasks. If you're on the **Sites and sensors** page, select multiple sensors to apply your actions in bulk using toolbar options. For individual sensors, use the **Sites and sensors** toolbar options, the **...** options menu at the right of a sensor row, or the options on a sensor details page.
|Task |Description | |||
Use the options on the **Sites and sensor** page and a sensor details page to do
| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support (Public preview)](#upload-a-diagnostics-log-for-support-public-preview).| | **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md).| | **Recover an on-premises management console password** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). |
-| **Download endpoint details** (Public preview) | Available from the **Sites and sensors** toolbar **More actions** menu, for OT sensor versions 22.x only. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
+|<a name="endpoint"></a> **Download endpoint details** (Public preview) | Available from the **Sites and sensors** toolbar **More actions** menu, for OT sensor versions 22.x only. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
+
+## Retrieve forensics data stored on the sensor
+
+Use Azure Monitor workbooks on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data are stored locally on OT sensors, for devices detected by that sensor:
+
+- Device data
+- Alert data
+- Alert PCAP files
+- Event timeline data
+- Log files
+
+Each type of data has a different retention period and maximum capacity. For more information, see [Visualize Microsoft Defender for IoT data with Azure Monitor workbooks](workbooks.md).
## Reactivate an OT sensor
Make sure that you've started with the relevant updates steps for this update. F
> > For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users). - ## Understand sensor health (Public preview) This procedure describes how to view sensor health data from the Azure portal. Sensor health includes data such as whether traffic is stable, the sensor is overloaded, notifications about sensor software versions, and more.
defender-for-iot Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md
This playbook updates the incident severity according to the importance level of
## Next steps > [!div class="nextstepaction"]
-> [Visualize data](/azure/sentinel/get-visibility.md)
+> [Visualize data](/azure/sentinel/get-visibility)
> [!div class="nextstepaction"]
-> [Create custom analytics rules](/azure/sentinel/detect-threats-custom.md)
+> [Create custom analytics rules](/azure/sentinel/detect-threats-custom)
> [!div class="nextstepaction"]
-> [Investigate incidents](/sentinel/investigate-cases)
+> [Investigate incidents](/azure/sentinel/investigate-cases)
> [!div class="nextstepaction"]
-> [Investigate entities](/azure/sentinel/entity-pages.md)
+> [Investigate entities](/azure/sentinel/entity-pages)
> [!div class="nextstepaction"]
-> [Use playbooks with automation rules](/azure/sentinel/tutorial-respond-threats-playbook.md)
+> [Use playbooks with automation rules](/azure/sentinel/tutorial-respond-threats-playbook)
For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
defender-for-iot Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md
Before you start, make sure you have the following requirements on your workspac
- A Defender for IoT plan on your Azure subscription with data streaming into Defender for IoT. For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md). > [!IMPORTANT]
-> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](/azure/sentinel/data-connectors-reference.md#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
+> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](/azure/sentinel/data-connectors-reference#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
> ## Connect your data from Defender for IoT to Microsoft Sentinel
Start by enabling the **Defender for IoT** data connector to stream all your Def
If you've made any connection changes, it can take 10 seconds or more for the **Subscription** list to update.
-For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](/azure/sentinel/connect-azure-windows-microsoft-services.md).
+For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](/azure/sentinel/connect-azure-windows-microsoft-services).
## View Defender for IoT alerts
For more information, see:
- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) - [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184) - [Microsoft Defender for IoT solution](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview)-- [Microsoft Defender for IoT data connector](/azure/sentinel/data-connectors-reference.md#microsoft-defender-for-iot)
+- [Microsoft Defender for IoT data connector](/azure/sentinel/data-connectors-reference#microsoft-defender-for-iot)
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
This procedure describes how to create new users for a specific OT network senso
|**First Name** | Enter the user's first name. | |**Last Name** | Enter the user's last name. | |**Role** | Select one of the following user roles: **Admin**, **Security Analyst**, or **Read Only**. For more information, see [On-premises user roles](roles-on-premises.md#on-premises-user-roles). |
- |**Password** | Select the user type, either **Local** or **Active Directory User**. <br><br>For local users, enter a password for the user. Password requirements include: <br>- At least eight characters<br>- Both lowercase and uppercase alphabetic characters<br>- At least one numbers<br>- At least one symbol<br><br>Local user passwords can only be modified by **Admin** users.|
+ |**Password** | Select the user type, either **Local** or **Active Directory User**. <br><br>For local users, enter a password for the user. Password requirements include: <br>- At least eight characters<br>- Both lowercase and uppercase alphabetic characters<br>- At least one number<br>- At least one symbol<br><br>Local user passwords can only be modified by **Admin** users.|
> [!TIP] > Integrating with Active Directory lets you associate groups of users with specific permission levels. If you want to create users using Active Directory, first configure [Active Directory on the sensor](manage-users-sensor.md#integrate-ot-sensor-users-with-active-directory) and then return to this procedure.
For more information, see [Active Directory support on sensors and on-premises m
|Name |Description | ||| |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
- |**Domain Controller Port** | The port on which your LDAP is configured. |
+ |**Domain Controller Port** | The port where your LDAP is configured. |
|**Primary Domain** | The domain name, such as `subdomain.domain.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** | |**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br> When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. You'll use these group names when [adding new sensor users](#add-new-ot-sensor-users) with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. |
This procedure describes how to recover privileged access to a sensor, for the *c
> > Return to Azure, and select the settings icon in the top toolbar. On the **Directories + subscriptions** page, make sure that you've selected the subscription where your sensor was onboarded to Defender for IoT. Then repeat the steps in Azure to download the **password_recovery.zip** file and upload it on the sensor again.
-1. Select **Next**. A system-generated password for your sensor appears for you to use for the selected user. Make sure to write the password down as it won't be shown again.
+1. Select **Next**. A system-generated password for your sensor appears for you to use for the selected user. Make sure to write down the password as it won't be shown again.
1. Select **Next** again to sign into your sensor with the new password.
+### Define maximum number of failed sign-ins
+
+Use the OT sensor's CLI access to define the maximum number of failed sign-ins before an OT sensor prevents the user from signing in again from the same IP address.
+
+For more information, see [Defender for IoT CLI users and access](references-work-with-defender-for-iot-cli-commands.md).
+
+**Prerequisites**: This procedure is available for the *cyberx* user only.
+
+1. Sign in to your OT sensor via SSH and run:
+
+ ```bash
+ nano /var/cyberx/components/xsense-web/cyberx_web/settings.py
+ ```
+
+1. In the **settings.py** file, set the `"MAX_FAILED_LOGINS"` value to the maximum number of failed sign-ins you want to define. Make sure that you consider the number of concurrent users in your system.
+
+1. Exit the file and run `sudo monit restart all` to apply your changes.
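For example, to confirm the value before and after editing, you can inspect the setting directly. This is a sketch that assumes the key appears as a plain assignment in the file referenced above:

```bash
# Show the current maximum failed sign-ins value
grep MAX_FAILED_LOGINS /var/cyberx/components/xsense-web/cyberx_web/settings.py

# After saving your change, restart services to apply it
sudo monit restart all
```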
+ ## Control user session timeouts
-By default, on-premises users are signed out of their sessions after 30 minutes of inactivity. Admin users can use the local CLI to either turn this feature on or off, or to adjust the inactivity thresholds.
-For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+By default, on-premises users are signed out of their sessions after 30 minutes of inactivity. Admin users can use the local CLI access to either turn this feature on or off, or to adjust the inactivity thresholds. For more information, see [Defender for IoT CLI users and access](references-work-with-defender-for-iot-cli-commands.md).
> [!NOTE] > Any changes made to user session timeouts are reset to defaults when you [update the OT monitoring software](update-ot-software.md).
For more information, see [Work with Defender for IoT CLI commands](references-w
For more information, see: - [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)-- [Audit user activity](track-user-activity.md)
+- [Audit user activity](track-user-activity.md)
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
For more information, see:
- [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md) - [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md) - [View alerts on your sensor](how-to-view-alerts.md)-- [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
+- [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
Title: Work with Defender for IoT CLI commands
-description: This article describes Defender for IoT CLI commands for sensors and on-premises management consoles.
Previously updated : 11/09/2021-
+ Title: CLI command users and access for OT monitoring - Microsoft Defender for IoT
+description: Learn about the users supported for the Microsoft Defender for IoT CLI commands and how to access the CLI.
Last updated : 12/29/2022+
-# Work with Defender for IoT CLI commands
+# Defender for IoT CLI users and access
-This article describes CLI commands for sensors and on-premises management consoles. The commands are accessible to the following users:
+This article provides an introduction to the Microsoft Defender for IoT command line interface (CLI). The CLI is a text-based user interface that allows you to access your OT and Enterprise IoT sensors, and the on-premises management console, for advanced configuration, troubleshooting, and support.
-- `cyberx`-- `support`-- `cyberx_host`
+To access the Defender for IoT CLI, you'll need access to the sensor or on-premises management console.
-For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users)..
+- For OT sensors or the on-premises management console, you'll need to sign in as a [privileged user](#privileged-user-access-for-ot-monitoring).
+- For Enterprise IoT sensors, you can sign in as any user.
-To start working in the CLI, connect using a terminal, such as PuTTY, using one of the privileged users.
+## Privileged user access for OT monitoring
+Privileged users for OT monitoring are pre-defined as part of the [OT monitoring software installation](../how-to-install-software.md), on the hardened operating system.
-## Create local alert exclusion rules
+- On the OT sensor, users include the *cyberx*, *support*, and *cyberx_host* users.
+- On the on-premises management console, users include the *cyberx* and *support* users.
-You can create a local alert exclusion rule by entering the following command into the CLI:
+The following table describes the access available to each privileged user:
-```azurecli-interactive
-alerts exclusion-rule-create [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
-[-dev DEVICES] [-a ALERTS]
-```
-
-The following attributes can be used with the alert exclusion rules:
-
-| Attribute | Description |
-|--|--|
-| [-h] | Prints the help information for the command. |
-| -n NAME | The name of the rule being created. |
-| [-ts TIMES] | The time span for which the rule is active. This should be specified as:<br />`xx:yy-xx:yy`<br />You can define more than one time period by using a comma between them. For example: `xx:yy-xx:yy, xx:yy-xx:yy`. |
-| [-dir DIRECTION] | The direction in which the rule is applied. This should be specified as:<br />`both | src | dst` |
-| [-dev DEVICES] | The IP address and the address type of the devices to be excluded by the rule, specified as:<br />`ip-x.x.x.x`<br />`mac-xx:xx:xx:xx:xx:xx`<br />`subnet: x.x.x.x/x` |
-| [-a ALERTS] | The name of the alert that the rule will exclude:<br />`0x00000`<br />`0x000001` |
-
-## Append local alert exclusion rules
-
-You can append local alert exclusion rules by entering the following command in the CLI:
-
-```azurecli-interactive
-alerts exclusion-rule-append [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
-[-dev DEVICES] [-a ALERTS]
-```
-
-The attributes used here are the same as the attributes explained in the Create local alert exclusion rules section. The difference in the usage is that here the attributes are applied on the existing rules.
-
-## Show local alert exclusion rules
-
-Enter the following command to present the existing list of exclusion rules:
-
-```azurecli-interactive
-alerts exclusion-rule-list [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
-[-dev DEVICES] [-a ALERTS]
-```
-
-## Delete local alert exclusion rules
-
-You can delete an existing alert exclusion rule by entering the following command:
-
-```azurecli-interactive
-alerts exclusion-rule-remove [-h] -n NAME [-ts TIMES] [-dir DIRECTION]
-[-dev DEVICES] [-a ALERTS]
-```
-
-The following attribute can be used with the alert exclusion rules:
-
-| Attribute | Description|
-| | - |
-| -n NAME | The name of the rule to be deleted. |
-
-## Sync time from the NTP server
-
-You can enable, or disable a time sync from a specified NTP server.
-
-### Enable NTP sync
-
-Enter the following command to periodically retrieve the time from the specified NTP server:
-
-```azurecli-interactive
-ntp enable IP
-```
-
-The attribute that you can define within the command is the IP address of the NTP server.
-
-### Disable NTP sync
-
-Enter the following command to disable the time sync with the specified NTP server:
-
-```azurecli-interactive
-ntp disable IP
-```
-
-The attribute that you can define within the command is the IP address of the NTP server.
-
-## Network configuration
-
-The following table describes the commands available to configure your network options for Microsoft Defender for IoT:
-
-|Name|Command|Description|
-|--|-|--|
-|Ping|`ping IP`| Ping an address outside the Defender for IoT platform.|
-|Blink|`network blink`| Locate a connection by causing the interface lights to blink. |
-|Reconfigure the network |`network edit-settings`| Enable a change in the network configuration parameters. |
-|Show network settings |`network list`|Displays the network adapter parameters. |
-|Validate the network configuration |`network validate` |Presents the output network settings. <br /> <br />For example: <br /> <br />Current Network Settings: <br /> interface: eth0 <br /> ip: 10.100.100.1 <br />subnet: 255.255.255.0 <br />default gateway: 10.100.100.254 <br />dns: 10.100.100.254 <br />monitor interfaces: eth1|
-|Import a certificate |`certificate import FILE` |Imports the HTTPS certificate. You'll need to specify the full path, which leads to a \*.crt file. |
-|Show the date |`date` |Returns the current date on the host in GMT format. |
-
-## Network capture filter configuration
-
-The `network capture-filter` command allows administrators to eliminate network traffic that doesn't need to be analyzed. You can filter traffic by using an include list, or an exclude list. This command doesn't support the malware detection engine.
-
-```azurecli-interactive
-network capture-filter
-```
-
-After you enter the command, you'll be prompted with the following question:
-
->`Would you like to supply devices and subnet masks you wish to include in the capture filter? [Y/N]:`
-
-Select `Y` to open a nano file where you can add a device, channel, port, and subset according to the following syntax:
-
-| Attribute | Description |
-|--|--|
-| 1.1.1.1 | Includes all of the traffic for this device. |
-| 1.1.1.1,2.2.2.2 | Includes all of the traffic for this channel. |
-| 1.1.1,2.2.2 | Includes all of the traffic for this subnet. |
-
-Separate arguments by dropping a row.
-
-When you include a device, channel, or subnet, the sensor processes all the valid traffic for that argument, including ports and traffic that wouldn't usually be processed.
-
-You'll then be asked the following question:
-
->`Would you like to supply devices and subnet masks you wish to exclude from the capture filter? [Y/N]:`
-
-Select `Y` to open a nano file where you can add a device, channel, port, and subsets according to the following syntax:
-
-| Attribute | Description |
-|--|--|
-| 1.1.1.1 | Excludes all the traffic for this device. |
-| 1.1.1.1,2.2.2.2 | Excludes all the traffic for this channel, meaning all the traffic between two devices. |
-| 1.1.1.1,2.2.2.2,443 | Excludes all the traffic for this channel by port. |
-| 1.1.1 | Excludes all the traffic for this subnet. |
-| 1.1.1,2.2.2 | Excludes all the traffic for between subnets. |
-
-Separate arguments by dropping a row.
-
-When you exclude a device, channel, or subnet, the sensor will exclude all the valid traffic for that argument.
-
-### Ports
-
-Include or exclude UDP and TCP ports for all the traffic.
-
->`502`: single port
-
->`502,443`: both ports
-
->`Enter tcp ports to include (delimited by comma or Enter to skip):`
-
->`Enter udp ports to include (delimited by comma or Enter to skip):`
-
->`Enter tcp ports to exclude (delimited by comma or Enter to skip):`
-
->`Enter udp ports to exclude (delimited by comma or Enter to skip):`
-
-### Components
-
-You're asked the following question:
-
->`In which component do you wish to apply this capture filter?`
-
-Your options are: `all`, `dissector`, `collector`, `statistics-collector`, `rpc-parser`, or `smb-parser`.
-
-In most common use cases, we recommend that you select `all`. Selecting `all` doesn't include the malware detection engine, which isn't supported by this command.
-
-### Custom base capture filter
-
-The base capture filter is the baseline for the components. For example, the filter determines which ports are available to the component.
-
-Select `Y` for all of the following options. All of the filters are added to the baseline after the changes are set. If you make a change, it will overwrite the existing baseline.
-
->`Would you like to supply a custom base capture filter for the dissector component? [Y/N]:`
-
->`Would you like to supply a custom base capture filter for the collector component? [Y/N]:`
-
->`Would you like to supply a custom base capture filter for the statistics-collector component? [Y/N]:`
-
->`Would you like to supply a custom base capture filter for the rpc-parser component? [Y/N]:`
-
->`Would you like to supply a custom base capture filter for the smb-parser component? [Y/N]:`
-
->`Type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/N]:`
-
-If you choose to exclude a subnet such as 1.1.1:
--- `internal` will exclude only that subnet.--- `all-connected` will exclude that subnet and all the traffic to and from that subnet.-
-We recommend that you select `internal`.
+|Name |Connects to |Permissions |
+||||
|**support** | The OT sensor or on-premises management console's `configuration shell` | A powerful administrative account with access to:<br>- All CLI commands<br>- Managing log files<br>- Starting and stopping services<br><br>This user has no filesystem access. |
+|**cyberx** | The OT sensor or on-premises management console's `terminal (root)` | Serves as a root user and has unlimited privileges on the appliance. <br><br>Used only for the following tasks:<br>- Changing default passwords<br>- Troubleshooting<br>- Filesystem access |
+|**cyberx_host** | The OT sensor's host OS `terminal (root)` | Serves as a root user and has unlimited privileges on the appliance host OS.<br><br>Used for: <br>- Network configuration<br>- Application container control <br>- Filesystem access |
> [!NOTE]
-> Your choices are used for all the filters in the tool and are not session dependent. In other words, you can't ever choose `internal` for some filters and `all-connected` for others.
+> We recommend that customers using the Defender for IoT CLI use the *support* user whenever possible. Other CLI users cannot be added.
+
+### Supported users by CLI actions
-### Comments
+The following tables list the activities available by CLI and the privileged users supported for each activity.
-You can view filters in ```/var/cyberx/properties/cybershark.properties```:
+### Appliance maintenance commands
-- **statistics-collector**: `bpf_filter property` in ```/var/cyberx/properties/net.stats.collector.properties```
+|Service area |Users |Actions |
+||||
+|Sensor health | *support*, *cyberx* | [Check OT monitoring services health](cli-ot-sensor.md#check-ot-monitoring-services-health) |
+|Restart and shutdown | *support*, *cyberx*, *cyberx_host* | [Restart an appliance](cli-ot-sensor.md#restart-an-appliance)<br>[Shut down an appliance](cli-ot-sensor.md#shut-down-an-appliance) |
+|Software versions | *support*, *cyberx* | [Show installed software version](cli-ot-sensor.md#show-installed-software-version) <br>[Update software version](update-ot-software.md) |
+|Date and time | *support*, *cyberx*, *cyberx_host* | [Show current system date/time](cli-ot-sensor.md#show-current-system-datetime) |
+|NTP | *support*, *cyberx* | [Turn on NTP time sync](cli-ot-sensor.md#turn-on-ntp-time-sync)<br>[Turn off NTP time sync](cli-ot-sensor.md#turn-off-ntp-time-sync) |
-- **dissector**: `override.capture_filter` property in ```/var/cyberx/properties/cybershark.properties```
+### Backup and restore commands
-- **rpc-parser**: `override.capture_filter` property in ```/var/cyberx/properties/rpc-parser.properties```
+|Service area |Users |Actions |
+||||
+|Backup files | *support*, *cyberx* | [List current backup files](cli-ot-sensor.md#list-current-backup-files) <br>[Start an immediate, unscheduled backup](cli-ot-sensor.md#start-an-immediate-unscheduled-backup) |
+|Restore | *support*, *cyberx* | [Restore data from the most recent backup](cli-ot-sensor.md#restore-data-from-the-most-recent-backup) |
+|Backup disk space | *cyberx* | [Display backup disk space allocation](cli-ot-sensor.md#display-backup-disk-space-allocation) |
-- **smb-parser**: `override.capture_filter` property in ```/var/cyberx/properties/smb-parser.properties```
+### TLS/SSL certificate commands
-- **collector**: `general.bpf_filter` property in ```/var/cyberx/properties/collector.properties```
+|Service area |Users |Actions |
+||||
+|Certificate management | *cyberx* | [Import TLS/SSL certificates to your OT sensor](cli-ot-sensor.md#import-tlsssl-certificates-to-your-ot-sensor)<br>[Restore the default self-signed certificate](cli-ot-sensor.md#restore-the-default-self-signed-certificate) |
-You can restore the default configuration by entering the following code for the cyberx user:
+### Local user management commands
-```azurecli-interactive
-sudo cyberx-xsense-capture-filter -p all -m all-connected
-```
+|Service area |Users |Actions |
+||||
+|Password management | *cyberx*, *cyberx_host* | [Change local user passwords](cli-ot-sensor.md#change-local-user-passwords) |
+| Sign-in configuration| *support*, *cyberx*, *cyberx_host* |[Control user session timeouts](manage-users-sensor.md#control-user-session-timeouts) |
+| Sign-in configuration | *cyberx* | [Define maximum number of failed sign-ins](manage-users-sensor.md#define-maximum-number-of-failed-sign-ins) |
-## Define client and server hosts
+### Network configuration commands
-If Defender for IoT didn't automatically detect the client, and server hosts, enter the following command to set the client and server hosts:
+|Service area |Users |Actions |
+||||
+| Network setting configuration | *cyberx_host* | [Change networking configuration or reassign network interface roles](cli-ot-sensor.md#change-networking-configuration-or-reassign-network-interface-roles) |
+|Network setting configuration | *support* | [Validate and show network interface configuration](cli-ot-sensor.md#validate-and-show-network-interface-configuration) |
+|Network connectivity | *support*, *cyberx* | [Check network connectivity from the OT sensor](cli-ot-sensor.md#check-network-connectivity-from-the-ot-sensor) |
+|Network connectivity | *cyberx* | [Check network interface current load](cli-ot-sensor.md#check-network-interface-current-load) <br>[Check internet connection](cli-ot-sensor.md#check-internet-connection) |
+|Network bandwidth limit | *cyberx* | [Set bandwidth limit for the management network interface](cli-ot-sensor.md#set-bandwidth-limit-for-the-management-network-interface) |
+|Physical interfaces management | *support* | [Locate a physical port by blinking interface lights](cli-ot-sensor.md#locate-a-physical-port-by-blinking-interface-lights) |
+|Physical interfaces management | *support*, *cyberx* | [List connected physical interfaces](cli-ot-sensor.md#list-connected-physical-interfaces) |
-```azurecli-interactive
-directions [-h] [--identifier IDENTIFIER] [--port PORT] [--remove] [--add]
-[--tcp] [--udp]
-```
+### Traffic capture filter commands
-You can use the following attributes with the `directions` command:
+|Service area |Users |Actions |
+||||
+| Capture filter management | *support*, *cyberx* | [Create a basic filter for all components](cli-ot-sensor.md#create-a-basic-filter-for-all-components)<br>[Create an advanced filter for specific components](cli-ot-sensor.md#create-an-advanced-filter-for-specific-components) <br>[List current capture filters for specific components](cli-ot-sensor.md#list-current-capture-filters-for-specific-components) <br> [Reset all capture filters](cli-ot-sensor.md#reset-all-capture-filters) |
-| Attribute | Description |
-|--|--|
-| [-h] | Prints help information for the command. |
-| [--identifier IDENTIFIER] | The server identifier. |
-| [--port PORT] | The server port. |
-| [--remove] | Removes a client or server host from the list. |
-| [--add] | Adds a client or server host to the list. |
-| [--tcp] | Use TCP when communicating with this host. |
-| [--udp] | Use UDP when communicating with this host. |
+### Alert commands
-## System actions
-The following table describes the commands available to perform various system actions within Defender for IoT:
+|Service area |Users |Actions |
+||||
+|Alert functionality testing | *cyberx* | [Trigger a test alert](cli-ot-sensor.md#trigger-a-test-alert) |
+| Alert exclusion rules | *support*, *cyberx* | [Show current alert exclusion rules](cli-ot-sensor.md#show-current-alert-exclusion-rules) <br>[Create a new alert exclusion rule](cli-ot-sensor.md#create-a-new-alert-exclusion-rule)<br>[Modify an alert exclusion rule](cli-ot-sensor.md#modify-an-alert-exclusion-rule)<br>[Delete an alert exclusion rule](cli-ot-sensor.md#delete-an-alert-exclusion-rule)
-|Name|Code|Description|
-|-|-|--|
-|Show the date|`date`|Returns the current date on the host in GMT format.|
-|Reboot the host|`system reboot`|Reboots the host device.|
-|Shut down the host|`system shutdown`|Shuts down the host.|
-|Back up the system|`system backup`|Initiates an immediate backup (an unscheduled backup).|
-|Restore the system from a backup|`system restore`|Restores from the most recent backup.|
-|List the backup files|`system backup-list`|Lists the available backup files.|
-|Display the status of all Defender for IoT platform services|`system sanity`|Checks the performance of the system by listing the current status of all Defender for IoT platform services.|
-|Show the software version|`system version`|Displays the version of the software currently running on the system.|
-## Deploy SSL and TLS certificates to appliances
+## Defender for IoT CLI access
-Enter the following command to import SSL and TLS enterprise certificates into the CLI:
+To access the Defender for IoT CLI, sign in to your OT or Enterprise IoT sensor or your on-premises management console using a terminal emulator and SSH.
-```azurecli-interactive
-cyberx-xsense-certificate-import
-```
-To use the tool, you need to upload the certificate files to the device. You can do this through tools such as WinSCP or Wget.
+- **On a Windows system**, use PuTTY or another similar application.
+- **On a Mac system**, use Terminal.
+- **On a virtual appliance**, access the CLI via SSH, the vSphere client, or Hyper-V Manager. Connect to the virtual appliance's management interface IP address via port 22.
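For example, a minimal connection from a Linux or macOS terminal might look like the following; `<sensor-management-ip>` is a placeholder for your appliance's management interface IP address:

```bash
# Connect to the appliance over SSH (port 22) as the support user
ssh support@<sensor-management-ip>
```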
-The command supports the following input flags:
+Each CLI command on an OT network sensor or on-premises management console is supported by a different set of privileged users, as noted in the relevant CLI descriptions. Make sure you sign in as the user required for the command you want to run. For more information, see [Privileged user access for OT monitoring](#privileged-user-access-for-ot-monitoring).
-| Flag | Description |
-|--|--|
-| -h | Shows the command-line help syntax. |
-| --crt | The path to the certificate file (.crt extension). |
-| --key | The \*.key file. Key length should be a minimum of 2,048 bits. |
-| --chain | Path to the certificate chain file (optional). |
-| --pass | Passphrase used to encrypt the certificate (optional). |
-| --passphrase-set | The default is **False**, **unused**. <br />Set to **True** to use the previous passphrase supplied with the previous certificate (optional). |
+## Sign out of the CLI
-When you're using the tool:
+Make sure to properly sign out of the CLI when you're done using it. You're automatically signed out after an inactive period of 300 seconds.
-- Verify that the certificate files are readable on the appliance.
+To sign out manually on an OT sensor or on-premises management console, run one of the following commands:
-- Confirm with IT the appliance domain (as it appears in the certificate) with your DNS server and the corresponding IP address.
+|User |Command |
+|||
+|**support** | `logout` |
+|**cyberx** | `cyberx-xsense-logout` |
+|**cyberx_host** | `logout` |
-## Sign out of a support shell
-You're automatically signed out of an SSH session after an inactive period of 300 seconds.
+## Next steps
-To sign out of your session manually, enter the following command:
+> [!div class="nextstepaction"]
+> [Manage an OT sensor from the CLI](cli-ot-sensor.md)
-```azurecli-interactive
-logout
-```
+> [!div class="nextstepaction"]
+> [On-premises users and roles for OT monitoring](roles-on-premises.md)
-## Next steps
-For more information, see [Defender for IoT API sensor and management console APIs](references-work-with-defender-for-iot-apis.md).
+You can also control and monitor your cloud connected sensors from the Defender for IoT **Sites and sensors** page. For more information, see [Manage sensors with Defender for IoT in the Azure portal](../how-to-manage-sensors-on-the-cloud.md).
defender-for-iot Roles On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-on-premises.md
The following table describes each default privileged user in detail:
|**support** | The sensor or on-premises management console's `sensor_app` container | Serves as a locked-down, user shell for dedicated CLI tools.<br><br>Has no filesystem access.<br><br>Can access only dedicated CLI commands for controlling OT monitoring. <br><br>Can recover or change passwords for the *support* user, and any user with the **Admin**, **Security Analyst**, and **Read-only** roles. | |**cyberx_host** | The on-premises management console's host OS | Serves as a root user in the on-premises management console's host OS.<br><br>Used for support scenarios with containers and filesystem access. |
+Supported CLI commands and command syntax differ for each user. For more information, see [Defender for IoT CLI users and access](references-work-with-defender-for-iot-cli-commands.md) and [CLI command reference from OT network sensors](cli-ot-sensor.md).
+ ## On-premises user roles The following roles are available on OT network sensors and on-premises management consoles:
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
You can update software on your sensors individually, directly from each sensor
> For more information, see [Update an on-premises management console](#update-an-on-premises-management-console). >
-# [From each sensor](#tab/sensor)
+# [From an OT sensor UI](#tab/sensor)
-This procedure describes how to manually download the new sensor software version and then run your update directly on the sensor console.
+This procedure describes how to manually download the new sensor software version and then run your update directly on the sensor console's UI.
-**To update sensor software directly from the sensor console**:
+**To update sensor software directly from the sensor UI**:
1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Updates**.
The sensor update process won't succeed if you don't update the on-premises mana
If updates fail, a retry option appears with an option to download the failure log. Retry the update process or open a support ticket with the downloaded log files for assistance.
+# [From an OT sensor via CLI](#tab/cli)
+
+This procedure describes how to update OT sensor software via the CLI, directly on the OT sensor.
+
+**To update sensor software directly from the sensor via CLI**:
+
+1. Use SFTP or SCP to copy the update file to the sensor machine.
+
+1. Sign in to the sensor as the `cyberx_host` user and copy the update file to the `/opt/sensor/logs/` directory.
+
+1. Sign in to the sensor as the `cyberx` user and copy the file to a location accessible for the update process. For example:
+
+ ```bash
+ cd /var/host-logs/
+ mv <filename> /var/cyberx/media/device-info/update_agent.tar
+ ```
+
+1. Start running the software update. Run:
+
+ ```bash
+ curl -X POST http://127.0.0.1:9090/core/api/v1/configuration/agent
+ ```
+
+1. Verify that the update process has started by checking the `upgrade.log` file. Run:
+
+ ```bash
+ tail -f /var/cyberx/logs/upgrade.log
+ ```
+
+ Output similar to the following appears:
+
+ ```bash
+ 2022-05-23 15:39:00,632 [http-nio-0.0.0.0-9090-exec-2] INFO com.cyberx.infrastructure.common.utils.UpgradeUtils- [32200] Extracting upgrade package from /var/cyberx/media/device-info/update_agent.tar to /var/cyberx/media/device-info/update
+
+ 2022-05-23 15:39:33,180 [http-nio-0.0.0.0-9090-exec-2] INFO com.cyberx.infrastructure.common.utils.UpgradeUtils- [32200] Prepared upgrade, scheduling in 30 seconds
+
+ 2022-05-23 15:40:03,181 [pool-34-thread-1] INFO com.cyberx.infrastructure.common.utils.UpgradeUtils- [32200] Send upgrade request to os-manager. file location: /var/cyberx/media/device-info/update
+ ```
+
+ At some point during the update process, your SSH connection will disconnect. This is a good indication that your update is running.
+
+1. Continue to monitor the update process by checking the `install.log` file.
+
+   Sign in to the sensor as the `cyberx_host` user and run:
+
+ ```bash
+ tail -f /opt/sensor/logs/install.log
+ ```
> [!NOTE]
dns Private Dns Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-migration-guide.md
If you're using automation including templates, PowerShell scripts or custom cod
* [Azure DNS private zones REST API](/rest/api/dns/privatedns/privatezones) * [Azure DNS private zones CLI](/cli/azure/network/private-dns/link/vnet) * [Azure DNS private zones PowerShell](/powershell/module/az.privatedns/)
-* [Azure DNS private zones SDK](/dotnet/api/overview/azure/privatedns/management)
+* [Azure DNS private zones SDK](/dotnet/api/overview/azure/resourcemanager.privatedns-readme)
## Need further help
Create a support ticket if you need further help with the migration process or b
* Read about some common [private zone scenarios](./private-dns-scenarios.md) that can be realized with private zones in Azure DNS. * For common questions and answers about private zones in Azure DNS, including specific behavior you can expect for certain kinds of operations, see [Private DNS FAQ](./dns-faq-private.yml). * Learn about DNS zones and records by visiting [DNS zones and records overview](dns-zones-records.md).
-* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
energy-data-services Concepts Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-manifest-ingestion.md
# Manifest-based ingestion concepts
+Manifest-based file ingestion provides end users and systems with a robust mechanism for loading metadata about datasets into a Microsoft Energy Data Services Preview instance. The system indexes this metadata, which allows the end user to search the datasets.
-Manifest-based file ingestion provides end-users and systems a robust mechanism for loading metadata in Microsoft Energy Data Services Preview instance. A manifest is a JSON document that has a pre-determined structure for capturing entities that conform to the [OSDU&trade;](https://osduforum.org/) Well-known Schema (WKS) definitions.
-
-Manifest-based file ingestion doesn't understand the contents of the file or doesn't parse the file. It just creates a metadata record for the file and makes it searchable. It doesn't infer or does anything on top of the file.
+Manifest-based file ingestion is an opaque ingestion process that doesn't parse or understand the file contents. It creates a metadata record based on the manifest and makes the record searchable.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-## Understanding the manifest
+## What is a Manifest?
+A manifest is a JSON document that has a pre-determined structure for capturing entities defined by 'kind', that is, entities registered as schemas with the Schema service - [Well-known Schema (WKS) definitions](https://community.opengroup.org/osdu/dat#manifest-schemas).
+
+You can find an example manifest JSON document [here](https://community.opengroup.org/osdu/data/data-definitions/-/tree/master/Examples/manifest#manifest-example).
-The manifest schema has containers for the following entities
+The manifest schema has containers for the following OSDU&trade; [Group types](https://community.opengroup.org/osdu/dat#2-group-type):
* **ReferenceData** (*zero or more*) - A set of permissible values to be used by other (master or transaction) data fields. Examples include *Unit of Measure (feet)*, *Currency*, etc. * **MasterData** (*zero or more*) - A single source of basic business data used across multiple systems, applications, and/or process. Examples include *Wells* and *Wellbores*
The manifest schema has containers for the following entities
* **WorkProductComponents (WPC)** (*zero or more - must be present if loading datasets*) - A typed, smallest, independently usable unit of business data content transferred as part of a Work Product (a collection of things ingested together). Each Work Product Component (WPC) typically uses reference data, belongs to some master data, and maintains a reference to datasets. Example: *Well Logs, Faults, Documents* * **Datasets** (*zero or more - must be present if loading WorkProduct and WorkProductComponent records*) - Each Work Product Component (WPC) consists of one or more data containers known as datasets.
-## Manifest-based file ingestion workflow steps
-
-1. A manifest is submitted to the Workflow Service using the manifest ingestion workflow name (for example, "Osdu_ingest")
-2. Once the request is validated and the user authorization is complete, the workflow service will load and initiate the manifest ingestion workflow.
-3. The first step is to check the syntax of the manifest.
- 1. Retrieve the **kind** property of the manifest
- 2. Retrieve the **schema definition** from the Schema service for the manifest kind
- 3. Validate that the manifest is syntactically correct according to the manifest schema definitions.
- 4. For each Reference data, Master data, Work Product, Work Product Component, and Dataset, do the following activities:
- 1. Retrieve the **kind** property.
- 2. Retrieve the **schema definition** from the Schema service for the kind
- 3. Validate that the entity is syntactically correct according to the schema definition and submits the manifest to the Workflow Service
- 4. Validate that mandatory attributes exist in the manifest
- 5. Validate that all property values follow the patterns defined in the schemas
- 6. Validate that no extra properties are present in the manifest
- 5. Any entity that doesn't pass the syntax check is rejected
-4. The content is checked for a series of validation rules
- 1. Validation of referential integrity between Work Product Components and Datasets
- 1. There are no orphan Datasets defined in the WP (each Dataset belongs to a WPC)
- 2. Each Dataset defined in the WPC is described in the WP Dataset block
- 3. Each WPC is linked to at least
- 2. Validation that referenced parent data exists
- 3. Validation that Dataset file paths aren't empty
-5. Process the contents into storage
- 1. Write each valid entity into the data platform via the Storage API
- 2. Capture the ID generated to update surrogate-keys where surrogate-keys are used
-6. Workflow exits
-
-## Manifest ingestion components
-
-* **Workflow Service** is a wrapper service on top of the Airflow workflow engine, which orchestrates the ingestion workflow. Airflow is the chosen workflow engine by the [OSDU&trade;](https://osduforum.org/) community to orchestrate and run ingestion workflows. Airflow isn't directly exposed to clients, instead its features are accessed through the workflow service.
-* **File Service** is used to upload files, file collections, and other types of source data to the data platform.
-* **Storage Service** is used to save the manifest records into the data platform.
-* **Airflow engine** is the workflow engine that executes DAGs (Directed Acyclic Graphs).
-* **Schema Service** stores schemas used in the data platform. Schemas are being referenced during the Manifest-based file ingestion.
-* **Entitlements Service** manages access groups. This service is used during the ingestion for verification of ingestion permissions. This service is also used during the metadata record retrieval for validation of "read" writes.
+The Manifest data is loaded in a particular sequence:
+1. The 'ReferenceData' array (if populated).
+2. The 'MasterData' array (if populated).
+3. The 'Data' structure is processed last (if populated). Inside the 'Data' property, processing is done in the following order:
+ 1. the 'Datasets' array
+ 2. the 'WorkProductComponents' array
+ 3. the 'WorkProduct'.
+
+Any arrays are ordered. Should there be interdependencies, the dependent items must be placed after their relationship targets; for example, a master-data Well record must be placed in the 'MasterData' array before its Wellbores.
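As an illustrative sketch only (the kind and container contents below are placeholders, not a complete, valid manifest), the top-level layout follows the load order described above:

```json
{
  "kind": "osdu:wks:Manifest:1.0.0",
  "ReferenceData": [],
  "MasterData": [],
  "Data": {
    "Datasets": [],
    "WorkProductComponents": [],
    "WorkProduct": {}
  }
}
```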
+
+## Manifest-based file ingestion workflow
+
+A Microsoft Energy Data Services Preview instance has out-of-the-box support for the Manifest-based file ingestion workflow. The `Osdu_ingest` Airflow DAG is pre-configured in your instance.
+
+### Manifest-based file ingestion workflow components
+The Manifest-based file ingestion workflow consists of the following components:
+* **Workflow Service** - A wrapper service running on top of the Airflow workflow engine.
+* **Airflow engine** - A workflow orchestration engine that executes workflows registered as DAGs (Directed Acyclic Graphs). Airflow is the chosen workflow engine by the [OSDU&trade;](https://osduforum.org/) community to orchestrate and run ingestion workflows. Airflow isn't directly exposed, instead its features are accessed through the workflow service.
+* **Storage Service** - A service that is used to save the manifest metadata records into the data platform.
+* **Schema Service** - A service that manages OSDU&trade; defined schemas in the data platform. Schemas are being referenced during the Manifest-based file ingestion.
+* **Entitlements Service** - A service that manages access groups. This service is used during the ingestion for verification of ingestion permissions. This service is also used during the metadata record retrieval for validation of "read" rights.
+* **Legal Service** - A service that validates compliance through legal tags.
* **Search Service** is used to perform referential integrity check during the manifest ingestion process.
-## Manifest ingestion workflow sequence
+### Prerequisites
+Before running the Manifest-based file ingestion workflow, customers must ensure that the user accounts running the workflow have access to the core services (Search, Storage, Schema, Entitlements, and Legal) and the Workflow service (see [Entitlement roles](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md) for details). As part of Microsoft Energy Data Services instance provisioning, the OSDU&trade; standard schemas and associated reference data are pre-loaded. Customers must ensure that the user account used for ingesting the manifests is included in the appropriate owners and viewers ACLs, and that manifests are configured with correct legal tags, owners and viewers ACLs, reference data, and so on.
+
+### Workflow sequence
+The following illustration shows the Manifest-based file ingestion workflow:
+ :::image type="content" source="media/concepts-manifest-ingestion/concept-manifest-ingestion-sequence.png" alt-text="Screenshot of the manifest ingestion sequence.":::
+
+A user submits a manifest to the `Workflow Service` using the manifest ingestion workflow name ("Osdu_ingest"). If the request is valid and the user is authorized to run the workflow, the workflow service loads the manifest and initiates the manifest ingestion workflow.
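For orientation, a run can be triggered with a plain HTTP call. The sketch below is assumption-laden: the instance URL, token, partition ID, and the Workflow Service route and payload shape are placeholders to verify against your instance's API reference, not values from this article:

```python
import requests

# All values below are placeholders (hypothetical instance and credentials).
instance = "https://<your-instance>.energy.azure.com"
headers = {
    "Authorization": "Bearer <access-token>",
    "data-partition-id": "<partition-id>",
}

# Assumed Workflow Service route for triggering the "Osdu_ingest" workflow.
response = requests.post(
    f"{instance}/api/workflow/v1/workflow/Osdu_ingest/workflowRun",
    headers=headers,
    json={"executionContext": {"manifest": {}}},  # manifest payload goes here
)
print(response.status_code, response.json())
```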
+
+The workflow service runs a series of manifest `syntax validation` checks, such as validating the manifest structure and attributes against the defined schema and checking for mandatory schema attributes. The system then performs `referential integrity validation` between Work Product Components and Datasets; for example, whether the referenced parent data exists.
+Once the validations are successful, the system processes the content into storage by writing each valid entity into the data platform using the Storage Service API.
OSDU&trade; is a trademark of The Open Group.
firewall Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ip-groups.md
IP Groups are available in all public cloud regions.
## IP address limits
-You can have a maximum of 200 IP Groups per firewall with a maximum 5000 individual IP addresses or IP prefixes per each IP Group.
+You can have a maximum of 200 IP Groups per firewall with a maximum of 5,000 individual IP addresses or IP prefixes per each IP Group.
## Related Azure PowerShell cmdlets
hdinsight Apache Hadoop On Premises Migration Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-storage.md
description: Learn storage best practices for migrating on-premises Hadoop clust
Previously updated : 12/10/2019 Last updated : 12/31/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight
hdinsight Hbase Troubleshoot Storage Exception Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-storage-exception-reset.md
Title: Storage exception after connection reset in Azure HDInsight
description: Storage exception after connection reset in Azure HDInsight Previously updated : 08/08/2019 Last updated : 12/31/2022 # Scenario: Storage exception after connection reset in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Troubleshoot Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/troubleshoot-rest-api.md
Title: REST API to query Apache HBase in Azure HDInsight
description: This article describes troubleshooting steps when interacting with Apache HBase components on Azure HDInsight clusters. Previously updated : 04/08/2020 Last updated : 12/31/2022 # REST API to query Apache HBase in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Hdinsight Restrict Public Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-public-connectivity.md
Title: Restrict public connectivity in Azure HDInsight
description: Learn how to remove access to all outbound public IP addresses. Previously updated : 09/20/2021 Last updated : 12/31/2022 # Restrict public connectivity in Azure HDInsight
iot-hub Iot Hub Python Python C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-c2d.md
ms.devlang: python Previously updated : 04/09/2020 Last updated : 01/02/2023
In this section, you create a Python console app that sends cloud-to-device mess
## Run the applications
-You are now ready to run the applications.
+You're now ready to run the applications.
1. At the command prompt in your working directory, run the following command to listen for cloud-to-device messages:
machine-learning How To Deploy Mlflow Model Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-model-spark-jobs.md
+
+ Title: Deploy and run MLflow models in Spark jobs
+
+description: Learn to deploy your MLflow model in Spark jobs to perform inference.
++++++ Last updated : 12/30/2022++++
+# Deploy and run MLflow models in Spark jobs
+
+In this article, learn how to deploy and run your [MLflow](https://www.mlflow.org) model in Spark jobs to perform inference over large amounts of data or as part of data wrangling jobs.
++
+## About this example
+
+This example shows how you can deploy an MLflow model registered in Azure Machine Learning to Spark jobs running in [managed Spark clusters (preview)](how-to-submit-spark-jobs.md), Azure Databricks, or Azure Synapse Analytics, to perform inference over large amounts of data.
+
+It uses an MLflow model based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables (age, sex, body mass index, average blood pressure, and six blood serum measurements) obtained from n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline (regression).
+
+The model has been trained using a `scikit-learn` regressor, and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `sdk/python/using-mlflow/deploy`:
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/sdk/python/using-mlflow/deploy
+```
+
+## Prerequisites
+
+Before following the steps in this article, make sure you have the following prerequisites:
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- You must have an MLflow model registered in your workspace. In particular, this example will register a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).
+- Install the MLflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow `azureml-mlflow`.
+
+ ```bash
+ pip install mlflow azureml-mlflow
+ ```
+
+- If you aren't running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you're working in. See [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
++
+### Connect to your workspace
+
+First, let's connect to the Azure Machine Learning workspace where your model is registered.
+
+# [Azure Machine Learning compute](#tab/aml)
+
+Tracking is already configured for you. Your default credentials will also be used when working with MLflow.
+
+# [Remote compute](#tab/remote)
+
+**Configure tracking URI**
+
+You need to configure MLflow to point to the Azure Machine Learning MLflow tracking URI. The tracking URI has the protocol `azureml://`. You can use MLflow to configure it.
+
+```python
+azureml_tracking_uri = "<AZUREML_TRACKING_URI>"
+mlflow.set_tracking_uri(azureml_tracking_uri)
+```
+
+There are multiple ways to get the Azure Machine Learning MLflow tracking URI. Refer to [Set up tracking environment](how-to-use-mlflow-cli-runs.md) to see all the alternatives.
+
+> [!TIP]
+> When working on shared environments, such as an Azure Databricks cluster or an Azure Synapse Analytics cluster, it's useful to set the environment variable `MLFLOW_TRACKING_URI` to configure the MLflow tracking URI to the desired target for all the sessions running in the cluster, rather than doing it on a per-session basis.
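For a per-session equivalent (cluster-level configuration is preferable on shared compute, as the tip above notes), a minimal sketch reusing the `azureml_tracking_uri` value from the earlier snippet:

```python
import os

# Per-session equivalent of the cluster-level setting; assumes
# azureml_tracking_uri was defined as shown earlier in this article.
os.environ["MLFLOW_TRACKING_URI"] = azureml_tracking_uri
```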
+
+**Configure authentication**
+
+Once the tracking is configured, you'll also need to configure how to authenticate to the associated workspace. For interactive jobs where there's a user connected to the session, you can rely on Interactive Authentication.
+
+For those scenarios where unattended execution is required, you'll have to configure a service principal to communicate with Azure Machine Learning.
+
+```python
+import os
+
+os.environ["AZURE_TENANT_ID"] = "<AZURE_TENANT_ID>"
+os.environ["AZURE_CLIENT_ID"] = "<AZURE_CLIENT_ID>"
+os.environ["AZURE_CLIENT_SECRET"] = "<AZURE_CLIENT_SECRET>"
+```
+
+> [!TIP]
+> When working on shared environments, it's better to configure these environment variables for the entire cluster. As a best practice, manage them as secrets in an instance of Azure Key Vault. For instance, in Azure Databricks, you can use secrets to set these variables as follows: `AZURE_CLIENT_SECRET={{secrets/<scope-name>/<secret-name>}}`. See [Reference a secret in an environment variable](https://learn.microsoft.com/azure/databricks/security/secrets/secrets#reference-a-secret-in-an-environment-variable) for how to do it in Azure Databricks, or refer to similar documentation for your platform.
+++
+### Registering the model
+
+We need a model registered in the Azure Machine Learning registry to perform inference. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
+
+```python
+from mlflow.tracking import MlflowClient
+
+# Client used to talk to the workspace's model registry
+mlflow_client = MlflowClient()
+
+model_name = 'sklearn-diabetes'
+model_local_path = "sklearn-diabetes/model"
+
+registered_model = mlflow_client.create_model_version(
+    name=model_name, source=f"file://{model_local_path}"
+)
+version = registered_model.version
+```
+
+Alternatively, if your model was logged inside of a run, you can register it directly.
+
+> [!TIP]
+> To register the model, you need to know the location where the model has been stored. If you use the `autolog` feature of MLflow, the path depends on the type and framework of the model being used. We recommend checking the job's outputs to identify the name of this folder: look for the folder that contains a file named `MLmodel`. If you log your models manually using `log_model`, then the path is the argument you pass to that method. For example, if you log the model using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is `classifier`.
+
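One way to locate that folder is to list the run's artifacts and look for the directory containing the `MLmodel` file. The sketch below uses the standard MLflow client and assumes `RUN_ID` holds the run in question:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Print every artifact directory in the run that contains an 'MLmodel' file;
# such directories are valid model paths for registration.
for artifact in client.list_artifacts(RUN_ID):
    if artifact.is_dir:
        children = client.list_artifacts(RUN_ID, artifact.path)
        if any(child.path.endswith("MLmodel") for child in children):
            print(f"Model folder: {artifact.path}")
```

With the path in hand, the registration call follows: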
+```python
+model_name = 'sklearn-diabetes'
+
+registered_model = mlflow_client.create_model_version(
+    name=model_name, source=f"runs:/{RUN_ID}/{MODEL_PATH}"
+)
+version = registered_model.version
+```
+
+> [!NOTE]
+> The path `MODEL_PATH` is the location where the model has been stored in the run.
+++
+### Get input data to score
+
+We need some input data to run our jobs on. In this example, we download sample data from the internet and place it in a shared storage used by the Spark cluster.
+
+```python
+import urllib
+
+urllib.request.urlretrieve("https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv", "/tmp/data")
+```
+
+Move the data to a mounted storage account available to the entire cluster.
+
+```python
+dbutils.fs.mv("file:/tmp/data", "dbfs:/")
+```
+
+> [!IMPORTANT]
+> The previous code uses `dbutils`, which is a tool available in Azure Databricks cluster. Use the appropriate tool depending on the platform you are using.
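As a sketch of the same move on a different platform: on Azure Synapse Analytics, `mssparkutils` typically plays the role that `dbutils` plays on Azure Databricks. The destination path below is a hypothetical mounted location, not one from this article:

```python
# Azure Synapse Analytics equivalent of the dbutils move above; the
# destination URI is a placeholder for your own storage account.
from notebookutils import mssparkutils

mssparkutils.fs.mv("file:/tmp/data", "abfss://<container>@<account>.dfs.core.windows.net/data")
```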
+
+The input data is then placed in the following folder:
+
+```python
+input_data_path = "dbfs:/data"
+```
+
+## Run the model in Spark clusters
+
+The following section explains how to run MLflow models registered in Azure Machine Learning in Spark jobs.
+
+1. Configure the model URI. The following URI refers to the latest version of a model named `heart-classifier`.
+
+ ```python
+ model_uri = "models:/heart-classifier/latest"
+ ```
+
+1. Load the model as a UDF. A user-defined function (UDF) allows custom logic to be reused in the user environment.
+
+ ```python
+ predict_function = mlflow.pyfunc.spark_udf(spark, model_uri, env_manager="local")
+ ```
+
+ > [!TIP]
+ > Use the argument `result_type` to control the type returned by the `predict()` function.
+
+1. Read the data you want to score:
+
+ ```python
+ df = spark.read.option("header", "true").option("inferSchema", "true").csv(input_data_path).drop("target")
+ ```
+
+   In our case, the input data is in `CSV` format and placed in the folder `dbfs:/data/`. We're also dropping the column `target`, as this dataset contains the target variable to predict. In production scenarios, your data won't have this column.
+
+1. Run the function `predict_function` and place the predictions in a new column. In this case, we're placing the predictions in the column `predictions`.
+
+    ```python
+    df = df.withColumn("predictions", predict_function(*df.columns))
+    ```
+
+ > [!TIP]
+    > The `predict_function` receives the required columns as arguments. In our case, all the columns of the data frame are expected by the model, and hence `df.columns` is used. If your model requires a subset of the columns, you can introduce them manually. If your model has a signature, the input types need to be compatible with the expected types.
++
+## Run the model in a standalone Spark job in Azure Machine Learning
+
+ Azure Machine Learning supports creation of a standalone Spark job, and creation of a reusable Spark component that can be used in [Azure Machine Learning pipelines](concept-ml-pipelines.md). In this example, we deploy a scoring job that runs as an Azure Machine Learning standalone Spark job and runs an MLflow model to perform inference.
+
+> [!NOTE]
+> To learn more about Spark jobs in Azure Machine Learning, see [Submit Spark jobs in Azure Machine Learning (preview)](how-to-submit-spark-jobs.md).
+
+1. A Spark job requires a Python script that takes arguments. Create a scoring script:
+
+ __score.py__
+
+    ```python
+    import argparse
+
+    import mlflow
+    from pyspark.sql import SparkSession
+
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--model")
+    parser.add_argument("--input_data")
+    parser.add_argument("--scored_data")
+
+    args = parser.parse_args()
+    print(args.model)
+    print(args.input_data)
+
+    # Get (or create) the Spark session for this job
+    spark = SparkSession.builder.getOrCreate()
+
+    # Load the model as a UDF
+    predict_function = mlflow.pyfunc.spark_udf(spark, args.model, env_manager="local")
+
+    # Read the data you want to score
+    df = spark.read.option("header", "true").option("inferSchema", "true").csv(args.input_data).drop("target")
+
+    # Run the function `predict_function` and place the predictions in a new column
+    scored_data = df.withColumn("predictions", predict_function(*df.columns))
+
+    # Save the predictions
+    scored_data.write.option("header", "true").csv(args.scored_data)
+    ```
+
+   The script shown above takes three arguments: `--model`, `--input_data`, and `--scored_data`. The first two are inputs and represent the model to run and the input data; the last one is an output, the folder where predictions are placed.
+
+1. Create a job definition:
+
+ __mlflow-score-spark-job.yml__
+
+ ```yml
+ $schema: http://azureml/sdk-2-0/SparkJob.json
+ type: spark
+
+ code: ./src
+ entry:
+ file: score.py
+
+ conf:
+ spark.driver.cores: 1
+ spark.driver.memory: 2g
+ spark.executor.cores: 2
+ spark.executor.memory: 2g
+ spark.executor.instances: 2
+
+ inputs:
+ model:
+ type: mlflow_model
+ path: azureml:heart-classifier@latest
+ input_data:
+ type: uri_file
+ path: https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv
+ mode: direct
+
+ outputs:
+ scored_data:
+ type: uri_folder
+
+ args: >-
+ --model ${{inputs.model}}
+ --input_data ${{inputs.input_data}}
+ --scored_data ${{outputs.scored_data}}
+
+ identity:
+ type: user_identity
+
+ resources:
+ instance_type: standard_e4s_v3
+ runtime_version: "3.2"
+ ```
+
+ > [!TIP]
+    > To use an attached Synapse Spark pool, define the `compute` property in the sample YAML specification file shown above instead of the `resources` property.
+
+1. The YAML file shown above can be used with the `az ml job create` command, using the `--file` parameter, to create a standalone Spark job as shown:
+
+ ```azurecli
+ az ml job create -f mlflow-score-spark-job.yml
+ ```
+
+## Next steps
+
+- [Deploy MLflow models to batch endpoints](how-to-mlflow-batch.md)
+- [Deploy MLflow models to online endpoint](how-to-deploy-mlflow-models-online-endpoints.md)
+- [Using MLflow models for no-code deployment](how-to-log-mlflow-models.md)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Last updated 03/31/2022 # Deploy MLflow models to online endpoints
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli/endpoints/online ```
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in the following notebook. In the cloned repository, open [mlflow_sdk_online_endpoints.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb).
+ ## Prerequisites
+Before following the steps in this article, make sure you have the following prerequisites:
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role allowing Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+- You must have an MLflow model registered in your workspace. In particular, this example will register a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).
-* You must have a MLflow model registered in your workspace. Particularly, this example will register a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).
+Additionally, you will need to:
+
+# [Azure CLI](#tab/cli)
+
+- Install the Azure CLI and the ml extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+# [Python (Azure ML SDK)](#tab/sdk)
+
+- Install the Azure Machine Learning SDK for Python
+
+ ```bash
+ pip install azure-ai-ml
+ ```
+
+# [Python (MLflow SDK)](#tab/mlflow)
+
+- Install the MLflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow `azureml-mlflow`.
+
+ ```bash
+ pip install mlflow azureml-mlflow
+ ```
+
+- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you're working in. See [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
+
+# [Studio](#tab/studio)
+
+There are no additional prerequisites when working in Azure Machine Learning studio.
++ ### Connect to your workspace
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location> ```
-# [Python](#tab/sdk)
+# [Python (Azure ML SDK)](#tab/sdk)
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
The workspace is the top-level resource for Azure Machine Learning, providing a
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace) ```
+# [Python (MLflow SDK)](#tab/mlflow)
+
+1. Import the required libraries
+
+ ```python
+ import json
+ import mlflow
+ import requests
+ import pandas as pd
+ from mlflow.deployments import get_deploy_client
+ ```
+
+1. Configure the deployment client
+
+ ```python
+ deployment_client = get_deploy_client(mlflow.get_tracking_uri())
+ ```
+ # [Studio](#tab/studio) Navigate to [Azure Machine Learning studio](https://ml.azure.com).
MODEL_NAME='sklearn-diabetes'
az ml model create --name $MODEL_NAME --type "mlflow_model" --path "sklearn-diabetes/model" ```
-# [Python](#tab/sdk)
+# [Python (Azure ML SDK)](#tab/sdk)
```python model_name = 'sklearn-diabetes'
model = ml_client.models.create_or_update(
) ```
+# [Python (MLflow SDK)](#tab/mlflow)
+
+```python
+from mlflow.tracking import MlflowClient
+
+# Client used to talk to the workspace's model registry
+mlflow_client = MlflowClient()
+
+model_name = 'sklearn-diabetes'
+model_local_path = "sklearn-diabetes/model"
+
+registered_model = mlflow_client.create_model_version(
+    name=model_name, source=f"file://{model_local_path}"
+)
+version = registered_model.version
+```
+ # [Studio](#tab/studio) To create a model in Azure Machine Learning, open the Models page in Azure Machine Learning. Select **Register model** and select where your model is located. Fill out the required fields, and then select __Register__.
az ml model create --name $MODEL_NAME --path azureml://jobs/$RUN_ID/outputs/arti
> [!NOTE] > The path `$MODEL_PATH` is the location where the model has been stored in the run.
-# [Python](#tab/sdk)
+# [Python (Azure ML SDK)](#tab/sdk)
```python model_name = 'sklearn-diabetes'
ml_client.models.create_or_update(
> [!NOTE] > The path `MODEL_PATH` is the location where the model has been stored in the run.
+# [Python (MLflow SDK)](#tab/mlflow)
+
+```python
+model_name = 'sklearn-diabetes'
+
+registered_model = mlflow_client.create_model_version(
+    name=model_name, source=f"runs:/{RUN_ID}/{MODEL_PATH}"
+)
+version = registered_model.version
+```
+
+> [!NOTE]
+> The path `MODEL_PATH` is the location where the model has been stored in the run.
+ # [Studio](#tab/studio) :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-register-model-output.gif" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-register-model-output.gif" alt-text="Screenshot showing how to download Outputs and logs from Experimentation run":::
ml_client.models.create_or_update(
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml":::
- # [Python](#tab/sdk)
+ # [Python (Azure ML SDK)](#tab/sdk)
```python endpoint_name = "sklearn-diabetes-" + datetime.datetime.now().strftime("%m%d%H%M%f")
ml_client.models.create_or_update(
) ```
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ We can configure the properties of this endpoint using a configuration file. In this case, we are configuring the authentication mode of the endpoint to be "key".
+
+ ```python
+ endpoint_config = {
+ "auth_mode": "key",
+ "identity": {
+ "type": "system_assigned"
+ }
+ }
+ ```
+
+ Let's write this configuration into a `JSON` file:
+
+ ```python
+ endpoint_config_path = "endpoint_config.json"
+ with open(endpoint_config_path, "w") as outfile:
+ outfile.write(json.dumps(endpoint_config))
+ ```
+ # [Studio](#tab/studio) *You will perform this step in the deployment stage.*
ml_client.models.create_or_update(
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
- # [Python](#tab/sdk)
+ # [Python (Azure ML SDK)](#tab/sdk)
```python ml_client.begin_create_or_update(endpoint) ```
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ ```python
+ endpoint = deployment_client.create_endpoint(
+ name=endpoint_name,
+ config={"endpoint-config-file": endpoint_config_path},
+ )
+ ```
+ # [Studio](#tab/studio) *You will perform this step in the deployment stage.*
ml_client.models.create_or_update(
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
- # [Python](#tab/sdk)
+ # [Python (Azure ML SDK)](#tab/sdk)
```python blue_deployment = ManagedOnlineDeployment(
ml_client.models.create_or_update(
) ```
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ ```python
+ blue_deployment_name = "blue"
+ ```
+
+ To configure the hardware requirements of your deployment, you need to create a JSON file with the desired configuration:
+
+ ```python
+ deploy_config = {
+ "instance_type": "Standard_F4s_v2",
+ "instance_count": 1,
+ }
+ ```
+
+ > [!NOTE]
+ > The full specification of this configuration can be found at [Managed online deployment schema (v2)](reference-yaml-deployment-managed-online.md).
+
+ Write the configuration to a file:
+
+ ```python
+ deployment_config_path = "deployment_config.json"
+ with open(deployment_config_path, "w") as outfile:
+ outfile.write(json.dumps(deploy_config))
+ ```
+ # [Studio](#tab/studio) *You will perform this step in the deployment stage.*
ml_client.models.create_or_update(
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
- # [Python](#tab/sdk)
+ # [Python (Azure ML SDK)](#tab/sdk)
```python ml_client.online_deployments.begin_create_or_update(blue_deployment) ```
- Once created, we need to set the traffic to this deployment.
+ # [Python (MLflow SDK)](#tab/mlflow)
```python
- endpoint.traffic = {blue_deployment.name: 100}
- ml_client.begin_create_or_update(endpoint)
+ blue_deployment = deployment_client.create_deployment(
+ name=blue_deployment_name,
+ endpoint=endpoint_name,
+ model_uri=f"models:/{model_name}/{version}",
+ config={"deploy-config-file": deployment_config_path},
+ )
``` # [Studio](#tab/studio)
ml_client.models.create_or_update(
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" alt-text="Screenshot showing NCD review screen":::
+1. Assign all the traffic to the deployment
+
+ So far, the endpoint has one deployment, but none of its traffic is assigned to it. Let's assign it.
+
+ # [Azure CLI](#tab/cli)
+
+    *This step is not required in the Azure CLI since we used the `--all-traffic` flag during creation.*
+
+ # [Python (Azure ML SDK)](#tab/sdk)
+
+ ```python
+ endpoint.traffic = { blue_deployment_name: 100 }
+ ```
+
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ ```python
+ traffic_config = {"traffic": {blue_deployment_name: 100}}
+ ```
+
+ Write the configuration to a file:
+
+ ```python
+ traffic_config_path = "traffic_config.json"
+ with open(traffic_config_path, "w") as outfile:
+ outfile.write(json.dumps(traffic_config))
+ ```
+
+ # [Studio](#tab/studio)
+
+    *This step is not required in studio since we assigned the traffic during creation.*
+
+1. Update the endpoint configuration:
+
+ # [Azure CLI](#tab/cli)
+
+    *This step is not required in the Azure CLI since we used the `--all-traffic` flag during creation.*
+
+ # [Python (Azure ML SDK)](#tab/sdk)
+
+ ```python
+ ml_client.begin_create_or_update(endpoint).result()
+ ```
+
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ ```python
+ deployment_client.update_endpoint(
+ endpoint=endpoint_name,
+ config={"endpoint-config-file": traffic_config_path},
+ )
+ ```
+
+ # [Studio](#tab/studio)
+
+    *This step is not required in studio since we assigned the traffic during creation.*
+ ### Invoke the endpoint
-Once your deployment completes, your deployment is ready to serve request. One of the easier ways to test the deployment is by using a sample request file along with the `invoke` method.
+Once your deployment completes, it's ready to serve requests. One of the easiest ways to test the deployment is by using the built-in invocation capability in the deployment client you're using.
**sample-request-sklearn.json**
To submit a request to the endpoint, you can do as follows:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="test_sklearn_deployment":::
-# [Python](#tab/sdk)
+# [Python (Azure ML SDK)](#tab/sdk)
```python ml_client.online_endpoints.invoke( endpoint_name=endpoint_name,
- deployment_name=deployment.name,
request_file="sample-request-sklearn.json", ) ```
+# [Python (MLflow SDK)](#tab/mlflow)
+
+```python
+# Read the sample request we have in the json file to construct a pandas data frame
+with open("sample-request-sklearn.json", "r") as f:
+ sample_request = json.loads(f.read())
+ samples = pd.DataFrame(**sample_request["input_data"])
+
+deployment_client.predict(endpoint=endpoint_name, df=samples)
+```
+ # [Studio](#tab/studio) MLflow models can use the __Test__ tab to create invocations to the created endpoints. To do that:
Use the following steps to deploy an MLflow model with a custom scoring script.
*The environment will be created inline in the deployment configuration.*
- # [Python](#tab/sdk)
+ # [Python (Azure ML SDK)](#tab/sdk)
```python environment = Environment(
Use the following steps to deploy an MLflow model with a custom scoring script.
) ```
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ *This operation is not supported in MLflow SDK*
+ # [Studio](#tab/studio) On [Azure ML studio portal](https://ml.azure.com), follow these steps:
Use the following steps to deploy an MLflow model with a custom scoring script.
az ml online-deployment create -f deployment.yml ```
- # [Python](#tab/sdk)
+ # [Python (Azure ML SDK)](#tab/sdk)
```python blue_deployment = ManagedOnlineDeployment(
Use the following steps to deploy an MLflow model with a custom scoring script.
) ```
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ *This operation is not supported in MLflow SDK*
+ # [Studio](#tab/studio) > [!IMPORTANT]
Use the following steps to deploy an MLflow model with a custom scoring script.
az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file endpoints/online/mlflow/sample-request-sklearn-custom.json ```
- # [Python](#tab/sdk)
+ # [Python (Azure ML SDK)](#tab/sdk)
```python ml_client.online_endpoints.invoke(
Use the following steps to deploy an MLflow model with a custom scoring script.
) ```
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ *This operation is not supported in MLflow SDK*
+ # [Studio](#tab/studio) MLflow models can use the __Test__ tab to create invocations to the created endpoints. To do that:
Once you're done with the endpoint, you can delete the associated resources:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="delete_endpoint":::
-# [Python](#tab/sdk)
+# [Python (Azure ML SDK)](#tab/sdk)
```python ml_client.online_endpoints.begin_delete(endpoint_name) ```
+# [Python (MLflow SDK)](#tab/mlflow)
+
+```python
+deployment_client.delete_endpoint(endpoint_name)
+```
+ # [Studio](#tab/studio) 1. Navigate to the __Endpoints__ tab on the side menu.
machine-learning How To Deploy Mlflow Models Online Progressive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md
You can follow along this sample in the following notebooks. In the cloned repos
Before following the steps in this article, make sure you have the following prerequisites: -- Install the Mlflow SDK package: `mlflow`.-- Install the Azure Machine Learning plug-in for MLflow: `azureml-mlflow`.
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role allowing Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+Additionally, you will need to:
+
+# [Azure CLI](#tab/cli)
+
+- Install the Azure CLI and the ml extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+# [Python (Azure ML SDK)](#tab/sdk)
+
+- Install the Azure Machine Learning SDK for Python
+
+ ```bash
+ pip install azure-ai-ml
+ ```
+
+# [Python (MLflow SDK)](#tab/mlflow)
+
+- Install the MLflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow `azureml-mlflow`.
+
+ ```bash
+ pip install mlflow azureml-mlflow
+ ```
+ - If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details. ++ ### Connect to your workspace First, let's connect to Azure Machine Learning workspace where we are going to work on.
So far, the endpoint is empty. There are no deployments on it. Let's create the
# [Azure CLI](#tab/cli)
- ```azurecli
*This step is not required in the Azure CLI since we used the `--all-traffic` flag during creation.*
- ```
# [Python (Azure ML SDK)](#tab/sdk)
So far, the endpoint is empty. There are no deployments on it. Let's create the
.sample(n=5) .drop(columns=["target"]) .reset_index(drop=True)
- )
-
- sample_request = { "input_data": json.loads(samples.to_json(orient="split", index=False)) }
+ )
``` 1. Test the deployment
So far, the endpoint is empty. There are no deployments on it. Let's create the
# [Python (MLflow SDK)](#tab/mlflow)
- Get the scoring URI:
-
- ```python
- scoring_uri = deployment_client.get_endpoint(endpoint=endpoint_name)["properties"]["scoringUri"]
- ```
-
- Let's create the headers:
-
- ```python
- headers = {
- 'Content-Type':'application/json',
- 'Authorization':('Bearer '+ endpoint_secret_key),
- }
- ```
-
- Call the endpoint and its default deployment:
- ```python
- req = requests.post(scoring_uri, json=sample_request, headers=headers)
- req.json()
+ deployment_client.predict(endpoint=endpoint_name, df=samples)
``` ### Create a green deployment under the endpoint
Let's imagine that there is a new version of the model created by the developmen
) ```
+1. Test the deployment without changing traffic
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ az ml online-endpoint invoke --name $ENDPOINT_NAME --deployment-name $GREEN_DEPLOYMENT_NAME --request-file sample.json
+ ```
+
+ # [Python (Azure ML SDK)](#tab/sdk)
+
+ ```python
+ ml_client.online_endpoints.invoke(
+ endpoint_name=endpoint_name,
+        deployment_name=green_deployment_name,
+ request_file="sample.json",
+ )
+ ```
+
+ # [Python (MLflow SDK)](#tab/mlflow)
+
+ ```python
+ deployment_client.predict(endpoint=endpoint_name, deployment_name=green_deployment_name, df=samples)
+ ```
+
+
+
+ > [!TIP]
+    > Notice how we now indicate the name of the deployment we want to invoke.
+ ## Progressively update the traffic Once we are confident with the new deployment, we can update the traffic to route some of it to the new deployment. Traffic is configured at the endpoint level:
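As an illustration using the MLflow deployment client from this article, a 90/10 split between the blue and green deployments can be written the same way the initial traffic assignment was, through a file-based endpoint configuration (the deployment names assume the blue/green deployments created above):

```python
import json

# Hypothetical 90/10 split across the two deployments created earlier.
traffic_config = {"traffic": {"blue": 90, "green": 10}}

traffic_config_path = "traffic_config.json"
with open(traffic_config_path, "w") as outfile:
    outfile.write(json.dumps(traffic_config))

deployment_client.update_endpoint(
    endpoint=endpoint_name,
    config={"endpoint-config-file": traffic_config_path},
)
```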
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
__MLmodel__
:::code language="yaml" source="~/azureml-examples-main/sdk/python/endpoints/online/mlflow/sklearn-diabetes/model/MLmodel" highlight="13-19":::
-You can inspect the model signature of your model by opening the MLmodel file associated with your MLflow model. For more details about how signatures work in MLflow see [Signatures in MLflow](concept-mlflow-models.md#signatures).
+You can inspect the model signature of your model by opening the MLmodel file associated with your MLflow model. For more details about how signatures work in MLflow, see [Signatures in MLflow](concept-mlflow-models.md#signatures).
> [!TIP] > Signatures in MLflow models are optional but they are highly encouraged as they provide a convenient way to early detect data compatibility issues. For more information about how to log models with signatures read [Logging models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).
The previous payload corresponds to MLflow server 2.0+.
-For more information about MLflow built-in deployment tools see [MLflow documentation section](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools).
+For more information about MLflow built-in deployment tools, see [MLflow documentation section](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools).
## How to customize inference when deploying MLflow models
-You may be used to author scoring scripts to customize how inference is executed for your models. This is particularly the case when you are using features like `autolog` in MLflow that automatically log models for you as the best of the knowledge of the framework. However, you may need to run inference in a different way.
+You may be used to authoring scoring scripts to customize how inference is executed for your models. However, when deploying MLflow models to Azure Machine Learning, the decision about how inference should be executed is made by the model builder (the person who built the model) rather than by the DevOps engineer (the person who is trying to deploy it). Features like `autolog` in MLflow automatically log models for you, to the best of the framework's knowledge. Those decisions may not be the ones you want in some scenarios.
-For those cases, you can either [change how your model is being logged in the training routine](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script](#customize-inference-with-a-scoring-script)
+For those cases, you can either [change how your model is being logged in the training routine](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script](#customize-inference-with-a-scoring-script).
### Change how your model is logged during training
-When you log a model using either `mlflow.autolog` or using `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how the `predict()` function generates results. There are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed.
+When you log a model using either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how the `predict()` function generates results. However, there are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed. In other scenarios, you may want to change what's returned, for example probabilities versus classes.
-A solution to this scenario is to implement machine learning pipelines that moves from inputs to outputs directly. Although this is possible (and sometimes encourageable for performance considerations), it may be challenging to achieve. For those cases, you probably want to [customize how your model does inference using a custom models](how-to-log-mlflow-models.md?#logging-custom-models).
+A solution to this scenario is to implement machine learning pipelines that move from inputs to outputs directly. For instance, [`sklearn.pipeline.Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) or [`pyspark.ml.Pipeline`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.Pipeline.html) are popular (and sometimes encouraged for performance considerations) ways to do so, as the sketch below shows. Another alternative is to [customize how your model does inference using a custom model flavor](how-to-log-mlflow-models.md?#logging-custom-models).
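A minimal sketch of that first approach with `scikit-learn` (the estimator, preprocessing steps, and training data names are illustrative, not taken from this article's example):

```python
import mlflow
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Package pre-processing and the estimator as a single pipeline so the logged
# MLflow model goes from raw inputs to predictions without a scoring script.
model = Pipeline(steps=[
    ("scaler", StandardScaler()),
    ("classifier", LogisticRegression()),
])
model.fit(X_train, y_train)  # X_train and y_train are assumed to exist

# Logging the whole pipeline keeps inference self-contained.
mlflow.sklearn.log_model(model, artifact_path="classifier")
```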
### Customize inference with a scoring script
-If you want to customize how inference is executed for MLflow models (or opt-out for no-code deployment) you can refer to [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (Batch Endpoints)](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
+Although MLflow models don't require a scoring script, you can still provide one if needed. You can use it to customize how inference is executed for MLflow models. To learn how to do it, refer to [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (Batch Endpoints)](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
> [!IMPORTANT]
-> When you opt-in to indicate a scoring script, you also need to provide an environment for deployment.
+> When you opt-in to indicate a scoring script for an MLflow model deployment, you also need to provide an environment for it.
## Next steps
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
These resources can be deleted by selecting them from the list and choosing **De
> [!IMPORTANT] > If the resource is configured for soft delete, the data won't be deleted unless you optionally select to delete the resource permanently. For more information, see the following articles: > * [Workspace soft-deletion](concept-soft-delete.md).
-> * [Soft delete for blobs](/azure/storage/soft-delete-blob-overview.md).
+> * [Soft delete for blobs](/azure/storage/blobs/soft-delete-blob-overview).
> * [Soft delete in Azure Container Registry](/azure/container-registry/container-registry-soft-delete-policy). > * [Azure log analytics workspace](/azure/azure-monitor/logs/delete-workspace). > * [Azure Key Vault soft-delete](/azure/key-vault/general/soft-delete-overview).
You can download a registered model by navigating to the **Model** and choosing
## Next steps
-Learn more about [Managing a workspace](how-to-manage-workspace.md).
+Learn more about [Managing a workspace](how-to-manage-workspace.md).
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
If you get an error while filling out the form to create the Managed Grafana ins
Enter a name that: - Is unique in the entire Azure region. It can't already be used by another user.-- Is 30 characters long or smaller
+- Is 23 characters long or smaller
- Begins with a letter. The rest can only be alphanumeric characters or hyphens, and the name must end with an alphanumeric character. ### Solution 2: review deployment error
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
Title: Data-in replication - Azure Database for MySQL Flexible
+ Title: Data-in replication - Azure Database for MySQL Flexible Server
description: Learn about using Data-in replication to synchronize from an external server into the Azure Database for MySQL Flexible service.+++ Last updated : 12/30/2022 -- Previously updated : 06/08/2021
-# Replicate data into Azure Database for MySQL Flexible Server
+# Replicate data into Azure Database for MySQL Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] Data-in replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL Flexible service. The external server can be on-premises, in virtual machines, Azure Database for MySQL Single Server, or a database service hosted by other cloud providers. Data-in replication is based on the binary log (binlog) file position-based. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
-> [!Note]
+> [!NOTE]
> GTID-based replication is currently not supported for Azure Database for MySQL Flexible Servers.<br>
-> Configuring Data-in replication for zone-redundant high availability servers is not supported.
+> Configuring Data-in replication for zone-redundant high-availability servers is not supported.
## When to use Data-in replication
For migration scenarios, use the [Azure Database Migration Service](https://azur
### Data not replicated
-The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).
+The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand the tables in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).
+
+### Data-in replication not supported on High Availability (HA) enabled servers
+
+Configuring Data-in replication isn't supported for servers that have the high availability (HA) option enabled. On HA-enabled servers, the stored procedures for replication `mysql.az_replication_*` aren't available.
-### Data-in replication not supported on HA enabled servers
-It is not supported to configure Data-in replication for servers which have high availability (HA) option enabled. On HA enabled servers, the stored procedures for replication `mysql.az_replication_*` won't be available.
-> [!Tip]
->If you are using HA server as a source server, MySQL native binary log (binlog) file position-based replication would fail, when failover happens on the server. If replica server supports GTID based replication, we should configure GTID based replication.
+> [!TIP]
+> If you are using an HA server as a source server, MySQL native binary log (binlog) file position-based replication will fail when a failover happens on the server. If the replica server supports GTID-based replication, configure GTID-based replication instead.
-### Filtering
+### Filter
-Modifying the parameter `replicate_wild_ignore_table` used to create replication filter for tables, is currently not supported for Azure Database for MySQL -Flexible server.
+Modifying the parameter `replicate_wild_ignore_table`, which is used to create a replication filter for tables, is currently not supported for Azure Database for MySQL - Flexible Server.
### Requirements - The source server version must be at least MySQL version 5.7.-- Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7 or both must be MySQL version 8.0.-- Our recommendation is to have a primary key in each table. If we have table without primary key, you might face slowness in replication.
+- Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7, or both must be MySQL version 8.0.
+- Our recommendation is to have a primary key in each table. If a table doesn't have a primary key, you might face slowness in replication.
- The source server should use the MySQL InnoDB engine.-- User must have permissions to configure binary logging and create new users on the source server.
+- The user must have the appropriate permissions to configure binary logging and create new users on the source server.
- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL refer how to configure binlog_expire_logs_seconds for [Flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds) - If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` stored procedure. Refer to the following [examples](./how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter. - Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306. - Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).-- In case of public access, ensure that the source server has a public IP address, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).-- In case of private access ensure that the source server name can be resolved and is accessible from the VNet where the Azure Database for MySQL instance is running.For more details see, [Name resolution for resources in Azure virtual networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md)
+- With public access, ensure that the source server has a public IP address, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
+- With private access, ensure that the source server name can be resolved and is accessible from the VNet where the Azure Database for MySQL instance is running. For more information, see [Name resolution for resources in Azure virtual networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md).
## Next steps -- Learn how to [set up data-in replication](how-to-data-in-replication.md)-- Learn about [replicating in Azure with read replicas](concepts-read-replicas.md)
+- Learn more on how to [set up data-in replication](how-to-data-in-replication.md)
+- Learn more about [replicating in Azure with read replicas](concepts-read-replicas.md)
mysql Concepts Data Out Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-out-replication.md
+
+ Title: Data-out replication - Azure Database for MySQL Flexible Server
+description: Learn about the concepts of data-out replication out of Azure Database for MySQL - Flexible Server to another MySQL server
+++ Last updated : 12/30/2022+++++
+# Replicate data from Azure Database for MySQL Flexible Server
++
+Data-out replication allows you to synchronize data out of Azure Database for MySQL - Flexible Server to another MySQL server using MySQL native replication. The MySQL server (replica) can be on-premises, in virtual machines, or a database service hosted by other cloud providers. While [Data-in replication](concepts-data-in-replication.md) helps move data into Azure Database for MySQL - Flexible Server (replica), Data-out replication allows you to transfer data out of Azure Database for MySQL - Flexible Server (primary). With Data-out replication, the binary log (binlog) is made community consumable, allowing the Azure Database for MySQL - Flexible Server to act as a primary server for external replicas. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+
+> [!NOTE]
+> Data-out replication is not supported on an Azure Database for MySQL - Flexible Server that has Azure AD authentication configured.
+
+The main scenarios for using Data-out replication are:
+
+- **Hybrid Data Synchronization:** Data-out replication can be used to keep data synchronized between Azure Database for MySQL Flexible Server and on-premises servers. This method helps integrate cloud and on-premises systems seamlessly in a hybrid solution. It can also be useful if you want to avoid vendor lock-in.
+
+- **Multi-Cloud Synchronization:** For complex cloud solutions, use Data-out replication to synchronize data between Azure Database for MySQL Flexible Server and different cloud providers, including virtual machines and database services hosted in those clouds.
+
+- **Migration:** Customers can perform minimal-time migration by using open-source tools such as MyDumper/MyLoader with Data-out replication to migrate data out of Azure Database for MySQL Flexible Server.
+
+## Limitations and considerations
+
+### Azure AD isn't supported
+
+Data-out replication isn't supported on an Azure Database for MySQL - Flexible Server that has Azure AD authentication configured. Any Azure AD transaction (Azure AD user create/update) on the source server will break data-out replication.
+
+> [!TIP]
+> Use the guidance published in MySQL :: MySQL Replication :: 2.7.3 Skipping Transactions to skip past an event or events by issuing a CHANGE MASTER TO statement that moves the source's binary log position forward. Restart replication after the action.
+
+### Filter
+
+You must use the replication filter to filter out Azure custom tables on the replica server. This can be achieved by setting `Replicate_Wild_Ignore_Table = "mysql.\_\_%"` to filter the Azure MySQL internal tables on the replica. To modify this parameter from the Azure portal, navigate to the Azure Database for MySQL Flexible Server and select **Server parameters** to view or edit the `Replicate_Wild_Ignore_Table` parameter.
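+
+As a minimal sketch (assuming the external replica runs MySQL 5.7 or 8.0 with classic replication configured; newer releases use `REPLICA` keywords), the same pattern can also be applied on the replica with a `CHANGE REPLICATION FILTER` statement. The replication SQL thread must be stopped first:
+
+```sql
+-- Stop the replication applier before changing filters (required by MySQL)
+STOP SLAVE SQL_THREAD;
+
+-- Ignore the Azure internal tables; \_ matches a literal underscore
+CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE = ('mysql.\_\_%');
+
+-- Resume applying replicated events
+START SLAVE SQL_THREAD;
+```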
+
+Refer to the following general guidance on the replication filter:
+- MySQL 5.7 Reference Manual - 13.4.2.2 CHANGE REPLICATION FILTER Statement
+- MySQL 5.7 Reference Manual - 16.1.6.3 Replica Server Options and Variables
+- MySQL 8.0 Reference Manual - 17.2.5.4 Replication Channel Based Filters.
+
+## Next steps
+
+- How to configure [Data-out replication](how-to-data-out-replication.md)
+- Learn about [Data-in replication](concepts-data-in-replication.md)
+- How to configure [Data-in replication](how-to-data-in-replication.md)
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md
Title: Configure Data-in replication - Azure Database for MySQL Flexible Server description: This article describes how to set up Data-in replication for Azure Database for MySQL Flexible Server.+++ Last updated : 12/30/2022 -- Previously updated : 06/08/2021
-# How to configure Azure Database for MySQL Flexible Server Data-in replication
+# How to configure Azure Database for MySQL Flexible Server data-in replication
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]

This article describes how to set up [Data-in replication](concepts-data-in-replication.md) in Azure Database for MySQL Flexible Server by configuring the source and replica servers. This article assumes that you have some prior experience with MySQL servers and databases.
-> [!NOTE]
+> [!NOTE]
> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.

To create a replica in the Azure Database for MySQL Flexible service, [Data-in replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in replication is based on the binary log (binlog) file position. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
Review the [limitations and requirements](concepts-data-in-replication.md#limita
1. Create a new instance of Azure Database for MySQL Flexible Server (for example, `replica.mysql.database.azure.com`). Refer to [Create an Azure Database for MySQL Flexible Server by using the Azure portal](quickstart-create-server-portal.md) for server creation. This server is the "replica" server for Data-in replication.
-2. Create the same user accounts and corresponding privileges.
+1. Create the same user accounts and corresponding privileges.
User accounts aren't replicated from the source server to the replica server. If you plan on providing users with access to the replica server, you need to create all accounts and corresponding privileges manually on this newly created Azure Database for MySQL Flexible Server.
The following steps prepare and configure the MySQL server hosted on-premises, i
1. Review the [source server requirements](concepts-data-in-replication.md#requirements) before proceeding.
-2. Networking Requirements
+1. Networking Requirements
* Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, the DNS is publicly accessible, or that it has a fully qualified domain name (FQDN).
- * If private access is in use, make sure that you have connectivity between Source server and the Vnet in which the replica server is hosted.
+   * If private access is in use, make sure that you have connectivity between the source server and the VNet in which the replica server is hosted.
 * Make sure you provide site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../../expressroute/expressroute-introduction.md) or [VPN](../../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
 * If private access is used in the replica server and your source is an Azure VM, make sure that VNet-to-VNet connectivity is established. VNet-to-VNet peering is supported. You can also use other connectivity methods to communicate between VNets across different regions, such as a VNet-to-VNet connection. For more information, see [VNet-to-VNet VPN gateway](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
 * Ensure that your virtual network's network security group (NSG) rules don't block outbound port 3306 (also inbound if MySQL is running on an Azure VM). For more detail on virtual network NSG traffic filtering, see [Filter network traffic with network security groups](../../virtual-network/virtual-network-vnet-plan-design-arm.md).
 * Configure your source server's firewall rules to allow the replica server IP address.
-
-3. Turn on binary logging.
+1. Turn on binary logging.
Check to see if binary logging has been enabled on the source by running the following command:
The following steps prepare and configure the MySQL server hosted on-premises, i
If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server.
- If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), you can follow the steps below:
+   If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), follow these steps:
1. Locate your MySQL configuration file (my.cnf) in the source server. For example: /etc/my.cnf
- 2. Open the configuration file to edit it and locate **mysqld** section in the file.
- 3. In the mysqld section, add following line:
+ 1. Open the configuration file to edit it and locate **mysqld** section in the file.
+ 1. In the mysqld section, add following line:
```bash
log-bin=mysql-bin.log
```
- 4. Restart the MySQL service on source server (or Restart) for the changes to take effect.
- 5. After the server is restarted, verify that binary logging is enabled by running the same query as before:
+   1. Restart the MySQL service on the source server for the changes to take effect.
+ 1. After the server is restarted, verify that binary logging is enabled by running the same query as before:
```sql
SHOW VARIABLES LIKE 'log_bin';
```
-4. Configure the source server settings.
+1. Configure the source server settings.
Data-in replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL Flexible Server.
The following steps prepare and configure the MySQL server hosted on-premises, i
```sql
SET GLOBAL lower_case_table_names = 1;
```
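
To confirm the setting matches before you link the servers, you can run a quick check on both (an optional sanity check; standard MySQL):

```sql
SHOW VARIABLES LIKE 'lower_case_table_names';
```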
-5. Create a new replication role and set up permission.
+1. Create a new replication role and set up permission.
Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.

In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
- **SQL Command**
+#### [SQL Command](#tab/command-line)
- *Replication with SSL*
+**Replication with SSL**
- To require SSL for all user connections, use the following command to create a user:
+To require SSL for all user connections, use the following command to create a user:
- ```sql
- CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
- GRANT REPLICATION SLAVE ON *.* TO ' syncuser'@'%' REQUIRE SSL;
- ```
+```sql
+CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL;
+```
- *Replication without SSL*
+**Replication without SSL**
- If SSL isn't required for all connections, use the following command to create a user:
+If SSL isn't required for all connections, use the following command to create a user:
- ```sql
- CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
- GRANT REPLICATION SLAVE ON *.* TO ' syncuser'@'%';
- ```
+```sql
+CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';
+```
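+
+Optionally, verify the grants assigned to the new account (a quick sanity check, not a required step):
+
+```sql
+SHOW GRANTS FOR 'syncuser'@'%';
+```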
- **MySQL Workbench**
+#### [MySQL Workbench](#tab/mysql-workbench)
- To create the replication role in MySQL Workbench, open the **Users and Privileges** panel from the **Management** panel, and then select **Add Account**.
+To create the replication role in MySQL Workbench, open the **Users and Privileges** panel from the **Management** panel, and then select **Add Account**.
- :::image type="content" source="./media/how-to-data-in-replication/users-privileges.png" alt-text="Users and Privileges":::
- Type in the username into the **Login Name** field.
+Type the username into the **Login Name** field.
- :::image type="content" source="./media/how-to-data-in-replication/sync-user.png" alt-text="Sync user":::
- Select the **Administrative Roles** panel and then select **Replication Slave** from the list of **Global Privileges**. Then select **Apply** to create the replication role.
+Select the **Administrative Roles** panel and then select **Replication Slave** from the list of **Global Privileges**. Then select **Apply** to create the replication role.
- :::image type="content" source="./media/how-to-data-in-replication/replication-slave.png" alt-text="Replication Slave":::
-6. Set the source server to read-only mode.
+1. Set the source server to read-only mode.
- Before starting to dump out the database, the server needs to be placed in read-only mode. While in read-only mode, the source will be unable to process any write transactions. Evaluate the impact to your business and schedule the read-only window in an off-peak time if necessary.
+Before starting to dump out the database, the server needs to be placed in read-only mode. While in read-only mode, the source will be unable to process any write transactions. Evaluate the impact to your business and schedule the read-only window in an off-peak time if necessary.
- ```sql
- FLUSH TABLES WITH READ LOCK;
- SET GLOBAL read_only = ON;
- ```
+```sql
+FLUSH TABLES WITH READ LOCK;
+SET GLOBAL read_only = ON;
+```
-7. Get binary log file name and offset.
+1. Get binary log file name and offset.
- Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
+Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
- ```sql
- show master status;
- ```
- The results should appear similar to the following. Make sure to note the binary file name for use in later steps.
+```sql
+ show master status;
+```
+
+The results should appear similar to the following. Make sure to note the binary file name for use in later steps.
- :::image type="content" source="./media/how-to-data-in-replication/master-status.png" alt-text="Master Status Results":::
+
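+For illustration, the output resembles the following abbreviated form (shown with the `\G` terminator for readability; the file name and position here are placeholders, and yours will differ):
+
+```
+mysql> show master status\G
+*************************** 1. row ***************************
+             File: mysql-bin.000002
+         Position: 120
+     Binlog_Do_DB:
+ Binlog_Ignore_DB:
+```
+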
+#### [Azure Data Studio](#tab/azure-data-studio)
+
+<!--Content here-->
++
## Dump and restore the source server
The following steps prepare and configure the MySQL server hosted on-premises, i
You can use mysqldump to dump databases from your primary server. For details, refer to [Dump & Restore](../concepts-migrate-dump-restore.md). It's unnecessary to dump the MySQL library and test library.
-3. Set source server to read/write mode.
+1. Set source server to read/write mode.
After the database has been dumped, change the source MySQL server back to read/write mode.
The following steps prepare and configure the MySQL server hosted on-premises, i
```sql
SET GLOBAL read_only = OFF;
UNLOCK TABLES;
```
-4. Restore dump file to new server.
+1. Restore dump file to new server.
Restore the dump file to the server created in the Azure Database for MySQL Flexible Server service. Refer to [Dump & Restore](../concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL Flexible Server instance from the virtual machine.
->[!Note]
->* If you want to avoid setting the database to read only when you dump and restore, you can use [mydumper/myloader](../concepts-migrate-mydumper-myloader.md).
+> [!NOTE]
+> If you want to avoid setting the database to read only when you dump and restore, you can use [mydumper/myloader](../concepts-migrate-mydumper-myloader.md).
## Link source and replica servers to start Data-in replication
The following steps prepare and configure the MySQL server hosted on-premises, i
```sql
CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>');
```
-
- master_host: hostname of the source server
- master_user: username for the source server
- master_password: password for the source server
The following steps prepare and configure the MySQL server hosted on-premises, i
- master_log_pos: binary log position from running `show master status`
- master_ssl_ca: CA certificate's context. If not using SSL, pass in an empty string.
- It's recommended to pass this parameter in as a variable. For more information, see the following examples.
+   It's recommended to pass this parameter in as a variable. For more information, see the following examples.
- > [!NOTE]
+ > [!NOTE]
> * If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. For more information, see [Manage firewall rules using the portal](how-to-manage-firewall-portal.md).
- > * If you used mydumper/myloader to dump the database then you can get the master_log_file and master_log_pos from the */backup/metadata* file.
+ > * If you used mydumper/myloader to dump the database then you can get the master_log_file and master_log_pos from the */backup/metadata* file.
**Examples**
The following steps prepare and configure the MySQL server hosted on-premises, i
```sql
CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');
```
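
To pass the CA certificate in as a variable, as recommended earlier, a sketch looks like the following (the certificate body is a placeholder):

```sql
SET @cert = '-----BEGIN CERTIFICATE-----
<your CA certificate content>
-----END CERTIFICATE-----';

CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, @cert);
```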
-2. Start replication.
+
+1. Start replication.
Call the `mysql.az_replication_start` stored procedure to start replication.
The following steps prepare and configure the MySQL server hosted on-premises, i
```sql
CALL mysql.az_replication_start;
```
-3. Check replication status.
+1. Check replication status.
Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
The following steps prepare and configure the MySQL server hosted on-premises, i
```sql
show slave status;
```
- To know the correct status of replication, please refer to replication metrics - **Replica IO Status** and **Replica SQL Status** under monitoring blade.
-
- If the `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how late the replica is. If the value isn't "0", it means that the replica is processing updates.
+   To know the correct status of replication, refer to the replication metrics **Replica IO Status** and **Replica SQL Status** on the Monitoring page.
+
+   If `Seconds_Behind_Master` is "0", replication is working well. `Seconds_Behind_Master` indicates how late the replica is. If the value isn't "0", the replica is processing updates.
## Other useful stored procedures for Data-in replication operations
mysql How To Data Out Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-out-replication.md
+
+ Title: Configure Data-out replication - Azure Database for MySQL Flexible Server
+description: This article describes how to set up Data-out replication for Azure Database for MySQL Flexible Server.
+++ Last updated : 12/30/2022+++++
+# How to configure Azure Database for MySQL Flexible Server data-out replication
+
+[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
+
+This article describes how to set up Data-out replication in Azure Database for MySQL Flexible Server by configuring the source and replica servers. This article assumes that you have some prior experience with MySQL servers and databases.
+
+For Data-out replication, the source is always Azure Database for MySQL Flexible Server. The replica can be any external MySQL server on other cloud providers, on-premises, or virtual machines. Review the limitations and requirements of Data-out replication before performing the steps in this article.
+
+> [!NOTE]
+> This article references the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+## Create an Azure Database for MySQL Flexible Server instance to use as a source
+
+1. Create a new instance of Azure Database for MySQL Flexible Server (for example, sourceserver.mysql.database.azure.com). Refer to [Create an Azure Database for MySQL Flexible Server using the Azure portal](quickstart-create-server-portal.md) for server creation. This server is the "source" server for Data-out replication.
+
+1. Create the same user accounts and corresponding privileges.
+   1. User accounts aren't replicated from the source server to the replica server. If you plan on providing users with access to the replica server, you must manually create all accounts and corresponding privileges on this newly created Azure Database for MySQL Flexible Server.
+
+## Configure the source MySQL server
+
+The following steps prepare and configure the Azure Database for MySQL Flexible Server acting as the source.
+
+1. **Networking Requirements**
+
+   Ensure that your network settings are established so that the source and replica servers can communicate seamlessly.
+   If the source server is on public access, ensure that firewall rules allow the replica server's IP address. If the replica server is hosted on Azure, ensure that you select the option to allow public access from any Azure service on the Networking page in the Azure portal.
+   If the source server is on private access, ensure that the replica server can connect to the source through VNet peering or a VNet-to-VNet VPN gateway connection.
+
+ > [!NOTE]
+   > For more information, see [Networking overview - Azure Database for MySQL Flexible Server](concepts-networking.md).
+
+1. **Turn on binary logging**
+
+ Check to see if binary logging has been enabled on the source by running the following command:
+
+ ```sql
+ SHOW VARIABLES LIKE 'log_bin';
+ ```
+
+ If the variable log_bin is returned with the value 'ON', binary logging is enabled on your server.
+
+1. **Create a new replication role and set up permission**
+
+ Create a user account on the configured source server with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
+
+ In the following commands, the new replication role can access the source from any machine, not just the one that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [setting account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html).
+
+ There are a few tools you can use to set account names. Select the one that best fits your environment.
+
+#### [SQL Command](#tab/command-line)
+
+**Replication with SSL**
+
+To require SSL for all user connections, use the following command to create a user:
+
+```sql
+CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL;
+```
+
+**Replication without SSL**
+
+If SSL isn't required for all connections, use the following command to create a user:
+
+```sql
+CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';
+```
+
+#### [MySQL Workbench](#tab/mysql-workbench)
+
+To create the replication role in MySQL Workbench, open the **Users and Privileges** panel from the **Management** panel and select **Add Account**.
++
+Type the username into the **Login Name** field.
++
+Select the **Administrative Roles** panel and **Replication Slave** from the list of **Global Privileges**. Then select **Apply** to create the replication role.
++
+1. **Set the source server to read-only mode**
+
+Before starting to dump out the database, the server needs to be placed in read-only mode. While in read-only mode, the source can't process any write transactions. Evaluate the effect on your business and schedule the read-only window in an off-peak time if necessary.
+
+ ```sql
+ FLUSH TABLES WITH READ LOCK;
+ SET GLOBAL read_only = ON;
+ ```
+
+1. **Get binary log file name and offset**
+
+Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
+
+```sql
+show master status;
+```
+
+The results should appear similar to the following. Make sure to note the binary file name for use in later steps.
++
+#### [Azure Data Studio](#tab/azure-data-studio)
+
+<!--Content here-->
+++
+## Dump and restore the source server
+
+Skip this section if it's a newly created source server with no existing data to migrate to the replica. You can, at this point, unlock the tables:
+
+ ```sql
+ SET GLOBAL read_only = OFF;
+ UNLOCK TABLES;
+ ```
+
+Follow the steps below if the source server has existing data to migrate to the replica.
+
+1. Determine which databases and tables you want to replicate into Azure Database for MySQL Flexible Server and perform the dump from the source server.
+You can use mysqldump to dump databases from your primary server. For more details, see [Dump & Restore](../single-server/concepts-migrate-dump-restore.md). It's unnecessary to dump the MySQL library and test library.
+
+1. Set the source server to read/write mode.
+After dumping the database, change the source MySQL server to read/write mode.
+
+ ```sql
+ SET GLOBAL read_only = OFF;
+ UNLOCK TABLES;
+ ```
+
+1. Restore the dump file to the new server.
+Restore the dump file to the server created in the Azure Database for MySQL Flexible Server service. Refer to [Dump & Restore](../single-server/concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL Flexible Server instance from the virtual machine.
+
+> [!NOTE]
+> If you want to avoid setting the database to read-only when you dump and restore, you can use [mydumper/myloader](../migrate/concepts-migrate-mydumper-myloader.md).
+
+## Configure the replica server to start Data-out replication
+
+1. Filtering
+
+   If data-out replication is being set up between Azure MySQL and an external MySQL server on other cloud providers or on-premises, you must use the replication filter to filter out Azure custom tables. This can be achieved by setting `Replicate_Wild_Ignore_Table = "mysql.\_\_%"` to filter the Azure MySQL internal tables on the replica. To modify this parameter from the Azure portal, navigate to the Azure Database for MySQL Flexible Server used as the source and select **Server parameters** to view or edit the `Replicate_Wild_Ignore_Table` parameter. Refer to [MySQL :: MySQL 5.7 Reference Manual :: 13.4.2.2 CHANGE REPLICATION FILTER Statement](https://dev.mysql.com/doc/refman/5.7/en/change-replication-filter.html) for more details on modifying this server parameter.
+
+1. Set the replica server by connecting to it and opening the MySQL shell on the replica server. From the prompt, run the following operation, which configures several MySQL replication settings at the same time:
+
+   ```sql
+   CHANGE REPLICATION SOURCE TO
+       SOURCE_HOST='<master_host>',
+       SOURCE_USER='<master_user>',
+       SOURCE_PASSWORD='<master_password>',
+       SOURCE_LOG_FILE='<master_log_file>',
+       SOURCE_LOG_POS=<master_log_pos>;
+   ```
+
+   - master_host: hostname of the source server (for example, 'source.mysql.database.azure.com')
+   - master_user: username for the source server (for example, 'syncuser'@'%')
+   - master_password: password for the source server
+   - master_log_file: binary log file name from running `show master status`
+   - master_log_pos: binary log position from running `show master status`
+
+ > [!NOTE]
+   > To use SSL for the connection, add the attribute `SOURCE_SSL=1` to the command. For more information about using SSL in a replication context, see [CHANGE REPLICATION SOURCE TO Statement](https://dev.mysql.com/doc/refman/8.0/en/change-replication-source-to.html).
+
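+   For illustration, a filled-in sketch of the statement (the host, user, password, and binlog coordinates are placeholders; `SOURCE_SSL=1` is the optional SSL attribute mentioned in the note):
+
+   ```sql
+   CHANGE REPLICATION SOURCE TO
+       SOURCE_HOST='source.mysql.database.azure.com',
+       SOURCE_USER='syncuser',
+       SOURCE_PASSWORD='yourpassword',
+       SOURCE_LOG_FILE='mysql-bin.000002',
+       SOURCE_LOG_POS=120,
+       SOURCE_SSL=1;
+   ```
+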
+1. Activate the replica server using the following command.
+
+ ```sql
+ START REPLICA;
+ ```
+
+ At this point, the replica instance begins replicating any changes made to the source server database. You can test this by creating a sample table on your source database and checking whether it gets replicated successfully.
+
+1. Check replication status.
+
+   Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
+
+ ```sql
+ show slave status;
+ ```
+
+   If the values of `Slave_IO_Running` and `Slave_SQL_Running` are `Yes` and the value of `Seconds_Behind_Master` is `0`, replication is working well. `Seconds_Behind_Master` indicates how late the replica is; if the value isn't `0`, the replica is processing updates.
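+
+   For illustration, a healthy status resembles the following abbreviated output (illustrative values):
+
+   ```
+   mysql> show slave status\G
+   *************************** 1. row ***************************
+               Slave_IO_State: Waiting for master to send event
+             Slave_IO_Running: Yes
+            Slave_SQL_Running: Yes
+        Seconds_Behind_Master: 0
+   ```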
+
+ If the replica server is hosted in an Azure VM, set **Allow access to Azure services** to **ON** on the source to allow the source and replica servers to communicate. This setting can be changed from the connection security options. For more information, visit [Manage firewall rules using the portal](how-to-manage-firewall-portal.md).
+
+   If you used mydumper/myloader to dump the database, you can get the master_log_file and master_log_pos from the */backup/metadata* file.
+
+## Next steps
+
+- Learn more about [Data-out replication](concepts-data-out-replication.md)
+- Learn more about [Data-in replication](concepts-data-in-replication.md)
+- How to configure [Data-in replication](how-to-data-in-replication.md)
+- Learn more about [replicating in Azure with read replicas](concepts-read-replicas.md)
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-portal.md
Last updated 06/13/2022
-# Quickstart: Use the Azure portal to create an Azure Database for MySQL flexible server
+# Quickstart: Use the Azure portal to create an Azure Database for MySQL Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
network-watcher Network Watcher Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-create.md
Title: Create an Azure Network Watcher instance | Microsoft Docs
-description: Learn how to create an Azure Network Watcher in an Azure region by using the Azure portal or other technologies, and how to delete a Network Watcher.
+ Title: Create an Azure Network Watcher instance
+description: Learn how to create or delete an Azure Network Watcher using the Azure portal, PowerShell, the Azure CLI or the REST API.
ms.assetid: b1314119-0b87-4f4d-b44c-2c4d0547fb76 - Previously updated : 10/08/2021+ Last updated : 12/30/2022 -+ ms.devlang: azurecli
ms.devlang: azurecli
Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end-to-end network level view. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights into your network in Azure. Network Watcher is enabled through the creation of a Network Watcher resource. This resource allows you to use Network Watcher capabilities.

## Network Watcher is automatically enabled
-When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your Virtual Network's region. There's no impact to your resources or associated charge for automatically enabling Network Watcher.
+When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your Virtual Network's region. Automatically enabling Network Watcher doesn't affect your resources or associated charge.
-#### Opt-out of Network Watcher automatic enablement
+### Opt-out of Network Watcher automatic enablement
If you would like to opt out of Network Watcher automatic enablement, you can do so by running the following commands:

> [!WARNING]
-> Opting-out of Network Watcher automatic enablement is a permanent change. Once you opt-out, you cannot opt-in without contacting [support](https://azure.microsoft.com/support/options/).
+> Opting-out of Network Watcher automatic enablement is a permanent change. Once you opt-out, you cannot opt-in without contacting [Azure support](https://azure.microsoft.com/support/options/).
```azurepowershell-interactive
Register-AzProviderFeature -FeatureName DisableNetworkWatcherAutocreation -ProviderNamespace Microsoft.Network
Register-AzResourceProvider -ProviderNamespace Microsoft.Network
```

```azurecli-interactive
az feature register --name DisableNetworkWatcherAutocreation --namespace Microsoft.Network
az provider register -n Microsoft.Network
```
+## Prerequisites
-
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Create a Network Watcher in the portal
-1. Log into the [Azure portal](https://portal.azure.com) with an account that has the necessary permissions.
-2. Select **More services**.
-3. In the **All services** screen, enter **Network Watcher** in the **Filter services** search box and select it from the search result.
-You can select all the subscriptions you want to enable Network Watcher for. This action creates a Network Watcher in every region that is available.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has the necessary permissions.
+
+2. In the search box at the top of the portal, enter *Network Watcher*.
+
+3. In the search results, select **Network Watcher**.
+
+4. Select **+ Add**.
-![create a network watcher](./media/network-watcher-create/figure1.png)
+5. In **Add network watcher**, select your Azure subscription, then select the region that you want to enable Azure Network Watcher for.
-When you enable Network Watcher using the portal, the name of the Network Watcher instance is automatically set to *NetworkWatcher_region_name* where *region_name* corresponds to the Azure region where the instance is enabled. For example, a Network Watcher enabled in the West Central US region is named *NetworkWatcher_westcentralus*.
+6. Select **Add**.
+
+ :::image type="content" source="./media/network-watcher-create/create-network-watcher.png" alt-text="Screenshot showing how to create a Network Watcher in the Azure portal.":::
+
+When you enable Network Watcher using the Azure portal, the name of the Network Watcher instance is automatically set to *NetworkWatcher_region_name*, where *region_name* corresponds to the Azure region of the Network Watcher instance. For example, a Network Watcher enabled in the East US region is named *NetworkWatcher_eastus*.
The Network Watcher instance is automatically created in a resource group named *NetworkWatcherRG*. The resource group is created if it doesn't already exist.
-If you wish to customize the name of a Network Watcher instance and the resource group it's placed into, you can use PowerShell, the Azure CLI, the REST API, or ARMClient methods described in the sections that follow. In each option, the resource group must exist before you create a Network Watcher in it.
+If you wish to customize the name of a Network Watcher instance and the resource group it's placed into, you can use [PowerShell](#powershell) or [REST API](#restapi) methods. In each option, the resource group must exist before you create a Network Watcher in it.
-## Create a Network Watcher with PowerShell
+## <a name="powershell"></a> Create a Network Watcher using PowerShell
-To create an instance of Network Watcher, run the following example:
+Use [New-AzNetworkWatcher](/powershell/module/az.network/new-aznetworkwatcher) to create an instance of Network Watcher:
-```powershell
-New-AzNetworkWatcher -Name "NetworkWatcher_westcentralus" -ResourceGroupName "NetworkWatcherRG" -Location "West Central US"
+```azurepowershell-interactive
+New-AzNetworkWatcher -Name NetworkWatcher_westus -ResourceGroupName NetworkWatcherRG -Location westus
```
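
To confirm that the instance was created, you can retrieve it with [Get-AzNetworkWatcher](/powershell/module/az.network/get-aznetworkwatcher) (an optional verification step):

```azurepowershell-interactive
Get-AzNetworkWatcher -Name NetworkWatcher_westus -ResourceGroupName NetworkWatcherRG
```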
-## Create a Network Watcher with the Azure CLI
+## Create a Network Watcher using the Azure CLI
-To create an instance of Network Watcher, run the following example:
+Use [az network watcher configure](/cli/azure/network/watcher#az-network-watcher-configure) to create an instance of Network Watcher:
-```azurecli
+```azurecli-interactive
az network watcher configure --resource-group NetworkWatcherRG --locations westcentralus --enabled true
```
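
To verify, you can list the Network Watcher instances in your subscription (an optional check):

```azurecli-interactive
az network watcher list --output table
```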
-## Create a Network Watcher with the REST API
+## <a name="restapi"></a> Create a Network Watcher using the REST API
-The ARMclient is used to call the REST API using PowerShell. The ARMClient is found on chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient)
+The ARMClient is used to call the [REST API](/rest/api/network-watcher/network-watchers/create-or-update) using PowerShell. ARMClient is available on Chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient).
-### Log in with ARMClient
+### Sign in with ARMClient
```powershell
armclient login
```
$subscriptionId = '<subscription id>'
$networkWatcherName = '<name of network watcher>'
$resourceGroupName = '<resource group name>'
-$apiversion = "2016-09-01"
+$apiversion = "2022-07-01"
$requestBody = @"
{
'location': 'West Central US'
armclient put "https://management.azure.com/subscriptions/${subscriptionId}/reso
## Create a Network Watcher using Azure Quickstart Template
-To create an instance of Network Watcher, refer this [Quickstart Template](https://azure.microsoft.com/resources/templates/networkwatcher-create/)
+To create an instance of Network Watcher, refer to this [Quickstart Template](/samples/azure/azure-quickstart-templates/networkwatcher-create).
-## Delete a Network Watcher in the portal
+## Delete a Network Watcher using the Azure portal
-1. Navigate to **All Services** > **Networking** > **Network Watcher**.
-2. Select the overview tab, if you're not already there. Use the dropdown to select the subscription you want to disable network watcher in.
-3. Expand the list of locations for your chosen subscription by selecting on the arrow. For any given, use the 3 dots on the right to access the context menu.
-4. Select **Disable network watcher** to start disabling. You'll be asked to confirm this step. Select **Yes** to continue.
-On the portal, you'll have to do this individually for every region in every subscription.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has the necessary permissions.
+2. In the search box at the top of the portal, enter *Network Watcher*.
-## Delete a Network Watcher with PowerShell
+3. In the search results, select **Network Watcher**.
-To delete an instance of Network Watcher, run the following example:
+4. In the **Overview** page, select the Network Watcher instances that you want to delete, then select **Disable**.
-```powershell
-New-AzResourceGroup -Name NetworkWatcherRG -Location westcentralus
-New-AzNetworkWatcher -Name NetworkWatcher_westcentralus -ResourceGroupName NetworkWatcherRG -Location westcentralus
-Remove-AzNetworkWatcher -Name NetworkWatcher_westcentralus -ResourceGroupName NetworkWatcherRG
+ :::image type="content" source="./media/network-watcher-create/delete-network-watcher.png" alt-text="Screenshot showing how to delete a Network Watcher in the Azure portal.":::
+
+5. Enter *yes*, then select **Delete**.
+
+ :::image type="content" source="./media/network-watcher-create/confirm-delete-network-watcher.png" alt-text="Screenshot showing the confirmation page before deleting a Network Watcher in the Azure portal.":::
+
+## Delete a Network Watcher using PowerShell
+
+Use [Remove-AzNetworkWatcher](/powershell/module/az.network/remove-aznetworkwatcher) to delete an instance of Network Watcher:
+
+```azurepowershell-interactive
+Remove-AzNetworkWatcher -Name NetworkWatcher_westus -ResourceGroupName NetworkWatcherRG
+```
+
+## Delete a Network Watcher using the Azure CLI
+
+Use [az network watcher configure](/cli/azure/network/watcher#az-network-watcher-configure) to delete an instance of Network Watcher:
+
+```azurecli-interactive
+az network watcher configure --resource-group NetworkWatcherRG --locations westcentralus --enabled false
```

## Next steps
-Now that you have an instance of Network Watcher, learn about the features available:
+Now that you have an instance of Network Watcher, learn about the available features:
-* [Topology](./view-network-topology.md)
+* [Topology](view-network-topology.md)
* [Packet capture](network-watcher-packet-capture-overview.md) * [IP flow verify](network-watcher-ip-flow-verify-overview.md) * [Next hop](network-watcher-next-hop-overview.md) * [Security group view](network-watcher-security-group-view-overview.md) * [NSG flow logging](network-watcher-nsg-flow-logging-overview.md)
-* [Virtual Network Gateway troubleshooting](network-watcher-troubleshoot-overview.md)
+* [Virtual Network Gateway troubleshooting](network-watcher-troubleshoot-overview.md)
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
Your machine must meet the following requirements:
  - Syslog-ng: 2.1 - 3.22.1
- **Packages**
- - You must have **python 2.7** or **3** installed on the Linux machine.<br>Use the `python --version` or `python3 --version` command to check.
+ - You must have **Python 2.7** or **3** installed on the Linux machine.<br>Use the `python --version` or `python3 --version` command to check.
- **Syslog RFC support** - Syslog RFC 3164
sentinel Use Matching Analytics To Detect Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-matching-analytics-to-detect-threats.md
Part of the Microsoft Threat Intelligence available through matching analytics i
:::image type="content" source="mediTI article.":::
-For more information, see the [MDTI portal](https://ti.defender.microsoft.com) and [What is Microsoft Defender Threat Intelligence?](/../../defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti.md)
+For more information, see the [MDTI portal](https://ti.defender.microsoft.com) and [What is Microsoft Defender Threat Intelligence?](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti)
## Next steps
In this article, you learned how to connect threat intelligence produced by Micr
- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md). - Connect Microsoft Sentinel to [STIX/TAXII threat intelligence feeds](./connect-threat-intelligence-taxii.md). - [Connect threat intelligence platforms](./connect-threat-intelligence-tip.md) to Microsoft Sentinel.-- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel.
+- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel.
service-bus-messaging Service Bus Dotnet How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md
In this section, you'll add code to retrieve messages from the subscription.
// // Create the clients that we'll use for sending and processing messages. // TODO: Replace the <NAMESPACE-CONNECTION-STRING> placeholder
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>">);
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
// // Create the clients that we'll use for sending and processing messages. // TODO: Replace the <NAMESPACE-CONNECTION-STRING> placeholder
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>">);
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
site-recovery Avs Tutorial Prepare Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-prepare-azure.md
Previously updated : 09/29/2020 Last updated : 12/22/2022 -+ # Prepare Azure Site Recovery resources for disaster recovery of Azure VMware Solution VMs
This article is the first tutorial in a series that shows you how to set up disa
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Verify that the Azure account has replication permissions.
> * Create a Recovery Services vault. A vault holds metadata and configuration information for VMs, and other replication components. > * Set up an Azure virtual network (VNet). When Azure VMs are created after failover, they're joined to this network. > [!NOTE]
-> Tutorials show you the simplest deployment path for a scenario. They use default options where possible, and don't show all possible settings and paths. For detailed instructions, review the article in the How To section of the Site Recovery Table of Contents.
+> - Tutorials show you the simplest deployment path for a scenario. They use default options where possible, and don't show all possible settings and paths. For detailed instructions, review the article in the How To section of the Site Recovery Table of Contents.
+> - Some of the concepts of using Azure Site Recovery for Azure VMware Solution overlap with disaster recovery of on-prem VMware VMs and hence documentation will be cross-referenced accordingly.
+
+## Sign in to Azure
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Then sign in to the [Azure portal](https://portal.azure.com).
-> [!NOTE]
-> Some of the concepts of using Azure Site Recovery for Azure VMware Solution overlap with disaster recovery of on-prem VMware VMs and hence documentation will be cross-referenced accordingly.
-## Before you start
+## Prerequisites
+
+**Before you begin, verify the following:**
- [Deploy](../azure-vmware/tutorial-create-private-cloud.md) an Azure VMware Solution private cloud in Azure
- Review the architecture for [VMware](vmware-azure-architecture.md) disaster recovery
- Read common questions for [VMware](vmware-azure-common-questions.md)
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Then sign in to the [Azure portal](https://portal.azure.com).
--
-## Verify account permissions
+**Verify account permissions**
If you just created your free Azure account, you're the administrator of your subscription and you have the permissions you need. If you're not the subscription administrator, work with the administrator to assign the permissions you need. To enable replication for a new virtual machine, you must have permission to:
If you just created your free Azure account, you're the administrator of your su
To complete these tasks your account should be assigned the Virtual Machine Contributor built-in role. In addition, to manage Site Recovery operations in a vault, your account should be assigned the Site Recovery Contributor built-in role.
-## Create a Recovery Services vault
-
-1. From the Azure portal menu, select **Create a resource**, and search the Marketplace for **Recovery**.
-2. Select **Backup and Site Recovery** from the search results, and in the Backup and Site Recovery page, click **Create**.
-3. In the **Create Recovery Services vault** page, select the **Subscription**. We're using **Contoso Subscription**.
-4. In **Resource group**, select an existing resource group or create a new one. For this tutorial we're using **contosoRG**.
-5. In **Vault name**, enter a friendly name to identify the vault. For this set of tutorials we're using **ContosoVMVault**.
-6. In **Region**, select the region in which the vault should be located. We're using **West Europe**.
-7. Select **Review + create**.
+## Create a Recovery Services vault
- ![Screenshot of the Create Recovery Services vault page.](./media/tutorial-prepare-azure/new-vault-settings.png)
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource**.
+1. Search the Azure Marketplace for *Recovery Services*.
+1. Select **Backup and Site Recovery** from the search results. Next, select **Create**.
+1. In the **Create Recovery Services vault** page, under the **Basics** > **Project details** section, do the following:
+ 1. Under **Subscription**, select the subscription in which you want to create the new recovery services vault.
+ 1. In **Resource group**, select an existing resource group or create a new one. For example, **contosoRG**.
- The new vault will now be listed in **Dashboard** > **All resources**, and on the main **Recovery Services vaults** page.
+1. In the **Create Recovery Services vault** page, under **Basics** > **Instance details** section, do the following:
+ 1. In **Vault name**, enter a friendly name to identify the vault. For example, **ContosoVMVault**.
+ 1. In **Region**, select the region where the vault should be located. For example, **(Europe) West Europe**.
+ 1. Select **Review + create** > **Create** to create the recovery vault.
+
+> [!TIP]
+> To quickly access the vault from the dashboard, select **Pin to dashboard**.
-## Set up an Azure network
+The new vault appears on **Dashboard** > **All resources**, and on the main **Recovery Services vaults** page.
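+
+If you prefer scripting, an equivalent Azure PowerShell sketch (assuming the Az.RecoveryServices module and the example names used in this tutorial):
+
+```azurepowershell-interactive
+# Create the resource group if it doesn't already exist, then create the vault
+New-AzResourceGroup -Name contosoRG -Location westeurope
+New-AzRecoveryServicesVault -Name ContosoVMVault -ResourceGroupName contosoRG -Location westeurope
+```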
- Azure VMware Solution VMs are replicated to Azure managed disks. When failover occurs, Azure VMs are created from these managed disks, and joined to the Azure network you specify in this procedure.
-1. In the [Azure portal](https://portal.azure.com), select **Create a resource** > **Networking** > **Virtual network**.
-2. Keep **Resource Manager** selected as the deployment model.
-3. In **Name**, enter a network name. The name must be unique within the Azure resource group. We're using **ContosoASRnet** in this tutorial.
-4. In **Address space**, enter the virtual network's address range in CDR notation. We're using **10.1.0.0/24**.
-5. In **Subscription**, select the subscription in which to create the network.
-6. Specify the **Resource group** in which the network will be created. We're using the existing resource group **contosoRG**.
-7. In **Location**, select the same region as that in which the Recovery Services vault was created. In our tutorial it's **West Europe**. The network must be in the same region as the vault.
-8. In **Address range**, enter the range for the network. We're using **10.1.0.0/24**, and not using a subnet.
-9. We're leaving the default options of basic DDoS protection, with no service endpoint, or firewall on the network.
-9. Select **Create**.
- ![Screenshot of the Create virtual network options.](media/tutorial-prepare-azure/create-network.png)
-The virtual network takes a few seconds to create. After it's created, you'll see it in the Azure portal dashboard.
+## Set up an Azure network
+Azure VMware Solution VMs are replicated to Azure managed disks. When failover occurs, Azure VMs are created from these managed disks, and joined to the Azure network you specify in this procedure.
+
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource**.
+1. Under categories, select **Networking** > **Virtual network**.
+1. In **Create virtual network** page, under the **Basics** tab, do the following:
+ 1. In **Subscription**, select the subscription in which to create the network.
+ 2. In **Resource group**, select the resource group in which to create the network. For this tutorial, use the existing resource group **contosoRG**.
+ 1. In **Virtual network name**, enter a network name. The name must be unique within the Azure resource group. For example, **ContosoASRnet**.
+ 1. In **Region**, choose **(Europe) West Europe**. The network must be in the same region as the Recovery Services vault.
+
+ :::image type="Protection state" source="media/tutorial-prepare-azure/create-network.png" alt-text="Screenshot of the Create virtual network options.":::
+
+1. In **Create virtual network** > **IP addresses** tab, do the following:
+ 1. As there's no subnet for this network, you will first delete the pre-existing address range. To do so, select the ellipsis (...), under available IP address range, then select **Delete address space**.
+
+ :::image type="Protection state" source="media/tutorial-prepare-azure/delete-ip-address.png" alt-text="Screenshot of the delete address space.":::
+ 1. After deleting the pre-existing address range, select **Add an IP address space**.
+
+ :::image type="Protection state" source="media/tutorial-prepare-azure/add-ip-address-space.png" alt-text="Screenshot of the adding IP.":::
+
+   1. In **Starting address**, enter **10.0.0.0**.
+ 1. Under **Address space size**, select **/24 (256 addresses)**.
+ 1. Select **Add**.
+
+ :::image type="Content" source="media/tutorial-prepare-azure/homepage-ip-address.png" alt-text="Screenshot of the add virtual network options.":::
+1. Select **Review + create** > **Create** to create a new virtual network.
+The virtual network takes a few seconds to create. After it's created, you'll see it in the Azure portal dashboard.
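+
+If you prefer scripting, an equivalent Azure CLI sketch (assuming the example names and address range used in this tutorial):
+
+```azurecli-interactive
+# Create the virtual network with a 10.0.0.0/24 address space and no subnets
+az network vnet create \
+  --name ContosoASRnet \
+  --resource-group contosoRG \
+  --location westeurope \
+  --address-prefixes 10.0.0.0/24
+```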
## Next steps
-> [!div class="nextstepaction"]
-> [Prepare infrastructure](avs-tutorial-prepare-avs.md)
+
+Learn more about:
+- [Prepare infrastructure](avs-tutorial-prepare-avs.md)
- [Learn about](../virtual-network/virtual-networks-overview.md) Azure networks.
- [Learn about](../virtual-machines/managed-disks-overview.md) managed disks.
site-recovery Tutorial Prepare Azure For Hyperv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-prepare-azure-for-hyperv.md
Title: Prepare Azure for Hyper-V disaster recovery with Azure Site Recovery
description: Learn how to prepare Azure for disaster recovery of on-premises Hyper-V VMs by using Azure Site Recovery Previously updated : 12/21/2022 Last updated : 12/22/2022
This tutorial is the first in a series that describes how to set up disaster rec
This tutorial shows you how to prepare Azure components when you want to replicate on-premises VMs (Hyper-V) to Azure. You'll learn how to:

> [!div class="checklist"]
-> * Verify that your Azure account has replication permissions.
> * Create an Azure storage account, which stores images of replicated machines.
> * Create a Recovery Services vault, which stores metadata and configuration information for VMs and other replication components.
> * Set up an Azure network. When Azure VMs are created after failover, they're joined to this network.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
## Sign in to Azure
Images of replicated machines are held in Azure storage. Azure VMs are created f
1. Select **Review** and review your settings.
1. Select **Create**.
+ :::image type="Protection state" source="media/tutorial-prepare-azure/create-storage-account.png" alt-text="Screenshot of the Create a storage account options.":::
> [!NOTE]
Images of replicated machines are held in Azure storage. Azure VMs are created f
> [!NOTE]
> To quickly access the vault from the dashboard, select **Pin to dashboard**.
+The new vault appears on **Dashboard** > **All resources**, and on the main **Recovery Services vaults** page.
+ :::image type="content" source="./media/tutorial-prepare-azure/new-vault-settings.png" alt-text="Screenshot of the Create Recovery Services vault page.":::
-The new vault appears on **Dashboard** > **All resources**, and on the main **Recovery Services vaults** page.
## Set up an Azure network
When Azure VMs are created from storage after failover, they're joined to this n
2. In **Resource group**, select the resource group in which to create the network. For this tutorial, use the existing resource group **contosoRG**.
1. In **Virtual network name**, enter a network name. The name must be unique within the Azure resource group. For example, **ContosoASRnet**.
1. In **Region**, choose **(Europe) West Europe**. The network must be in the same region as the Recovery Services vault.
- :::image type="Protection state" source="media/tutorial-prepare-azure/create-network.png" alt-text="Screenshot of the Create virtual network options.":::
+
+ :::image type="Protection state" source="media/tutorial-prepare-azure/create-network.png" alt-text="Screenshot of the Create virtual network options.":::
-1. In **Create storage account** page, under the **IP addresses** tab, do the following:
+1. In **Create virtual network** > **IP addresses** tab, do the following:
1. As there's no subnet for this network, you will first delete the pre-existing address range. To do so, select the ellipsis (...), under available IP address range, then select **Delete address space**.
+
:::image type="content" source="media/tutorial-prepare-azure/delete-ip-address.png" alt-text="Screenshot of the delete address space.":::
1. After deleting the pre-existing address range, select **Add an IP address space**.
+
:::image type="content" source="media/tutorial-prepare-azure/add-ip-address-space.png" alt-text="Screenshot of adding an IP address space.":::
1. In **Starting address**, enter **10.0.0.0**.
1. Under **Address space size**, select **/24 (256 addresses)**.
1. Select **Add**.
+
:::image type="content" source="media/tutorial-prepare-azure/homepage-ip-address.png" alt-text="Screenshot of the add virtual network options.":::
1. Select **Review + create** > **Create** to create a new virtual network.
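The same network can be created from code. A hedged sketch with `@azure/arm-network` follows; the resource names and the 10.0.0.0/24 range come from the tutorial's examples, while the `default` subnet name is an assumption.

```typescript
// Hedged sketch: create the failover network rather than using the portal.
import { DefaultAzureCredential } from "@azure/identity";
import { NetworkManagementClient } from "@azure/arm-network";

async function createFailoverNetwork(subscriptionId: string): Promise<void> {
  const client = new NetworkManagementClient(new DefaultAzureCredential(), subscriptionId);
  await client.virtualNetworks.beginCreateOrUpdateAndWait("contosoRG", "ContosoASRnet", {
    location: "westeurope", // must match the Recovery Services vault region
    addressSpace: { addressPrefixes: ["10.0.0.0/24"] },
    subnets: [{ name: "default", addressPrefix: "10.0.0.0/24" }],
  });
}
```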
site-recovery Tutorial Prepare Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-prepare-azure.md
Title: Prepare Azure for on-premises disaster recovery with Azure Site Recovery
description: Learn how to prepare Azure for disaster recovery of on-premises machines using Azure Site Recovery.
- Previously updated : 12/21/2022
+ Last updated : 12/22/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
- Review the architecture for [VMware](vmware-azure-architecture.md), [Hyper-V](hyper-v-azure-architecture.md), and [physical server](physical-azure-architecture.md) disaster recovery.
- Read common questions for [VMware](vmware-azure-common-questions.md) and [Hyper-V](hyper-v-azure-common-questions.md).

### Verify account permissions

If you just created your free Azure account, you're the administrator of your subscription and you have the permissions you need. If you're not the subscription administrator, work with the administrator to assign the permissions you need. To enable replication for a new virtual machine, you must have permission to:
To complete these tasks your account should be assigned the Virtual Machine Cont
1. In **Vault name**, enter a friendly name to identify the vault. For example, **ContosoVMVault**.
1. In **Region**, select the region where the vault should be located. For example, **(Europe) West Europe**.
1. Select **Review + create** > **Create** to create the recovery vault.
- :::image type="content" source="./media/tutorial-prepare-azure/new-vault-settings.png" alt-text="Screenshot of the Create Recovery Services vault page.":::
+ :::image type="content" source="./media/tutorial-prepare-azure/new-vault-settings.png" alt-text="Screenshot of the Create Recovery Services vault page.":::
The new vault will now be listed in **Dashboard** > **All resources**, and on the main **Recovery Services vaults** page.
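For completeness, a hedged sketch of the same vault creation with `@azure/arm-recoveryservices` follows; names come from the tutorial's examples, and the method name reflects recent SDK versions, so verify it against the version you install.

```typescript
// Hedged sketch: create the ContosoVMVault Recovery Services vault from code.
import { DefaultAzureCredential } from "@azure/identity";
import { RecoveryServicesClient } from "@azure/arm-recoveryservices";

async function createRecoveryVault(subscriptionId: string): Promise<void> {
  const client = new RecoveryServicesClient(new DefaultAzureCredential(), subscriptionId);
  await client.vaults.beginCreateOrUpdateAndWait("contosoRG", "ContosoVMVault", {
    location: "westeurope",
    sku: { name: "RS0", tier: "Standard" }, // standard vault SKU
    properties: {},
  });
}
```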
On-premises machines are replicated to Azure managed disks. When failover occurs
2. In **Resource group**, select the resource group in which to create the network. For this tutorial, use the existing resource group **contosoRG**.
1. In **Virtual network name**, enter a network name. The name must be unique within the Azure resource group. For example, **ContosoASRnet**.
1. In **Region**, choose **(Europe) West Europe**. The network must be in the same region as the Recovery Services vault.
- :::image type="Protection state" source="media/tutorial-prepare-azure/create-network.png" alt-text="Screenshot of the Create virtual network options.":::
+
+ :::image type="content" source="media/tutorial-prepare-azure/create-network.png" alt-text="Screenshot of the Create virtual network options.":::
-1. In **Create storage account** page, under the **IP addresses** tab, do the following:
+1. In **Create virtual network** > **IP addresses** tab, do the following:
1. As there's no subnet for this network, first delete the pre-existing address range. To do so, select the ellipsis (...) under the available IP address range, then select **Delete address space**.
- :::image type="Protection state" source="media/tutorial-prepare-azure/delete-ip-address.png" alt-text="Screenshot of the delete address space.":::
+
+ :::image type="content" source="media/tutorial-prepare-azure/delete-ip-address.png" alt-text="Screenshot of the delete address space.":::
+ 1. After deleting the pre-existing address range, select **Add an IP address space**.
- :::image type="Protection state" source="media/tutorial-prepare-azure/add-ip-address-space.png" alt-text="Screenshot of the adding IP.":::
+
+ :::image type="content" source="media/tutorial-prepare-azure/add-ip-address-space.png" alt-text="Screenshot of adding an IP address space.":::
1. In **Starting address**, enter **10.0.0.0**.
1. Under **Address space size**, select **/24 (256 addresses)**.
1. Select **Add**.
- :::image type="Content" source="media/tutorial-prepare-azure/homepage-ip-address.png" alt-text="Screenshot of the add virtual network options.":::
+
+ :::image type="content" source="media/tutorial-prepare-azure/homepage-ip-address.png" alt-text="Screenshot of the add virtual network options.":::
1. Select **Review + create** > **Create** to create a new virtual network.
static-web-apps Assign Roles Microsoft Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/assign-roles-microsoft-graph.md
In this tutorial, you learn to:
- Deploy a static web app.
- Create an Azure Active Directory app registration.
- Set up custom authentication with Azure Active Directory.
-- Configure a [serverless function](authentication-authorization.md?tabs=function#role-management) that queries the user's Active Directory group membership and returns a list of custom roles.
+- Configure a [serverless function](authentication-custom.md#manage-roles) that queries the user's Active Directory group membership and returns a list of custom roles.
> [!NOTE]
-> This tutorial requires you to [use a function to assign roles](authentication-authorization.md?tabs=function#role-management). Function-based role management is currently in preview.
+> This tutorial requires you to [use a function to assign roles](authentication-custom.md#manage-roles). Function-based role management is currently in preview.
## Prerequisites
static-web-apps Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/authentication-authorization.md
Title: Authentication and authorization for Azure Static Web Apps
-description: Learn to use different authorization providers to secure your static app.
+ Title: Authenticate and authorize Static Web Apps
+description: Learn to use different authorization providers to secure your Azure Static Web Apps.
- Previously updated : 10/08/2021
+ Last updated : 12/22/2022
-# Authentication and authorization for Azure Static Web Apps
+# Authenticate and authorize Static Web Apps
-Azure Static Web Apps provides a streamlined authentication experience. By default, you have access to a series of pre-configured providers, or the option to [register a custom provider](./authentication-custom.md).
+Azure Static Web Apps provides a streamlined authentication experience, where no other actions or configurations are required to use GitHub, Twitter, and Azure Active Directory (Azure AD) for authentication.
-- Any user can authenticate with an enabled provider.
-- Once logged in, users belong to the `anonymous` and `authenticated` roles by default.
-- Authorized users gain access to restricted [routes](configuration.md#routes) by rules defined in the [staticwebapp.config.json file](./configuration.md).
-- Users are assigned custom roles using the built-in [invitations](#invitations) system.
-- Users can be programmatically assigned custom roles at login by an API function.
-- All authentication providers are enabled by default.
- - To restrict an authentication provider, [block access](#block-an-authentication-provider) with a custom route rule. Configuring a custom provider also disables pre-configured providers.
-- Pre-configured providers include:
- - Azure Active Directory<sup>1</sup>
- - GitHub
- - Twitter
+In this article, learn about default behavior, how to set up sign-in and sign-out, how to block an authentication provider, and more.
+
+You can [register a custom provider](./authentication-custom.md), which disables all pre-configured providers.
-<sup>1</sup> The preconfigured Azure Active Directory provider allows any Microsoft Account to sign in. To restrict login to a specific Active Directory tenant, configure a [custom Azure Active Directory provider](authentication-custom.md?tabs=aad).
+## Prerequisites
-The subjects of authentication and authorization significantly overlap with routing concepts, which are detailed in the [application configuration guide](configuration.md#routes).
+Be aware of the following defaults and resources for authentication and authorization with Azure Static Web Apps.
-## System folder
+**Defaults:**
+- Any user can authenticate with a pre-configured provider
+ - GitHub
+ - Twitter
+ - Azure Active Directory (Azure AD)
+ - To restrict an authentication provider, [block access](#block-an-authentication-provider) with a custom route rule
+- After sign-in, users belong to the `anonymous` and `authenticated` roles. For more information about roles, see [Manage roles](authentication-custom.md#manage-roles)
-Azure Static Web Apps uses the `/.auth` system folder to provide access to authorization-related APIs. Rather than exposing any of the routes under the `/.auth` folder directly to end users, consider creating [routing rules](configuration.md#routes) to create friendly URLs.
+**Resources:**
+- Define rules in the [staticwebapp.config.json file](./configuration.md) for authorized users to gain access to restricted [routes](configuration.md#routes)
+- Assign users custom roles using the built-in [invitations system](authentication-custom.md#manage-roles)
+- Programmatically assign users custom roles at sign-in with an [API function](apis-overview.md)
+- Understand that authentication and authorization significantly overlap with routing concepts, which are detailed in the [Application configuration guide](configuration.md)
+- Restrict sign-in to a specific Azure AD tenant by [configuring a custom Azure AD provider](authentication-custom.md?tabs=aad). The pre-configured Azure AD provider allows any Microsoft account to sign in.
+## Set up sign-in
-## Login
+Azure Static Web Apps uses the `/.auth` system folder to provide access to authorization-related APIs. Rather than expose any of the routes under the `/.auth` folder directly to end users, create [routing rules](configuration.md#routes) for friendly URLs.
Use the following table to find the provider-specific route.
-| Authorization provider | Login route |
+| Authorization provider | Sign in route |
| - | -- |
-| Azure Active Directory | `/.auth/login/aad` |
+| Azure AD | `/.auth/login/aad` |
| GitHub | `/.auth/login/github` |
| Twitter | `/.auth/login/twitter` |
-For example, to log in with GitHub you could include a link like the following snippet:
+For example, to sign in with GitHub, you could include something similar to the following link.
```html
<a href="/.auth/login/github">Login</a>
```
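To confirm on the client which provider a visitor signed in with, you can query the platform's `/.auth/me` endpoint. A minimal TypeScript sketch follows; the helper and interface names are ours, while the payload shape follows the documented client principal.

```typescript
// Minimal sketch: read the signed-in user's client principal from /.auth/me.
// Returns null for anonymous visitors.
interface ClientPrincipal {
  identityProvider: string;
  userId: string;
  userDetails: string;
  userRoles: string[];
}

async function getClientPrincipal(): Promise<ClientPrincipal | null> {
  const response = await fetch("/.auth/me");
  const payload = await response.json();
  return payload.clientPrincipal ?? null;
}
```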
-If you chose to support more than one provider, then you need to expose a provider-specific link for each on your website.
-
-You can use a [route rule](./configuration.md#routes) to map a default provider to a friendly route like _/login_.
+If you chose to support more than one provider, expose a provider-specific link for each on your website.
+Use a [route rule](./configuration.md#routes) to map a default provider to a friendly route like _/login_.
```json
{
  "route": "/login",
  "redirect": "/.auth/login/github"
}
```
-### Post login redirect
-
-If you want a user to return to a specific page after login, provide a full qualified URL in `post_login_redirect_uri` query string parameter.
+### Set up post-sign-in redirect
-For example:
+Return a user to a specific page after they sign in by providing a fully qualified URL in the `post_login_redirect_uri` query string parameter, like in the following example.
```html
<a href="/.auth/login/github?post_login_redirect_uri=https://zealous-water.azurestaticapps.net/success">Login</a>
```
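When the return page isn't known ahead of time, a small client-side helper (illustrative only, not from the article) can build the same link from the current location.

```typescript
// Illustrative helper: sign in with GitHub and return to the page the
// visitor is currently on, via the post_login_redirect_uri parameter.
function signInAndReturn(): void {
  const target = encodeURIComponent(window.location.href);
  window.location.assign(`/.auth/login/github?post_login_redirect_uri=${target}`);
}
```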
-Additionally, you can redirect unauthenticated users back to the referring page after they log in. To configure this behavior, create a [response override](configuration.md#response-overrides) rule that sets `post_login_redirect_uri` to `.referrer`.
-
-For example:
+You can also redirect unauthenticated users back to the referring page after they sign in. To configure this behavior, create a [response override](configuration.md#response-overrides) rule that sets `post_login_redirect_uri` to `.referrer`, like in the following example.
```json
{
  "responseOverrides": {
    "401": {
      "redirect": "/.auth/login/aad?post_login_redirect_uri=.referrer",
      "statusCode": 302
    }
  }
}
```
-## Logout
+## Set up sign-out
-The `/.auth/logout` route logs users out from the website. You can add a link to your site navigation to allow the user to log out as shown in the following example.
+The `/.auth/logout` route signs users out from the website. You can add a link to your site navigation to allow the user to sign out, like in the following example.
```html
<a href="/.auth/logout">Log out</a>
```
-You can use a [route rule](./configuration.md#routes) to map a friendly route like _/logout_.
+Use a [route rule](./configuration.md#routes) to map a friendly route like _/logout_.
```json
{
  "route": "/logout",
  "redirect": "/.auth/logout"
}
```
-### Post logout redirect
+### Set up post-sign-out redirect
-If you want a user to return to a specific page after logout, provide a URL in `post_logout_redirect_uri` query string parameter.
+To return a user to a specific page after they sign out, provide a URL in `post_logout_redirect_uri` query string parameter.
## Block an authentication provider
-You may want to restrict your app from using an authentication provider. For instance, your app may want to standardize only on [providers that expose email addresses](#provider-user-details).
+Because all authentication providers are enabled by default, you may want to restrict your app from using one of them. For instance, your app may want to standardize only on [providers that expose email addresses](authentication-custom.md#create-an-invitation).
To block a provider, you can create [route rules](configuration.md#routes) to return a 404 status code for requests to the blocked provider-specific route. For example, to restrict Twitter as provider, add the following route rule.
```json
{
  "route": "/.auth/login/twitter",
  "statusCode": 404
}
```
-## Roles
-
-Every user who accesses a static web app belongs to one or more roles. There are two built-in roles that users can belong to:
-- **anonymous**: All users automatically belong to the _anonymous_ role.
-- **authenticated**: All users who are logged in belong to the _authenticated_ role.
-
-Beyond the built-in roles, you can assign custom roles to users, and reference them in the _staticwebapp.config.json_ file.
-
-## Role management
-
-# [Invitations](#tab/invitations)
-
-### Add a user to a role
-
-To add a user to a role, you generate invitations that allow you to associate users to specific roles. Roles are defined and maintained in the _staticwebapp.config.json_ file.
-
-<a name="invitations" id="invitations"></a>
-
-#### Create an invitation
-
-Invitations are specific to individual authorization-providers, so consider the needs of your app as you select which providers to support. Some providers expose a user's email address, while others only provide the site's username.
-
-<a name="provider-user-details" id="provider-user-details"></a>
-
-| Authorization provider | Exposes a user's |
-| - | - |
-| Azure Active Directory | email address |
-| GitHub | username |
-| Twitter | username |
-
-1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
-1. Under _Settings_, select **Role Management**.
-2. Select **Invite**.
-3. Select an _Authorization provider_ from the list of options.
-4. Add either the username or email address of the recipient in the _Invitee details_ box.
- - For GitHub and Twitter, you enter the username. For all others, enter the recipient's email address.
-5. Select the domain of your static site from the _Domain_ drop-down.
- - The domain you select is the domain that appears in the invitation. If you have a custom domain associated with your site, you probably want to choose the custom domain.
-6. Add a comma-separated list of role names in the _Role_ box.
-7. Enter the maximum number of hours you want the invitation to remain valid.
- - The maximum possible limit is 168 hours, which is 7 days.
-8. Select **Generate**.
-9. Copy the link from the _Invite link_ box.
-10. Email the invitation link to the person you're granting access to your app.
-
-When the user selects the link in the invitation, they're prompted to log in with their corresponding account. Once successfully logged-in, the user is associated with the selected roles.
-
-> [!CAUTION]
-> Make sure your route rules don't conflict with your selected authentication providers. Blocking a provider with a route rule would prevent users from accepting invitations.
-
-### Update role assignments
-
-1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
-1. Under _Settings_, select **Role Management**.
-2. Select the user in the list.
-3. Edit the list of roles in the _Role_ box.
-4. Select **Update**.
-
-### Remove user
-
-1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
-1. Under _Settings_, select **Role Management**.
-1. Locate the user in the list.
-1. Check the checkbox on the user's row.
-2. Select **Delete**.
-
-As you remove a user, keep in mind the following items:
-
-1. Removing a user invalidates their permissions.
-1. Worldwide propagation may take a few minutes.
-1. If the user is added back to the app, the [`userId` changes](user-information.md).
-
-# [Function (preview)](#tab/function)
-
-Instead of using the built-in invitations system, you can use a serverless function to programmatically assign roles to users when they log in.
-
-To assign custom roles in a function, you can define an API function that is automatically called after each time a user successfully authenticates with an identity provider. The function is passed the user's information from the provider. It must return a list of custom roles that are assigned to the user.
-
-Example uses of this function include:
-- Query a database to determine which roles a user should be assigned
-- Call the [Microsoft Graph API](https://developer.microsoft.com/graph) to determine a user's roles based on their Active Directory group membership
-- Determine a user's roles based on claims returned by the identity provider
-
-> [!NOTE]
-> The ability to assign roles via a function is only available when [custom authentication](authentication-custom.md) is configured.
->
-> When this feature is enabled, any roles assigned via the built-in invitations system are ignored.
+## Remove personal data
-### Configure a function for assigning roles
-
-To configure Static Web Apps to use an API function as the role assignment function, add a `rolesSource` property to the `auth` section of your app's [configuration file](configuration.md). The value of the `rolesSource` property is the path to the API function.
-
-```json
-{
- "auth": {
- "rolesSource": "/api/GetRoles",
- "identityProviders": {
- // ...
- }
- }
-}
-```
-
-> [!NOTE]
-> Once configured, the role assignment function can no longer be accessed by external HTTP requests.
-
-### Create a function for assigning roles
-
-After defining the `rolesSource` property in your app's configuration, add an [API function](apis-functions.md) in your static web app at the path you specified. You can use a managed function app or a bring your own function app.
-
-Each time a user successfully authenticates with an identity provider, the specified function is called via the POST method. The function is passed a JSON object in the request body that contains the user's information from the provider. For some identity providers, the user information also includes an `accessToken` that the function can use to make API calls using the user's identity.
-
-This is an example payload from Azure Active Directory:
-
-```json
-{
- "identityProvider": "aad",
- "userId": "72137ad3-ae00-42b5-8d54-aacb38576d76",
- "userDetails": "ellen@contoso.com",
- "claims": [
- {
- "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
- "val": "ellen@contoso.com"
- },
- {
- "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname",
- "val": "Contoso"
- },
- {
- "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname",
- "val": "Ellen"
- },
- {
- "typ": "name",
- "val": "Ellen Contoso"
- },
- {
- "typ": "http://schemas.microsoft.com/identity/claims/objectidentifier",
- "val": "7da753ff-1c8e-4b5e-affe-d89e5a57fe2f"
- },
- {
- "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
- "val": "72137ad3-ae00-42b5-8d54-aacb38576d76"
- },
- {
- "typ": "http://schemas.microsoft.com/identity/claims/tenantid",
- "val": "3856f5f5-4bae-464a-9044-b72dc2dcde26"
- },
- {
- "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
- "val": "ellen@contoso.com"
- },
- {
- "typ": "ver",
- "val": "1.0"
- }
- ],
- "accessToken": "eyJ0eXAiOiJKV..."
-}
-```
-
-The function can use the user's information to determine which roles to assign to the user. It must return an HTTP 200 response with a JSON body containing a list of custom role names to assign to the user.
-
-For example, to assign the user to the `Reader` and `Contributor` roles, return the following response:
-
-```json
-{
- "roles": [
- "Reader",
- "Contributor"
- ]
-}
-```
-
-If you do not want to assign any additional roles to the user, return an empty `roles` array.
-
-To learn more, see [Tutorial: Assign custom roles with a function and Microsoft Graph](assign-roles-microsoft-graph.md).
---
-## Remove personal identifying information
-
-When you grant consent to an application as an end user, the application has access to your email address or your username depending on the identity provider. Once this information is provided, the owner of the application decides how to manage personally identifying information.
+When you grant consent to an application as an end user, the application has access to your email address or username, depending on the identity provider. Once this information is provided, the owner of the application can decide how to manage personal data.
End users need to contact administrators of individual web apps to revoke this information from their systems.
-To remove personally identifying information from the Azure Static Web Apps platform, and prevent the platform from providing this information on future requests, submit a request using the URL:
+To remove personal data from the Azure Static Web Apps platform, and prevent the platform from providing this information on future requests, submit a request using the following URL:
```url
https://identity.azurestaticapps.net/.auth/purge/<AUTHENTICATION_PROVIDER_NAME>
```
-To prevent the platform from providing this information on future requests to individual apps, submit a request to the following URL:
+To prevent the platform from providing this information on future requests to individual apps, submit a request using the following URL:
```url
https://<WEB_APP_DOMAIN_NAME>/.auth/purge/<AUTHENTICATION_PROVIDER_NAME>
```
-Note that if you are using Azure Active Directory, use `aad` as the value for the `<AUTHENTICATION_PROVIDER_NAME>` placeholder.
-
-## Restrictions
+If you're using Azure AD, use `aad` as the value for the `<AUTHENTICATION_PROVIDER_NAME>` placeholder.
-See the [Quotas article](quotas.md) for general restrictions and limitations.
+> [!TIP]
+> For information about general restrictions and limitations, see [Quotas](quotas.md).
## Next steps

> [!div class="nextstepaction"]
-> [Access user authentication and authorization data](user-information.md)
+> [Use routes to set allowed roles to control page access](configuration.md)
+
+## Related articles
+
+- [Manage roles with custom authentication](authentication-custom.md#manage-roles)
+- [Application configuration guide, Routing concepts](configuration.md)
+- [Access user authentication and authorization data](user-information.md)
static-web-apps Authentication Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/authentication-custom.md
To use a custom identity provider, use the following URL patterns.
If you are using Azure Active Directory, use `aad` as the value for the `<PROVIDER_NAME_IN_CONFIG>` placeholder.
+## Manage roles
+
+Every user who accesses a static web app belongs to one or more roles. There are two built-in roles that users can belong to:
+
+- **anonymous**: All users automatically belong to the _anonymous_ role.
+- **authenticated**: All users who are signed in belong to the _authenticated_ role.
+
+Beyond the built-in roles, you can assign custom roles to users, and reference them in the _staticwebapp.config.json_ file.
+
+# [Invitations](#tab/invitations)
+
+### Add a user to a role
+
+To add a user to a role, you generate invitations that allow you to associate users to specific roles. Roles are defined and maintained in the _staticwebapp.config.json_ file.
+
+<a name="invitations" id="invitations"></a>
+
+#### Create an invitation
+
+Invitations are specific to individual authorization-providers, so consider the needs of your app as you select which providers to support. Some providers expose a user's email address, while others only provide the site's username.
+
+<a name="provider-user-details" id="provider-user-details"></a>
+
+| Authorization provider | Exposes |
+| - | - |
+| Azure AD | email address |
+| GitHub | username |
+| Twitter | username |
+
+Do the following steps to create an invitation.
+
+1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
+2. Under _Settings_, select **Role Management**.
+3. Select **Invite**.
+4. Select an _Authorization provider_ from the list of options.
+5. Add either the username or email address of the recipient in the _Invitee details_ box.
+ - For GitHub and Twitter, enter the username. For all others, enter the recipient's email address.
+6. Select the domain of your static site from the _Domain_ drop-down menu.
+ - The domain you select is the domain that appears in the invitation. If you have a custom domain associated with your site, choose the custom domain.
+7. Add a comma-separated list of role names in the _Role_ box.
+8. Enter the maximum number of hours you want the invitation to remain valid.
+ - The maximum possible limit is 168 hours, which is seven days.
+9. Select **Generate**.
+10. Copy the link from the _Invite link_ box.
+11. Email the invitation link to the user that you're granting access to.
+
+When the user selects the link in the invitation, they're prompted to sign in with their corresponding account. Once successfully signed in, the user is associated with the selected roles.
+
+> [!CAUTION]
+> Make sure your route rules don't conflict with your selected authentication providers. Blocking a provider with a route rule prevents users from accepting invitations.
+
+### Update role assignments
+
+1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
+1. Under _Settings_, select **Role Management**.
+2. Select the user in the list.
+3. Edit the list of roles in the _Role_ box.
+4. Select **Update**.
+
+### Remove user
+
+1. Go to a Static Web Apps resource in the [Azure portal](https://portal.azure.com).
+1. Under _Settings_, select **Role Management**.
+1. Locate the user in the list.
+1. Check the checkbox on the user's row.
+2. Select **Delete**.
+
+As you remove a user, keep in mind the following items:
+
+- Removing a user invalidates their permissions.
+- Worldwide propagation may take a few minutes.
+- If the user is added back to the app, the [`userId` changes](user-information.md).
+
+# [Function (preview)](#tab/function)
+
+Instead of using the built-in invitations system, you can use a serverless function to programmatically assign roles to users when they sign in.
+
+To assign custom roles in a function, define an API function that is automatically called each time a user successfully authenticates with an identity provider. The function is passed the user's information from the provider. It must return a list of custom roles that are assigned to the user.
+
+Example uses of this function include:
+
+- Query a database to determine which roles a user should be assigned
+- Call the [Microsoft Graph API](https://developer.microsoft.com/graph) to determine a user's roles based on their Active Directory group membership (see the sketch after this list)
+- Determine a user's roles based on claims returned by the identity provider
+
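For the Microsoft Graph case, here's a hedged TypeScript sketch. It assumes the payload's `accessToken` is valid for Graph calls, and the group-to-role mapping is made up for illustration.

```typescript
// Sketch only: translate Azure AD group membership into custom roles by
// calling Microsoft Graph with the accessToken from the request payload.
async function rolesFromGroups(accessToken: string): Promise<string[]> {
  const response = await fetch("https://graph.microsoft.com/v1.0/me/memberOf", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const { value } = await response.json();

  // Hypothetical mapping from Azure AD group object IDs to app roles.
  const roleByGroupId: Record<string, string> = {
    "11111111-2222-3333-4444-555555555555": "admin",
  };

  return (value as Array<{ id: string }>)
    .map((group) => roleByGroupId[group.id])
    .filter((role): role is string => Boolean(role));
}
```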
+> [!NOTE]
+> The ability to assign roles via a function is only available when [custom authentication](authentication-custom.md) is configured.
+>
+> When this feature is enabled, any roles assigned via the built-in invitations system are ignored.
+
+### Configure a function for assigning roles
+
+To configure Static Web Apps to use an API function as the role assignment function, add a `rolesSource` property to the `auth` section of your app's [configuration file](configuration.md). The value of the `rolesSource` property is the path to the API function.
+
+```json
+{
+ "auth": {
+ "rolesSource": "/api/GetRoles",
+ "identityProviders": {
+ // ...
+ }
+ }
+}
+```
+
+> [!NOTE]
+> Once configured, the role assignment function can no longer be accessed by external HTTP requests.
+
+### Create a function for assigning roles
+
+After you define the `rolesSource` property in your app's configuration, add an [API function](apis-functions.md) in your static web app at the specified path. You can use a managed function app or [bring your own function app](functions-bring-your-own.md).
+
+Each time a user successfully authenticates with an identity provider, the specified function is called via the POST method. The function receives a JSON object in the request body that contains the user's information from the provider. For some identity providers, the user information also includes an `accessToken` that the function can use to make API calls using the user's identity.
+
+See the following example payload from Azure AD:
+
+```json
+{
+ "identityProvider": "aad",
+ "userId": "72137ad3-ae00-42b5-8d54-aacb38576d76",
+ "userDetails": "ellen@contoso.com",
+ "claims": [
+ {
+ "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
+ "val": "ellen@contoso.com"
+ },
+ {
+ "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname",
+ "val": "Contoso"
+ },
+ {
+ "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname",
+ "val": "Ellen"
+ },
+ {
+ "typ": "name",
+ "val": "Ellen Contoso"
+ },
+ {
+ "typ": "http://schemas.microsoft.com/identity/claims/objectidentifier",
+ "val": "7da753ff-1c8e-4b5e-affe-d89e5a57fe2f"
+ },
+ {
+ "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
+ "val": "72137ad3-ae00-42b5-8d54-aacb38576d76"
+ },
+ {
+ "typ": "http://schemas.microsoft.com/identity/claims/tenantid",
+ "val": "3856f5f5-4bae-464a-9044-b72dc2dcde26"
+ },
+ {
+ "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
+ "val": "ellen@contoso.com"
+ },
+ {
+ "typ": "ver",
+ "val": "1.0"
+ }
+ ],
+ "accessToken": "eyJ0eXAiOiJKV..."
+}
+```
+
+The function can use the user's information to determine which roles to assign to the user. It must return an HTTP 200 response with a JSON body containing a list of custom role names to assign to the user.
+
+For example, to assign the user to the `Reader` and `Contributor` roles, return the following response:
+
+```json
+{
+ "roles": [
+ "Reader",
+ "Contributor"
+ ]
+}
+```
+
+If you don't want to assign any other roles to the user, return an empty `roles` array.
+
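Putting the pieces together, here's a minimal TypeScript sketch of such a role-assignment function. It assumes the Node.js Azure Functions model and the `/api/GetRoles` path from the configuration example above; the domain rule is purely illustrative.

```typescript
// Sketch of a GetRoles API function: receives the user payload via POST and
// returns custom roles in a { roles: [...] } body, as described above.
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const getRoles: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  const user = req.body ?? {};
  const roles: string[] = [];

  // Illustrative rule only: grant Reader to users from an assumed domain.
  if (typeof user.userDetails === "string" && user.userDetails.endsWith("@contoso.com")) {
    roles.push("Reader");
  }

  context.res = { status: 200, body: { roles } };
};

export default getRoles;
```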
+To learn more, see [Tutorial: Assign custom roles with a function and Microsoft Graph](assign-roles-microsoft-graph.md).
+++
## Next steps
> [!div class="nextstepaction"]
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
In addition to IP address blocks, you can also specify [service tags](../virtual
## Authentication
-* [Default authentication providers](authentication-authorization.md#login), don't require settings in the configuration file.
+* [Default authentication providers](authentication-authorization.md#set-up-sign-in) don't require settings in the configuration file.
* [Custom authentication providers](authentication-custom.md) use the `auth` section of the settings file. For details on how to restrict routes to authenticated users, see [Securing routes with roles](#securing-routes-with-roles).
static-web-apps Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/plans.md
Azure Static Web Apps is available through two different plans, Free and Standar
| Custom domains | 2 per app | 5 per app |
| APIs via Azure Functions | Managed | Managed or<br>[Bring your own Functions app](functions-bring-your-own.md) |
| Authentication provider integration | [Pre-configured](authentication-authorization.md)<br>(Service defined) | [Custom registrations](authentication-custom.md) |
-| [Assign custom roles with a function](authentication-authorization.md?tabs=function#role-management) | - | ✔ |
+| [Assign custom roles with a function](authentication-custom.md#manage-roles) | - | ✔ |
| Private endpoints | - | ✔ |
| [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/app-service-static/v1_0/) | None | ✔ |
static-web-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/quotas.md
The following quotas exist for Azure Static Web Apps.
| Custom domains | 2 per app | 5 per app |
| Allowed IP ranges | Unavailable | 25 |
| Authorization (built-in roles) | Unlimited end-users that may authenticate with built-in `authenticated` role | Unlimited end-users that may authenticate with built-in `authenticated` role |
-| Authorization (custom roles) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-authorization.md?tabs=invitations#role-management) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-authorization.md?tabs=invitations#role-management), or unlimited end-users that may be assigned custom roles via [serverless function](authentication-authorization.md?tabs=function#role-management) |
+| Authorization (custom roles) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-custom.md#manage-roles) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-custom.md#manage-roles), or unlimited end-users that may be assigned custom roles via [serverless function](authentication-custom.md#manage-roles) |
| Request Size Limit | 30 MB | 30 MB |

## GitHub storage
storage Storage Blob Inventory Report Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-inventory-report-analytics.md
In this section, you'll generate statistical data that you'll visualize in a rep
#### Modify the Python notebook
-1. In the first cell of the python notebook, set the value of the `storage_account` variable to the name of the primary storage account.
+1. In the first cell of the Python notebook, set the value of the `storage_account` variable to the name of the primary storage account.
2. Update the value of the `container_name` variable to the name of the container in that account that you specified when you created the Synapse workspace.
synapse-analytics Apache Spark Azure Portal Add Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
To learn more about these capabilities, see [Manage Spark pool packages](./apach
Follow the steps below if you have trouble identifying the required dependencies:

- **Step 1: Run the following script to set up a local Python environment that matches the Synapse Spark environment**
-The setup script requires [Synapse-Python38-CPU.yml](https://github.com/Azure-Samples/Synapse/blob/main/Spark/Python/Synapse-Python38-CPU.yml) which is the list of libraries shipped in the default python env in Synapse spark.
+The setup script requires [Synapse-Python38-CPU.yml](https://github.com/Azure-Samples/Synapse/blob/main/Spark/Python/Synapse-Python38-CPU.yml), which is the list of libraries shipped in the default Python environment in Synapse Spark.
```powershell
- # one-time synapse python setup
+ # one-time synapse Python setup
 wget Synapse-Python38-CPU.yml
 sudo bash Miniforge3-Linux-x86_64.sh -b -p /usr/lib/miniforge3
 export PATH="/usr/lib/miniforge3/bin:$PATH"
traffic-manager Configure Multivalue Routing Method Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/configure-multivalue-routing-method-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/traffic-manager-minchild).
+The template used in this quickstart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/traffic-manager-minchild/).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.network/traffic-manager-minchild/azuredeploy.json":::
virtual-machines Automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-plan-deployment.md
Before you design your workload zone layout, consider the following questions:
* How many workload zones does your scenario require?
* In which regions do you need to deploy workloads?
+* How is DNS handled?
* What storage type do you need for the shared storage? * What's your [deployment scenario](#supported-deployment-scenarios)? For more information, see [how to configure a workload zone deployment for automation](automation-deploy-workload-zone.md).
+### Windows-based deployments
+
+For Windows-based deployments, the virtual machines in the workload zone's virtual network must be able to communicate with Active Directory so that the SAP virtual machines can join the Active Directory domain. The provided DNS name must be resolvable by Active Directory.
+
+The workload zone key vault must contain the following secrets:
+
+| Credential | Name | Example |
+| | -- | |
+| Account that can perform domain join activities | [IDENTIFIER]-ad-svc-account | DEV-WEEU-SAP01-ad-svc-account |
+| Password for the account that performs the domain join | [IDENTIFIER]-ad-svc-account-password | DEV-WEEU-SAP01-ad-svc-account-password |
+| sidadm account password | [IDENTIFIER]-winsidadm_password_id | DEV-WEEU-SAP01-winsidadm_password_id |
+| SID Service account password | [IDENTIFIER]-svc-sidadm-password | DEV-WEEU-SAP01-svc-sidadm-password |
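As a hedged sketch, these secrets could be seeded with the Key Vault SDK. The vault name and secret values below are placeholders, and the secret names follow the table's DEV-WEEU-SAP01 example.

```typescript
// Sketch only: store the domain-join credentials in the workload zone
// key vault. Replace the vault name and secret values with your own.
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

async function seedDomainJoinSecrets(): Promise<void> {
  const vaultUrl = "https://<workload-zone-vault>.vault.azure.net"; // placeholder
  const client = new SecretClient(vaultUrl, new DefaultAzureCredential());

  await client.setSecret("DEV-WEEU-SAP01-ad-svc-account", "svc-domain-join@contoso.com");
  await client.setSecret("DEV-WEEU-SAP01-ad-svc-account-password", "<service-account-password>");
}
```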
+
## Credentials management
The automation framework uses [Service Principals](#service-principal-creation)
The automation framework will use the workload zone key vault for storing both the automation user credentials and the SAP system credentials. The virtual machine credentials are named as follows:
-| Credential | Name | Example |
-| | - | - |
-| Private key | [IDENTIFIER]-sshkey | DEV-WEEU-SAP01-sid-sshkey |
-| Public key | [IDENTIFIER]-sshkey-pub | DEV-WEEU-SAP01-sid-sshkey-pub |
-| Username | [IDENTIFIER]-username | DEV-WEEU-SAP01-sid-username |
-| Password | [IDENTIFIER]-password | DEV-WEEU-SAP01-sid-password |
-| sidadm Password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
+| Credential | Name | Example |
+| - | - | |
+| Private key | [IDENTIFIER]-sshkey | DEV-WEEU-SAP01-sid-sshkey |
+| Public key | [IDENTIFIER]-sshkey-pub | DEV-WEEU-SAP01-sid-sshkey-pub |
+| Username | [IDENTIFIER]-username | DEV-WEEU-SAP01-sid-username |
+| Password | [IDENTIFIER]-password | DEV-WEEU-SAP01-sid-password |
+| sidadm Password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
+| sidadm account password | [IDENTIFIER]-winsidadm_password_id | DEV-WEEU-SAP01-winsidadm_password_id |
+| SID Service account password | [IDENTIFIER]-svc-sidadm-password | DEV-WEEU-SAP01-svc-sidadm-password |
### Service principal creation