Updates from: 11/24/2022 02:12:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
Title: Create a resilient access control management strategy - Azure AD
description: This document provides guidance on strategies an organization should adopt to provide resilience to reduce the risk of lockout during unforeseen disruptions -+ tags: azuread
active-directory Msal Android Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md
The Azure portal generates the redirect URI for you and displays it in the **And
For more information about signing your app, see [Sign your app](https://developer.android.com/studio/publish/app-signing) in the Android Studio User Guide.
-> [!IMPORTANT]
-> Use your production signing key for the production version of your app.
- #### Configure MSAL to use a broker To use a broker in your app, you must attest that you've configured your broker redirect. For example, include both your broker enabled redirect URI--and indicate that you registered it--by including the following settings in your MSAL configuration file:
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples show public client desktop applications that access the Mi
> [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample(s) <br/> on GitHub | Auth<br/> libraries | Auth flow | > | - | -- | - | -- |
-> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Micrsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) <br/> &#8226; [Authenticate users with MSAL.NET in a WinUI desktop application](https://github.com/Azure-Samples/ms-identity-netcore-winui) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code |
+> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Microsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) <br/> &#8226; [Authenticate users with MSAL.NET in a WinUI desktop application](https://github.com/Azure-Samples/ms-identity-netcore-winui) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code |
> | .NET | [Invoke protected API with integrated Windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | MSAL.NET | Integrated Windows authentication | > | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | MSAL Java | Integrated Windows authentication | > | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
active-directory Hybrid Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-organizations.md
Previously updated : 04/26/2018- Last updated : 11/23/2022 + -
+# Customer intent: As a tenant administrator, I want to give partners access to both on-premises and cloud resources with Azure AD B2B collaboration.
# Azure Active Directory B2B collaboration for hybrid organizations
Azure Active Directory (Azure AD) B2B collaboration makes it easy for you to giv
## Grant B2B users in Azure AD access to your on-premises apps
-If your organization uses Azure AD B2B collaboration capabilities to invite guest users from partner organizations to your Azure AD, you can now provide these B2B users access to on-premises apps.
+If your organization uses [Azure AD B2B](what-is-b2b.md) collaboration capabilities to invite guest users from partner organizations to your Azure AD, you can now provide these B2B users access to on-premises apps.
For apps that use SAML-based authentication, you can make these apps available to B2B users through the Azure portal, using Azure AD Application Proxy for authentication. For apps that use integrated Windows authentication (IWA) with Kerberos constrained delegation (KCD), you also use Azure AD Proxy for authentication. However, for authorization to work, a user object is required in the on-premises Windows Server Active Directory. There are two methods you can use to create local user objects that represent your B2B guest users. - You can use Microsoft Identity Manager (MIM) 2016 SP1 and the MIM management agent for Microsoft Graph.-- You can use a PowerShell script. (This solution does not require MIM.)
+- You can use a PowerShell script. (This solution doesn't require MIM.)
For details about how to implement these solutions, see [Grant B2B users in Azure AD access to your on-premises applications](hybrid-cloud-to-on-premises.md).
-## Grant locally-managed partner accounts access to cloud resources
+## Grant locally managed partner accounts access to cloud resources
Before Azure AD, organizations with on-premises identity systems have traditionally managed partner accounts in their on-premises directory. If you're such an organization, you want to make sure that your partners continue to have access as you move your apps and other resources to the cloud. Ideally, you want these users to use the same set of credentials to access both cloud and on-premises resources.
We now offer methods where you can use Azure AD Connect to sync these local acco
To help protect your company data, you can control access to just the right resources, and configure authorization policies that treat these guest users differently from your employees.
-For implementation details, see [Grant locally-managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md).
+For implementation details, see [Grant locally managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md).
## Next steps - [Grant B2B users in Azure AD access to your on-premises applications](hybrid-cloud-to-on-premises.md)-- [Grant locally-managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md)
+- [B2B direct connect](b2b-direct-connect-overview.md)
+- [Grant locally managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md)
active-directory Active Directory Compare Azure Ad To Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-compare-azure-ad-to-ad.md
Title: Compare Active Directory to Azure Active Directory
description: This document compares Active Directory Domain Services (ADDS) to Azure Active Directory (Azure AD). It outlines key concepts in both identity solutions and explains how they're similar and different. -+ tags: azuread
active-directory Active Directory Data Storage Japan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-japan.md
Title: Customer data storage for Japan customers - Azure AD
description: Learn about where Azure Active Directory stores customer-related data for its Japan customers. -+
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Title: Azure Active Directory Authentication management operations reference gui
description: This operations reference guide describes the checks and actions you should take to secure authentication management -+ tags: azuread
active-directory Active Directory Ops Guide Govern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-govern.md
Title: Azure Active Directory governance operations reference guide
description: This operations reference guide describes the checks and actions you should take to secure governance management -+ tags: azuread
active-directory Active Directory Ops Guide Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-iam.md
Title: Azure Active Directory Identity and access management operations referenc
description: This operations reference guide describes the checks and actions you should take to secure identity and access management operations -+ tags: azuread
active-directory Active Directory Ops Guide Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-intro.md
Title: Azure Active Directory operations reference guide
description: This operations reference guide describes the checks and actions you should take to secure and maintain identity and access management, authentication, governance, and operations -+ tags: azuread
active-directory Active Directory Ops Guide Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-ops.md
Title: Azure Active Directory general operations guide reference
description: This operations reference guide describes the checks and actions you should take to secure general operations -+ tags: azuread
active-directory Azure Active Directory Parallel Identity Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-active-directory-parallel-identity-options.md
Title: 'Parallel and combined identity infrastructure options'
description: This article describes the various options available for organizations to run multiple tenants and multi-cloud scenarios -+ na
active-directory Azure Ad Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-ad-data-residency.md
+
+ Title: Azure AD and data residency
+description: Use residency data to manage access, achieve mobility scenarios, and secure your organization.
+ Last updated : 11/23/2022
+# Azure Active Directory and data residency
+
+Azure AD is an Identity as a Service (IDaaS) solution that stores and manages identity and access data in the cloud. You can use the data to enable and manage access to cloud services, achieve mobility scenarios, and secure your organization. An instance of the Azure AD service, called a [tenant](/azure/active-directory/develop/developer-glossary#tenant), is an isolated set of directory object data that the customer provisions and owns.
+
+## Core Store
+
+Update or retrieval data operations in the Azure AD Core Store relate to a single tenant based on the user's security token, which achieves tenant isolation. The Core Store is made up of tenants stored in scale units, each of which contains multiple tenants. Azure AD replicates each scale unit in the physical data centers of a logical region for resiliency and performance.
+
+Learn more: [Azure Active Directory Core Store Scale Units](https://www.youtube.com/watch?v=OcKO44GtHh8)
+
+Currently Azure AD has the following regions:
+
+* North America
+* Europe, Middle East, and Africa (EMEA)
+* Australia
+* China
+* Japan
+* [United States government](https://azure.microsoft.com/global-infrastructure/government/)
+* Worldwide
+
+Azure AD handles directory data based on usability, performance, residency, and other requirements that vary by region. The term residency indicates Microsoft provides assurance that the data isn't persisted outside the geographic region.
+
+Azure AD replicates each tenant through its scale unit, across data centers, based on the following criteria:
+
+* Directory data stored in data centers closest to the user-residency location, to reduce latency and provide fast user sign-in times
+* Directory data stored in geographically isolated data centers to assure availability during unforeseen geological events
+* Compliance with data residency, or other requirements, for specific customers and countries or regions
+
+During tenant creation (for example, signing up for Office 365 or Azure, or creating more Azure AD instances through the Azure portal) you select a country or region as the primary location. Azure AD maps the selection to a logical region and a single scale unit in it. Tenant location can't be changed after it's set.
+
+## Azure AD cloud solution models
+
+Use the following table to see Azure AD cloud solution models based on infrastructure, data location, and operation sovereignty.
+
+|Model|Model regions|Data location|Operations personnel|Customer support|Put a tenant in this model|
+|||||||
|Regional (2)|North America, EMEA, Japan|At rest, in the target region. Exceptions by service or feature|Operated by Microsoft. Microsoft datacenter personnel must pass a background check.|Microsoft, globally|Create the tenant in the sign-up experience. Choose the country for residency.|
+|Worldwide|Worldwide||Operated by Microsoft. Microsoft datacenter personnel must pass a background check.|Microsoft, globally|Create the tenant in the sign-up experience. Choose a country without a regional model.|
|Sovereign or national clouds|US government, China|At rest, in the target country or region. No exceptions.|Operated by a data custodian (1). Personnel are screened according to requirements.|Microsoft, country or region|Each national cloud instance has a sign-up experience.|
+
+**Table references**:
+
+(1) **Data custodians**: Data centers in the Worldwide region are operated by Microsoft. In China, Azure AD is operated through a partnership with [21Vianet](/microsoft-365/admin/services-in-china/services-in-china?redirectSourcePath=%252fen-us%252farticle%252fLearn-about-Office-365-operated-by-21Vianet-a8ab5061-3346-4da0-bb7c-5260822b53ae&view=o365-21vianet&viewFallbackFrom=o365-worldwide&preserve-view=true).
+(2) **Authentication data**: Tenants outside the national clouds have authentication information at rest in the continental United States.
+
+Learn more:
+
+* Power BI: [Azure Active Directory – Where is your data located?](https://aka.ms/aaddatamap)
+* [What is the Azure Active Directory architecture?](https://aka.ms/aadarch)
+* [Find the Azure geography that meets your needs](https://azure.microsoft.com/overview/datacenters/how-to-choose/)
+* [Microsoft Trust Center](https://www.microsoft.com/trustcenter/cloudservices/nationalcloud)
+
+## Data residency across Azure AD components
+
+In addition to authentication service data, Azure AD components and service data are stored on servers in the Azure AD instance's region.
+
+Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com/cloud-platform/azure-active-directory-features)
+
+> [!NOTE]
+> To understand service data location, such as Exchange Online, or Skype for Business, refer to the corresponding service documentation.
+
+### Azure AD components and data storage location
+
+Data storage for Azure AD components includes authentication, identity, MFA, and others. In the following table, data includes End User Identifiable Information (EUII) and Customer Content (CC).
+
+|Azure AD component|Description|Data storage location|
+||||
|Azure AD Authentication Service|This service is stateless. The data for authentication is in the Azure AD Core Store. It has no directory data. Azure AD Authentication Service generates log data in Azure storage, and in the data center where the service instance runs. When users attempt to authenticate using Azure AD, they're routed to an instance in the geographically nearest data center that is part of its Azure AD logical region. |In region|
|Azure AD Identity and Access Management (IAM) Services|**User and management experiences**: The Azure AD management experience is stateless and has no directory data. It generates log and usage data stored in Azure Tables storage. The user experience is like the Azure portal. <br>**Identity management business logic and reporting services**: These services have locally cached data storage for groups and users. The services generate log and usage data that goes to Azure Tables storage, Azure SQL, and Microsoft Elastic Search reporting services. |In region|
+|Azure AD Multi-Factor Authentication (MFA)|For details about MFA-operations data storage and retention, see [Data residency and customer data for Azure AD multifactor authentication](/azure/active-directory/authentication/concept-mfa-data-residency). Azure AD MFA logs the User Principal Name (UPN), voice-call telephone numbers, and SMS challenges. For challenges to mobile app modes, the service logs the UPN and a unique device token. Data centers in the North America region store Azure AD MFA, and the logs it creates.|North America|
+|Azure AD Domain Services|See regions where Azure AD Domain Services is published on [Products available by region](https://azure.microsoft.com/regions/services/). The service holds system metadata globally in Azure Tables, and it contains no personal data.|In region|
+|Azure AD Connect Health|Azure AD Connect Health generates alerts and reports in Azure Tables storage and blob storage.|In region|
+|Azure AD dynamic membership for groups, Azure AD self-service group management|Azure Tables storage holds dynamic membership rule definitions.|In region|
+|Azure AD Application Proxy|Azure AD Application Proxy stores metadata about the tenant, connector machines, and configuration data in Azure SQL.|In region|
+|Azure AD password reset |Azure AD password reset is a back-end service using Redis Cache to track session state. To learn more, go to redis.io to see [Introduction to Redis](https://redis.io/docs/about/).|See the Introduction to Redis link in the center column.|
+|Azure AD password writeback in Azure AD Connect|During initial configuration, Azure AD Connect generates an asymmetric keypair, using the Rivest–Shamir–Adleman (RSA) cryptosystem. It then sends the public key to the self-service password reset (SSPR) cloud service, which performs two operations: </br></br>1. Creates two Azure Service Bus relays for the Azure AD Connect on-premises service to communicate securely with the SSPR service </br> 2. Generates an Advanced Encryption Standard (AES) key, K1 </br></br> The Azure Service Bus relay locations, corresponding listener keys, and a copy of the AES key (K1) go to Azure AD Connect in the response. Future communications between SSPR and Azure AD Connect occur over the new ServiceBus channel and are encrypted using SSL. </br> New password resets, submitted during operation, are encrypted with the RSA public key generated by the client during onboarding. The private key on the Azure AD Connect machine decrypts them, which prevents pipeline subsystems from accessing the plaintext password. </br> The AES key encrypts the message payload (encrypted passwords, more data, and metadata), which prevents malicious ServiceBus attackers from tampering with the payload, even with full access to the internal ServiceBus channel. </br> For password writeback, Azure AD Connect needs keys and data: </br></br> - The AES key (K1) that encrypts the reset payload, or change requests from the SSPR service to Azure AD Connect, via the ServiceBus pipeline </br> - The private key, from the asymmetric key pair that decrypts the passwords, in reset or change request payloads </br> - The ServiceBus listener keys </br></br> The AES key (K1) and the asymmetric keypair rotate a minimum of every 180 days, a duration you can change during certain onboarding or offboarding configuration events. An example is when a customer disables and re-enables password writeback, which might occur during a component upgrade during service and maintenance. </br> The writeback keys and data stored in the Azure AD Connect database are encrypted by data protection application programming interfaces (DPAPI) (CALG_AES_256). The result is the master ADSync encryption key stored in the Windows Credential Vault in the context of the ADSync on-premises service account. The Windows Credential Vault supplies automatic secret re-encryption as the password for the service account changes. Resetting the service account password invalidates secrets in the Windows Credential Vault for the service account. Manual changes to a new service account might invalidate the stored secrets.</br> By default, the ADSync service runs in the context of a virtual service account. The account might be customized during installation to a least-privileged domain service account, a managed service account (MSA), or a group managed service account (gMSA). While virtual and managed service accounts have automatic password rotation, customers manage password rotation for a custom provisioned domain account. As noted, resetting the password causes loss of stored secrets. |In region|
+|Azure AD Device Registration Service |Azure AD Device Registration Service has computer and device lifecycle management in the directory, which enables scenarios such as device-state conditional access and mobile device management.|In region|
+|Azure AD provisioning|Azure AD provisioning creates, removes, and updates users in systems, such as software as a service (SaaS) applications. It manages user creation in Azure AD and on-premises AD from cloud HR sources, like Workday. The service stores its configuration in an Azure Cosmos DB, which stores the group membership data for the user directory it keeps. Cosmos DB replicates the database to multiple datacenters in the same region as the tenant, which isolates the data, according to the Azure AD cloud solution model. Replication creates high availability and multiple reading and writing endpoints. Cosmos DB has encryption on the database information, and the encryption keys are stored in the secrets storage for Microsoft.|In region|
+|Azure AD business-to-business (B2B) collaboration|Azure AD B2B collaboration has no directory data. Users and other directory objects in a B2B relationship with another tenant result in user data copied to other tenants, which might have data residency implications.|In region|
+|Azure AD Identity Protection|Azure AD Identity Protection uses real-time user log-in data, with multiple signals from company and industry sources, to feed its machine-learning systems that detect anomalous logins. Personal data is scrubbed from real-time log-in data before it's passed to the machine learning system. The remaining log-in data identifies potentially risky usernames and logins. After analysis, the data goes to Microsoft reporting systems. Risky logins and usernames appear in reporting for Administrators.|In region|
+|Azure AD managed identities for Azure resources|With Azure AD managed identities for Azure resources, systems can authenticate to Azure services without storing credentials. Rather than use username and password, managed identities authenticate to Azure services with certificates. The service writes certificates it issues in Azure Cosmos DB in the East US region, which fails over to another region as needed. Azure Cosmos DB geo-redundancy occurs by global data replication. Database replication puts a read-only copy in each region where Azure AD managed identities runs. To learn more, see [Azure services that can use managed identities to access other services](/azure/active-directory/managed-identities-azure-resources/managed-identities-status#azure-services-that-support-managed-identities-for-azure-resources). Microsoft isolates each Cosmos DB instance in an Azure AD cloud solution model. </br> The resource provider, such as the virtual machine (VM) host, stores the certificate for authentication, and identity flows, with other Azure services. The service stores its master key to access Azure Cosmos DB in a datacenter secrets management service. Azure Key Vault stores the master encryption keys.|In region|
+|Azure Active Directory business-to-consumer (B2C)|Azure Active Directory B2C is an identity management service to customize and manage how customers sign up, sign in, and manage their profiles when using applications. B2C uses the Core Store to keep user identity information. The Core Store database follows known storage, replication, deletion, and data-residency rules. B2C uses an Azure Cosmos DB system to store service policies and secrets. Cosmos DB has encryption and replication services on database information. Its encryption key is stored in the secrets storage for Microsoft. Microsoft isolates Cosmos DB instances in an Azure AD cloud solution model.|Customer-selectable region|
+
+## Related resources
+
+For more information on data residency in Microsoft Cloud offerings, see the following articles:
+
+* [Azure Active Directory – Where is your data located?](https://aka.ms/aaddatamap)
+* [Data Residency in Azure | Microsoft Azure](https://azure.microsoft.com/explore/global-infrastructure/data-residency/#overview)
+* [Microsoft 365 data locations - Microsoft 365 Enterprise](/microsoft-365/enterprise/o365-data-locations?view=o365-worldwide&preserve-view=true)
+* [Microsoft Privacy - Where is Your Data Located?](https://www.microsoft.com/trust-center/privacy/data-location?rtc=1)
+* Download PDF: [Privacy considerations in the cloud](https://go.microsoft.com/fwlink/p/?LinkID=2051117&clcid=0x409&culture=en-us&country=US)
active-directory Resilience B2b Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-b2b-authentication.md
Title: Build resilience in external user authentication with Azure Active Direct
description: A guide for IT admins and architects to building resilient authentication for external users -+
active-directory Sign Up Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sign-up-organization.md
Title: Sign up your organization - Azure Active Directory | Microsoft Docs
description: Instructions about how to sign up your organization to use Azure and Azure Active Directory. -+
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
Title: Default user permissions - Azure Active Directory | Microsoft Docs
description: Learn about the user permissions available in Azure Active Directory. -+
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
+ # How to synchronize attributes for Lifecycle workflows Workflows contain specific tasks that can run automatically against users based on the specified execution conditions. Automatic workflow scheduling is supported based on the employeeHireDate and employeeLeaveDateTime user attributes in Azure AD.
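As a hedged illustration (not part of the article excerpt), setting the `employeeHireDate` attribute that a workflow's execution conditions can key off of might look like this, assuming the Microsoft Graph PowerShell SDK and sufficient permissions; the user and date are placeholders:

```powershell
# Requires the Microsoft Graph PowerShell SDK and the User.ReadWrite.All scope
# (writing employeeLeaveDateTime additionally requires User-LifeCycleInfo.ReadWrite.All)
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Placeholder user and hire date - substitute your own values
Update-MgUser -UserId "adelev@contoso.com" -EmployeeHireDate ([datetime]"2023-01-15T08:00:00Z")
```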
active-directory Trigger Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md
Title: Trigger Logic Apps based on custom task extensions
description: Trigger Logic Apps based on custom task extensions -+
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
Title: 'Lifecycle workflows FAQs - Azure AD (preview)'
description: Frequently asked questions about Lifecycle workflows (preview). -+
Yes, key user properties like employeeHireDate and employeeType are supported fo
![Screenshot showing an example of how mapping is done in a Lifecycle Workflow.](./media/workflows-faqs/workflows-mapping.png)
+For more information on syncing employee attributes in Lifecycle Workflows, see [How to synchronize attributes for Lifecycle workflows](how-to-lifecycle-workflow-sync-attributes.md).
+ ### How do I see more details and parameters of tasks and the attributes that are being updated? Some tasks do update existing attributes; however, we don't currently share those specific details. Because these tasks update attributes related to other Azure AD features, you can find that information in those docs. For Temporary Access Pass, we're writing to the appropriate attributes listed [here](/graph/api/resources/temporaryaccesspassauthenticationmethod).
active-directory Four Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/four-steps.md
Title: Four steps to a strong identity foundation - Azure AD
description: This topic describes four steps hybrid identity customers can take to build a strong identity foundation. -+ na
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Previously updated : 09/06/2022 Last updated : 11/22/2022
+zone_pivot_groups: enterprise-apps-all
#customer intent: As an admin, I want to manage user assignment for an app in Azure Active Directory using PowerShell
This article shows you how to assign users and groups to an enterprise application in Azure Active Directory (Azure AD) using PowerShell. When you assign a user to an application, the application appears in the user's [My Apps](https://myapps.microsoft.com/) portal for easy access. If the application exposes app roles, you can also assign a specific app role to the user.
-When you assign a group to an application, only users in the group will have access. The assignment does not cascade to nested groups.
+When you assign a group to an application, only users in the group will have access. The assignment doesn't cascade to nested groups.
-Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups only. Nested group memberships and Microsoft 365 groups are not currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
+Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups only. Nested group memberships and Microsoft 365 groups aren't currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
-For greater control, certain types of enterprise applications can be configured to require user assignment. See [Manage access to an application](what-is-access-management.md#requiring-user-assignment-for-an-app) for more information on requiring user assignment for an app.
+For greater control, certain types of enterprise applications can be configured to require user assignment. For more information on requiring user assignment for an app, see [Manage access to an application](what-is-access-management.md#requiring-user-assignment-for-an-app).
## Prerequisites
-To assign users to an app using PowerShell, you need:
+To assign users to an enterprise application, you need:
-- An Azure account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure AD account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.-- If you have not yet installed the AzureAD module (use the command `Install-Module -Name AzureAD`). If you're prompted to install a NuGet module or the new Azure Active Directory V2 PowerShell module, type Y and press ENTER. - Azure Active Directory Premium P1 or P2 for group-based assignment. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
-## Assign users, and groups, to an app using PowerShell
++
+To assign a user or group account to an enterprise application:
+
+1. In the [Azure Active Directory Admin Center](https://aad.portal.azure.com), select **Enterprise applications**, and then search for and select the application to which you want to assign the user or group account.
+1. In the left pane, select **Users and groups**, and then select **Add user/group**.
+
+ :::image type="content" source="media/add-application-portal-assign-users/assign-user.png" alt-text="Assign user account to an application in your Azure AD tenant.":::
+
+1. On the **Add Assignment** pane, select **None Selected** under **Users and groups**.
+1. Search for and select the user or group that you want to assign to the application. For example, `contosouser1@contoso.com` or `contosoteam1@contoso.com`.
+1. Select **Select**.
+1. On the **Add Assignment** pane, select **Assign** at the bottom of the pane.
++ 1. Open an elevated Windows PowerShell command prompt.
-1. Run `Connect-AzureAD` and sign in with a Global Admin user account.
+1. Run `Connect-AzureAD Application.Read.All, Directory.Read.All, Application.ReadWrite.All, Directory.ReadWrite.All` and sign in with a Global Admin user account.
1. Use the following script to assign a user and role to an application: ```powershell
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId -ResourceId $sp.ObjectId -Id $appRole.Id ```
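For context, the `$user`, `$sp`, and `$appRole` variables used above are populated earlier in the full script, which the excerpt doesn't show. A minimal sketch of that setup with the AzureAD module (the user, app, and role names below are placeholders, not values from the article) might look like this:

```powershell
# Connect with the AzureAD module
Connect-AzureAD

# Placeholder values - substitute your own user, application, and role names
$username = "brittasimon@contoso.com"
$app_name = "Workplace Analytics"
$app_role_name = "Analyst"

# Look up the user, the application's service principal, and the app role
$user = Get-AzureADUser -ObjectId $username
$sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
$appRole = $sp.AppRoles | Where-Object { $_.DisplayName -eq $app_role_name }

# Assign the user to the app role on the service principal
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId -ResourceId $sp.ObjectId -Id $appRole.Id
```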
-## Unassign users, and groups, from an app using PowerShell
+## Unassign users, and groups, from an application
1. Open an elevated Windows PowerShell command prompt.
-1. Run `Connect-AzureAD` and sign in with a Global Admin user account. Use the following script to remove a user and role from an application:
+1. Run `Connect-AzureAD Application.Read.All Directory.Read.All Application.ReadWrite.All Directory.ReadWrite.All` and sign in with a Global Admin user account. Use the following script to remove a user and role from an application.
```powershell # Store the proper parameters
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
$assignments | Select * #To remove the App role assignment run the following command.
- Remove-AzureADServiceAppRoleAssignment -ObjectId $spo.ObjectId -AppRoleAssignmentId $assignments[assignment #].ObjectId
+ Remove-AzureADServiceAppRoleAssignment -ObjectId $spo.ObjectId -AppRoleAssignmentId $assignments[assignment number].ObjectId
``` ## Remove all users who are assigned to the application
+Use the following script to remove all users and groups assigned to the application.
+ ```powershell #Retrieve the service principal object ID. $app_name = "<Your App's display name>"
$assignments | ForEach-Object {
} } ```++
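Only fragments of that removal script appear in the excerpt above. A hedged sketch of the same approach with the AzureAD module (assuming `Get-AzureADServiceAppRoleAssignment` is used to enumerate the assignments; the app name is a placeholder) could look like this:

```powershell
# Retrieve the service principal by display name (placeholder name)
$app_name = "<Your App's display name>"
$sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"

# Enumerate every app role assignment on the service principal
$assignments = Get-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -All $true

# Remove the assignments that belong to users and groups
$assignments | ForEach-Object {
    if ($_.PrincipalType -in ("User", "Group")) {
        Remove-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -AppRoleAssignmentId $_.ObjectId
    }
}
```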
+1. Open an elevated Windows PowerShell command prompt.
+1. Run `Connect-AzureAD Application.Read.All Directory.Read.All Application.ReadWrite.All Directory.ReadWrite.All` and sign in with a Global Admin user account.
+1. Use the following script to assign a user and role to an application:
+
+```powershell
+# Assign the values to the variables
+
+$userId = "<Your user's ID>"
+$app_name = "<Your App's display name>"
+$app_role_name = "<App role display name>"
+$sp = Get-MgServicePrincipal -Filter "displayName eq '$app_name'"
+
+# Get the user to assign, and the service principal for the app to assign to
+
+$params = @{
+ "PrincipalId" =$userId
+ "ResourceId" =$sp.Id
+ "AppRoleId" =($sp.AppRoles | Where-Object { $_.DisplayName -eq $app_role_name }).Id
+ }
+
+# Assign the user to the app role
+
+New-MgUserAppRoleAssignment -UserId $userId -BodyParameter $params |
+ Format-List Id, AppRoleId, CreationTime, PrincipalDisplayName,
+ PrincipalId, PrincipalType, ResourceDisplayName, ResourceId
+```
+
+## Unassign users, and groups, from an application
+
+1. Open an elevated Windows PowerShell command prompt.
+1. Run `Connect-AzureAD Application.Read.All Directory.Read.All Application.ReadWrite.All Directory.ReadWrite.All` and sign in with a Global Admin user account. Use the following script to remove a user and role from an application.
+```powershell
+# Get the user and the service principal
+
+$user = Get-MgUser -UserId <userid>
+$spo = Get-MgServicePrincipal -ServicePrincipalId <ServicePrincipalId>
+
+# Get the Id of the role assignment
+
+$assignments = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $spo.Id | Where {$_.PrincipalDisplayName -eq $user.DisplayName}
+
+# if you run the following, it will show you the list of users assigned to the application
+
+$assignments | Select *
+
+# To remove the App role assignment run the following command.
+
+Remove-MgServicePrincipalAppRoleAssignedTo -AppRoleAssignmentId '<AppRoleAssignment-id>' -ServicePrincipalId $spo.Id
+```
+
+## Remove all users and groups assigned to the application
+
+Use the following script to remove all users and groups assigned to the application.
+
+```powershell
+$assignments | ForEach-Object {
+ if ($_.PrincipalType -in ("user", "Group")) {
+ Remove-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $Sp.Id -AppRoleAssignmentId $_.Id }
+}
+```
+++
+1. To assign users and groups to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+ You'll need to consent to the following permissions:
+
+ `Application.Read.All`, `Application.ReadWrite.All`, `Directory.Read.All`, `Directory.ReadWrite.All`.
+
+ To grant an app role assignment, you need three identifiers:
+
+ - `principalId`: The ID of the user or group to which you're assigning the app role.
+ - `resourceId`: The ID of the resource servicePrincipal that has defined the app role.
+ - `appRoleId`: The ID of the appRole (defined on the resource service principal) to assign to a user or group.
+
+1. Get the enterprise application. Filter by DisplayName.
+
+ ```http
+ GET /servicePrincipals?$filter=displayName eq '{appDisplayName}'
+ ```
+ Record the following values from the response body:
+
+ - Object ID of the enterprise application
+ - appRoleId that you'll assign to the user. If the application doesn't expose any roles, the user will be assigned the default access role.
+
+1. Get the user by filtering by the user's principal name. Record the object ID of the user.
+
+ ```http
+ GET /users/{userPrincipalName}
+ ```
+1. Assign the user to the application.
+ ```http
+ POST /servicePrincipals/resource-servicePrincipal-id/appRoleAssignedTo
+
+ {
+ "principalId": "33ad69f9-da99-4bed-acd0-3f24235cb296",
+ "resourceId": "9028d19c-26a9-4809-8e3f-20ff73e2d75e",
+ "appRoleId": "ef7437e6-4f94-4a0a-a110-a439eb2aa8f7"
+ }
+ ```
+ In the example, both the resource-servicePrincipal-id and resourceId represent the enterprise application.
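If you'd rather script the same call than use Graph Explorer, a minimal sketch with `Invoke-MgGraphRequest` from the Microsoft Graph PowerShell SDK (reusing the placeholder IDs from the example above) could be:

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All","Directory.ReadWrite.All"

# The resource service principal is the enterprise application (placeholder ID)
$resourceSpId = "9028d19c-26a9-4809-8e3f-20ff73e2d75e"

$body = @{
    principalId = "33ad69f9-da99-4bed-acd0-3f24235cb296"  # user or group object ID
    resourceId  = $resourceSpId
    appRoleId   = "ef7437e6-4f94-4a0a-a110-a439eb2aa8f7"  # app role ID, or the default access role
}

# POST the app role assignment to the enterprise application's service principal
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$resourceSpId/appRoleAssignedTo" `
    -Body $body
```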
+
+## Unassign users, and groups, from an application
+To unassign users and groups from the application, run the following queries.
+
+1. Get the enterprise application. Filter by DisplayName.
+
+ ```http
+ GET /servicePrincipals?$filter=displayName eq '{appDisplayName}'
+ ```
+1. Get the list of appRoleAssignments for the application.
+
+ ```http
+ GET /servicePrincipals/{id}/appRoleAssignedTo
+ ```
+1. Remove the appRoleAssignments by specifying the appRoleAssignment ID.
+
+ ```http
+ DELETE /servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ ```
## Next steps -- [Create and assign a user account from the Azure portal](add-application-portal-assign-users.md)-- [Manage access to apps](what-is-access-management.md).
+- [Assign custom security attributes](custom-security-attributes-apps.md)
+- [Disable user sign-in](disable-user-sign-in-portal.md).
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Previously updated : 11/07/2022 Last updated : 11/22/2022
-zone_pivot_groups: enterprise-apps-minus-graph
+zone_pivot_groups: enterprise-apps-all
#customer intent: As an admin, I want to review permissions granted to applications so that I can restrict suspicious or over privileged applications.
To review permissions granted to applications, you need:
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator. - A Service principal owner who isn't an administrator is able to invalidate refresh tokens.
-## Review permissions
- :::zone pivot="portal"
+## Review permissions
+ You can access the Azure AD portal to get contextual PowerShell scripts to perform the actions. To review application permissions:
Each option generates PowerShell scripts that enable you to control user access
:::zone pivot="aad-powershell"
-## Revoke permissions
-- Using the following Azure AD PowerShell script revokes all permissions granted to an application. ```powershell
-Connect-AzureAD
+Connect-AzureAD Application.Read.All, Application.ReadWrite.All, Directory.Read.All, Directory.ReadWrite.All
# Get Service Principal using objectId $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
$spApplicationPermissions | ForEach-Object {
## Invalidate the refresh tokens
+Remove appRoleAssignments for users or groups to the application using the following scripts.
+ ```powershell
-Connect-AzureAD
+Connect-AzureAD Application.Read.All, Application.ReadWrite.All, Directory.Read.All, Directory.ReadWrite.All
# Get Service Principal using objectId $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
$assignments | ForEach-Object {
} ``` :::zone-end+ :::zone pivot="ms-powershell" Using the following Microsoft Graph PowerShell script revokes all permissions granted to an application. ```powershell
-Connect-MgGraph
+Connect-MgGraph Application.Read.All, Application.ReadWrite.All, Directory.Read.All, Directory.ReadWrite.All
# Get Service Principal using objectId $sp = Get-MgServicePrincipal -ServicePrincipalID "$ServicePrincipalID"
$spOAuth2PermissionsGrants= Get-MgOauth2PermissionGrant -All| Where-Object { $_.
$spOauth2PermissionsGrants |ForEach-Object { Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id }+
+# Get all application permissions for the service principal
+$spApplicationPermissions = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $Sp.Id -All | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
+
+# Remove all application permissions
+$spApplicationPermissions | ForEach-Object {
+Remove-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $Sp.Id -AppRoleAssignmentId $_.Id
+ }
``` ## Invalidate the refresh tokens
+Remove appRoleAssignments for users or groups to the application using the following scripts.
+ ```powershell
-Connect-MgGraph
+Connect-MgGraph Application.Read.All, Application.ReadWrite.All, Directory.Read.All, Directory.ReadWrite.All
# Get Service Principal using objectId $sp = Get-MgServicePrincipal -ServicePrincipalID "$ServicePrincipalID"
$spApplicationPermissions = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrin
:::zone-end +
+To review permissions, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+You'll need to consent to the following permissions:
+
+`Application.Read.All`, `Application.ReadWrite.All`, `Directory.Read.All`, `Directory.ReadWrite.All`.
+
+### Delegated permissions
+
+Run the following queries to review delegated permissions granted to an application.
+
+1. Get Service Principal using objectID
+
+ ```http
+ GET /servicePrincipals/{id}
+ ```
+
+ Example:
+
+ ```http
+ GET /servicePrincipals/57443554-98f5-4435-9002-852986eea510
+ ```
+
+1. Get all delegated permissions for the service principal
+
+ ```http
+ GET /servicePrincipals/{id}/oauth2PermissionGrants
+ ```
+1. Remove delegated permissions using oAuth2PermissionGrants ID.
+
+ ```http
+ DELETE /oAuth2PermissionGrants/{id}
+ ```
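The same review and cleanup can also be scripted. A hedged sketch using `Invoke-MgGraphRequest` from the Microsoft Graph PowerShell SDK (the service principal ID below is the placeholder from the example above) might look like this:

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All","Directory.ReadWrite.All"

# Placeholder service principal (enterprise application) object ID
$spId = "57443554-98f5-4435-9002-852986eea510"

# List the delegated (OAuth2) permission grants for the service principal
$grants = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$spId/oauth2PermissionGrants"

# Remove each delegated permission grant by its ID
foreach ($grant in $grants.value) {
    Invoke-MgGraphRequest -Method DELETE `
        -Uri "https://graph.microsoft.com/v1.0/oauth2PermissionGrants/$($grant.id)"
}
```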
+
+### Application permissions
+
+Run the following queries to review application permissions granted to an application.
+
+1. Get all application permissions for the service principal
+
+ ```http
+ GET /servicePrincipals/{servicePrincipal-id}/appRoleAssignments
+ ```
+1. Remove application permissions using appRoleAssignment ID
+
+ ```http
+ DELETE /servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ ```
+
+## Invalidate the refresh tokens
+
+Run the following queries to remove appRoleAssignments of users or groups to the application.
+
+1. Get Service Principal using objectID.
+
+ ```http
+ GET /servicePrincipals/{id}
+ ```
+ Example:
+
+ ```http
+ GET /servicePrincipals/57443554-98f5-4435-9002-852986eea510
+ ```
+1. Get Azure AD App role assignments using objectID of the Service Principal.
+
+ ```http
+ GET /servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo
+ ```
+1. Revoke refresh token for users and groups assigned to the application using appRoleAssignment ID.
+
+ ```http
+ DELETE /servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ ```
+ > [!NOTE] > Revoking the current granted permission won't stop users from re-consenting to the application. If you want to block users from consenting, read [Configure how users consent to applications](configure-user-consent.md). ## Next steps -- [Configure admin consent workflow](configure-admin-consent-workflow.md)
+- [Configure user consent setting](configure-user-consent.md)
+- [Configure admin consent workflow](configure-admin-consent-workflow.md)
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
Title: Configure Verified ID by AU10TIX as your Identity Verification Partner
description: This article shows you the steps you need to follow to configure AU10TIX as your identity verification partner -+
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS)
Previously updated : 10/28/2022 Last updated : 11/23/2022 # Configure an AKS cluster
By using `containerd` for AKS nodes, pod startup latency improves and node resou
* `Containerd` sets up logging using the standardized `cri` logging format (which is different from what you currently get from docker's json driver). Your logging solution needs to support the `cri` logging format (like [Azure Monitor for Containers](../azure-monitor/containers/container-insights-enable-new-cluster.md)) * You can no longer access the docker engine, `/var/run/docker.sock`, or use Docker-in-Docker (DinD).
- * If you currently extract application logs or monitoring data from Docker Engine, use [Container insights](../azure-monitor/containers/container-insights-enable-new-cluster.md) instead. Additionally AKS doesn't support running any out of band commands on the agent nodes that could cause instability.
- * Building images and directly using the Docker engine using the methods above isn't recommended. Kubernetes isn't fully aware of those consumed resources, and those approaches present numerous issues detailed [here](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) and [here](https://securityboulevard.com/2018/05/escaping-the-whale-things-you-probably-shouldnt-do-with-docker-part-1/), for example.
+ * If you currently extract application logs or monitoring data from Docker engine, use [Container insights](../azure-monitor/containers/container-insights-enable-new-cluster.md) instead. AKS doesn't support running any out of band commands on the agent nodes that could cause instability.
+ * Building images and directly using the Docker engine using the methods above isn't recommended. Kubernetes isn't fully aware of those consumed resources, and those methods present numerous issues as described [here](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) and [here](https://securityboulevard.com/2018/05/escaping-the-whale-things-you-probably-shouldnt-do-with-docker-part-1/).
-* Building images - You can continue to use your current docker build workflow as normal, unless you're building images inside your AKS cluster. In this case, consider switching to the recommended approach for building images using [ACR Tasks](../container-registry/container-registry-quickstart-task-cli.md), or a more secure in-cluster option like [docker buildx](https://github.com/docker/buildx).
+* Building images - You can continue to use your current Docker build workflow as normal, unless you're building images inside your AKS cluster. In this case, consider switching to the recommended approach for building images using [ACR Tasks](../container-registry/container-registry-quickstart-task-cli.md), or a more secure in-cluster option like [Docker Buildx](https://github.com/docker/buildx).
## Generation 2 virtual machines
Additionally not all VM images support Gen2, on AKS Gen2 VMs will use the new [A
## Default OS disk sizing
-By default, when creating a new cluster or adding a new node pool to an existing cluster, the disk size is determined by the number for vCPUs, which is based on the VM SKU. The default values are shown in the following table:
+By default, when creating a new cluster or adding a new node pool to an existing cluster, the OS disk size is determined by the number of vCPUs. The number of vCPUs is based on the VM SKU, and the default values are shown in the following table:
|VM SKU Cores (vCPUs)| Default OS Disk Tier | Provisioned IOPS | Provisioned Throughput (Mbps) | |--|--|--|--|
By default, when creating a new cluster or adding a new node pool to an existing
| 64+ | P30/1024G | 5000 | 200 | > [!IMPORTANT]
-> Default OS disk sizing is only used on new clusters or node pools when Ephemeral OS disks are not supported and a default OS disk size isn't specified. The default OS disk size may impact the performance or cost of your cluster, but you can change the sizing of the OS disk at any time after cluster or node pool creation. This default disk sizing affects clusters or node pools created in July 2022 or later.
+> Default OS disk sizing is only used on new clusters or node pools when ephemeral OS disks are not supported and a default OS disk size isn't specified. The default OS disk size may impact the performance or cost of your cluster, and you cannot change the OS disk size after cluster or node pool creation. This default disk sizing affects clusters or node pools created in July 2022 or later.
## Ephemeral OS By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss if the VM needs to be relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write latency.
-By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. This provides lower read/write latency, along with faster node scaling and cluster upgrades.
+By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. This configuration provides lower read/write latency, along with faster node scaling and cluster upgrades.
Like the temporary disk, an ephemeral OS disk is included in the price of the virtual machine, so you don't incur more storage costs. > [!IMPORTANT]
->When you don't explicitly request managed disks for the OS, AKS will default to ephemeral OS if possible for a given node pool configuration.
+> When you don't explicitly request managed disks for the OS, AKS will default to ephemeral OS if possible for a given node pool configuration.
-If you chose to use an ephemeral OS, the OS disk must fit in the VM cache. The sizes for VM cache are available in the [Azure documentation](../virtual-machines/dv3-dsv3-series.md) in parentheses next to IO throughput ("cache size in GiB").
+If you chose to use an ephemeral OS, the OS disk must fit in the VM cache. The sizes for VM cache are available in the [Azure VM documentation](../virtual-machines/dv3-dsv3-series.md) in parentheses next to IO throughput ("cache size in GiB").
-If you chose to use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB, this VM size supports ephemeral OS but only has 86 GB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you'll receive a validation error.
+If you chose to use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB, note that this VM size supports ephemeral OS but only has 86 GB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you'll receive a validation error.
-If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60GB OS disk, this configuration would default to ephemeral OS: the requested size of 60GB is smaller than the maximum cache size of 86 GB.
+If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60GB OS disk, this configuration would default to ephemeral OS. The requested size of 60GB is smaller than the maximum cache size of 86 GB.
If you select the [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) SKU with 100 GB OS disk, this VM size supports ephemeral OS and has 200 GB of cache space. If you don't specify the OS disk type, the node pool would receive ephemeral OS by default.
-The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. Let's assume to use the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB as an example. This VM size supports ephemeral OS disks but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if you don't explicitly specify it. If you do request an ephemeral OS disk, you'll receive a validation error.
+The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. As an example, let's use the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB. This VM size supports ephemeral OS disks, but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if you don't explicitly specify it. If you do request an ephemeral OS disk, you'll receive a validation error.
If you request the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration would default to ephemeral OS disks. The requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
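As a hedged example of how you might request these OS disk configurations explicitly with the Azure CLI (run here from PowerShell; the resource group, cluster, and node pool names are placeholders):

```powershell
# Request ephemeral OS explicitly with a 60 GB OS disk on a Standard_DS2_v2 node pool
az aks create `
    --resource-group myResourceGroup `
    --name myAKSCluster `
    --node-vm-size Standard_DS2_v2 `
    --node-osdisk-type Ephemeral `
    --node-osdisk-size 60

# Or fall back to managed OS disks for a node pool whose OS disk can't fit the cache
az aks nodepool add `
    --resource-group myResourceGroup `
    --cluster-name myAKSCluster `
    --name managedpool `
    --node-osdisk-type Managed
```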
kubectl get pods --all-namespaces
## Custom resource group name
-When you deploy an Azure Kubernetes Service cluster in Azure, a second resource group gets created for the worker nodes. By default, AKS will name the node resource group `MC_resourcegroupname_clustername_location`, but you can also provide your own name.
+When you deploy an Azure Kubernetes Service cluster in Azure, a second resource group is created for the worker nodes. By default, AKS names the node resource group `MC_resourcegroupname_clustername_location`, but you can also specify a custom name.
-To specify your own resource group name, install the aks-preview Azure CLI extension version 0.3.2 or later. Using the Azure CLI, use the `--node-resource-group` parameter of the `az aks create` command to specify a custom name for the resource group. If you use an Azure Resource Manager template to deploy an AKS cluster, you can define the resource group name by using the `nodeResourceGroup` property.
+To specify a custom resource group name, install the `aks-preview` Azure CLI extension version 0.3.2 or later. When using the Azure CLI, include the `--node-resource-group` parameter of the `az aks create` command to specify a custom name for the resource group. If you use an Azure Resource Manager template to deploy an AKS cluster, you can define the resource group name by using the `nodeResourceGroup` property.
```azurecli
az aks create --name myAKSCluster --resource-group myResourceGroup --node-resource-group myNodeResourceGroup
```
-The secondary resource group is automatically created by the Azure resource provider in your own subscription. You can only specify the custom resource group name when the cluster is created.
+The secondary resource group is automatically created by the Azure resource provider in your own subscription. You can only specify the custom resource group name when the cluster is created.
As you work with the node resource group, keep in mind that you can't:
## Node Restriction (Preview)
-The [Node Restriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller limits the Node and Pod objects a kubelet can modify. Node Restriction is on by default in AKS 1.24+ clusters. If you're using an older version, use the below commands to create a cluster with Node Restriction or update an existing cluster to add Node Restriction.
+The [Node Restriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller limits the Node and Pod objects a kubelet can modify. Node Restriction is on by default in AKS 1.24+ clusters. If you're using an older version, use the following commands to create a cluster with Node Restriction or to update an existing cluster to add it.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
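For example, a sketch of turning Node Restriction on (assuming an `--enable-node-restriction` flag in the `aks-preview` extension, the counterpart of the `--disable-node-restriction` flag shown below):

```azurecli
# Create a new cluster with Node Restriction enabled.
az aks create -n aks -g myResourceGroup --enable-node-restriction

# Or enable Node Restriction on an existing cluster.
az aks update -n aks -g myResourceGroup --enable-node-restriction
```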
To remove Node Restriction from a cluster, run the following command:
```azurecli-interactive
az aks update -n aks -g myResourceGroup --disable-node-restriction
```
-## OIDC Issuer
+## OIDC Issuer
-This enables an OIDC Issuer URL of the provider which allows the API server to discover public signing keys.
+You can enable an OIDC issuer URL on the cluster, which allows the API server to discover public signing keys.
> [!WARNING]
> Enabling or disabling the OIDC issuer changes the current service account token issuer to a new value, which can cause downtime and restarts the API server. If the application pods using a service token remain in a failed state after you enable or disable the OIDC issuer, we recommend you manually restart the pods.
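As a sketch, enabling the OIDC issuer on an existing cluster and reading back the issuer URL might look like the following (assuming the `--enable-oidc-issuer` flag and the `oidcIssuerProfile` property):

```azurecli
# Enable the OIDC issuer on an existing cluster.
az aks update -n myAKSCluster -g myResourceGroup --enable-oidc-issuer

# Read back the issuer URL that the API server publishes.
az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -o tsv
```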
To rotate the OIDC key, run the following command. Replace the default value
```azurecli-interactive
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
```
-> [!Important]
+> [!IMPORTANT]
> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid.

## Next steps

- Learn how to [upgrade the node images](node-image-upgrade.md) in your cluster.
+- Review [Baseline architecture for an Azure Kubernetes Service (AKS) cluster][baseline-reference-architecture-aks] to learn about our recommended baseline infrastructure architecture.
- See [Upgrade an Azure Kubernetes Service (AKS) cluster](upgrade-cluster.md) to learn how to upgrade your cluster to the latest version of Kubernetes.
- Read more about [`containerd` and Kubernetes](https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/).
- See the list of [Frequently asked questions about AKS](faq.md) to find answers to some common AKS questions.
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
[aks-add-np-containerd]: ./learn/quick-windows-container-deploy-cli.md#add-a-windows-server-node-pool-with-containerd
[az-aks-create]: /cli/azure/aks#az-aks-create
[az-aks-update]: /cli/azure/aks#az-aks-update
+[baseline-reference-architecture-aks]: /azure/architecture/reference-architectures/containers/aks/baseline-aks
aks Dapr Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md
Title: Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS)
-description: Learn how to migrate from Dapr OSS to the Dapr extension for AKS
+description: Learn how to migrate your managed clusters from Dapr OSS to the Dapr extension for AKS
Previously updated : 07/21/2022 Last updated : 11/21/2022 # Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS)
-You've installed and configured Dapr OSS on your Kubernetes cluster and want to migrate to the Dapr extension on AKS. Before you can successfully migrate to the Dapr extension, you need to fully remove Dapr OSS from your AKS cluster. In this guide, you will migrate from Dapr OSS by:
+You've installed and configured Dapr OSS on your Kubernetes cluster and want to migrate to the Dapr extension on AKS. In this guide, you'll learn how the Dapr extension moves your managed clusters from Dapr OSS to the extension-managed installation by either:
-> [!div class="checklist"]
-> - Uninstalling Dapr, including CRDs and the `dapr-system` namespace
-> - Installing Dapr via the Dapr extension for AKS
-> - Applying your components
-> - Restarting your applications that use Dapr
+- Checking for an existing Dapr installation via CLI prompts (default method), or
+- Using the Helm release name and namespace configuration settings to manually check for an existing Dapr installation.
-> [!NOTE]
-> Expect downtime of approximately 10 minutes while migrating to Dapr extension for AKS. Downtime may take longer depending on varying factors. During this downtime, no Dapr functionality should be expected to run.
+This check allows the Dapr extension to reuse the existing Kubernetes resources from your previous installation and start managing them.
-## Uninstall Dapr
+## Check for an existing Dapr installation
-#### [Dapr CLI](#tab/cli)
-
-1. Run the following command to uninstall Dapr and all CRDs:
+The Dapr extension, by default, checks for existing Dapr installations when you run the `az k8s-extension create` command. To list the details of your current Dapr installation, run the following command and save the Dapr release name and namespace:
```bash
-dapr uninstall -k ΓÇô-all
+helm list -A
```
-2. Uninstall the Dapr namespace:
+When [installing the extension][dapr-create], you'll receive a prompt asking if Dapr is already installed:
```bash
-kubectl delete namespace dapr-system
+Is Dapr already installed in the cluster? (y/N): y
```
-> [!NOTE]
-> `dapr-system` is the default namespace installed with `dapr init -k`. If you created a custom namespace, replace `dapr-system` with your namespace.
-
-#### [Helm](#tab/helm)
-
-1. Run the following command to uninstall Dapr:
+If Dapr is already installed, enter the Helm release name and namespace (from `helm list -A`) when prompted:
```bash
-helm uninstall dapr -n dapr-system
+Enter the Helm release name for Dapr, or press Enter to use the default name [dapr]:
+Enter the namespace where Dapr is installed, or press Enter to use the default namespace [dapr-system]:
```
-2. Uninstall CRDs:
+## Configure the Dapr check using `--configuration-settings`
-```bash
-kubectl delete crd components.dapr.io
-kubectl delete crd configurations.dapr.io
-kubectl delete crd subscriptions.dapr.io
-kubectl delete crd resiliencies.dapr.io
-```
+Alternatively, when creating the Dapr extension, you can configure these settings via `--configuration-settings`. This method is useful when you're automating the installation via Bash scripts, CI pipelines, and so on.
-3. Uninstall the Dapr namespace:
+If you don't have Dapr already installed on your cluster, set `skipExistingDaprCheck` to `true`:
-```bash
-kubectl delete namespace dapr-system
+```azurecli-interactive
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKScluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--configuration-settings "skipExistingDaprCheck=true"
```
-> [!NOTE]
-> `dapr-system` is the default namespace while doing a Helm install. If you created a custom namespace (`helm install dapr dapr/dapr --namespace <my-namespace>`), replace `dapr-system` with your namespace.
---
-## Register the `KubernetesConfiguration` service provider
-
-If you have not previously used cluster extensions, you may need to register the service provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
+If Dapr exists on your cluster, set the Helm release name and namespace (from `helm list -A`) via `--configuration-settings`:
```azurecli-interactive
-az provider list --query "[?contains(namespace,'Microsoft.KubernetesConfiguration')]" -o table
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKScluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--configuration-settings "existingDaprReleaseName=dapr" \
+--configuration-settings "existingDaprReleaseNamespace=dapr-system"
```
-The *Microsoft.KubernetesConfiguration* provider should report as *Registered*, as shown in the following example output:
+## Update HA mode or placement service settings
-```output
-Namespace RegistrationState RegistrationPolicy
- - --
-Microsoft.KubernetesConfiguration Registered RegistrationRequired
-```
+When you install the Dapr extension on top of an existing Dapr installation, you'll see the following prompt:
-If the provider shows as *NotRegistered*, register the provider using the [az provider register][az-provider-register] as shown in the following example:
+> ```The extension will be installed on your existing Dapr installation. Note, if you have updated the default values for global.ha.* or dapr_placement.* in your existing Dapr installation, you must provide them in the configuration settings. Failing to do so will result in an error, since Helm upgrade will try to modify the StatefulSet. See <link> for more information.```
-```azurecli-interactive
-az provider register --namespace Microsoft.KubernetesConfiguration
-```
+Kubernetes allows only limited fields in StatefulSets to be patched, so the upgrade of the placement service fails if any of the mentioned settings are configured. Follow the steps below to update those settings:
-## Install Dapr via the AKS extension
+1. Delete the stateful set.
-Once you've uninstalled Dapr from your system, install the [Dapr extension for AKS and Arc-enabled Kubernetes](./dapr.md#create-the-extension-and-install-dapr-on-your-aks-or-arc-enabled-kubernetes-cluster).
+ ```azurecli-interactive
+ kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
+ ```
-```bash
-az k8s-extension create --cluster-type managedClusters \
cluster-name <dapr-cluster-name> \resource-group <dapr-resource-group> \name <dapr-ext> \extension-type Microsoft.Dapr
-```
+1. Update the HA mode:
+
+ ```azurecli-interactive
+ az k8s-extension update --cluster-type managedClusters \
+ --cluster-name myAKSCluster \
+ --resource-group myResourceGroup \
+ --name dapr \
+ --extension-type Microsoft.Dapr \
+ --auto-upgrade-minor-version true \
+ --configuration-settings "global.ha.enabled=true" \
+ ```
-## Apply your components
+For more information, see [Dapr Production Guidelines][dapr-prod-guidelines].
-```bash
-kubectl apply -f <component.yaml>
-```
-## Restart your applications that use Dapr
+## Next steps
-Restarting the deployment will create a new sidecar from the new Dapr installation.
+Learn more about [the cluster extension][dapr-overview] and [how to use it][dapr-howto].
-```bash
-kubectl rollout restart <deployment-name>
-```
-## Next steps
+<!-- LINKS INTERNAL -->
+[dapr-overview]: ./dapr-overview.md
+[dapr-howto]: ./dapr.md
+[dapr-create]: ./dapr.md#create-the-extension-and-install-dapr-on-your-aks-or-arc-enabled-kubernetes-cluster
-Learn more about [the cluster extension](./dapr-overview.md) and [how to use it](./dapr.md).
+<!-- LINKS EXTERNAL -->
+[dapr-prod-guidelines]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production/#enabling-high-availability-in-an-existing-dapr-deployment
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
Title: Abort an Azure Kubernetes Service (AKS) long running operation
+ Title: Abort an Azure Kubernetes Service (AKS) long running operation (preview)
description: Learn how to terminate a long running operation on an Azure Kubernetes Service cluster at the node pool or cluster level. Previously updated : 09/08/2022 Last updated : 11/23/2022
Last updated 09/08/2022
Sometimes deployment or other processes running within pods on nodes in a cluster can run for periods of time longer than expected due to various reasons. While it's important to allow those processes to gracefully terminate when they're no longer needed, there are circumstances where you need to release control of node pools and clusters with long running operations using an *abort* command.
-AKS now supports aborting a long running operation, allowing you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
+AKS now supports aborting a long running operation, which is currently in public preview. This feature allows you to take back control and run another operation seamlessly. It's supported through the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
The abort operation supports the following scenarios:
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, start with reviewing our guidance on how to design, secure, and operate an AKS cluster to support your production-ready workloads. For more information, see [AKS architecture guidance](/azure/architecture/reference-architectures/containers/aks-start-here).
+- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+- The `aks-preview` extension version 0.5.102 or later.
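For example, installing or updating the extension is a short sketch:

```azurecli
# Install the aks-preview extension, or update it if it's already installed.
az extension add --name aks-preview
az extension update --name aks-preview
```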
+ ## Abort a long running operation
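As a minimal sketch (assuming the `az aks operation-abort` command provided with the `aks-preview` extension; a matching `az aks nodepool operation-abort` command is assumed for node pool operations), aborting a cluster-level operation might look like this:

```azurecli
# Abort the currently running long running operation on the cluster.
az aks operation-abort --name myAKSCluster --resource-group myResourceGroup
```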
api-management Api Management Howto Use Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-azure-monitor.md
ApiManagementGatewayLogs
For more information about using resource logs for API Management, see:
-* [Get started with Azure Monitor Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md), or try the [Log Analytics Demo environment](https://portal.loganalytics.io/demo).
+* [Get started with Azure Monitor Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md), or try the [Log Analytics Demo environment](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade).
* [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
In this tutorial, you learned how to:
Advance to the next tutorial: > [!div class="nextstepaction"]
-> [Trace calls](api-management-howto-api-inspector.md)
+> [Trace calls](api-management-howto-api-inspector.md)
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
``` > [!NOTE]
- > Now Azure API management is able respond to cross origin requests from your JavaScript SPA apps, and it will perform throttling, rate-limiting and pre-validation of the JWT auth token being passed BEFORE forwarding the request on to the Function API.
 > Now Azure API Management is able to respond to cross-origin requests from your JavaScript SPA apps, and it will perform throttling, rate-limiting and pre-validation of the JWT auth token being passed BEFORE forwarding the request on to the Function API.
> > Congratulations, you now have Azure AD B2C, API Management and Azure Functions working together to publish, secure AND consume an API!
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md
To use a Key Vault reference for an [app setting](configure-common.md#configure-
### Considerations for Azure Files mounting
-Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount Azure Files as the file system. This setting has additional validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests which modify these settings, the platform will attempt to validate if this content share exists, and it will attempt to create it if not. If it cannot locate or create the content share, the request is blocked.
+Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount [Azure Files](../storage/files/storage-files-introduction.md) as the file system. This setting has additional validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests which modify these settings, the platform will attempt to validate if this content share exists, and it will attempt to create it if not. If it cannot locate or create the content share, the request is blocked.
When using Key Vault references for this setting, this validation check will fail by default, as the secret itself cannot be resolved while processing the incoming request. To avoid this issue, you can skip the validation by setting `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` to "1". This will bypass all checks, and the content share will not be created for you. You should ensure it is created in advance.
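As a sketch (the app, vault, and secret names are illustrative placeholders), you might set the skip-validation flag together with the Key Vault reference:

```azurecli
# Skip content share validation and reference the Azure Files connection string from Key Vault.
az webapp config appsettings set --name <app-name> --resource-group <resource-group> --settings \
    WEBSITE_SKIP_CONTENTSHARE_VALIDATION=1 \
    "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>)"
```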
When using Key Vault references for this setting, this validation check will fai
As part of creating the site, it is also possible that attempted mounting of the content share could fail due to managed identity permissions not being propagated or the virtual network integration not being set up. You can defer setting up Azure Files until later in the deployment template to accommodate this. See [Azure Resource Manager deployment](#azure-resource-manager-deployment) to learn more. App Service will use a default file system until Azure Files is set up, and files are not copied over, so you will need to ensure that no deployment attempts occur during the interim period before Azure Files is mounted.
+### Considerations for Application Insights instrumentation
+
+Apps can use the `APPINSIGHTS_INSTRUMENTATIONKEY` or `APPLICATIONINSIGHTS_CONNECTION_STRING` application settings to integrate with [Application Insights](../azure-monitor/app/app-insights-overview.md). The portal experiences for App Service and Azure Functions also use these settings to surface telemetry data from the resource. If these values are referenced from Key Vault, these experiences are not available, and you instead need to work directly with the Application Insights resource to view the telemetry. However, these values are [not considered secrets](../azure-monitor/app/sdk-connection-string.md#is-the-connection-string-a-secret), so you might alternatively consider configuring them directly instead of using the Key Vault references feature.
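For example, a sketch of configuring the connection string directly as an app setting instead of a Key Vault reference (placeholder values):

```azurecli
# Set the Application Insights connection string directly so the portal experiences keep working.
az webapp config appsettings set --name <app-name> --resource-group <resource-group> \
    --settings "APPLICATIONINSIGHTS_CONNECTION_STRING=<your-connection-string>"
```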
+ ### Azure Resource Manager deployment When automating resource deployments through Azure Resource Manager templates, you may need to sequence your dependencies in a particular order to make this feature work. Of note, you will need to define your application settings as their own resource, rather than using a `siteConfig` property in the site definition. This is because the site needs to be defined first so that the system-assigned identity is created with it and can be used in the access policy.
application-gateway Application Gateway Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-metrics.md
For Application Gateway, the following metrics are available:
- **Client TLS protocol**
- Count of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol.
+ Count of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol. This metric includes requests served by the gateway, such as redirects.
- **Current capacity units**
For Application Gateway, the following metrics are available:
- **Total Requests**
- Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.
 Count of successful requests that have been served by the backend pool targets of Application Gateway. Pages served directly by the gateway, such as redirects, aren't counted and are included in the Client TLS protocol metric instead. The Total Requests metric can be further filtered to show the count for each specific backend pool and HTTP setting combination.
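As an illustrative sketch (the metric name `TotalRequests` and the resource ID are assumptions), the metric could be retrieved with the Azure CLI:

```azurecli
# Query the Total Requests metric for an Application Gateway over one-hour intervals.
az monitor metrics list \
    --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<app-gw-name>" \
    --metric "TotalRequests" \
    --interval PT1H
```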
### Backend metrics
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
Please refer to TLS offload and End-to-End TLS documentation for Application Gat
## Connection draining
-Connection draining helps you gracefully remove backend pool members during planned service updates. You can apply this setting to all members of a backend pool by enabling connection draining on the HTTP setting. It ensures that all deregistering instances of a backend pool continue to maintain existing connections and serve on-going requests for a configurable timeout and don't receive any new requests or connections. The only exception to this are requests bound for deregistering instances because of gateway-managed session affinity and will continue to be forwarded to the deregistering instances. Connection draining applies to backend instances that are explicitly removed from the backend pool.
+Connection draining helps you gracefully remove backend pool members during planned service updates. It applies to backend instances that are explicitly removed from the backend pool or during scale-in of backend instances. You can apply this setting to all members of a backend pool by enabling connection draining on the Backend Setting. It ensures that all deregistering instances of a backend pool continue to maintain existing connections and serve ongoing requests for a configurable timeout and don't receive any new requests or connections.
+
+| Configuration Type | Value |
+| - | - |
+|Default value when Connection Draining is not enabled in Backend Setting| 30 seconds |
+|User-defined value when Connection Draining is enabled in Backend Setting | 1 to 3600 seconds |
+
+The only exception is requests bound for deregistering instances because of gateway-managed session affinity; these requests continue to be forwarded to the deregistering instances.
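For instance, a sketch of enabling connection draining on an existing backend HTTP setting (assuming the `--connection-draining-timeout` parameter; resource names are placeholders):

```azurecli
# Enable connection draining with a 60-second timeout on a backend HTTP setting.
az network application-gateway http-settings update \
    --gateway-name myAppGateway \
    --resource-group myResourceGroup \
    --name appGatewayBackendHttpSettings \
    --connection-draining-timeout 60
```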
## Protocol
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
Previously updated : 09/09/2020 Last updated : 11/23/2022
When you create a new listener, you choose between [*basic* and *multi-site*](./
- If you want all of your requests (for any domain) to be accepted and forwarded to backend pools, choose basic. Learn [how to create an application gateway with a basic listener](./quick-create-portal.md). -- If you want to forward requests to different backend pools based on the *host* header or host names, choose multi-site listener, where you must also specify a host name that matches with the incoming request. This is because Application Gateway relies on HTTP 1.1 host headers to host more than one website on the same public IP address and port. To learn more, see [hosting multiple sites using Application Gateway](multiple-site-overview.md).
+- If you want to forward requests to different backend pools based on the *host* header or host names, choose multi-site listener. Application Gateway relies on HTTP 1.1 host headers to host more than one website on the same public IP address and port. To differentiate requests on the same port, you must specify a host name that matches with the incoming request. To learn more, see [hosting multiple sites using Application Gateway](multiple-site-overview.md).
### Order of processing listeners

For the v1 SKU, requests are matched according to the order of the rules and the type of listener. If a rule with a basic listener comes first in the order, it's processed first and will accept any request for that port and IP combination. To avoid this, configure the rules with multi-site listeners first and push the rule with the basic listener to the last in the list.
-For the v2 SKU, multi-site listeners are processed before basic listeners.
+For the v2 SKU, multi-site listeners are processed before basic listeners, unless rule priority is defined. If rule priority is used, wildcard listeners should be assigned a priority number greater than non-wildcard listeners, so that non-wildcard listeners are evaluated before wildcard listeners.
## Frontend IP address
Choose the frontend IP address that you plan to associate with this listener. Th
## Frontend port
-Choose the front-end port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners, however the same port cannot be used for both at the same time.
+Choose the frontend port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners, however the same port cannot be used for both at the same time.
## Protocol
Choose HTTP or HTTPS:
- If you choose HTTP, the traffic between the client and the application gateway is unencrypted. -- Choose HTTPS if you want [TLS termination](features.md#secure-sockets-layer-ssltls-termination) or [end-to-end TLS encryption](./ssl-overview.md). The traffic between the client and the application gateway is encrypted. And the TLS connection terminates at the application gateway. If you want end-to-end TLS encryption, you must choose HTTPS and configure the **backend HTTP** setting. This ensures that traffic is re-encrypted when it travels from the application gateway to the back end.-
+- Choose HTTPS if you want [TLS termination](features.md#secure-sockets-layer-ssltls-termination) or [end-to-end TLS encryption](./ssl-overview.md). The traffic between the client and the application gateway is encrypted and the TLS connection will be terminated at the application gateway. If you want end-to-end TLS encryption to the backend target, you must choose HTTPS within **backend HTTP setting** as well. This ensures that traffic is encrypted when application gateway initiates a connection to the backend target.
To configure TLS termination, a TLS/SSL certificate must be added to the listener. This allows the Application Gateway to decrypt incoming traffic and encrypt response traffic to the client. The certificate provided to the Application Gateway must be in Personal Information Exchange (PFX) format, which contains both the private and public keys.
See [Overview of TLS termination and end to end TLS with Application Gateway](ss
### HTTP2 support
-HTTP/2 protocol support is available to clients that connect to application gateway listeners only. The communication to backend server pools is over HTTP/1.1. By default, HTTP/2 support is disabled. The following Azure PowerShell code snippet shows how to enable this:
+HTTP/2 protocol support is available to clients that connect to application gateway listeners only. Communication to backend server pools is always HTTP/1.1. By default, HTTP/2 support is disabled. The following Azure PowerShell code snippet shows how to enable this:
```azurepowershell $gw = Get-AzApplicationGateway -Name test -ResourceGroupName hm
WebSocket support is enabled by default. There's no user-configurable setting to
## Custom error pages
-You can define custom error at the global level or the listener level. But creating global-level custom error pages from the Azure portal is currently not supported. You can configure a custom error page for a 403 web application firewall error or a 502 maintenance page at the listener level. You must also specify a publicly accessible blob URL for the given error status code. For more information, see [Create Application Gateway custom error pages](./custom-error.md).
+You can define custom error pages at the global level or the listener level; however, creating global-level custom error pages from the Azure portal is currently not supported. You can configure a custom error page for a 403 web application firewall error or a 502 maintenance page at the listener level. You must specify a publicly accessible blob URL for the given error status code. For more information, see [Create Application Gateway custom error pages](./custom-error.md).
![Application Gateway error codes](/azure/application-gateway/media/custom-error/ag-error-codes.png)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## November 8, 2022
+
+### Image tag
+
+`v1.13.0_2022-11-08`
+
+For complete release version information, see [Version log](version-log.md#november-8-2022).
+
+New for this release:
+
+- Azure Arc data controller
+ - Support database as resource in Azure Arc data resource provider
+
+- Arc-enabled PostgreSQL server
+ - Add support for automated backups
+
+- `arcdata` Azure CLI extension
+ - CLI support for automated backups: Setting the `--storage-class-backups` parameter for the create command will enable automated backups
+ ## October 11, 2022 ### Image tag
azure-functions Create First Function Vs Code Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md
Before you get started, make sure you have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ [Node.js 14.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/) (preview). Use the `node --version` command to check your version.
++ [Node.js 16.x](https://nodejs.org/en/download/releases/) or [Node.js 18.x](https://nodejs.org/en/download/releases/) (preview). Use the `node --version` command to check your version. + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
azure-functions Python Scale Performance Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-scale-performance-reference.md
async def main(req: func.HttpRequest) -> func.HttpResponse:
A function without the `async` keyword is run automatically in a ThreadPoolExecutor thread pool: ```python
-# Runs in an ThreadPoolExecutor threadpool. Number of threads is defined by PYTHON_THREADPOOL_THREAD_COUNT.
-# The example is intended to show how default synchronous function are handled.
+# Runs in a ThreadPoolExecutor threadpool. Number of threads is defined by PYTHON_THREADPOOL_THREAD_COUNT.
+# The example is intended to show how default synchronous functions are handled.
def main(): some_blocking_socket_io()
Here are a few examples of client libraries that have implemented async patterns
##### Understanding async in Python worker
-When you define `async` in front of a function signature, Python will mark the function as a coroutine. When calling the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop, which allows the event loop to process the next task during the wait time.
+When you define `async` in front of a function signature, Python marks the function as a coroutine. When calling the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop, which allows the event loop to process the next task during the wait time.
In our Python Worker, the worker shares the event loop with the customer's `async` function and it's capable for handling multiple requests concurrently. We strongly encourage our customers to make use of asyncio compatible libraries, such as [aiohttp](https://pypi.org/project/aiohttp/) and [pyzmq](https://pypi.org/project/pyzmq/). Following these recommendations increases your function's throughput compared to those libraries when implemented synchronously. > [!NOTE]
-> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted since the event loop will be blocked which prohibit the Python worker to handle concurrent requests.
+> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted since the event loop will be blocked which prohibits the Python worker from handling concurrent requests.
#### Use multiple language worker processes
For CPU-bound apps, you should keep the setting to a low number, starting from 1
For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. the recommendation is to start with the Python default (the number of cores) + 4 and then tweak based on the throughput values you're seeing.
-For mix workloads apps, you should balance both `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize the throughput. To understand what your function apps spend the most time on, we recommend profiling them and set the values according to the behavior they present. Also refer to this [section](#use-multiple-language-worker-processes) to learn about FUNCTIONS_WORKER_PROCESS_COUNT application settings.
+For mixed workloads apps, you should balance both `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize the throughput. To understand what your function apps spend the most time on, we recommend profiling them and setting the values according to their behaviors. To learn about these application settings, see [Use multiple worker processes](#use-multiple-language-worker-processes).
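As a sketch, both settings can be adjusted from the Azure CLI (the values shown are illustrative starting points, not recommendations):

```azurecli
# Tune worker process count and thread pool size, then measure throughput before changing them further.
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
    --settings FUNCTIONS_WORKER_PROCESS_COUNT=2 PYTHON_THREADPOOL_THREAD_COUNT=8
```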
> [!NOTE] > Although these recommendations apply to both HTTP and non-HTTP triggered functions, you might need to adjust other trigger specific configurations for non-HTTP triggered functions to get the expected performance from your function apps. For more information about this, please refer to this [article](functions-best-practices.md).
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 11/9/2022 Last updated : 11/22/2022
In addition to the generally available data collection listed above, Azure Monit
| Azure service | Current support | Other extensions installed | More information | | : | : | : | : | | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: Preview</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/amadcr-privatepreviews)</li><li>No sign-up needed for Windows Forwarding Event (WEF), Windows Security Events and Windows DNS events</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
| [Change Tracking](../../automation/change-tracking/overview.md) | Change Tracking: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | | [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Preview | Azure NetworkWatcher extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To send data to Log Analytics, create the data collection rule in the *same regi
1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**: - **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.- - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types. [ ![Screenshot that shows the Basics tab of the Data Collection Rule screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
This capability is enabled as part of the Azure CLI monitor-control-service exte
For sample templates, see [Azure Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md). + ## Filter events using XPath queries
-You're charged for any data you collect in a Log Analytics workspace, so collect only the data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+Since you're charged for any data you collect in a Log Analytics workspace, you should limit data collection from your agent to only the event data that you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+ To specify more filters, use custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you might want to return only events from the Application event log with an event ID of 1035. The `XPathQuery` for these events would be `*[System[EventID=1035]]`. Because you want to retrieve the events from the Application event log, the XPath is `Application!*[System[EventID=1035]]`
Examples of using a custom XPath to filter events:
| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` | | Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` | + ## Next steps - [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).
azure-monitor Data Collection Rule Sample Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-sample-agent.md
The sample [data collection rule](../essentials/data-collection-rule-overview.md
- Sends all data to a Log Analytics workspace named centralWorkspace. > [!NOTE]
-> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries)
+> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
## Sample DCR
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Title: Create Azure Monitor alert rules
-description: Learn how to create a new alert rule.
+description: This article shows you how to create a new alert rule.
# Create a new alert rule
-This article shows you how to create an alert rule. Learn more about alerts [here](alerts-overview.md).
+This article shows you how to create an alert rule. To learn more about alerts, see [What are Azure Monitor alerts?](alerts-overview.md).
You create an alert rule by combining:
+ - The resources to be monitored.
+ - The signal or telemetry from the resource.
+ - Conditions.
-And then defining these elements for the resulting alert actions using:
+Then you define these elements for the resulting alert actions by using:
- [Alert processing rules](alerts-action-rules.md)
- [Action groups](./action-groups.md)

## Create a new alert rule in the Azure portal
-1. In the [portal](https://portal.azure.com/), select **Monitor**, then **Alerts**.
-1. Expand the **+ Create** menu, and select **Alert rule**.
+1. In the [portal](https://portal.azure.com/), select **Monitor** > **Alerts**.
+1. Open the **+ Create** menu and select **Alert rule**.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule.png" alt-text="Screenshot showing steps to create new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule.png" alt-text="Screenshot that shows steps to create a new alert rule.":::
-1. In the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, **resource location**, or do a search.
+1. On the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, or **resource location**. You can also do a search.
- The **Available signal types** for your selected resource(s) are at the bottom right of the pane.
+ **Available signal types** for your selected resources are at the bottom right of the pane.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot showing the select resource pane for creating new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot that shows the select resource pane for creating a new alert rule.":::
1. Select **Include all future resources** to include any future resources added to the selected scope. 1. Select **Done**.
-1. Select **Next: Condition>** at the bottom of the page.
-1. In the **Select a signal** pane, filter the list of signals using the **Signal type** and **Monitor service**.
- - **Signal Type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
+1. Select **Next: Condition** at the bottom of the page.
+1. On the **Select a signal** pane, filter the list of signals by using the signal type and monitor service:
+ - **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
- **Monitor service**: The service sending the signal. This list is pre-populated based on the type of alert rule you selected. This table describes the services available for each type of alert rule: |Signal type |Monitor service |Description | ||||
- |Metrics|Platform |For metric signals, the monitor service is the metric namespace. ΓÇÿPlatformΓÇÖ means the metrics are provided by the resource provider, namely 'Azure'.|
+ |Metrics|Platform |For metric signals, the monitor service is the metric namespace. "Platform" means the metrics are provided by the resource provider, namely, Azure.|
| |Azure.ApplicationInsights|Customer-reported metrics, sent by the Application Insights SDK. |
- | |Azure.VM.Windows.GuestMetrics |VM guest metrics, collected by an extension running on the VM. Can include built-in operating system perf counters, and custom perf counters. |
+ | |Azure.VM.Windows.GuestMetrics |VM guest metrics, collected by an extension running on the VM. Can include built-in operating system perf counters and custom perf counters. |
| |\<your custom namespace\>|A custom metric namespace, containing custom metrics sent with the Azure Monitor Metrics API. |
- |Log |Log Analytics|The service that provides the ΓÇÿCustom log searchΓÇÖ and ΓÇÿLog (saved query)ΓÇÖ signals. |
- |Activity log|Activity Log ΓÇô Administrative|The service that provides the ΓÇÿAdministrativeΓÇÖ activity log events. |
- | |Activity Log ΓÇô Policy|The service that provides the 'Policy' activity log events. |
- | |Activity Log ΓÇô Autoscale|The service that provides the ΓÇÿAutoscaleΓÇÖ activity log events. |
- | |Activity Log ΓÇô Security|The service that provides the ΓÇÿSecurityΓÇÖ activity log events. |
+ |Log |Log Analytics|The service that provides the "Custom log search" and "Log (saved query)" signals. |
 |Activity log|Activity log – Administrative|The service that provides the Administrative activity log events. |
 | |Activity log – Policy|The service that provides the Policy activity log events. |
 | |Activity log – Autoscale|The service that provides the Autoscale activity log events. |
 | |Activity log – Security|The service that provides the Security activity log events. |
|Resource health|Resource health|The service that provides the resource-level health status. | |Service health|Service health|The service that provides the subscription-level health status. |
-
-1. Select the **Signal name**, and follow the steps in the tab below that corresponds to the type of alert you're creating.
+1. Select the **Signal name**, and follow the steps in the following tab that corresponds to the type of alert you're creating.
+ ### [Metric alert](#tab/metric)
- 1. In the **Configure signal logic** pane, you can preview the results of the selected metric signal. Select values for the following fields.
+ 1. On the **Configure signal logic** pane, you can preview the results of the selected metric signal. Select values for the following fields.
|Field |Description | ||| |Select time series|Select the time series to include in the results. |
- |Chart period|Select the time span to include in the results. Can be from the last 6 hours to the last week.|
+ |Chart period|Select the time span to include in the results. Can be from the last six hours to the last week.|
- 1. (Optional) Depending on the signal type, you may see the **Split by dimensions** section.
+ 1. (Optional) Depending on the signal type, you might see the **Split by dimensions** section.
- Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values. Dimensions can be either number or string columns.
+ Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values. Dimensions can be either number or string columns.
- If you select more than one dimension value, each time series that results from the combination will trigger its own alert, and will be charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data), or you can use dimensions to alert only when the number of transactions is high for specific APIs.
+ If you select more than one dimension value, each time series that results from the combination will trigger its own alert and be charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs.
|Field |Description | |||
And then defining these elements for the resulting alert actions using:
|Field |Description | |||
- |Threshold|Select if threshold should be evaluated based on a static value or a dynamic value.<br>A static threshold evaluates the rule using the threshold value that you configure.<br>Dynamic Thresholds use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#dynamic-thresholds). |
+ |Threshold|Select if the threshold should be evaluated based on a static value or a dynamic value.<br>A static threshold evaluates the rule by using the threshold value that you configure.<br>Dynamic thresholds use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#dynamic-thresholds). |
|Operator|Select the operator for comparing the metric value against the threshold. | |Aggregation type|Select the aggregation function to apply on the data points: Sum, Count, Average, Min, or Max. | |Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic. |
- |Unit|If the selected metric signal supports different units,such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|
- |Threshold sensitivity| If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern is required to trigger an alert. |
- |Aggregation granularity| Select the interval that is used to group the data points using the aggregation type function. Choose an **Aggregation granularity** (Period) that's greater than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation period of an added time series.|
- |Frequency of evaluation|Select how often the alert rule is be run. Select a frequency that is smaller than the aggregation granularity to generate a sliding window for the evaluation.|
+ |Unit|If the selected metric signal supports different units, such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|
+ |Threshold sensitivity| If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern that's required to trigger an alert. |
+ |Aggregation granularity| Select the interval that's used to group the data points by using the aggregation type function. Choose an **Aggregation granularity** (period) that's greater than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation period of an added time series.|
+ |Frequency of evaluation|Select how often the alert rule is to be run. Select a frequency that's smaller than the aggregation granularity to generate a sliding window for the evaluation.|
1. Select **Done**.+ ### [Log alert](#tab/log) > [!NOTE]
- > If you are creating a new log alert rule, note that current alert rule wizard is a little different from the earlier experience. For detailed information about the changes, see [changes to log alert rule creation experience](#changes-to-log-alert-rule-creation-experience).
-
- 1. In the **Logs** pane, write a query that will return the log events for which you want to create an alert.
- To use one of the predefined alert rule queries, expand the **Schema and filter pane** on the left of the **Logs** pane, then select the **Queries** tab, and select one of the queries.
+ > If you're creating a new log alert rule, note that the current alert rule wizard is different from the earlier experience. For more information, see [Changes to the log alert rule creation experience](#changes-to-the-log-alert-rule-creation-experience).
+
+ 1. On the **Logs** pane, write a query that will return the log events for which you want to create an alert.
+ To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot of the query pane when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule.":::
1. Select **Run** to run the alert.
1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**.
- 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last 5 minutes. If the system detects summarized query results, the rule is automatically updated with that information.
+ 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last five minutes. If the system detects summarized query results, the rule is automatically updated with that information.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot of the conditions tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log alert rule.":::
1. In the **Measurement** section, select values for these fields:

|Field |Description |
|||
- |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, syslog, application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage. |
- |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value using the aggregation granularity. For example: Total, Average, Minimum, or Maximum. |
+ |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. |
+ |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value by using the aggregation granularity. Examples are Total, Average, Minimum, or Maximum. |
|Aggregation granularity| The interval for aggregating multiple records to one numeric value.|
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot of the measurements tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log alert rule.":::
- 1. (Optional) In the **Split by dimensions** section, you can use dimensions to monitor the values of multiple instances of a resource with one rule. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. When you split by dimensions, alerts are split into separate alerts by grouping combinations of numerical or string columns to monitor for the same condition on multiple Azure resources. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually notifications are sent for each instance.
+ 1. (Optional) In the **Split by dimensions** section, you can use dimensions to monitor the values of multiple instances of a resource with one rule. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. When you split by dimensions, alerts are split into separate alerts by grouping combinations of numerical or string columns to monitor for the same condition on multiple Azure resources. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance.
- Splitting on **Azure Resource ID** column makes specified resource the target of the alert.
+ Splitting on the **Azure Resource ID** column makes the specified resource the target of the alert.
If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. The alert payload includes the combination that triggered the alert. You can select up to six more splittings for any columns that contain text or numbers.
- You can also decide **not** to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+ You can also decide *not* to split when you want a condition applied to multiple resources in the scope. An example would be if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80 percent.
Select values for these fields:
|Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
|Include all future values| Select this field to include any future values added to the selected dimension. |
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot of the splitting by dimensions section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log alert rule.":::
1. In the **Alert logic** section, select values for these fields:
|Threshold value| A number value for the threshold. |
|Frequency of evaluation|The interval in which the query is run. Can be set from a minute to a day. |
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot of alert logic section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule.":::
- 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set the **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy.
+ 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy.
Select values for these fields under **Number of violations to trigger the alert**:
|||
|Number of violations|The number of violations that trigger the alert.|
|Evaluation period|The time period within which the violations occur. |
- |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than 2 days, the 2 day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to 2 days of data.<br> If the query requires more data than the alert evaluation, and there's no **ago** command in the query, you can change the time range manually.|
+ |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data.<br> If the query requires more data than the alert evaluation, and there's no **ago** command in the query, you can change the time range manually.|
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of the advanced options section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log alert rule.":::
> [!NOTE]
- > If you, or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**, or the rule creation will fail because it won't meet the policy requirements.
+ > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**. If you don't, the rule creation will fail because it won't meet the policy requirements.
- 1. The **Preview** chart shows query evaluations results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
+ 1. The **Preview** chart shows query evaluation results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot of a preview of a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot that shows a preview of a new alert rule.":::
### [Activity log alert](#tab/activity-log)
- 1. In the **Conditions** pane, select the **Chart period**.
+ 1. On the **Conditions** pane, select the **Chart period**.
1. The **Preview** chart shows you the results of your selection.
1. Select values for each of these fields in the **Alert logic** section:

|Field |Description |
|||
- |Event level| Select the level of the events for this alert rule. Values are: **Critical**, **Error**, **Warning**, **Informational**, **Verbose** and **All**.|
+ |Event level| Select the level of the events for this alert rule. Values are **Critical**, **Error**, **Warning**, **Informational**, **Verbose**, and **All**.|
|Status|Select the status levels for the alert.|
|Event initiated by|Select the user or service principal that initiated the event.|

### [Resource Health alert](#tab/resource-health)
- 1. In the **Conditions** pane, select values for each of these fields:
+ On the **Conditions** pane, select values for each of these fields:
- |Field |Description |
- |||
- |Event status| Select the statuses of Resource Health events. Values are: **Active**, **In Progress**, **Resolved**, and **Updated**.|
- |Current resource status|Select the current resource status. Values are: **Available**, **Degraded**, and **Unavailable**.|
- |Previous resource status|Select the previous resource status. Values are: **Available**, **Degraded**, **Unavailable**, and **Unknown**.|
- |Reason type|Select the cause(s) of the Resource Health events. Values are: **Platform Initiated**, **Unknown**, and **User Initiated**.|
+ |Field |Description |
+ |||
+ |Event status| Select the statuses of Resource Health events. Values are **Active**, **In Progress**, **Resolved**, and **Updated**.|
+ |Current resource status|Select the current resource status. Values are **Available**, **Degraded**, and **Unavailable**.|
+ |Previous resource status|Select the previous resource status. Values are **Available**, **Degraded**, **Unavailable**, and **Unknown**.|
+ |Reason type|Select the causes of the Resource Health events. Values are **Platform Initiated**, **Unknown**, and **User Initiated**.|
+
### [Service Health alert](#tab/service-health)
- 1. In the **Conditions** pane, select values for each of these fields:
+ On the **Conditions** pane, select values for each of these fields:
|Field |Description |
|||
|Services| Select the Azure services.|
|Regions|Select the Azure regions.|
- |Event types|Select the type(s) of Service Health events. Values are: **Service issue**, **Planned maintenance**, **Health advisories**, and **Security advisories**.|
+ |Event types|Select the types of Service Health events. Values are **Service issue**, **Planned maintenance**, **Health advisories**, and **Security advisories**.|
From this point on, you can select the **Review + create** button at any time.
-1. In the **Actions** tab, select or create the required [action groups](./action-groups.md).
+1. On the **Actions** tab, select or create the required [action groups](./action-groups.md).
1. (Optional) If you want to make sure that the data processing for the action group takes place within a specific region, you can select an action group in one of these regions in which to process the action group:
    - Sweden Central
    - Germany West Central

    > [!NOTE]
- > We are continually adding more regions for regional data processing.
+ > We're continually adding more regions for regional data processing.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot of the actions tab when creating a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule.":::
-1. In the **Details** tab, define the **Project details**.
+1. On the **Details** tab, define the **Project details**.
- Select the **Subscription**.
- Select the **Resource group**.
- - (Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the regions below, and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions:
+ - (Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the following regions and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions:
- North Europe
- West Europe
- Sweden Central
- Germany West Central

> [!NOTE]
- > We are continually adding more regions for regional data processing.
+ > We're continually adding more regions for regional data processing.
1. Define the **Alert rule details**.

### [Metric alert](#tab/metric)
|||
|Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
|Automatically resolve alerts (preview) |Select to make the alert stateful. The alert is resolved when the condition isn't met anymore.|
- 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to the alert payload to add additional information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
-
+ 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot of the details tab when creating a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new alert rule.":::
### [Log alert](#tab/log)

1. Select the **Severity**.
1. Enter values for the **Alert rule name** and the **Alert rule description**.
1. Select the **Region**.
- 1. (Optional) In the **Advanced options** section, you can set several options.
+ 1. (Optional) In the **Advanced options** section, you can set several options:
|Field |Description |
|||
|Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.|
|Check workspace linked storage|Select if logs workspace linked storage for alerts is configured. If no linked storage is configured, the rule isn't created.|
- 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to the alert payload to add additional information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
+ 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot of the details tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule.":::
### [Activity log alert](#tab/activity-log)

1. Enter values for the **Alert rule name** and the **Alert rule description**.
1. Select the **Region**.
1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
- 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to the alert payload to add additional information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
+ 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot of the actions tab when creating a new activity log alert rule.":::
### [Resource Health alert](#tab/resource-health)

1. Enter values for the **Alert rule name** and the **Alert rule description**.
1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
+
### [Service Health alert](#tab/service-health)

1. Enter values for the **Alert rule name** and the **Alert rule description**.
-1. In the **Tags** tab, set any required tags on the alert rule resource.
+1. On the **Tags** tab, set any required tags on the alert rule resource.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot of the Tags tab when creating a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot that shows the Tags tab when creating a new alert rule.":::
-1. In the **Review + create** tab, a validation will run and inform you of any issues.
+1. On the **Review + create** tab, a validation will run and inform you of any issues.
1. When validation passes and you've reviewed the settings, select the **Create** button.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot of the Review and create tab when creating a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot that shows the Review and create tab when creating a new alert rule.":::
+## Create a new alert rule by using the CLI
-## Create a new alert rule using CLI
+You can create a new alert rule by using the [Azure CLI](/cli/azure/get-started-with-azure-cli). The following code examples use [Azure Cloud Shell](../../cloud-shell/overview.md). You can see the full list of the [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor#azure-monitor-references).
-You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-with-azure-cli). The code examples below are using [Azure Cloud Shell](../../cloud-shell/overview.md). You can see the full list of the [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor#azure-monitor-references).
+1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. At the prompt, use the commands that follow.
-1. In the [portal](https://portal.azure.com/), select **Cloud Shell**, and at the prompt, use the following commands:
### [Metric alert](#tab/metric)
- To create a metric alert rule, use the **az monitor metrics alert create** command. You can see detailed documentation on the metric alert rule create command in the **az monitor metrics alert create** section of the [CLI reference documentation for metric alerts](/cli/azure/monitor/metrics/alert).
+ To create a metric alert rule, use the `az monitor metrics alert create` command. You can see detailed documentation on the metric alert rule create command in the `az monitor metrics alert create` section of the [CLI reference documentation for metric alerts](/cli/azure/monitor/metrics/alert).
To create a metric alert rule that monitors if average Percentage CPU on a VM is greater than 90:

```azurecli
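# A sketch of one way to express this rule; the placeholder names follow the
# log alert example later in this section and should be replaced with your values.
az monitor metrics alert create -n {nameofthealert} -g {ResourceGroup} --scopes {vm_id} --condition "avg Percentage CPU > 90" --description {descriptionofthealert}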
```

### [Log alert](#tab/log)
- To create a log alert rule that monitors count of system event errors:
+ To create a log alert rule that monitors the count of system event errors:
```azurecli
az monitor scheduled-query create -g {ResourceGroup} -n {nameofthealert} --scopes {vm_id} --condition "count \'union Event, Syslog | where TimeGenerated > ago(1h) | where EventLevelName == \"Error\" or SeverityLevel == \"err\"\' > 2" --description {descriptionofthealert}
```

> [!NOTE]
- > Azure CLI support is only available for the scheduledQueryRules API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch to use CLI. [Learn more about switching](./alerts-log-api-switch.md).
+ > Azure CLI support is only available for the `scheduledQueryRules` API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described in the following sections. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you must switch to use the CLI. [Learn more about switching](./alerts-log-api-switch.md).
### [Activity log alert](#tab/activity-log)
To create a new activity log alert rule, use the following commands:
- [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
- [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
- [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
- You can find detailed documentation on the activity log alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+ You can find detailed documentation on the activity log alert rule create command in the `az monitor activity-log alert create` section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
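 A minimal sketch of that sequence, using placeholder names and IDs that you replace with your own values (adjust the `--condition` expression to the category and fields you want to monitor):

 ```azurecli
 # Create the alert rule, then attach a scope and an action group.
 # The rule name, resource group, and IDs are placeholders.
 az monitor activity-log alert create -n {nameofthealert} -g {ResourceGroup} --condition category=Administrative
 az monitor activity-log alert scope add -n {nameofthealert} -g {ResourceGroup} --scope {resource_id}
 az monitor activity-log alert action-group add -n {nameofthealert} -g {ResourceGroup} --action-group {action_group_id}
 ```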
+ ### [Resource Health alert](#tab/resource-health)
- To create a new activity log alert rule, use the following commands using the `Resource Health` category:
+ To create a new activity log alert rule, use the following commands with the `Resource Health` category:
- [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
- [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
- [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
- You can find detailed documentation on the alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+ You can find detailed documentation on the alert rule create command in the `az monitor activity-log alert create` section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
### [Service Health alert](#tab/service-health)
- To create a new activity log alert rule, use the following commands using the `Service Health` category:
- - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource .
+ To create a new activity log alert rule, use the following commands with the `Service Health` category:
+ - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
- [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
- [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
- You can find detailed documentation on the alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+ You can find detailed documentation on the alert rule create command in the `az monitor activity-log alert create` section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
-## Create a new alert rule using PowerShell
+## Create a new alert rule by using PowerShell
-- To create a metric alert rule using PowerShell, use this cmdlet: [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2)-- To create a log alert rule using PowerShell, use this cmdlet: [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule)-- To create an activity log alert rule using PowerShell, use this cmdlet: [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert)
+- To create a metric alert rule by using PowerShell, use the [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2) cmdlet.
+- To create a log alert rule by using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.
+- To create an activity log alert rule by using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet.
## Create an activity log alert rule from the Activity log pane
-You can also create an activity log alert on future events similar to an activity log event that already occurred.
+You can also create an activity log alert on future events similar to an activity log event that already occurred.
-1. In the [portal](https://portal.azure.com/), [go to the activity log pane](../essentials/activity-log.md#view-the-activity-log).
-1. Filter or find the desired event, and then create an alert by selecting **Add activity log alert**.
+1. In the [portal](https://portal.azure.com/), [go to the Activity log pane](../essentials/activity-log.md#view-the-activity-log).
+1. Filter or find the desired event. Then create an alert by selecting **Add activity log alert**.
- :::image type="content" source="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot of creating an alert rule from an activity log event." lightbox="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot that shows creating an alert rule from an activity log event." lightbox="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png":::
-2. The **Create alert rule** wizard opens, with the scope and condition already provided according to the previously selected activity log event. If necessary, you can edit and modify the scope and condition at this stage. By default, the exact scope and condition for the new rule are copied from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name who initiated the event, are both included by default in the new alert rule. If you want to make the alert rule more general, modify the scope, and condition accordingly (see steps 3-9 in the section "Create an alert rule from the Azure Monitor alerts pane").
+1. The **Create alert rule** wizard opens, with the scope and condition already provided according to the previously selected activity log event. If necessary, you can edit and modify the scope and condition at this stage. By default, the exact scope and condition for the new rule are copied from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name that initiated the event, are both included by default in the new alert rule.
-3. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal).
+ If you want to make the alert rule more general, modify the scope and condition accordingly. See steps 3-9 in the section "Create a new alert rule in the Azure portal."
-## Create an activity log alert rule using an Azure Resource Manager template
+1. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal).
-To create an activity log alert rule using an Azure Resource Manager template, create a `microsoft.insights/activityLogAlerts` resource, and fill in all related properties.
+## Create an activity log alert rule by using an ARM template
-> [!NOTE]
->The highest level that activity log alerts can be defined is the subscription level. Define the alert to alert per subscription. You can't define an alert on two subscriptions.
+To create an activity log alert rule by using an Azure Resource Manager template (ARM template), create a `microsoft.insights/activityLogAlerts` resource. Then fill in all related properties.
-The following fields are the options in the Azure Resource Manager template for the conditions fields. (The **Resource Health**, **Advisor** and **Service Health** fields have extra properties fields.)
+> [!NOTE]
+>The highest level at which activity log alerts can be defined is the subscription level. Define the alert to alert per subscription. You can't define an alert on two subscriptions.
+The following fields are the options in the ARM template for the conditions fields. The **Resource Health**, **Advisor**, and **Service Health** fields have extra properties fields.
|Field |Description |
|||
-|resourceId|The resource ID of the impacted resource in the activity log event on which the alert is generated.|
-|category|The category of the activity log event. Possible values: `Administrative`, `ServiceHealth`, `ResourceHealth`, `Autoscale`, `Security`, `Recommendation`, or `Policy` |
+|resourceId|The resource ID of the affected resource in the activity log event on which the alert is generated.|
+|category|The category of the activity log event. Possible values are `Administrative`, `ServiceHealth`, `ResourceHealth`, `Autoscale`, `Security`, `Recommendation`, or `Policy`. |
|caller|The email address or Azure Active Directory identifier of the user who performed the operation of the activity log event. |
-|level |Level of the activity in the activity log event for the alert. Possible values: `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`.|
-|operationName |The name of the operation in the activity log event. Possible values: `Microsoft.Resources/deployments/write`. |
-|resourceGroup |Name of the resource group for the impacted resource in the activity log event. |
+|level |Level of the activity in the activity log event for the alert. Possible values are `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`.|
+|operationName |The name of the operation in the activity log event. An example is `Microsoft.Resources/deployments/write`. |
+|resourceGroup |Name of the resource group for the affected resource in the activity log event. |
|resourceProvider |For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](../../azure-resource-manager/management/resource-providers-and-types.md). |
-|status |String describing the status of the operation in the activity event. Possible values: `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved` |
+|status |String describing the status of the operation in the activity event. Possible values are `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved`. |
|subStatus |Usually, this field is the HTTP status code of the corresponding REST call. This field can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others. |
-|resourceType |The type of the resource that was affected by the event. For example: `Microsoft.Resources/deployments`. |
+|resourceType |The type of the resource that was affected by the event. An example is `Microsoft.Resources/deployments`. |
This example sets the condition to the **Administrative** category:
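In the alert rule resource, the condition block for that category might look like the following sketch. It follows the `allOf`/`field`/`equals` structure of the `microsoft.insights/activityLogAlerts` schema, and only the `category` comparison is shown:

```json
"condition": {
    "allOf": [
        {
            "field": "category",
            "equals": "Administrative"
        }
    ]
}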
```
-This is an example template that creates an activity log alert rule using the **Administrative** condition:
+This example template creates an activity log alert rule by using the **Administrative** condition:
```json
{
] }
```

This sample JSON can be saved as, for example, *sampleActivityLogAlert.json*. You can deploy the sample by using [Azure Resource Manager in the Azure portal](../../azure-resource-manager/templates/deploy-portal.md). For more information about the activity log fields, see [Azure activity log event schema](../essentials/activity-log-schema.md).

> [!NOTE]
-> It might take up to 5 minutes for the new activity log alert rule to become active.
+> It might take up to five minutes for the new activity log alert rule to become active.
-## Create a new activity log alert rule using the REST API
+## Create a new activity log alert rule by using the REST API
-The Azure Monitor Activity Log Alerts API is a REST API. It's fully compatible with the Azure Resource Manager REST API. You can use it with PowerShell, by using the Resource Manager cmdlet or the Azure CLI.
+The Azure Monitor Activity Log Alerts API is a REST API. It's fully compatible with the Azure Resource Manager REST API. You can use it with PowerShell by using the Resource Manager cmdlet or the Azure CLI.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-### Deploy the Resource Manager template with PowerShell
+### Deploy the ARM template with PowerShell
-To use PowerShell to deploy the sample Resource Manager template shown in the [previous section](#create-an-activity-log-alert-rule-using-an-azure-resource-manager-template) section, use the following command:
+To use PowerShell to deploy the sample ARM template shown in the [previous section](#create-an-activity-log-alert-rule-by-using-an-arm-template), use the following command:
```powershell
New-AzResourceGroupDeployment -ResourceGroupName "myRG" -TemplateFile sampleActivityLogAlert.json -TemplateParameterFile sampleActivityLogAlert.parameters.json
```
-The *sampleActivityLogAlert.parameters.json* file contains the values provided for the parameters needed for alert rule creation.
-## Changes to log alert rule creation experience
+The *sampleActivityLogAlert.parameters.json* file contains values for the parameters that you need for alert rule creation.
+
+## Changes to the log alert rule creation experience
-If you're creating a new log alert rule, note that current alert rule wizard is a little different from the earlier experience:
+The current alert rule wizard is different from the earlier experience:
-- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action:
- - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
- - When you need to investigate in the logs, use the link in the alert to the search results in Logs.
- - If you need the raw search results or for any other advanced customizations, use Logic Apps.
+- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1,000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action:
+ - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, which gives you context for why the alert fired and how to fix the issue.
+ - When you need to investigate in the logs, use the link in the alert to the search results in logs.
+ - If you need the raw search results or for any other advanced customizations, use Azure Logic Apps.
- The new alert rule wizard doesn't support customization of the JSON payload.
  - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert.
  - For more advanced customizations, use Logic Apps.
- The new alert rule wizard doesn't support customization of the email subject.
- - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert of the desired resource using the resource ID column.
+ - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert of the desired resource by using the resource ID column.
  - For more advanced customizations, use Logic Apps.

## Next steps
+ [View and manage your alert instances](alerts-manage-alert-instances.md)
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
To create a new file, right-click under your timer trigger function (for example
} ```
-1. Copy the following code into the **run.csx** file. (You'll replace the preexisting code.)
+1. Define the `REGION_NAME` environment variable as a valid Azure availability location.
+
+ Run the following command in the [Azure CLI](https://learn.microsoft.com/cli/azure/account?view=azure-cli-latest&preserve-view=true#az-account-list-locations) to list available regions.
+ ```azurecli
+ az account list-locations -o table
+ ```
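+
+ Because a function app reads environment variables from its app settings, you can also set `REGION_NAME` from the command line. A sketch, assuming a placeholder function app name and resource group, with `eastus` as an example value:
+
+ ```azurecli
+ az functionapp config appsettings set -n {function_app_name} -g {ResourceGroup} --settings "REGION_NAME=eastus"
+ ```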
+
+1. Copy the following code into the **run.csx** file. (You'll replace the preexisting code.)
+
```csharp
#load "runAvailabilityTest.csx"
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
The table below displays the current state of auto-instrumentation availability.
Links are provided to additional information for each supported scenario.
-|Environment/Resource Provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python |
-|-||||-|-|
-|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: |
-|Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: |
-|Azure App Service on Linux | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
-|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> |
-|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[2](#Preview)</sup> | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |
-|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |
-|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
-|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
-|On-premises VMs Windows | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
-|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Environment/Resource Provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python |
+|-|||-|-|--|
+|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: |
+|Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: |
+|Azure App Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
+|Azure App Service on Linux - Publish as Docker | :x: | :x: | :x: | :x: | :x: |
+|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> |
+|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[2](#Preview)</sup> | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |
+|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |
+|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|On-premises VMs Windows | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
**Footnotes**

- <a name="OnBD">1</a>: Application Insights is on by default and enabled automatically.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Title: 'Azure Monitor best practices: Cost management'
+ Title: Cost optimization and Azure Monitor
description: Guidance and recommendations for reducing your cost for Azure Monitor. - Previously updated : 03/31/2022 Last updated : 10/17/2022
-# Azure Monitor best practices: Cost management
+# Cost optimization and Azure Monitor
+You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
-This article provides guidance on reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. It explains how to take advantage of cost-saving features to help ensure that you're not paying for data collection that provides little value. It also provides guidance for regularly monitoring your usage so that you can proactively detect and identify sources responsible for excessive usage.
-
-## Understand Azure Monitor charges
-
-You should start by understanding the different ways that Azure Monitor charges and how to view your monthly bill. See [Azure Monitor cost and usage](usage-estimated-costs.md) for a complete description and the different tools available to analyze your charges.
-
-## Configure workspaces
-
-You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. You want to evaluate configuration options that allow you to reduce your monitoring costs.
-
-### Configure pricing tier or dedicated cluster
-
-By default, workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough amount of data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers). You commit to a daily minimum of data collected in exchange for a lower rate.
-
-[Dedicated clusters](logs/logs-dedicated-clusters.md) provide more functionality and cost savings if you ingest at least 500 GB per day collectively among multiple workspaces in the same region. Unlike commitment tiers, workspaces in a dedicated cluster don't need to individually reach 500 GB.
-
-See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers.
-
-### Optimize workspace configuration
-
-As your monitoring environment becomes more complex, you'll need to consider whether to create more Log Analytics workspaces. This need might surface as you place resources in more regions or as you implement more services that use workspaces such as Microsoft Sentinel and Microsoft Defender for Cloud.
-
-There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. For a description of these implications and guidance on determining the most cost-effective solution for your environment, see:
--- [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud)-
-## Configure tables in each workspace
-
-Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You might be collecting data that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by optimizing your data retention and archiving and configuring Basic Logs.
-
-### Configure data retention and archiving
-
-Data collected in a Log Analytics workspace is retained for 31 days at no charge. The time period is 90 days if Microsoft Sentinel is enabled on the workspace. You can retain data beyond the default for trending analysis or other reporting, but there's a charge for this retention.
-
-Your retention requirement might be for compliance reasons or for occasional investigation or analysis of historical data. In this case, you should configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost. There's a cost to search archived data or temporarily restore it for analysis. If you require infrequent access to this data, this cost is more than offset by the reduced retention cost.
-
-You can configure retention and archiving for all tables in a workspace or configure each table separately. The options allow you to optimize your costs by setting only the retention you require for each data type.
-
-### Configure Basic Logs
-
-You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as [Basic Logs](logs/basic-logs-configure.md).
-
-Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features. They can't be used for alerting, their retention is set to eight days, they support a limited version of the query language, and there's a cost for querying them. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.
-
-The decision whether to configure a table for Basic Logs is based on the following criteria:
--- The table currently supports Basic Logs.-- You don't require more than eight days of data retention for the table.-- You only require basic queries of the data using a limited version of the query language.-- The cost savings for data ingestion over a month exceed the expected cost for any expected queries-
-See [Query Basic Logs in Azure Monitor](.//logs/basic-logs-query.md) for information on query limitations. See [Configure Basic Logs in Azure Monitor](logs/basic-logs-configure.md) for more information about Basic Logs.
-
-## Reduce the amount of data collected
-
-The most straightforward strategy to reduce your costs for data ingestion and retention is to reduce the amount of data that you collect. Your goal should be to collect the minimal amount of data to meet your monitoring requirements. You might find that you're collecting data that's not being used for alerting or analysis. If so, you have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you don't need.
-
-The configuration change varies depending on the data source. The following sections provide guidance for configuring common data sources to reduce the data they send to the workspace.
-
-## Virtual machines
-
-Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. The following table lists the most common data collected from virtual machines and strategies for limiting them for each of the Azure Monitor agents.
+> [!NOTE]
+> This article describes [Cost optimization](/azure/architecture/framework/cost/) for Azure Monitor as part of the [Azure Well-Architected Framework](/azure/architecture/framework/). This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
+>
+> - Reliability
+> - Security
+> - Cost Optimization
+> - Operational Excellence
+> - Performance Efficiency
-| Source | Strategy | Log Analytics agent | Azure Monitor agent |
-|:|:|:|:|
-| Event logs | Collect only required event logs and levels. For example, *Information*-level events are rarely used and should typically not be collected. For the Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific event IDs. |
-| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [Syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific events. |
-| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For the Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific counters. |
+## Design considerations
-### Use transformations to filter events
+Azure Monitor includes the following design considerations related to cost:
-The bulk of data collection from virtual machines will be from Windows or Syslog events. While you can provide more filtering with the Azure Monitor agent, you still might be collecting records that provide little value. Use [transformations](essentials//data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
+- Log Analytics workspace architecture<br><br>You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. This may include trade-offs between functionality and cost depending on your particular priorities.<br><br>See [Design a Log Analytics workspace architecture](logs/workspace-design.md) for a list of criteria to consider when designing a workspace architecture.
-See the following section on filtering data with transformations for a summary on where to implement filtering and transformations for different data sources.
-### Multi-homing agents
+## Checklist
-You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces because you might be incurring charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace.
+**Log Analytics workspace configuration**
-You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. Continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data.
+> [!div class="checklist"]
+> - Configure pricing tier or dedicated cluster to optimize your cost depending on your usage.
+> - Configure tables used for debugging, troubleshooting, and auditing as Basic Logs.
+> - Configure data retention and archiving.
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data for the same machine.
+**Data collection**
-## Application Insights
+> [!div class="checklist"]
+> - Use diagnostic settings and transformations to collect only critical resource log data from Azure resources.
+> - Configure VM agents to collect only critical events.
+> - Use transformations to filter resource logs.
+> - Ensure that VMs aren't sending data to multiple workspaces.
-There are multiple methods that you can use to limit the amount of data collected by Application Insights:
+**Monitor usage**
-* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics.
-* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too.
-* **Disable unneeded modules**: [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.
-* **Pre-aggregate metrics**: If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs).
-* **Limit the use of custom metrics**: The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics.
-* **Ensure use of updated SDKs**: Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which were collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected).
+> [!div class="checklist"]
+> - Send alert when data collection is high.
+> - Analyze your collected data at regular intervals to determine if there are opportunities to further reduce your cost.
+> - Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget.
-## Resource logs
-The data volume for [resource logs](essentials/resource-logs.md) varies significantly between services, so you should only collect the categories that are required. You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
+## Configuration recommendations
-Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. In this case, use [transformations](essentials/data-collection-transformations.md) on the workspace to filter logs that you don't require. You can also filter out the value of certain columns that you don't require to save additional cost.
-## Other insights and services
-See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage:
+### Log Analytics workspace configuration
+You may be able to significantly reduce your costs by optimizing the configuration of your Log Analytics workspaces. You can commit to a minimum amount of data collection in exchange for a reduced rate, and optimize your costs for the functionality and retention of data in particular tables.
-- **Container insights**: [Understand monitoring costs for Container insights](containers/container-insights-cost.md#control-ingestion-to-reduce-cost)
-- **Microsoft Sentinel**: [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md)
-- **Defender for Cloud**: [Setting the security event option at the workspace level](../defender-for-cloud/working-with-log-analytics-agent.md#data-collection-tier)
+| Recommendation | Description |
+|:|:|
+| Configure pricing tier or dedicated cluster for your Log Analytics workspaces. | By default, Log Analytics workspaces use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers) or [dedicated cluster](logs/logs-dedicated-clusters.md), which allows you to commit to a daily minimum of data collected in exchange for a lower rate.<br><br>See [Azure Monitor Logs cost calculations and options](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. |
+| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.<br><br>See [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more information about Basic Logs and [Query Basic Logs in Azure Monitor (preview)](.//logs/basic-logs-query.md) for details on query limitations. |
+| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost.<br><br>See [Configure data retention and archive policies in Azure Monitor Logs](logs/data-retention-archive.md) for details on how to configure your workspace and how to work with archived data. |
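To see where these recommendations pay off, it helps to know which tables drive your ingestion. The following query is a minimal sketch against the standard `Usage` table; the 30-day window and the GB conversion (`Quantity` is reported in MB) are illustrative and can be adjusted.

```kusto
// Billable data ingested per table over the last 30 days, largest first.
// High-volume tables are the candidates for a commitment tier, Basic Logs, or shorter retention.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000. by DataType
| sort by IngestedGB desc
```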
-## Filter data with transformations (preview)
-You can use [data collection rule transformations in Azure Monitor](essentials//data-collection-transformations.md) to filter incoming data to reduce costs for data ingestion and retention. In addition to filtering records from the incoming data, you can filter out columns in the data, reducing its billable size as described in [Data size calculation](logs/cost-logs.md#data-size-calculation).
-Use ingestion-time transformations on the workspace to further filter data for workflows where you don't have granular control. For example, you can select categories in a [diagnostic setting](essentials/diagnostic-settings.md) to collect resource logs for a particular service, but that category might also send records that you don't need. Create a transformation for the table that service uses to filter out records you don't want.
+### Data collection
+Since Azure Monitor charges for the collection of data, your goal should be to collect the minimal amount of data required to meet your monitoring requirements. You have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you're not using for alerting or analysis.
-You can also use ingestion-time transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
+| Recommendation | Description |
+|:|:|
+| **Azure resources** ||
+| Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data. See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
+| **Virtual machines** ||
+| Configure VM agents to collect only critical events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-workloads.md#controlling-costs) for guidance on data to collect and strategies for using XPath queries and transformations to limit it.|
+| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. |
+| **Container insights** | |
+| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data you don't need. |
+| Limit Prometheus metrics collected | If you configured Prometheus metric scraping, then follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. |
+| Configure Basic Logs | Convert your schema to ContainerLogV2, which is compatible with Basic Logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
+| **Application Insights** ||
+| Use sampling to tune the amount of data collected. | [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. |
+| Limit the number of Ajax calls. | [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too. |
+| Disable unneeded modules. | [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
+| Pre-aggregate metrics from any calls to TrackMetric. | If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs). |
+| Limit the use of custom metrics. | The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics. |
+| Ensure use of updated SDKs. | Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which were collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected). |
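The "Ensure that VMs aren't sending duplicate data" recommendation above can be spot-checked with a query along the lines of the sketch below. It scans every table in a workspace, so it can be slow and should be kept to a short time range; `_IsBillable` and `_BilledSize` are the standard hidden columns, and tables without a `Computer` column show an empty value.

```kusto
// Billable volume per computer and table for the last day. A machine that reports the
// same table through two agents, or into two workspaces, stands out with unexpectedly
// high volume. Run the query in each workspace you suspect and compare the results.
union withsource=SourceTable *
| where TimeGenerated > ago(1d)
| where _IsBillable == true
| summarize BillableGB = sum(_BilledSize) / 1e9 by Computer, SourceTable
| sort by BillableGB desc
```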
-The following table shows methods to apply transformations to different workflows.
-> [!NOTE]
-> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of *_CL* in their name.
-
-| Source | Target | Description | Filtering method |
-|:|:|:|:|
-| Azure Monitor agent | Azure tables | Collect data from standard sources such as Windows events, Syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use XPath in the data collection rule (DCR) to collect specific data from client machines. Ingestion-time transformations in the agent DCR aren't yet supported. |
-| Azure Monitor agent | Custom tables | Collecting data outside of standard data sources is not yet supported. | |
-| Log Analytics agent | Azure tables | Collect data from standard sources such as Windows events, Syslog, and performance data and send it to Azure tables in the Log Analytics workspace. | Configure data collection on the workspace. Optionally, create ingestion-time transformation in the workspace DCR to filter records and columns. |
-| Log Analytics agent | Custom tables | Configure [custom logs](agents/data-sources-custom-logs.md) on the workspace to collect file-based text logs. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new logs ingestion API. |
-| Data Collector API | Custom tables | Use the [Data Collector API](logs/data-collector-api.md) to send data to custom tables in the workspace by using the REST API. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new Logs ingestion API. |
-| Logs ingestion API | Custom tables<br>Azure tables | Use the [Logs ingestion API](logs/logs-ingestion-api-overview.md) to send data to the workspace using REST API. | Configure ingestion-time transformation in the DCR for the custom log. |
-| Other data sources | Azure tables | Includes resource logs from diagnostic settings and other Azure Monitor features such as Application insights, Container insights, and VM insights. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. |
## Monitor workspace and analyze usage
-After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to reduce your usage. For example, you might want to further filter out collected data that hasn't proven to be useful.
-
-### Set a daily cap
-
-A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. A daily cap shouldn't be used as a method to reduce costs but as a preventative measure to ensure that you don't exceed a particular budget. Daily caps are typically used by organizations that are particularly cost conscious.
+After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to further filter out collected data that hasn't proven to be useful.
-When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Instead of relying on the daily cap alone, you can configure an alert rule to notify you when data collection reaches some level before the daily cap. Notification allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources.
-See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for information on how the daily cap works and how to configure one.
-
-### Send alert when data collection is high
-
-To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period.
-
-The following example is a [log alert rule](alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule.
-
-| Setting | Value |
+| Recommendation | Description |
|:|:|
-| **Scope** | |
-| Target scope | Select your Log Analytics workspace. |
-| **Condition** | |
-| Query | `Usage \| where IsBillable \| summarize DataGB = sum(Quantity / 1000.)` |
-| Measurement | Measure: *DataGB*<br>Aggregation type: Total<br>Aggregation granularity: 1 day |
-| Alert Logic | Operator: Greater than<br>Threshold value: 50<br>Frequency of evaluation: 1 day |
-| Actions | Select or add an [action group](alerts/action-groups.md) to notify you when the threshold is exceeded. |
-| **Details** | |
-| Severity| Warning |
-| Alert rule name | Billable data volume greater than 50 GB in 24 hours. |
-
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for information on using log queries like the one used here to analyze billable usage in your workspace.
+| Send alert when data collection is high. | To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period. See [Send alert when data collection is high](logs/analyze-usage.md#send-alert-when-data-collection-is-high) for details. |
+| Analyze collected data. | Periodically analyze data collection using the methods in [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines, or when you onboard a new service. |
+| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in [When to use a daily cap](logs/daily-cap.md). See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for information on how the daily cap works and how to configure one. |
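For the "Analyze collected data" recommendation, a daily trend of billable ingestion per table makes unexpected step changes easy to spot. This is a sketch using the standard `Usage` table; adjust the window and the rendering to suit your workspace.

```kusto
// Daily billable ingestion per table for the last 31 days. A sudden step up in one
// table usually points to the configuration change or new data source to investigate.
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000. by bin(TimeGenerated, 1d), DataType
| render columnchart
```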
-## Analyze your collected data
-When you detect an increase in data collection, you need methods to analyze your collected data to identify the source of the increase. You should also periodically analyze data collection to determine if there's additional configuration that can decrease your usage further. This practice is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service.
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for different methods to analyze your collected data and billable usage. This article includes various log queries that will help you identify the source of any data increases and to understand your basic usage patterns.
-## Next steps
+## Next step
-- See [Azure Monitor cost and usage](usage-estimated-costs.md) for a description of Azure Monitor and how to view and analyze your monthly bill.
-- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
-- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for information on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected.
-- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that can be ingested in a workspace.
+- [Get best practices for a complete deployment of Azure Monitor](best-practices.md).
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
This article provides pricing guidance for Container insights to help you unders
* Measure costs after Container insights has been enabled for one or more containers. * Control the collection of data and make cost reductions.
-Azure Monitor Logs collects, indexes, and stores data generated by your Kubernetes cluster.
-The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected. It's also dependent on the plan selected and how long you chose to store data generated from your clusters.
+
+The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected. It also depends on the plan selected and how long you choose to store data generated from your clusters.
>[!NOTE] >All sizes and pricing are for sample estimation only. See the Azure Monitor [pricing](https://azure.microsoft.com/pricing/details/monitor/) page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region.
The following types of data collected from a Kubernetes cluster with Container i
- Active scraping of Prometheus metrics - [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
-## What's collected from Kubernetes clusters?
-
-Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. All the metrics listed here are collected every minute.
-
-### Node metrics collected
-
-The 24 metrics per node that are collected:
-
-- cpuUsageNanoCores
-- cpuCapacityNanoCores
-- cpuAllocatableNanoCores
-- memoryRssBytes
-- memoryWorkingSetBytes
-- memoryCapacityBytes
-- memoryAllocatableBytes
-- restartTimeEpoch
-- used (disk)
-- free (disk)
-- used_percent (disk)
-- io_time (diskio)
-- writes (diskio)
-- reads (diskio)
-- write_bytes (diskio)
-- write_time (diskio)
-- iops_in_progress (diskio)
-- read_bytes (diskio)
-- read_time (diskio)
-- err_in (net)
-- err_out (net)
-- bytes_recv (net)
-- bytes_sent (net)
-- Kubelet_docker_operations (kubelet)
-
-### Container metrics
-
-The eight metrics per container that are collected:
-
-- cpuUsageNanoCores
-- cpuRequestNanoCores
-- cpuLimitNanoCores
-- memoryRssBytes
-- memoryWorkingSetBytes
-- memoryRequestBytes
-- memoryLimitBytes
-- restartTimeEpoch
-
-### Cluster inventory
-
-The cluster inventory data that's collected by default:
-
-- KubePodInventory: 1 per pod per minute
-- KubeNodeInventory: 1 per node per minute
-- KubeServices: 1 per service per minute
-- ContainerInventory: 1 per container per minute
-
-## Estimate costs to monitor your AKS cluster
+## Estimating costs to monitor your AKS cluster
The following estimation is based on an AKS cluster with the following sizing example. The estimate applies only for metrics and inventory data collected. For container logs like stdout, stderr, and environmental variables, the estimate varies based on the log sizes generated by the workload. They're excluded from our estimation.
If you use [Prometheus metric scraping](container-insights-prometheus.md), make
### Configure Basic Logs
-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/basic-logs-configure.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
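As a rough illustration of what a Basic Logs query against ContainerLogV2 looks like, the sketch below assumes the default ContainerLogV2 schema (`ContainerName`, `LogMessage`) and a hypothetical container name; Basic Logs queries are limited to a subset of KQL operators and are charged per query.

```kusto
// Simple troubleshooting query of the kind Basic Logs supports.
// "my-app" is a placeholder; substitute a container name from your cluster.
ContainerLogV2
| where TimeGenerated > ago(1d)
| where ContainerName == "my-app"
| where LogMessage has "error"
```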
+## Data collected from Kubernetes clusters
+
+### Metric data
+Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. All metrics in the following table are collected every minute.
++
+| Type | Metrics |
+|:|:|
+| Node metrics | `cpuUsageNanoCores`<br>`cpuCapacityNanoCores`<br>`cpuAllocatableNanoCores`<br>`memoryRssBytes`<br>`memoryWorkingSetBytes`<br>`memoryCapacityBytes`<br>`memoryAllocatableBytes`<br>`restartTimeEpoch`<br>`used` (disk)<br>`free` (disk)<br>`used_percent` (disk)<br>`io_time` (diskio)<br>`writes` (diskio)<br>`reads` (diskio)<br>`write_bytes` (diskio)<br>`write_time` (diskio)<br>`iops_in_progress` (diskio)<br>`read_bytes` (diskio)<br>`read_time` (diskio)<br>`err_in` (net)<br>`err_out` (net)<br>`bytes_recv` (net)<br>`bytes_sent` (net)<br>`Kubelet_docker_operations` (kubelet)
+| Container metrics | `cpuUsageNanoCores`<br>`cpuRequestNanoCores`<br>`cpuLimitNanoCores`<br>`memoryRssBytes`<br>`memoryWorkingSetBytes`<br>`memoryRequestBytes`<br>`memoryLimitBytes`<br>`restartTimeEpoch`
+
+### Cluster inventory
+
+The following list is the cluster inventory data collected by default:
+
+- KubePodInventory – 1 per pod per minute
+- KubeNodeInventory – 1 per node per minute
+- KubeServices – 1 per service per minute
+- ContainerInventory – 1 per container per minute
## Next steps To help you understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
ms.reviwer: nikeist
# Data collection transformations in Azure Monitor (preview) Transformations in Azure Monitor allow you to filter or modify incoming data before it's sent to a Log Analytics workspace. This article provides a basic description of transformations and how they are implemented. It provides links to other content for actually creating a transformation.
-## When to use transformations
-Transformations are useful for a variety of scenarios, including those described below.
+## Why use transformations
+The following table describes the goals that you can achieve with transformations.
-### Reduce data costs
-Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.
-
-- **Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.
-
-- **Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.
-
-- **Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original.
-
-### Remove sensitive data
-You may have a data source that sends information you don't want stored for privacy or compliance reasons.
-
-- **Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.
-
-- **Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number.
-
-### Enrich data with additional or calculated information
-Use a transformation to add information to data that provides business context or simplifies querying the data later.
+| Category | Details |
+|:|:|
+| Remove sensitive data | You may have a data source that sends information you don't want stored for privacy or compliance reasons.<br><br>**Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.<br><br>**Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number. |
+| Enrich data with additional or calculated information | Use a transformation to add information to data that provides business context or simplifies querying the data later.<br><br>**Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.<br><br>**Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns. |
+| Reduce data costs | Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.<br><br>**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.<br><br>**Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.<br><br>**Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original. |
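To make the table above concrete, a transformation is a KQL statement that runs over a virtual `source` table before the data lands in the workspace. The sketch below combines the three goals; every column name in it is an assumption for illustration and must be replaced with columns that actually exist in the target table.

```kusto
// Illustrative transformation: Level, CallerIpAddress, Environment, and
// RawEventData are placeholder column names, not a real schema.
source
| where Level != "Verbose"              // reduce cost: drop low-value rows
| extend CallerIpAddress = "0.0.0.0"    // remove sensitive data: mask a column
| extend Environment = "Production"     // enrich: add business context
| project-away RawEventData             // reduce cost: drop a large column
```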
-- **Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.
-- **Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns.
## Supported tables
Transformations may be applied to the following tables in a Log Analytics workspace.
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The following table provides unique requirements for each destination including
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+## Controlling costs
+
+There is a cost for collecting data in a Log Analytics workspace, so you should only collect the categories you require for each service. The data volume for resource logs varies significantly between services.
+
+You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
+
+Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. Or you may want to remove unneeded columns from the data. In these cases, use [transformations](data-collection-transformations.md) on the workspace to filter logs that you don't require.
++
+You can also use transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
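A workspace transformation for a resource log table is again a KQL statement over a virtual `source` table. The following minimal sketch applies the two ideas above (keep only the records you alert on, drop a bulky column); the column names are placeholders and must match the actual schema of the table that the diagnostic setting feeds.

```kusto
// Keep only error and warning records and drop a large column you never query.
// "Level" and "Properties" are assumed column names for illustration.
source
| where Level in ("Error", "Warning")
| project-away Properties
```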
++ ## Create diagnostic settings You can create and edit diagnostic settings by using multiple methods.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 11/17/2022 Last updated : 11/21/2022
This latest update adds a new column and reorders the metrics to be alphabetical
> [!NOTE] > NumActiveWorkers is supported only if YARN is installed, and the Resource Manager is running.
+>
+> Alternatively, customers can use Log Analytics for Kafka to get insights and can write custom queries for the best monitoring experience. For more information, see [Use Azure Monitor logs to monitor HDInsight clusters](https://learn.microsoft.com/azure/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial).
## Microsoft.HealthcareApis/services
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/dns-analytics.md
description: Set up and use the DNS Analytics solution in Azure Monitor to gathe
Previously updated : 03/20/2018 Last updated : 11/23/2022
DNS Analytics helps you to:
The solution collects, analyzes, and correlates Windows DNS analytic and audit logs and other related data from your DNS servers.
+> [!IMPORTANT]
+> The Log Analytics agent will be **retired on 31 August, 2024**. If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](../..//sentinel/ama-migrate.md).
+ ## Connected sources The following table describes the connected sources that are supported by this solution:
To provide feedback, visit the [Log Analytics UserVoice page](https://aka.ms/dns
## Next steps
-[Query logs](../logs/log-query-overview.md) to view detailed DNS log records.
+[Query logs](../logs/log-query-overview.md) to view detailed DNS log records.
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
Last updated 08/25/2022
# Analyze usage in a Log Analytics workspace Azure Monitor costs can vary significantly based on the volume of data being collected in your Log Analytics workspace. This volume is affected by the set of solutions using the workspace and the amount of data that each solution collects. This article provides guidance on analyzing your collected data to assist in controlling your data ingestion costs. It helps you determine the cause of higher-than-expected usage. It also helps you to predict your costs as you monitor more resources and configure different Azure Monitor features. + ## Causes for higher-than-expected usage Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the:
Each Log Analytics workspace is charged as a separate service and contributes to
An unexpected increase in any of these factors can result in increased charges for data retention. The rest of this article provides methods for detecting such a situation and then analyzing collected data to identify and mitigate the source of the increased usage.
+## Send alert when data collection is high
+
+To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period.
+
+The following example is a [log alert rule](../alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule.
+
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your Log Analytics workspace. |
+| **Condition** | |
+| Query | `Usage \| where IsBillable \| summarize DataGB = sum(Quantity / 1000.)` |
+| Measurement | Measure: *DataGB*<br>Aggregation type: Total<br>Aggregation granularity: 1 day |
+| Alert Logic | Operator: Greater than<br>Threshold value: 50<br>Frequency of evaluation: 1 day |
+| Actions | Select or add an [action group](../alerts/action-groups.md) to notify you when the threshold is exceeded. |
+| **Details** | |
+| Severity| Warning |
+| Alert rule name | Billable data volume greater than 50 GB in 24 hours. |
+ ## Usage analysis in Azure Monitor Start your analysis with existing tools in Azure Monitor. These tools require no configuration and can often provide the information you need with minimal effort. If you need deeper analysis into your collected data than existing Azure Monitor features, use any of the following [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md).
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
The following table summarizes the two plans.
> [!NOTE] > The Basic log data plan isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
+## When should I use Basic Logs?
+The decision whether to configure a table for Basic Logs is based on the following criteria:
+
+- The table currently [supports Basic Logs](#which-tables-support-basic-logs).
+- You don't require more than eight days of data retention for the table.
+- You only require basic queries of the data using a limited version of the query language.
+- The cost savings for data ingestion over a month exceed the expected cost of the queries you expect to run (see the sketch after this list).
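One way to test the last criterion is to measure a candidate table's monthly billable ingestion and weigh it against the queries you expect to run. This is a sketch against the standard `Usage` table; "AppTraces" is only an example `DataType` value, and `Quantity` is reported in MB.

```kusto
// Rough monthly billable ingestion for one candidate table.
// Substitute your own table name for the illustrative "AppTraces".
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| where DataType == "AppTraces"
| summarize MonthlyGB = sum(Quantity) / 1000.
```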
+ ## Which tables support Basic Logs? By default, all tables in your Log Analytics workspace are Analytics tables, and they're available for query and alerts. You can currently configure the following tables for Basic Logs:
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
ms.reviwer: dalek git
The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor don't have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs. ++ ## Pricing model The default pricing for Log Analytics is a pay-as-you-go model that's based on ingested data volume and data retention. Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. [Pricing for Log Analytics](https://azure.microsoft.com/pricing/details/monitor/) is set regionally. The amount of data ingestion can be considerable, depending on:
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
A daily cap on a Log Analytics workspace allows you to avoid unexpected increase
> [!IMPORTANT] > You should use care when setting a daily cap because when data collection stops, your ability to observe and receive alerts on the health conditions of your resources will be impacted. It can also impact other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace. Your goal shouldn't be to regularly hit the daily limit but rather use it as an infrequent method to avoid unplanned charges resulting from an unexpected increase in the volume of data collected.-
+>
+> For strategies to reduce your Azure Monitor costs, see [Cost optimization and Azure Monitor](/azure/azure-monitor/best-practices-cost).
## How the daily cap works Each workspace has a daily cap that defines its own data volume limit. When the daily cap is reached, a warning banner appears across the top of the page for the selected Log Analytics workspace in the Azure portal, and an operation event is sent to the *Operation* table under the **LogManagement** category. You can optionally create an alert rule to send an alert when this event is created.
Data collection resumes at the reset time which is a different hour of the day f
> [!NOTE] > The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. See [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) for a query that is helpful in studying the daily cap behavior.
+## When to use a daily cap
+Daily caps are typically used by organizations that are particularly cost conscious. They shouldn't be used as a method to reduce costs, but rather as a preventative measure to ensure that you don't exceed a particular budget.
+
+When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Instead of relying on the daily cap alone, you can [create an alert rule](#alert-when-daily-cap-is-reached) to notify you when data collection reaches some level before the daily cap. Notification allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources.
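To act before the cap trips, it also helps to watch how quickly ingestion accumulates during the day. The following sketch sums billable ingestion per hour since the start of the current UTC day; the workspace's actual reset hour varies, so treat it as an approximation.

```kusto
// Hourly billable ingestion since the start of the current UTC day.
// Compare the running total against your configured daily cap.
Usage
| where TimeGenerated > startofday(now())
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000. by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```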
+ ## Application Insights You shouldn't create a daily cap for workspace-based Application Insights resources but instead create a daily cap for their workspace. You do need to create a separate daily cap for any classic Application Insights resources since their data doesn't reside in a Log Analytics workspace.
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
The following properties are reserved and shouldn't be used in a custom record t
- TimeGenerated - RawData + ## Data limits The data posted to the Azure Monitor Data collection API is subject to certain constraints:
The complete set of status codes that the service might return is listed in the
To query data submitted by the Azure Monitor HTTP Data Collector API, search for records whose **Type** is equal to the **LogType** value that you specified and appended with **_CL**. For example, if you used **MyCustomLog**, you would return all records with `MyCustomLog_CL`. ## Sample requests
-In the next sections, you'll find samples that demonstrate how to submit data to the Azure Monitor HTTP Data Collector API by using various programming languages.
+In this section are samples that demonstrate how to submit data to the Azure Monitor HTTP Data Collector API by using various programming languages.
For each sample, set the variables for the authorization header by doing the following:
For each sample, set the variables for the authorization header by doing the fol
Alternatively, you can change the variables for the log type and JSON data.
-### PowerShell sample
+### [PowerShell](#tab/powershell)
+ ```powershell # Replace with your Workspace ID $CustomerId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Function Post-LogAnalyticsData($customerId, $sharedKey, $body, $logType)
Post-LogAnalyticsData -customerId $customerId -sharedKey $sharedKey -body ([System.Text.Encoding]::UTF8.GetBytes($json)) -logType $logType ```
-### C# sample
+### [C#](#tab/c-sharp)
```csharp using System; using System.Net;
namespace OIAPIExample
```
-### Python sample
+### [Python](#tab/python)
>[!NOTE] > If using Python 2, you may need to change the line:
post_data(customer_id, shared_key, body, log_type)
```
-### Java sample
+### [Java](#tab/java)
```java
public class ApiExample {
``` + ## Alternatives and considerations
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
You can modify the target table and workspace by modifying the DCR without any c
## Supported tables
-The following tables are supported.
- ### Custom tables
-The Logs Ingestion API can send data to any custom table that you create and to certain built-in tables in your Log Analytics workspace. The target table must exist before you can send data to it.
+The Logs Ingestion API can send data to any custom table that you create and to certain built-in tables in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix.
### Built-in tables
-The Logs Ingestion API can send data to the following built-in tables. Other tables might be added to this list as support for them is implemented:
+The Logs Ingestion API can send data to the following built-in tables. Other tables may be added to this list as support for them is implemented. Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table don't need this suffix. Column names can consist of alphanumeric characters and the characters `_` and `-`, and they must start with a letter.
- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) - [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent) - [Syslog](/azure/azure-monitor/reference/tables/syslog) - [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)
-### Table limits
-
-Tables have the following limitations:
-* Custom tables must have the `_CL` suffix.
-* Column names can consist of alphanumeric characters and the characters `_` and `-`. They must start with a letter.
-* Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table don't need this suffix.
## Authentication
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Last updated 05/05/2022
This article describes the different ways that Azure Monitor charges for usage. It also explains how to evaluate charges on your Azure bill and how to estimate charges to monitor your entire environment. + ## Pricing model Azure Monitor uses consumption-based pricing, which is also known as pay-as-you-go pricing. With this billing model, you only pay for what you use. Features of Azure Monitor that are enabled by default don't incur any charge. These features include collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
Use the following basic guidance for common resources:
- **Virtual machines**: With typical monitoring enabled, a virtual machine generates from 1 GB to 3 GB of data per month. This range is highly dependent on the configuration of your agents.
- **Application Insights**: For different methods to estimate data from your applications, see the following section.
-- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimate-costs-to-monitor-your-aks-cluster).
+- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster).
The [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) includes data volume estimation calculators for these three cases.
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
For a list of the data sources available and details on how to configure them, s
> [!IMPORTANT] > Be careful to collect only the data that you require. Costs are associated with any data collected in your workspace. The data that you collect should only support particular analysis and alerting scenarios.
+## Controlling costs
+Be careful to collect only the data that you require. Costs are associated with any data collected in your workspace. The data that you collect should only support particular analysis and alerting scenarios.
+++
+Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. Since your Azure Monitor cost is dependent on how much data you collect, you want to ensure that you're not collecting any more data than you require to meet your monitoring requirements.
+
+Each data source that you collect may have a different method for filtering out unwanted data. You can also use transformations to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
+
+The following table lists the different data sources on a VM and how to filter the data they collect.
+
+> [!NOTE]
+> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of _CL in their name.
+
+| Target | Description | Filtering method |
+|:|:|:|
+| Azure tables | [Collect data from standard sources](../agents/data-collection-rule-azure-monitor-agent.md) such as Windows events, Syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use [XPath in the data collection rule (DCR)](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to collect specific data from client machines.<br><br>Use transformations to further filter specific events or remove unnecessary columns. |
+| Custom tables | [Create a data collection rule](../agents/data-collection-text-log.md) to collect file-based text logs from the agent. | Add a [transformation](../essentials/data-collection-transformations.md) to the data collection rule. |
++ ## Convert management pack logic A significant number of customers who implement Azure Monitor currently monitor their virtual machine workloads by using management packs in System Center Operations Manager. There are no migration tools to convert assets from Operations Manager to Azure Monitor because the platforms are fundamentally different. Your migration instead constitutes a standard Azure Monitor implementation while you continue to use Operations Manager. As you customize Azure Monitor to meet your requirements for different applications and components and as it gains more features, you can start to retire different management packs and agents in Operations Manager.
azure-netapp-files Azure Netapp Files Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-sdk-cli.md
The table below lists the supported SDKs. You can find details about the suppor
||--| | .NET | [Azure/azure-sdk-for-net](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/netapp) | | Python | [Azure/azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/netapp) |
-| Go | [Azure/azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go/tree/main/services/netapp) |
+| Go | [Azure/azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/netapp) |
| Java | [Azure/azure-sdk-for-java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/netapp) | | JavaScript | [Azure/azure-sdk-for-js](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/netapp/arm-netapp) | | Ruby | [Azure/azure-sdk-for-ruby](https://github.com/Azure/azure-sdk-for-ruby/tree/master/management/azure_mgmt_netapp) |
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 11/10/2022 Last updated : 11/23/2022 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620) * [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161) * [SAP HANA Disaster Recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_Disaster_Recovery_with_Azure_NetApp_Files.pdf)
-* [SAP HANA backup and recovery on Azure NetApp Files with SnapCenter Service](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_backup_and_recovery_on_Azure_NetApp_Files_with_SnapCenter_Service.pdf)
### SAP AnyDB
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 11/22/2022 Last updated : 11/23/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* Alternatively, an AD domain user account with `msDS-SupportedEncryptionTypes` write permission on the AD connection admin account can also be used to set the Kerberos encryption type property on the AD connection admin account. >[!NOTE]
- >It's _not_ recommended nor required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the AD admin account.
+ >It's _not_ recommended or required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the Azure NetApp Files AD admin account.
If you set both AES-128 and AES-256 Kerberos encryption on the admin account of the AD connection, the highest level of encryption supported by your AD DS will be used.
azure-signalr Signalr Tutorial Build Blazor Server Chat App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built i
dotnet new blazorserver -o BlazorChat ```
-1. Add a new C# file called `BlazorChatSampleHub.cs` and create a new class `BlazorSampleHub` deriving from the `Hub` class for the chat app. For more information on creating hubs, see [Create and Use Hubs](/aspnet/core/signalr/hubs#create-and-use-hubs).
+1. Add a new C# file called `BlazorChatSampleHub.cs` and create a new class `BlazorChatSampleHub` deriving from the `Hub` class for the chat app. For more information on creating hubs, see [Create and Use Hubs](/aspnet/core/signalr/hubs#create-and-use-hubs).
```cs using System;
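using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Minimal sketch only (assumed shape, not the full tutorial sample): a hub named
// BlazorChatSampleHub that relays chat messages to every connected client. The
// method name, event name, and hub URL below are illustrative assumptions.
public class BlazorChatSampleHub : Hub
{
    public const string HubUrl = "/chat";

    // Clients invoke Broadcast; the hub forwards the message to all connections.
    public async Task Broadcast(string username, string message) =>
        await Clients.All.SendAsync("Broadcast", username, message);
}
```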
azure-sql-edge Resources Partners Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/resources-partners-security.md
This article highlights Microsoft partner companies with security solutions to
| Partner| Description | Links | |--|--|--|
-|![DH2i](media/resources/dh2i-logo.png)|DH2i takes an innovative new approach to networking connectivity by enabling organizations with its Software Defined Perimeter (SDP) Always-Secure and Always-On IT Infrastructure. DxOdyssey for IoT extends this to edge devices, allowing seamless access from the edge devices to the data center and cloud. This SDP module runs on any IoT device in a container on x64 and arm64 architecture. Once enabled, organizations can create secure, private application-level tunnels between devices and hubs without the requirement of a VPN or exposing public, open ports. This SDP module is purpose-built for IoT use cases where edge devices must communicate with any other devices, resources, applications, or clouds. Minimum hardware requirements: Linux x64 and arm64 OS, 1 GB of RAM, 100 Mb of storage| [Website](https://dh2i.com/) [Marketplace](https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) [Documentation](https://dh2i.com/dxodyssey-for-iot/) [Support](https://dh2i.com/support/)
+|![DH2i](media/resources/dh2i-logo.png)|DH2i takes an innovative new approach to networking connectivity by enabling organizations with its Software Defined Perimeter (SDP) Always-Secure and Always-On IT Infrastructure. DxOdyssey for IoT extends this to edge devices, allowing seamless access from the edge devices to the data center and cloud. This SDP module runs on any IoT device in a container on x64 and arm64 architecture. Once enabled, organizations can create secure, private application-level tunnels between devices and hubs without the requirement of a VPN or exposing public, open ports. This SDP module is purpose-built for IoT use cases where edge devices must communicate with any other devices, resources, applications, or clouds. Minimum hardware requirements: Linux x64 and arm64 OS, 1 GB of RAM, 100 Mb of storage| [Website](https://dh2i.com/) [Marketplace](https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) [Documentation](https://dh2i.com/dxodyssey-for-iot/) [Support](https://support.dh2i.com/)
## Next steps
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
When creating a new paid account, you need to connect the Azure Video Indexer ac
> [!NOTE] > It is recommended to use Azure Video Indexer ARM-based accounts.
-* [Create an ARM-based (paid) account in Azure portal](create-account-portal.md). To create an account with an API, see [Accounts](/rest/api/videoindexer/accounts?branch=videoindex)
+* [Create an ARM-based (paid) account in Azure portal](create-account-portal.md). To create an account with an API, see [Accounts](/rest/api/videoindexer/preview/accounts)
> [!TIP] > Make sure you are signed in with the correct domain to the [Azure Video Indexer website](https://www.videoindexer.ai/). For details, see [Switch tenants](switch-tenants-portal.md).
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
In this article, we demonstrate options of connecting your **existing** Azure Vi
Connecting a classic account to be ARM-based triggers a 30-day transition state. In the transition state, an existing account can be accessed by generating an access token using both: * Access token [generated through API Management](https://aka.ms/avam-dev-portal) (classic way)
-* Access token [generated through ARM](/rest/api/videoindexer/generate/access-token)
+* Access token [generated through ARM](/rest/api/videoindexer/preview/generate/access-token)
The transition state moves all account management functionality to ARM, where it's handled by [Azure RBAC][docs-rbac-overview].
Before the end of the 30 days of transition state, you can remove access from us
## After connecting to ARM is complete
-After successfully connecting your account to ARM, it is recommended to make sure your account management APIs are replaced with [Azure Video Indexer REST API](/rest/api/videoindexer/accounts?branch=videoindex).
-As mentioned in the beginning of this article, during the 30 days of the transition state, "[Get-access-token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)" will be supported side by side the ARM-based "[Generate-Access token](/rest/api/videoindexer/generate/access-token)".
+After successfully connecting your account to ARM, it's recommended to make sure your account management APIs are replaced with the [Azure Video Indexer REST API](/rest/api/videoindexer/preview/accounts).
+As mentioned at the beginning of this article, during the 30-day transition state, "[Get-access-token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)" will be supported side by side with the ARM-based "[Generate-Access token](/rest/api/videoindexer/preview/generate/access-token)".
Make sure to change to the new "Generate-Access token" by updating all your solutions that use the API. APIs to be changed:
APIs to be changed:
- Get accounts – List of all accounts in a region. - Create paid account – would create a classic account.
-For a full description of [Azure Video Indexer REST API](/rest/api/videoindexer/accounts?branch=videoindex) calls and documentation, follow the link.
+For a full description of [Azure Video Indexer REST API](/rest/api/videoindexer/preview/accounts) calls and documentation, follow the link.
For code sample generating an access token through ARM see [C# code sample](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ApiUsage/ArmBased/Program.cs).
azure-video-indexer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md
Previously updated : 02/02/2022 Last updated : 11/23/2022 # Customize a Language model with Azure Video Indexer Azure Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of languages supported by Azure Video Indexer in [supported languages](language-support.md).
-Let's take a word that is highly specific, like "Kubernetes" (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure Video Indexer, it is recognized as "communities". You need to train the model to recognize it as "Kubernetes". In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, "container service" is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
+Let's take a word that is highly specific, like *"Kubernetes"* (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure Video Indexer, it is recognized as *"communities"*. You need to train the model to recognize it as *"Kubernetes"*. In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, *"container service"* is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
-You have the option to upload words without context in a list in a text file. This is considered partial adaptation. Alternatively, you can upload text file(s) of documentation or sentences related to your content for better adaptation.
+There are 2 ways to customize a language model:
+
+- **Option 1**: Edit the transcript that was generated by Azure Video Indexer. By editing and correcting the transcript, you are training a language model to provide improved results in the future.
+- **Option 2**: Upload text file(s) to train the language model. The upload file can contain either a list of words as you would like them to appear in the Video Indexer transcript, or the relevant words included naturally in sentences and paragraphs. Because better results are achieved with the latter approach, it's recommended that the upload file contain full sentences or paragraphs related to your content, as shown in the example after the following note.
+
+> [!Important]
+> Don't include words or sentences in the upload file as they're currently incorrectly transcribed (for example, *"communities"*), as this will negate the intended impact.
+> Only include the words as you would like them to appear (for example, *"Kubernetes"*).
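For example, a plain-text adaptation file for the *"Kubernetes"* scenario above might contain full sentences such as the following (an illustrative sketch, not an official sample):

```
Azure Kubernetes Service makes it easy to deploy a managed Kubernetes cluster in Azure.
You can scale the container service by adding nodes to the Kubernetes cluster.
Kubernetes orchestrates the containers that run your workloads.
```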
You can use the Azure Video Indexer APIs or the website to create and edit custom Language models, as described in topics in the [Next steps](#next-steps) section of this topic.
azure-video-indexer Live Stream Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/live-stream-analysis.md
A solution described in this article, allows customers to use Azure Video Indexe
*Figure 1 – Sample player displaying the Azure Video Indexer metadata on the live stream*
-The [stream analysis solution](https://aka.ms/livestreamanalysis) at hand, uses Azure Functions and two Logic Apps to process a live program from a live channel in Azure Media Services with Azure Video Indexer and displays the result with Azure Media Player showing the near real-time resulted stream.
+The [stream analysis solution](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/LiveStreamAnalysis/README.MD) at hand uses Azure Functions and two Logic Apps to process a live program from a live channel in Azure Media Services with Azure Video Indexer, and displays the result with Azure Media Player showing the near real-time resulting stream.
At a high level, it consists of two main steps. The first step runs every 60 seconds, takes a subclip of the last 60 seconds played, creates an asset from it, and indexes it via Azure Video Indexer. The second step is called once indexing is complete. The insights captured are processed, sent to Azure Cosmos DB, and the indexed subclip is deleted.
The sample player plays the live stream and gets the insights from Azure Cosmos
## Step-by-step guide
-The full code and a step-by-step guide to deploy the results can be found in [GitHub project for Live media analytics with Azure Video Indexer](https://aka.ms/livestreamanalysis).
+The full code and a step-by-step guide to deploy the results can be found in [GitHub project for Live media analytics with Azure Video Indexer](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/LiveStreamAnalysis/README.MD).
## Next steps
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
The following image shows the first flow:
1. <a name="access_token"></a>Generate an access token. > [!NOTE]
- > For details about the ARM API and the request/response examples, see [Generate an Azure Video Indexer access token](/rest/api/videoindexer/generate/access-token?tabs=HTTP).
+ > For details about the ARM API and the request/response examples, see [Generate an Azure Video Indexer access token](/rest/api/videoindexer/preview/generate/access-token).
> > Press **Try it** to get the correct values for your account.
azure-video-indexer Restricted Viewer Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/restricted-viewer-role.md
Users with this role are **unable** to perform the following tasks:
## Using an ARM API
-To generate a Video Indexer restricted viewer access token via API, see [documentation](/rest/api/videoindexer/generate/access-token).
+To generate a Video Indexer restricted viewer access token via API, see [documentation](/rest/api/videoindexer/preview/generate/access-token).
## Restricted Viewer Video Indexer website experience
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
In this scenario, the primary site is an Azure VMware Solution private cloud in
Currently, Zerto disaster recovery on Azure VMware Solution is in an Initial Availability (IA) phase. In the IA phase, you must contact Microsoft to request and qualify for IA support.
-To request IA support for Zerto on Azure VMware Solution, send an email request to zertoonavs@microsoft.com. In the IA phase, Azure VMware Solution only supports manual installation and onboarding of Zerto. However, Microsoft will work with you to ensure that you can manually install Zerto on your private cloud.
+To request IA support for Zerto on Azure VMware Solution, submit this [Install Zerto on AVS form](https://aka.ms/ZertoAVSinstall) with the required information. In the IA phase, Azure VMware Solution only supports manual installation and onboarding of Zerto. However, Microsoft will work with you to ensure that you can manually install Zerto on your private cloud.
> [!NOTE] > As part of the manual installation, Microsoft creates a new vCenter user account for Zerto. This user account is only for Zerto Virtual Manager (ZVM) to perform operations on the Azure VMware Solution vCenter. When installing ZVM on Azure VMware Solution, don't select the "Select to enforce roles and permissions using Zerto vCenter privileges" option.
azure-web-pubsub Reference Odata Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-odata-filter.md
Last updated 11/11/2022
# OData filter syntax in Azure Web PubSub service
-In Azure Web PubSub service, the **filter** parameter specifies inclusion or exclusion criteria for the connections to send messages to. This article describes the OData syntax of **filter** and provides examples.
+Azure Web PubSub's **filter** parameter defines inclusion or exclusion criteria for sending messages to connections. This parameter is used in the [Send to all](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-all), [Send to group](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-group), and [Send to user](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-user) operations.
-The complete syntax is described in the [formal grammar](#formal-grammar).
+This article provides the following resources:
-There is also a browsable [syntax diagram](https://aka.ms/awps/filter-syntax-diagram) that allows you to interactively explore the grammar and the relationships between its rules.
+- A description of the OData syntax of the **filter** parameter with examples.
+- A description of the complete [Extended Backus-Naur Form](#formal-grammar) grammar.
+- A browsable [syntax diagram](https://aka.ms/awps/filter-syntax-diagram) to interactively explore the syntax grammar rules.
## Syntax
-A filter in the OData language is a Boolean expression, which in turn can be one of several types of expression, as shown by the following EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)):
+A filter in the OData language is a boolean expression, which in turn can be one of several types of expression, as shown by the following EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) description:
``` /* Identifiers */
boolean_expression ::= logical_expression
| '(' boolean_expression ')' ```
-An interactive syntax diagram is also available:
+An interactive syntax diagram is available at [OData syntax diagram for Azure Web PubSub service](https://aka.ms/awps/filter-syntax-diagram).
-> [!div class="nextstepaction"]
-> [OData syntax diagram for Azure Web PubSub service](https://aka.ms/awps/filter-syntax-diagram)
-
-> [!NOTE]
-> See [formal grammar section](#formal-grammar) for the complete EBNF.
+For the complete EBNF, see the [formal grammar section](#formal-grammar).
### Identifiers
-The filter syntax is used to filter out the connections matching the filter expression to send messages to.
-
-Azure Web PubSub supports below identifiers:
+Using the filter syntax, you can control sending messages to the connections matching the identifier criteria. Azure Web PubSub supports the following identifiers:
-| Identifier | Description | Note | Examples
-| | | -- | --
+| Identifier | Description | Note | Examples |
+| | |--| --
| `userId` | The userId of the connection. | Case insensitive. It can be used in [string operations](#supported-operations). | `userId eq 'user1'` | `connectionId` | The connectionId of the connection. | Case insensitive. It can be used in [string operations](#supported-operations). | `connectionId ne '123'` | `groups` | The collection of groups the connection is currently in. | Case insensitive. It can be used in [collection operations](#supported-operations). | `'group1' in groups`
-Identifiers are used to refer to the property value of a connection. Azure Web PubSub supports 3 identifiers matching the property name of the connection model. and supports identifiers `userId` and `connectionId` in string operations, supports identifier `groups` in [collection operations](#supported-operations). For example, to filter out connections with userId `user1`, we specify the filter as `userId eq 'user1'`. Read through the below sections for more samples using the filter.
+Identifiers refer to the property values of a connection. Azure Web PubSub supports three identifiers that match the property names of the connection model. The identifiers `userId` and `connectionId` can be used in string operations, and the identifier `groups` can be used in [collection operations](#supported-operations). For example, to send messages only to connections with userId `user1`, specify the filter `userId eq 'user1'`. Read through the sections below for more samples using the filter.
### Boolean expressions
-The expression for a filter is a boolean expression. When sending messages to connections, Azure Web PubSub sends messages to connections with filter expression evaluated to `true`.
+The expression for a filter is a boolean expression. Azure Web PubSub sends messages to connections with filter expressions evaluated to `true`.
-The types of Boolean expressions include:
+The types of boolean expressions include:
-- Logical expressions that combine other Boolean expressions using the operators `and`, `or`, and `not`.
+- Logical expressions that combine other boolean expressions using the operators `and`, `or`, and `not`.
- Comparison expressions, which compare fields or range variables to constant values using the operators `eq`, `ne`, `gt`, `lt`, `ge`, and `le`.-- The Boolean literals `true` and `false`. These constants can be useful sometimes when programmatically generating filters, but otherwise don't tend to be used in practice.-- Boolean expressions in parentheses. Using parentheses can help to explicitly determine the order of operations in a filter. For more information on the default precedence of the OData operators, see [operator precedence section](#operator-precedence).
+- The boolean literals `true` and `false`. These constants can be useful sometimes when programmatically generating filters, but otherwise don't tend to be used in practice.
+- Boolean expressions in parentheses. Using parentheses helps to explicitly determine the order of operations in a filter. For more information on the default precedence of the OData operators, see [operator precedence section](#operator-precedence).
### Supported operations+
+The filter syntax supports the following operations:
+ | Operator | Description | Example | | | | **Logical Operators**
The types of Boolean expressions include:
| `string substring(string p, int startIndex)`,</br>`string substring(string p, int startIndex, int length)` | Substring of the string | `substring(userId,5,2) eq 'ab'` can match connections for user `user-ab-de` | `bool endswith(string p0, string p1)` | Check if `p0` ends with `p1` | `endswith(userId,'de')` can match connections for user `user-ab-de` | `bool startswith(string p0, string p1)` | Check if `p0` starts with `p1` | `startswith(userId,'user')` can match connections for user `user-ab-de`
-| `int indexof(string p0, string p1)` | Get the index of `p1` in `p0`. Returns `-1` if `p0` does not contain `p1`. | `indexof(userId,'-ab-') ge 0` can match connections for user `user-ab-de`
+| `int indexof(string p0, string p1)` | Get the index of `p1` in `p0`. Returns `-1` if `p0` doesn't contain `p1`. | `indexof(userId,'-ab-') ge 0` can match connections for user `user-ab-de`
| `int length(string p)` | Get the length of the input string | `length(userId) gt 1` can match connections for user `user-ab-de` | **Collection Functions**
-| `int length(collection p)` | Get the length of the collection | `length(groups) gt 1` can match connections in 2 groups
+| `int length(collection p)` | Get the length of the collection | `length(groups) gt 1` can match connections in two groups
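As a quick, hedged illustration of combining these operations, the following C# sketch (the user and group values are assumptions for this example) composes a filter that targets the connections of one user that are also in a particular group:

```cs
using System;

// Minimal sketch: combine a string comparison with a collection operation.
var userId = "user1";
var group = "group1";
var filter = $"userId eq '{userId}' and '{group}' in groups";

Console.WriteLine(filter);
// Output: userId eq 'user1' and 'group1' in groups
// Pass this string as the filter parameter of the Send to all/group/user operations.
```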
### Operator precedence
-If you write a filter expression with no parentheses around its sub-expressions, Azure Web PubSub service will evaluate it according to a set of operator precedence rules. These rules are based on which operators are used to combine sub-expressions. The following table lists groups of operators in order from highest to lowest precedence:
+If you write a filter expression with no parentheses around its subexpressions, Azure Web PubSub service will evaluate it according to a set of operator precedence rules. These rules are based on which operators are used to combine subexpressions. The following table lists groups of operators in order from highest to lowest precedence:
| Group | Operator(s) | | | |
length(userId) gt 0 and length(userId) lt 3 or length(userId) gt 7 and length(us
((length(userId) gt 0) and (length(userId) lt 3)) or ((length(userId) gt 7) and (length(userId) lt 10)) ```
-The `not` operator has the highest precedence of all -- even higher than the comparison operators. That's why if you try to write a filter like this:
+The `not` operator has the highest precedence of all, even higher than the comparison operators. If you write a filter like this:
```odata-filter-expr not length(userId) gt 5
not (length(userId) gt 5)
### Filter size limitations
-There are limits to the size and complexity of filter expressions that you can send to Azure Web PubSub service. The limits are based roughly on the number of clauses in your filter expression. A good guideline is that if you have over 100 clauses, you are at risk of exceeding the limit. We recommend designing your application in such a way that it doesn't generate filters of unbounded size.
+There are limits to the size and complexity of filter expressions that you can send to Azure Web PubSub service. The limits are based roughly on the number of clauses in your filter expression. A good guideline is that if you have over 100 clauses, you are at risk of exceeding the limit. To avoid exceeding the limit, design your application so that it doesn't generate filters of unbounded size.
## Examples
There are limits to the size and complexity of filter expressions that you can s
## Formal grammar
-We can describe the subset of the OData language supported by Azure Web PubSub service using an EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) grammar. Rules are listed "top-down", starting with the most complex expressions, and breaking them down into more primitive expressions. At the top is the grammar rule for `$filter` that correspond to specific parameter `filter` of the Azure Azure Web PubSub service `Send*` REST APIs:
+We can describe the subset of the OData language supported by Azure Web PubSub service using an EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) grammar. Rules are listed "top-down", starting with the most complex expressions, then breaking them down into more primitive expressions. At the top is the grammar rule for `$filter` that corresponds to the `filter` parameter of the Azure Web PubSub service `Send*` REST APIs:
```
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Azure Cognitive Services containers provide the following set of Docker containe
| [Language service][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and reports a single language code for every document submitted on the request. The language code is paired with a confidence score. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Language service][ta-containers-health] | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. | Generally available |
-| [Translator][tr-containers] | **Translator** | Translate text in several languages and dialects. | Gated preview - [request access](https://aka.ms/csgate-translator). <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Translator][tr-containers] | **Translator** | Translate text in several languages and dialects. | Generally available. Gated - [request access](https://aka.ms/csgate-translator). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
### Speech containers
communication-services Custom Teams Endpoint Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-authentication-overview.md
Before we begin:
- The Azure Communication Services resource admin needs to grant Alice permission to perform her role. Learn more about [Azure RBAC role assignment](../../../role-based-access-control/role-assignments-portal.md). Steps:
-1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get an access token for Alice: The application for Teams users performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling.
+1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A1' and an Object ID of an Azure AD user. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Get an access token for Alice: The application for Teams users performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are passed along with artifact 'A1' to validate that the Azure AD token was issued to the expected user and application, which prevents attackers from using Azure AD access tokens issued to other applications or users. For more information on how to get the 'A' artifacts, see [Receive the Azure AD user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id). A minimal code sketch of steps 1 and 2 follows the artifact list below.
1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about [developing custom Teams clients](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). Artifacts: - Artifact A1 - Type: Azure AD access token - Audience: _`Azure Communication Services`_ – control plane
- - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+ - Source: Fabrikam's Azure AD tenant
- Permissions: _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_, _`https://auth.msft.communication.azure.com/Teams.ManageChats`_ - Artifact A2 - Type: Object ID of an Azure AD user
- - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+ - Source: Fabrikam's Azure AD tenant
- Artifact A3 - Type: Azure AD application ID
- - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+ - Source: Fabrikam's Azure AD tenant
- Artifact D - Type: Azure Communication Services access token - Audience: _`Azure Communication Services`_ – data plane
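The following minimal C# sketch illustrates steps 1 and 2 for Fabrikam's application. It assumes the MSAL.NET and Azure.Communication.Identity libraries; the client ID, tenant ID, redirect URI, and connection string values are placeholders, not values from this article:

```cs
using System;
using System.Threading.Tasks;
using Azure.Communication.Identity;
using Microsoft.Identity.Client;

class TokenExchangeSketch
{
    static async Task Main()
    {
        var clientId = "<fabrikam-application-client-id>"; // artifact A3 (placeholder)
        var tenantId = "<fabrikam-tenant-id>";             // placeholder

        // Step 1: authenticate Alice with MSAL to get the Azure AD access token (A1)
        // and her Azure AD object ID (A2).
        var app = PublicClientApplicationBuilder.Create(clientId)
            .WithTenantId(tenantId)
            .WithRedirectUri("http://localhost")
            .Build();
        var scopes = new[]
        {
            "https://auth.msft.communication.azure.com/Teams.ManageCalls",
            "https://auth.msft.communication.azure.com/Teams.ManageChats"
        };
        AuthenticationResult aadResult = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
        string teamsUserAadToken = aadResult.AccessToken; // A1
        string userObjectId = aadResult.UniqueId;         // A2

        // Step 2: exchange A1 + A2 + A3 for an Azure Communication Services access token (D).
        var identityClient = new CommunicationIdentityClient("<acs-connection-string>"); // placeholder
        var options = new GetTokenForTeamsUserOptions(teamsUserAadToken, clientId, userObjectId);
        var acsToken = await identityClient.GetTokenForTeamsUserAsync(options);
        Console.WriteLine($"Azure Communication Services access token expires on {acsToken.Value.ExpiresOn}.");
    }
}
```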
Before we begin:
- Alice or her Azure AD administrator needs to give Contoso's Azure Active Directory application consent before the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md). Steps:
-1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get an access token for Alice: The Contoso application performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling.
+1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Get an access token for Alice: The Contoso application uses a custom authorization artifact with value 'B' to decide whether Alice has permission to exchange the Azure AD access token for an Azure Communication Services access token. After successful authorization, the Contoso application performs control plane logic, using artifacts 'A1', 'A2', and 'A3'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are passed along with artifact 'A1' to validate that the Azure AD token was issued to the expected user and application, which prevents attackers from using Azure AD access tokens issued to other applications or users. For more information on how to get the 'A' artifacts, see [Receive the Azure AD user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id).
1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's application. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about developing custom, Teams apps [in this quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
Artifacts:
- Artifact A1 - Type: Azure AD access token - Audience: Azure Communication Services – control plane
- - Azure AD application ID: Contoso's _`Azure AD application ID`_
+ - Source: Contoso application registration's Azure AD tenant
- Permission: _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_, _`https://auth.msft.communication.azure.com/Teams.ManageChats`_ - Artifact A2 - Type: Object ID of an Azure AD user
- - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+ - Source: Fabrikam's Azure AD tenant
- Artifact A3 - Type: Azure AD application ID
- - Azure AD application ID: Contoso's _`Azure AD application ID`_
+ - Source: Contoso application registration's Azure AD tenant
- Artifact B
- - Type: Custom Contoso authentication artifact
+ - Type: Custom Contoso authorization artifact (issued either by Azure AD or a different authorization service)
- Artifact C - Type: Hash-based Message Authentication Code (HMAC) (based on Contoso's _`connection string`_) - Artifact D
Artifacts:
## Next steps
-The following articles may be of interest to you:
- - Learn more about [authentication](../authentication.md). - Try this [quickstart to authenticate Teams users](../../quickstarts/manage-teams-identity.md). - Try this [quickstart to call a Teams user](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).+
+The following sample apps may be interesting to you:
+
+- Try the [Sample App](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/manage-teams-identity-mobile-and-desktop), which showcases a process of acquiring Azure Communication Services access tokens for Teams users in mobile and desktop applications.
+
+- To see how the Azure Communication Services access tokens for Teams users are acquired in a single-page application, check out a [SPA sample app](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/manage-teams-identity-spa).
+
+- To learn more about a server implementation of an authentication service for Azure Communication Services, check out the [Authentication service hero sample](../../samples/trusted-auth-sample.md).
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
The certificate must have the SBC FQDN as the common name (CN) or the subject al
Alternatively, Communication Services direct routing supports a wildcard in the CN and/or SAN, and the wildcard must conform to standard [RFC HTTP Over TLS](https://tools.ietf.org/html/rfc2818#section-3.1). Customers who already use Office 365 and have a domain registered in Microsoft 365 Admin Center can use SBC FQDN from the same domain.
-Domains that aren't previously used in O365 must be provisioned.
An example would be using `\*.contoso.com`, which would match the SBC FQDN `sbc.contoso.com`, but wouldn't match `sbc.test.contoso.com`.
On the leg between the Cloud Media Processor and Communication Services Calling
- [Telephony Concept](./telephony-concept.md) - [Phone number types in Azure Communication Services](./plan-solution.md) - [Pair the Session Border Controller and configure voice routing](./direct-routing-provisioning.md)
+- [Call Automation overview](../call-automation/call-automation.md)
- [Pricing](../pricing.md) ### Quickstarts -- [Call to Phone](../../quickstarts/telephony/pstn-call.md)
+- [Get a phone number](../../quickstarts/telephony/get-phone-number.md)
+- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
For information about whether Azure Communication Services direct routing is the
If everything is set up correctly, you should see an exchange of OPTIONS messages between Microsoft and your Session Border Controller. Use your SBC monitoring/logs to validate the connection.
-## Voice routing considerations
+## Outbound voice routing considerations
Azure Communication Services direct routing has a routing mechanism that allows a call to be sent to a specific SBC based on the called number pattern.
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
> [!NOTE] > In all the examples, if the dialed number doesn't match the pattern, the call will be dropped unless a purchased number exists for the communication resource and that number was used as `alternateCallerId` in the application.
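As a hedged side illustration (not part of the provisioning steps), the following .NET snippet shows which dialed numbers the pattern above would match, which can help you sanity-check a pattern before saving the voice route:

```cs
using System;
using System.Text.RegularExpressions;

// Sketch: check which dialed numbers the voice route pattern matches.
var routePattern = new Regex(@"^\+1(425|206)(\d{7})$");

Console.WriteLine(routePattern.IsMatch("+14251234567")); // True  - handled by this voice route
Console.WriteLine(routePattern.IsMatch("+12061234567")); // True  - handled by this voice route
Console.WriteLine(routePattern.IsMatch("+18081234567")); // False - falls through to other routes or is dropped
```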
-## Configure voice routing
+## Configure outbound voice routing
### Configure using Azure portal
For more information about regular expressions, see [.NET regular expressions ov
You can select multiple SBCs for a single pattern. In such a case, the routing algorithm will choose them in random order. You may also specify the exact number pattern more than once. The higher row will have higher priority, and if all SBCs associated with that row aren't available, the next row will be selected. This way, you create complex routing scenarios.
+## Managing inbound calls
+
+For general inbound call management, use the [Call Automation SDKs](../call-automation/incoming-call-notification.md) to build an application that listens for and manages inbound calls placed to a phone number or received via ACS direct routing.
+Omnichannel for Customer Service customers should refer to [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
+ ## Delete direct routing configuration ### Delete using Azure portal
You can select multiple SBCs for a single pattern. In such a case, the routing a
### Conceptual documentation - [Session Border Controllers certified for Azure Communication Services direct routing](./certified-session-border-controllers.md)
+- [Call Automation overview](../call-automation/call-automation.md)
- [Pricing](../pricing.md) ### Quickstarts -- [Call to Phone](../../quickstarts/telephony/pstn-call.md)
+- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
The Emergency service is temporarily free to use for Azure Communication Service
## Emergency calling with Azure Communication Services direct routing An emergency call is a regular call from a direct routing perspective. If you want to implement emergency calling with Azure Communication Services direct routing, you need to make sure that there's a routing rule for your emergency number (911, 112, etc.). You also need to make sure that your carrier processes emergency calls properly.
-There is also an option to use purchased number as a caller ID for direct routing calls, in such case if there is no voice routing rule for emergency number, the call will fall back to Microsoft network, and we will treat it as a regular emergency call. Learn more about [voice routing fall back](./direct-routing-provisioning.md#voice-routing-considerations).
+There is also an option to use a purchased number as a caller ID for direct routing calls. In that case, if there's no voice routing rule for the emergency number, the call falls back to the Microsoft network and is treated as a regular emergency call. Learn more about [voice routing fallback](./direct-routing-provisioning.md#outbound-voice-routing-considerations).
## Next steps
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You can use phone numbers [provided by Microsoft](./telephony-concept.md#voice-calling-pstn) and phone numbers supplied by [direct routing](./telephony-concept.md#azure-direct-routing).
-**Inbound calling with Dynamics 365 Omnichannel (OC)**
+**Inbound calling with Omnichannel for Customer Service**
-Supported in General Availability, to set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN) follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
+Inbound calling is supported in General Availability. To set up inbound calling in Omnichannel for Customer Service with direct routing or Voice Calling (PSTN), follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
**Inbound calling with ACS Call Automation SDK**
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/telephony-concept.md
This option requires:
### Quickstarts -- [Get a phone Number](../../quickstarts/telephony/get-phone-number.md)-- [Call to Phone](../../quickstarts/telephony/pstn-call.md)
+- [Get a phone number](../../quickstarts/telephony/get-phone-number.md)
+- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Redirect Inbound Telephony Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/redirect-inbound-telephony-calls.md
zone_pivot_groups: acs-csharp-java
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via Direct Routing.
+Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via ACS direct routing.
::: zone pivot="programming-language-csharp" [!INCLUDE [Redirect inbound call with .NET](./includes/redirect-inbound-telephony-calls-csharp.md)]
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
You can get started building your first container app [using the quickstarts](ge
[Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but is optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of their functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container-based compute platforms and allowing teams to reuse code as environment requirements change. ### Azure Spring Apps
-[Azure Spring Apps](../spring-apps/overview.md) is a platform as a service (PaaS) for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+[Azure Spring Apps](../spring-apps/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
### Azure Red Hat OpenShift [Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
container-apps Custom Domains Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/custom-domains-certificates.md
Azure Container Apps allows you to bind one or more custom domains to a containe
- [SNI domain certificates](https://wikipedia.org/wiki/Server_Name_Indication) are required. - Ingress must be enabled for the container app
+> [!NOTE]
+> To configure a custom DNS suffix for all container apps in an environment, see [Custom environment DNS suffix in Azure Container Apps](environment-custom-dns-suffix.md).
+ ## Add a custom domain and certificate > [!IMPORTANT]
container-apps Environment Custom Dns Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment-custom-dns-suffix.md
+
+ Title: Custom environment DNS suffix in Azure Container Apps
+description: Learn to manage custom DNS suffix and TLS certificate in Azure Container Apps environments
++++ Last updated : 10/13/2022+++
+# Custom environment DNS Suffix in Azure Container Apps
+
+By default, an Azure Container Apps environment provides a DNS suffix in the format `<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`. Each container app in the environment generates a domain name based on this DNS suffix. You can configure a custom DNS suffix for your environment.
+
+> [!NOTE]
+> To configure a custom domain for individual container apps, see [Custom domain names and certificates in Azure Container Apps](custom-domains-certificates.md).
+
+## Add a custom DNS suffix and certificate
+
+1. Go to your Container Apps environment in the [Azure portal](https://portal.azure.com)
+
+1. Under the *Settings* section, select **Custom DNS suffix**.
+
+1. In **DNS suffix**, enter the custom DNS suffix for the environment.
+
+ For example, if you enter `example.com`, the container app domain names will be in the format `<APP_NAME>.example.com`.
+
+1. In a new browser window, go to your domain provider's website and add the DNS records shown in the *Domain validation* section to your domain.
+
+ | Record type | Host | Value | Description |
+ | -- | -- | -- | -- |
+ | A | `*.<DNS_SUFFIX>` | Environment inbound IP address | Wildcard record configured to the IP address of the environment. |
+ | TXT | `asuid.<DNS_SUFFIX>` | Validation token | TXT record with the value of the validation token (not required for Container Apps environment with internal load balancer). |
+
+1. Back in the *Custom DNS suffix* window, in **Certificate file**, browse and select a certificate for the TLS binding.
+
+ > [!IMPORTANT]
+ > You must use an existing wildcard certificate that's valid for the custom DNS suffix you provided.
+
+1. In **Certificate password**, enter the password for the certificate.
+
+1. Select **Save**.
+
+Once the save operation is complete, the environment is updated with the custom DNS suffix and TLS certificate.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Custom domains in Azure Container Apps](custom-domains-certificates.md)
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
filteredItemsAsPages.map(page -> {
## Tune the buffer size
-Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps in overall latency improvement of a query. [setMaxBufferedItemCount](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxbuffereditemcount) in `CosmosQueryRequestOptions` limits the number of pre-fetched results. Setting setMaxBufferedItemCount to the expected number of results returned (or a higher number) enables the query to receive maximum benefit from pre-fetching (NOTE: This can also result in high memory consumption). If you set this value to 0, the system will automatically determine the number of items to buffer.
+Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. Pre-fetching helps improve the overall latency of a query. [setMaxBufferedItemCount](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxbuffereditemcount) in `CosmosQueryRequestOptions` limits the number of pre-fetched results. To maximize pre-fetching, set `maxBufferedItemCount` to a higher number than the `pageSize` (note: this can also result in high memory consumption). To minimize pre-fetching, set `maxBufferedItemCount` equal to the `pageSize`. If you set this value to 0, the system will automatically determine the number of items to buffer.
```java
CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
```
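As a minimal sketch of how these options might be wired together with the Azure Cosmos DB Java SDK v4 async API: the `container` object (an existing `CosmosAsyncContainer`), the `Family` item type, the query text, and the buffer values below are illustrative placeholders, not values from this article.

```java
// Assumes an existing CosmosAsyncContainer named 'container' and a 'Family' POJO.
int pageSize = 100;

CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
// Buffer several pages ahead to get the most benefit from pre-fetching;
// set this equal to pageSize to minimize buffering, or 0 to let the SDK decide.
options.setMaxBufferedItemCount(4 * pageSize);

container.queryItems("SELECT * FROM Families f", options, Family.class)
    .byPage(pageSize)
    .subscribe(page -> System.out.println("Fetched " + page.getResults().size() + " items"));
```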
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 11/07/2022 Last updated : 11/22/2022
Data export is available for various Azure account types, including [Enterprise
- Owner - Can create, modify, or delete scheduled exports for a subscription. - Contributor - Can create, modify, or delete their own scheduled exports. Can modify the name of scheduled exports created by others. - Reader - Can schedule exports that they have permission to.-
-**For more information about scopes, including access needed to configure exports for Enterprise Agreement and Microsoft Customer agreement scopes, see [Understand and work with scopes](understand-work-scopes.md)**.
+ - **For more information about scopes, including access needed to configure exports for Enterprise Agreement and Microsoft Customer Agreement scopes, see [Understand and work with scopes](understand-work-scopes.md)**.
For Azure Storage accounts: - Write permissions are required to change the configured storage account, independent of permissions on the export. - Your Azure storage account must be configured for blob or file storage. - The storage account must not have a firewall configured.
+- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
+ :::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing the From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
To create a mapping data flow using the SAP CDC connector as a source, complete
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-select-dataset.png" alt-text="Screenshot of the select dataset option in source settings of mapping data flow source.":::
-1. On the tab **Source options** select the option **Full on every run** if you want to load full snapshots on every execution of your mapping data flow, or **Full on the first run, then incremental** if you want to subscribe to a change feed from the SAP source system. In this case, the first run of your pipeline will do a delta initialization, which means it will return a current full data snapshot and create an ODP delta subscription in the source system so that with subsequent runs, the SAP source system will return incremental changes since the previous run only. In case of incremental loads it is required to specify the keys of the ODP source object in the **Key columns** property.
+1. On the **Source options** tab, select **Full on every run** if you want to load full snapshots on every execution of your mapping data flow, or **Full on the first run, then incremental** if you want to subscribe to a change feed from the SAP source system. In the latter case, the first run of your pipeline performs a delta initialization: it returns a current full data snapshot and creates an ODP delta subscription in the source system, so that subsequent runs return only the incremental changes since the previous run. You can also select **incremental changes only** if you want the first run of your pipeline to create an ODP delta subscription in the SAP source system without returning any data, so that subsequent runs return only the incremental changes since the previous run. For incremental loads, you must specify the keys of the ODP source object in the **Key columns** property.
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-run-mode.png" alt-text="Screenshot of the run mode property in source options of mapping data flow source.":::
To create a mapping data flow using the SAP CDC connector as a source, complete
1. For the tabs **Projection**, **Optimize** and **Inspect**, please follow [mapping data flow](concepts-data-flow-overview.md).
-1. If **Run mode** is set to **Full on every run**, the tab **Optimize** offers additional selection and partitioning options. Each partition condition (the screenshot below shows an example with two conditions) will trigger a separate extraction process in the connected SAP system. Up to three of these extraction process are executed in parallel.
+1. If **Run mode** is set to **Full on every run** or **Full on the first run, then incremental**, the **Optimize** tab offers additional selection and partitioning options. Each partition condition (the screenshot below shows an example with two conditions) triggers a separate extraction process in the connected SAP system. Up to three of these extraction processes are executed in parallel.
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-optimize-partition.png" alt-text="Screenshot of the partitioning options in optimize of mapping data flow source.":::
databox-online Azure Stack Edge Gpu Deploy Virtual Machine High Performance Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md
Previously updated : 11/18/2022 Last updated : 11/23/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
To maximize performance, processing, and transmitting on the same NUMA node, pro
To deploy HPN VMs on Azure Stack Edge, you must reserve vCPUs on NUMA nodes. The number of vCPUs reserved determines the available vCPUs that can be assigned to the HPN VMs.
-For the number of cores that each HPN VM size uses, see theΓÇ»[Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+For the number of cores that each HPN VM size uses, see [Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
-In version 2210, vCPUs are automatically reserved with the maximum number of vCPUs supported on each NUMA node. If the vCPUs were already reserved for HPN VMs in an earlier version, the existing reservation is carried forth to the 2210 version. If vCPUs weren't reserved for HPN VMs in an earlier version, upgrading to 2210 will still carry forth the existing configuration.
+In version 2210, vCPUs are automatically reserved with the maximum number of vCPUs supported on each NUMA node. If vCPUs were already reserved for HPN VMs in an earlier version, the existing reservation is carried forward to version 2210. If vCPUs weren't reserved for HPN VMs in an earlier version, upgrading to 2210 still carries the existing configuration forward.
-For versions 2209 and earlier, you must reserve vCPUs on NUMA nodes before you deploy HPN VMs on your device. We recommend that the vCPU reservation is done on NUMA node 0, as this node has Mellanox high speed network interfaces, Port 5 and Port 6, attached to it.
+For versions 2209 and earlier, you must reserve vCPUs on NUMA nodes before you deploy HPN VMs on your device. We recommend NUMA node 0 for vCPU reservations because NUMA node 0 has Mellanox high speed network interfaces.
## HPN VM deployment workflow The high-level summary of the HPN deployment workflow is as follows:
-1. While configuring the network settings on your device, make sure that there's a virtual switch associated with a network interface on your device that can be used for the VM resources and VMs. We'll use the default virtual network created with the vswitch for this article. You have the option of creating and using a different virtual network, if desired.
+1. While configuring the network settings on your device, make sure that there's a virtual switch associated with a network interface on your device that can be used for VM resources and VMs. We'll use the default virtual network created with the vswitch for this article. You have the option of creating and using a different virtual network, if desired.
2. Enable cloud management of VMs from the Azure portal. Download a VHD onto your device, and create a VM image from the VHD.
The high-level summary of the HPN deployment workflow is as follows:
4. Use the resources created in the previous steps: 1. The VM image that you created.
- 2. The default virtual network associated with the virtual switch. The default virtual network has the same name as the name of the virtual switch.
+ 2. The default virtual network associated with the virtual switch. The default virtual network name is the same as the name of the virtual switch.
3. The default subnet for the default virtual network. 1. And create or specify the following resources:
- 1. Specify a VM name, choose a supported HPN VM size, and specify sign-in credentials for the VM.
+ 1. Specify a VM name and a supported HPN VM size, and specify sign-in credentials for the VM.
1. Create new data disks or attach existing data disks.
- 1. Configure static or dynamic IP for the VM. If you're providing a static IP, choose from a free IP in the subnet range of the default virtual network.
+ 1. Configure a static or dynamic IP for the VM. If you're providing a static IP, specify a free IP in the subnet range of the default virtual network.
1. Use the preceding resources to create an HPN VM.
Before you create and manage VMs on your device via the Azure portal, make sure
- The default vCPU reservation uses the SkuPolicy, which reserves all vCPUs that are available for HPN VMs.
- - If the vCPUs were already reserved for HPN VMs in an earlier version - for example, version 2009 or earlier, then the existing reservation is carried forth to the 2210 version.
+ - If the vCPUs were already reserved for HPN VMs in an earlier version (for example, version 2209 or earlier), the existing reservation is carried forward to version 2210.
- For most use cases, we recommend that you use the default configuration. If needed, you can also customize the NUMA configuration for HPN VMs. To customize the configuration, use the steps provided for 2209. - Use the following steps to get information about the SkuPolicy settings on your device: 1. [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
-
-
+ 1. Run the following command to see the available NUMA policies on your device: ```powershell
Before you create and manage VMs on your device via the Azure portal, make sure
This cmdlet will output: 1. HpnLpMapping: The NUMA logical processor indexes that are reserved on the machine. 1. HpnCapableLpMapping: The NUMA logical processor indexes that are capable for reservation.
- 1. HpnLpAvailable: The NUMA logical processor indexes that aren't available for new HPN VM deployments.
- 1. The NUMA logical processors used by HPN VMs and NUMA logical processors available for new HPN VM deployments on each NUMA node in the cluster.
+ 1. HpnLpAvailable: The NUMA logical processor indexes that are available for new HPN VM deployments.
```powershell Get-HcsNumaLpMapping
Before you create and manage VMs on your device via the Azure portal, make sure
```powershell Get-HcsNumaLpMapping ```-
- The output shouldn't show the indexes you set. If you see the indexes you set in the output, the `Set` command didn't complete successfully. Retry the command and if the problem persists, contact Microsoft Support.
-
- Here's an example output.
-
- ```powershell
- dbe-1csphq2.microsoftdatabox.com]: PS> Get-HcsNumaLpMapping -MapType MinRootAware -NodeName 1CSPHQ2
-
- { Numa Node #0 : CPUs [0, 1, 2, 3] }
-
- { Numa Node #1 : CPUs [20, 21, 22, 23] }
-
- [dbe-1csphq2.microsoftdatabox.com]:
-
- PS>
### [2209 and earlier](#tab/2209)
In addition to the above prerequisites that are used for VM creation, configure
Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName <Output of hostname command> ```
- Here's example output:
+ Here's an example output:
```powershell [dbe-1csphq2.microsoftdatabox.com]: PS>hostname 1CSPHQ2
- [dbe-1csphq2.microsoftdatabox.com]: P> Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName
[dbe-1csphq2.microsoftdatabox.com]: P> Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName 1CSPHQ2 { Numa Node #0 : CPUs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] } { Numa Node #1 : CPUs [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39] }
In addition to the above prerequisites that are used for VM creation, configure
[dbe-1csphq2.microsoftdatabox.com]: PS> ```
- 1. Reserve vCPUs for HPN VMs. The number of vCPUs reserved here determines the available vCPUs that could be assigned to the HPN VMs. For the number of cores that each HPN VM size uses, see theΓÇ»[Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes). On your device, Mellanox ports 5 and 6 are on NUMA node 0.
+ 1. Reserve vCPUs for HPN VMs. The number of vCPUs reserved here determines the available vCPUs that can be assigned to the HPN VMs. For the number of cores used by each HPN VM size, see [Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes). On your device, Mellanox ports 5 and 6 are on NUMA node 0. An illustrative sketch follows the note below.
```powershell Set-HcsNumaLpMapping -CpusForHighPerfVmsCommaSeperated <Logical indexes from the Get-HcsNumaLpMapping cmdlet> -AssignAllCpusToRoot $false
In addition to the above prerequisites that are used for VM creation, configure
``` > [!Note]
- > - You can choose to reserve all the logical indexes from both NUMA nodes shown in the example or a subset of the indexes. If you choose to reserve a subset of indexes, pick the indexes from the device node that has a Mellanox network interface attached to it, for best performance. For Azure Stack Edge Pro GPU, the NUMA node with Mellanox network interface is #0.
+ > - You can choose to reserve all the logical indexes from both NUMA nodes shown in the example, or a subset of the indexes. If you choose to reserve a subset of indexes, pick the indexes from the device node that has a Mellanox network interface attached to it, for best performance. For Azure Stack Edge Pro GPU, the NUMA node with Mellanox network interface is #0.
> - The list of logical indexes must contain a paired sequence of an odd number and an even number. For example, ((4,5)(6,7)(10,11)). Attempting to set a list of numbers such as `5,6,7` or pairs such as `4,6` will not work. > - Using two `Set-HcsNuma` commands consecutively to assign vCPUs will reset the configuration. Also, do not free the CPUs using the Set-HcsNuma cmdlet if you have deployed an HPN VM.
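    Here's an illustrative sketch of the reservation flow, using the node name `1CSPHQ2` from the example output above and the paired logical processor indexes `4,5,6,7` from NUMA node 0. Substitute the values returned on your own device; they're shown here only as placeholders.

    ```powershell
    # Illustrative values only: take the node name from the hostname command, and pick paired
    # logical processor indexes reported for NUMA node 0 on your device.
    Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName 1CSPHQ2

    # Reserve the pairs (4,5) and (6,7) for HPN VMs.
    Set-HcsNumaLpMapping -CpusForHighPerfVmsCommaSeperated 4,5,6,7 -AssignAllCpusToRoot $false
    ```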
In addition to the above prerequisites that are used for VM creation, configure
Get-HcsNumaLpMapping ```
- The output shouldn't show the indexes you set. If you see the indexes you set in the output, the `Set` command didn't complete successfully. Retry the command and if the problem persists, contact Microsoft Support.
+ The output shouldn't show the indexes you set. If the indexes you set appear in the output, the `Set` command didn't complete successfully. In this case, retry the command and, if the problem persists, contact Microsoft Support.
Here's an example output.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Aspect | Details | |--|--|
-| Registries and images | **Supported**<br> ΓÇó [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries requires access to Trusted Services) <br> ΓÇó Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> ΓÇó Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> ΓÇó "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> ΓÇó Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) <br> • Providing image tag information for [multi-architecture images](https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/) is currently unsupported |
| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35| | Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
Outbound proxy without authentication and outbound proxy with basic authenticati
| Aspect | Details | |--|--|
-| Registries and images | **Supported**<br> ΓÇó [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries requires access to Trusted Services) <br> ΓÇó Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> ΓÇó Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> ΓÇó "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> ΓÇó Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) <br> • Providing image tag information for [multi-architecture images](https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/) is currently unsupported |
| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35| | Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
Together with the new responsibilities, SOC teams deal with new challenges, incl
- **Siloed or inefficient communication and processes** between OT and SOC organizations. -- **Limited technology and tools**, including:
+- **Limited technology and tools**, such as a lack of visibility into OT networks, little automated security remediation for OT, the need to evaluate and link information across OT data sources, and costly integration with existing SOC solutions.
- - Lack of visibility and insight into OT networks.
+However, without OT telemetry, context and integration with existing SOC tools and workflows, OT security and operational threats may be handled incorrectly, or even go unnoticed.
- - Limited insight about events across enterprise IT/OT networks, including tools that don't allow SOC teams to evaluate and link information across data sources in IT/OT environments.
+## Integrate Defender for IoT and Microsoft Sentinel
- - Low level of automated security remediation for OT networks.
+Microsoft Sentinel is a scalable cloud service for security information and event management (SIEM) and security orchestration, automation, and response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
- - Costly and time-consuming effort needed to integrate OT security solutions into existing SOC solutions.
+In Microsoft Sentinel, the Defender for IoT data connector and solution bring out-of-the-box security content to SOC teams, helping them to view, analyze, and respond to OT security alerts, and to understand the generated incidents in the broader organizational threat context.
-Without OT telemetry, context and integration with existing SOC tools and workflows, OT security and operational threats may be handled incorrectly, or even go unnoticed.
+Install the Defender for IoT data connector alone to stream your OT network alerts to Microsoft Sentinel. Then, also install the **Microsoft Defender for IoT** solution for the extra value of IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS](https://collaborate.mitre.org/attackics/index.php/Overview).
-## Integrate Defender for IoT and Microsoft Sentinel
+### Integrated detection and response
-Microsoft Sentinel is a scalable cloud solution for security information event management (SIEM) security orchestration automated response (SOAR). SOC teams can use Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
+The following table shows how both the OT team, on the Defender for IoT side, and the SOC team, on the Microsoft Sentinel side, can detect and respond to threats fast across the entire attack timeline.
-The Defender for IoT and Microsoft Sentinel integration delivers out-of-the-box capabilities to SOC teams. This helps them to efficiently and effectively view, analyze, and respond to OT security alerts, and the incidents they generate in a broader organizational threat context.
+|Microsoft Sentinel |Step |Defender for IoT |
+||||
+| | **OT alert triggered** | High confidence OT alerts, powered by Defender for IoT's *Section 52* security research group, are triggered based on data ingested to Defender for IoT. |
+|Analytics rules automatically open incidents *only* for relevant use cases, avoiding OT alert fatigue | **OT incident created** | |
+|SOC teams map business impact, including data about the site, line, compromised assets, and OT owners | **OT incident business impact mapping** | |
+|SOC teams move the incident to *Active* and start investigating, using network connections and events, workbooks, and the OT device entity page | **OT incident investigation** | Alerts are moved to *Active*, and OT teams investigate using PCAP data, detailed reports, and other device details |
+|SOC teams respond with OT playbooks and notebooks | **OT incident response** | OT teams either suppress the alert or learn it for next time, as needed |
+|After the threat is mitigated, SOC teams close the incident | **OT incident closure** | After the threat is mitigated, OT teams close the alert |
-Bring Defender for IoT's rich telemetry into Microsoft Sentinel to bridge the gap between OT and SOC teams with the Microsoft Sentinel data connector for Defender for IoT and the **Microsoft Defender for IoT** solution.
+## Microsoft Sentinel incidents for Defender for IoT
-The **Microsoft Defender for IoT** solution installs out-of-the-box security content to your Microsoft Sentinel, including analytics rules to automatically open incidents, workbooks to visualize and monitor data, and playbooks to automate response actions.
+After you've configured the Defender for IoT data connector and have IoT/OT alert data streaming to Microsoft Sentinel, use one of the following methods to create incidents based on those alerts:
-Once Defender for IoT data is ingested into Microsoft Sentinel, security experts can work with IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS](https://collaborate.mitre.org/attackics/index.php/Overview).
+|Method |Description |
+|||
+|**Use the default data connector rule** | Use the default, **Create incidents based on all alerts generated in Microsoft Defender for IOT** analytics rule provided with the data connector. This rule creates a separate incident in Microsoft Sentinel for each alert streamed from Defender for IoT. |
+|**Use out-of-the-box solution rules** | Enable some or all of the [out-of-the-box analytics rules](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) provided with the **Microsoft Defender for IoT** solution.<br><br> These analytics rules help to reduce alert fatigue by creating incidents only in specific situations. For example, you might choose to create incidents for excessive login attempts, but not for multiple scans detected in the network. |
+|**Create custom rules** | Create custom analytics rules to create incidents based only on your specific needs. You can use the out-of-the-box analytics rules as a starting point, or create rules from scratch. <br><br>Add the following filter to prevent duplicate incidents for the same alert ID: `| where TimeGenerated <= ProcessingEndTime + 60m` <br><br>A sketch of such a rule query appears after this table. |
-### Workbooks
+Regardless of the method you choose to create incidents, only one incident should be created for each Defender for IoT alert ID.
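+
+For example, following the **Create custom rules** method above, a scheduled analytics rule might use a query like the following sketch. The severity filter is an illustrative choice, not a requirement; the final line is the deduplication filter described in the table.
+
+```kql
+SecurityAlert
+| where ProviderName == 'IoTSecurity' or ProviderName == 'CustomAlertRule'
+| where AlertSeverity == 'High'                    // illustrative: open incidents only for high-severity OT alerts
+| where TimeGenerated <= ProcessingEndTime + 60m   // avoid duplicate incidents for the same alert ID
+```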
+
+## Microsoft Sentinel workbooks for Defender for IoT
To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the **Microsoft Defender for IoT** solution. Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
-For example, workbooks can display alerts by any of the following dimensions:
--- Type, such as policy violation, protocol violation, malware, and so on-- Severity-- OT device type, such as PLC, HMI, engineering workstation, and so on-- OT equipment vendor-- Alerts over time-
-Workbooks also show the result of mapping alerts to MITRE ATT&CK for ICS tactics, plus the distribution of tactics by count and time period. For example:
+Workbooks can display alerts by type, severity, OT device type or vendor, or alerts over time. Workbooks also show the result of mapping alerts to MITRE ATT&CK for ICS tactics, plus the distribution of tactics by count and time period. For example:
:::image type="content" source="media/concept-sentinel-integration/mitre-attack.png" alt-text="Image of MITRE ATT&CK graph":::
-### SOAR playbooks
+## SOAR playbooks for Defender for IoT
Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response. It can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
For example, use SOAR playbooks to:
- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The mail may be sent to OT personnel, such as a control engineer responsible on the related production line.
-## Integrated incident timeline
-The following table shows how both the OT team, on the Defender for IoT side, and the SOC team, on the Microsoft Sentinel side, can detect and respond to threats fast across the entire attack timeline.
-|Microsoft Sentinel |Step |Defender for IoT |
-||||
-| | **OT alert triggered** | High confidence OT alerts, powered by Defender for IoT's *Section 52* security research group, are triggered based on data ingested to Defender for IoT. |
-|Analytics rules automatically open incidents *only* for relevant use cases, avoiding OT alert fatigue | **OT incident created** | |
-|SOC teams map business impact, including data about the site, line, compromised assets, and OT owners | **OT incident business impact mapping** | |
-|SOC teams move the incident to *Active* and start investigating, using network connections and events, workbooks, and the OT device entity page | **OT incident investigation** | Alerts are moved to *Active*, and OT teams investigate using PCAP data, detailed reports, and other device details |
-|SOC teams respond with OT playbooks and notebooks | **OT incident response** | OT teams either suppress the alert or learn it for next time, as needed |
-|After the threat is mitigated, SOC teams close the incident | **OT incident closure** | After the threat is mitigated, OT teams close the alert |
+## Comparing Defender for IoT events, alerts, and incidents
+
+This section clarifies the differences between Defender for IoT events, alerts, and incidents in Microsoft Sentinel. Use the listed queries to view a full list of the current events, alerts, and incidents for your OT networks.
+
+You'll typically see more Defender for IoT *events* in Microsoft Sentinel than *alerts*, and more Defender for IoT *alerts* than *incidents*.
++
+- **Events**: Each alert log that streams to Microsoft Sentinel from Defender for IoT is an *event*. If the alert log reflects a new or updated alert in Defender for IoT, a new record is added to the **SecurityAlert** table.
+
+ To view all Defender for IoT events in Microsoft Sentinel, run the following query on the **SecurityAlert** table:
+
+ ```kql
+ SecurityAlert
+ | where ProviderName == 'IoTSecurity' or ProviderName == 'CustomAlertRule'
+ ```
+
+- **Alerts**: Microsoft Sentinel creates alerts based on your current analytics rules and the alert logs listed in the **SecurityAlert** table. If you don't have any active analytics rules for Defender for IoT, Microsoft Sentinel considers each alert log as an *event*.
+
+ To view alerts in Microsoft Sentinel, run the following query on the **SecurityAlert** table:
+
+ ```kql
+ SecurityAlert
+ | where ProviderName == 'ASI Scheduled Alerts' or ProviderName == 'CustomAlertRule'
+ ```
+
+- **Incidents**: Microsoft Sentinel creates incidents based on your analytics rules. You might have several alerts grouped in the same incident, or you may have analytics rules configured to *not* create incidents for specific alert types.
+
+ To view incidents in Microsoft Sentinel, run the following query:
+ ```kql
+ SecurityIncident
+ ```
## Next steps
defender-for-iot Faqs Ot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md
For example:
## How can I change a user's passwords
-Learn how to [Change a user's password](how-to-create-and-manage-users.md#change-a-users-password) for either the sensor or the on-premises management console.
+You can change user passwords or recover access to privileged users on both the OT network sensor and the on-premises management console. For more information, see:
-You can also [Recover the password for the on-premises management console, or the sensor](how-to-create-and-manage-users.md#recover-the-password-for-the-on-premises-management-console-or-the-sensor).
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
## How do I activate the sensor and on-premises management console
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Before you start, make sure that you have:
- An Azure account. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/). -- Access to an Azure subscription with the subscription **Owner** or **Contributor** role.
+- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md).
If you're using a Defender for IoT sensor version earlier than 22.1.x, you must also have an Azure IoT Hub (Free or Standard tier) **Contributor** role, for cloud-connected management. Make sure that the **Microsoft Defender for IoT** feature is enabled.
-### Permissions
-
-Defender for IoT users require the following permissions:
-
-| Permission | Security reader | Security admin | Subscription contributor | Subscription owner |
-|--|--|--|--|--|
-| Onboard subscriptions and update committed devices | | Γ£ô | Γ£ô | Γ£ô |
-| Onboard sensors | | Γ£ô | Γ£ô | Γ£ô |
-| View details and access software, activation files and threat intelligence packages | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
-| Recover passwords | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
-
-For more information, see [Azure roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
- ### Supported service regions Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter.
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
You can access console tools from the side menu. Tools help you:
||| | System settings | Configure the system settings. For example, define DHCP settings, provide mail server details, or create port aliases. | | Custom alert rules | Use custom alert rules to more specifically pinpoint activity or traffic of interest to you. For more information, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules). |
-| Users | Define users and roles with various access levels. For more information, see [About Defender for IoT console users](how-to-create-and-manage-users.md#about-defender-for-iot-console-users). |
+| Users | Define users and roles with various access levels. For more information, see [Create and manage users on an OT network sensor](manage-users-sensor.md). |
| Forwarding | Forward alert information to partners that integrate with Defender for IoT, for example, Microsoft Sentinel, Splunk, ServiceNow. You can also send to email addresses, webhook servers, and more. <br /> See [Forward alert information](how-to-forward-alert-information-to-partners.md) for details. |
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
- Title: Create and manage users
-description: Create and manage users of sensors and the on-premises management console. Users can be assigned the role of Administrator, Security Analyst, or Read-only user.
Previously updated : 01/26/2022---
-# About Defender for IoT console users
-
-This article describes how to create and manage users of sensors and the on-premises management console. User roles include Administrator, Security Analyst, or Read-only users. Each role is associated with a range of permissions to tools for the sensor or on-premises management console. Roles are designed to facilitate granular, secure access to Microsoft Defender for IoT.
-
-Features are also available to track user activity and enable Active Directory sign in.
-
-By default, each sensor and on-premises management console is installed with the *cyberx* and *support* users. Sensors are also installed with the *cyberx_host* user. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for security analysts and read-only users.
-
-## Role-based permissions
-The following user roles are available:
--- **Read only**: Read-only users perform tasks such as viewing alerts and devices on the device map. These users have access to options displayed under **Discover**.--- **Security analyst**: Security Analysts have Read-only user permissions. They can also perform actions on devices, acknowledge alerts, and use investigation tools. These users have access to options displayed under **Discover** and **Analyze**.--- **Administrator**: Administrators have access to all tools, including system configurations, creating and managing users, and more. These users have access to options displayed under **Discover**, **Analyze**, and **Manage** sections of the console main screen.-
-### Role-based permissions to on-premises management console tools
-
-This section describes permissions available to Administrators, Security Analysts, and Read-only users for the on-premises management console.
-
-| Permission | Read-only | Security Analyst | Administrator |
-|--|--|--|--|
-| View and filter the enterprise map | Γ£ô | Γ£ô | Γ£ô |
-| Build a site | | | Γ£ô |
-| Manage a site (add and edit zones) | | | Γ£ô |
-| View and filter device inventory | Γ£ô | Γ£ô | Γ£ô |
-| View and manage alerts: acknowledge, learn, and pin | Γ£ô | Γ£ô | Γ£ô |
-| Generate reports | | Γ£ô | Γ£ô |
-| View risk assessment reports | | Γ£ô | Γ£ô |
-| Set alert exclusions | | Γ£ô | Γ£ô |
-| View or define access groups | | | Γ£ô |
-| Manage system settings | | | Γ£ô |
-| Manage users | | | Γ£ô |
-| Send alert data to partners | | | Γ£ô |
-| Manage certificates | | | Γ£ô |
-| Session timeout when users aren't active | 30 minutes | 30 minutes | 30 minutes |
-
-#### Assign users to access groups
-
-Administrators can enhance user access control in Defender for IoT by assigning users to specific *access groups*. Access groups are assigned to zones, sites, regions, and business units where a sensor is located. By assigning users to access groups, administrators gain specific control over where users manage and analyze device detections.
-
-Working this way accommodates large organizations where user permissions can be complex or determined by a global organizational security policy. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
-
-### Role-based permissions to sensor tools
-
-This section describes permissions available to sensor Administrators, Security Analysts, and Read-only users.
-
-| Permission | Read-only | Security Analyst | Administrator |
-|--|--|--|--|
-| View the dashboard | Γ£ô | Γ£ô | Γ£ô |
-| Control map zoom views | | | Γ£ô |
-| View alerts | Γ£ô | Γ£ô | Γ£ô |
-| Manage alerts: acknowledge, learn, and pin | | Γ£ô | Γ£ô |
-| View events in a timeline | | Γ£ô | Γ£ô |
-| Authorize devices, known scanning devices, programming devices | | Γ£ô | Γ£ô |
-| Merge and delete devices | | | Γ£ô |
-| View investigation data | Γ£ô | Γ£ô | Γ£ô |
-| Manage system settings | | | Γ£ô |
-| Manage users | | | Γ£ô |
-| DNS servers for reverse lookup | | | Γ£ô |
-| Send alert data to partners | | Γ£ô | Γ£ô |
-| Create alert comments | | Γ£ô | Γ£ô |
-| View programming change history | Γ£ô | Γ£ô | Γ£ô |
-| Create customized alert rules | | Γ£ô | Γ£ô |
-| Manage multiple notifications simultaneously | | Γ£ô | Γ£ô |
-| Manage certificates | | | Γ£ô |
-| Session timeout when users are not active | 30 minutes | 30 minutes | 30 minutes |
-
-## Define users
-
-This section describes how to define users. Cyberx, support, and administrator users can add, remove, and update other user definitions.
-
-**To define a user**:
-
-1. From the left pane for the sensor or the on-premises management console, select **Users**.
-
- :::image type="content" source="media/how-to-create-and-manage-users/users-pane.png" alt-text="Screenshot of the Users pane for creating users.":::
-1. In the **Users** window, select **Create User**.
-
-1. In the **Create User** pane, define the following parameters:
-
- - **Username**: Enter a username.
- - **Email**: Enter the user's email address.
- - **First Name**: Enter the user's first name.
- - **Last Name**: Enter the user's last name.
- - **Role**: Define the user's role. For more information, see [Role-based permissions](#role-based-permissions).
- - **Access Group**: If you're creating a user for the on-premises management console, define the user's access group. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
- - **Password**: Select the user type as follows:
- - **Local User**: Define a password for the user of a sensor or an on-premises management console. Password must have at least eight characters and contain lowercase and uppercase alphabetic characters, numbers, and symbols.
- - **Active Directory User**: You can allow users to sign in to the sensor or management console by using Active Directory credentials. Defined Active Directory groups can be associated with specific permission levels. For example, configure a specific Active Directory group and assign all users in the group to the Read-only user type.
--
-## User session timeout
-
-If users aren't active at the keyboard or mouse for a specific time, they're signed out of their session and must sign in again.
-
-When users haven't worked with their console mouse or keyboard for 30 minutes, a session sign-out is forced.
-
-This feature is enabled by default and on upgrade, but can be disabled. In addition, session counting times can be updated. Session times are defined in seconds. Definitions are applied per sensor and on-premises management console.
-
-A session timeout message appears at the console when the inactivity timeout has passed.
-
-### Control inactivity sign-out
-
-Administrator users can enable and disable inactivity sign-out and adjust the inactivity thresholds.
-
-**To access the command**:
-
-1. Sign in to the CLI for the sensor or on-premises management console by using Defender for IoT administrative credentials.
-
-1. Enter `sudo nano /var/cyberx/properties/authentication`.
-
-```azurecli-interactive
- infinity_session_expiration = true
- session_expiration_default_seconds = 0
- # half an hour in seconds (comment)
- session_expiration_admin_seconds = 1800
- # a day in seconds
- session_expiration_security_analyst_seconds = 1800
- # a week in seconds
- session_expiration_read_only_users_seconds = 1800
-```
-
-To disable the feature, change `infinity_session_expiration = true` to `infinity_session_expiration = false`.
-
-To update sign-out counting periods, adjust the `= <number>` value to the required time.
-
-## Track user activity
-
-Track user activity on a sensor's event timeline, or by viewing audit logs generated on an on-premises management console.
--- **The timeline** displays the event or affected device, and the time and date that the user carried out the activity.--- **Audit logs** record key activity data at the time of occurrence. Use audit logs generated on the on-premises management console to understand which changes were made, when, and by whom.-
-### View user activity on the sensor's Event Timeline
-
-Select **Event Timeline** from the sensor side menu. If needed, verify that **User Operations** filter is set to **Show**.
-
-For example:
--
-Use the filters or search using CTRL+F to find the information of interest to you.
-
-### View audit log data on the on-premises management console
-
-In the on-premises management console, select **System Settings > System Statistics**, and then select **Audit log**.
-
-The dialog displays data from the currently active audit log. For example:
-
-For example:
--
-New audit logs are generated at every 10 MB. One previous log is stored in addition to the current active log file.
-
-Audit logs include the following data:
-
-| Action | Information logged |
-|--|--|
-| **Learn, and remediation of alerts** | Alert ID |
-| **Password changes** | User, User ID |
-| **Login** | User |
-| **User creation** | User, User role |
-| **Password reset** | User name |
-| **Exclusion rules-Creation**| Rule summary |
-| **Exclusion rules-Editing**| Rule ID, Rule Summary |
-| **Exclusion rules-Deletion** | Rule ID |
-| **Management Console Upgrade** | The upgrade file used |
-| **Sensor upgrade retry** | Sensor ID |
-| **Uploaded TI package** | No additional information recorded. |
--
-> [!TIP]
-> You may also want to export your audit logs to send them to the support team for extra troubleshooting. For more information, see [Export audit logs for troubleshooting](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-audit-logs-for-troubleshooting)
->
-
-## Change a user's password
-
-User passwords can be changed for users created with a local password.
-
-**Administrator users**
-
-The Administrator can change the password for the Security Analyst and Read-only roles. The Administrator role user can't change their own password and must contact a higher-level role.
-
-**Security Analyst and Read-only users**
-
-The Security Analyst and Read-only roles can't reset any passwords. The Security Analyst and Read-only roles need to contact a user with a higher role level to have their passwords reset.
-
-**CyberX and Support users**
-
-CyberX role can change the password for all user roles. The Support role can change the password for a Support, Administrator, Security Analyst, and Read-only user roles.
-
-**To reset a user's password on the sensor**:
-
-1. Sign in to the sensor using a user with the role Administrator, Support, or CyberX.
-
-1. Select **Users** from the left-hand panel.
-
-1. Locate the local user whose password needs to be changed.
-
-1. On this row, select three dots (...) and then select **Edit**.
-
- :::image type="content" source="media/how-to-create-and-manage-users/change-password.png" alt-text="Screenshot of the Change password dialog for local sensor users.":::
-
-1. Enter and confirm the new password in the **Change Password** section.
-
- > [!NOTE]
- > Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers and one of the following symbols: #%*+,-./:=?@[]^_{}~
-
-1. Select **Update**.
-
-**To reset a user's password on the on-premises management console**:
-
-1. Sign in to the on-premises management console using a user with the role Administrator, Support, or CyberX.
-
-1. Select **Users** from the left-hand panel.
-
-1. Locate your user and select the edit icon :::image type="icon" source="media/password-recovery-images/edit-icon.png" border="false"::: .
-
-1. Enter the new password in the **New Password** and **Confirm New Password** fields.
-
- > [!NOTE]
- > Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers and one of the following symbols: #%*+,-./:=?@[]^_{}~
-
-1. Select **Update**.
-
-## Recover the password for the on-premises management console, or the sensor
-
-You can recover the password for the on-premises management console or the sensor with the Password recovery feature. Only the CyberX and Support users have access to the Password recovery feature.
-
-**To recover the password for the on-premises management console, or the sensor**:
-
-1. On the sign-in screen of either the on-premises management console or the sensor, select **Password recovery**. The **Password recovery** screen opens.
-
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Screenshot of the Select Password recovery from the sign-in screen of either the on-premises management console, or the sensor.":::
-
-1. Select either **CyberX** or **Support** from the drop-down menu, and copy the unique identifier code.
-
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery-screen.png" alt-text="Screenshot of selecting either the Defender for IoT user or the support user.":::
-
-1. Navigate to the Azure portal, and select **Sites and Sensors**.
-
-1. Select the **Subscription Filter** icon :::image type="icon" source="media/password-recovery-images/subscription-icon.png" border="false"::: from the top toolbar, and select the subscription your sensor is connected to.
-
-1. Select the **More Actions** drop down menu, and select **Recover on-premises management console password**.
-
- :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text="Screenshot of the recover on-premises management console password option.":::
-
-1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded.
-
- :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of entering enter the unique identifier and then selecting recover." lightbox="media/how-to-create-and-manage-users/enter-identifier.png":::
-
- [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-
-1. On the Password recovery screen, select **Upload**. **The Upload Password Recovery File** window will open.
-
-1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` to the window.
-
- > [!NOTE]
- > An error message may appear indicating the file is invalid. To fix this error message, ensure you selected the right subscription before downloading the `password_recovery.zip` and download it again.
-
-1. Select **Next**, and your user, and a system-generated password for your management console will then appear.
--
-## Next steps
--- [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)--- [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)--- [Track sensor activity](how-to-track-sensor-activity.md)--- [Integrate with Active Directory servers](integrate-with-active-directory.md)
defender-for-iot How To Define Global User Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-define-global-user-access-control.md
- Title: Define global user access control
-description: In large organizations, user permissions can be complex and might be determined by a global organizational structure, in addition to the standard site and zone structure.
Previously updated : 11/09/2021---
-# Define global access control
-
-In large organizations, user permissions can be complex and might be determined by a global organizational structure, in addition to the standard site and zone structure.
-
-To support the demand for user access permissions that are global and more complex, you can create a global business topology that's based on business units, regions, and sites. Then you can define user access permissions around these entities.
-
-Working with access tools for business topology helps organizations implement zero-trust strategies by better controlling where users manage and analyze devices in the Microsoft Defender for IoT platform.
-
-## About access groups
-
-Global access control is established through the creation of user access groups. Access groups consist of rules regarding which users can access specific business entities. Working with groups lets you control view and configuration access to Defender for IoT for specific user roles at relevant business units, regions, and sites.
-
-For example, allow security analysts from an Active Directory group to access all West European automotive and glass production lines, along with a plastics line in one region.
--
-Before you create access groups, we recommend that you:
--- Carefully set up your business topology. For more information about business topology, see [Work with site map views](how-to-gain-insight-into-global-regional-and-local-threats.md#work-with-site-map-views).--- Plan which users are associated with the access groups that you create. Two options are available for assigning users to access groups:-
- - **Assign groups of Active Directory groups**: Verify that you set up an Active Directory instance to integrate with the on-premises management console.
-
- - **Assign local users**: Verify that you created users. For more information, see [Define users](how-to-create-and-manage-users.md#define-users).
-
-Admin users can't be assigned to access groups. These users have access to all business topology entities by default.
-
-## Create access groups
-
-This section describes how to create access groups. Default global business units and regions are created for the first group that you create. You can edit the default entities when you define your first group.
-
-To create groups:
-
-1. Select **Access Groups** from the side menu of the on-premises management console.
-
-2. Select :::image type="icon" source="media/how-to-define-global-user-access-control/add-icon.png" border="false":::. In the **Add Access Group** dialog box, enter a name for the access group. The console supports 64 characters. Assign the name in a way that will help you easily distinguish this group from other groups.
-
- :::image type="content" source="media/how-to-define-global-user-access-control/add-access-group.png" alt-text="The Add Access Group dialog box where you create access groups.":::
-
-3. If the **Assign an Active Directory Group** option appears, you can assign one Active Directory group of users to this access group.
-
- :::image type="content" source="media/how-to-define-global-user-access-control/add-access-group.png" alt-text="Assign an Active Directory group in the Create Access Group dialog box.":::
-
- If the option doesn't appear, and you want to include Active Directory groups in access groups, select **System Settings**. On the **Integrations** pane, define the groups. Enter a group name exactly as it appears in the Active Directory configurations, and in lowercase.
-
-5. On the **Users** pane, assign as many users as required to the group. You can also assign users to different groups. If you work this way, you must create and save the access group and rules, and then assign users to the group from the **Users** pane.
-
- :::image type="content" source="media/how-to-define-global-user-access-control/role-management.png" alt-text="Manage your users' roles and assign them as needed.":::
-
-6. Create rules in the **Add Rules for *name*** dialog box based on your business topology's access requirements. Options that appear here are based on the topology that you designed in the **Enterprise View** and **Site Management** windows.
-
- You can create more than one rule per group. You might need to create more than one rule per group when you're working with complex access granularity at multiple sites.
-
- :::image type="content" source="media/how-to-define-global-user-access-control/add-rule.png" alt-text="Add a rule to your system.":::
-
-The rules that you create appear in the **Add Access Group** dialog box. There, you can delete or edit them.
--
-### About rules
-
-When you're creating rules, be aware of the following information:
-
-- When an access group contains several rules, the rule logic aggregates all rules. That is, the rules use AND logic, not OR logic.
-
-- For a rule to be successfully applied, you must assign sensors to zones in the **Site Management** window.
-
-- You can assign only one element per rule. For example, you can assign one business unit, one region, and one site for each rule. Create more rules for the group if, for example, you want users in one Active Directory group to have access to different business units in different regions.
-
-- If you change an entity and the change affects the rule logic, the rule will be deleted. If changes made to a topology entity affect the rule logic such that all rules are deleted, the access group remains but the users can't sign in to the on-premises management console. Users are notified to contact their administrator to sign in.
-
-- If no business unit or region is selected, users will have access to all defined business units and regions.
-
-## Next steps
-
-For more information, see [About Defender for IoT console users](how-to-create-and-manage-users.md).
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
This procedure describes how to install OT sensor software on a physical or virt
Save the usernames and passwords listed; the passwords are unique, and this is the only time the credentials are shown. Copy the credentials to a safe place so that you can use them when signing in to the sensor for the first time.
+ For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+ Select `<Ok>` when you're ready to continue. The installation continues, and then the machine reboots when the installation is complete. Upon reboot, you're prompted to enter credentials to sign in. For example:
This procedure describes how to install OT sensor software on a physical or virt
Make sure that your sensor is connected to your network, and then you can sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor). + # [On-premises management console](#tab/on-prem)
During the installation process, you can add a secondary NIC. If you choose not
1. Accept the settings and continue by typing `Y`.
-1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
+1. After about 10 minutes, the two sets of credentials appear. For example:
 :::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Copy these credentials as they won't be presented again."::: Save the usernames and passwords; you'll need these credentials to access the platform the first time you use it.
+ For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+ 1. Select **Enter** to continue. For information on how to find the physical port on your appliance, see [Find your port](#find-your-port).
This command will cause the light on the port to flash for the specified time pe
After you've finished installing OT monitoring software on your appliance, test your system to make sure that processes are running correctly. The same validation process applies to all appliance types.
-System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the **Support** and **CyberX** users.
+System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the *support* and *cyberx* users.
After installing OT monitoring software, make sure to run the following tests:
The interface between the IT firewall, on-premises management console, and the O
**To enable tunneling access for sensors**:
-1. Sign in to the on-premises management console's CLI with the **CyberX** or the **Support** user credentials.
+1. Sign in to the on-premises management console's CLI with the *cyberx* or the *support* user credentials. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
1. Enter `sudo cyberx-management-tunnel-enable`.
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
This section describes how to ensure connection between the sensor and the on-pr
8. In the on-premises management console, in the **Site Management** window, assign the sensor to a site and zone.
-Continue with additional configurations, such as adding users, configuring forwarding exclusion rules and more. For example, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md), [About Defender for IoT console users](how-to-create-and-manage-users.md), or [Forward alert information](how-to-forward-alert-information-to-partners.md).
+Continue with additional configurations, such as [adding users](manage-users-on-premises-management-console.md), [configuring forwarding exclusion rules](how-to-forward-alert-information-to-partners.md), and more. For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).
## Change the name of a sensor
Clearing data deletes all detected or learned data on the sensor. After clearing
**To clear system data**:
-1. Sign in to the sensor as the **cyberx** user.
+1. Sign in to the sensor as the *cyberx* user.
1. Select **Support** > **Clear data**.
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
If you're updating your OT sensor version from a legacy version to 22.1.x or hig
Make sure that you've started with the relevant update steps for this update. For more information, see [Update OT system software](update-ot-software.md). > [!NOTE]
-> After upgrading to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
+> After upgrading to version 22.1.x, the new upgrade log is available at the following path and is accessible to the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`. To access the upgrade log, sign in to the sensor via SSH as the *cyberx_host* user.
>
+> For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+ ## Understand sensor health (Public preview)
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
This section provides troubleshooting for common issues when preparing your netw
1. Connect a monitor and a keyboard to the appliance.
- 1. Use the **support** user and password to sign in.
+ 1. Use the *support* user and password to sign in.
1. Use the command **network list** to see the current IP address.
This section provides troubleshooting for common issues when preparing your netw
1. Connect with a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-2. Use the **support** credentials to sign in.
+2. Use the *support* credentials to sign in.
3. Use the **system sanity** command and check that all processes are running.
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Check your system health from the sensor or on-premises management console.
**To access the system health tool**:
-1. Sign in to the sensor or on-premises management console with the **Support** user credentials.
+1. Sign in to the sensor or on-premises management console with the *support* user credentials.
1. Select **System Statistics** from the **System Settings** window.
Verify that the system is up and running prior to testing the system's sanity.
**To test the system's sanity**:
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*.
1. Enter `system sanity`.
Verify that the correct version is used:
**To check the system's version**:
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*.
1. Enter `system version`.
Verify that all the input interfaces configured during the installation process
**To validate the system's network status**:
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the **Support** user.
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the *support* user.
1. Enter `network list` (the equivalent of the Linux command `ifconfig`).
Verify that you can access the console web GUI:
1. Connect a monitor and a keyboard to the appliance.
- 1. Use the **Support** user and password to sign in.
+ 1. Use the *support* user and password to sign in.
1. Use the command `network list` to see the current IP address.
Verify that you can access the console web GUI:
1. To apply the settings, select **Y**.
-1. After restart, connect with the **Support** user credentials and use the `network list` command to verify that the parameters were changed.
+1. After restart, connect with the *support* user credentials and use the `network list` command to verify that the parameters were changed.
1. Try to ping and connect from the GUI again.
Verify that you can access the console web GUI:
1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-1. Use the **Support** user credentials to sign in.
+1. Use the *support* user credentials to sign in.
1. Use the `system sanity` command and check that all processes are running. For example:
When signing into a preconfigured sensor for the first time, you'll need to perf
1. Select **Next**, and your username and system-generated password for your management console will then appear. > [!NOTE]
- > When you sign in to a sensor or on-premises management console for the first time it will be linked to the subscription you connected it to. If you need to reset the password for the CyberX, or Support user you will need to select that subscription. For more information on recovering a CyberX, or Support user password, see [Recover the password for the on-premises management console, or the sensor](how-to-create-and-manage-users.md#recover-the-password-for-the-on-premises-management-console-or-the-sensor).
+ > When you sign in to a sensor or on-premises management console for the first time, it's linked to your Azure subscription. You'll need to select that subscription if you need to recover the password for the *cyberx* or *support* user. For more information, see the relevant procedure for [sensors](manage-users-sensor.md#recover-privileged-access-to-a-sensor) or an [on-premises management console](manage-users-on-premises-management-console.md#recover-privileged-access-to-an-on-premises-management-console).
### Investigate a lack of traffic
You may also want to export your audit logs to send them to the support team for
1. Exported audit logs are encrypted for your security, and require a password to open. In the **Archived Files** list, select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button for your exported logs to view its password. If you're forwarding the audit logs to the support team, make sure to send the password to support separately from the exported logs.
-For more information, see [View audit log data on the on-premises management console](how-to-create-and-manage-users.md#view-audit-log-data-on-the-on-premises-management-console).
+For more information, see [Track on-premises user activity](track-user-activity.md).
## Next steps
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
A variety of map tools help you gain insight into devices and connections of int
- [Group highlight and filters tools](#group-highlight-and-filters-tools) - [Map display tools](#map-display-tools)
-Your user role determines which tools are available in the Device Map window. See [Create and manage users](how-to-create-and-manage-users.md) for details about user roles.
+Your user role determines which tools are available in the Device Map window. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md) and [Create and manage users on an OT network sensor](manage-users-sensor.md).
### Basic search tools
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
Title: Update threat intelligence data description: The threat intelligence data package is provided with each new Defender for IoT version, or if needed between releases. Previously updated : 06/02/2022 Last updated : 11/16/2022 # Threat intelligence research and packages+ ## Overview Security teams at Microsoft carry out proprietary ICS threat intelligence and vulnerability research. These teams include MSTIC (Microsoft Threat Intelligence Center), DART (Microsoft Detection and Response Team), DCU (Digital Crimes Unit), and Section 52 (IoT/OT/ICS domain experts that track ICS-specific zero-days, reverse-engineering malware, campaigns, and adversaries)
You can change the sensor threat intelligence update mode after initial onboardi
Packages can be downloaded from the Azure portal and manually uploaded to individual sensors. If the on-premises management console manages your sensors, you can download threat intelligence packages to the management console and push them to multiple sensors simultaneously. This option is available for both *cloud connected* and *locally managed* sensors. [!INCLUDE [root-of-trust](includes/root-of-trust.md)] - **To upload to a single sensor:**
-1. Go to the Microsoft Defender for IoT **Updates** page.
+1. In Defender for IoT on the Azure portal, go to the **Get started** > **Updates** tab.
-2. Download and save the **Threat Intelligence** package.
+1. In the **Sensor threat intelligence update** box, select **Download file** to download the latest threat intelligence package.
-3. Sign in to the sensor console.
+1. Sign in to the sensor console, and then select **System settings** > **Threat intelligence**.
-4. On the side menu, select **System Settings**.
+1. In the **Threat intelligence** pane, select **Upload file**. For example:
-5. Select **Threat Intelligence Data**, and then select **Update**.
+ :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/update-threat-intelligence-single-sensor.png" alt-text="Screenshot of where you can upload Threat Intelligence package to a single sensor." lightbox="media/how-to-work-with-threat-intelligence-packages/update-threat-intelligence-single-sensor.png":::
-6. Upload the new package.
+1. Browse to and select the package you'd downloaded from the Azure portal and upload it to the sensor.
**To upload to multiple sensors simultaneously:**
-1. Go to the Microsoft Defender for IoT **Updates** page.
+1. In Defender for IoT on the Azure portal, go to the **Get started** > **Updates** tab.
+
+1. In the **Sensor threat intelligence update** box, select **Download file** to download the latest threat intelligence package.
+
+1. Sign in to the management console and select **System settings**.
+
+1. In the **Sensor Engine Configuration** area, select the sensors that you want to receive the updated packages. For example:
-2. Download and save the **Threat Intelligence** package.
+ :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/update-threat-intelligence-multiple-sensors.png" alt-text="Screenshot of where you can select which sensors you want to make changes to." lightbox="media/how-to-work-with-threat-intelligence-packages/update-threat-intelligence-multiple-sensors.png":::
-3. Sign in to the management console.
+1. In the **Sensor Threat Intelligence Data** section, select the plus sign (**+**).
-4. On the side menu, select **System Settings**.
+1. In the **Upload File** dialog, select **BROWSE FILE...** to browse to and select the update package. For example:
-5. In the **Sensor Engine Configuration** section, select the sensors that should receive the updated packages.
+ :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/upload-threat-intelligence-to-management-console.png" alt-text="Screenshot of where you can upload a Threat Intelligence package to multiple sensors." lightbox="media/how-to-work-with-threat-intelligence-packages/upload-threat-intelligence-to-management-console.png":::
-6. In the **Select Threat Intelligence Data** section, select the plus sign (**+**).
+1. Select **CLOSE** and then **SAVE CHANGES** to push the threat intelligence update to all selected sensors.
-7. Upload the package.
+ :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/save-changes-management-console.png" alt-text="Screenshot of where you can save changes made to selected sensors on the management console." lightbox="media/how-to-work-with-threat-intelligence-packages/save-changes-management-console.png":::
## Review package update status on the sensor
Review the following information about threat intelligence packages for your clo
1. Review the **Threat Intelligence version** installed on each sensor. Version naming is based on the day the package was built by Defender for IoT.
-1. Review the **Threat Intelligence mode** . *Automatic* indicates that newly available packages will be automatically installed on sensors as they're released by Defender for IoT.
+1. Review the **Threat Intelligence mode**. *Automatic* indicates that newly available packages will be automatically installed on sensors as they're released by Defender for IoT.
*Manual* indicates that you can push newly available packages directly to sensors as needed.
Review the following information about threat intelligence packages for your clo
- Update Available - Ok
-If cloud connected threat intelligence updates fail, review connection information in the **Sensor status** and **Last connected UTC** columns in the **Sites and Sensors** page.
+If cloud-connected threat intelligence updates fail, review connection information in the **Sensor status** and **Last connected UTC** columns on the **Sites and sensors** page.
## Next steps
defender-for-iot Integrate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-with-active-directory.md
- Title: Integrate with Active Directory - Microsoft Defender for IoT
-description: Configure the sensor or on-premises management console to work with Active Directory.
Previously updated : 05/17/2022---
-# Integrate with Active Directory servers
-
-Configure the sensor or on-premises management console to work with Active Directory. This allows Active Directory users to access the Microsoft Defender for IoT consoles by using their Active Directory credentials.
-
-> [!Note]
-> LDAP v3 is supported.
-
-Two types of LDAP-based authentication are supported:
-
-- **Full authentication**: User details are retrieved from the LDAP server. Examples are the first name, last name, email, and user permissions.
-
-- **Trusted user**: Only the user password is retrieved. Other user details that are retrieved are based on users defined in the sensor.
-
-For more information, see [networking requirements](how-to-set-up-your-network.md#other-firewall-rules-for-external-services-optional).
-
-## Active Directory and Defender for IoT permissions
-
-You can associate Active Directory groups defined here with specific permission levels. For example, configure a specific Active Directory group and assign Read-only permissions to all users in the group.
-
-## Active Directory configuration guidelines
-
-- You must define the LDAP parameters here exactly as they appear in Active Directory.
-- For all the Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Active Directory use uppercase.
-- You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
-
-**To configure Active Directory**:
-
-1. From the left pane, select **System Settings**.
-1. Select **Integrations** and then select **Active Directory**.
-
-1. Enable the **Active Directory Integration Enabled** toggle.
-
-1. Set the Active Directory server parameters, as follows:
-
- | Server parameter | Description |
- |--|--|
- | Domain controller FQDN | Set the fully qualified domain name (FQDN) exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
- | Domain controller port | Define the port on which your LDAP is configured. |
- | Primary domain | Set the domain name (for example, `subdomain.domain.com`) |
- | Connection type | Set the authentication type: LDAPS/NTLMv3 (Recommended), LDAP/NTLMv3 or LDAP/SASL-MD5 |
- | Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. You can enter a group name that you'll associate with Admin, Security Analyst and Read-only permission levels. Use these groups when creating new sensor users.|
- | Trusted endpoints | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted endpoints only for users who were defined under users. |
-
- ### Active Directory groups for the on-premises management console
-
- If you're creating Active Directory groups for on-premises management console users, you must create an Access Group rule for each Active Directory group. On-premises management console Active Directory credentials won't work if an Access Group rule doesn't exist for the Active Directory user group. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
-
-1. Select **Save**.
-
-1. To add a trusted server, select **Add Server** and configure another server.
--
-## Next steps
-
-For more information, see [how to create and manage users](./how-to-create-and-manage-users.md).
defender-for-iot Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md
+
+ Title: Investigate and detect threats for IoT devices | Microsoft Docs
+description: This tutorial describes how to use the Microsoft Sentinel data connector and solution for Microsoft Defender for IoT to secure your entire OT environment. Detect and respond to OT threats, including multistage attacks that may cross IT and OT boundaries.
+ Last updated : 09/18/2022++
+# Tutorial: Investigate and detect threats for IoT devices
+
+The integration between Microsoft Defender for IoT and [Microsoft Sentinel](/azure/sentinel/) enables SOC teams to efficiently and effectively detect and respond to Operational Technology (OT) threats. Enhance your security capabilities with the [Microsoft Defender for IoT solution](/azure/sentinel/sentinel-solutions-catalog#domain-solutions), a set of bundled content configured specifically for Defender for IoT data that includes analytics rules, workbooks, and playbooks.
+
+While Defender for IoT supports both Enterprise IoT and OT networks, the **Microsoft Defender for IoT** solution supports OT networks only.
+
+In this tutorial, you:
+
+> [!div class="checklist"]
+>
+> * Install the **Microsoft Defender for IoT** solution in your Microsoft Sentinel workspace
+> * Learn how to investigate Defender for IoT alerts in Microsoft Sentinel incidents
+> * Learn about the analytics rules, workbooks, and playbooks deployed to your Microsoft Sentinel workspace with the **Microsoft Defender for IoT** solution
+
+> [!IMPORTANT]
+>
+> The Microsoft Sentinel content hub experience is currently in **PREVIEW**, as is the **Microsoft Defender for IoT** solution. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+Before you start, make sure you have:
+
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](/azure/sentinel/roles).
+
+- Completed [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md).
+
+## Install the Defender for IoT solution
+
+Microsoft Sentinel [solutions](/azure/sentinel/sentinel-solutions) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process.
+
+The **Microsoft Defender for IoT** solution integrates Defender for IoT data with Microsoft Sentinel's security orchestration, automation, and response (SOAR) capabilities by providing out-of-the-box and OT-optimized playbooks for automated response and prevention capabilities.
+
+**To install the solution**:
+
+1. In Microsoft Sentinel, under **Content management**, select **Content hub** and then locate the **Microsoft Defender for IoT** solution.
+
+1. At the bottom right, select **View details**, and then **Create**. Select the subscription, resource group, and workspace where you want to install the solution, and then review the related security content that will be deployed.
+
+1. When you're done, select **Review + Create** to install the solution.
+
+For more information, see [About Microsoft Sentinel content and solutions](/azure/sentinel/sentinel-solutions) and [Centrally discover and deploy out-of-the-box content and solutions](/azure/sentinel/sentinel-solutions-deploy).
+
+## Detect threats out-of-the-box with Defender for IoT data
+
+The **Microsoft Defender for IoT** data connector includes a default *Microsoft Security* rule named **Create incidents based on Azure Defender for IOT alerts**, which automatically creates new incidents for any new Defender for IoT alerts detected.
+
+The **Microsoft Defender for IoT** solution includes a more detailed set of out-of-the-box analytics rules, which are built specifically for Defender for IoT data and fine-tune the incidents created in Microsoft Sentinel for relevant alerts.
+
+**To use out-of-the-box Defender for IoT alerts**:
+
+1. On the Microsoft Sentinel **Analytics** page, search for and disable the **Create incidents based on Azure Defender for IOT alerts** rule. This step prevents duplicate incidents from being created in Microsoft Sentinel for the same alerts.
+
+1. Search for and enable any of the following out-of-the-box analytics rules, installed with the **Microsoft Defender for IoT** solution:
+
+ | Rule Name | Description|
+ | - | -|
+ | **Illegal function codes for ICS/SCADA traffic** | Illegal function codes in supervisory control and data acquisition (SCADA) equipment may indicate one of the following: <br><br>- Improper application configuration, such as due to a firmware update or reinstallation. <br>- Malicious activity. For example, a cyber threat that attempts to use illegal values within a protocol to exploit a vulnerability in the programmable logic controller (PLC), such as a buffer overflow. |
+ | **Firmware update** | Unauthorized firmware updates may indicate malicious activity on the network, such as a cyber threat that attempts to manipulate PLC firmware to compromise PLC function. |
+ | **Unauthorized PLC changes** | Unauthorized changes to PLC ladder logic code may be one of the following: <br><br>- An indication of new functionality in the PLC. <br>- Improper configuration of an application, such as due to a firmware update or reinstallation. <br>- Malicious activity on the network, such as a cyber threat that attempts to manipulate PLC programming to compromise PLC function. |
+ | **PLC insecure key state** | The new mode may indicate that the PLC is not secure. Leaving the PLC in an insecure operating mode may allow adversaries to perform malicious activities on it, such as a program download. <br><br>If the PLC is compromised, devices and processes that interact with it may be impacted, which may affect overall system security and safety. |
+ | **PLC stop** | The PLC stop command may indicate an improper configuration of an application that has caused the PLC to stop functioning, or malicious activity on the network. For example, a cyber threat that attempts to manipulate PLC programming to affect the functionality of the network. |
+ | **Suspicious malware found in the network** | Suspicious malware found on the network indicates that suspicious malware is trying to compromise production. |
+ | **Multiple scans in the network** | Multiple scans on the network can be an indication of one of the following: <br><br>- A new device on the network <br>- New functionality of an existing device <br>- Misconfiguration of an application, such as due to a firmware update or reinstallation <br>- Malicious activity on the network for reconnaissance |
+ | **Internet connectivity** | An OT device communicating with internet addresses may indicate an improper application configuration, such as anti-virus software attempting to download updates from an external server, or malicious activity on the network. |
+ | **Unauthorized device in the SCADA network** | An unauthorized device on the network may be a legitimate, new device recently installed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **Unauthorized DHCP configuration in the SCADA network** | An unauthorized DHCP configuration on the network may indicate a new, unauthorized device operating on the network. <br><br>This may be a legitimate, new device recently deployed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **Excessive login attempts** | Excessive sign in attempts may indicate improper service configuration, human error, or malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **High bandwidth in the network** | An unusually high bandwidth may be an indication of a new service/process on the network, such as backup, or an indication of malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **Denial of Service** | This alert detects attacks that would prevent the use or proper operation of the DCS system. |
+ | **Unauthorized remote access to the network** | Unauthorized remote access to the network can compromise the target device. <br><br> This means that if another device on the network is compromised, the target devices can be accessed remotely, increasing the attack surface. |
+ | **No traffic on Sensor Detected** | A sensor that no longer detects network traffic indicates that the system may be insecure. |
+
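+Before enabling these rules, it can be useful to see which Defender for IoT alert types already appear in your workspace, and how often. The following query is a minimal sketch that assumes the standard **SecurityAlert** schema and the `ProductName` value used by Defender for IoT alerts (`Azure Security Center for IoT`); adjust it to your own data as needed:
+
+```kusto
+// Count Defender for IoT alerts by name and severity to see which analytics rules are most relevant
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| summarize AlertCount = count() by AlertName, AlertSeverity
+| sort by AlertCount desc
+```
+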
+## Investigate Defender for IoT incidents
+
+After you've [configured your Defender for IoT data to trigger new incidents in Microsoft Sentinel](#detect-threats-out-of-the-box-with-defender-for-iot-data), start investigating those incidents in Microsoft Sentinel as you would other incidents.
+
+**To investigate Microsoft Defender for IoT incidents**:
+
+1. In Microsoft Sentinel, go to the **Incidents** page.
+
+1. Above the incident grid, select the **Product name** filter and clear the **Select all** option. Then, select **Microsoft Defender for IoT** to view only incidents triggered by Defender for IoT alerts. For example:
+
+ :::image type="content" source="media/iot-solution/filter-incidents-defender-for-iot.png" alt-text="Screenshot of filtering incidents by product name for Defender for IoT devices.":::
+
+1. Select a specific incident to begin your investigation.
+
+ In the incident details pane on the right, view details such as incident severity, a summary of the entities involved, any mapped MITRE ATT&CK tactics or techniques, and more.
+
+ :::image type="content" source="media/iot-solution/investigate-iot-incidents.png" alt-text="Screenshot of a Microsoft Defender for IoT incident in Microsoft Sentinel.":::
+
+ > [!TIP]
+ > To investigate the incident in Defender for IoT, select the **Investigate in Microsoft Defender for IoT** link at the top of the incident details pane.
+
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](/azure/sentinel/investigate-cases).
+
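+If you prefer to triage from the **Logs** page instead of the incident grid, you can approximate the same product filter with a query. The following is a sketch only, assuming the standard **SecurityIncident** and **SecurityAlert** tables in your Log Analytics workspace; it lists the latest record for each incident that contains at least one Defender for IoT alert:
+
+```kusto
+// List Microsoft Sentinel incidents that include Defender for IoT alerts
+SecurityIncident
+| summarize arg_max(TimeGenerated, *) by IncidentNumber   // keep only the latest record per incident
+| mv-expand AlertIds to typeof(string)
+| join kind=inner (
+    SecurityAlert
+    | where ProductName == "Azure Security Center for IoT"
+    | distinct SystemAlertId, AlertName
+) on $left.AlertIds == $right.SystemAlertId
+| project IncidentNumber, Title, Severity, Status, AlertName
+```
+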
+### Investigate further with IoT device entities
+
+When investigating an incident in Microsoft Sentinel, in an incident details pane, select an IoT device entity from the **Entities** list to open its device entity page. You can identify an IoT device by the IoT device icon: :::image type="icon" source="media/iot-solution/iot-device-icon.png" border="false":::
+
+If you don't see your IoT device entity right away, select **View full details** under the entities listed to open the full incident page. In the **Entities** tab, select an IoT device to open its entity page. For example:
+
+ :::image type="content" source="media/iot-solution/incident-full-details-iot-device.png" alt-text="Screenshot of a full detail incident page.":::
+
+The IoT device entity page provides contextual device information, with basic device details and device owner contact information. The device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example:
++
+For more information on entity pages, see [Investigate entities with entity pages in Microsoft Sentinel](/azure/sentinel/entity-pages).
+
+You can also hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name:
++
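+As a complement to the **Entity behavior** page, a query along the following lines shows where most Defender for IoT alerts originate. This is a sketch that groups by the sensor name recorded in the alert's `ExtendedProperties` field rather than by individual device; adapt the grouping field to your environment:
+
+```kusto
+// Top five sensors by Defender for IoT alert volume
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| extend Sensor = tostring(parse_json(ExtendedProperties).SensorId)
+| summarize AlertCount = count() by Sensor
+| top 5 by AlertCount
+```
+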
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](/azure/sentinel/investigate-cases).
+
+## Visualize and monitor Defender for IoT data
+
+To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution.
+
+The Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
+
+View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](/azure/sentinel/get-visibility).
+
+The following table describes the workbooks included in the **Microsoft Defender for IoT** solution:
+
+|Workbook |Description |Logs |
+||||
+|**Overview** | Dashboard displaying a summary of key metrics for device inventory, threat detection and vulnerabilities. | Uses data from Azure Resource Graph (ARG) |
+|**Device Inventory** | Displays data such as: OT device name, type, IP address, MAC address, Model, OS, Serial Number, Vendor, Protocols, Open alerts, and CVEs and recommendations per device. Can be filtered by site, zone, and sensor. | Uses data from Azure Resource Graph (ARG) |
+|**Incidents** | Displays data such as: <br><br>- Incident Metrics, Topmost Incident, Incident over time, Incident by Protocol, Incident by Device Type, Incident by Vendor, and Incident by IP address.<br><br>- Incident by Severity, Incident Mean time to respond, Incident Mean time to resolve and Incident close reasons. | Uses data from the following log: `SecurityAlert` |
+|**Alerts** | Displays data such as: Alert Metrics, Top Alerts, Alert over time, Alert by Severity, Alert by Engine, Alert by Device Type, Alert by Vendor and Alert by IP address. | Uses data from Azure Resource Graph (ARG) |
+|**MITRE ATT&CK® for ICS** | Displays data such as: Tactic Count, Tactic Details, Tactic over time, Technique Count. | Uses data from the following log: `SecurityAlert` |
+|**Vulnerabilities** | Displays vulnerabilities and CVEs for vulnerable devices. Can be filtered by device site and CVE severity. | Uses data from Azure Resource Graph (ARG) |
+
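+For the workbooks that read from the `SecurityAlert` log, you can preview the underlying data directly in **Logs**. For example, the following sketch approximates the data behind the MITRE ATT&CK® for ICS workbook; it assumes the standard `Tactics` column in **SecurityAlert** is populated, typically as a comma-separated list of tactic names:
+
+```kusto
+// Count Defender for IoT alerts by MITRE ATT&CK tactic
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| where isnotempty(Tactics)
+| mv-expand Tactic = split(Tactics, ",")
+| summarize AlertCount = count() by Tactic = trim(" ", tostring(Tactic))
+| sort by AlertCount desc
+```
+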
+## Automate response to Defender for IoT alerts
+
+Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response; it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
+
+The [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution includes out-of-the-box playbooks that provide the following functionality:
+
+- [Automatically close incidents](#automatically-close-incidents)
+- [Send email notifications by production line](#send-email-notifications-by-production-line)
+- [Create a new ServiceNow ticket](#create-a-new-servicenow-ticket)
+- [Update alert statuses in Defender for IoT](#update-alert-statuses-in-defender-for-iot)
+- [Automate workflows for incidents with active CVEs](#automate-workflows-for-incidents-with-active-cves)
+- [Send email to the IoT/OT device owner](#send-email-to-the-iotot-device-owner)
+- [Triage incidents involving highly important devices](#triage-incidents-involving-highly-important-devices)
+
+Before using the out-of-the-box playbooks, make sure to perform the prerequisite steps as listed [below](#playbook-prerequisites).
+
+For more information, see:
+
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](/azure/sentinel/tutorial-respond-threats-playbook)
+- [Automate threat response with playbooks in Microsoft Sentinel](/azure/sentinel/automate-responses-with-playbooks)
+
+### Playbook prerequisites
+
+Before using the out-of-the-box playbooks, make sure you perform the following prerequisites, as needed for each playbook:
+
+- [Ensure valid playbook connections](#ensure-valid-playbook-connections)
+- [Add a required role to your subscription](#add-a-required-role-to-your-subscription)
+- [Connect your incidents, relevant analytics rules, and the playbook](#connect-your-incidents-relevant-analytics-rules-and-the-playbook)
+
+#### Ensure valid playbook connections
+
+This procedure helps ensure that each connection step in your playbook has valid connections, and is required for all solution playbooks.
+
+**To ensure your valid connections**:
+
+1. In Microsoft Sentinel, open the playbook from **Automation** > **Active playbooks**.
+
+1. Select a playbook to open it as a Logic app.
+
+1. With the playbook opened as a Logic app, select **Logic app designer**. Expand each step in the logic app to check for invalid connections, which are indicated by an orange warning triangle. For example:
+
+ :::image type="content" source="media/iot-solution/connection-steps.png" alt-text="Screenshot of the default AD4IOT AutoAlertStatusSync playbook." lightbox="media/iot-solution/connection-steps.png":::
+
+ > [!IMPORTANT]
+ > Make sure to expand each step in the logic app. Invalid connections may be hiding inside other steps.
+
+1. Select **Save**.
+
+#### Add a required role to your subscription
+
+This procedure describes how to add a required role to the Azure subscription where the playbook is installed, and is required only for the following playbooks:
+
+- [AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot)
+- [AD4IoT-CVEAutoWorkflow](#automate-workflows-for-incidents-with-active-cves)
+- [AD4IoT-SendEmailtoIoTOwner](#send-email-to-the-iotot-device-owner)
+- [AD4IoT-AutoTriageIncident](#triage-incidents-involving-highly-important-devices)
+
+Required roles differ per playbook, but the steps remain the same.
+
+**To add a required role to your subscription**:
+
+1. In Microsoft Sentinel, open the playbook from **Automation** > **Active playbooks**.
+
+1. Select a playbook to open it as a Logic app.
+
+1. With the playbook opened as a Logic app, select **Identity > System assigned**, and then in the **Permissions** area, select the **Azure role assignments** button.
+
+1. In the **Azure role assignments** page, select **Add role assignment**.
+
+1. In the **Add role assignment** pane:
+
+ 1. Define the **Scope** as **Subscription**.
+
+ 1. From the dropdown, select the **Subscription** where your playbook is installed.
+
+ 1. From the **Role** dropdown, select one of the following roles, depending on the playbook you're working with:
+
+ |Playbook name |Role |
+ |||
+ |[AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot) |Security Admin |
+ |[AD4IoT-CVEAutoWorkflow](#automate-workflows-for-incidents-with-active-cves) |Reader |
+ |[AD4IoT-SendEmailtoIoTOwner](#send-email-to-the-iotot-device-owner) |Reader |
+ |[AD4IoT-AutoTriageIncident](#triage-incidents-involving-highly-important-devices) |Reader |
+
+1. When you're done, select **Save**.
+
+#### Connect your incidents, relevant analytics rules, and the playbook
+
+This procedure describes how to configure a Microsoft Sentinel analytics rule to automatically run your playbooks based on an incident trigger, and is required for all solution playbooks.
+
+**To add your analytics rule**:
+
+1. In Microsoft Sentinel, go to **Automation** > **Automation rules**.
+
+1. To create a new automation rule, select **Create** > **Automation rule**.
+
+1. In the **Trigger** field, select one of the following triggers, depending on the playbook you're working with:
+
+ - The [AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot) playbook: Select the **When an incident is updated** trigger
+ - All other solution playbooks: Select the **When an incident is created** trigger
+
+1. In the **Conditions** area, select **If > Analytic rule name > Contains**, and then select the specific analytics rules relevant for Defender for IoT in your organization.
+
+ For example:
+
+ :::image type="content" source="media/iot-solution/automate-playbook.png" alt-text="Screenshot of a Defender for IoT alert status sync automation rule." lightbox="media/iot-solution/automate-playbook.png":::
+
+ You may be using out-of-the-box analytics rules, or you may have modified the out-of-the-box content, or created your own. For more information, see [Detect threats out-of-the-box with Defender for IoT data](#detect-threats-out-of-the-box-with-defender-for-iot-data).
+
+1. In the **Actions** area, select **Run playbook** > *playbook name*.
+
+1. Select **Run**.
+
+> [!TIP]
+> You can also manually run a playbook on demand. This can be useful in situations where you want more control over orchestration and response processes. For more information, see [Run a playbook on demand](/azure/sentinel/tutorial-respond-threats-playbook#run-a-playbook-on-demand).
+
+### Automatically close incidents
+
+**Playbook name**: AD4IoT-AutoCloseIncidents
+
+In some cases, maintenance activities generate alerts in Microsoft Sentinel that can distract a SOC team from handling the real problems. This playbook automatically closes incidents created from such alerts during a specified maintenance period, explicitly parsing the IoT device entity fields.
+
+To use this playbook:
+
+- Enter the relevant time period when the maintenance is expected to occur, and the IP addresses of any relevant assets, such as listed in an Excel file.
+- Create a watchlist that includes all the asset IP addresses on which alerts should be handled automatically.
+
+### Send email notifications by production line
+
+**Playbook name**: AD4IoT-MailByProductionLine
+
+This playbook sends mail to notify specific stakeholders about alerts and events that occur in your environment.
+
+For example, when you have specific security teams assigned to specific product lines or geographic locations, you'll want that team to be notified about alerts that are relevant to their responsibilities.
+
+To use this playbook, create a watchlist that maps between the sensor names and the mailing addresses of each of the stakeholders you want to alert.
+
+### Create a new ServiceNow ticket
+
+**Playbook name**: AD4IoT-NewAssetServiceNowTicket
+
+Typically, the entity authorized to program a PLC is the Engineering Workstation. Therefore, attackers might create new Engineering Workstations in order to create malicious PLC programming.
+
+This playbook opens a ticket in ServiceNow each time a new Engineering Workstation is detected, explicitly parsing the IoT device entity fields.
+
+### Update alert statuses in Defender for IoT
+
+**Playbook name**: AD4IoT-AutoAlertStatusSync
+
+This playbook updates alert statuses in Defender for IoT whenever a related alert in Microsoft Sentinel has a **Status** update.
+
+This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match that of the related incident.
+
+### Automate workflows for incidents with active CVEs
+
+**Playbook name**: AD4IoT-CVEAutoWorkflow
+
+This playbook adds active CVEs into the incident comments of affected devices. An automated triage is performed if the CVE is critical, and an email notification is sent to the device owner, as defined on the site level in Defender for IoT.
+
+To add a device owner, edit the site owner on the **Sites and sensors** page in Defender for IoT. For more information, see [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Send email to the IoT/OT device owner
+
+**Playbook name**: AD4IoT-SendEmailtoIoTOwner
+
+This playbook sends an email with the incident details to the device owner as defined on the site level in Defender for IoT, so that they can start investigating, even responding directly from the automated email. Response options include:
+
+- **Yes this is expected**. Select this option to close the incident.
+
+- **No this is NOT expected**. Select this option to keep the incident active, increase the severity, and add a confirmation tag to the incident.
+
+The incident is automatically updated based on the response selected by the device owner.
+
+To add a device owner, edit the site owner on the **Sites and sensors** page in Defender for IoT. For more information, see [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Triage incidents involving highly important devices
+
+**Playbook name**: AD4IoT-AutoTriageIncident
+
+This playbook updates the incident severity according to the importance level of the devices involved.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Visualize data](/azure/sentinel/get-visibility)
+
+> [!div class="nextstepaction"]
+> [Create custom analytics rules](/azure/sentinel/detect-threats-custom)
+
+> [!div class="nextstepaction"]
+> [Investigate incidents](/azure/sentinel/investigate-cases)
+
+> [!div class="nextstepaction"]
+> [Investigate entities](/azure/sentinel/entity-pages)
+
+> [!div class="nextstepaction"]
+> [Use playbooks with automation rules](/azure/sentinel/tutorial-respond-threats-playbook)
+
+For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+
defender-for-iot Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md
+
+ Title: Connect Microsoft Defender for IoT with Microsoft Sentinel
+description: This tutorial describes how to integrate Microsoft Sentinel and Microsoft Defender for IoT with the Microsoft Sentinel data connector to secure your entire OT environment. Detect and respond to OT threats, including multistage attacks that may cross IT and OT boundaries.
+ Last updated : 06/20/2022++
+# Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel
+
+Microsoft Defender for IoT enables you to secure your entire OT and Enterprise IoT environment, whether you need to protect existing devices or build security into new innovations.
+
+Microsoft Sentinel and Microsoft Defender for IoT help to bridge the gap between IT and OT security challenges, and to empower SOC teams with out-of-the-box capabilities to efficiently and effectively detect and respond to OT threats. The integration between Microsoft Defender for IoT and Microsoft Sentinel helps organizations to quickly detect multistage attacks, which often cross IT and OT boundaries.
+
+This connector allows you to stream Microsoft Defender for IoT data into Microsoft Sentinel, so you can view, analyze, and respond to Defender for IoT alerts, and the incidents they generate, in a broader organizational threat context.
+
+The Microsoft Sentinel integration is supported only for OT networks.
+
+In this tutorial, you will learn how to:
+
+> [!div class="checklist"]
+>
+> * Connect Defender for IoT data to Microsoft Sentinel
+> * Use Log Analytics to query Defender for IoT alert data
+
+## Prerequisites
+
+Before you start, make sure that you have the following prerequisites:
+
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](/azure/sentinel/roles).
+
+- **Contributor** or **Owner** permissions on the subscription you want to connect to Microsoft Sentinel.
+
+- A Defender for IoT plan on your Azure subscription with data streaming into Defender for IoT. For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md).
+
+> [!IMPORTANT]
+> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](/azure/sentinel/data-connectors-reference#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
+>
+
+## Connect your data from Defender for IoT to Microsoft Sentinel
+
+Start by enabling the **Defender for IoT** data connector to stream all your Defender for IoT events into Microsoft Sentinel.
+
+**To enable the Defender for IoT data connector**:
+
+1. In Microsoft Sentinel, under **Configuration**, select **Data connectors**, and then locate the **Microsoft Defender for IoT** data connector.
+
+1. At the bottom right, select **Open connector page**.
+
+1. On the **Instructions** tab, under **Configuration**, select **Connect** for each subscription whose alerts and device alerts you want to stream into Microsoft Sentinel.
+
+ If you've made any connection changes, it can take 10 seconds or more for the **Subscription** list to update.
+
+For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](/azure/sentinel/connect-azure-windows-microsoft-services).
+
+## View Defender for IoT alerts
+
+After you've connected a subscription to Microsoft Sentinel, you'll be able to view Defender for IoT alerts in the Microsoft Sentinel **Logs** area.
+
+1. In Microsoft Sentinel, select **Logs > AzureSecurityOfThings > SecurityAlert**, or search for **SecurityAlert**.
+
+1. Use the following sample queries to filter the logs and view alerts generated by Defender for IoT:
+
+ **To see all alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert | where ProductName == "Azure Security Center for IoT"
+ ```
+
+ **To see specific sensor alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where tostring(parse_json(ExtendedProperties).SensorId) == "<sensor_name>"
+ ```
+
+ **To see specific OT engine alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "MALWARE"
+
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "ANOMALY"
+
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "PROTOCOL_VIOLATION"
+
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "POLICY_VIOLATION"
+
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "OPERATIONAL"
+ ```
+
+ **To see high severity alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where AlertSeverity == "High"
+ ```
+
+ **To see specific protocol alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where tostring(parse_json(ExtendedProperties).Protocol) == "<protocol_name>"
+ ```
+
+> [!NOTE]
+> The **Logs** page in Microsoft Sentinel is based on Azure Monitor's Log Analytics.
+>
+> For more information, see [Log queries overview](/azure/azure-monitor/logs/log-query-overview) in the Azure Monitor documentation and the [Write your first KQL query](/training/modules/write-first-query-kusto-query-language/) Learn module.
+>
+
+### Understand alert timestamps
+
+Defender for IoT alerts, in both the Azure portal and on the sensor console, track the time an alert was first detected, last detected, and last changed.
+
+The following table describes the Defender for IoT alert timestamp fields, with a mapping to the relevant fields from Log Analytics shown in Microsoft Sentinel.
+
+|Defender for IoT field |Description | Log Analytics field |
+||||
+|**First detection** |Defines the first time the alert was detected in the network. | `StartTime` |
+|**Last detection** | Defines the last time the alert was detected in the network, and replaces the **Detection time** column.| `EndTime` |
+|**Last activity** | Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication | `TimeGenerated` |
+
+In Defender for IoT on the Azure portal and the sensor console, the **Last detection** column is shown by default. Edit the columns on the **Alerts** page to show the **First detection** and **Last activity** columns as needed.
+
+For more information, see [View alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md) and [View alerts on your sensor](how-to-view-alerts.md).
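+
+To check how these timestamps land in your Log Analytics workspace, you can project the mapped fields directly. This is a minimal sketch, assuming the standard **SecurityAlert** columns described in the table above:
+
+```kusto
+// Show Defender for IoT alert timestamps as stored in Log Analytics
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| project AlertName, StartTime, EndTime, TimeGenerated, AlertSeverity
+| sort by TimeGenerated desc
+```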
+
+### Understand multiple records per alert
+
+Defender for IoT alert data is streamed to Microsoft Sentinel and stored in your Log Analytics workspace, in the **SecurityAlert** table.
+
+Records in the **SecurityAlert** table are created or updated each time an alert is generated or updated in Defender for IoT. Sometimes a single alert will have multiple records, such as when the alert was first created and then again when it was updated.
+
+In Microsoft Sentinel, use the following query to check the records added to the **SecurityAlert** table for a single alert, replacing `<alert_id>` with the Defender for IoT alert ID:
+
+```kusto
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| where VendorOriginalId == "<alert_id>"
+| sort by TimeGenerated desc
+```
+
+The following types of updates generate new records in the **SecurityAlert** table:
+
+- Updates for alert status or severity
+- Updates in the last detection time, such as when the same alert is detected multiple times
+- A new device is added to an existing alert
+- The device properties for an alert are updated
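+
+If you want to work with only the most recent record for each alert, you can collapse the duplicate records with a `summarize` statement. For example, the following sample query keeps the latest record per `VendorOriginalId`:
+
+```kusto
+// Keep only the newest SecurityAlert record for each Defender for IoT alert
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| summarize arg_max(TimeGenerated, *) by VendorOriginalId
+```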
+
+## Next steps
+
+[Install the **Microsoft Defender for IoT** solution](iot-advanced-threat-monitoring.md) in your Microsoft Sentinel workspace.
+
+The **Microsoft Defender for IoT** solution is a set of bundled, out-of-the-box content that's configured specifically for Defender for IoT data, and includes analytics rules, workbooks, and playbooks.
+
+For more information, see:
+
+- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)
+- [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+- [Microsoft Defender for IoT solution](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview)
+- [Microsoft Defender for IoT data connector](/azure/sentinel/data-connectors-reference#microsoft-defender-for-iot)
+
defender-for-iot Manage Users On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-on-premises-management-console.md
+
+ Title: Create and manage users on an on-premises management console - Microsoft Defender for IoT
+description: Create and manage users on a Microsoft Defender for IoT on-premises management console.
Last updated : 09/11/2022+++
+# Create and manage users on an on-premises management console
+
+Microsoft Defender for IoT provides tools for managing on-premises user access in the [OT network sensor](manage-users-sensor.md), and the on-premises management console. Azure users are managed [at the Azure subscription level](manage-users-overview.md) using Azure RBAC.
+
+This article describes how to manage on-premises users directly on an on-premises management console.
+
+## Default privileged users
+
+By default, each on-premises management console is installed with the privileged *cyberx* and *support* users, which have access to advanced tools for troubleshooting and setup.
+
+When setting up an on-premises management console for the first time, sign in with one of these privileged users, create an initial user with an **Admin** role, and then create extra users for security analysts and read-only users.
+
+For more information, see [Install OT monitoring software](how-to-install-software.md#install-ot-monitoring-software) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+## Add new on-premises management console users
+
+This procedure describes how to create new users for an on-premises management console.
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users, and any user with the **Admin** role.
+
+**To add a user**:
+
+1. Sign in to the on-premises management console and select **Users** > **+ Add user**.
+
+1. Select **Create user** and then define the following values:
+
+ |Name |Description |
+ |||
+ |**Username** | Enter a username. |
+ |**Email** | Enter the user's email address. |
+ |**First Name** | Enter the user's first name. |
+ |**Last Name** | Enter the user's last name. |
+ |**Role** | Select a user role. For more information, see [On-premises user roles](roles-on-premises.md#on-premises-user-roles). |
+ |**Remote Sites Access Group** | Available for the on-premises management console only. <br><br> Select either **All** to assign the user to all global access groups, or **Specific** to assign them to a specific group only, and then select the group from the drop-down list. <br><br>For more information, see [Define global access permission for on-premises users](#define-global-access-permission-for-on-premises-users). |
+    |**Password** | Select the user type, either **Local** or **Active Directory User**. <br><br>For local users, enter a password for the user. Password requirements include: <br>- At least eight characters<br>- Both lowercase and uppercase alphabetic characters<br>- At least one number<br>- At least one symbol|
+
+ > [!TIP]
+ > Integrating with Active Directory lets you associate groups of users with specific permission levels. If you want to create users using Active Directory, first configure [Active Directory on the on-premises management console](#integrate-users-with-active-directory) and then return to this procedure.
+ >
+
+1. Select **Save** when you're done.
+
+Your new user is added and is listed on the on-premises management console **Users** page.
+
+**To edit a user**, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: button for the user you want to edit, and change any values as needed.
+
+**To delete a user**, select the **Delete** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-delete.png" border="false"::: button for the user you want to delete.
+
+### Change a user's password
+
+This procedure describes how **Admin** users can change local user passwords. **Admin** users can change passwords for themselves or for other **Security Analyst** or **Read Only** users. [Privileged users](#default-privileged-users) can change their own passwords, and the passwords for **Admin** users.
+
+> [!TIP]
+> If you need to recover access to a privileged user account, see [Recover privileged access to an on-premises management console](#recover-privileged-access-to-an-on-premises-management-console).
+
+**Prerequisites**: This procedure is available only for the *cyberx* or *support* users, or for users with the **Admin** role.
+
+**To reset a user's password on the on-premises management console**:
+
+1. Sign into the on-premises management console and select **Users**.
+
+1. On the **Users** page, locate the user whose password needs to be changed.
+
+1. At the right of that user row, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: button.
+
+1. In the **Edit user** pane that appears, scroll down to the **Change password** section. Enter and confirm the new password.
+
+ Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers, and one of the following symbols: **#%*+,-./:=?@[]^_{}~**
+
+1. Select **Update** when you're done.
+
+### Recover privileged access to an on-premises management console
+
+This procedure describes how to recover either the *cyberx* or *support* user password on an on-premises management console. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users only.
+
+**To recover privileged access to an on-premises management console**:
+
+1. Start signing in to your on-premises management console. On the sign-in screen, under the **Username** and **Password** fields, select **Password recovery**.
+
+1. In the **Password Recovery** dialog, select either **CyberX** or **Support** from the drop-down menu, and copy the unique identifier code that's displayed to the clipboard.
+
+1. Go to the Defender for IoT **Sites and sensors** page in the Azure portal. You may want to open the Azure portal in a new browser tab or window, keeping your on-premises management console open.
+
+ In your Azure portal settings > **Directories + subscriptions**, make sure that you've selected the subscription where your sensors were onboarded to Defender for IoT.
+
+1. In the **Sites and sensors** page, select the **More Actions** drop down menu > **Recover on-premises management console password**.
+
+ :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text="Screenshot of the recover on-premises management console password option.":::
+
+1. In the **Recover** dialog that opens, enter the unique identifier that you've copied to the clipboard from your on-premises management console and select **Recover**. A **password_recovery.zip** file is automatically downloaded.
+
+ [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
+
+1. Back on the on-premises management console tab, in the **Password recovery** dialog, select **Upload**. Browse to and upload the **password_recovery.zip** file you downloaded from the Azure portal.
+
+ > [!NOTE]
+ > If an error message appears, indicating that the file is invalid, you may have had an incorrect subscription selected in your Azure portal settings.
+ >
+ > Return to Azure, and select the settings icon in the top toolbar. On the **Directories + subscriptions** page, make sure that you've selected the subscription where your sensors were onboarded to Defender for IoT. Then repeat the steps in Azure to download the **password_recovery.zip** file and upload it on the on-premises management console again.
+
+1. Select **Next**. A system-generated password for your on-premises management console appears for you to use for the selected user. Make sure to write the password down as it won't be shown again.
+
+1. Select **Next** again to sign into your on-premises management console.
+
+## Integrate users with Active Directory
+
+Configure an integration between your on-premises management console and Active Directory to:
+
+- Allow Active Directory users to sign in to your on-premises management console
+- Use Active Directory groups, with collective permissions assigned to all users in the group
+
+For example, use Active Directory when you have a large number of users that you want to assign Read Only access to, and you want to manage those permissions at the group level.
+
+For more information, see [Active Directory support on sensors and on-premises management consoles](manage-users-overview.md#active-directory-support-on-sensors-and-on-premises-management-consoles).
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users, and any user with the **Admin** role.
+
+**To integrate with Active Directory**:
+
+1. Sign in to your on-premises management console and select **System Settings**.
+
+1. Scroll down to the **Management console integrations** area on the right, and then select **Active Directory**.
+
+1. Select the **Active Directory Integration Enabled** option and enter the following values for an Active Directory server:
+
+ |Field |Description |
+ |||
+ |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
+ |**Domain Controller Port** | The port on which your LDAP is configured. |
+ |**Primary Domain** | The domain name, such as `subdomain.domain.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
+    |**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br>When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. Then, make sure to use these groups when creating new on-premises management console users from Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**.<br><br> Add groups as **Trusted endpoints** in a separate row from the other Active Directory groups. To add a trusted domain, add the domain name and the connection type of that trusted domain. You can configure trusted endpoints only for users that are already defined on the **Users** page.|
+
+    Select **+ Add Server** to add another server and enter its values as needed, and then select **Save** when you're done.
+
+ > [!IMPORTANT]
+ > When entering LDAP parameters:
+ >
+    > - Define values exactly as they appear in Active Directory, except for the case.
+    > - Use lowercase characters only, even if the configuration in Active Directory uses uppercase.
+ > - LDAP and LDAPS can't be configured for the same domain. However, you can configure each in different domains and then use them at the same time.
+ >
+
+1. Create access group rules for on-premises management console users.
+
+ If you configure Active Directory groups for on-premises management console users, you must also create an access group rule for each Active Directory group. Active Directory credentials won't work for on-premises management console users without a corresponding access group rule.
+
+ For more information, see [Define global access permission for on-premises users](#define-global-access-permission-for-on-premises-users).
++
+## Define global access permission for on-premises users
+
+Large organizations often have a complex user permissions model based on global organizational structures. To manage your on-premises Defender for IoT users, we recommend that you use a global business topology that's based on business units, regions, and sites, and then define user access permissions around those entities.
+
+Create *user access groups* to establish global access control across Defender for IoT on-premises resources. Each access group includes rules about the users that can access specific entities in your business topology, including business units, regions, and sites.
+
+For more information, see [On-premises global access groups](manage-users-overview.md#on-premises-global-access-groups).
+
+**Prerequisites**:
+
+This procedure is available for the *cyberx* and *support* users, and any user with the **Admin** role.
+
+Before you create access groups, we also recommend that you:
+
+- Plan which users are associated with the access groups that you create. Two options are available for assigning users to access groups:
+
+    - **Assign Active Directory groups**: Verify that you [set up an Active Directory instance](#integrate-users-with-active-directory) to integrate with the on-premises management console.
+
+ - **Assign local users**: Verify that you've [created local users](#create-and-manage-users-on-an-on-premises-management-console).
+
+ Users with **Admin** roles have access to all business topology entities by default, and can't be assigned to access groups.
+
+- Carefully set up your business topology. For a rule to be successfully applied, you must assign sensors to zones in the **Site Management** window. For more information, see:
+
+ - [Work with site map views](how-to-gain-insight-into-global-regional-and-local-threats.md#work-with-site-map-views)
+ - [Create enterprise zones](how-to-activate-and-set-up-your-on-premises-management-console.md#create-enterprise-zones)
+ - [Assign sensors to zones](how-to-activate-and-set-up-your-on-premises-management-console.md#assign-sensors-to-zones)
+
+**To create access groups**:
+
+1. Sign in to the on-premises management console as a user with an **Admin** role.
+
+1. Select **Access Groups** from the left navigation menu, and then select **Add** :::image type="icon" source="media/how-to-define-global-user-access-control/add-icon.png" border="false":::.
+
+1. In the **Add Access Group** dialog box, enter a meaningful name for the access group, with a maximum of 64 characters.
+
+1. Select **ADD RULE**, and then select the business topology options that you want to include in the access group. The options that appear in the **Add Rule** dialog are the entities that you'd created in the **Enterprise View** and **Site Management** pages. For example:
+
+ :::image type="content" source="media/how-to-define-global-user-access-control/add-rule.png" alt-text="Screenshot of the Add Rule dialog box." lightbox="media/how-to-define-global-user-access-control/add-rule.png":::
+
+    If they don't already exist, default global business units and regions are created for the first group you create. If you don't select any business units or regions, users in the access group will have access to all business topology entities.
+
+    Each rule can include only one element per type. For example, you can assign one business unit, one region, and one site for each rule. If you want the same users to have access to multiple business units in different regions, create more rules for the group. When an access group contains several rules, the rule logic aggregates all rules using AND logic.
+
+ Any rules you create are listed in the **Add Access Group** dialog box, where you can edit them further or delete them as needed. For example:
+
+ :::image type="content" source="media/how-to-define-global-user-access-control/edit-access-groups.png" alt-text="Screenshot of the Add Access Group dialog box." lightbox="media/how-to-define-global-user-access-control/edit-access-groups.png":::
+
+1. Add users with one or both of the following methods:
+
+ - If the **Assign an Active Directory Group** option appears, assign an Active Directory group of users to this access group as needed. For example:
+
+ :::image type="content" source="media/how-to-define-global-user-access-control/add-access-group.png" alt-text="Screenshot of adding an Active Directory group to a Global Access Group." lightbox="media/how-to-define-global-user-access-control/add-access-group.png":::
+
+ If the option doesn't appear, and you want to include Active Directory groups in access groups, make sure that you've included your Active Directory group in your Active Directory integration. For more information, see [Integrate on-premises users with Active Directory](#integrate-users-with-active-directory).
+
+ - Add local users to your groups by editing existing users from the **Users** page. On the **Users** page, select the **Edit** button for the user you want to assign to the group, and then update the **Remote Sites Access Group** value for the selected user. For more information, see [Add new on-premises management console users](#add-new-on-premises-management-console-users).
++
+### Changes to topology entities
+
+If you later modify a topology entity and the change affects the rule logic, the rule is automatically deleted.
+
+If modifications to topology entities affect rule logic so that all rules are deleted, the access group remains but users won't be able to sign in to the on-premises management console. Instead, users are notified to contact their on-premises management console administrator for help signing in. [Update the settings](#add-new-on-premises-management-console-users) for these users so that they're no longer part of the legacy access group.
+
+## Control user session timeouts
+
+By default, on-premises users are signed out of their sessions after 30 minutes of inactivity. Admin users can use the local CLI to either turn this feature on or off, or to adjust the inactivity thresholds.
+For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+
+> [!NOTE]
+> Any changes made to user session timeouts are reset to defaults when you [update the OT monitoring software](update-ot-software.md).
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users only.
+
+**To control user session timeouts on the on-premises management console**:
+
+1. Sign in to your on-premises management console via a terminal and run:
+
+ ```cli
+ sudo nano /var/cyberx/properties/authentication.properties
+ ```
+
+ The following output appears:
+
+ ```cli
+ infinity_session_expiration = true
+ session_expiration_default_seconds = 0
+ # half an hour in seconds
+ session_expiration_admin_seconds = 1800
+ session_expiration_security_analyst_seconds = 1800
+ session_expiration_read_only_users_seconds = 1800
+ certifcate_validation = true
+ CRL_timeout_secounds = 3
+ CRL_retries = 1
+
+ ```
+
+1. Do one of the following:
+
+    - **To turn off user session timeouts entirely**, change `infinity_session_expiration = true` to `infinity_session_expiration = false`. To turn the timeouts back on, change the value back to `true`.
+
+ - **To adjust an inactivity timeout period**, adjust one of the following values to the required time, in seconds:
+
+ - `session_expiration_default_seconds` for all users
+ - `session_expiration_admin_seconds` for *Admin* users only
+ - `session_expiration_security_analyst_seconds` for *Security Analyst* users only
+ - `session_expiration_read_only_users_seconds` for *Read Only* users only
+
+## Next steps
+
+For more information, see:
+
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Audit user activity](track-user-activity.md)
defender-for-iot Manage Users Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-overview.md
+
+ Title: User management for Microsoft Defender for IoT
+description: Learn about the different options for user and user role management for Microsoft Defender for IoT.
Last updated : 11/13/2022+++
+# Microsoft Defender for IoT user management
+
+Microsoft Defender for IoT provides tools both in the Azure portal and on-premises for managing user access across Defender for IoT resources.
+
+## Azure users for Defender for IoT
+
+In the Azure portal, users are managed at the subscription level with [Azure Active Directory (AAD)](/azure/active-directory/) and [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). Azure subscription users can have one or more user roles, which determine the data and actions they can access from the Azure portal, including in Defender for IoT.
+
+Use the [portal](/azure/role-based-access-control/quickstart-assign-role-user-portal) or [PowerShell](/azure/role-based-access-control/tutorial-role-assignments-group-powershell) to assign your Azure subscription users the specific roles they'll need to view data and take action, such as viewing alert or device data, or managing pricing plans and sensors.
+
+For more information, see [Azure user roles for OT and Enterprise IoT monitoring](roles-azure.md).
+
+## On-premises users for Defender for IoT
+
+When working with OT networks, Defender for IoT services and data are also available from on-premises OT network sensors and the on-premises management console, in addition to the Azure portal.
+
+You'll need to define on-premises users on both your OT network sensors and the on-premises management console, in addition to Azure. Both the OT sensors and the on-premises management console are installed with a set of default, privileged users, which you can use to define additional administrators and other users.
+
+Sign into the OT sensors to [define sensor users](manage-users-sensor.md), and sign into the on-premises management console to [define on-premises management console users](manage-users-on-premises-management-console.md).
+
+For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+### Active Directory support on sensors and on-premises management consoles
+
+You might want to configure an integration between your sensor and Active Directory to allow Active Directory users to sign in to your sensor, or to use Active Directory groups, with collective permissions assigned to all users in the group.
+
+For example, use Active Directory when you have a large number of users that you want to assign **Read Only** access to, and you want to manage those permissions at the group level.
+
+Defender for IoT's integration with Active Directory supports LDAP v3 and the following types of LDAP-based authentication:
+
+- **Full authentication**: User details are retrieved from the LDAP server. Examples are the first name, last name, email, and user permissions.
+
+- **Trusted user**: Only the user password is retrieved from the LDAP server. Other user details are based on the users defined on the sensor.
+
+For more information, see:
+
+- [Integrate OT sensor users with Active Directory](manage-users-sensor.md#integrate-ot-sensor-users-with-active-directory)
+- [Integrate on-premises management console users with Active Directory](manage-users-on-premises-management-console.md#integrate-users-with-active-directory)
+- [Other firewall rules for external services (optional)](how-to-set-up-your-network.md#other-firewall-rules-for-external-services-optional).
++
+### On-premises global access groups
+
+Large organizations often have a complex user permissions model based on global organizational structures. To manage your on-premises Defender for IoT users, use a global business topology that's based on business units, regions, and sites, and then define user access permissions around those entities.
+
+Create user access groups to establish global access control across Defender for IoT on-premises resources. Each access group includes rules about the users that can access specific entities in your business topology, including business units, regions, and sites.
+
+For example, the following diagram shows how you can allow security analysts from an Active Directory group to access all West European automotive and glass production lines, along with a plastics line in one region:
++
+For more information, see [Define global access permission for on-premises users](manage-users-on-premises-management-console.md#define-global-access-permission-for-on-premises-users).
+
+> [!TIP]
+> Access groups and rules help to implement zero-trust strategies by controlling where users manage and analyze devices on Defender for IoT sensors and the on-premises management console. For more information, see [Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md).
+>
+
+## Next steps
+
+- [Manage Azure subscription users](/azure/role-based-access-control/quickstart-assign-role-user-portal)
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+
+For more information, see:
+
+- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
+
+ Title: Create and manage users on an OT network sensor - Microsoft Defender for IoT
+description: Create and manage on-premises users on a Microsoft Defender for IoT OT network sensor.
Last updated : 09/28/2022+++
+# Create and manage users on an OT network sensor
+
+Microsoft Defender for IoT provides tools for managing on-premises user access in the [OT network sensor](manage-users-sensor.md), and the on-premises management console. Azure users are managed [at the Azure subscription level](manage-users-overview.md) using Azure RBAC.
+
+This article describes how to manage on-premises users directly on an OT network sensor.
+
+## Default privileged users
+
+By default, each OT network sensor is installed with the privileged *cyberx*, *support*, and *cyberx_host* users, which have access to advanced tools for troubleshooting and setup.
+
+When setting up a sensor for the first time, sign in with one of these privileged users, create an initial user with an **Admin** role, and then create extra users for security analysts and read-only users.
+
+For more information, see [Install OT monitoring software](how-to-install-software.md#install-ot-monitoring-software) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+## Add new OT sensor users
+
+This procedure describes how to create new users for a specific OT network sensor.
+
+**Prerequisites**: This procedure is available for the *cyberx*, *support*, and *cyberx_host* users, and any user with the **Admin** role.
+
+**To add a user**:
+
+1. Sign in to the sensor console and select **Users** > **+ Add user**.
+
+1. On the **Create a user | Users** page, enter the following details:
+
+ |Name |Description |
+ |||
+ |**User name** | Enter a meaningful username for the user. |
+ |**Email** | Enter the user's email address. |
+ |**First Name** | Enter the user's first name. |
+ |**Last Name** | Enter the user's last name. |
+ |**Role** | Select one of the following user roles: **Admin**, **Security Analyst**, or **Read Only**. For more information, see [On-premises user roles](roles-on-premises.md#on-premises-user-roles). |
+    |**Password** | Select the user type, either **Local** or **Active Directory User**. <br><br>For local users, enter a password for the user. Password requirements include: <br>- At least eight characters<br>- Both lowercase and uppercase alphabetic characters<br>- At least one number<br>- At least one symbol<br><br>Local user passwords can only be modified by **Admin** users.|
+
+ > [!TIP]
+ > Integrating with Active Directory lets you associate groups of users with specific permission levels. If you want to create users using Active Directory, first configure [Active Directory on the sensor](manage-users-sensor.md#integrate-ot-sensor-users-with-active-directory) and then return to this procedure.
+ >
+
+1. Select **Save** when you're done.
+
+Your new user is added and is listed on the sensor **Users** page.
+
+To edit a user, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: icon for the user you want to edit, and change any values as needed.
+
+To delete a user, select the **Delete** button for the user you want to delete.
+
+## Integrate OT sensor users with Active Directory
+
+Configure an integration between your sensor and Active Directory to:
+
+- Allow Active Directory users to sign in to your sensor
+- Use Active Directory groups, with collective permissions assigned to all users in the group
+
+For example, use Active Directory when you have a large number of users that you want to assign Read Only access to, and you want to manage those permissions at the group level.
+
+For more information, see [Active Directory support on sensors and on-premises management consoles](manage-users-overview.md#active-directory-support-on-sensors-and-on-premises-management-consoles).
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users, and any user with the **Admin** role.
+
+**To integrate with Active Directory**:
+
+1. Sign in to your OT sensor and select **System Settings** > **Integrations** > **Active Directory**.
+
+1. Toggle on the **Active Directory Integration Enabled** option.
+
+1. Enter the following values for your Active Directory server:
+
+ |Name |Description |
+ |||
+ |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
+ |**Domain Controller Port** | The port on which your LDAP is configured. |
+ |**Primary Domain** | The domain name, such as `subdomain.domain.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
+ |**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br> When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. You'll use these group names when [adding new sensor users](#add-new-ot-sensor-users) with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. |
++
+ > [!IMPORTANT]
+ > When entering LDAP parameters:
+ >
+ > - Define values exactly as they appear in Active Directory, except for the case.
+    > - Use lowercase characters only, even if the configuration in Active Directory uses uppercase.
+ > - LDAP and LDAPS can't be configured for the same domain. However, you can configure each in different domains and then use them at the same time.
+ >
+
+1. To add another Active Directory server, select **+ Add Server** at the top of the page and define those server values.
+
+1. When you've added all your Active Directory servers, select **Save**.
++
+## Change a sensor user's password
+
+This procedure describes how **Admin** users can change local user passwords. **Admin** users can change passwords for themselves or for other **Security Analyst** or **Read Only** users. [Privileged users](#default-privileged-users) can change their own passwords, and the passwords for **Admin** users.
+
+> [!TIP]
+> If you need to recover access to a privileged user account, see [Recover privileged access to a sensor](#recover-privileged-access-to-a-sensor).
+
+**Prerequisites**: This procedure is available only for the *cyberx*, *support*, or *cyberx_host* users, or for users with the **Admin** role.
+
+**To change a user's password on a sensor**:
+
+1. Sign into the sensor and select **Users**.
+
+1. On the sensor's **Users** page, locate the user whose password needs to be changed.
+
+1. At the right of that user row, select the options (**...**) menu > **Edit** to open the user pane.
+
+1. In the user pane on the right, in the **Change password** area, enter and confirm the new password. If you're changing your own password, you'll also need to enter your current password.
+
+ Password requirements include:
+
+ - At least eight characters
+ - Both lowercase and uppercase alphabetic characters
+    - At least one number
+ - At least one symbol
+
+1. Select **Save** when you're done.
+
+## Recover privileged access to a sensor
+
+This procedure describes how to recover privileged access to a sensor, for the *cyberx*, *support*, or *cyberx_host* users. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+**Prerequisites**: This procedure is available only for the *cyberx*, *support*, or *cyberx_host* users.
+
+**To recover privileged access to a sensor**:
+
+1. Start signing in to the OT network sensor. On the sign-in screen, select the **Reset** link. For example:
+
+ :::image type="content" source="media/manage-users-sensor/reset-privileged-password.png" alt-text="Screenshot of the sensor sign-in screen with the Reset password link.":::
+
+1. In the **Reset password** dialog, from the **Choose user** menu, select the user whose password you're recovering: **Cyberx**, **Support**, or **CyberX_host**.
+
+1. Copy the unique identifier code that's shown in the **Reset password identifier** to the clipboard. For example:
+
+ :::image type="content" source="media/manage-users-sensor/password-recovery-sensor.png" alt-text="Screenshot of the Reset password dialog on the OT sensor.":::
+
+1. Go to the Defender for IoT **Sites and sensors** page in the Azure portal. You may want to open the Azure portal in a new browser tab or window, keeping your sensor tab open.
+
+ In your Azure portal settings > **Directories + subscriptions**, make sure that you've selected the subscription where your sensor was onboarded to Defender for IoT.
+
+1. On the **Sites and sensors** page, locate the sensor that you're working with, and select the options menu (**...**) on the right > **Recover my password**. For example:
+
+ :::image type="content" source="media/manage-users-sensor/recover-my-password.png" alt-text="Screenshot of the Recover my password option on the Sites and sensors page." lightbox="media/manage-users-sensor/recover-my-password.png":::
+
+1. In the **Recover** dialog that opens, enter the unique identifier that you've copied to the clipboard from your sensor and select **Recover**. A **password_recovery.zip** file is automatically downloaded.
+
+ [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
+
+1. Back on the sensor tab, on the **Password recovery** screen, select **Select file**. Navigate to and upload the **password_recovery.zip** file you'd downloaded earlier from the Azure portal.
+
+ > [!NOTE]
+ > If an error message appears, indicating that the file is invalid, you may have had an incorrect subscription selected in your Azure portal settings.
+ >
+ > Return to Azure, and select the settings icon in the top toolbar. On the **Directories + subscriptions** page, make sure that you've selected the subscription where your sensor was onboarded to Defender for IoT. Then repeat the steps in Azure to download the **password_recovery.zip** file and upload it on the sensor again.
+
+1. Select **Next**. A system-generated password for your sensor appears for you to use for the selected user. Make sure to write the password down as it won't be shown again.
+
+1. Select **Next** again to sign into your sensor with the new password.
+
+## Control user session timeouts
+
+By default, on-premises users are signed out of their sessions after 30 minutes of inactivity. Admin users can use the local CLI to either turn this feature on or off, or to adjust the inactivity thresholds.
+For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+
+> [!NOTE]
+> Any changes made to user session timeouts are reset to defaults when you [update the OT monitoring software](update-ot-software.md).
+
+**Prerequisites**: This procedure is available for the *cyberx*, *support*, and *cyberx_host* users only.
+
+**To control sensor user session timeouts**:
+
+1. Sign in to your sensor via a terminal and run:
+
+ ```cli
+ sudo nano /var/cyberx/properties/authentication.properties
+ ```
+
+ The following output appears:
+
+ ```cli
+ infinity_session_expiration=true
+ session_expiration_default_seconds=0
+ session_expiration_admin_seconds=1800
+ session_expiration_security_analyst_seconds=1800
+ session_expiration_read_only_users_seconds=1800
+ certifcate_validation=false
+ crl_timeout_secounds=3
+ crl_retries=1
+ cm_auth_token=
+
+ ```
+
+1. Do one of the following:
+
+    - **To turn off user session timeouts entirely**, change `infinity_session_expiration=true` to `infinity_session_expiration=false`. To turn the timeouts back on, change the value back to `true`.
+
+ - **To adjust an inactivity timeout period**, adjust one of the following values to the required time, in seconds:
+
+ - `session_expiration_default_seconds` for all users
+ - `session_expiration_admin_seconds` for *Admin* users only
+ - `session_expiration_security_analyst_seconds` for *Security Analyst* users only
+ - `session_expiration_read_only_users_seconds` for *Read Only* users only
+
+## Next steps
+
+For more information, see:
+
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Audit user activity](track-user-activity.md)
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
This article describes CLI commands for sensors and on-premises management consoles. The commands are accessible to the following users: -- CyberX -- Support-- cyberx_host
+- `cyberx`
+- `support`
+- `cyberx_host`
+
+For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+To start working in the CLI, connect using a terminal application, such as PuTTY, and sign in with one of the privileged users.
-To start working in the CLI, connect using a terminal. For example, terminal name `Putty`, and `Support` user.
## Create local alert exclusion rules
defender-for-iot Release Notes Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-sentinel.md
Title: Release notes for the Microsoft Defender for IoT solution in Microsoft Sentinel
+ Title: Microsoft Defender for IoT solution versions in Microsoft Sentinel
description: Learn about the updates available in each version of the Microsoft Defender for IoT solution, available from the Microsoft Sentinel content hub. Last updated 09/22/2022
-# Release notes for the Microsoft Defender for IoT solution in Microsoft Sentinel
+# Microsoft Defender for IoT solution versions in Microsoft Sentinel
This article lists the updates to out-of-the-box security content available from each version of the **Microsoft Defender for IoT** solution. The **Microsoft Defender for IoT** solution is available from the Microsoft Sentinel content hub.
For more information about earlier versions of the **Microsoft Defender for IoT*
## Next steps
-Learn more in [What's new in Microsoft Defender for IoT?](whats-new.md) and the [Microsoft Sentinel documentation](../../sentinel/index.yml).
+Learn more in [What's new in Microsoft Defender for IoT?](whats-new.md) and the [Microsoft Sentinel documentation](../../sentinel/index.yml).
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
+
+ Title: Azure user roles and permissions for Microsoft Defender for IoT
+description: Learn about the Azure user roles and permissions available for OT and Enterprise IoT monitoring with Microsoft Defender for IoT on the Azure portal.
Last updated : 09/19/2022+++
+# Azure user roles and permissions for Defender for IoT
+
+Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](/azure/role-based-access-control/) to provide access to Enterprise IoT monitoring services and data on the Azure portal.
+
+The built-in Azure [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), and [Owner](../../role-based-access-control/built-in-roles.md#owner) roles are relevant for use in Defender for IoT.
+
+This article provides a reference of Defender for IoT actions available for each role in the Azure portal. For more information, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
+
+## Roles and permissions reference
+
+Roles for management actions are applied to user roles across an entire Azure subscription.
+
+| Action and scope|[Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) |[Security Admin](../../role-based-access-control/built-in-roles.md#security-admin) |[Contributor](../../role-based-access-control/built-in-roles.md#contributor) | [Owner](../../role-based-access-control/built-in-roles.md#owner) |
+||||||
+| **Grant permissions to others** | - | - | - | ✔ |
+| **Onboard OT or Enterprise IoT sensors** [*](#enterprise-iot-security) | - | ✔ | ✔ | ✔ |
+| **Download OT sensor and on-premises management console software** | ✔ | ✔ | ✔ | ✔ |
+| **Download sensor activation files** | - | ✔ | ✔ | ✔ |
+| **View values on the Pricing page** [*](#enterprise-iot-security) | ✔ | ✔ | ✔ | ✔ |
+| **Modify values on the Pricing page** [*](#enterprise-iot-security) | - | ✔ | ✔ | ✔ |
+| **View values on the Sites and sensors page** [*](#enterprise-iot-security) | ✔ | ✔ | ✔ | ✔ |
+| **Modify values on the Sites and sensors page** [*](#enterprise-iot-security) | - | ✔ | ✔ | ✔ |
+| **Recover on-premises management console passwords** | - | ✔ | ✔ | ✔ |
+| **Download OT threat intelligence packages** | ✔ | ✔ | ✔ | ✔ |
+| **Push OT threat intelligence updates** | - | ✔ | ✔ | ✔ |
+| **Onboard an Enterprise IoT plan from Microsoft 365 Defender** [*](#enterprise-iot-security) | - | ✔ | - | - |
+| **View Azure alerts** | ✔ | ✔ | ✔ | ✔ |
+| **Modify Azure alerts (write access)** | - | ✔ | ✔ | ✔ |
+| **View Azure device inventory** | ✔ | ✔ | ✔ | ✔ |
+| **Manage Azure device inventory (write access)** | - | ✔ | ✔ | ✔ |
+| **View Azure workbooks** | ✔ | ✔ | ✔ | ✔ |
+| **Manage Azure workbooks (write access)** | - | ✔ | ✔ | ✔ |
+
+## Enterprise IoT security
+
+You add, edit, or cancel an Enterprise IoT plan with [Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) from Microsoft 365 Defender. Alerts, vulnerabilities, and recommendations for Enterprise IoT networks are also available only from Microsoft 365 Defender.
+
+In addition to the permissions listed above, Enterprise IoT security with Defender for IoT has the following requirements:
+
+- **To add an Enterprise IoT plan**, you'll need an E5 license and specific permissions in your Microsoft 365 Defender tenant.
+- **To view Enterprise IoT devices in your Azure device inventory**, you'll need an Enterprise IoT network sensor registered.
+
+For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md).
+
+## Next steps
+
+For more information, see:
+
+- [Microsoft Defender for IoT user management](manage-users-overview.md)
+- [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md)
defender-for-iot Roles On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-on-premises.md
+
+ Title: On-premises users and roles for Defender for IoT - Microsoft Defender for IoT
+description: Learn about the on-premises user roles available for OT monitoring with Microsoft Defender for IoT network sensors and on-premises management consoles.
Last updated : 09/19/2022+++
+# On-premises users and roles for OT monitoring with Defender for IoT
+
+When working with OT networks, Defender for IoT services and data are available from on-premises OT network sensors and on-premises management consoles, in addition to Azure.
+
+This article provides:
+
+- A description of the default, privileged users that come with Defender for IoT software installation
+- A reference of the actions available for each on-premises user role, on both OT network sensors and the on-premises management console
+
+## Default privileged on-premises users
+
+By default, each sensor and on-premises management console is [installed](how-to-install-software.md#install-ot-monitoring-software) with the *cyberx* and *support* privileged users. OT sensors are also installed with the *cyberx_host* privileged user.
+
+Privileged users have access to advanced tools for troubleshooting and setup, such as the CLI. When setting up your sensor or on-premises management console for the first time, sign in with one of the privileged users. Then create an initial user with an **Admin** role, and use that admin user to create other users with other roles.
+
+The following table describes each default privileged user in detail:
+
+|Username |Connects to |Permissions |
+||||
+|**cyberx** | The sensor or on-premises management console's `sensor_app` container | Serves as a root user within the main application. <br><br>Used for troubleshooting with advanced root access.<br><br>Can access the container filesystem, commands, and dedicated CLI commands for controlling OT monitoring. <br><br>Can recover or change passwords for users with any roles. |
+|**support** | The sensor or on-premises management console's `sensor_app` container | Serves as a locked-down user shell for dedicated CLI tools.<br><br>Has no filesystem access.<br><br>Can access only dedicated CLI commands for controlling OT monitoring. <br><br>Can recover or change passwords for the *support* user, and any user with the **Admin**, **Security Analyst**, and **Read-only** roles. |
+|**cyberx_host** | The on-premises management console's host OS | Serves as a root user in the on-premises management console's host OS.<br><br>Used for support scenarios with containers and filesystem access. |
+
+## On-premises user roles
+
+The following roles are available on OT network sensors and on-premises management consoles:
+
+|Role |Description |
+|||
+|**Admin** | Admin users have access to all tools, including system configurations, creating and managing users, and more. |
+|**Security Analyst** | Security Analysts don't have admin-level permissions for configurations, but can perform actions on devices, acknowledge alerts, and use investigation tools. <br><br>Security Analysts can access options on the sensor displayed in the **Discover** and **Analyze** menus on the sensor, and in the **NAVIGATION** and **ANALYSIS** menus on the on-premises management console. |
+|**Read Only** | Read-only users perform tasks such as viewing alerts and devices on the device map. <br><br>Read Only users can access options displayed in the **Discover** and **Analyze** menus on the sensor, in read-only mode, and in the **NAVIGATION** menu on the on-premises management console. |
+
+When first deploying an OT monitoring system, sign in to your sensors and on-premises management console with one of the [default, privileged users](#default-privileged-on-premises-users) described above. Create your first **Admin** user, and then use that user to create other users and assign them to roles.
+
+Permissions applied to each role differ between the sensor and the on-premises management console. For more information, see the tables below for the permissions available for each role, on the [sensor](#role-based-permissions-for-ot-network-sensors) and on the [on-premises management console](#role-based-permissions-for-the-on-premises-management-console).
+
+## Role-based permissions for OT network sensors
+
+| Permission | Read Only | Security Analyst | Admin |
+|--|--|--|--|
+| **View the dashboard** | ✔ | ✔ | ✔ |
+| **Control map zoom views** | - | - | ✔ |
+| **View alerts** | ✔ | ✔ | ✔ |
+| **Manage alerts**: acknowledge, learn, and pin | - | ✔ | ✔ |
+| **View events in a timeline** | - | ✔ | ✔ |
+| **Authorize devices**, known scanning devices, programming devices | - | ✔ | ✔ |
+| **Merge and delete devices** | - | - | ✔ |
+| **View investigation data** | ✔ | ✔ | ✔ |
+| **Manage system settings** | - | - | ✔ |
+| **Manage users** | - | - | ✔ |
+| **Change passwords** | - | - | ✔[*](#pw-sensor) |
+| **DNS servers for reverse lookup** | - | - | ✔ |
+| **Send alert data to partners** | - | ✔ | ✔ |
+| **Create alert comments** | - | ✔ | ✔ |
+| **View programming change history** | ✔ | ✔ | ✔ |
+| **Create customized alert rules** | - | ✔ | ✔ |
+| **Manage multiple notifications simultaneously** | - | ✔ | ✔ |
+| **Manage certificates** | - | - | ✔ |
+
+> [!NOTE]
+> <a name="pw-sensor"></a>**Admin** users can only change passwords for other users with the **Security Analyst** and **Read-only** roles. To change the password of an **Admin** user, sign in to your sensor as [a privileged user](#default-privileged-on-premises-users).
+
+## Role-based permissions for the on-premises management console
+
+| Permission | Read Only | Security Analyst | Admin |
+|--|--|--|--|
+| **View and filter the enterprise map** | ✔ | ✔ | ✔ |
+| **Build a site** | - | - | ✔ |
+| **Manage a site** (add and edit zones) | - | - | ✔ |
+| **View and filter device inventory** | ✔ | ✔ | ✔ |
+| **View and manage alerts**: acknowledge, learn, and pin | ✔ | ✔ | ✔ |
+| **Generate reports** | - | ✔ | ✔ |
+| **View risk assessment reports** | - | ✔ | ✔ |
+| **Set alert exclusions** | - | ✔ | ✔ |
+| **View or define access groups** | - | - | ✔ |
+| **Manage system settings** | - | - | ✔ |
+| **Manage users** | - | - | ✔ |
+| **Change passwords** | - | - | ✔[*](#pw-cm) |
+| **Send alert data to partners** | - | - | ✔ |
+| **Manage certificates** | - | - | ✔ |
+
+> [!NOTE]
+> <a name="pw-cm"></a>**Admin** users can only change passwords for other users with the **Security Analyst** and **Read-only** roles. To change the password of an **Admin** user, sign in to your on-premises management console as [a privileged user](#default-privileged-on-premises-users).
+
+## Next steps
+
+For more information, see:
+
+- [Microsoft Defender for IoT user management](manage-users-overview.md)
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
defender-for-iot Track User Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/track-user-activity.md
+
+ Title: Audit Microsoft Defender for IoT user activity
+description: Learn how to track and audit user activity across Microsoft Defender for IoT.
Last updated : 01/26/2022+++
+# Audit user activity
+
+After you've set up user access for the [Azure portal](manage-users-overview.md), your [OT network sensors](manage-users-sensor.md), and your [on-premises management consoles](manage-users-on-premises-management-console.md), you'll want to track and audit user activity across Microsoft Defender for IoT.
+
+## Audit Azure user activity
+
+Use Azure Active Directory (AAD) user auditing resources to audit Azure user activity across Defender for IoT. For more information, see:
+
+- [Audit logs in Azure Active directory](/azure/active-directory/reports-monitoring/concept-audit-logs)
+- [Azure AD audit activity reference](/azure/active-directory/reports-monitoring/reference-audit-activities)
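+
+If you also stream Azure AD audit logs to your Log Analytics workspace, you can query them alongside your other Microsoft Sentinel data. For example, the following sample query is a minimal sketch that assumes the **AuditLogs** diagnostic setting is already enabled for your workspace, and lists recent role assignment changes:
+
+```kusto
+// Recent Azure AD role assignment changes, assuming AuditLogs is streamed to the workspace
+AuditLogs
+| where TimeGenerated > ago(7d)
+| where OperationName == "Add member to role"
+| project TimeGenerated, OperationName, InitiatedBy, TargetResources
+```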
+
+## Audit user activity on an OT network sensor
+
+Audit and track user activity on a sensor's **Event timeline**. The **Event timeline** displays events that occurred on the sensor, affected devices for each event, and the time and date that the event occurred.
+
+> [!NOTE]
+> This procedure is supported for users with an **Admin** role, and the *cyberx*, *support*, and *cyberx_host* users.
+>
+
+**To use the sensor's Event Timeline**:
+
+1. Sign into the sensor console as one of the following users:
+
+ - Any **Admin** user
+ - The *cyberx*, *support*, or *cyberx_host* user
+
+1. On the sensor, select **Event Timeline** from the left-hand menu. Make sure that the filter is set to show **User Operations**.
+
+ For example:
+
+ :::image type="content" source="media/manage-users-sensor/track-user-activity.png" alt-text="Screenshot of the Event Timeline on the sensor showing user activity.":::
+
+1. Use additional filters or search using **CTRL+F** to find the information of interest to you.
+
+## Audit user activity on an on-premises management console
+
+To audit and track user activity on an on-premises management console, use the on-premises management console audit logs, which record key activity data at the time of occurrence. Use on-premises management console audit logs to understand changes that were made on the on-premises management console, when, and by whom.
+
+**To access on-premises management console audit logs**:
+
+Sign in to the on-premises management console and select **System Settings > System Statistics** > **Audit log**.
+
+The dialog displays data from the currently active audit log. For example:
+
++
+New audit logs are generated every 10 MB. One previous log is stored in addition to the current active log file.
+
+Audit logs include the following data:
+
+| Action | Information logged |
+|--|--|
+| **Learn and remediation of alerts** | Alert ID |
+| **Password changes** | User, User ID |
+| **Login** | User |
+| **User creation** | User, User role |
+| **Password reset** | User name |
+| **Exclusion rules-Creation**| Rule summary |
+| **Exclusion rules-Editing**| Rule ID, Rule Summary |
+| **Exclusion rules-Deletion** | Rule ID |
+| **Management Console Upgrade** | The upgrade file used |
+| **Sensor upgrade retry** | Sensor ID |
+| **Uploaded TI package** | No additional information recorded. |
++
+> [!TIP]
+> You may also want to export your audit logs to send them to the support team for extra troubleshooting. For more information, see [Export audit logs for troubleshooting](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-audit-logs-for-troubleshooting).
+>
+
+## Next steps
+
+For more information, see:
+
+- [Microsoft Defender for IoT user management](manage-users-overview.md)
+- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Before you start, make sure that you have the following:
- Completed [Quickstart: Get started with Defender for IoT](getting-started.md) so that you have an Azure subscription added to Defender for IoT. -- Azure permissions of **Security admin**, **Subscription contributor**, or **Subscription owner** on your subscription
+- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md).
- At least one device to monitor, with the device connected to a SPAN port on a switch.
This procedure describes how to install the sensor software on your VM.
1. The following credentials are automatically generated and presented. Copy the usernames and passwords to a safe place, because they're required to sign in to and manage your sensor. The usernames and passwords won't be presented again.
- - **Support**: The administrative user for user management.
+ - **support**: The administrative user for user management.
- - **CyberX**: The equivalent of root for accessing the appliance.
+ - **cyberx**: The equivalent of root for accessing the appliance.
+
+ For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
1. When the appliance restarts, access the sensor via the IP address previously configured: `https://<ip_address>`. ### Post-installation validation
-This procedure describes how to validate your installation using the sensor's own system health checks, and is available to both the **Support** and **CyberX** sensor users.
+This procedure describes how to validate your installation using the sensor's own system health checks, and is available to both the *support* and *cyberx* sensor users.
**To validate your installation**:
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
The sensor update process won't succeed if you don't update the on-premises mana
> [!NOTE]
-> After upgrading to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
+> After upgrading to version 22.1.x, the new upgrade log is accessible by the *cyberx_host* user on the sensor at the following path: `/opt/sensor/logs/legacy-upgrade.log`. To access the upgrade log, sign in to the sensor via SSH with the *cyberx_host* user.
>-
+> For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
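
As a quick illustration, the following command reads that log over SSH; `<sensor-ip>` is a placeholder for your sensor's IP address, and the log path is the one listed above.

```powershell
# Sketch only: read the last 100 lines of the upgrade log as the cyberx_host user.
# Replace <sensor-ip> with the IP address of your OT sensor.
ssh cyberx_host@<sensor-ip> "tail -n 100 /opt/sensor/logs/legacy-upgrade.log"
```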
## Download and apply a new activation file
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
Complete the following steps in the Azure CLI to create an environment and confi
1. Sign in to the Azure CLI: ```azurecli
- az login
+ az login
``` 1. List all the Azure Deployment Environments projects you have access to:
dev-box How To Customize Devbox Azure Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md
+
+ Title: Configure a Dev Box with Azure Image Builder
+
+description: 'Learn how to create a custom image with Azure Image Builder, then create a Dev box with the image.'
++++ Last updated : 11/17/2022+++
+# Configure a Dev Box with Azure Image Builder
+
+By using standardized virtual machine (VM) images, your organization can more easily migrate to the cloud and help ensure consistency in your deployments. Images ordinarily include predefined security, configuration settings, and any necessary software. Setting up your own imaging pipeline requires time, infrastructure, and many other details. With Azure VM Image Builder, you can create a configuration that describes your image and submit it to the service, where the image is built and then distributed to a dev box project. In this article, you'll create a customized dev box by using a template that includes a customization step to install Visual Studio Code.
+
+Although it's possible to create custom VM images by hand or by using other tools, the process can be cumbersome and unreliable. VM Image Builder, which is built on HashiCorp Packer, gives you the benefits of a managed service.
+
+To reduce the complexity of creating VM images, VM Image Builder:
+
+- Removes the need to use complex tooling, processes, and manual steps to create a VM image. VM Image Builder abstracts out all these details and hides Azure-specific requirements, such as the need to generalize the image (Sysprep). And it gives more advanced users the ability to override such requirements.
+
+- Can be integrated with existing image build pipelines for a click-and-go experience. To do so, you can either call VM Image Builder from your pipeline or use an Azure VM Image Builder service DevOps task (preview).
+
+- Can fetch customization data from various sources, which removes the need to collect it all in one place.
+
+- Can be integrated with Compute Gallery, which creates an image management system with which to distribute, replicate, version, and scale images globally. Additionally, you can distribute the same resulting image as a VHD or as one or more managed images, without having to rebuild them from scratch.
+
+## Prerequisites
+To provision a custom image that you've created by using VM Image Builder, you need:
+- Owner or Contributor permissions on an Azure Subscription or a specific resource group.
+- A resource group.
+- A dev center with an attached network connection.
+ If you don't have a dev center with an attached network connection, follow these steps to attach the network connection: [Create a network connection](./quickstart-configure-dev-box-service.md#create-a-network-connection).
+
+## Create a Windows image and distribute it to an Azure Compute Gallery
+The next step is to use Azure VM Image Builder and Azure PowerShell to create an image version in an Azure Compute Gallery (formerly Shared Image Gallery) and then distribute the image globally. You can also do this by using the Azure CLI.
+
+1. To use VM Image Builder, you need to register the features.
+
+ Check your provider registrations. Make sure that each one returns Registered.
+
+ ```powershell
+ Get-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Storage | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Compute | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Network | Format-table -Property ResourceTypes,RegistrationState
+ ```
+
+
+ If they don't return Registered, register the providers by running the following commands:
+ ```powershell
+ Register-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
+ Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+ ```
+
+2. Install PowerShell modules:
+
+ ```powershell
+ 'Az.ImageBuilder', 'Az.ManagedServiceIdentity' | ForEach-Object {Install-Module -Name $_ -AllowPrerelease}
+ ```
+
+3. Create variables to store information that you'll use more than once.
+
+   Copy the sample code and replace `<Resource group>` with the resource group that you used to create the dev center.
+
+ ```powershell
+ # Get existing context
+ $currentAzContext = Get-AzContext
+ # Get your current subscription ID.
+ $subscriptionID=$currentAzContext.Subscription.Id
+ # Destination image resource group
+ $imageResourceGroup="<Resource group>"
+ # Location
+ $location="eastus2"
+ # Image distribution metadata reference name
+ $runOutputName="aibCustWinManImg01"
+ # Image template name
+ $imageTemplateName="vscodeWinTemplate"
+ ```
+
+4. Create a user-assigned identity and set permissions on the resource group.
+
+   VM Image Builder uses the provided user identity to inject the image into the Azure Compute Gallery. In this example, you create an Azure role definition with specific actions for distributing the image. The role definition is then assigned to the user identity.
+
+ ```powershell
+ # setup role def names, these need to be unique
+ $timeInt=$(get-date -UFormat "%s")
+ $imageRoleDefName="Azure Image Builder Image Def"+$timeInt
+ $identityName="aibIdentity"+$timeInt
+
+ ## Add an Azure PowerShell module to support AzUserAssignedIdentity
+ Install-Module -Name Az.ManagedServiceIdentity
+
+ # Create an identity
+ New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName
+
+ $identityNameResourceId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id
+ $identityNamePrincipalId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
+ ```
+
+5. Assign permissions for the identity to distribute the images.
+
+ Use this command to download an Azure role definition template, and then update it with the previously specified parameters.
+
+ ```powershell
+ $aibRoleImageCreationUrl="https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
+ $aibRoleImageCreationPath = "aibRoleImageCreation.json"
+
+ # Download the configuration
+ Invoke-WebRequest -Uri $aibRoleImageCreationUrl -OutFile $aibRoleImageCreationPath -UseBasicParsing
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<subscriptionID>',$subscriptionID) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<rgName>', $imageResourceGroup) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName) | Set-Content -Path $aibRoleImageCreationPath
+
+ # Create a role definition
+ New-AzRoleDefinition -InputFile ./aibRoleImageCreation.json
+ # Grant the role definition to the VM Image Builder service principal
+ New-AzRoleAssignment -ObjectId $identityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+ ```
+
+## Create an Azure Compute Gallery
+
+To use VM Image Builder with an Azure Compute Gallery, you need to have an existing gallery and image definition. VM Image Builder doesn't create the gallery and image definition for you. The definition created below uses Trusted Launch as the security type and meets the Windows 365 image requirements.
+
+```powershell
+# Gallery name
+$galleryName= "devboxGallery"
+
+# Image definition name
+$imageDefName ="vscodeImageDef"
+
+# Additional replication region
+$replRegion2="eastus"
+
+# Create the gallery
+New-AzGallery -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location
+
+$SecurityType = @{Name='SecurityType';Value='TrustedLaunch'}
+$features = @($SecurityType)
+
+# Create the image definition
+New-AzGalleryImageDefinition -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCompany' -Offer 'vscodebox' -Sku '1-0-0' -Feature $features -HyperVGeneration "V2"
+```
+
+1. Copy the following ARM template for Azure Image Builder. This template indicates the source image and the customizations applied. In this template, Chocolatey and Visual Studio Code are installed as a customization step. It also indicates where the image will be distributed.
+
+ ```json
+ {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "imageTemplateName": {
+ "type": "string"
+ },
+ "api-version": {
+ "type": "string"
+ },
+ "svclocation": {
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "name": "[parameters('imageTemplateName')]",
+ "type": "Microsoft.VirtualMachineImages/imageTemplates",
+ "apiVersion": "[parameters('api-version')]",
+ "location": "[parameters('svclocation')]",
+ "dependsOn": [],
+ "tags": {
+ "imagebuilderTemplate": "win11multi",
+ "userIdentity": "enabled"
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<imgBuilderId>": {}
+ }
+ },
+ "properties": {
+ "buildTimeoutInMinutes": 100,
+ "vmProfile": {
+ "vmSize": "Standard_DS2_v2",
+ "osDiskSizeGB": 127
+ },
+ "source": {
+ "type": "PlatformImage",
+ "publisher": "MicrosoftWindowsDesktop",
+ "offer": "Windows-11",
+ "sku": "win11-21h2-avd",
+ "version": "latest"
+ },
+ "customize": [
+ {
+ "type": "PowerShell",
+ "name": "Install Choco and Vscode",
+ "inline": [
+ "Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))",
+ "choco install -y vscode"
+ ]
+ }
+ ],
+ "distribute":
+ [
+ {
+ "type": "SharedImage",
+ "galleryImageId": "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<sharedImageGalName>/images/<imageDefName>",
+ "runOutputName": "<runOutputName>",
+ "artifactTags": {
+ "source": "azureVmImageBuilder",
+ "baseosimg": "win11multi"
+ },
+ "replicationRegions": [
+ "<region1>",
+ "<region2>"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+2. Configure the template with your variables.
+ ```powershell
+    $templateFilePath = "<Template Path>"
+
+ (Get-Content -path $templateFilePath -Raw ) -replace '<subscriptionID>',$subscriptionID | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<rgName>',$imageResourceGroup | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<runOutputName>',$runOutputName | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<imageDefName>',$imageDefName | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<sharedImageGalName>',$galleryName| Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<region1>',$location | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<region2>',$replRegion2 | Set-Content -Path $templateFilePath
+ ((Get-Content -path $templateFilePath -Raw) -replace '<imgBuilderId>',$identityNameResourceId) | Set-Content -Path $templateFilePath
+ ```
+3. Create the image version
+
+ Your template must be submitted to the service. The following commands will download any dependent artifacts, such as scripts, and store them in the staging resource group, which is prefixed with IT_.
+
+ ```powershell
+ New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -Api-Version "2020-02-14" -imageTemplateName $imageTemplateName -svclocation $location
+ ```
+ To build the image, invoke 'Run' on the template.
+ ```powershell
+ Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2020-02-14" -Action Run
+ ```
+ Creating the image and replicating it to both regions can take a few moments. Before you begin creating a dev box definition, wait until this part is finished.
+ ```powershell
+ Get-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup | Select-Object -Property Name, LastRunStatusRunState, LastRunStatusMessage, ProvisioningState
+ ```
+   Alternatively, in the Azure portal, navigate to your compute gallery > image definition to view the provisioning state of your image.
+
+![Provisioning state of the customized image version](./media/how-to-customize-devbox-azure-image-builder/image-version-provisioning-state.png)
+
+## Configure the Azure Compute Gallery
+
+Once your custom image has been provisioned in the compute gallery, you can configure the gallery to use the images in the dev center. For more information, see:
+
+[Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
+
+## Set up the dev box service with a custom image
+
+Once the compute gallery images are available in the dev center, you can use the custom image with the dev box service. For more information, see:
+
+[Configure the Microsoft Dev Box Service](./quickstart-configure-dev-box-service.md)
+
+## Next steps
+- [Create dev box definitions](./quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
In this article, you will learn how to convert SEG-Y formatted data to the ZGY f
empty: none ```
-8. Run the following commands using **sdutil** to see its working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](/tutorials/tutorial-seismic-ddms-sdutil.md). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
+8. Run the following commands using **sdutil** to verify that it's working correctly (an example sketch of these commands follows the note below). Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Depending on your OS and Python version, you may have to run the `python3` command instead of `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](/azure/energy-data-services/tutorial-seismic-ddms-sdutil). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
> [!NOTE] > when running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
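
The exact commands depend on your environment, but a typical verification pass looks something like the following sketch; the tenant and subproject names are placeholders, and `python3` may be needed instead of `python` on your machine.

```powershell
# Sketch only: typical sdutil checks after setup (run from the sdutil folder).
python sdutil config init                      # initialize the sdutil configuration
python sdutil auth login                       # sign in using the refresh token generated earlier
python sdutil ls sd://<tenant>/<subproject>    # placeholder path: list content in your seismic store subproject
```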
In this article, you will learn how to convert SEG-Y formatted data to the ZGY f
10. Create the manifest file (otherwise known as the records file)
- ZGY conversion uses a manifest file that you'll upload to your storage account in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve). For more information on Volve, such as where the dataset definitions come from, visit [their website](https://www.equinor.com/en/what-we-do/digitalisation-in-our-dna/volve-field-data-village-download.html). Complete the following steps in order to create the manifest file:
+ ZGY conversion uses a manifest file that you'll upload to your storage account in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve). For more information on Volve, such as where the dataset definitions come from, visit [their website](https://www.equinor.com/energy/volve-data-sharing). Complete the following steps in order to create the manifest file:
* Clone the [repo](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/) and navigate to the folder doc/sample-records/volve * Edit the values in the `prepare-records.sh` bash script. Recall that the format of the legal tag will be prefixed with the Microsoft Energy Data Services instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`.
OSDU&trade; is a trademark of The Open Group.
## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
+> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
Title: 'Quickstart: Create a Azure Front Door Standard/Premium profile using Terraform'
+ Title: 'Quickstart: Create an Azure Front Door Standard/Premium profile using Terraform'
description: This quickstart describes how to create an Azure Front Door Standard/Premium using Terraform.
The steps in this article were tested with the following Terraform and Terraform
1. Create a file named `providers.tf` and insert the following code:
- ```terraform
- # Configure the Azure provider
- terraform {
- required_providers {
- azurerm = {
- source = "hashicorp/azurerm"
- version = "~> 3.27.0"
- }
-
- random = {
- source = "hashicorp/random"
- }
- }
-
- required_version = ">= 1.1.0"
- }
-
- provider "azurerm" {
- features {}
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/providers.tf)]
1. Create a file named `resource-group.tf` and insert the following code:
- ```terraform
- resource "azurerm_resource_group" "my_resource_group" {
- name = var.resource_group_name
- location = var.location
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/resource-group.tf)]
1. Create a file named `app-service.tf` and insert the following code:
- ```terraform
- locals {
- app_name = "myapp-${lower(random_id.app_name.hex)}"
- app_service_plan_name = "AppServicePlan"
- }
-
- resource "azurerm_service_plan" "app_service_plan" {
- name = local.app_service_plan_name
- location = var.location
- resource_group_name = azurerm_resource_group.my_resource_group.name
-
- sku_name = var.app_service_plan_sku_name
- os_type = "Windows"
- worker_count = var.app_service_plan_capacity
- }
-
- resource "azurerm_windows_web_app" "app" {
- name = local.app_name
- location = var.location
- resource_group_name = azurerm_resource_group.my_resource_group.name
- service_plan_id = azurerm_service_plan.app_service_plan.id
-
- https_only = true
-
- site_config {
- ftps_state = "Disabled"
- minimum_tls_version = "1.2"
- ip_restriction = [ {
- service_tag = "AzureFrontDoor.Backend"
- ip_address = null
- virtual_network_subnet_id = null
- action = "Allow"
- priority = 100
- headers = [ {
- x_azure_fdid = [ azurerm_cdn_frontdoor_profile.my_front_door.resource_guid ]
- x_fd_health_probe = []
- x_forwarded_for = []
- x_forwarded_host = []
- } ]
- name = "Allow traffic from Front Door"
- } ]
- }
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/app-service.tf)]
1. Create a file named `front-door.tf` and insert the following code:
- ```terraform
- locals {
- front_door_profile_name = "MyFrontDoor"
- front_door_endpoint_name = "afd-${lower(random_id.front_door_endpoint_name.hex)}"
- front_door_origin_group_name = "MyOriginGroup"
- front_door_origin_name = "MyAppServiceOrigin"
- front_door_route_name = "MyRoute"
- }
-
- resource "azurerm_cdn_frontdoor_profile" "my_front_door" {
- name = local.front_door_profile_name
- resource_group_name = azurerm_resource_group.my_resource_group.name
- sku_name = var.front_door_sku_name
- }
-
- resource "azurerm_cdn_frontdoor_endpoint" "my_endpoint" {
- name = local.front_door_endpoint_name
- cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.my_front_door.id
- }
-
- resource "azurerm_cdn_frontdoor_origin_group" "my_origin_group" {
- name = local.front_door_origin_group_name
- cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.my_front_door.id
- session_affinity_enabled = true
-
- load_balancing {
- sample_size = 4
- successful_samples_required = 3
- }
-
- health_probe {
- path = "/"
- request_type = "HEAD"
- protocol = "Https"
- interval_in_seconds = 100
- }
- }
-
- resource "azurerm_cdn_frontdoor_origin" "my_app_service_origin" {
- name = local.front_door_origin_name
- cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.my_origin_group.id
-
- enabled = true
- host_name = azurerm_windows_web_app.app.default_hostname
- http_port = 80
- https_port = 443
- origin_host_header = azurerm_windows_web_app.app.default_hostname
- priority = 1
- weight = 1000
- certificate_name_check_enabled = true
- }
-
- resource "azurerm_cdn_frontdoor_route" "my_route" {
- name = local.front_door_route_name
- cdn_frontdoor_endpoint_id = azurerm_cdn_frontdoor_endpoint.my_endpoint.id
- cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.my_origin_group.id
- cdn_frontdoor_origin_ids = [azurerm_cdn_frontdoor_origin.my_app_service_origin.id]
-
- supported_protocols = ["Http", "Https"]
- patterns_to_match = ["/*"]
- forwarding_protocol = "HttpsOnly"
- link_to_default_domain = true
- https_redirect_enabled = true
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/front-door.tf)]
1. Create a file named `variables.tf` and insert the following code:
- ```terraform
- variable "location" {
- type = string
- default = "westus2"
- }
-
- variable "resource_group_name" {
- type = string
- default = "FrontDoor"
- }
-
- variable "app_service_plan_sku_name" {
- type = string
- default = "S1"
- }
-
- variable "app_service_plan_capacity" {
- type = number
- default = 1
- }
-
- variable "app_service_plan_sku_tier_name" {
- type = string
- default = "Standard"
- }
-
- variable "front_door_sku_name" {
- type = string
- default = "Standard_AzureFrontDoor"
- validation {
- condition = contains(["Standard_AzureFrontDoor", "Premium_AzureFrontDoor"], var.front_door_sku_name)
- error_message = "The SKU value must be Standard_AzureFrontDoor or Premium_AzureFrontDoor."
- }
- }
-
- resource "random_id" "app_name" {
- byte_length = 8
- }
-
- resource "random_id" "front_door_endpoint_name" {
- byte_length = 8
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/variables.tf)]
1. Create a file named `outputs.tf` and insert the following code:
- ```terraform
- output "frontDoorEndpointHostName" {
- value = azurerm_cdn_frontdoor_endpoint.my_endpoint.host_name
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/outputs.tf)]
## Initialize Terraform
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
For TLS1.2 the following cipher suites are supported:
* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
* TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 * TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 > [!NOTE]
-> For Windows 10 and later versions, we recommend enabling one or both of the ECDHE cipher suites for better security. Windows 8.1, 8, and 7 aren't compatible with these ECDHE cipher suites. The DHE cipher suites have been provided for compatibility with those operating systems.
+> For Windows 10 and later versions, we recommend enabling one or both of the ECDHE cipher suites for better security. CBC ciphers are enabled to support Windows 8.1, 8, and 7 operating systems. The DHE cipher suites will be disabled in the future.
Using custom domains with TLS1.0/1.1 enabled the following cipher suites are supported:
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-quickstart-template-samples.md
Title: Azure Resource Manager template samples - Azure Front Door
-description: Learn about Resource Manager template samples for Azure Front Door, including templates for creating a basic Front Door and configuring Front Door rate limiting.
+description: Learn about Resource Manager template samples for Azure Front Door, including templates for creating a basic Front Door profile and configuring Front Door rate limiting.
documentationcenter: ""
Last updated 03/10/2022
zone_pivot_groups: front-door-tiers
-# Azure Resource Manager deployment model templates for Front Door
+# Bicep and Azure Resource Manager deployment model templates for Front Door
-The following table includes links to Azure Resource Manager deployment model templates for Azure Front Door.
+The following table includes links to Bicep and Azure Resource Manager deployment model templates for Azure Front Door.
::: zone pivot="front-door-standard-premium"
The following table includes links to Azure Resource Manager deployment model te
| Template | Description | | | | | [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
-| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in ta backend pool and also across backend pools based on URL path. |
+| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in a backend pool and also across backend pools based on URL path. |
| [Onboard a custom domain and managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain)| Add a custom domain to your Front Door and use a Front Door-managed TLS certificate. | | [Onboard a custom domain and customer-managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain-customer-certificate)| Add a custom domain to your Front Door and use your own TLS certificate by using Key Vault. | | [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
frontdoor Quickstart Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-terraform.md
The steps in this article were tested with the following Terraform and Terraform
1. Create a file named `providers.tf` and insert the following code:
- ```terraform
- # Configure the Azure provider
- terraform {
- required_providers {
- azurerm = {
- source = "hashicorp/azurerm"
- version = "~> 3.27.0"
- }
-
- random = {
- source = "hashicorp/random"
- }
- }
-
- required_version = ">= 1.1.0"
- }
-
- provider "azurerm" {
- features {}
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/providers.tf)]
1. Create a file named `resource-group.tf` and insert the following code:
- ```terraform
- resource "azurerm_resource_group" "my_resource_group" {
- name = var.resource_group_name
- location = var.location
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/resource-group.tf)]
1. Create a file named `front-door.tf` and insert the following code:
- ```terraform
- locals {
- front_door_name = "afd-${lower(random_id.front_door_name.hex)}"
- front_door_frontend_endpoint_name = "frontEndEndpoint"
- front_door_load_balancing_settings_name = "loadBalancingSettings"
- front_door_health_probe_settings_name = "healthProbeSettings"
- front_door_routing_rule_name = "routingRule"
- front_door_backend_pool_name = "backendPool"
- }
-
- resource "azurerm_frontdoor" "my_front_door" {
- name = local.front_door_name
- resource_group_name = azurerm_resource_group.my_resource_group.name
-
- frontend_endpoint {
- name = local.front_door_frontend_endpoint_name
- host_name = "${local.front_door_name}.azurefd.net"
- session_affinity_enabled = false
- }
-
- backend_pool_load_balancing {
- name = local.front_door_load_balancing_settings_name
- sample_size = 4
- successful_samples_required = 2
- }
-
- backend_pool_health_probe {
- name = local.front_door_health_probe_settings_name
- path = "/"
- protocol = "Http"
- interval_in_seconds = 120
- }
-
- backend_pool {
- name = local.front_door_backend_pool_name
- backend {
- host_header = var.backend_address
- address = var.backend_address
- http_port = 80
- https_port = 443
- weight = 50
- priority = 1
- }
-
- load_balancing_name = local.front_door_load_balancing_settings_name
- health_probe_name = local.front_door_health_probe_settings_name
- }
-
- routing_rule {
- name = local.front_door_routing_rule_name
- accepted_protocols = ["Http", "Https"]
- patterns_to_match = ["/*"]
- frontend_endpoints = [local.front_door_frontend_endpoint_name]
- forwarding_configuration {
- forwarding_protocol = "MatchRequest"
- backend_pool_name = local.front_door_backend_pool_name
- }
- }
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/front-door.tf)]
1. Create a file named `variables.tf` and insert the following code:
- ```terraform
- variable "location" {
- type = string
- default = "westus2"
- }
-
- variable "resource_group_name" {
- type = string
- default = "FrontDoor"
- }
-
- variable "backend_address" {
- type = string
- }
-
- resource "random_id" "front_door_name" {
- byte_length = 8
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/variables.tf)]
1. Create a file named `terraform.tfvars` and insert the following code, being sure to update the value to your own backend hostname:
frontdoor Terraform Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/terraform-samples.md
+
+ Title: Terraform samples - Azure Front Door
+description: Learn about Terraform samples for Azure Front Door, including samples for creating a basic Front Door profile.
+
+documentationcenter: ""
+++
+ na
+ Last updated : 11/22/2022+
+zone_pivot_groups: front-door-tiers
+
+# Terraform deployment model templates for Front Door
+
+The following table includes links to Terraform deployment model templates for Azure Front Door.
++
+| Sample | Description |
+|-|-|
+|**App Service origins**| **Description** |
+| [App Service with Private Link](https://github.com/Azure/terraform/tree/master/quickstart/101-front-door-standard-premium) | Creates an App Service app with a private endpoint, and a Front Door profile. |
+| | |
+++
+| Template | Description |
+| | |
+| [Create a basic Front Door](https://github.com/Azure/terraform/tree/master/quickstart/101-front-door-classic)| Creates a basic Front Door configuration with a single backend. |
+| | |
++
+## Next steps
++
+- Learn how to [create a Front Door profile](standard-premium/create-front-door-portal.md).
+++
+- Learn how to [create a Front Door](quickstart-create-front-door.md).
+
hdinsight Hive Llap Sizing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-llap-sizing-guide.md
Previously updated : 07/19/2022 Last updated : 11/23/2022 # Azure HDInsight Interactive Query Cluster (Hive LLAP) sizing guide
specific tuning.
| Node Type | Instance | Size | | : | :-: | : |
-| Head | D13 v2 | 8 vcpus, 56 GB RAM, 400 GB SSD |
+| Head | D13 v2 | 8 vcpus, 56-GB RAM, 400 GB SSD |
| Worker | **D14 v2** | **16 vcpus, 112 GB RAM, 800 GB SSD** | | ZooKeeper | A4 v2 | 4 vcpus, 8-GB RAM, 40 GB SSD |
specific tuning.
| yarn.scheduler.maximum-allocation-mb | 102400 (MB) | The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this value won't take effect | | yarn.scheduler.maximum-allocation-vcores | 12 |The maximum number of CPU cores for every container request at the Resource Manager. Requests higher than this value won't take effect. | | yarn.nodemanager.resource.cpu-vcores | 12 | Number of CPU cores per NodeManager that can be allocated for containers. |
-| yarn.scheduler.capacity.root.llap.capacity | 85 (%) | YARN capacity allocation for llap queue |
+| yarn.scheduler.capacity.root.llap.capacity | 85 (%) | YARN capacity allocation for LLAP queue |
| tez.am.resource.memory.mb | 4096 (MB) | The amount of memory in MB to be used by the tez AppMaster | | hive.server2.tez.sessions.per.default.queue | <number_of_worker_nodes> |The number of sessions for each queue named in the hive.server2.tez.default.queues. This number corresponds to number of query coordinators(Tez AMs) | | hive.tez.container.size | 4096 (MB) | Specified Tez container size in MB |
For D14 v2, the recommended value is **12**.
#### **4. Number of concurrent queries** Configuration: ***hive.server2.tez.sessions.per.default.queue***
-This configuration value determines the number of Tez sessions that can be launched in parallel. These Tez sessions will be launched for each of the queues specified by "hive.server2.tez.default.queues". It corresponds to the number of Tez AMs (Query Coordinators). It's recommended to be the same as the number of worker nodes. The number of Tez AMs can be higher than the number of LLAP daemon nodes. The Tez AM's primary responsibility is to coordinate the query execution and assign query plan fragments to corresponding LLAP daemons for execution. Keep this value as multiple of a number of LLAP daemon nodes to achieve higher throughput.
+This configuration value determines the number of Tez sessions that can be launched in parallel. These Tez sessions will be launched for each of the queues specified by "hive.server2.tez.default.queues". It corresponds to the number of Tez AMs (Query Coordinators). It's recommended to be the same as the number of worker nodes. The number of Tez AMs can be higher than the number of LLAP daemon nodes. The Tez AM's primary responsibility is to coordinate the query execution and assign query plan fragments to corresponding LLAP daemons for execution. Keep this value as a multiple of the number of LLAP daemon nodes to achieve higher throughput.
Default HDInsight cluster has four LLAP daemons running on four worker nodes, so the recommended value is **4**.
The recommended value is **4096 MB**.
#### **6. LLAP Queue capacity allocation** Configuration: ***yarn.scheduler.capacity.root.llap.capacity***
-This value indicates a percentage of capacity given to llap queue. The capacity allocations may have different values for different workloads depending on how the YARN queues are configured. If your workload is read-only operations, then setting it as high as 90% of the capacity should work. However, if your workload is mix of update/delete/merge operations using managed tables, it's recommended to give 85% of the capacity for llap queue. The remaining 15% capacity can be used by other tasks such as compaction etc. to allocate containers from default queue. That way tasks in default queue won't deprive of YARN resources.
+This value indicates the percentage of capacity given to the LLAP queue. The capacity allocations may have different values for different workloads depending on how the YARN queues are configured. If your workload is read-only operations, then setting it as high as 90% of the capacity should work. However, if your workload is a mix of update/delete/merge operations using managed tables, it's recommended to give 85% of the capacity to the LLAP queue. The remaining 15% capacity can be used by other tasks, such as compaction, to allocate containers from the default queue. That way, tasks in the default queue won't be deprived of YARN resources.
-For D14v2 worker nodes, the recommended value for llap queue is **85**.
+For D14v2 worker nodes, the recommended value for LLAP queue is **85**.
(For readonly workloads, it can be increased up to 90 as suitable.) #### **7. LLAP daemon container size**
LLAP daemon is run as a YARN container on each worker node. The total memory siz
* Total memory configured for all containers on a node and LLAP queue capacity Memory needed by Tez Application Masters(Tez AM) can be calculated as follows.
-Tez AM acts as a query coordinator and the number of Tez AMs should be configured based on a number of concurrent queries to be served. Theoretically, we can consider one Tez AM per worker node. However, its possible that you may see more than one Tez AM on a worker node. For calculation purpose, we assume uniform distribution of Tez AMs across all LLAP daemon nodes/worker nodes.
+Tez AM acts as a query coordinator and the number of Tez AMs should be configured based on the number of concurrent queries to be served. Theoretically, we can consider one Tez AM per worker node. However, it's possible that you may see more than one Tez AM on a worker node. For calculation purposes, we assume uniform distribution of Tez AMs across all LLAP daemon nodes/worker nodes.
It's recommended to have 4 GB of memory per Tez AM. Number of Tez AMs = value specified by Hive config ***hive.server2.tez.sessions.per.default.queue***.
For D14 v2, the default configuration has four Tez AMs and four LLAP daemon node
Tez AM memory per node = (ceil(4/4) x 4 GB) = 4 GB Total Memory available for LLAP queue per worker node can be calculated as follows:
-This value depends on the total amount of memory available for all YARN containers on a node(*yarn.nodemanager.resource.memory-mb*) and the percentage of capacity configured for llap queue(*yarn.scheduler.capacity.root.llap.capacity*).
-Total memory for LLAP queue on worker node = Total memory available for all YARN containers on a node x Percentage of capacity for llap queue.
+This value depends on the total amount of memory available for all YARN containers on a node (*yarn.nodemanager.resource.memory-mb*) and the percentage of capacity configured for the LLAP queue (*yarn.scheduler.capacity.root.llap.capacity*).
+Total memory for LLAP queue on worker node = Total memory available for all YARN containers on a node x Percentage of capacity for LLAP queue.
For D14 v2, this value is (100 GB x 0.85) = 85 GB. The LLAP daemon container size is calculated as follows;
The LLAP daemon container size is calculated as follows;
**LLAP daemon container size = (Total memory for LLAP queue on a worker node) - (Tez AM memory per node) - (Service Master container size)** There is only one Service Master (Application Master for LLAP service) on the cluster, spawned on one of the worker nodes. For calculation purposes, we consider one Service Master per worker node. For a D14 v2 worker node on HDI 4.0, the recommended value is (85 GB - 4 GB - 1 GB) = **80 GB**
-(For HDI 3.6, recommended value is **79 GB** because you should reserve additional ~2 GB for slider AM.)
+
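
To make the container-size arithmetic above concrete, here's a small PowerShell sketch that reproduces the calculation for the default D14 v2 layout used in this guide; replace the input values with your own cluster's numbers.

```powershell
# Sketch: LLAP daemon container size for the default D14 v2 layout in this guide.
$yarnMemoryPerNodeGB = 100     # yarn.nodemanager.resource.memory-mb (102400 MB)
$llapQueueCapacity   = 0.85    # yarn.scheduler.capacity.root.llap.capacity
$tezAmSizeGB         = 4       # tez.am.resource.memory.mb
$numTezAms           = 4       # hive.server2.tez.sessions.per.default.queue
$numLlapNodes        = 4       # number of LLAP daemon (worker) nodes
$serviceMasterGB     = 1       # LLAP Service Master container

$llapQueuePerNodeGB = $yarnMemoryPerNodeGB * $llapQueueCapacity                    # 85 GB
$tezAmPerNodeGB     = [math]::Ceiling($numTezAms / $numLlapNodes) * $tezAmSizeGB   # 4 GB
$llapDaemonSizeGB   = $llapQueuePerNodeGB - $tezAmPerNodeGB - $serviceMasterGB     # 80 GB
"LLAP daemon container size: $llapDaemonSizeGB GB"
```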
#### **8. Determining number of executors per LLAP daemon** Configuration: ***hive.llap.daemon.num.executors***, ***hive.llap.io.threadpool.size***
Configuration: ***hive.llap.daemon.num.executors***, ***hive.llap.io.threadpool.
***hive.llap.daemon.num.executors***: This configuration controls the number of executors that can execute tasks in parallel per LLAP daemon. This value depends on the number of vcores, the amount of memory used per executor, and the amount of total memory available for LLAP daemon container. The number of executors can be oversubscribed to 120% of available vcores per worker node. However, it should be adjusted if it doesn't meet the memory requirements based on memory needed per executor and the LLAP daemon container size.
-Each executor is equivalent to a Tez container and can consume 4GB(Tez container size) of memory. All executors in LLAP daemon share the same heap memory. With the assumption that not all executors run memory intensive operations at the same time, you can consider 75% of Tez container size(4 GB) per executor. This way you can increase the number of executors by giving each executor less memory (e.g. 3 GB) for increased parallelism. However, it is recommended to tune this setting for your target workload.
+Each executor is equivalent to a Tez container and can consume 4 GB (Tez container size) of memory. All executors in an LLAP daemon share the same heap memory. With the assumption that not all executors run memory-intensive operations at the same time, you can consider 75% of the Tez container size (4 GB) per executor. This way you can increase the number of executors by giving each executor less memory (for example, 3 GB) for increased parallelism. However, it is recommended to tune this setting for your target workload.
There are 16 vcores on D14 v2 VMs.
-For D14 v2, the recommended value for num of executors is (16 vcores x 120%) ~= **19** on each worker node considering 3GB per executor.
+For D14 v2, the recommended value for the number of executors is (16 vcores x 120%) ~= **19** on each worker node, considering 3 GB per executor.
***hive.llap.io.threadpool.size***: This value specifies the thread pool size for executors. Since executors are fixed as specified, it will be the same as the number of executors per LLAP daemon.
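
As a quick check of these two settings, the following sketch applies the 120% oversubscription rule to the 16-vcore D14 v2 worker nodes used in this guide.

```powershell
# Sketch: executor count and I/O thread pool size for a 16-vcore D14 v2 worker node.
$vcoresPerNode = 16
$numExecutors  = [math]::Floor($vcoresPerNode * 1.2)   # ~19, oversubscribed to 120% of vcores
$ioThreadPool  = $numExecutors                          # hive.llap.io.threadpool.size matches the executor count
"hive.llap.daemon.num.executors = $numExecutors; hive.llap.io.threadpool.size = $ioThreadPool"
```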
Setting *hive.llap.io.allocator.mmap* = true will enable SSD caching.
When SSD cache is enabled, some portion of the memory will be used to store metadata for the SSD cache. The metadata is stored in memory and it's expected to be ~8% of SSD cache size. SSD Cache in-memory metadata size = LLAP daemon container size - (Head room + Heap size) For D14 v2, with HDI 4.0, SSD cache in-memory metadata size = 80 GB - (4 GB + 57 GB) = **19 GB**
-For D14 v2, with HDI 3.6, SSD cache in-memory metadata size = 79 GB - (4 GB + 57 GB) = **18 GB**
+ Given the size of available memory for storing SSD cache metadata, we can calculate the size of SSD cache that can be supported. Size of in-memory metadata for SSD cache = LLAP daemon container size - (Head room + Heap size)
Size of in-memory metadata for SSD cache = LLAP daemon container size - (Head r
Size of SSD cache = size of in-memory metadata for SSD cache(19 GB) / 0.08 (8 percent) For D14 v2 and HDI 4.0, the recommended SSD cache size = 19 GB / 0.08 ~= **237 GB**
-For D14 v2 and HDI 3.6, the recommended SSD cache size = 18 GB / 0.08 ~= **225 GB**
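
The following sketch repeats the SSD cache calculation with the HDI 4.0 values above; swap in your own container, headroom, and heap sizes as needed.

```powershell
# Sketch: SSD cache sizing for D14 v2 with HDI 4.0, using the values from this guide.
$llapDaemonSizeGB = 80      # LLAP daemon container size
$headroomGB       = 4       # LLAP daemon headroom
$heapSizeGB       = 57      # LLAP daemon heap size
$metadataRatio    = 0.08    # ~8% of the SSD cache size is kept in memory as metadata

$cacheMetadataGB = $llapDaemonSizeGB - ($headroomGB + $heapSizeGB)      # 19 GB
$ssdCacheGB      = [math]::Floor($cacheMetadataGB / $metadataRatio)     # ~237 GB
"SSD cache in-memory metadata: $cacheMetadataGB GB; recommended SSD cache size: ~$ssdCacheGB GB"
```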
+ #### **10. Adjusting Map Join memory** Configuration: ***hive.auto.convert.join.noconditionaltask.size*** Make sure you have *hive.auto.convert.join.noconditionaltask* enabled for this parameter to take effect.
-This configuration determine the threshold for MapJoin selection by Hive optimizer that considers oversubscription of memory from other executors to have more room for in-memory hash tables to allow more map join conversions. Considering 3GB per executor, this size can be oversubscribed to 3GB, but some heap memory may also be used for sort buffers, shuffle buffers, etc. by the other operations.
+This configuration determines the threshold for MapJoin selection by Hive optimizer that considers oversubscription of memory from other executors to have more room for in-memory hash tables to allow more map join conversions. Considering 3 GB per executor, this size can be oversubscribed to 3 GB, but some heap memory may also be used for sort buffers, shuffle buffers, etc. by the other operations.
So for D14 v2, with 3 GB memory per executor, it's recommended to set this value to **2048 MB**. (Note: This value may need adjustments that are suitable for your workload. Setting this value too low may not use autoconvert feature. And setting it too high may result into out of memory exceptions or GC pauses that can result into adverse performance.)
Ambari environment variables: ***num_llap_nodes, num_llap_nodes_for_llap_daemons
**num_llap_nodes** - specifies the number of nodes used by the Hive LLAP service; this includes nodes running the LLAP daemon, LLAP Service Master, and Tez Application Master (Tez AM). :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes.png " alt-text="`Number of Nodes for LLAP service`" border="true"::: - **num_llap_nodes_for_llap_daemons** - specifies the number of nodes used only for LLAP daemons. LLAP daemon container sizes are set to fit the node, so it results in one LLAP daemon on each node. :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes_for_llap_daemons.png " alt-text="`Number of Nodes for LLAP daemons`" border="true":::
It's recommended to keep both values same as number of worker nodes in Interacti
### **Considerations for Workload Management** If you want to enable workload management for LLAP, make sure you reserve enough capacity for workload management to function as expected. Workload management requires configuration of a custom YARN queue, in addition to the `llap` queue. Make sure you divide the total cluster resource capacity between the `llap` queue and the workload management queue in accordance with your workload requirements. Workload management spawns Tez Application Masters (Tez AMs) when a resource plan is activated.
-Please note:
+
+**Note:**
* Tez AMs spawned by activating a resource plan consume resources from the workload management queue as specified by `hive.server2.tez.interactive.queue`. * The number of Tez AMs would depend on the value of `QUERY_PARALLELISM` specified in the resource plan.
-* Once the workload management is active, Tez AMs in llap queue will not used. Only Tez AMs from workload management queue are used for query coordination. Tez AMs in the `llap` queue are used when workload management is disabled.
+* Once workload management is active, Tez AMs in the LLAP queue will not be used. Only Tez AMs from the workload management queue are used for query coordination. Tez AMs in the `llap` queue are used when workload management is disabled.
For example:
-Total cluster capacity = 100 GB memory, divided between LLAP, Workload Management, and Default queues as follows:
+Total cluster capacity = 100-GB memory, divided between LLAP, Workload Management, and Default queues as follows:
+ - LLAP queue capacity = 70 GB
- Workload management queue capacity = 20 GB - Default queue capacity = 10 GB
-With 20 GB in workload management queue capacity, a resource plan can specify `QUERY_PARALLELISM` value as five, which means workload management can launch five Tez AMs with 4 GB container size each. If `QUERY_PARALLELISM` is higher than the capacity, you may see some Tez AMs stop responding in `ACCEPTED` state. The Hiveserver2 Interactive cannot submit query fragments to the Tez AMs that are not in `RUNNING` state.
+With 20 GB of workload management queue capacity, a resource plan can specify a `QUERY_PARALLELISM` value of five, which means workload management can launch five Tez AMs with a 4 GB container size each. If `QUERY_PARALLELISM` is higher than the capacity, you may see some Tez AMs stop responding in the `ACCEPTED` state. HiveServer2 Interactive cannot submit query fragments to Tez AMs that are not in the `RUNNING` state.
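
A quick sketch of that arithmetic, using the example values above:

```powershell
# Sketch: how many Tez AMs fit in a 20 GB workload management queue at 4 GB per Tez AM.
$wmQueueCapacityGB   = 20
$tezAmSizeGB         = 4
$maxQueryParallelism = [math]::Floor($wmQueueCapacityGB / $tezAmSizeGB)   # 5
"QUERY_PARALLELISM should not exceed $maxQueryParallelism for this queue capacity"
```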
#### **Next Steps**
If setting these values didn't resolve your issue, visit one of the following...
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, please review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
* ##### **Other References:** * [Configure other LLAP properties](https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/performance-tuning/content/hive_setup_llap.html)
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
FHIR service of Azure Health Data Services provides the following roles:
* **FHIR Data Exporter**: Can read and export ($export operator) data. * **FHIR Data Contributor**: Can perform all data plane operations. * **FHIR Data Converter**: Can use the converter to perform data conversion.
+* **FHIR SMART User**: Role allows user to read and write FHIR data according to the [SMART IG V1.0.0 specifications](http://hl7.org/fhir/smart-app-launch/1.0.0/).
DICOM service of Azure Health Data Services provides the following roles:
iot-central Tutorial Industrial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-industrial-end-to-end.md
The IoT Edge deployment manifest defines four custom modules:
- [opcpublisher](https://github.com/Azure/Industrial-IoT/blob/main/docs/modules/publisher.md) - forwards OPC-UA data from an OPC-UA server to the **miabgateway**. - [miabgateway](https://github.com/iot-for-all/iotc-miab-gateway) - gateway to send OPC-UA data to your IoT Central app and handle commands sent from your IoT Central app.
-You can see the deployment manifest in the tool configuration file. The manifest is part of the device template that the tool adds to your IoT Central application.
+You can see the deployment manifest in the tool configuration file. The tool assigns the deployment manifest to the IoT Edge device it registers in your IoT Central application.
To learn more about how to use the REST API to deploy and configure the IoT Edge runtime, see [Run Azure IoT Edge on Ubuntu Virtual Machines](../../iot-edge/how-to-install-iot-edge-ubuntuvm.md).
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
Here are the main steps of the token service pattern:
2. When a device/module needs to access your IoT hub, it requests a signed token from your token service. The device can authenticate with your custom identity registry/authentication scheme to determine the device/module identity that the token service uses to create the token.
-3. The token service returns a token. The token is created by using `/devices/{deviceId}` or `/devices/{deviceId}/module/{moduleId}` as `resourceURI`, with `deviceId` as the device being authenticated or `moduleId` as the module being authenticated. The token service uses the shared access policy to construct the token.
+3. The token service returns a token. The token is created by using `/devices/{deviceId}` or `/devices/{deviceId}/modules/{moduleId}` as `resourceURI`, with `deviceId` as the device being authenticated or `moduleId` as the module being authenticated. The token service uses the shared access policy to construct the token.
4. The device/module uses the token directly with the IoT hub.
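
The following is a minimal Python sketch of the signing logic a token service might use for step 3; the hub hostname, device ID, and key are placeholders:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key, policy_name=None, ttl_seconds=3600):
    """Build an IoT Hub shared access signature for the given resource URI."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)

    # Sign "<url-encoded-resource-uri>\n<expiry>" with the base64-decoded shared access key
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key), to_sign, hashlib.sha256).digest()
    ).decode("utf-8")

    token = f"SharedAccessSignature sr={encoded_uri}&sig={urllib.parse.quote_plus(signature)}&se={expiry}"
    if policy_name:
        token += f"&skn={policy_name}"
    return token

# Example: a token scoped to a single device identity (placeholders)
print(generate_sas_token("<iothub-name>.azure-devices.net/devices/<deviceId>", "<base64-shared-access-key>"))
```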
If you would like to try out some of the concepts described in this article, see
* [Get started with Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
* [How to send cloud-to-device messages with IoT Hub](iot-hub-csharp-csharp-c2d.md)
-* [How to process IoT Hub device-to-cloud messages](tutorial-routing.md)
+* [How to process IoT Hub device-to-cloud messages](tutorial-routing.md)
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Azure Machine Learning lets you bring data from a local machine or an existing c
> [!div class="checklist"]
> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier is a reference to a storage location on your local computer or in the cloud that makes it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs: `uri_file` and `uri_folder`. If you want to consume a file as an input of a job, you can define this job input by providing `type` as `uri_file` and `path` as the file's location.
-> - [**MLTable**](#mltable) - `MLTable` helps you to abstract the schema definition for tabular data so it is more suitable for complex/changing schema or to be leveraged in automl. If you just want to create an data asset for a job or you want to write your own parsing logic in python you could use `uri_file`, `uri_folder`.
+> - [**MLTable**](#mltable) - `MLTable` helps you abstract the schema definition for tabular data, making it more suitable for complex or changing schemas or for use in AutoML. If you just want to create a data asset for a job, or you want to write your own parsing logic in Python, you can use `uri_file` or `uri_folder`.
> - [**Data asset**](#data-asset) - If you plan to share your data (URIs or MLTables) with team members in your workspace, or you want to track data versions or lineage, you can create data assets from the URIs or MLTables you have. If you don't create a data asset, you can still consume the data in jobs, just without lineage tracking, version management, and so on.
> - [**Datastore**](#datastore) - Azure Machine Learning datastores securely keep the connection information (storage container name, credentials) to your data storage on Azure, so you don't have to code it in your scripts. You can use an Azure ML datastore URI and a relative path to point to your data. You can also register files/folders in your Azure ML datastore as data assets.
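
For example, a minimal sketch (with hypothetical datastore paths) of defining both kinds of URI inputs with the Azure Machine Learning SDK v2 could look like this:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Reference a single file as a job input (hypothetical path)
file_input = Input(
    type=AssetTypes.URI_FILE,
    path="azureml://datastores/workspaceblobstore/paths/data/heart.csv",
)

# Reference a whole folder as a job input (hypothetical path)
folder_input = Input(
    type=AssetTypes.URI_FOLDER,
    path="azureml://datastores/workspaceblobstore/paths/data/",
)
```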
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
+
+ Title: "Input data for batch endpoints jobs"
+
+description: Learn how to access data from different sources in batch endpoints jobs.
++++++ Last updated : 10/10/2022++++
+# Input data for batch endpoints jobs
+
+Batch endpoints can be used to perform batch scoring on large amounts of data. Such data can be stored in different locations. In this tutorial, we'll cover the locations batch endpoints can read data from and how to reference them.
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Specifically, we're using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## Supported data inputs
+
+Batch endpoints support reading files located in the following storage options:
+
+* Azure Machine Learning Data Stores. The following stores are supported:
+ * Azure Blob Storage
+ * Azure Data Lake Storage Gen1
+ * Azure Data Lake Storage Gen2
+* Azure Machine Learning Data Assets. The following types are supported:
+ * Data assets of type Folder (`uri_folder`).
+ * Data assets of type File (`uri_file`).
+ * Datasets of type `FileDataset` (Deprecated).
+* Azure Storage Accounts. The following storage containers are supported:
+ * Azure Data Lake Storage Gen1
+ * Azure Data Lake Storage Gen2
+ * Azure Blob Storage
+
+> [!TIP]
+> Local data folders/files can be used when executing batch endpoints from the Azure ML CLI or Azure ML SDK for Python. However, that operation results in the local data being uploaded to the default Azure Machine Learning Data Store of the workspace you're working on.
+
+> [!IMPORTANT]
+> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 dataset.
++
+## Reading data from data stores
+
+Data from Azure Machine Learning registered data stores can be directly referenced by batch deployment jobs. In this example, we're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
+
+1. Let's get access to the default data store in the Azure Machine Learning workspace. If your data is in a different store, you can use that store instead; there's no requirement to use the default data store.
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ DATASTORE_ID=$(az ml datastore show -n workspaceblobstore | jq -r '.id')
+ ```
+
+ > [!NOTE]
+    > The data store ID would look like `azureml:/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>`.
+
+ # [Python](#tab/sdk)
+
+ ```python
+ default_ds = ml_client.datastores.get_default()
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the data store information.
+
+
+
+ > [!TIP]
+ > The default blob data store in a workspace is called __workspaceblobstore__. You can skip this step if you already know the resource ID of the default data store in your workspace.
+
+1. We'll need to upload some sample data to it. This example assumes you've uploaded the sample data included in the repo folder `sdk/python/endpoints/batch/heart-classifier/data` to the folder `heart-classifier/data` in the blob storage account. Ensure you've done that before moving forward.
+
+1. Create a data input:
+
+ # [Azure CLI](#tab/cli)
+
+ Let's place the file path in the following variable:
+
+ ```azurecli
+ DATA_PATH="heart-disease-uci-unlabeled"
+ INPUT_PATH="$DATASTORE_ID/paths/$DATA_PATH"
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ data_path = "heart-classifier/data"
+    input = Input(type=AssetTypes.URI_FOLDER, path=f"{default_ds.id}/paths/{data_path}")
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the subscription ID, resource group, workspace, and name of the data store. You will need them later.
+
+
+
+ > [!NOTE]
+    > See how `paths` is appended to the resource ID of the data store to indicate that what follows is a path inside of it.
+
+ > [!TIP]
+ > You can also use `azureml:/datastores/<data-store>/paths/<data-path>` as a way to indicate the input.
+
+1. Run the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ ```bash
+    INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $INPUT_PATH)
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __Request__
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+                "Uri": "azureml:/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>/paths/<data-path>"
+ }
+ }
+ }
+ }
+ ```
+
+## Reading data from a data asset
+
+Azure Machine Learning data assets (formerly known as datasets) are supported as inputs for jobs. Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
+
+> [!WARNING]
+> Data assets of type Table (`MLTable`) aren't currently supported.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset.
+
+ # [Azure CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-classifier-mlflow/data
+ ```
+
+ Then, create the data asset:
+
+ ```bash
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ data_path = "heart-classifier-mlflow/data"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ```
+
+ Then, create the data asset:
+
+ ```python
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+ To get the newly created data asset, use:
+
+ ```python
+ heart_dataset_unlabeled = ml_client.data.get(name=dataset_name)
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You will need them later.
++
+1. Create a data input:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ DATASET_ID=$(az ml data show -n heart-dataset-unlabeled --label latest --query id)
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ ```
+
+ # [REST](#tab/rest)
+
+ This step isn't required.
+
+
+
+ > [!NOTE]
+    > The data asset ID would look like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`.
++
+1. Run the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ ```bash
+    INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $DATASET_ID)
+ ```
+
+ > [!TIP]
+ > You can also use `--input azureml:/<dataasset_name>@latest` as a way to indicate the input.
+
+ # [Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __Request__
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "azureml://locations/<location>/workspaces/<workspace>/data/<dataset_name>/versions/labels/latest"
+ }
+ }
+ }
+ }
+ ```
+
+## Reading data from Azure Storage Accounts
+
+Azure Machine Learning batch endpoints can read data from cloud locations in Azure Storage Accounts, both public and private. Use the following steps to run a batch endpoint job using data stored in a storage account:
+
+> [!NOTE]
+> Check the section [Security considerations when reading data](#security-considerations-when-reading-data) to learn more about the additional configuration required to successfully read data from storage accounts.
+
+1. Create a data input:
+
+ # [Azure CLI](#tab/cli)
+
+ This step isn't required.
+
+ # [Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ ```
+
+ If your data is a file, change `type=AssetTypes.URI_FILE`:
+
+ ```python
+ input = Input(type=AssetTypes.URI_FILE, path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv")
+ ```
+
+ # [REST](#tab/rest)
+
+ This step isn't required.
++
+1. Run the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ ```bash
+    INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-type uri_folder --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data)
+ ```
+
+ If your data is a file, change `--input-type uri_file`:
+
+ ```bash
+    INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-type uri_file --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv)
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __Request__
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ }
+ }
+ }
+ }
+ ```
+
+ If your data is a file, change `JobInputType`:
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+                "JobInputType" : "UriFile",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv"
+ }
+ }
+ }
+ }
+ ```
+
+
+## Security considerations when reading data
+
+Batch endpoints ensure that only authorized users are able to invoke batch deployments and generate jobs. However, depending on how the input data is configured, other credentials may be used to read the underlying data. Use the following table to understand which credentials are used and any additional requirements.
+
+| Data input type | Credential in store | Credentials used | Access granted by |
+|---|---|---|---|
+| Data store | Yes | Data store's credentials in the workspace | Credentials |
+| Data store | No | Identity of the job | Depends on type |
+| Data asset | Yes | Data store's credentials in the workspace | Credentials |
+| Data asset | No | Identity of the job | Depends on store |
+| Azure Blob Storage | Not applicable | Identity of the job + Managed identity of the compute cluster | RBAC |
+| Azure Data Lake Storage Gen1 | Not applicable | Identity of the job + Managed identity of the compute cluster | POSIX |
+| Azure Data Lake Storage Gen2 | Not applicable | Identity of the job + Managed identity of the compute cluster | POSIX and RBAC |
+
+The managed identity of the compute cluster is used for mounting and configuring the data store. That means that in order to successfully read data from external storage services, the managed identity of the compute cluster where the deployment is running must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
+
+> [!NOTE]
+> To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure ML and other services](how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same endpoint) can be configured to run on different clusters, so you can administer the permissions accordingly depending on your requirements.
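+
+As a quick sanity check, a minimal sketch (assuming the Azure ML SDK v2 and placeholder workspace details) to inspect which identity is configured on the compute cluster could look like this:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# Placeholder workspace details
+ml_client = MLClient(DefaultAzureCredential(), "<subscription>", "<resource-group>", "<workspace>")
+
+# Inspect the identity configured on the deployment's compute cluster
+compute = ml_client.compute.get("cpu-cluster")
+print(compute.identity)
+```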
+
+## Next steps
+
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
+* [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
+
+ Title: "Authentication on batch endpoints"
+
+description: Learn how authentication works on Batch Endpoints.
++++++ Last updated : 10/10/2022++++
+# Authentication on batch endpoints
+
+Batch endpoints support Azure Active Directory authentication, or `aad_token`. That means that in order to invoke a batch endpoint, the user must present a valid Azure Active Directory authentication token to the batch endpoint URI. Authorization is enforced at the endpoint level. This article explains how to correctly interact with batch endpoints and the security requirements for it.
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## How authentication works
+
+To invoke a batch endpoint, the user must present a valid Azure Active Directory token representing a __security principal__. This principal can be a __user principal__ or a __service principal__. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. The identity needs the following permissions in order to successfully create a job:
+
+> [!div class="checklist"]
+> * Read batch endpoints/deployments.
+> * Create jobs in batch inference endpoints/deployment.
+> * Create experiments/runs.
+> * Read and write from/to data stores.
+> * List datastore secrets.
+
+You can either use one of the [built-in security roles](../role-based-access-control/built-in-roles.md) or create a new one. In any case, the identity used to invoke the endpoints must be granted these permissions explicitly. See [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md) for instructions to assign them.
+
+> [!IMPORTANT]
+> The identity used for invoking a batch endpoint may not be used to read the underlying data depending on how the data store is configured. Please see [Security considerations when reading data](how-to-access-data-batch-endpoints-jobs.md#security-considerations-when-reading-data) for more details.
+
+## How to run jobs using different types of credentials
+
+The following examples show different ways to start batch deployment jobs using different types of credentials:
+
+> [!IMPORTANT]
+> When working on a private link-enabled workspace, batch endpoints can't be invoked from the UI in Azure ML studio. Please use the Azure ML CLI v2 instead for job creation.
+
+### Running jobs using user's credentials
+
+In this case, we want to execute a batch endpoint using the identity of the user currently logged in. Follow these steps:
+
+> [!NOTE]
+> When working in Azure ML studio, batch endpoints/deployments are always executed using the identity of the currently signed-in user.
+
+# [Azure ML CLI](#tab/cli)
+
+1. Use the Azure CLI to log in using either interactive or device code authentication:
+
+ ```azurecli
+ az login
+ ```
+
+1. Once authenticated, use the following command to run a batch deployment job:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
+ ```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+1. Use the Azure ML SDK for Python to log in using either interactive or device authentication:
+
+ ```python
+ from azure.ai.ml import MLClient
+    from azure.identity import InteractiveBrowserCredential
+
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+    ml_client = MLClient(InteractiveBrowserCredential(), subscription_id, resource_group, workspace)
+ ```
+
+1. Once authenticated, use the following command to run a batch deployment job:
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
+ )
+ ```
+
+# [REST](#tab/rest)
+
+When working with REST APIs, we recommend using either a [service principal](#running-jobs-using-a-service-principal) or a [managed identity](#running-jobs-using-a-managed-identity) to interact with the API.
+++
+### Running jobs using a service principal
+
+In this case, we want to execute a batch endpoint using a service principal already created in Azure Active Directory. To complete the authentication, you'll have to create a secret for it. Follow these steps:
+
+# [Azure ML CLI](#tab/cli)
+
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. To authenticate using a service principal, use the following command. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+ ```bash
+ az login --service-principal -u <app-id> -p <password-or-cert> --tenant <tenant>
+ ```
+
+1. Once authenticated, use the following command to run a batch deployment job:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/
+ ```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. To authenticate using a service principal, indicate the tenant ID, client ID and client secret of the service principal using environment variables as demonstrated:
+
+ ```python
+    import os
+
+    from azure.ai.ml import MLClient
+    from azure.identity import EnvironmentCredential
+
+    os.environ["AZURE_TENANT_ID"] = "<TENANT_ID>"
+    os.environ["AZURE_CLIENT_ID"] = "<CLIENT_ID>"
+    os.environ["AZURE_CLIENT_SECRET"] = "<CLIENT_SECRET>"
+
+    subscription_id = "<subscription>"
+    resource_group = "<resource-group>"
+    workspace = "<workspace>"
+
+    ml_client = MLClient(EnvironmentCredential(), subscription_id, resource_group, workspace)
+ ```
+
+1. Once authenticated, use the following command to run a batch deployment job:
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
+ )
+ ```
+
+# [REST](#tab/rest)
+
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+
+1. Use the login service from Azure to get an authorization token. Authorization tokens are issued for a particular scope. The resource scope for Azure Machine Learning is `https://ml.azure.com`. The request would look as follows:
+
+ __Request__:
+
+ ```http
+ POST /{TENANT_ID}/oauth2/token HTTP/1.1
+ Host: login.microsoftonline.com
+ ```
+
+ __Body__:
+
+ ```
+ grant_type=client_credentials&client_id=<CLIENT_ID>&client_secret=<CLIENT_SECRET>&resource=https://ml.azure.com
+ ```
+
+ > [!IMPORTANT]
+    > Notice that the resource scope for invoking a batch endpoint (`https://ml.azure.com`) is different from the resource scope used to manage them. All management APIs in Azure use the resource scope `https://management.azure.com`, including Azure Machine Learning.
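+
+    As an illustration of the difference between the two scopes, the following sketch uses the `azure-identity` package (all values are placeholders) to acquire a token for each of them:
+
+    ```python
+    from azure.identity import ClientSecretCredential
+
+    credential = ClientSecretCredential(
+        tenant_id="<TENANT_ID>",
+        client_id="<CLIENT_ID>",
+        client_secret="<CLIENT_SECRET>",
+    )
+
+    # Scope used to invoke batch endpoints (data plane)
+    invoke_token = credential.get_token("https://ml.azure.com/.default").token
+
+    # Scope used by management APIs (control plane)
+    management_token = credential.get_token("https://management.azure.com/.default").token
+    ```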
+
+3. Once authenticated, use the query to run a batch deployment job:
+
+ __Request__:
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
+ __Body:__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
+ }
+ }
+ }
+ }
+ ```
+++
+### Running jobs using a managed identity
+
+You can use managed identities to invoke batch endpoints and deployments. Notice that this managed identity doesn't belong to the batch endpoint; it is the identity used to execute the endpoint and hence create a batch job. Both user-assigned and system-assigned identities can be used in this scenario.
+
+# [Azure ML CLI](#tab/cli)
+
+On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Signing in with the resource's identity is done through the `--identity` flag. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+```bash
+az login --identity
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+On resources configured for managed identities for Azure resources, you can sign in using the managed identity. For a user-assigned identity, pass its client ID to the `ManagedIdentityCredential` object as demonstrated in the following example (for a system-assigned identity, no ID is required):
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import ManagedIdentityCredential
+
+subscription_id = "<subscription>"
+resource_group = "<resource-group>"
+workspace = "<workspace>"
+client_id = "<client-id>"
+
+ml_client = MLClient(ManagedIdentityCredential(client_id=client_id), subscription_id, resource_group, workspace)
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
+ )
+```
+
+# [REST](#tab/rest)
+
+You can use the REST API of Azure Machine Learning to start a batch endpoint job using a managed identity. The steps vary depending on the underlying service being used. Some examples include (but are not limited to):
+
+* [Managed identity for Azure Data Factory](../data-factory/data-factory-service-identity.md)
+* [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md).
+* [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
+
+You can also use the Azure CLI to get an authentication token for the managed identity and then pass it to the batch endpoint's URI.
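+
+The following is a minimal sketch of that idea in Python: it uses `ManagedIdentityCredential` from `azure-identity` to acquire a token for the `https://ml.azure.com` scope and then posts a job to the batch endpoint URI with the `requests` package (the endpoint URI and input name are placeholders):
+
+```python
+import requests
+from azure.identity import ManagedIdentityCredential
+
+# Placeholder: the scoring URI shown on the batch endpoint's details page
+endpoint_uri = "https://<ENDPOINT_NAME>.<REGION>.inference.ml.azure.com/jobs"
+
+# Acquire a token with the resource's managed identity, scoped to invoking batch endpoints
+credential = ManagedIdentityCredential()
+token = credential.get_token("https://ml.azure.com/.default").token
+
+body = {
+    "properties": {
+        "InputData": {
+            "heartDataset": {
+                "JobInputType": "UriFolder",
+                "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
+            }
+        }
+    }
+}
+
+response = requests.post(
+    endpoint_uri,
+    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
+    json=body,
+)
+print(response.status_code, response.json())
+```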
+++
+## Next steps
+
+* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md)
+* [Invoking batch endpoints from Event Grid events in storage](how-to-use-event-grid-batch.md).
+* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
+
+ Title: 'Author scoring scripts for batch deployments'
+
+description: In this article, learn how to author scoring scripts to perform batch inference in batch deployments.
+++++++ Last updated : 11/03/2022+++
+# Author scoring scripts for batch deployments
++
+Batch endpoints allow you to deploy models to perform inference at scale. Because how inference is executed varies with the model's format, type, and use case, batch endpoints require a scoring script (also known as a batch driver script) to tell the deployment how to use the model over the provided data. In this article, you'll learn how to use scoring scripts in different scenarios, along with their best practices.
+
+> [!TIP]
+> MLflow models don't require a scoring script, as it is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md). Notice that this feature doesn't prevent you from writing a specific scoring script for MLflow models, as explained at [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script).
+
+> [!WARNING]
+> If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and isn't designed for batch execution. Please follow this guide to learn how to create one depending on what your model does.
+
+## Understanding the scoring script
+
+The scoring script is a Python file (`.py`) that contains the logic for how to run the model and read the input data submitted by the batch deployment executor. Each model deployment has to provide a scoring script; however, an endpoint may host multiple deployments that use different scoring script versions.
+
+The scoring script must contain two methods:
+
+#### The `init` method
+
+Use the `init()` method for any costly or common preparation. For example, use it to load the model into a global object. This function is called once at the beginning of the process. Your model's files are available in the path given by an environment variable called `AZUREML_MODEL_DIR`. Use this variable to locate the files associated with the model.
+
+```python
+import os
+
+def init():
+    global model
+
+    # AZUREML_MODEL_DIR is an environment variable created during deployment
+    # The path "model" is the name of the registered model's folder
+    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+    # load the model (load_model is a placeholder for your framework's loading function)
+    model = load_model(model_path)
+```
+
+Notice that in this example we place the model in a global variable `model`. Use global variables to make any assets needed to perform inference available to your scoring function.
+
+#### The `run` method
+
+Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. This method is called once per `mini_batch` generated from your input data. Batch deployments read data in batches according to how the deployment is configured.
+
+```python
+import pandas as pd
+
+def run(mini_batch):
+    results = []
+
+    for file in mini_batch:
+        (...)
+
+    return pd.DataFrame(results)
+```
+
+The method receives a list of file paths as a parameter (`mini_batch`). You can use this list to either iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option depends on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once, see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
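+
+For instance, a minimal sketch (assuming CSV inputs and a global `model` loaded in `init()`) of reading the whole mini-batch at once instead of file by file could look like this:
+
+```python
+import pandas as pd
+
+def run(mini_batch):
+    # Concatenate every file of the mini-batch into a single DataFrame
+    data = pd.concat((pd.read_csv(file_path) for file_path in mini_batch))
+
+    # Score all rows in a single call to the model
+    data["prediction"] = model.predict(data)
+    return data
+```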
+
+> [!NOTE]
+> __How is work distributed?__:
+>
+> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution.
+
+The `run()` method should return a pandas DataFrame or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
+
+> [!IMPORTANT]
+> __How to write predictions?__:
+>
+> Use __arrays__ when you need to output a single prediction per input. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record; use a pandas DataFrame in that case. For file datasets, __we still recommend outputting a pandas DataFrame__ as it provides a more robust way to read the results.
+>
+> Although pandas DataFrame may contain column names, they are not included in the output file. If needed, please see [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+
+> [!WARNING]
+> Do not output complex data types (or lists of complex data types) in the `run` function. Those outputs will be transformed to strings and they will be hard to read.
+
+The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (one file can generate one or many rows/elements in the output). All elements in the result DataFrame or array are written to the output file as-is (provided the `output_action` isn't `summary_only`).
+
+## Writing predictions in a different way
+
+By default, the batch deployment writes the model's predictions in a single file as indicated in the deployment. However, there are some cases where you need to write the predictions in multiple files. For instance, if the input data is partitioned, you typically would want to generate your output partitioned too. In those cases you can [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) to indicate:
+
+> [!div class="checklist"]
+> * The file format used (CSV, Parquet, JSON, and so on).
+> * The way data is partitioned in the output.
+
+Read the article [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) for an example about how to achieve it.
+
+## Source control of scoring scripts
+
+It is highly advisable to put scoring scripts under source control.
+
+## Best practices for writing scoring scripts
+
+When writing scoring scripts that work with large amounts of data, you need to take into account several factors, including:
+
+* The size of each file.
+* The amount of data on each file.
+* The amount of memory required to read each file.
+* The amount of memory required to read an entire batch of files.
+* The memory footprint of the model.
+* The memory footprint of the model when running over the input data.
+* The available memory in your compute.
+
+Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution.
+
+### Running inference at the mini-batch, file or the row level
+
+Batch endpoints call the `run()` function in your scoring script once per mini-batch. However, you can decide whether to run inference over the entire batch, over one file at a time, or over one row at a time (if your data happens to be tabular).
+
+#### Mini-batch level
+
+You will typically want to run inference over the batch all at once when you need to achieve high throughput in your batch scoring process. This is the case, for instance, if you run inference over a GPU where you want to achieve saturation of the inference device. You may also be relying on a data loader that can handle the batching itself if data doesn't fit in memory, like `TensorFlow` or `PyTorch` data loaders. In those cases, you may want to consider running inference on the entire batch.
+
+> [!WARNING]
+> Running inference at the batch level may require having close control over the input data size to be able to correctly account for the memory requirements and avoid out-of-memory exceptions. Whether you're able to load the entire mini-batch in memory depends on the size of the mini-batch, the size of the instances in the cluster, and the number of workers on each node.
+
+For an example of how to achieve this, see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
+
+#### File level
+
+One of the easiest ways to perform inference is by iterating over all the files in the mini-batch and running your model over each of them. In some cases, like image processing, this may be a good idea. If your data is tabular, you may need to make a good estimation of the number of rows in each file to estimate whether your model is able to handle the memory requirements of not just loading the entire data into memory but also performing inference over it. Remember that some models (especially those based on recurrent neural networks) unfold and present a memory footprint that may not be linear with the number of rows. If your model is expensive in terms of memory, please consider running inference at the row level.
+
+> [!TIP]
+> If file sizes are too big to be read even one at a time, please consider breaking them down into multiple smaller files to achieve better parallelization.
+
+For an example of how to achieve this, see [Image processing with batch deployments](how-to-image-processing-batch.md).
+
+#### Row level (tabular)
+
+For models that present challenges with the size of their inputs, you may want to consider running inference at the row level. Your batch deployment still provides your scoring script with a mini-batch of files; however, you read one file, one row at a time. This may look inefficient, but for some deep learning models it may be the only way to perform inference without scaling up your hardware requirements.
+
+For an example of how to achieve this, see [Text processing with batch deployments](how-to-nlp-processing-batch.md).
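+
+A minimal sketch of this row-by-row pattern (assuming CSV inputs and a global `model` loaded in `init()`) could look like this:
+
+```python
+import pandas as pd
+
+def run(mini_batch):
+    results = []
+    for file_path in mini_batch:
+        # chunksize=1 yields one-row DataFrames, keeping the memory footprint minimal
+        for row in pd.read_csv(file_path, chunksize=1):
+            row["prediction"] = model.predict(row)
+            results.append(row)
+    return pd.concat(results)
+```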
+
+### Relationship between the degree of parallelism and the scoring script
+
+Your deployment configuration controls the size of each mini-batch and the number of workers on each node. Take them into account when deciding whether you want to read the entire mini-batch to perform inference. When running multiple workers on the same instance, take into account that memory is shared across all the workers. Usually, increasing the number of workers per node should be accompanied by a decrease in the mini-batch size or by a change in the scoring strategy (if data size remains the same).
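+
+As a back-of-the-envelope illustration (the numbers below are placeholders, not recommendations), you can estimate how much memory each worker has available for data:
+
+```python
+# Illustrative numbers only; replace with the characteristics of your cluster and model
+instance_memory_gb = 16      # memory of each node in the compute cluster
+workers_per_node = 4         # max_concurrency_per_instance in the deployment
+model_footprint_gb = 1.5     # approximate memory used by one loaded copy of the model
+
+memory_per_worker_gb = instance_memory_gb / workers_per_node
+available_for_data_gb = memory_per_worker_gb - model_footprint_gb
+print(f"Each worker can hold roughly {available_for_data_gb:.1f} GB of input data in memory")
+```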
+
+## Next steps
+
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
+* [Use MLflow models in batch deployments](how-to-mlflow-batch.md).
+* [Image processing with batch deployments](how-to-image-processing-batch.md).
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-custom-output.md
+
+ Title: "Customize outputs in batch deployments"
+
+description: Learn how create deployments that generate custom outputs and files.
++++++ Last updated : 10/10/2022++++
+# Customize outputs in batch deployments
++
+Sometimes you need to execute inference with greater control over what is written as the output of the batch job. Those cases include:
+
+> [!div class="checklist"]
+> * You need to control how the predictions are being written in the output. For instance, you want to append the prediction to the original data (if data is tabular).
+> * You need to write your predictions in a different file format from the one supported out-of-the-box by batch deployments.
+> * Your model is a generative model that can't write the output in a tabular format. For instance, models that produce images as outputs.
+> * Your model produces multiple tabular files instead of a single one. This is the case, for instance, for models that perform forecasting over multiple scenarios.
+
+In any of those cases, batch deployments allow you to take control of the output of the jobs by letting you write directly to the output of the batch deployment job. In this tutorial, we'll see how to deploy a model to perform batch inference and write the outputs in `parquet` format by appending the predictions to the original input data.
+
+## About this sample
+
+This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence).
+
+The model has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [custom-output-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/custom-output-batch.ipynb).
+
+## Prerequisites
++
+* A model registered in the workspace. In this tutorial, we'll use an MLflow model. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `heart-classifier-batch`.
+* You must have a compute cluster created where the deployment can run. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+
+## Creating a batch deployment with a custom output
+
+In this example, we are going to create a deployment that can write directly to the output folder of the batch deployment job. The deployment will use this feature to write custom parquet files.
+
+### Registering the model
+
+Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
+
+# [Azure ML CLI](#tab/cli)
+
+```azurecli
+MODEL_NAME='heart-classifier'
+az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model"
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+model_name = 'heart-classifier'
+model = ml_client.models.create_or_update(
+ Model(name=model_name, path='heart-classifier-mlflow/model', type=AssetTypes.MLFLOW_MODEL)
+)
+```
++
+> [!NOTE]
+> The model used in this tutorial is an MLflow model. However, the steps apply for both MLflow models and custom models.
+
+### Creating a scoring script
+
+We need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. We're also going to write directly to the output folder of the job. In summary, the proposed scoring script does the following:
+
+1. Reads the input data as CSV files.
+2. Runs an MLflow model `predict` function over the input data.
+3. Appends the predictions to a `pandas.DataFrame` along with the input data.
+4. Writes the data in a file named after the input file, but in `parquet` format.
+
+__batch_driver_parquet.py__
+
+```python
+import os
+import mlflow
+import pandas as pd
+from pathlib import Path
+
+def init():
+ global model
+ global output_path
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ # It is the path to the model folder
+ # Please provide your model's folder name if there's one:
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+ output_path = os.environ['AZUREML_BI_OUTPUT_PATH']
+ model = mlflow.pyfunc.load_model(model_path)
+
+def run(mini_batch):
+ for file_path in mini_batch:
+ data = pd.read_csv(file_path)
+ pred = model.predict(data)
+
+ data['prediction'] = pred
+
+ output_file_name = Path(file_path).stem
+ output_file_path = os.path.join(output_path, output_file_name + '.parquet')
+ data.to_parquet(output_file_path)
+
+ return mini_batch
+```
+
+__Remarks:__
+* Notice how the environment variable `AZUREML_BI_OUTPUT_PATH` is used to get access to the output path of the deployment job.
+* The `init()` function is populating a global variable called `output_path` that can be used later to know where to write.
+* The `run` method returns a list of the processed files. It is required for the `run` function to return a `list` or a `pandas.DataFrame` object.
+
+> [!WARNING]
+> Take into account that all the batch executors have write access to this path at the same time, which means you need to account for concurrency. In this case, we ensure each executor writes its own file by using the input file name as the name of the output file.
+
+### Creating the deployment
+
+Follow the next steps to create a deployment using the previous scoring script:
+
+1. First, let's create an environment where the scoring script can be executed:
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./heart-classifier-mlflow/environment/conda.yaml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+2. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments, as they are created for you. However, in this case we're going to indicate a scoring script and environment since we want to customize how inference is executed.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-parquet
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ environment:
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
+ conda_file: ./heart-classifier-mlflow/environment/conda.yaml
+ code_configuration:
+ code: ./heart-classifier-custom/code/
+ scoring_script: batch_driver_parquet.py
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: summary_only
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ DEPLOYMENT_NAME="classifier-xgboost-parquet"
+ az ml batch-deployment create -f endpoint.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, use the following script:
+
+ ```python
+ deployment = BatchDeployment(
+ name="classifier-xgboost-parquet",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./heart-classifier-mlflow/code/",
+ scoring_script="batch_driver_parquet.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.SUMMARY_ONLY,
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!IMPORTANT]
+ > Notice that now `output_action` is set to `SUMMARY_ONLY`.
+
+3. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+For testing our endpoint, we're going to use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that is located in the cloud and accessible from the Azure Machine Learning workspace. In this example, we're going to upload it to an Azure Machine Learning data store. Particularly, we're going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-dataset
+ ```
+
+ Then, create the data asset:
+
+ ```azurecli
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ data_path = "resources/heart-dataset/"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+1. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+    JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+    > The utility `jq` may not be installed on every system. You can find installation instructions at [this link](https://stedolan.github.io/jq/download/).
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ deployment_name=deployment.name,
+ input=input,
+ )
+ ```
+
+1. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+## Analyzing the outputs
+
+The job generates a named output called `score` where all the generated files are placed. Since we wrote directly into the directory, one file per input file, we can expect the same number of output files as input files. In this particular example, the output files are named the same as the inputs, but they have a parquet extension.
+
+> [!NOTE]
+> Notice that a file `predictions.csv` is also included in the output folder. This file contains the summary of the processed files.
+
+You can download the results of the job by using the job name:
+
+# [Azure ML CLI](#tab/cli)
+
+To download the predictions, use the following command:
+
+```azurecli
+az ml job download --name $JOB_NAME --output-name score --download-path ./
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+```
++
+Once the file is downloaded, you can open it using your favorite tool. The following example loads the predictions using `Pandas` dataframe.
+
+```python
+import pandas as pd
+import glob
+
+output_files = glob.glob("named-outputs/score/*.parquet")
+score = pd.concat((pd.read_parquet(f) for f in output_files))
+```
+
+The output looks as follows:
+
+| age | sex | ... | thal | prediction |
+| --- | --- | --- | --- | --- |
+| 63 | 1 | ... | fixed | 0 |
+| 67 | 1 | ... | normal | 1 |
+| 67 | 1 | ... | reversible | 0 |
+| 37 | 1 | ... | normal | 0 |
++
+## Next steps
+
+* [Using batch deployments for image file processing](how-to-image-processing-batch.md)
+* [Using batch deployments for NLP processing](how-to-nlp-processing-batch.md)
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
+
+ Title: "Image processing with batch deployments"
+
+description: Learn how to deploy a model in batch endpoints that process images
++++++ Last updated : 10/10/2022++++
+# Image processing with batch deployments
++
+Batch endpoints can be used for processing tabular data, but also any other file type, like images. Those deployments are supported in both MLflow and custom models. In this tutorial, we'll learn how to deploy a model that classifies images according to the ImageNet taxonomy.
+
+## About this sample
+
+The model we're going to work with was built using TensorFlow along with the ResNet architecture ([Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)). A sample of this model can be downloaded from `https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip`. The model has the following constraints that are important to keep in mind for deployment:
+
+* It works with images of size 244x244 (tensors of `(244, 244, 3)`).
+* It requires inputs to be scaled to the range `[0,1]`.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [imagenet-classifier-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/imagenet-classifier-batch.ipynb).
+
+## Prerequisites
++
+* You must have a batch endpoint already created. This example assumes the endpoint is named `imagenet-classifier-batch`. If you don't have one, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+* You must have a compute cluster created where the deployment can run. This example assumes the name of the compute is `cpu-cluster`. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute).
+
+## Image classification with batch deployments
+
+In this example, we are going to learn how to deploy a deep learning model that can classify a given image according to the [taxonomy of ImageNet](https://image-net.org/).
+
+### Registering the model
+
+Batch Endpoint can only deploy registered models, so we need to register the model first. You can skip this step if the model you are trying to deploy is already registered.
+
+1. Download a copy of the model:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ wget https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip
+ mkdir -p imagenet-classifier
+ unzip model.zip -d imagenet-classifier
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ import os
+ import urllib.request
+ from zipfile import ZipFile
+
+ response = urllib.request.urlretrieve('https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip', 'model.zip')
+
+    os.makedirs("imagenet-classifier", exist_ok=True)
+    with ZipFile(response[0], 'r') as zip:
+        zip.extractall(path="imagenet-classifier")
+
+    # Path to the extracted model folder, used when registering the model below
+    model_path = "imagenet-classifier/model"
+ ```
+
+2. Register the model:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ MODEL_NAME='imagenet-classifier'
+ az ml model create --name $MODEL_NAME --type "custom_model" --path "imagenet-classifier/model"
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ model_name = 'imagenet-classifier'
+ model = ml_client.models.create_or_update(
+ Model(name=model_name, path=model_path, type=AssetTypes.CUSTOM_MODEL)
+ )
+ ```
+
+### Creating a scoring script
+
+We need to create a scoring script that can read the images provided by the batch deployment and return the scores of the model. The following script:
+
+> [!div class="checklist"]
+> * Indicates an `init` function that loads the model using the `keras` module in `tensorflow`.
+> * Indicates a `run` function that is executed for each mini-batch the batch deployment provides.
+> * The `run` function reads the images one file at a time.
+> * The `run` method resizes the images to the sizes the model expects.
+> * The `run` method rescales the images to the `[0,1]` range, which is what the model expects.
+> * It returns the classes and the probabilities associated with the predictions.
+
+__imagenet_scorer.py__
+
+```python
+import os
+import numpy as np
+import pandas as pd
+import tensorflow as tf
+from os.path import basename
+from PIL import Image
+from tensorflow.keras.models import load_model
++
+def init():
+ global model
+ global input_width
+ global input_height
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ model = load_model(model_path)
+ input_width = 244
+ input_height = 244
+
+def run(mini_batch):
+ results = []
+
+ for image in mini_batch:
+ data = Image.open(image).resize((input_width, input_height)) # Read and resize the image
+ data = np.array(data)/255.0 # Normalize
+ data_batch = tf.expand_dims(data, axis=0) # create a batch of size (1, 244, 244, 3)
+
+ # perform inference
+ pred = model.predict(data_batch)
+
+ # Compute probabilities, classes and labels
+ pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1)).numpy()
+ pred_class = tf.math.argmax(pred, axis=-1).numpy()
+
+ results.append([basename(image), pred_class[0], pred_prob])
+
+ return pd.DataFrame(results)
+```
+
+> [!TIP]
+> Although images are provided in mini-batches by the deployment, this scoring script processes one image at a time. This is a common pattern, as trying to load the entire batch and send it to the model at once may result in high memory pressure on the batch executor (OOM exceptions). However, there are certain cases where doing so enables high throughput in the scoring task. This is the case, for instance, of batch deployments over GPU hardware where we want to achieve high GPU utilization. See [High throughput deployments](#high-throughput-deployments) for an example of a scoring script that takes advantage of it.
+
+> [!NOTE]
+> If you are trying to deploy a generative model (one that generates files), please read how to author a scoring script as explained at [Deployment of models that produce multiple files](how-to-deploy-model-custom-output.md).
+
+### Creating the deployment
+
+Once the scoring script is created, it's time to create a batch deployment for it. Follow these steps to create it:
+
+1. We need to indicate over which environment we are going to run the deployment. In our case, our model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so we can reuse this environment. We are just going to add a couple of dependencies in a `conda.yml` file.
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./imagenet-classifier/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest",
+ )
+ ```
+
+1. Now, let's create the deployment.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `imagenet-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: imagenet-classifier-batch
+ name: imagenet-classifier-resnetv2
+ description: A ResNetV2 model architecture for performing ImageNet classification in batch
+ model: azureml:imagenet-classifier@latest
+ compute: azureml:cpu-cluster
+ environment:
+ image: mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest
+ conda_file: ./imagenet-classifier/environment/conda.yml
+ code_configuration:
+ code: ./imagenet-classifier/code/
+ scoring_script: imagenet_scorer.py
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 1
+ mini_batch_size: 5
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ DEPLOYMENT_NAME="imagenet-classifier-resnetv2"
+ az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment with the indicated environment and scoring script use the following code:
+
+ ```python
+ deployment = BatchDeployment(
+ name="imagenet-classifier-resnetv2",
+ description="A ResNetV2 model architecture for performing ImageNet classification in batch",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./imagenet-classifier/code/",
+ scoring_script="imagenet_scorer.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=1,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+1. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is called the "default" deployment. This gives you the possibility of changing the default deployment - and hence changing the model serving the deployment - without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+1. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+For testing our endpoint, we are going to use a sample of 1000 images from the original ImageNet dataset. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Particularly, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's download the associated sample data:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
+ unzip imagenet-1000.zip -d /tmp/imagenet-1000
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ !wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
+ !unzip imagenet-1000.zip -d /tmp/imagenet-1000
+ ```
+
+2. Now, let's create the data asset from the data we just downloaded:
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __imagenet-sample-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: imagenet-sample-unlabeled
+ description: A sample of 1000 images from the original ImageNet dataset.
+ type: uri_folder
+ path: /tmp/imagenet-1000
+ ```
+
+ Then, create the data asset:
+
+ ```azurecli
+ az ml data create -f imagenet-sample-unlabeled.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ data_path = "/tmp/imagenet-1000"
+ dataset_name = "imagenet-sample-unlabeled"
+
+ imagenet_sample = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="A sample of 1000 images from the original ImageNet dataset",
+ name=dataset_name,
+ )
+ ml_client.data.create_or_update(imagenet_sample)
+ ```
+
+3. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+    JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:imagenet-sample-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+    > The utility `jq` may not be installed on your system. You can find installation instructions at [this link](https://stedolan.github.io/jq/download/).
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=imagenet_sample.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+
+ > [!TIP]
+    > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, that one is the default. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
+
+4. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+5. Once the job is finished, we can download the predictions:
+
+ # [Azure ML CLI](#tab/cli)
+
+ To download the predictions, use the following command:
+
+ ```azurecli
+ az ml job download --name $JOB_NAME --output-name score --download-path ./
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+ ```
+
+6. The output predictions will look like the following. Notice that the predictions have been combined with the labels for the convenience of the reader. To learn more about how to achieve this, see the associated notebook (a hedged sketch of building such a label list also follows the table below).
+
+ ```python
+ import pandas as pd
+ score = pd.read_csv("named-outputs/score/predictions.csv", header=None, names=['file', 'class', 'probabilities'], sep=' ')
+ score['label'] = score['class'].apply(lambda pred: imagenet_labels[pred])
+ score
+ ```
+
+ | file | class | probabilities | label |
+    |--|--|--|--|
+ | n02088094_Afghan_hound.JPEG | 161 | 0.994745 | Afghan hound |
+ | n02088238_basset | 162 | 0.999397 | basset |
+ | n02088364_beagle.JPEG | 165 | 0.366914 | bluetick |
+ | n02088466_bloodhound.JPEG | 164 | 0.926464 | bloodhound |
+ | ... | ... | ... | ... |
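+
+The snippet above assumes an `imagenet_labels` list that maps class indexes to human-readable names; the associated notebook shows how it's built. As a hedged sketch only (the labels URL below is an assumption on our side, not taken from this article), one way to construct such a list is:
+
+```python
+import urllib.request
+
+# Assumption: the TensorFlow-hosted ImageNet labels file (one label per line)
+labels_url = "https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt"
+imagenet_labels = urllib.request.urlopen(labels_url).read().decode("utf-8").splitlines()
+```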
+
+
+## High throughput deployments
+
+As mentioned before, the deployment we just created processes one image at a time, even when the batch deployment provides a batch of them. In most cases this is the best approach, as it simplifies how the models execute and avoids any possible out-of-memory problems. However, in certain cases we may want to saturate the utilization of the underlying hardware as much as possible. This is the case of GPUs, for instance.
+
+In those cases, we may want to perform inference on the entire batch of data. That implies loading the entire set of images to memory and sending them directly to the model. The following example uses `TensorFlow` to read a batch of images and score them all at once. It also uses `TensorFlow` ops to do any data preprocessing, so the entire pipeline happens on the same device being used (CPU/GPU).
+
+> [!WARNING]
+> For some models, memory consumption grows non-linearly with the size of the inputs. If you run into out-of-memory exceptions, batch the data again inside the scoring script (as done in this example) or decrease the size of the mini-batches created by the batch deployment.
+
+__imagenet_scorer_batch.py__
+
+```python
+import os
+import numpy as np
+import pandas as pd
+import tensorflow as tf
+from tensorflow.keras.models import load_model
+
+def init():
+ global model
+ global input_width
+ global input_height
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ model = load_model(model_path)
+ input_width = 244
+ input_height = 244
+
+def decode_img(file_path):
+ file = tf.io.read_file(file_path)
+ img = tf.io.decode_jpeg(file, channels=3)
+ img = tf.image.resize(img, [input_width, input_height])
+ return img/255.
+
+def run(mini_batch):
+ images_ds = tf.data.Dataset.from_tensor_slices(mini_batch)
+ images_ds = images_ds.map(decode_img).batch(64)
+
+ # perform inference
+ pred = model.predict(images_ds)
+
+ # Compute probabilities, classes and labels
+    pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1), axis=-1).numpy()
+ pred_class = tf.math.argmax(pred, axis=-1).numpy()
+
+    return pd.DataFrame({'file': mini_batch, 'probability': pred_prob, 'class': pred_class})
+```
+
+Remarks:
+* Notice that this script is constructing a tensor dataset from the mini-batch sent by the batch deployment. This dataset is preprocessed to obtain the expected tensors for the model using the `map` operation with the function `decode_img`.
+* The dataset is batched again (in batches of 64) before sending the data to the model. Use this parameter to control how much information you can load into memory and send to the model at once. If running on a GPU, you will need to carefully tune this parameter to achieve the maximum utilization of the GPU just before getting an OOM exception.
+* Once predictions are computed, the tensors are converted to `numpy.ndarray`.
++
+## Considerations for MLflow models processing images
+
+MLflow models in Batch Endpoints support reading images as input data. Remember that MLflow models don't require a scoring script. Keep the following considerations in mind when using them:
+
+> [!div class="checklist"]
+> * Supported image files include: `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp` and `.gif`.
+> * MLflow models should expect to receive a `np.ndarray` as input that matches the dimensions of the input image. In order to support multiple image sizes on each batch, the batch executor invokes the MLflow model once per image file.
+> * MLflow models are highly encouraged to include a signature, and if they do it must be of type `TensorSpec`. Inputs are reshaped to match the tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred.
+> * For models that include a signature and are expected to handle variable sizes of images, include a signature that can guarantee it. For instance, the following signature allows batches of 3-channeled images of any size. Specify the signature when you register the model with `mlflow.<flavor>.log_model(..., signature=signature)` (a hedged logging example follows the code block below).
+
+```python
+import numpy as np
+import mlflow
+from mlflow.models.signature import ModelSignature
+from mlflow.types.schema import Schema, TensorSpec
+
+input_schema = Schema([
+ TensorSpec(np.dtype(np.uint8), (-1, -1, -1, 3)),
+])
+signature = ModelSignature(inputs=input_schema)
+```
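+
+As a hedged illustration only (it assumes MLflow 2.x, a trained `tf.keras` model object named `model`, and the `tensorflow` flavor; adapt the call to the flavor and MLflow version you actually use), the signature can then be attached when logging the model:
+
+```python
+import mlflow
+
+# `model` is assumed to be a trained tf.keras model; the artifact path is illustrative
+with mlflow.start_run():
+    mlflow.tensorflow.log_model(model, artifact_path="model", signature=signature)
+```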
+
+For more information about how to use MLflow models in batch deployments read [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## Next steps
+
+* [Using MLflow models in batch deployments](how-to-mlflow-batch.md)
+* [NLP tasks with batch deployments](how-to-nlp-processing-batch.md)
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
+
+ Title: "Using MLflow models in batch deployments"
+
+description: Learn how to deploy MLflow models in batch deployments
++++++ Last updated : 10/10/2022++++
+# Use MLflow models in batch deployments
++
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure ML for batch inference using batch endpoints. Azure Machine Learning supports no-code deployment of models created and logged with MLflow. This means that you don't have to provide a scoring script or an environment.
+
+For no-code-deployment, Azure Machine Learning
+
+* Provides an MLflow base image/curated environment that contains the required dependencies to run an Azure Machine Learning Batch job.
+* Creates a batch job pipeline with a scoring script for you that can be used to process data using parallelization.
+
+> [!NOTE]
+> For more information about the supported file types in batch endpoints with MLflow, view [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
+
+## About this example
+
+This example shows how you can deploy an MLflow model to a batch endpoint to perform batch predictions. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. The prediction is integer valued: 0 (no presence) or 1 (presence).
+
+The model has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
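+
+For illustration only, a pipeline of this shape could be assembled and logged as follows. This is a hedged sketch, not the actual training code of the sample: the column names, preprocessing choices and hyperparameters are assumptions.
+
+```python
+import mlflow
+from sklearn.compose import ColumnTransformer
+from sklearn.pipeline import Pipeline
+from sklearn.preprocessing import OneHotEncoder, StandardScaler
+from xgboost import XGBClassifier
+
+# Hypothetical column split; the real sample may preprocess differently
+numeric = ["age", "trestbps", "chol", "thalach", "oldpeak"]
+categorical = ["sex", "cp", "thal"]
+
+preprocess = ColumnTransformer([
+    ("num", StandardScaler(), numeric),
+    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
+])
+
+pipeline = Pipeline([
+    ("preprocess", preprocess),
+    ("classifier", XGBClassifier()),
+])
+
+# After fitting on training data, the whole pipeline is logged as a single MLflow model:
+# pipeline.fit(X_train, y_train)
+# mlflow.sklearn.log_model(pipeline, artifact_path="model")
+```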
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [mlflow-for-batch-tabular.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/mlflow-for-batch-tabular.ipynb).
+
+## Prerequisites
++
+* You must have an MLflow model. If your model is not in MLflow format and you want to use this feature, you can [convert your custom ML model to MLflow format](how-to-convert-custom-model-to-mlflow.md).
+
+## Steps
+
+Follow these steps to deploy an MLflow model to a batch endpoint for running batch inference over new data:
+
+1. First, let's connect to the Azure Machine Learning workspace where we are going to work.
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ az account set --subscription <subscription>
+ az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+ ```
+
+ # [Python](#tab/sdk)
+
+ The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+ 1. Import the required libraries:
+
+ ```python
+ from azure.ai.ml import MLClient, Input
+ from azure.ai.ml.entities import BatchEndpoint, BatchDeployment, Model, AmlCompute, Data, BatchRetrySettings
+ from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
+ from azure.identity import DefaultAzureCredential
+ ```
+
+ 2. Configure workspace details and get a handle to the workspace:
+
+ ```python
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+ ```
+
+
+2. Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ MODEL_NAME='heart-classifier'
+ az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model"
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ model_name = 'heart-classifier'
+ model_local_path = "heart-classifier-mlflow/model"
+ model = ml_client.models.create_or_update(
+ Model(name=model_name, path=model_local_path, type=AssetTypes.MLFLOW_MODEL)
+ )
+ ```
+
+3. Before moving forward, we need to make sure the batch deployments we are about to create can run on some infrastructure (compute). Batch deployments can run on any Azure ML compute that already exists in the workspace. That means that multiple batch deployments can share the same compute infrastructure. In this example, we are going to work on an Azure ML compute cluster called `cpu-cluster`. Let's verify that the compute exists in the workspace, or create it otherwise.
+
+ # [Azure CLI](#tab/cli)
+
+ Create a compute definition `YAML` like the following one:
+
+ __cpu-cluster.yml__
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
+    name: cpu-cluster
+ type: amlcompute
+ size: STANDARD_DS3_v2
+ min_instances: 0
+ max_instances: 2
+ idle_time_before_scale_down: 120
+ ```
+
+ Create the compute using the following command:
+
+ ```azurecli
+ az ml compute create -f cpu-cluster.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+    To create a new compute cluster where the deployment will run, use the following script:
+
+ ```python
+ compute_name = "cpu-cluster"
+ if not any(filter(lambda m : m.name == compute_name, ml_client.compute.list())):
+ compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=2)
+ ml_client.begin_create_or_update(compute_cluster)
+ ```
+
+4. Now it is time to create the batch endpoint and deployment. Let's start with the endpoint first. Endpoints only require a name and a description to be created:
+
+ # [Azure CLI](#tab/cli)
+
+ To create a new endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
+ name: heart-classifier-batch
+ description: A heart condition classifier for batch inference
+ auth_mode: aad_token
+ ```
+
+ Then, create the endpoint with the following command:
+
+ ```azurecli
+ ENDPOINT_NAME='heart-classifier-batch'
+ az ml batch-endpoint create -n $ENDPOINT_NAME -f endpoint.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ To create a new endpoint, use the following script:
+
+ ```python
+ endpoint = BatchEndpoint(
+ name="heart-classifier-batch",
+ description="A heart condition classifier for batch inference",
+ )
+ ```
+
+ Then, create the endpoint with the following command:
+
+ ```python
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+5. Now, let's create the deployment. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments, as they are created for you. However, you can specify them if you want to customize how the deployment does inference.
+
+ # [Azure CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-mlflow
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ DEPLOYMENT_NAME="classifier-xgboost-mlflow"
+    az ml batch-deployment create -n $DEPLOYMENT_NAME -f deployment.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, first define the deployment:
+
+ ```python
+ deployment = BatchDeployment(
+ name="classifier-xgboost-mlflow",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!NOTE]
+ > `scoring_script` and `environment` auto generation only supports `pyfunc` model flavor. To use a different flavor, see [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+6. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is called the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ endpoint = ml_client.batch_endpoints.get(endpoint.name)
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+7. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+For testing our endpoint, we are going to use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Particularly, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
+
+ # [Azure CLI](#tab/cli)
+
+ a. Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-classifier-mlflow/data
+ ```
+
+ b. Create the data asset:
+
+ ```azurecli
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ a. Create a data asset definition:
+
+ ```python
+ data_path = "heart-classifier-mlflow/data"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ```
+
+ b. Create the data asset:
+
+ ```python
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+ c. Refresh the object to reflect the changes:
+
+ ```python
+ heart_dataset_unlabeled = ml_client.data.get(name=dataset_name)
+ ```
+
+2. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+    JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+    > The utility `jq` may not be installed on your system. You can find installation instructions at [this link](https://stedolan.github.io/jq/download/).
+
+ # [Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+
+ > [!TIP]
+    > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, that one is the default. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
+
+3. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+## Analyzing the outputs
+
+Output predictions are generated in the `predictions.csv` file as indicated in the deployment configuration. The job generates a named output called `score` where this file is placed. Only one file is generated per batch job.
+
+The file is structured as follows:
+
+* There is one row for each data point that was sent to the model. For tabular data, this means that one row is generated for each row in the input files, and hence the number of rows in the generated file (`predictions.csv`) equals the sum of all the rows in all the processed files. For other data types, there is one row per processed file.
+* Two columns are indicated:
+  * The file name where the data was read from. In tabular data, use this field to know which prediction belongs to which input data. For any given file, predictions are returned in the same order they appear in the input file, so you can rely on the row number to match the corresponding prediction (a hedged sketch of doing this follows the output example below).
+  * The prediction associated with the input data. This value is returned "as-is", exactly as it was provided by the model's `predict()` function.
++
+You can download the results of the job by using the job name:
+
+# [Azure CLI](#tab/cli)
+
+To download the predictions, use the following command:
+
+```azurecli
+az ml job download --name $JOB_NAME --output-name score --download-path ./
+```
+
+# [Python](#tab/sdk)
+
+```python
+ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+```
++
+Once the file is downloaded, you can open it with your favorite tool. The following example loads the predictions into a `pandas` DataFrame.
+
+```python
+import pandas as pd
+from ast import literal_eval
+
+with open('named-outputs/score/predictions.csv', 'r') as f:
+    score = pd.DataFrame(literal_eval(f.read().replace('\n', ',')), columns=['file', 'prediction'])
+score
+```
+
+> [!WARNING]
+> The file `predictions.csv` may not be a regular CSV file and can't be read correctly using the `pandas.read_csv()` method.
+
+The output looks as follows:
+
+| file | prediction |
+| -- | -- |
+| heart-unlabeled-0.csv | 0 |
+| heart-unlabeled-0.csv | 1 |
+| ... | 1 |
+| heart-unlabeled-3.csv | 0 |
+
+> [!TIP]
+> Notice that in this example the input data was tabular data in `CSV` format and there were 4 different input files (heart-unlabeled-0.csv, heart-unlabeled-1.csv, heart-unlabeled-2.csv and heart-unlabeled-3.csv).
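+
+Because predictions preserve the order of the rows in each input file, you can re-attach them to the original data. The following is a hedged sketch (it reuses the `score` dataframe built above and one of the sample input files):
+
+```python
+import pandas as pd
+
+# Read one of the original input files from the sample data folder
+input_df = pd.read_csv("heart-classifier-mlflow/data/heart-unlabeled-0.csv")
+
+# Keep only the predictions for that file, in their original order
+file_preds = score[score["file"] == "heart-unlabeled-0.csv"].reset_index(drop=True)
+
+# Row i of the predictions corresponds to row i of the input file
+input_df["prediction"] = file_preds["prediction"].values
+```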
+
+## Considerations when deploying to batch inference
+
+Azure Machine Learning supports no-code deployment for batch inference in [managed endpoints](concept-endpoints.md). This represents a convenient way to deploy models that need to process large amounts of data in a batch fashion.
+
+### How work is distributed on workers
+
+Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
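+
+For example, with 2 instances, `Max concurrency per instance` set to 2, and a `Mini batch size` of 10 files, up to 2 × 2 = 4 mini-batches (40 files) can be processed at the same time.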
+
+> [!WARNING]
+> Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
+
+> [!WARNING]
+> Batch deployments will call the `predict` function of the MLflow model once per file. For CSV files containing multiple rows, this may impose memory pressure on the underlying compute. When sizing your compute, take into account not only the memory consumption of the data being read but also the memory footprint of the model itself. This is especially true for models that process text, like transformer-based models, where the memory consumption is not linear with the size of the input. If you encounter several out-of-memory exceptions, consider splitting the data into smaller files with fewer rows or implementing batching at the row level inside of the model/scoring script (a hedged sketch follows).
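+
+As a hedged sketch only (this is not the built-in behavior, and the chunk size is an arbitrary assumption), a custom scoring script along the lines of the batch driver shown later in this article could read each CSV in chunks to bound memory usage:
+
+```python
+import os
+import mlflow
+import pandas as pd
+
+CHUNK_SIZE = 500  # rows scored per call; tune to your memory budget
+
+def init():
+    global model
+    # AZUREML_MODEL_DIR is an environment variable created during deployment
+    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+    model = mlflow.pyfunc.load_model(model_path)
+
+def run(mini_batch):
+    results = []
+    for file_path in mini_batch:
+        # Read the CSV in chunks instead of loading the whole file at once
+        for chunk in pd.read_csv(file_path, chunksize=CHUNK_SIZE):
+            df = pd.DataFrame({"prediction": model.predict(chunk)})
+            df["file"] = os.path.basename(file_path)
+            results.append(df)
+    return pd.concat(results)
+```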
+
+### Supported file types
+
+The following data types are supported for batch inference when deploying MLflow models without an environment and a scoring script:
+
+| File extension | Type returned as model's input | Signature requirement |
+| :- | :- | :- |
+| `.csv` | `pd.DataFrame` | `ColSpec`. If not provided, columns typing is not enforced. |
+| `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.gif` | `np.ndarray` | `TensorSpec`. Input is reshaped to match tensors shape if available. If no signature is available, tensors of type `np.uint8` are inferred. For additional guidance read [Considerations for MLflow models processing images](how-to-image-processing-batch.md#considerations-for-mlflow-models-processing-images). |
+
+> [!WARNING]
+> Be advised that any unsupported file that may be present in the input data will make the job fail. You will see an error entry as follows: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.parquet'. File type 'parquet' is not supported."*.
+
+> [!TIP]
+> If you'd like to process a different file type, or execute inference in a different way than batch endpoints do by default, you can always create the deployment with a scoring script as explained in [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+### Signature enforcement for MLflow models
+
+Input data types are enforced by batch deployment jobs while reading the data, using the available MLflow model signature. This means that your data input should comply with the types indicated in the model signature. If the data can't be parsed as expected, the job fails with an error message similar to the following one: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.csv'. Exception: invalid literal for int() with base 10: 'value'"*.
+
+> [!TIP]
+> Signatures in MLflow models are optional, but they are highly encouraged as they provide a convenient way to detect data compatibility issues early. For more information about how to log models with signatures, read [Logging models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).
+
+You can inspect the signature of your model by opening the `MLmodel` file associated with it. For more details about how signatures work in MLflow, see [Signatures in MLflow](concept-mlflow-models.md#signatures).
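+
+If you prefer to inspect the signature programmatically, a minimal sketch (assuming a local copy of the sample model at `heart-classifier-mlflow/model`) is:
+
+```python
+import mlflow
+
+# Load the model and print the signature stored in its metadata
+model = mlflow.pyfunc.load_model("heart-classifier-mlflow/model")
+print(model.metadata.signature)
+```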
+
+### Flavor support
+
+Batch deployments only support deploying MLflow models with a `pyfunc` flavor. If you need to deploy a different flavor, see [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+## Using MLflow models with a scoring script
+
+MLflow models can be deployed to batch endpoints without indicating a scoring script in the deployment definition. However, you can opt in to indicate this file (usually referred to as the *batch driver*) to customize how inference is executed.
+
+You will typically select this workflow when:
+> [!div class="checklist"]
+> * You need to process a file type that is not supported by MLflow batch deployments by default.
+> * You need to customize the way the model is run, for instance, to use a specific flavor to load it with `mlflow.<flavor>.load_model()`.
+> * You need to do pre/post processing in your scoring routine when it is not done by the model itself.
+> * The output of the model can't be nicely represented in tabular data. For instance, it is a tensor representing an image.
+> * Your model can't process each file at once because of memory constraints and needs to read it in chunks.
+
+> [!IMPORTANT]
+> If you choose to indicate a scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
+
+> [!WARNING]
+> Customizing the scoring script for MLflow deployments is only available from the Azure CLI or SDK for Python. If you are creating a deployment using [Azure ML studio UI](https://ml.azure.com), please switch to the CLI or the SDK.
++
+### Steps
+
+Use the following steps to deploy an MLflow model with a custom scoring script.
+
+1. Create a scoring script:
+
+ __batch_driver.py__
+
+ ```python
+ import os
+ import mlflow
+ import pandas as pd
+
+ def init():
+ global model
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ # It is the path to the model folder
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+ model = mlflow.pyfunc.load_model(model_path)
+
+ def run(mini_batch):
+ results = pd.DataFrame(columns=['file', 'predictions'])
+
+ for file_path in mini_batch:
+ data = pd.read_csv(file_path)
+ pred = model.predict(data)
+
+ df = pd.DataFrame(pred, columns=['predictions'])
+ df['file'] = os.path.basename(file_path)
+ results = pd.concat([results, df])
+
+ return results
+ ```
+
+1. Let's create an environment where the scoring script can be executed. Since our model is MLflow, the conda requirements are also specified in the model package (for more details about MLflow models and the files included in them, see [The MLmodel format](concept-mlflow-models.md#the-mlmodel-format)). We are then going to build the environment using the conda dependencies from the file. However, __we also need to include__ the package `azureml-core`, which is required for Batch Deployments.
+
+ > [!TIP]
+ > If your model is already registered in the model registry, you can download/copy the `conda.yml` file associated with your model by going to [Azure ML studio](https://ml.azure.com) > Models > Select your model from the list > Artifacts. Open the root folder in the navigation and select the `conda.yml` file listed. Click on Download or copy its content.
+
+ > [!IMPORTANT]
+ > This example uses a conda environment specified at `/heart-classifier-mlflow/environment/conda.yaml`. This file was created by combining the original MLflow conda dependencies file and adding the package `azureml-core`. __You can't use the `conda.yml` file from the model directly__.
+
+ # [Azure CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./heart-classifier-mlflow/environment/conda.yaml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+1. Let's create the deployment now:
+
+ # [Azure CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-custom
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ environment:
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
+ conda_file: ./heart-classifier-mlflow/environment/conda.yaml
+ code_configuration:
+ code: ./heart-classifier-custom/code/
+ scoring_script: batch_driver.py
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, use the following script:
+
+ ```python
+ deployment = BatchDeployment(
+ name="classifier-xgboost-custom",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./heart-classifier-mlflow/code/",
+ scoring_script="batch_driver.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+1. At this point, our batch endpoint is ready to be used.
+
+## Next steps
+
+* [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md)
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
+
+ Title: "Text processing with batch deployments"
+
+description: Learn how to use batch deployments to process text and output results.
++++++ Last updated : 10/10/2022++++
+# Text processing with batch deployments
++
+Batch Endpoints can be used for processing tabular data, but also for any other file type, like text. Those deployments are supported in both MLflow and custom models. In this tutorial, we will learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace.
+
+## About this sample
+
+The model we are going to work with was built using the popular library transformers from HuggingFace along with [a pre-trained model from Facebook with the BART architecture](https://huggingface.co/facebook/bart-large-cnn). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation](https://arxiv.org/abs/1910.13461). This model has the following constraints that are important to keep in mind for deployment:
+
+* It can work with sequences up to 1024 tokens.
+* It is trained for summarization of text in English.
+* We are going to use TensorFlow as a backend.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [text-summarization-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/text-summarization-batch.ipynb).
+
+## Prerequisites
++
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `text-summarization-batch`.
+* You must have a compute cluster where the deployment can run. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+* Due to the size of the model, it hasn't been included in this repository. Instead, you can generate a local copy with the following code. A local copy of the model will be placed at `bart-text-summarization/model`. We will use it during the course of this tutorial.
+
+ ```python
+ from transformers import pipeline
+
+    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
+    model_local_path = 'bart-text-summarization/model'
+    summarizer.save_pretrained(model_local_path)
+ ```
+
+## NLP tasks with batch deployments
+
+In this example, we are going to learn how to deploy a deep learning model based on the BART architecture that can perform text summarization over text in English. The text will be placed in CSV files for convenience.
+
+### Registering the model
+
+Batch Endpoint can only deploy registered models. In this case, we need to publish the model we have just downloaded from HuggingFace. You can skip this step if the model you are trying to deploy is already registered.
+
+# [Azure ML CLI](#tab/cli)
+
+```bash
+MODEL_NAME='bart-text-summarization'
+az ml model create --name $MODEL_NAME --type "custom_model" --path "bart-text-summarization/model"
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+model_name = 'bart-text-summarization'
+model = ml_client.models.create_or_update(
+ Model(name=model_name, path='bart-text-summarization/model', type=AssetTypes.CUSTOM_MODEL)
+)
+```
++
+### Creating a scoring script
+
+We need to create a scoring script that can read the CSV files provided by the batch deployment and return the scores of the model with the summary. The script does the following:
+
+> [!div class="checklist"]
+> * Indicates an `init` function that loads the model using `transformers`. Notice that the tokenizer of the model is loaded separately to account for the limitation in the sequence lengths of the model we are currently using.
+> * Indicates a `run` function that is executed for each mini-batch the batch deployment provides.
+> * The `run` function reads the entire batch using the `datasets` library. The text we need to summarize is in the column `text`.
+> * The `run` method iterates over each of the rows of the text and runs the prediction. Since this is a very expensive model, running the prediction over entire files would result in an out-of-memory exception. Notice that the model is not executed with the `pipeline` object from `transformers`. This is done to account for long sequences of text and the limitation of 1024 tokens in the underlying model we are using.
+> * It returns the summary of the provided text.
+
+__transformer_scorer.py__
+
+```python
+import os
+import numpy as np
+from transformers import pipeline, AutoTokenizer, TFBartForConditionalGeneration
+from datasets import load_dataset
+
+def init():
+ global model
+ global tokenizer
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+    # Change "model" to the name of the folder used by your model, or to the model file name.
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ tokenizer = AutoTokenizer.from_pretrained(model_path, truncation=True, max_length=1024)
+ model = TFBartForConditionalGeneration.from_pretrained(model_path)
+
+def run(mini_batch):
+ resultList = []
+
+ ds = load_dataset('csv', data_files={ 'score': mini_batch})
+ for text in ds['score']['text']:
+ # perform inference
+        input_ids = tokenizer.batch_encode_plus([text], truncation=True, padding=True, max_length=1024, return_tensors='tf')['input_ids']
+ summary_ids = model.generate(input_ids, max_length=130, min_length=30, do_sample=False)
+ summaries = [tokenizer.decode(s, skip_special_tokens=True, clean_up_tokenization_spaces=False) for s in summary_ids]
+
+ # Get results:
+ resultList.append(summaries[0])
+
+ return resultList
+```
+
+> [!TIP]
+> Although files are provided in mini-batches by the deployment, this scoring script processes one row at a time. This is a common pattern when dealing with expensive models (like transformers), as trying to load the entire batch and send it to the model at once may result in high memory pressure on the batch executor (OOM exceptions).
++
+### Creating the deployment
+
+Once the scoring script is created, it's time to create a batch deployment for it. Follow these steps to create it:
+
+1. We need to indicate over which environment we are going to run the deployment. In our case, our model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so we can reuse this environment. We are just going to add a couple of dependencies in a `conda.yml` file, including the libraries `transformers` and `datasets`.
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./bart-text-summarization/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest",
+ )
+ ```
+
+2. Now, let's create the deployment.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `text-summarization-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: text-summarization-batch
+ name: text-summarization-hfbart
+ description: A text summarization deployment implemented with HuggingFace and BART architecture
+ model: azureml:bart-text-summarization@latest
+ compute: azureml:cpu-cluster
+ environment:
+ image: mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest
+ conda_file: ./bart-text-summarization/environment/conda.yml
+ code_configuration:
+ code: ./bart-text-summarization/code/
+ scoring_script: transformer_scorer.py
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 1
+ mini_batch_size: 1
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 3000
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```bash
+ DEPLOYMENT_NAME="text-summarization-hfbart"
+    az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment with the indicated environment and scoring script use the following code:
+
+ ```python
+ deployment = BatchDeployment(
+ name="text-summarization-hfbart",
+ description="A text summarization deployment implemented with HuggingFace and BART architecture",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./bart-text-summarization/code/",
+        scoring_script="transformer_scorer.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=1,
+ mini_batch_size=1,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=3000),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!IMPORTANT]
+    > You will notice a high value for `timeout` in the parameter `retry_settings` of this deployment. The reason is the nature of the model we are running. This is a very expensive model and inference on a single row may take up to 60 seconds. The `timeout` parameter controls how much time the Batch Deployment should wait for the scoring script to finish processing each mini-batch. Since our model runs predictions row by row, processing a long file may take time. Also notice that the number of files per batch is set to 1 (`mini_batch_size=1`). This is again related to the nature of the work we are doing. Processing one file at a time per batch is expensive enough to justify it. You will notice this being a pattern in NLP processing.
+
+3. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is called the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+4. At this point, our batch endpoint is ready to be used.
++
+## Considerations when deploying models that process text
+
+As mentioned in some of the notes along this tutorial, processing text may have some peculiarities that require specific configuration for batch deployments. Take the following considerations into account when designing the batch deployment:
+
+> [!div class="checklist"]
+> * Some NLP models may be very expensive in terms of memory and compute time. If this is the case, consider decreasing the number of files included in each mini-batch. In the example above, the number was taken to the minimum, 1 file per batch. While this may not be your case, take into consideration how many files your model can score at a time. Keep in mind that the relationship between the size of the input and the memory footprint of your model may not be linear for deep learning models.
+> * If your model can't even handle one file at a time (like in this example), consider reading the input data in rows/chunks. Implement batching at the row level if you need to achieve higher throughput or hardware utilization.
+> * Set the `timeout` value of your deployment according to how expensive your model is and how much data you expect to process. Remember that the `timeout` indicates the time the batch deployment waits for your scoring script to run for a given batch. If your batch has many files or files with many rows, this impacts the right value for this parameter.
+
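+The following is a minimal sketch (not the tutorial's actual scoring script) of how row-level chunking could look inside `run()`. The chunk size and the placeholder scoring logic are illustrative; replace them with your model's real inference call.
+
+```python
+import pandas as pd
+
+def init():
+    # Load your model here, once per worker process. Omitted because it depends on your framework.
+    pass
+
+def run(mini_batch):
+    results = []
+    for file_path in mini_batch:
+        # Read each file in small chunks of rows instead of loading it all at once.
+        for chunk in pd.read_csv(file_path, chunksize=64):
+            # Placeholder scoring: replace with your model's batched inference over `chunk`.
+            results.extend(chunk.iloc[:, 0].astype(str).str.len().tolist())
+    return pd.DataFrame({"prediction": results})
+```
+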
+## Considerations for MLflow models that process text
+
+MLflow models in Batch Endpoints support reading CSVs as input data, which may contain long sequences of text. The same considerations mentioned above apply to MLflow models. However, since you are not required to provide a scoring script for an MLflow model deployment, some of the recommendations may be harder to achieve.
+
+* Only `CSV` files are supported for MLflow deployments processing text. You will need to author a scoring script if you need to process other file types like `TXT`, `PARQUET`, etc. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script) for details.
+* Batch deployments call your MLflow model's predict function with the content of an entire file as a Pandas DataFrame. If your input data contains many rows, chances are that running a complex model (like the one presented in this tutorial) will result in an out-of-memory exception. If this is your case, you can consider the following options (see the sketch after this list):
+  * Customize how your model runs predictions and implement batching. To learn how to customize an MLflow model's inference, see [Logging custom models](how-to-log-mlflow-models.md?#logging-custom-models).
+ * Author a scoring script and load your model using `mlflow.<flavor>.load_model()`. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script) for details.
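+
+As a reference, the following is a minimal sketch of the first option, assuming you wrap your model in a custom `mlflow.pyfunc.PythonModel` whose `predict` method processes the incoming DataFrame in small chunks. The class name, chunk size, and placeholder scoring logic are illustrative, not the tutorial's actual model.
+
+```python
+import mlflow
+import pandas as pd
+
+class ChunkedTextModel(mlflow.pyfunc.PythonModel):
+    """Hypothetical wrapper that scores a large dataframe in small chunks."""
+
+    def load_context(self, context):
+        # Load the underlying model from the logged artifacts here (framework specific).
+        self.chunk_size = 8
+
+    def predict(self, context, model_input: pd.DataFrame) -> pd.DataFrame:
+        outputs = []
+        for start in range(0, len(model_input), self.chunk_size):
+            chunk = model_input.iloc[start : start + self.chunk_size]
+            # Placeholder scoring: replace with the real model call over `chunk`.
+            outputs.extend(chunk.iloc[:, 0].astype(str).str.len().tolist())
+        return pd.DataFrame({"prediction": outputs})
+
+# Log the wrapper so the deployment runs the chunked predict (illustrative name):
+# mlflow.pyfunc.log_model("chunked-text-model", python_model=ChunkedTextModel())
+```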
++
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
+
+ Title: "Network isolation in batch endpoints"
+
+description: Learn how to deploy Batch Endpoints in private networks with isolation.
++++++ Last updated : 10/10/2022++++
+# Network isolation in batch endpoints
+
+When deploying a machine learning model to a batch endpoint, you can secure its communication using private networks. This article explains the requirements for using batch endpoints in an environment secured by private networks.
+
+## Prerequisites
+
+* A secure Azure Machine Learning workspace. For details about how to create one, read [Create a secure workspace](tutorial-create-secure-workspace.md).
+* For Azure Container Registry in private networks, note that there are [some prerequisites about its configuration](how-to-secure-workspace-vnet.md#prerequisites).
+
+ > [!WARNING]
+    > Azure Container Registries with the Quarantine feature enabled are not supported at the moment.
+
+* Ensure blob, file, queue, and table private endpoints are configured for the storage accounts as explained at [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). Batch deployments require all four to work properly.
+
+## Securing batch endpoints
+
+All batch endpoints created inside a secure workspace are deployed as private batch endpoints by default. No further configuration is required.
+
+> [!IMPORTANT]
+> When working with a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure ML CLI v2 instead for job creation. For details, see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-job).
+
+The following diagram shows what the networking looks like for batch endpoints deployed in a private workspace:
++
+To give the jump host VM (or self-hosted agent VMs if using [Azure Bastion](../bastion/bastion-overview.md)) access to the resources in the Azure Machine Learning VNet, the previous architecture uses virtual network peering to seamlessly connect these two virtual networks. The two virtual networks then appear as one for connectivity purposes. The traffic between the VMs and the Azure Machine Learning resources in peered virtual networks uses the Microsoft backbone infrastructure; like traffic between resources in the same network, it is routed through Microsoft's private network only.
+
+## Securing batch deployment jobs
+
+Azure Machine Learning batch deployments run on compute clusters. To secure batch deployment jobs, those compute clusters have to be deployed in a virtual network too.
+
+1. Create an Azure Machine Learning [compute cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster).
+2. Ensure all related services have private endpoints configured in the network. Private endpoints are used not only for the Azure Machine Learning workspace, but also for its associated resources such as Azure Storage, Azure Key Vault, or Azure Container Registry. Azure Container Registry is a required service. While securing the Azure Machine Learning workspace with virtual networks, note that there are [some prerequisites about Azure Container Registry](how-to-secure-workspace-vnet.md#prerequisites).
+3. If your compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+
+ > [!TIP]
+ > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
+
+4. An extra NSG may be required depending on your case. See [Limitations for Azure Machine Learning compute cluster](how-to-secure-training-vnet.md#azure-machine-learning-compute-clusterinstance-1).
+
+For more details about how to configure compute cluster networking, read [Secure an Azure Machine Learning training environment with virtual networks](how-to-secure-training-vnet.md#azure-machine-learning-compute-clusterinstance-1).
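+
+As a reference only, the following is a minimal sketch of creating such a compute cluster with the Azure ML SDK v2 (`azure-ai-ml`). It assumes an `MLClient` named `ml_client` is already configured; the virtual network and subnet names are placeholders and must already exist.
+
+```python
+from azure.ai.ml.entities import AmlCompute, NetworkSettings
+
+compute_cluster = AmlCompute(
+    name="batch-cluster-vnet",
+    min_instances=0,
+    max_instances=5,
+    # Placeholder VNet and subnet; they must be reachable by the workspace.
+    network_settings=NetworkSettings(vnet_name="my-vnet", subnet="my-training-subnet"),
+)
+ml_client.begin_create_or_update(compute_cluster).result()
+```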
+
+## Using two-networks architecture
+
+There are cases where the input data is not in the same network as the Azure Machine Learning resources. In those cases, your Azure Machine Learning workspace may need to interact with more than one VNet. You can achieve this configuration by adding an extra set of private endpoints to the VNet where the rest of the resources are located.
+
+The following diagram shows the high level design:
++
+### Considerations
+
+Take the following considerations into account when using this architecture:
+
+* Put the second set of private endpoints in a different resource group, and hence in different private DNS zones. This prevents name resolution conflicts between the set of IPs used for the workspace and the ones used by the client VNets. Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today. Note that DNS resolution against a private DNS zone works only from virtual networks that are linked to it. For more details, see [recommended zone names for Azure services](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
+* For your storage accounts, add four private endpoints in each VNet (blob, file, queue, and table) as explained at [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
++
+## Recommended read
+
+* [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](how-to-network-security-overview.md)
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
+
+ Title: "Troubleshooting batch endpoints"
+
+description: Learn how to troubleshoot and diagnose errors with batch endpoint jobs
++++++ Last updated : 10/10/2022++++
+# Troubleshooting batch endpoints
++
+Learn how to troubleshoot and solve, or work around, common errors you may come across when using [batch endpoints](how-to-use-batch-endpoint.md) for batch scoring.
+
+## Understanding logs of a batch scoring job
+
+### Get logs
+
+After you invoke a batch endpoint using the Azure CLI or REST, the batch scoring job will run asynchronously. There are two options to get the logs for a batch scoring job.
+
+Option 1: Stream logs to local console
+
+You can run the following command to stream system-generated logs to your console. Only logs in the `azureml-logs` folder will be streamed.
+
+```azurecli
+az ml job stream --name <job_name>
+```
+
+Option 2: View logs in studio
+
+To get the link to the run in studio, run:
+
+```azurecli
+az ml job show --name <job_name> --query interaction_endpoints.Studio.endpoint -o tsv
+```
+
+1. Open the job in studio using the value returned by the above command.
+1. Choose __batchscoring__
+1. Open the __Outputs + logs__ tab
+1. Choose the log(s) you wish to review
+
+### Understand log structure
+
+There are two top-level log folders, `azureml-logs` and `logs`.
+
+The file `~/azureml-logs/70_driver_log.txt` contains information from the controller that launches the scoring script.
+
+Because of the distributed nature of batch scoring jobs, there are logs from several different sources. However, two combined files are created that provide high-level information:
+
+- `~/logs/job_progress_overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. As the mini-batches end, the log records the results of the job. If the job failed, it will show the error message and where to start the troubleshooting.
+
+- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job. This log provides information on task creation, progress monitoring, and the job result.
+
+For a concise summary of the errors in your script, there is:
+
+- `~/logs/user/error.txt`: This file will try to summarize the errors in your script.
+
+For more information on errors in your script, there is:
+
+- `~/logs/user/error/`: This folder contains full stack traces of exceptions thrown while loading and running the entry script.
+
+When you need a full understanding of how each node executed the scoring script, look at the individual process logs for each node. The process logs can be found in the `sys/node` folder, grouped by worker nodes:
+
+- `~/logs/sys/node/<ip_address>/<process_name>.txt`: This file provides detailed info about each mini-batch as it's picked up or completed by a worker. For each mini-batch, this file includes:
+
+ - The IP address and the PID of the worker process.
+ - The total number of items, the number of successfully processed items, and the number of failed items.
+ - The start time, duration, process time, and run method time.
+
+You can also view the results of periodic checks of the resource usage for each node. The log files and setup files are in this folder:
+
+- `~/logs/perf`: Set `--resource_monitor_interval` to change the checking interval in seconds. The default interval is `600`, which is approximately 10 minutes. To stop the monitoring, set the value to `0`. Each `<ip_address>` folder includes:
+
+ - `os/`: Information about all running processes in the node. One check runs an operating system command and saves the result to a file. On Linux, the command is `ps`.
+  - `%Y%m%d%H`: The name of the subfolder is the check time, truncated to the hour.
+  - `processes_%M`: The file name ends with the minute of the check time.
+ - `node_disk_usage.csv`: Detailed disk usage of the node.
+ - `node_resource_usage.csv`: Resource usage overview of the node.
+ - `processes_resource_usage.csv`: Resource usage overview of each process.
+
+### How to log in the scoring script
+
+You can use Python logging in your scoring script. Logs are stored in `logs/user/stdout/<node_id>/processNNN.stdout.txt`.
+
+```python
+import argparse
+import logging
+
+# Get logging_level
+arg_parser = argparse.ArgumentParser(description="Argument parser.")
+arg_parser.add_argument("--logging_level", type=str, help="logging level")
+args, unknown_args = arg_parser.parse_known_args()
+print(args.logging_level)
+
+# Initialize Python logger
+logger = logging.getLogger(__name__)
+logger.setLevel(args.logging_level.upper())
+logger.info("Info log statement")
+logger.debug("Debug log statement")
+```
+
+## Common issues
+
+The following section contains common problems and solutions you may see during batch endpoint development and consumption.
+
+### No module named 'azureml'
+
+__Message logged__: `No module named 'azureml'`.
+
+__Reason__: Azure Machine Learning Batch Deployments require the package `azureml-core` to be installed.
+
+__Solution__: Add `azureml-core` to your conda dependencies file.
+
+### Output already exists
+
+__Reason__: Azure Machine Learning batch deployments can't overwrite the `predictions.csv` file generated as output.
+
+__Solution__: If you indicated an output location for the predictions, ensure the path leads to a nonexistent file.
+
+### The run() function in the entry script had timeout for [number] times
+
+__Message logged__: `No progress update in [number] seconds. No progress update in this check. Wait [number] seconds since last update.`
+
+__Reason__: Batch deployments can be configured with a `timeout` value that indicates how long the deployment waits for a single batch to be processed. If the execution of the batch takes longer than this value, the task is aborted. Aborted tasks can be retried up to a configurable maximum number of times. If the `timeout` is hit on every retry, the deployment job fails. These properties can be configured per deployment.
+
+__Solution__: Increase the `timeout` value by updating the deployment. These properties are configured in the `retry_settings` parameter. By default, `timeout=30` and `retries=3` are configured. When deciding the value of `timeout`, take into consideration the number of files being processed in each batch and the size of each of those files. You can also decrease the mini-batch size so mini-batches are smaller and quicker to execute, as shown in the sketch below.
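+
+For example, a sketch of increasing the timeout with the Azure ML SDK v2, assuming an `MLClient` named `ml_client` and placeholder endpoint and deployment names:
+
+```python
+from azure.ai.ml.entities import BatchRetrySettings
+
+deployment = ml_client.batch_deployments.get(
+    name="<deployment-name>", endpoint_name="<endpoint-name>"
+)
+# Give each mini-batch up to 300 seconds and keep 3 retries.
+deployment.retry_settings = BatchRetrySettings(max_retries=3, timeout=300)
+ml_client.batch_deployments.begin_create_or_update(deployment)
+```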
+
+### Dataset initialization failed
+
+__Message logged__: Dataset initialization failed: UserErrorException: Message: Cannot mount Dataset(id='xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', name='None', version=None). Source of the dataset is either not accessible or does not contain any data.
+
+__Reason__: The compute cluster where the deployment is running can't mount the storage where the data asset is located. The managed identity of the compute doesn't have permissions to perform the mount.
+
+__Solution__: Ensure the identity associated with the compute cluster where your deployment is running has at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
+
+### Data set node [code] references parameter dataset_param which doesn't have a specified value or a default value
+
+__Message logged__: Data set node [code] references parameter dataset_param which doesn't have a specified value or a default value.
+
+__Reason__: The input data asset provided to the batch endpoint isn't supported.
+
+__Solution__: Ensure you are providing a data input that is supported for batch endpoints.
+
+### User program failed with Exception: Run failed, please check logs for details
+
+__Message logged__: User program failed with Exception: Run failed, please check logs for details. You can check logs/readme.txt for the layout of logs.
+
+__Reason__: There was an error while running the `init()` or `run()` function of the scoring script.
+
+__Solution__: Go to __Outputs + Logs__ and open the file at `logs > user > error > 10.0.0.X > process000.txt`. You will see the error message generated by the `init()` or `run()` method.
+
+### There is no succeeded mini batch item returned from run()
+
+__Message logged__: There is no succeeded mini batch item returned from run(). Please check 'response: run()' in https://aka.ms/batch-inference-documentation.
+
+__Reason__: The batch endpoint failed to provide data in the expected format to the `run()` method. This may be due to corrupted files being read, or to input data that is incompatible with the signature of the model (MLflow models).
+
+__Solution__: To understand what may be happening, go to __Outputs + Logs__ and open the file at `logs > user > stdout > 10.0.0.X > process000.stdout.txt`. Look for error entries like `Error processing input file`. You should find details there about why the input file couldn't be read correctly.
+
+### Audiences in JWT are not allowed
+
+__Context__: When invoking a batch endpoint using its REST APIs.
+
+__Reason__: The access token used to invoke the REST API for the endpoint/deployment was issued for a different audience/service. Azure Active Directory tokens are issued for specific actions.
+
+__Solution__: When generating an authentication token to be used with the Batch Endpoint REST API, ensure the `resource` parameter is set to `https://ml.azure.com`. Notice that this resource is different from the one you need to indicate to manage the endpoint using the REST API. All Azure resources (including batch endpoints) use the resource `https://management.azure.com` for management operations. Ensure you use the right resource URI in each case. Notice that if you want to use the management API and the job invocation API at the same time, you will need two tokens. For details, see [Authentication on batch endpoints (REST)](how-to-authenticate-batch-endpoint.md?tabs=rest).
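+
+For example, a minimal sketch with the `azure-identity` Python library showing the two different audiences (the scopes below are the documented resource URIs plus the standard `/.default` suffix):
+
+```python
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+
+# Token for invoking the batch endpoint (job invocation API).
+invoke_token = credential.get_token("https://ml.azure.com/.default").token
+
+# Token for managing the endpoint through Azure Resource Manager.
+management_token = credential.get_token("https://management.azure.com/.default").token
+```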
machine-learning How To Use Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md
+
+ Title: "Invoking batch endpoints from Azure Data Factory"
+
+description: Learn how to use Azure Data Factory to invoke Batch Endpoints.
++++++ Last updated : 10/10/2022++++
+# Invoking batch endpoints from Azure Data Factory
++
+Big data requires a service that can orchestrate and operationalize processes to refine these enormous stores of raw data into actionable business insights. [Azure Data Factory](../data-factory/introduction.md) is a managed cloud service that's built for these complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.
+
+Azure Data Factory allows the creation of pipelines that can orchestrate multiple data transformations and manage them as a single unit. Batch endpoints are an excellent candidate to become a step in such a processing workflow. In this example, learn how to use batch endpoints in Azure Data Factory activities by relying on the Web Invoke activity and the REST API.
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. In particular, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* An Azure Data Factory resource created and configured. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](../data-factory/quickstart-create-data-factory-portal.md) to create one.
+* After creating it, browse to the data factory in the Azure portal:
+
+ :::image type="content" source="../data-factory/media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot of the home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+* Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
+
+## Authenticating against batch endpoints
+
+Azure Data Factory can invoke the REST APIs of batch endpoints by using the [Web Invoke](../data-factory/control-flow-web-activity.md) activity. Batch endpoints support Azure Active Directory for authorization, and hence requests made to the APIs require proper authentication handling.
+
+You can use a service principal or a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate against Batch Endpoints. We recommend using a managed identity as it simplifies the use of secrets.
+
+> [!IMPORTANT]
+> When your data is stored in cloud locations instead of Azure Machine Learning Data Stores, the identity of the compute is used to read the data instead of the identity used to invoke the endpoint.
+
+# [Using a Managed Identity](#tab/mi)
+
+1. You can use the Azure Data Factory managed identity to communicate with Batch Endpoints. In this case, you only need to make sure that your Azure Data Factory resource was deployed with a managed identity.
+2. If you don't have an Azure Data Factory resource, or it was deployed without a managed identity, follow these steps to create one: [Managed identity for Azure Data Factory](../data-factory/data-factory-service-identity.md#system-assigned-managed-identity).
+
+ > [!WARNING]
+    > Notice that changing the resource identity after deployment is not possible in Azure Data Factory. Once the resource is created, you will need to recreate it if you need to change its identity.
+
+3. Once deployed, grant access for the managed identity of the resource you created to your Azure Machine Learning workspace as explained at [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example, the managed identity will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
+ 2. Permissions to read in any cloud location (storage account) indicated as a data input.
+
+# [Using a Service Principal](#tab/sp)
+
+1. Create a service principal following the steps at [Register an application with Azure AD and create a service principal](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Take note of the `client secret` generated.
+1. Take note of the `client ID` and the `tenant id` as explained at [Get tenant and app ID values for signing in](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Grant access for the service principal you created to your workspace as explained at [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example the service principal will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
++
+## About the pipeline
+
+We are going to create a pipeline in Azure Data Factory that can invoke a given batch endpoint over some data. The pipeline will communicate with Azure Machine Learning batch endpoints using REST. To learn more about how to use the REST API of batch endpoints, read [Deploy models with REST for batch scoring](how-to-deploy-batch-with-rest.md).
+
+The pipeline will look as follows:
+
+# [Using a Managed Identity](#tab/mi)
++
+It is composed of the following activities:
+
+* __Run Batch-Endpoint__: It's a Web Activity that uses the batch endpoint URI to invoke it. It passes the input data URI where the data is located and the expected output file.
+* __Wait for job__: It's a loop activity that checks the status of the created job and waits for its completion, either as **Completed** or **Failed**. This activity, in turn, uses the following activities:
+  * __Check status__: It's a Web Activity that queries the status of the job resource that was returned as a response of the __Run Batch-Endpoint__ activity.
+  * __Wait__: It's a Wait Activity that controls the polling frequency of the job's status. We set a default of 120 seconds (2 minutes).
+
+The pipeline requires the following parameters to be configured:
+
+| Parameter | Description | Sample value |
+| | -|- |
+| `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+| `api_version` | The API version to use with REST API calls. Defaults to `2020-09-01-preview` | `2020-09-01-preview` |
+| `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` |
+| `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you are using to execute the job has access to the underlying location. Alternatively, if using Data Stores, ensure the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |
+| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. | `azureml://datastores/azureml/paths/batch/predictions.csv` |
+
+# [Using a Service Principal](#tab/sp)
++
+It is composed of the following activities:
+
+* __Authorize__: It's a Web Activity that uses the service principal created in [Authenticating against batch endpoints](#authenticating-against-batch-endpoints) to obtain an authorization token. This token will be used to invoke the endpoint later.
+* __Run Batch-Endpoint__: It's a Web Activity that uses the batch endpoint URI to invoke it. It passes the input data URI where the data is located and the expected output file.
+* __Wait for job__: It's a loop activity that checks the status of the created job and waits for its completion, either as **Completed** or **Failed**. This activity, in turn, uses the following activities:
+  * __Authorize Management__: It's a Web Activity that uses the service principal created in [Authenticating against batch endpoints](#authenticating-against-batch-endpoints) to obtain an authorization token to be used for the job's status query.
+  * __Check status__: It's a Web Activity that queries the status of the job resource that was returned as a response of the __Run Batch-Endpoint__ activity.
+  * __Wait__: It's a Wait Activity that controls the polling frequency of the job's status. We set a default of 120 seconds (2 minutes).
+
+The pipeline requires the following parameters to be configured:
+
+| Parameter | Description | Sample value |
+| | -|- |
+| `tenant_id` | Tenant ID where the endpoint is deployed | `00000000-0000-0000-00000000` |
+| `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-00000000` |
+| `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` |
+| `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+| `api_version` | The API version to use with REST API calls. Defaults to `2020-09-01-preview` | `2020-09-01-preview` |
+| `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` |
+| `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you are using to execute the job has access to the underlying location. Alternatively, if using Data Stores, ensure the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |
+| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. | `azureml://datastores/azureml/paths/batch/predictions.csv` |
+++
+> [!WARNING]
+> Remember that `endpoint_output_uri` should be the path to a file that doesn't exist yet. Otherwise, the job will fail with the error *the path already exists*.
+
+> [!IMPORTANT]
+> The input data URI can be a path to an Azure Machine Learning data store, data asset, or a cloud URI. Depending on the case, further configuration may be required to ensure the deployment can read the data properly. See [Accessing storage services](how-to-identity-based-service-authentication.md#accessing-storage-services) for details.
+
+## Steps
+
+To create this pipeline in your existing Azure Data Factory, follow these steps:
+
+1. Open Azure Data Factory Studio and under __Factory Resources__ click the plus sign.
+2. Select __Pipeline__ > __Import from pipeline template__
+3. You will be prompted to select a `zip` file. Use [the following template if using managed identities](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-MI.zip) or [the following one if using a service principal](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-SP.zip).
+4. A preview of the pipeline will show up in the portal. Click __Use this template__.
+5. The pipeline will be created for you with the name __Run-BatchEndpoint__.
+6. Configure the parameters of the batch deployment you are using:
+
+ # [Using a Managed Identity](#tab/mi)
+
+ :::image type="content" source="./media/how-to-use-batch-adf/pipeline-params-mi.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline.":::
+
+ # [Using a Service Principal](#tab/sp)
+
+ :::image type="content" source="./media/how-to-use-batch-adf/pipeline-params.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline.":::
+
+
+
+ > [!WARNING]
+ > Ensure that your batch endpoint has a default deployment configured before submitting a job to it. The created pipeline will invoke the endpoint and hence a default deployment needs to be created and configured.
+
+ > [!TIP]
+ > For best reusability, use the created pipeline as a template and call it from within other Azure Data Factory pipelines by leveraging the [Execute pipeline activity](../data-factory/control-flow-execute-pipeline-activity.md). In that case, do not configure the parameters in the created pipeline but pass them when you are executing the pipeline.
+ >
+ > :::image type="content" source="./media/how-to-use-batch-adf/pipeline-run.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline when invoked from another pipeline.":::
+
+7. Your pipeline is ready to be used.
++
+## Limitations
+
+When calling Azure Machine Learning batch deployments consider the following limitations:
+
+* __Data inputs__:
+ * Only Azure Machine Learning data stores or Azure Storage Accounts (Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2) are supported as inputs. If your input data is in another source, use the Azure Data Factory Copy activity before the execution of the batch job to sink the data to a compatible store.
+ * Ensure the deployment has the required access to read the input data depending on the type of input you are using. See [Accessing storage services](how-to-identity-based-service-authentication.md#accessing-storage-services) for details.
+* __Data outputs__:
+ * Only registered Azure Machine Learning data stores are supported.
+ * Only Azure Blob Storage Accounts are supported for outputs. For instance, Azure Data Lake Storage Gen2 isn't supported as output in batch deployment jobs. If you need to output the data to a different location/sink, use the Azure Data Factory Copy activity after the execution of the batch job.
+
+## Considerations when reading and writing data
+
+When reading and writing data, take into account the following considerations:
+
+* Batch endpoint jobs don't explore nested folders and hence can't work with nested folder structures. If your data is distributed in multiple folders, notice that you will have to flatten the structure.
+* Make sure that the scoring script provided in the deployment can handle the data as it is expected to be fed into the job. If the model is an MLflow model, read about the file types currently supported at [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* Batch endpoints distribute and parallelize the work across multiple workers at the file level. Make sure that each worker node has enough memory to load the entire data file at once and send it to the model. This is especially true for tabular data.
+* When estimating the memory consumption of your jobs, take the model's memory footprint into account too. Some models, like transformers in NLP, don't have a linear relationship between the size of the inputs and the memory consumption. In those cases, you may want to consider partitioning your data further into multiple files to allow a greater degree of parallelization with smaller files, as in the sketch after this list.
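+
+A minimal sketch of such a partitioning step with pandas; the chunk size and file paths are illustrative:
+
+```python
+import os
+import pandas as pd
+
+os.makedirs("data/partitioned", exist_ok=True)
+
+# Split one large CSV into smaller files so more workers can score in parallel.
+for i, chunk in enumerate(pd.read_csv("data/large-input.csv", chunksize=5000)):
+    chunk.to_csv(f"data/partitioned/part-{i:04d}.csv", index=False)
+```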
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
+
+ Title: 'Use batch endpoints for batch scoring'
+
+description: In this article, learn how to create a batch endpoint to continuously batch score large data.
+++++++ Last updated : 11/04/2022+
+#Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
++
+# Use batch endpoints for batch scoring
++
+Batch endpoints provide a convenient way to run inference over large volumes of data. They simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints?](./concept-endpoints.md).
+
+Use batch endpoints when:
+
+> [!div class="checklist"]
+> * You have expensive models that require a longer time to run inference.
+> * You need to perform inference over large amounts of data, distributed in multiple files.
+> * You don't have low latency requirements.
+> * You can take advantage of parallelization.
+
+In this article, you will learn how to use batch endpoints to do batch scoring.
+
+> [!TIP]
+> We suggest you read the Scenarios sections (see the navigation bar on the left) to learn more about using Batch Endpoints in specific scenarios, including NLP and computer vision, and about integrating them with other Azure services.
+
+## About this example
+
+In this example, we are going to deploy a model that solves the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem, performing batch inferencing over large amounts of data (image files). In the first section of this tutorial, we create a batch deployment with a model created using Torch. That deployment becomes the default one in the endpoint. In the second half, [we see how to create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `cli/endpoints/batch` if you are using the Azure CLI, or `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in the following notebook. In the cloned repository, open the notebook: [mnist-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/mnist-batch.ipynb).
+
+## Prerequisites
++
+### Connect to your workspace
+
+First, let's connect to the Azure Machine Learning workspace where we are going to work.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az account set --subscription <subscription>
+az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+```
+
+# [Python](#tab/python)
+
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+1. Import the required libraries:
+
+```python
+from azure.ai.ml import MLClient, Input
+from azure.ai.ml.entities import BatchEndpoint, BatchDeployment, Model, AmlCompute, Data, BatchRetrySettings
+from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
+from azure.identity import DefaultAzureCredential
+```
+
+2. Configure workspace details and get a handle to the workspace:
+
+```python
+subscription_id = "<subscription>"
+resource_group = "<resource-group>"
+workspace = "<workspace>"
+
+ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+```
+
+# [Studio](#tab/azure-studio)
+
+Open the [Azure ML studio portal](https://ml.azure.com) and log in using your credentials.
+++
+### Create compute
+
+Batch endpoints run on compute clusters. They support both [Azure Machine Learning compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource, so one cluster can host one or many batch deployments (along with other workloads if desired).
+
+Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`.
+
+# [Azure CLI](#tab/azure-cli)
++
+# [Python](#tab/python)
+
+```python
+compute_name = "batch-cluster"
+compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=5)
+ml_client.begin_create_or_update(compute_cluster)
+```
+
+# [Studio](#tab/azure-studio)
+
+*Create a compute cluster as explained in the following tutorial [Create an Azure Machine Learning compute cluster](./how-to-create-attach-compute-cluster.md?tabs=azure-studio).*
+++
+> [!NOTE]
+> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
++
+### Registering the model
+
+Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you are trying to deploy is already registered. In this case, we are registering a Torch model for the popular digit recognition problem (MNIST).
+
+> [!TIP]
+> Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions as long as they are deployed in different deployments.
+
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+MODEL_NAME='mnist'
+az ml model create --name $MODEL_NAME --type "custom_model" --path "./mnist/model/"
+```
+
+# [Python](#tab/python)
+
+```python
+model_name = 'mnist'
+model = ml_client.models.create_or_update(
+ Model(name=model_name, path='./mnist/model/', type=AssetTypes.CUSTOM_MODEL)
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Models__ tab on the side menu.
+1. Click on __Register__ > __From local files__.
+1. In the wizard, leave the option *Model type* as __Unspecified type__.
+1. Click on __Browse__ > __Browse folder__ > Select the folder `./mnist/model/` > __Next__.
+1. Configure the name of the model: `mnist`. You can leave the rest of the fields as they are.
+1. Click on __Register__.
+++
+## Create a batch endpoint
+
+A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](./concept-endpoints.md#what-are-batch-endpoints)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
+
+> [!TIP]
+> One of the batch deployments will serve as the default deployment for the endpoint. The default deployment will be used to do the actual batch scoring when the endpoint is invoked. Learn more about [batch endpoints and batch deployment](./concept-endpoints.md#what-are-batch-endpoints).
+
+### Steps
+
+1. Decide on the name of the endpoint. The name of the endpoint will end up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ ```azurecli
+ ENDPOINT_NAME="mnist-batch"
+ ```
+
+ # [Python](#tab/python)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ ```python
+ endpoint_name="mnist-batch"
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ *You will configure the name of the endpoint later in the creation wizard.*
+
+
+1. Configure your batch endpoint
+
+ # [Azure CLI](#tab/azure-cli)
+
+ The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/mnist-endpoint.yml`.
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-endpoint.yml":::
+
+ The following table describes the key properties of the endpoint. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+ | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
+ | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
+
+ # [Python](#tab/python)
+
+ ```python
+ # create a batch endpoint
+ endpoint = BatchEndpoint(
+ name=endpoint_name,
+ description="A batch endpoint for scoring images from the MNIST dataset.",
+ )
+ ```
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+ | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
+ | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
+
+ # [Studio](#tab/azure-studio)
+
+ *You will create the endpoint in the same step you create the deployment.*
+
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Run the following code to create the batch endpoint.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_batch_endpoint" :::
+
+ # [Python](#tab/python)
+
+ ```python
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+ # [Studio](#tab/azure-studio)
+
+ *You will create the endpoint in the same step you are creating the deployment later.*
++
+## Create a scoring script
+
+Batch deployments require a scoring script that indicates how the given model should be executed and how input data must be processed. In this case, we are deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script looks as follows:
+
+> [!NOTE]
+> For MLflow models this scoring script is not required as it is automatically generated by Azure Machine Learning. If your model is an MLflow model, you can skip this step. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+> [!TIP]
+> For more information about how to write scoring scripts and related best practices, see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md).
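+
+As a reference only, the following is a minimal sketch of the `init()`/`run()` contract that the scoring script in the repository follows; the file handling and the placeholder prediction are illustrative, not the tutorial's actual code.
+
+```python
+import os
+import pandas as pd
+
+def init():
+    global model
+    # AZUREML_MODEL_DIR points to the folder of the registered model (set by Azure ML).
+    model_dir = os.environ.get("AZUREML_MODEL_DIR", ".")
+    model = None  # Load your Torch model from model_dir here (omitted in this sketch).
+
+def run(mini_batch):
+    # mini_batch is a list of file paths; return one result per processed file.
+    results = []
+    for image_path in mini_batch:
+        predicted_digit = -1  # Placeholder: replace with preprocessing + model inference.
+        results.append([os.path.basename(image_path), predicted_digit])
+    return pd.DataFrame(results, columns=["file", "prediction"])
+```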
++
+## Create a batch deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch deployment, you need all the following items:
+
+* A registered model in the workspace.
+* The code to score the model.
+* The environment in which the model runs.
+* The pre-created compute and resource settings.
+
+1. Create an environment where your batch deployment will run. Include in the environment any dependencies your code requires to run. You will also need to add the library `azureml-core`, as it is required for batch deployments to work.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ *No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file as an anonymous environment.*
+
+ # [Python](#tab/python)
+
+ Let's get a reference to the environment:
+
+ ```python
+ from azure.ai.ml.entities import Environment  # not included in the earlier import block
+
+ env = Environment(
+ conda_file="./mnist/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Environments__ tab on the side menu.
+ 1. Select the tab __Custom environments__ > __Create__.
+ 1. Enter the name of the environment, in this case `torch-batch-env`.
+ 1. On __Select environment type__ select __Use existing docker image with conda__.
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+ 1. On the __Customize__ section, copy the content of the file `./mnist/environment/conda.yml` from the repository into the portal. The conda file looks as follows:
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist/environment/conda.yml":::
+
+ 1. Click on __Next__ and then on __Create__.
+ 1. The environment is ready to be used.
+
+
+
+ > [!WARNING]
+ > Curated environments are not supported in batch deployments. You will need to indicate your own environment. You can always use the base image of a curated environment as yours to simplify the process.
+
+ > [!IMPORTANT]
+ > Do not forget to include the library `azureml-core` in your deployment as it is required by the executor.
+
+
+1. Create a deployment definition
+
+ # [Azure CLI](#tab/azure-cli)
+
+ __mnist-torch-deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-torch-deployment.yml":::
+
+ For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the deployment. |
+ | `endpoint_name` | The name of the endpoint to create the deployment under. |
+ | `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](./reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
+ | `code_configuration.code.path` | The local directory that contains all the Python source code to score the model. |
+ | `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](how-to-batch-scoring-script.md#understanding-the-scoring-script). |
+ | `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](./reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
+ | `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using `azureml:<compute-name>` syntax. |
+ | `resources.instance_count` | The number of instances to be used for each batch scoring job. |
+ | `max_concurrency_per_instance` | [Optional] The maximum number of parallel `scoring_script` runs per instance. |
+ | `mini_batch_size` | [Optional] The number of files the `scoring_script` can process in one `run()` call. |
+ | `output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and only calculate `error_threshold`. |
+ | `output_file_name` | [Optional] The name of the batch scoring output file for `append_row` `output_action`. |
+ | `retry_settings.max_retries` | [Optional] The number of max tries for a failed `scoring_script` `run()`. |
+ | `retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
+ | `error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
+ | `logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
+
+ # [Python](#tab/python)
+
+ ```python
+ deployment = BatchDeployment(
+ name="mnist-torch-dpl",
+ description="A deployment using Torch to solve the MNIST classification dataset.",
+ endpoint_name=endpoint_name,
+ model=model,
+ code_path="./mnist/code/",
+ scoring_script="batch_driver.py",
+ environment=env,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
+ logging_level="info",
+ )
+ ```
+
+ This class allows the user to configure the following key aspects:
+ * `name` - Name of the deployment.
+ * `endpoint_name` - Name of the endpoint to create the deployment under.
+ * `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+ * `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+ * `code_path`- Path to the source code directory for scoring the model
+ * `scoring_script` - Relative path to the scoring file in the source code directory
+ * `compute` - Name of the compute target to execute the batch scoring jobs on
+ * `instance_count`- The number of nodes to use for each batch scoring job.
+ * `max_concurrency_per_instance`- The maximum number of parallel scoring_script runs per instance.
+ * `mini_batch_size` - The number of files the `code_configuration.scoring_script` can process in one `run()` call.
+ * `retry_settings`- Retry settings for scoring each mini batch.
+ * `max_retries`- The maximum number of retries for a failed or timed-out mini batch (default is 3)
+ * `timeout`- The timeout in seconds for scoring a mini batch (default is 30)
+ * `output_action`- Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row`
+ * `output_file_name`- Name of the batch scoring output file. Default is `predictions.csv`
+ * `environment_variables`- Dictionary of environment variable name-value pairs to set for each batch scoring job.
+ * `logging_level`- The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`.
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__ > __Create__.
+ 1. Give the endpoint a name, in this case `mnist-batch`. You can configure the rest of the fields or leave them blank.
+ 1. Click on __Next__.
+ 1. On the model list, select the model `mnist` and click on __Next__.
+ 1. On the deployment configuration page, give the deployment a name.
+ 1. On __Output action__, ensure __Append row__ is selected.
+ 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+ 1. On __Mini batch size__, adjust the number of files that will be included in each mini-batch. This controls the amount of data your scoring script receives per batch.
+ 1. On __Scoring timeout (seconds)__, ensure you are giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+ 1. On __Max concurrency per instance__, configure the number of executors you want per compute instance in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+ 1. Once done, click on __Next__.
+ 1. On environment, go to __Select scoring file and dependencies__ and click on __Browse__.
+ 1. Select the scoring script file on `/mnist/code/batch_driver.py`.
+ 1. On the section __Choose an environment__, select the environment you created in a previous step.
+ 1. Click on __Next__.
+ 1. On the section __Compute__, select the compute cluster you created in a previous step.
+
+ > [!WARNING]
+ > Azure Kubernetes clusters are supported in batch deployments, but only when created using the Azure ML CLI or Python SDK.
+
+ 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we will use 2.
+ 1. Click on __Next__.
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_batch_deployment_set_default" :::
+
+ > [!TIP]
+ > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#adding-deployments-to-an-endpoint) section.
+
+ # [Python](#tab/python)
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+ Once the deployment is completed, we need to ensure the new deployment is the default deployment in the endpoint:
+
+ ```python
+ endpoint = ml_client.batch_endpoints.get(endpoint_name)
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ In the wizard, click on __Create__ to start the deployment process.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
+
+
+
+ > [!NOTE]
+    > __How is work distributed?__
+    >
+    > Batch deployments distribute work at the file level. For example, a folder containing 100 files with a mini-batch size of 10 files generates 10 mini-batches of 10 files each, regardless of the size of the files involved (see the quick calculation below). If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism, or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution.
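+
+    To illustrate the arithmetic, here's a quick, hypothetical back-of-the-envelope calculation (illustrative only, not part of the deployment):
+
+    ```python
+    import math
+
+    total_files = 100      # files found under the input folder
+    mini_batch_size = 10   # files handed to the scoring script per mini-batch
+
+    # Work is split by file count only; file sizes aren't taken into account.
+    mini_batches = math.ceil(total_files / mini_batch_size)
+    print(mini_batches)    # 10 mini-batches, which can run in parallel across instances
+    ```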
+
+1. Check batch endpoint and deployment details.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Use `show` to check endpoint and deployment details. To check a batch deployment, run the following code:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="check_batch_deployment_detail" :::
+
+
+ # [Python](#tab/python)
+
+ To check a batch deployment, run the following code:
+
+ ```python
+ ml_client.batch_deployments.get(name=deployment.name, endpoint_name=endpoint.name)
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__.
+ 1. Select the batch endpoint you want to get details from.
+ 1. In the endpoint page, you will see all the details of the endpoint along with all the deployments available.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
+
+## Invoke the batch endpoint to start a batch job
+
+Invoking a batch endpoint triggers a batch scoring job. The invoke response returns a job `name` that can be used to track the batch scoring progress. The job splits the entire input into multiple mini-batches and processes them in parallel on the compute cluster. The batch scoring outputs are stored in cloud storage, either in the workspace's default blob storage or in the storage you specified.
+
+# [Azure CLI](#tab/azure-cli)
+
+
+# [Python](#tab/python)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ inputs=Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
+
+1. Click on __Next__.
+1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and, in the section __Path__, enter the full URL `https://pipelinedata.blob.core.windows.net/sampledata/mnist`.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
+
+1. Start the job.
+++
+### Configure job's inputs
+
+Batch endpoints support reading files or folders that are located in different locations. To learn more about the supported input types and how to specify them, read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+
+> [!TIP]
+> Local data folders/files can be used when executing batch endpoints from the Azure ML CLI or the Azure ML SDK for Python. However, that operation results in the local data being uploaded to the default Azure Machine Learning data store of the workspace you're working on.
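+
+For example, here's a minimal sketch of invoking the endpoint with a local folder using the Python SDK, assuming the `ml_client` and `endpoint_name` objects created earlier (the folder path is hypothetical). The folder is uploaded to the workspace's default data store before the job starts:
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# Hypothetical local folder; it gets uploaded to the workspace's default data store automatically.
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name=endpoint_name,
+    inputs=Input(path="./data-to-score", type=AssetTypes.URI_FOLDER),
+)
+```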
+
+> [!IMPORTANT]
+> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work, but batch endpoints created with the GA CLIv2 (2.4.0 and newer) or the GA REST API (2022-05-01 and newer) won't support V1 datasets.
+
+### Configure the output location
+
+By default, the batch scoring results are stored in the workspace's default blob store, in a folder named after the job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.
+
+# [Azure CLI](#tab/azure-cli)
+
+Use `--output-path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for `--output-path` is the same as for `--input` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `--set output_file_name=<your-file-name>` to configure a new output file name.
++
+# [Python](#tab/python)
+
+Use `output_path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for `output_path` is the same as for the inputs when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `output_file_name=<your-file-name>` to configure a new output file name.
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ inputs={
+ "input": Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+ },
+ output_path={
+ "score": Input(path=f"azureml://datastores/workspaceblobstore/paths/{endpoint_name}")
+ },
+ output_file_name="predictions.csv"
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+1. Click on __Next__.
+1. Check the option __Override deployment settings__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+
+1. You can now configure __Output file name__ and some extra properties of the deployment execution. Only this execution will be affected.
+1. On __Select data source__, select the data input you want to use.
+1. On __Configure output location__, check the option __Enable output configuration__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
+
+1. Configure the __Blob datastore__ where the outputs should be placed.
+++
+> [!WARNING]
+> You must use a unique output location. If the output file exists, the batch scoring job will fail.
+
+> [!IMPORTANT]
+> Unlike inputs, outputs can only be stored in Azure Machine Learning data stores that run on blob storage accounts.
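+
+For example, one way to guarantee a unique output location is to include a timestamp in the output folder, following the same invocation pattern shown above (this is only a sketch; the folder-naming scheme is an arbitrary choice):
+
+```python
+from datetime import datetime
+
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# Hypothetical naming scheme: a timestamped folder per invocation avoids collisions.
+run_folder = f"{endpoint_name}/{datetime.utcnow().strftime('%Y%m%d-%H%M%S')}"
+
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name=endpoint_name,
+    inputs={
+        "input": Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+    },
+    output_path={
+        "score": Input(path=f"azureml://datastores/workspaceblobstore/paths/{run_folder}")
+    },
+    output_file_name="predictions.csv",
+)
+```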
+
+## Overwrite deployment configuration per job
+
+Some settings can be overwritten when invoking a batch endpoint to make the best use of the compute resources and to improve performance. The following settings can be configured on a per-job basis:
+
+* Use __instance count__ to overwrite the number of instances to request from the compute cluster. For example, for a larger volume of data inputs, you may want to use more instances to speed up the end-to-end batch scoring.
+* Use __mini-batch size__ to overwrite the number of files to include in each mini-batch. The number of mini-batches is determined by the total input file count and the mini-batch size. A smaller mini-batch size generates more mini-batches. Mini-batches can run in parallel, but there might be extra scheduling and invocation overhead.
+* Other settings, including __max retries__, __timeout__, and __error threshold__, can also be overwritten. These settings might impact the end-to-end batch scoring time for different workloads.
+
+# [Azure CLI](#tab/azure-cli)
++
+# [Python](#tab/python)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ input=Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist"),
+ params_override=[
+ { "mini_batch_size": "20" },
+ { "compute.instance_count": "5" }
+ ],
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+1. Click on __Next__.
+1. Check the option __Override deployment settings__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+
+1. Configure the job parameters. Only the current job execution will be affected by this configuration.
+++
+### Monitor batch scoring job execution progress
+
+Batch scoring jobs usually take some time to process the entire set of inputs.
+
+# [Azure CLI](#tab/azure-cli)
+
+You can use the CLI `job show` command to view the job. Run the following code to check the job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`.
++
+# [Python](#tab/python)
+
+The following code checks the job status and outputs a link to the Azure ML studio for further details.
+
+```python
+ml_client.jobs.get(job.name)
+```
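+
+If you prefer to block until the job finishes, you can stream its logs to the console instead — a minimal sketch assuming the same `ml_client` and `job` objects:
+
+```python
+# Streams the job's logs and returns once the job reaches a terminal state.
+ml_client.jobs.stream(job.name)
+```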
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to monitor.
+1. Click on the tab __Jobs__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+
+1. You will see a list of the jobs created for the selected endpoint.
+1. Select the last job that is running.
+1. You will be redirected to the job monitoring page.
+++
+### Check batch scoring results
+
+Follow the steps below to view the scoring results in Azure Storage Explorer when the job is completed (a programmatic alternative is sketched after these steps):
+
+1. Run the following code to open the batch scoring job in Azure Machine Learning studio. The job's studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="show_job_in_studio" :::
+
+1. In the graph of the job, select the `batchscoring` step.
+1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
+1. From __Data outputs__, select the icon to open __Storage Explorer__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
+
+ The scoring results in Storage Explorer are similar to the following sample page:
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="media/how-to-use-batch-endpoint/scoring-view.png":::
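+
+Alternatively, if you prefer to download the scoring results programmatically rather than browsing them in Storage Explorer, the following sketch uses the Python SDK, assuming the `ml_client` and `job` objects created earlier and that the job output is named `score` as in the invocation examples above:
+
+```python
+# Downloads the job's "score" output (including predictions.csv) to the local folder ./results.
+ml_client.jobs.download(name=job.name, download_path="./results", output_name="score")
+```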
+
+## Adding deployments to an endpoint
+
+Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments don't affect one another.
+
+In this example, you will learn how to add a second deployment __that solves the same MNIST problem but uses a model built with Keras and TensorFlow__.
+
+### Adding a second deployment
+
+1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires to run. You also need to add the library `azureml-core`, as it's required for batch deployments to work. The following environment definition has the required libraries to run a model with TensorFlow.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ *No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file as an anonymous environment.*
+
+ # [Python](#tab/python)
+
+ Let's get a reference to the environment:
+
+ ```python
+ env = Environment(
+ conda_file="./mnist-keras/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Environments__ tab on the side menu.
+ 1. Select the tab __Custom environments__ > __Create__.
+ 1. Enter the name of the environment, in this case `keras-batch-env`.
+    1. On __Select environment type__, select __Use existing docker image with conda__.
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+    1. In the __Customize__ section, copy the content of the file `./mnist-keras/environment/conda.yml` included in the repository into the portal. The conda file looks as follows:
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras/environment/conda.yml":::
+
+ 1. Click on __Next__ and then on __Create__.
+ 1. The environment is ready to be used.
+
+
+
+ > [!WARNING]
+    > Curated environments are not supported in batch deployments. You need to specify your own environment. You can always use the base image of a curated environment as yours to simplify the process.
+
+ > [!IMPORTANT]
+ > Do not forget to include the library `azureml-core` in your deployment as it is required by the executor.
+
+1. Create a scoring script for the model:
+
+ __batch_driver.py__
+
+ :::code language="python" source="~/azureml-examples-main/sdk/python/endpoints/batch/mnist-keras/code/batch_driver.py" :::
+
+1. Create a deployment definition:
+
+ # [Azure CLI](#tab/azure-cli)
+
+ __mnist-keras-deployment__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras-deployment.yml":::
+
+ # [Python](#tab/python)
+
+ ```python
+ deployment = BatchDeployment(
+ name="non-mlflow-deployment",
+ description="this is a sample non-mlflow deployment",
+ endpoint_name=batch_endpoint_name,
+ model=model,
+ code_path="./mnist-keras/code/",
+ scoring_script="batch_driver.py",
+ environment=env,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
+ logging_level="info",
+ )
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__.
+ 1. Select the existing batch endpoint where you want to add the deployment.
+ 1. Click on __Add deployment__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
+
+ 1. On the model list, select the model `mnist` and click on __Next__.
+ 1. On the deployment configuration page, give the deployment a name.
+ 1. On __Output action__, ensure __Append row__ is selected.
+ 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+    1. On __Mini batch size__, adjust the number of files that will be included in each mini-batch. This controls the amount of data your scoring script receives per batch.
+    1. On __Scoring timeout (seconds)__, ensure you're giving your deployment enough time to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+    1. On __Max concurrency per instance__, configure the number of executors you want per compute instance in the deployment. A higher number increases the degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+ 1. Once done, click on __Next__.
+ 1. On environment, go to __Select scoring file and dependencies__ and click on __Browse__.
+ 1. Select the scoring script file on `/mnist-keras/code/batch_driver.py`.
+    1. On the section __Choose an environment__, select the environment you created in a previous step.
+ 1. Click on __Next__.
+ 1. On the section __Compute__, select the compute cluster you created in a previous step.
+ 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we will use 2.
+    1. Click on __Next__.
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_new_deployment_not_default" :::
+
+ > [!TIP]
+ > The `--set-default` parameter is missing in this case. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later.
+
+ # [Python](#tab/python)
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ In the wizard, click on __Create__ to start the deployment process.
++
+### Test a non-default batch deployment
+
+To test the new non-default deployment, you will need to know the name of the deployment you want to run.
+
+# [Azure CLI](#tab/azure-cli)
++
+Notice `--deployment-name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
+
+# [Python](#tab/python)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ deployment_name=deployment.name,
+ endpoint_name=endpoint.name,
+ input=input,
+)
+```
+
+Notice `deployment_name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
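+
+If you don't remember the deployment's name, you can list the deployments available under the endpoint — a small sketch assuming the same `ml_client` and `endpoint` objects:
+
+```python
+# Lists every deployment registered under the batch endpoint.
+for existing_deployment in ml_client.batch_deployments.list(endpoint_name=endpoint.name):
+    print(existing_deployment.name)
+```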
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+1. On __Deployment__, select the deployment you want to execute. In this case, `mnist-keras`.
+1. Complete the job creation wizard to get the job started.
+++
+### Update the default batch deployment
+
+Although you can invoke a specific deployment inside an endpoint, you'll usually want to invoke the endpoint itself and let it decide which deployment to use. Such a deployment is called the "default" deployment. This lets you change the default deployment, and hence the model serving the endpoint, without changing the contract with the user invoking the endpoint. Use the following instructions to update the default deployment:
+
+# [Azure CLI](#tab/azure-cli)
++
+# [Python](#tab/python)
+
+```python
+endpoint = ml_client.batch_endpoints.get(endpoint_name)
+endpoint.defaults.deployment_name = deployment.name
+ml_client.batch_endpoints.begin_create_or_update(endpoint)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to configure.
+1. Click on __Update default deployment__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
+
+1. On __Select default deployment__, select the name of the deployment you want to be the default one.
+1. Click on __Update__.
+1. The selected deployment is now the default one.
+++
+## Delete the batch endpoint and the deployment
+
+# [Azure CLI](#tab/azure-cli)
+
+If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion.
++
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
++
+# [Python](#tab/python)
+
+Delete endpoint:
+
+```python
+ml_client.batch_endpoints.begin_delete(name=batch_endpoint_name)
+```
+
+Delete compute: optional, as you may choose to reuse your compute cluster with later deployments.
+
+```python
+ml_client.compute.begin_delete(name=compute_name)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to delete.
+1. Click on __Delete__.
+1. The endpoint, along with all its deployments, will be deleted.
+1. Notice that this won't affect the compute cluster where the deployment(s) run.
+++
+## Next steps
+
+* [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+* [Authentication on batch endpoints](how-to-authenticate-batch-endpoint.md).
+* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md).
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid-batch.md
+
+ Title: "Invoking batch endpoints from Event Grid events in storage"
+
+description: Learn how to automatically trigger batch endpoints when new files are generated in storage.
++++++ Last updated : 10/10/2022++++
+# Invoking batch endpoints from Event Grid events in storage
++
+Event Grid is a fully managed service that enables you to easily manage events across many different Azure services and applications. It simplifies building event-driven and serverless applications. In this tutorial, you'll learn how to create a Logic App that subscribes to the Event Grid event associated with new files created in a storage account, and triggers a batch endpoint to process the given file.
+
+The workflow will work in the following way:
+
+1. It will be triggered when a new blob is created in a specific storage account.
+2. Since the storage account can contain multiple data assets, event filtering will be applied to only react to events happening in a specific folder inside of it. Further filtering can be done if needed.
+3. It will get an authorization token to invoke batch endpoints using the credentials from a Service Principal.
+4. It will trigger the batch endpoint (default deployment) using the newly created file as input.
+
+> [!IMPORTANT]
+> When using a Logic App connected with Event Grid to invoke batch deployments, a job is generated for each file that triggers the *blob created* event. However, keep in mind that batch deployments distribute the work at the file level. Because each of these executions specifies only one file, there won't be any parallelization happening in the deployment. Instead, you'll be taking advantage of the capability of batch deployments to execute multiple scoring jobs under the same compute cluster. If you need to run jobs on entire folders in an automatic fashion, we recommend switching to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* This example assumes that your batch deployment runs in a compute cluster called `cpu-cluster`.
+* The Logic App we are creating will communicate with Azure Machine Learning batch endpoints using REST. To learn more about how to use the REST API of batch endpoints, read [Deploy models with REST for batch scoring](how-to-deploy-batch-with-rest.md).
+
+## Authenticating against batch endpoints
+
+Azure Logic Apps can invoke the REST APIs of batch endpoints by using the [HTTP](../connectors/connectors-native-http.md) activity. Batch endpoints support Azure Active Directory for authorization, and hence the requests made to the APIs require proper authentication handling.
+
+We recommend using a service principal for authentication and interaction with batch endpoints in this scenario. A sketch of the resulting token request is shown after the following steps.
+
+1. Create a service principal following the steps at [Register an application with Azure AD and create a service principal](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Take note of the `client secret` generated.
+1. Take note of the `client ID` and the `tenant id` as explained at [Get tenant and app ID values for signing in](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Grant access for the service principal you created to your workspace as explained at [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example the service principal will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
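+
+For reference, the authorization token that the Logic App will request later (in the HTTP action) is a standard OAuth2 client-credentials token for the `https://ml.azure.com` resource. The following sketch shows an equivalent request in Python, assuming the `requests` library and placeholder values for the service principal details noted above:
+
+```python
+import requests
+
+# Placeholders for the values gathered in the previous steps.
+tenant_id = "<tenant-id>"
+client_id = "<client-id>"
+client_secret = "<client-secret>"
+
+# Client-credentials token request, mirroring the HTTP action configured later in the Logic App.
+response = requests.post(
+    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
+    headers={"Content-Type": "application/x-www-form-urlencoded"},
+    data={
+        "grant_type": "client_credentials",
+        "client_id": client_id,
+        "client_secret": client_secret,
+        "resource": "https://ml.azure.com",
+    },
+)
+response.raise_for_status()
+access_token = response.json()["access_token"]
+```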
+
+## Enabling data access
+
+We will be using cloud URIs provided by Event Grid to indicate the input data to send to the deployment job. Batch deployments use the identity of the compute to mount the data, and the identity of the job to read the data once mounted, for external storage accounts. You need to assign a user-assigned managed identity to the compute cluster to ensure it has access to mount the underlying data. Follow these steps to ensure data access:
+
+1. Create a [managed identity resource](../active-directory/managed-identities-azure-resources/overview.md):
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ IDENTITY=$(az identity create -n azureml-cpu-cluster-idn --query id -o tsv)
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ # Use the Azure CLI to create the managed identity. Then copy the value of the variable IDENTITY into a Python variable
+ identity="/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/azureml-cpu-cluster-idn"
+ ```
+
+1. Update the compute cluster to use the managed identity we created:
+
+ > [!NOTE]
+    > This example assumes you have a compute cluster named `cpu-cluster` and that it's used for the default deployment in the endpoint.
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml compute update --name cpu-cluster --identity-type user_assigned --user-assigned-identities $IDENTITY
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import AmlCompute, ManagedIdentityConfiguration
+ from azure.ai.ml.constants import ManagedServiceIdentityType
+
+ compute_name = "cpu-cluster"
+ compute_cluster = ml_client.compute.get(name=compute_name)
+
+ compute_cluster.identity.type = ManagedServiceIdentityType.USER_ASSIGNED
+ compute_cluster.identity.user_assigned_identities = [
+ ManagedIdentityConfiguration(resource_id=identity)
+ ]
+
+ ml_client.compute.begin_create_or_update(compute_cluster)
+ ```
+
+1. Go to the [Azure portal](https://portal.azure.com) and ensure the managed identity has the right permissions to read the data. To access storage services, you must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
+
+## Create a Logic App
+
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account.
+
+1. On the Azure home page, select **Create a resource**.
+
+1. On the Azure Marketplace menu, select **Integration** > **Logic App**.
+
+ ![Screenshot that shows Azure Marketplace menu with "Integration" and "Logic App" selected.](../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/create-new-logic-app-resource.png)
+
+1. On the **Create Logic App** pane, on the **Basics** tab, provide the following information about your logic app resource.
+
+ ![Screenshot showing Azure portal, logic app creation pane, and info for new logic app resource.](../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/create-logic-app-settings.png)
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. This example uses **Pay-As-You-Go**. |
+ | **Resource Group** | Yes | **LA-TravelTime-RG** | The [Azure resource group](../azure-resource-manager/management/overview.md) where you create your logic app resource and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+ | **Name** | Yes | **LA-TravelTime** | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+
+1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Consumption** to show only the settings for a Consumption logic app workflow, which runs in multi-tenant Azure Logic Apps.
+
+ The **Plan type** property also specifies the billing model to use.
+
+ | Plan type | Description |
+ |--|-|
+ | **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](../logic-apps/logic-apps-pricing.md#standard-pricing). |
+ | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). |
+
+1. Now continue with the following selections:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Region** | Yes | **West US** | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <br><br>**Note**: If your subscription is associated with an integration service environment, this list includes those environments. |
+ | **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. Change this option only when you want to enable diagnostic logging. For this tutorial, keep the default selection. |
+
+1. When you're done, select **Review + create**. After Azure validates the information about your logic app resource, select **Create**.
+
+1. After Azure deploys your app, select **Go to resource**.
+
+ Azure opens the workflow template selection pane, which shows an introduction video, commonly used triggers, and workflow template patterns.
+
+1. Scroll down past the video and common triggers sections to the **Templates** section, and select **Blank Logic App**.
+
+ ![Screenshot that shows the workflow template selection pane with "Blank Logic App" selected.](../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/select-logic-app-template.png)
++
+## Configure the workflow parameters
+
+This Logic App will use parameters to store specific pieces of information that you will need to run the batch deployment.
+
+1. On the workflow designer, under the tool bar, select the option __Parameters__ and configure them as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/parameters.png" alt-text="Screenshot of all the parameters required in the workflow.":::
+
+1. To create a parameter, use the __Add parameter__ option:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/parameter.png" alt-text="Screenshot showing how to add one parameter in designer.":::
+
+1. Create the following parameters.
+
+ | Parameter | Description | Sample value |
+ | | -|- |
+ | `tenant_id` | Tenant ID where the endpoint is deployed | `00000000-0000-0000-00000000` |
+ | `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-00000000` |
+ | `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` |
+ | `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+
+ > [!IMPORTANT]
+ > `endpoint_uri` is the URI of the endpoint you are trying to execute. The endpoint must have a default deployment configured.
+
+ > [!TIP]
+ > Use the values configured at [Authenticating against batch endpoints](#authenticating-against-batch-endpoints).
+
+## Add the trigger
+
+We want to trigger the Logic App each time a new file is created in a given folder (data asset) of a Storage Account. The Logic App also uses the information of the event to invoke the batch endpoint, passing the specific file to be processed.
+
+1. On the workflow designer, under the search box, select **Built-in**.
+
+1. In the search box, enter **event grid**, and select the trigger named **When a resource event occurs**.
+
+1. Configure the trigger as follows:
+
+ | Property | Value | Description |
+ |-|-|-|
+ | **Subscription** | Your subscription name | The subscription where the Azure Storage Account is placed. |
+ | **Resource Type** | `Microsoft.Storage.StorageAccounts` | The resource type emitting the events. |
+ | **Resource Name** | Your storage account name | The name of the Storage Account where the files will be generated. |
+ | **Event Type Item** | `Microsoft.Storage.BlobCreated` | The event type. |
+
+1. Click on __Add new parameter__ and select __Prefix Filter__. Add the value `/blobServices/default/containers/<container_name>/blobs/<path_to_data_folder>`.
+
+ > [!IMPORTANT]
+    > __Prefix Filter__ allows Event Grid to only notify the workflow when a blob is created in the specific path we indicated. In this case, we're assuming that files will be created by some external process in the folder `<path_to_data_folder>` inside the container `<container_name>` in the selected Storage Account. Configure this parameter to match the location of your data. Otherwise, the event will fire for any file created anywhere in the Storage Account. See [Event filtering for Event Grid](../event-grid/event-filtering.md) for more details.
+
+ The trigger will look as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/create-trigger.png" alt-text="Screenshot of the trigger activity of the Logic App.":::
+
+## Configure the actions
+
+1. Click on __+ New step__.
+
+1. On the workflow designer, under the search box, select **Built-in** and then click on __HTTP__:
+
+1. Configure the action as follows:
+
+ | Property | Value | Notes |
+ |-|-|-|
+ | **Method** | `POST` | The HTTP method |
+ | **URI** | `concat('https://login.microsoftonline.com/', parameters('tenant_id'), '/oauth2/token')` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
+ | **Headers** | `Content-Type` with value `application/x-www-form-urlencoded` | |
+ | **Body** | `concat('grant_type=client_credentials&client_id=', parameters('client_id'), '&client_secret=', parameters('client_secret'), '&resource=https://ml.azure.com')` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
+
+ The action will look as follows: