Updates from: 11/24/2022 02:12:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
Title: Create a resilient access control management strategy - Azure AD
description: This document provides guidance on strategies an organization should adopt to provide resilience to reduce the risk of lockout during unforeseen disruptions -+ tags: azuread
active-directory Msal Android Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md
The Azure portal generates the redirect URI for you and displays it in the **And
For more information about signing your app, see [Sign your app](https://developer.android.com/studio/publish/app-signing) in the Android Studio User Guide.
-> [!IMPORTANT]
-> Use your production signing key for the production version of your app.
- #### Configure MSAL to use a broker To use a broker in your app, you must attest that you've configured your broker redirect. For example, include both your broker enabled redirect URI--and indicate that you registered it--by including the following settings in your MSAL configuration file:
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples show public client desktop applications that access the Mi
> [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample(s) <br/> on GitHub | Auth<br/> libraries | Auth flow | > | - | -- | - | -- |
-> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Micrsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) <br/> &#8226; [Authenticate users with MSAL.NET in a WinUI desktop application](https://github.com/Azure-Samples/ms-identity-netcore-winui) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code |
+> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Microsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) <br/> &#8226; [Authenticate users with MSAL.NET in a WinUI desktop application](https://github.com/Azure-Samples/ms-identity-netcore-winui) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code |
> | .NET | [Invoke protected API with integrated Windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | MSAL.NET | Integrated Windows authentication | > | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | MSAL Java | Integrated Windows authentication | > | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
active-directory Hybrid Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-organizations.md
Previously updated : 04/26/2018- Last updated : 11/23/2022 + -
+# Customer intent: As a tenant administrator, I want to give partners access to both on-premises and cloud resources with Azure AD B2B collaboration.
# Azure Active Directory B2B collaboration for hybrid organizations
Azure Active Directory (Azure AD) B2B collaboration makes it easy for you to giv
## Grant B2B users in Azure AD access to your on-premises apps
-If your organization uses Azure AD B2B collaboration capabilities to invite guest users from partner organizations to your Azure AD, you can now provide these B2B users access to on-premises apps.
+If your organization uses [Azure AD B2B](what-is-b2b.md) collaboration capabilities to invite guest users from partner organizations to your Azure AD, you can now provide these B2B users access to on-premises apps.
For apps that use SAML-based authentication, you can make these apps available to B2B users through the Azure portal, using Azure AD Application Proxy for authentication. For apps that use integrated Windows authentication (IWA) with Kerberos constrained delegation (KCD), you also use Azure AD Proxy for authentication. However, for authorization to work, a user object is required in the on-premises Windows Server Active Directory. There are two methods you can use to create local user objects that represent your B2B guest users. - You can use Microsoft Identity Manager (MIM) 2016 SP1 and the MIM management agent for Microsoft Graph.-- You can use a PowerShell script. (This solution does not require MIM.)
+- You can use a PowerShell script. (This solution doesn't require MIM.)
For details about how to implement these solutions, see [Grant B2B users in Azure AD access to your on-premises applications](hybrid-cloud-to-on-premises.md).
-## Grant locally-managed partner accounts access to cloud resources
+## Grant locally managed partner accounts access to cloud resources
Before Azure AD, organizations with on-premises identity systems have traditionally managed partner accounts in their on-premises directory. If you're such an organization, you want to make sure that your partners continue to have access as you move your apps and other resources to the cloud. Ideally, you want these users to use the same set of credentials to access both cloud and on-premises resources.
We now offer methods where you can use Azure AD Connect to sync these local acco
To help protect your company data, you can control access to just the right resources, and configure authorization policies that treat these guest users differently from your employees.
-For implementation details, see [Grant locally-managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md).
+For implementation details, see [Grant locally managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md).
## Next steps - [Grant B2B users in Azure AD access to your on-premises applications](hybrid-cloud-to-on-premises.md)-- [Grant locally-managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md)
+- [B2B direct connect](b2b-direct-connect-overview.md)
+- [Grant locally managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md)
active-directory Active Directory Compare Azure Ad To Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-compare-azure-ad-to-ad.md
Title: Compare Active Directory to Azure Active Directory
description: This document compares Active Directory Domain Services (ADDS) to Azure Active Directory (AD). It outlines key concepts in both identity solutions and explains how it's different or similar. -+ tags: azuread
active-directory Active Directory Data Storage Japan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-japan.md
Title: Customer data storage for Japan customers - Azure AD
description: Learn about where Azure Active Directory stores customer-related data for its Japan customers. -+
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Title: Azure Active Directory Authentication management operations reference gui
description: This operations reference guide describes the checks and actions you should take to secure authentication management -+ tags: azuread
active-directory Active Directory Ops Guide Govern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-govern.md
Title: Azure Active Directory governance operations reference guide
description: This operations reference guide describes the checks and actions you should take to secure governance management -+ tags: azuread
active-directory Active Directory Ops Guide Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-iam.md
Title: Azure Active Directory Identity and access management operations referenc
description: This operations reference guide describes the checks and actions you should take to secure identity and access management operations -+ tags: azuread
active-directory Active Directory Ops Guide Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-intro.md
Title: Azure Active Directory operations reference guide
description: This operations reference guide describes the checks and actions you should take to secure and maintain identity and access management, authentication, governance, and operations -+ tags: azuread
active-directory Active Directory Ops Guide Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-ops.md
Title: Azure Active Directory general operations guide reference
description: This operations reference guide describes the checks and actions you should take to secure general operations -+ tags: azuread
active-directory Azure Active Directory Parallel Identity Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-active-directory-parallel-identity-options.md
Title: 'Parallel and combined identity infrastructure options'
description: This article describes the various options available for organizations to run multiple tenants and multi-cloud scenarios -+ na
active-directory Azure Ad Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-ad-data-residency.md
+
+ Title: Azure AD and data residency
+description: Use residency data to manage access, achieve mobility scenarios, and secure your organization.
+++++++ Last updated : 11/23/2022+++++
+# Azure Active Directory and data residency
+
+Azure AD is an Identity as a Service (IDaaS) solution that stores and manages identity and access data in the cloud. You can use the data to enable and manage access to cloud services, achieve mobility scenarios, and secure your organization. An instance of the Azure AD service, called a [tenant](/azure/active-directory/develop/developer-glossary#tenant), is an isolated set of directory object data that the customer provisions and owns.
+
+## Core Store
+
+Data update and retrieval operations in the Azure AD Core Store relate to a single tenant based on the user's security token, which achieves tenant isolation. The Core Store is made up of tenants stored in scale units, each of which contains multiple tenants. Azure AD replicates each scale unit in the physical data centers of a logical region for resiliency and performance.
+
+Learn more: [Azure Active Directory Core Store Scale Units](https://www.youtube.com/watch?v=OcKO44GtHh8)
+
+Currently Azure AD has the following regions:
+
+* North America
+* Europe, Middle East, and Africa (EMEA)
+* Australia
+* China
+* Japan
+* [United States government](https://azure.microsoft.com/global-infrastructure/government/)
+* Worldwide
+
+Azure AD handles directory data based on usability, performance, residency, and other requirements, which vary by region. The term residency indicates that Microsoft provides assurance the data isn't persisted outside the geographic region.
+
+Azure AD replicates each tenant through its scale unit, across data centers, based on the following criteria:
+
+* Directory data stored in data centers closest to the user-residency location, to reduce latency and provide fast user sign-in times
+* Directory data stored in geographically isolated data centers to assure availability during unforeseen geological events
+* Compliance with data residency, or other requirements, for specific customers and countries or regions
+
+During tenant creation (for example, signing up for Office 365 or Azure, or creating more Azure AD instances through the Azure portal) you select a country or region as the primary location. Azure AD maps the selection to a logical region and a single scale unit in it. Tenant location can't be changed after it's set.
+
+## Azure AD cloud solution models
+
+Use the following table to see Azure AD cloud solution models based on infrastructure, data location, and operation sovereignty.
+
+|Model|Model regions|Data location|Operations personnel|Customer support|Put a tenant in this model|
+|||||||
|Regional (2)|North America, EMEA, Japan|At rest, in the target region. Exceptions by service or feature|Operated by Microsoft. Microsoft datacenter personnel must pass a background check.|Microsoft, globally|Create the tenant in the sign-up experience. Choose the country or region for residency.|
+|Worldwide|Worldwide||Operated by Microsoft. Microsoft datacenter personnel must pass a background check.|Microsoft, globally|Create the tenant in the sign-up experience. Choose a country without a regional model.|
|Sovereign or national clouds|US government, China|At rest, in the target country or region. No exceptions.|Operated by a data custodian (1). Personnel are screened according to requirements.|Microsoft, country or region|Each national cloud instance has a sign-up experience.|
+
+**Table references**:
+
+(1) **Data custodians**: Data centers in the Worldwide region are operated by Microsoft. In China, Azure AD is operated through a partnership with [21Vianet](/microsoft-365/admin/services-in-china/services-in-china?redirectSourcePath=%252fen-us%252farticle%252fLearn-about-Office-365-operated-by-21Vianet-a8ab5061-3346-4da0-bb7c-5260822b53ae&view=o365-21vianet&viewFallbackFrom=o365-worldwide&preserve-view=true).
+(2) **Authentication data**: Tenants outside the national clouds have authentication information at rest in the continental United States.
+
+Learn more:
+
+* Power BI: [Azure Active Directory – Where is your data located?](https://aka.ms/aaddatamap)
+* [What is the Azure Active Directory architecture?](https://aka.ms/aadarch)
+* [Find the Azure geography that meets your needs](https://azure.microsoft.com/overview/datacenters/how-to-choose/)
+* [Microsoft Trust Center](https://www.microsoft.com/trustcenter/cloudservices/nationalcloud)
+
+## Data residency across Azure AD components
+
+In addition to authentication service data, Azure AD components and service data are stored on servers in the Azure AD instance's region.
+
+Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com/cloud-platform/azure-active-directory-features)
+
+> [!NOTE]
+> To understand the service data location for services such as Exchange Online or Skype for Business, refer to the corresponding service documentation.
+
+### Azure AD components and data storage location
+
+Data storage for Azure AD components includes authentication, identity, MFA, and others. In the following table, data includes End User Identifiable Information (EUII) and Customer Content (CC).
+
+|Azure AD component|Description|Data storage location|
+||||
+|Azure AD Authentication Service|This service is stateless. The data for authentication is in the Azure AD Core Store. It has no directory data. Azure AD Authentication Service generates log data in Azure storage, and in the data center where the service instance runs. When users attempt to authenticate using Azure AD, they're routed to an instance in the geographically nearest data center that is part of its Azure AD logical region. |In region|
+|Azure AD Identity and Access Management (IAM) Services|**User and management experiences**: The Azure AD management experience is stateless and has no directory data. It generates log and usage data stored in Azure Tables storage. The user experience is like the Azure portal. <br>**Identity management business logic and reporting services**: These services have locally cached data storage for groups and users. The services generate log and usage data that goes to Azure Tables storage, Azure SQL, and in Microsoft Elastic Search reporting services. |In region|
+|Azure AD Multi-Factor Authentication (MFA)|For details about MFA-operations data storage and retention, see [Data residency and customer data for Azure AD multifactor authentication](/azure/active-directory/authentication/concept-mfa-data-residency). Azure AD MFA logs the User Principal Name (UPN), voice-call telephone numbers, and SMS challenges. For challenges to mobile app modes, the service logs the UPN and a unique device token. Data centers in the North America region store Azure AD MFA, and the logs it creates.|North America|
+|Azure AD Domain Services|See regions where Azure AD Domain Services is published on [Products available by region](https://azure.microsoft.com/regions/services/). The service holds system metadata globally in Azure Tables, and it contains no personal data.|In region|
+|Azure AD Connect Health|Azure AD Connect Health generates alerts and reports in Azure Tables storage and blob storage.|In region|
+|Azure AD dynamic membership for groups, Azure AD self-service group management|Azure Tables storage holds dynamic membership rule definitions.|In region|
+|Azure AD Application Proxy|Azure AD Application Proxy stores metadata about the tenant, connector machines, and configuration data in Azure SQL.|In region|
+|Azure AD password reset |Azure AD password reset is a back-end service that uses Redis Cache to track session state. To learn more, see [Introduction to Redis](https://redis.io/docs/about/).|See the Introduction to Redis link in the center column.|
+|Azure AD password writeback in Azure AD Connect|During initial configuration, Azure AD Connect generates an asymmetric keypair, using the Rivest–Shamir–Adleman (RSA) cryptosystem. It then sends the public key to the self-service password reset (SSPR) cloud service, which performs two operations: </br></br>1. Creates two Azure Service Bus relays for the Azure AD Connect on-premises service to communicate securely with the SSPR service </br> 2. Generates an Advanced Encryption Standard (AES) key, K1 </br></br> The Azure Service Bus relay locations, corresponding listener keys, and a copy of the AES key (K1) go to Azure AD Connect in the response. Future communications between SSPR and Azure AD Connect occur over the new ServiceBus channel and are encrypted using SSL. </br> New password resets, submitted during operation, are encrypted with the RSA public key generated by the client during onboarding. The private key on the Azure AD Connect machine decrypts them, which prevents pipeline subsystems from accessing the plaintext password. </br> The AES key encrypts the message payload (encrypted passwords, more data, and metadata), which prevents malicious ServiceBus attackers from tampering with the payload, even with full access to the internal ServiceBus channel. </br> For password writeback, Azure AD Connect needs the following keys and data: </br></br> - The AES key (K1) that encrypts the reset payload, or change requests from the SSPR service to Azure AD Connect, via the ServiceBus pipeline </br> - The private key, from the asymmetric key pair that decrypts the passwords, in reset or change request payloads </br> - The ServiceBus listener keys </br></br> The AES key (K1) and the asymmetric keypair rotate a minimum of every 180 days, a duration you can change during certain onboarding or offboarding configuration events. An example is when a customer disables and re-enables password writeback, which might occur during component upgrade during service and maintenance. </br> The writeback keys and data stored in the Azure AD Connect database are encrypted by data protection application programming interfaces (DPAPI) (CALG_AES_256). The result is the master ADSync encryption key stored in the Windows Credential Vault in the context of the ADSync on-premises service account. The Windows Credential Vault supplies automatic secret re-encryption as the password for the service account changes. Resetting the service account password invalidates secrets in the Windows Credential Vault for the service account. Manual changes to a new service account might invalidate the stored secrets.</br> By default, the ADSync service runs in the context of a virtual service account. The account might be customized during installation to a least-privileged domain service account, a managed service account (MSA), or a group managed service account (gMSA). While virtual and managed service accounts have automatic password rotation, customers manage password rotation for a custom provisioned domain account. As noted, resetting the password causes loss of stored secrets. (An illustrative sketch of the envelope-encryption pattern described here appears after the table.) |In region|
+|Azure AD Device Registration Service |Azure AD Device Registration Service has computer and device lifecycle management in the directory, which enables scenarios such as device-state conditional access and mobile device management.|In region|
+|Azure AD provisioning|Azure AD provisioning creates, removes, and updates users in systems, such as software as a service (SaaS) applications. It manages user creation in Azure AD and on-premises AD from cloud HR sources, like Workday. The service stores its configuration in an Azure Cosmos DB, which stores the group membership data for the user directory it keeps. Cosmos DB replicates the database to multiple datacenters in the same region as the tenant, which isolates the data, according to the Azure AD cloud solution model. Replication creates high availability and multiple reading and writing endpoints. Cosmos DB has encryption on the database information, and the encryption keys are stored in the secrets storage for Microsoft.|In region|
+|Azure AD business-to-business (B2B) collaboration|Azure AD B2B collaboration has no directory data. Users and other directory objects in a B2B relationship, with another tenant, result in user data copied in other tenants, which might have data residency implications.|In region|
+|Azure AD Identity Protection|Azure AD Identity Protection uses real-time user log-in data, with multiple signals from company and industry sources, to feed its machine-learning systems that detect anomalous logins. Personal data is scrubbed from real-time log-in data before it's passed to the machine learning system. The remaining log-in data identifies potentially risky usernames and logins. After analysis, the data goes to Microsoft reporting systems. Risky logins and usernames appear in reporting for Administrators.|In region|
+|Azure AD managed identities for Azure resources|With Azure AD managed identities for Azure resources, systems can authenticate to Azure services without storing credentials. Rather than use username and password, managed identities authenticate to Azure services with certificates. The service writes certificates it issues in Azure Cosmos DB in the East US region, which fails over to another region as needed. Azure Cosmos DB geo-redundancy occurs by global data replication. Database replication puts a read-only copy in each region where Azure AD managed identities runs. To learn more, see [Azure services that can use managed identities to access other services](/azure/active-directory/managed-identities-azure-resources/managed-identities-status#azure-services-that-support-managed-identities-for-azure-resources). Microsoft isolates each Cosmos DB instance in an Azure AD cloud solution model. </br> The resource provider, such as the virtual machine (VM) host, stores the certificate for authentication and identity flows with other Azure services. The service stores its master key to access Azure Cosmos DB in a datacenter secrets management service. Azure Key Vault stores the master encryption keys.|In region|
+|Azure Active Directory business-to-consumer (B2C)|Azure Active Directory B2C is an identity management service to customize and manage how customers sign up, sign in, and manage their profiles when using applications. B2C uses the Core Store to keep user identity information. The Core Store database follows known storage, replication, deletion, and data-residency rules. B2C uses an Azure Cosmos DB system to store service policies and secrets. Cosmos DB has encryption and replication services on database information. Its encryption key is stored in the secrets storage for Microsoft. Microsoft isolates Cosmos DB instances in an Azure AD cloud solution model.|Customer-selectable region|
+
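The password writeback row above describes an envelope-encryption pattern: an RSA public key protects the password itself, while a shared AES key (K1) protects the payload that crosses the Service Bus relay. The following PowerShell/.NET fragment is a minimal sketch of that general pattern only, assuming nothing about Azure AD Connect's actual implementation; every key, value, and variable name is hypothetical.

```powershell
# Illustrative sketch only - not Azure AD Connect's actual code. All values are hypothetical.
# Pattern: an RSA public key seals the password; a shared AES key (K1) seals the payload.

# On-premises side: generate the asymmetric RSA key pair during onboarding.
$rsa = [System.Security.Cryptography.RSA]::Create(2048)

# Cloud-service side: generate the symmetric AES key (K1) that both parties share.
$aes = [System.Security.Cryptography.Aes]::Create()

# A password reset is first encrypted with the RSA public key...
$plaintext = [System.Text.Encoding]::UTF8.GetBytes("P@ssw0rd-example")
$rsaSealed = $rsa.Encrypt($plaintext, [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA256)

# ...then the whole payload is encrypted with K1 before crossing the relay.
$payload = $aes.CreateEncryptor().TransformFinalBlock($rsaSealed, 0, $rsaSealed.Length)

# Receiving side: remove the AES layer with K1, then recover the password with the RSA private key.
$unwrapped = $aes.CreateDecryptor().TransformFinalBlock($payload, 0, $payload.Length)
$recovered = $rsa.Decrypt($unwrapped, [System.Security.Cryptography.RSAEncryptionPadding]::OaepSHA256)
[System.Text.Encoding]::UTF8.GetString($recovered)   # -> P@ssw0rd-example
```

As the table row notes, the real service also rotates K1 and the RSA pair on a schedule and protects the stored keys with DPAPI.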
+## Related resources
+
+For more information on data residency in Microsoft Cloud offerings, see the following articles:
+
+* [Azure Active Directory – Where is your data located?](https://aka.ms/aaddatamap)
+* [Data Residency in Azure | Microsoft Azure](https://azure.microsoft.com/explore/global-infrastructure/data-residency/#overview)
+* [Microsoft 365 data locations - Microsoft 365 Enterprise](/microsoft-365/enterprise/o365-data-locations?view=o365-worldwide&preserve-view=true)
+* [Microsoft Privacy - Where is Your Data Located?](https://www.microsoft.com/trust-center/privacy/data-location?rtc=1)
+* Download PDF: [Privacy considerations in the cloud](https://go.microsoft.com/fwlink/p/?LinkID=2051117&clcid=0x409&culture=en-us&country=US)
active-directory Resilience B2b Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-b2b-authentication.md
Title: Build resilience in external user authentication with Azure Active Direct
description: A guide for IT admins and architects to building resilient authentication for external users -+
active-directory Sign Up Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sign-up-organization.md
Title: Sign up your organization - Azure Active Directory | Microsoft Docs
description: Instructions about how to sign up your organization to use Azure and Azure Active Directory. -+
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
Title: Default user permissions - Azure Active Directory | Microsoft Docs
description: Learn about the user permissions available in Azure Active Directory. -+
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
+ # How to synchronize attributes for Lifecycle workflows Workflows contain specific tasks that can run automatically against users based on the specified execution conditions. Automatic workflow scheduling is supported based on the employeeHireDate and employeeLeaveDateTime user attributes in Azure AD.
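The scheduling attributes named above must be populated before workflows can trigger on them. The following Microsoft Graph PowerShell commands are a minimal sketch of one way to set them for a test user, assuming placeholder user ID, dates, and scopes, and assuming the beta users endpoint for employeeLeaveDateTime.

```powershell
# Sketch with placeholder values: set the attributes that Lifecycle Workflows scheduling relies on.
Connect-MgGraph -Scopes "User.ReadWrite.All", "User-LifeCycleInfo.ReadWrite.All"

$userId = "11111111-1111-1111-1111-111111111111"   # hypothetical user object ID

# employeeHireDate is exposed as a parameter on Update-MgUser.
Update-MgUser -UserId $userId -EmployeeHireDate ([datetime]"2023-01-09T08:00:00Z")

# In this sketch, employeeLeaveDateTime is patched directly against the Graph users endpoint.
Invoke-MgGraphRequest -Method PATCH -Uri "https://graph.microsoft.com/beta/users/$userId" `
    -Body @{ employeeLeaveDateTime = "2023-06-30T17:00:00Z" }
```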
active-directory Trigger Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md
Title: Trigger Logic Apps based on custom task extensions
description: Trigger Logic Apps based on custom task extensions -+
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
Title: 'Lifecycle workflows FAQs - Azure AD (preview)'
description: Frequently asked questions about Lifecycle workflows (preview). -+
Yes, key user properties like employeeHireDate and employeeType are supported fo
![Screenshot showing an example of how mapping is done in a Lifecycle Workflow.](./media/workflows-faqs/workflows-mapping.png)
+For more information on syncing employee attributes in Lifecycle Workflows, see [How to synchronize attributes for Lifecycle workflows](how-to-lifecycle-workflow-sync-attributes.md).
+ ### How do I see more details and parameters of tasks and the attributes that are being updated? Some tasks do update existing attributes; however, we don't currently share those specific details. Because these tasks update attributes related to other Azure AD features, you can find that information in those docs. For temporary access pass, we're writing to the appropriate attributes listed [here](/graph/api/resources/temporaryaccesspassauthenticationmethod).
active-directory Four Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/four-steps.md
Title: Four steps to a strong identity foundation - Azure AD
description: This topic describes four steps hybrid identity customers can take to build a strong identity foundation. -+ na
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Previously updated : 09/06/2022 Last updated : 11/22/2022
+zone_pivot_groups: enterprise-apps-all
#customer intent: As an admin, I want to manage user assignment for an app in Azure Active Directory using PowerShell
This article shows you how to assign users and groups to an enterprise application in Azure Active Directory (Azure AD) using PowerShell. When you assign a user to an application, the application appears in the user's [My Apps](https://myapps.microsoft.com/) portal for easy access. If the application exposes app roles, you can also assign a specific app role to the user.
-When you assign a group to an application, only users in the group will have access. The assignment does not cascade to nested groups.
+When you assign a group to an application, only users in the group will have access. The assignment doesn't cascade to nested groups.
-Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups only. Nested group memberships and Microsoft 365 groups are not currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
+Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups only. Nested group memberships and Microsoft 365 groups aren't currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
-For greater control, certain types of enterprise applications can be configured to require user assignment. See [Manage access to an application](what-is-access-management.md#requiring-user-assignment-for-an-app) for more information on requiring user assignment for an app.
+For greater control, certain types of enterprise applications can be configured to require user assignment. For more information on requiring user assignment for an app, see [Manage access to an application](what-is-access-management.md#requiring-user-assignment-for-an-app).
## Prerequisites
-To assign users to an app using PowerShell, you need:
+To assign users to an enterprise application, you need:
-- An Azure account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure AD account with an active subscription. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.-- If you have not yet installed the AzureAD module (use the command `Install-Module -Name AzureAD`). If you're prompted to install a NuGet module or the new Azure Active Directory V2 PowerShell module, type Y and press ENTER. - Azure Active Directory Premium P1 or P2 for group-based assignment. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
-## Assign users, and groups, to an app using PowerShell
++
+To assign a user or group account to an enterprise application:
+
+1. In the [Azure Active Directory Admin Center](https://aad.portal.azure.com), select **Enterprise applications**, and then search for and select the application to which you want to assign the user or group account.
+1. In the left pane, select **Users and groups**, and then select **Add user/group**.
+
+ :::image type="content" source="media/add-application-portal-assign-users/assign-user.png" alt-text="Assign user account to an application in your Azure AD tenant.":::
+
+1. On the **Add Assignment** pane, select **None Selected** under **Users and groups**.
+1. Search for and select the user or group that you want to assign to the application. For example, `contosouser1@contoso.com` or `contosoteam1@contoso.com`.
+1. Select **Select**.
+1. On the **Add Assignment** pane, select **Assign** at the bottom of the pane.
++ 1. Open an elevated Windows PowerShell command prompt.
-1. Run `Connect-AzureAD` and sign in with a Global Admin user account.
+1. Run `Connect-AzureAD` and sign in with a Global Admin user account.
1. Use the following script to assign a user and role to an application: ```powershell
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId -ResourceId $sp.ObjectId -Id $appRole.Id ```
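The digest truncates the script around the command above; the following is a minimal, self-contained sketch of the same AzureAD-module assignment, assuming placeholder user, application, and role names rather than the article's exact values.

```powershell
# Sketch with placeholder values: assign a user to an app role using the AzureAD module.
Connect-AzureAD

# Look up the user and the enterprise application's service principal.
$user = Get-AzureADUser -ObjectId "brittasimon@contoso.com"
$sp   = Get-AzureADServicePrincipal -Filter "displayName eq 'Microsoft Workplace Analytics'"

# Pick the app role to grant (assumes the app exposes a role named 'Analyst').
$appRole = $sp.AppRoles | Where-Object { $_.DisplayName -eq "Analyst" }

# Create the assignment, mirroring the command shown above.
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId -Id $appRole.Id
```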
-## Unassign users, and groups, from an app using PowerShell
+## Unassign users, and groups, from an application
1. Open an elevated Windows PowerShell command prompt.
-1. Run `Connect-AzureAD` and sign in with a Global Admin user account. Use the following script to remove a user and role from an application:
+1. Run `Connect-AzureAD` and sign in with a Global Admin user account. Use the following script to remove a user and role from an application.
```powershell # Store the proper parameters
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
$assignments | Select * #To remove the App role assignment run the following command.
- Remove-AzureADServiceAppRoleAssignment -ObjectId $spo.ObjectId -AppRoleAssignmentId $assignments[assignment #].ObjectId
+ Remove-AzureADServiceAppRoleAssignment -ObjectId $spo.ObjectId -AppRoleAssignmentId $assignments[assignment number].ObjectId
``` ## Remove all users who are assigned to the application
+Use the following script to remove all users and groups assigned to the application.
+ ```powershell #Retrieve the service principal object ID. $app_name = "<Your App's display name>"
$assignments | ForEach-Object {
} } ```++
+1. Open an elevated Windows PowerShell command prompt.
+1. Run `Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All","Directory.Read.All","Directory.ReadWrite.All"` and sign in with a Global Admin user account.
+1. Use the following script to assign a user and role to an application:
+
+```powershell
+# Assign the values to the variables
+
+$userId = "<Your user's ID>"
+$app_name = "<Your App's display name>"
+$app_role_name = "<App role display name>"
+$sp = Get-MgServicePrincipal -Filter "displayName eq '$app_name'"
+
+# Build the parameters for the app role assignment
+
+$params = @{
+ "PrincipalId" =$userId
+ "ResourceId" =$sp.Id
+ "AppRoleId" =($sp.AppRoles | Where-Object { $_.DisplayName -eq $app_role_name }).Id
+ }
+
+# Assign the user to the app role
+
+New-MgUserAppRoleAssignment -UserId $userId -BodyParameter $params |
+ Format-List Id, AppRoleId, CreationTime, PrincipalDisplayName,
+ PrincipalId, PrincipalType, ResourceDisplayName, ResourceId
+```
+
+## Unassign users, and groups, from an application
+
+1. Open an elevated Windows PowerShell command prompt.
+1. Run `Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All","Directory.Read.All","Directory.ReadWrite.All"` and sign in with a Global Admin user account. Use the following script to remove a user and role from an application.
+```powershell
+# Get the user and the service principal
+
+$user = Get-MgUser -UserId <userid>
+$spo = Get-MgServicePrincipal -ServicePrincipalId <ServicePrincipalId>
+
+# Get the Id of the role assignment
+
+$assignments = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $spo.Id | Where {$_.PrincipalDisplayName -eq $user.DisplayName}
+
+# if you run the following, it will show you the list of users assigned to the application
+
+$assignments | Select *
+
+# To remove the App role assignment run the following command.
+
+Remove-MgServicePrincipalAppRoleAssignedTo -AppRoleAssignmentId '<AppRoleAssignment-id>' -ServicePrincipalId $spo.Id
+```
+
+## Remove all users and groups assigned to the application
+
+Use the following script to remove all users and groups assigned to the application.
+
+```powershell
+$assignments | ForEach-Object {
+ if ($_.PrincipalType -in ("user", "Group")) {
+ Remove-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $Sp.Id -AppRoleAssignmentId $_.Id }
+}
+```
+++
+1. To assign users and groups to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+ You'll need to consent to the following permissions:
+
+ `Application.Read.All`, `Application.ReadWrite.All`, `Directory.Read.All`, `Directory.ReadWrite.All`.
+
+ To grant an app role assignment, you need three identifiers:
+
+ - `principalId`: The ID of the user or group to which you're assigning the app role.
+ - `resourceId`: The ID of the resource servicePrincipal that has defined the app role.
+ - `appRoleId`: The ID of the appRole (defined on the resource service principal) to assign to a user or group.
+
+1. Get the enterprise application. Filter by DisplayName.
+
+ ```http
+    GET /servicePrincipals?$filter=displayName eq '{appDisplayName}'
+ ```
+ Record the following values from the response body:
+
+ - Object ID of the enterprise application
+ - appRoleId that you'll assign to the user. If the application doesn't expose any roles, the user will be assigned the default access role.
+
+1. Get the user by filtering by the user's principal name. Record the object ID of the user.
+
+ ```http
+ GET /users/{userPrincipalName}
+ ```
+1. Assign the user to the application.
+ ```http
+ POST /servicePrincipals/resource-servicePrincipal-id/appRoleAssignedTo
+
+ {
+ "principalId": "33ad69f9-da99-4bed-acd0-3f24235cb296",
+ "resourceId": "9028d19c-26a9-4809-8e3f-20ff73e2d75e",
+ "appRoleId": "ef7437e6-4f94-4a0a-a110-a439eb2aa8f7"
+ }
+ ```
+ In the example, both the resource-servicePrincipal-id and resourceId represent the enterprise application.
+
+## Unassign users, and groups, from an application
+To unassign user and groups from the application, run the following query.
+
+1. Get the enterprise application. Filter by DisplayName.
+
+ ```http
+    GET /servicePrincipals?$filter=displayName eq '{appDisplayName}'
+ ```
+1. Get the list of appRoleAssignments for the application.
+
+ ```http
+ GET /servicePrincipals/{id}/appRoleAssignedTo
+ ```
+1. Remove the appRoleAssignments by specifying the appRoleAssignment ID.
+
+ ```http
+ DELETE /servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ ```
## Next steps -- [Create and assign a user account from the Azure portal](add-application-portal-assign-users.md)-- [Manage access to apps](what-is-access-management.md).
+- [Assign custom security attributes](custom-security-attributes-apps.md)
+- [Disable user sign-in](disable-user-sign-in-portal.md).
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Previously updated : 11/07/2022 Last updated : 11/22/2022
-zone_pivot_groups: enterprise-apps-minus-graph
+zone_pivot_groups: enterprise-apps-all
#customer intent: As an admin, I want to review permissions granted to applications so that I can restrict suspicious or over privileged applications.
To review permissions granted to applications, you need:
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator. - A Service principal owner who isn't an administrator is able to invalidate refresh tokens.
-## Review permissions
- :::zone pivot="portal"
+## Review permissions
+ You can access the Azure AD portal to get contextual PowerShell scripts to perform the actions. To review application permissions:
Each option generates PowerShell scripts that enable you to control user access
:::zone pivot="aad-powershell"
-## Revoke permissions
-- Using the following Azure AD PowerShell script revokes all permissions granted to an application. ```powershell
-Connect-AzureAD
+# Requires a role that can manage the application, such as Global Administrator or Cloud Application Administrator
+Connect-AzureAD
# Get Service Principal using objectId $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
$spApplicationPermissions | ForEach-Object {
## Invalidate the refresh tokens
+Remove app role assignments for users or groups of the application by using the following scripts.
+ ```powershell
-Connect-AzureAD
+# Requires a role that can manage the application, such as Global Administrator or Cloud Application Administrator
+Connect-AzureAD
# Get Service Principal using objectId $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
$assignments | ForEach-Object {
} ``` :::zone-end+ :::zone pivot="ms-powershell" Using the following Microsoft Graph PowerShell script revokes all permissions granted to an application. ```powershell
-Connect-MgGraph
+Connect-MgGraph -Scopes "Application.Read.All", "Application.ReadWrite.All", "Directory.Read.All", "Directory.ReadWrite.All"
# Get Service Principal using objectId $sp = Get-MgServicePrincipal -ServicePrincipalID "$ServicePrincipalID"
$spOAuth2PermissionsGrants= Get-MgOauth2PermissionGrant -All| Where-Object { $_.
$spOauth2PermissionsGrants |ForEach-Object { Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id }+
+# Get all application permissions for the service principal
+$spApplicationPermissions = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $Sp.Id -All | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
+
+# Remove all application permissions
+$spApplicationPermissions | ForEach-Object {
+Remove-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $Sp.Id -AppRoleAssignmentId $_.Id
+ }
``` ## Invalidate the refresh tokens
+Remove app role assignments for users or groups of the application by using the following scripts.
+ ```powershell
-Connect-MgGraph
+Connect-MgGraph -Scopes "Application.Read.All", "Application.ReadWrite.All", "Directory.Read.All", "Directory.ReadWrite.All"
# Get Service Principal using objectId $sp = Get-MgServicePrincipal -ServicePrincipalID "$ServicePrincipalID"
$spApplicationPermissions = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrin
:::zone-end +
+To review permissions, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+You'll need to consent to the following permissions:
+
+`Application.Read.All`, `Application.ReadWrite.All`, `Directory.Read.All`, `Directory.ReadWrite.All`.
+
+### Delegated permissions
+
+Run the following queries to review delegated permissions granted to an application.
+
+1. Get Service Principal using objectID
+
+ ```http
+ GET /servicePrincipals/{id}
+ ```
+
+ Example:
+
+ ```http
+ GET /servicePrincipals/57443554-98f5-4435-9002-852986eea510
+ ```
+
+1. Get all delegated permissions for the service principal
+
+ ```http
+ GET /servicePrincipals/{id}/oauth2PermissionGrants
+ ```
+1. Remove delegated permissions using oAuth2PermissionGrants ID.
+
+ ```http
+ DELETE /oAuth2PermissionGrants/{id}
+ ```
+
+### Application permissions
+
+Run the following queries to review application permissions granted to an application.
+
+1. Get all application permissions for the service principal
+
+ ```http
+ GET /servicePrincipals/{servicePrincipal-id}/appRoleAssignments
+ ```
+1. Remove application permissions using appRoleAssignment ID
+
+ ```http
+ DELETE /servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ ```
+
+## Invalidate the refresh tokens
+
+Run the following queries to remove appRoleAssignments of users or groups to the application.
+
+1. Get Service Principal using objectID.
+
+ ```http
+ GET /servicePrincipals/{id}
+ ```
+ Example:
+
+ ```http
+ GET /servicePrincipals/57443554-98f5-4435-9002-852986eea510
+ ```
+1. Get Azure AD App role assignments using objectID of the Service Principal.
+
+ ```http
+ GET /servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo
+ ```
+1. Revoke refresh token for users and groups assigned to the application using appRoleAssignment ID.
+
+ ```http
+ DELETE /servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ ```
+ > [!NOTE] > Revoking the current granted permission won't stop users from re-consenting to the application. If you want to block users from consenting, read [Configure how users consent to applications](configure-user-consent.md). ## Next steps -- [Configure admin consent workflow](configure-admin-consent-workflow.md)
+- [Configure user consent setting](configure-user-consent.md)
+- [Configure admin consent workflow](configure-admin-consent-workflow.md)
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
Title: Configure Verified ID by AU10TIX as your Identity Verification Partner
description: This article shows you the steps you need to follow to configure AU10TIX as your identity verification partner -+
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS)
Previously updated : 10/28/2022 Last updated : 11/23/2022 # Configure an AKS cluster
By using `containerd` for AKS nodes, pod startup latency improves and node resou
* `Containerd` sets up logging using the standardized `cri` logging format (which is different from what you currently get from docker's json driver). Your logging solution needs to support the `cri` logging format (like [Azure Monitor for Containers](../azure-monitor/containers/container-insights-enable-new-cluster.md)) * You can no longer access the docker engine, `/var/run/docker.sock`, or use Docker-in-Docker (DinD).
- * If you currently extract application logs or monitoring data from Docker Engine, use [Container insights](../azure-monitor/containers/container-insights-enable-new-cluster.md) instead. Additionally AKS doesn't support running any out of band commands on the agent nodes that could cause instability.
- * Building images and directly using the Docker engine using the methods above isn't recommended. Kubernetes isn't fully aware of those consumed resources, and those approaches present numerous issues detailed [here](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) and [here](https://securityboulevard.com/2018/05/escaping-the-whale-things-you-probably-shouldnt-do-with-docker-part-1/), for example.
+ * If you currently extract application logs or monitoring data from Docker engine, use [Container insights](../azure-monitor/containers/container-insights-enable-new-cluster.md) instead. AKS doesn't support running any out of band commands on the agent nodes that could cause instability.
+ * Building images and directly using the Docker engine using the methods above isn't recommended. Kubernetes isn't fully aware of those consumed resources, and those methods present numerous issues as described [here](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) and [here](https://securityboulevard.com/2018/05/escaping-the-whale-things-you-probably-shouldnt-do-with-docker-part-1/).
-* Building images - You can continue to use your current docker build workflow as normal, unless you're building images inside your AKS cluster. In this case, consider switching to the recommended approach for building images using [ACR Tasks](../container-registry/container-registry-quickstart-task-cli.md), or a more secure in-cluster option like [docker buildx](https://github.com/docker/buildx).
+* Building images - You can continue to use your current Docker build workflow as normal, unless you're building images inside your AKS cluster. In this case, consider switching to the recommended approach for building images using [ACR Tasks](../container-registry/container-registry-quickstart-task-cli.md), or a more secure in-cluster option like [Docker Buildx](https://github.com/docker/buildx).
## Generation 2 virtual machines
Additionally not all VM images support Gen2, on AKS Gen2 VMs will use the new [A
## Default OS disk sizing
-By default, when creating a new cluster or adding a new node pool to an existing cluster, the disk size is determined by the number for vCPUs, which is based on the VM SKU. The default values are shown in the following table:
+By default, when creating a new cluster or adding a new node pool to an existing cluster, the OS disk size is determined by the number of vCPUs. The number of vCPUs is based on the VM SKU, and the default values are shown in the following table:
|VM SKU Cores (vCPUs)| Default OS Disk Tier | Provisioned IOPS | Provisioned Throughput (Mbps) | |--|--|--|--|
By default, when creating a new cluster or adding a new node pool to an existing
| 64+ | P30/1024G | 5000 | 200 | > [!IMPORTANT]
-> Default OS disk sizing is only used on new clusters or node pools when Ephemeral OS disks are not supported and a default OS disk size isn't specified. The default OS disk size may impact the performance or cost of your cluster, but you can change the sizing of the OS disk at any time after cluster or node pool creation. This default disk sizing affects clusters or node pools created in July 2022 or later.
+> Default OS disk sizing is only used on new clusters or node pools when ephemeral OS disks are not supported and a default OS disk size isn't specified. The default OS disk size may impact the performance or cost of your cluster, and you cannot change the OS disk size after cluster or node pool creation. This default disk sizing affects clusters or node pools created in July 2022 or later.
## Ephemeral OS By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss if the VM needs to be relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write latency.
-By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. This provides lower read/write latency, along with faster node scaling and cluster upgrades.
+By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. This configuration provides lower read/write latency, along with faster node scaling and cluster upgrades.
Like the temporary disk, an ephemeral OS disk is included in the price of the virtual machine, so you don't incur more storage costs. > [!IMPORTANT]
->When you don't explicitly request managed disks for the OS, AKS will default to ephemeral OS if possible for a given node pool configuration.
+> When you don't explicitly request managed disks for the OS, AKS will default to ephemeral OS if possible for a given node pool configuration.
-If you chose to use an ephemeral OS, the OS disk must fit in the VM cache. The sizes for VM cache are available in the [Azure documentation](../virtual-machines/dv3-dsv3-series.md) in parentheses next to IO throughput ("cache size in GiB").
+If you choose to use an ephemeral OS, the OS disk must fit in the VM cache. The sizes for VM cache are available in the [Azure VM documentation](../virtual-machines/dv3-dsv3-series.md) in parentheses next to IO throughput ("cache size in GiB").
-If you chose to use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB, this VM size supports ephemeral OS but only has 86 GB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you'll receive a validation error.
+If you choose to use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB, note that this VM size supports ephemeral OS but only has 86 GB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you'll receive a validation error.
-If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60GB OS disk, this configuration would default to ephemeral OS: the requested size of 60GB is smaller than the maximum cache size of 86 GB.
+If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60 GB OS disk, this configuration would default to ephemeral OS. The requested size of 60 GB is smaller than the maximum cache size of 86 GB.
If you select the [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) SKU with 100 GB OS disk, this VM size supports ephemeral OS and has 200 GB of cache space. If you don't specify the OS disk type, the node pool would receive ephemeral OS by default.
-The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. Let's assume to use the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB as an example. This VM size supports ephemeral OS disks but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if you don't explicitly specify it. If you do request an ephemeral OS disk, you'll receive a validation error.
+The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. Let's assume to use the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB as an example. This VM size supports ephemeral OS disks, but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if you don't explicitly specify it. If you do request an ephemeral OS disk, you'll receive a validation error.
If you request the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration would default to ephemeral OS disks. The requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
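A minimal sketch of the sizing rules above, assuming placeholder cluster, resource group, and node pool names: a node pool can explicitly request an ephemeral OS disk that fits the VM cache.

```powershell
# Sketch with placeholder names: Standard_DS3_v2 has a 172 GB cache, so a 60 GB ephemeral OS disk fits.
az aks nodepool add `
    --resource-group myResourceGroup `
    --cluster-name myAKSCluster `
    --name ephnodepool `
    --node-vm-size Standard_DS3_v2 `
    --node-osdisk-type Ephemeral `
    --node-osdisk-size 60
```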
kubectl get pods --all-namespaces
## Custom resource group name
-When you deploy an Azure Kubernetes Service cluster in Azure, a second resource group gets created for the worker nodes. By default, AKS will name the node resource group `MC_resourcegroupname_clustername_location`, but you can also provide your own name.
+When you deploy an Azure Kubernetes Service cluster in Azure, a second resource group is created for the worker nodes. By default, AKS names the node resource group `MC_resourcegroupname_clustername_location`, but you can also specify a custom name.
-To specify your own resource group name, install the aks-preview Azure CLI extension version 0.3.2 or later. Using the Azure CLI, use the `--node-resource-group` parameter of the `az aks create` command to specify a custom name for the resource group. If you use an Azure Resource Manager template to deploy an AKS cluster, you can define the resource group name by using the `nodeResourceGroup` property.
+To specify a custom resource group name, install the `aks-preview` Azure CLI extension version 0.3.2 or later. When using the Azure CLI, include the `--node-resource-group` parameter of the `az aks create` command to specify a custom name for the resource group. If you use an Azure Resource Manager template to deploy an AKS cluster, you can define the resource group name by using the `nodeResourceGroup` property.
```azurecli az aks create --name myAKSCluster --resource-group myResourceGroup --node-resource-group myNodeResourceGroup ```
-The secondary resource group is automatically created by the Azure resource provider in your own subscription. You can only specify the custom resource group name when the cluster is created.
+The secondary resource group is automatically created by the Azure resource provider in your own subscription. You can only specify the custom resource group name when the cluster is created.
As you work with the node resource group, keep in mind that you can't:
## Node Restriction (Preview)
-The [Node Restriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller limits the Node and Pod objects a kubelet can modify. Node Restriction is on by default in AKS 1.24+ clusters. If you're using an older version, use the below commands to create a cluster with Node Restriction or update an existing cluster to add Node Restriction.
+The [Node Restriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller limits the Node and Pod objects a kubelet can modify. Node Restriction is on by default in AKS 1.24+ clusters. If you're using an older version, use the following commands to create a cluster with Node Restriction, or to add Node Restriction to an existing cluster.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
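
As a minimal sketch, the commands below use the `--enable-node-restriction` flag from the `aks-preview` Azure CLI extension; the cluster and resource group names are placeholders.

```azurecli
# Create a new cluster with Node Restriction enabled
az aks create -n myAKSCluster -g myResourceGroup --enable-node-restriction

# Enable Node Restriction on an existing cluster
az aks update -n myAKSCluster -g myResourceGroup --enable-node-restriction
```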
To remove Node Restriction from a cluster, run the following command:
az aks update -n aks -g myResourceGroup --disable-node-restriction ```
-## OIDC Issuer
+## OIDC Issuer
-This enables an OIDC Issuer URL of the provider which allows the API server to discover public signing keys.
+You can enable an OIDC Issuer URL of the provider, which allows the API server to discover public signing keys.
> [!WARNING] > Enabling or disabling the OIDC Issuer changes the current service account token issuer to a new value, which can cause downtime and restarts the API server. If the application pods using a service token remain in a failed state after you enable or disable the OIDC Issuer, we recommend you manually restart the pods.
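
As a sketch of enabling the issuer and retrieving its URL (the `--enable-oidc-issuer` flag and the `oidcIssuerProfile.issuerUrl` property are assumed here; the cluster and resource group names are placeholders):

```azurecli
# Enable the OIDC Issuer on an existing cluster
az aks update -n myAKSCluster -g myResourceGroup --enable-oidc-issuer

# Show the OIDC Issuer URL
az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -o tsv
```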
To rotate the OIDC key, perform the following command. Replace the default value
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup ```
-> [!Important]
+> [!IMPORTANT]
> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid. ## Next steps - Learn how to [upgrade the node images](node-image-upgrade.md) in your cluster.
+- Review [Baseline architecture for an Azure Kubernetes Service (AKS) cluster][baseline-reference-architecture-aks] to learn about our recommended baseline infrastructure architecture.
- See [Upgrade an Azure Kubernetes Service (AKS) cluster](upgrade-cluster.md) to learn how to upgrade your cluster to the latest version of Kubernetes. - Read more about [`containerd` and Kubernetes](https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/) - See the list of [Frequently asked questions about AKS](faq.md) to find answers to some common AKS questions.
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
[aks-add-np-containerd]: ./learn/quick-windows-container-deploy-cli.md#add-a-windows-server-node-pool-with-containerd [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-update]: /cli/azure/aks#az-aks-update
+[baseline-reference-architecture-aks]: /azure/architecture/reference-architectures/containers/aks/baseline-aks
aks Dapr Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md
Title: Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS)
-description: Learn how to migrate from Dapr OSS to the Dapr extension for AKS
+description: Learn how to migrate your managed clusters from Dapr OSS to the Dapr extension for AKS
Previously updated : 07/21/2022 Last updated : 11/21/2022 # Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS)
-You've installed and configured Dapr OSS on your Kubernetes cluster and want to migrate to the Dapr extension on AKS. Before you can successfully migrate to the Dapr extension, you need to fully remove Dapr OSS from your AKS cluster. In this guide, you will migrate from Dapr OSS by:
+You've installed and configured Dapr OSS on your Kubernetes cluster and want to migrate to the Dapr extension on AKS. In this guide, you'll learn how the Dapr extension migrates your managed clusters from Dapr OSS by either:
-> [!div class="checklist"]
-> - Uninstalling Dapr, including CRDs and the `dapr-system` namespace
-> - Installing Dapr via the Dapr extension for AKS
-> - Applying your components
-> - Restarting your applications that use Dapr
+- Checking for an existing Dapr installation via CLI prompts (default method), or
+- Using the Helm release name and namespace configuration settings to manually check for an existing Dapr installation.
-> [!NOTE]
-> Expect downtime of approximately 10 minutes while migrating to Dapr extension for AKS. Downtime may take longer depending on varying factors. During this downtime, no Dapr functionality should be expected to run.
+This check allows the Dapr extension to reuse the existing Kubernetes resources from your previous installation and start managing them.
-## Uninstall Dapr
+## Check for an existing Dapr installation
-#### [Dapr CLI](#tab/cli)
-
-1. Run the following command to uninstall Dapr and all CRDs:
+The Dapr extension, by default, checks for existing Dapr installations when you run the `az k8s-extension create` command. To list the details of your current Dapr installation, run the following command and save the Dapr release name and namespace:
```bash
-dapr uninstall -k ΓÇô-all
+helm list -A
```
-2. Uninstall the Dapr namespace:
+When [installing the extension][dapr-create], you'll receive a prompt asking if Dapr is already installed:
```bash
-kubectl delete namespace dapr-system
+Is Dapr already installed in the cluster? (y/N): y
```
-> [!NOTE]
-> `dapr-system` is the default namespace installed with `dapr init -k`. If you created a custom namespace, replace `dapr-system` with your namespace.
-
-#### [Helm](#tab/helm)
-
-1. Run the following command to uninstall Dapr:
+If Dapr is already installed, enter the Helm release name and namespace (from `helm list -A`) when prompted:
```bash
-helm uninstall dapr -n dapr-system
+Enter the Helm release name for Dapr, or press Enter to use the default name [dapr]:
+Enter the namespace where Dapr is installed, or press Enter to use the default namespace [dapr-system]:
```
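
For reference, a minimal create command that triggers this interactive check might look like the following sketch; the cluster and resource group names are placeholders.

```azurecli
az k8s-extension create --cluster-type managedClusters \
--cluster-name myAKSCluster \
--resource-group myResourceGroup \
--name dapr \
--extension-type Microsoft.Dapr
```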
-2. Uninstall CRDs:
+## Configure the Dapr check using `--configuration-settings`
-```bash
-kubectl delete crd components.dapr.io
-kubectl delete crd configurations.dapr.io
-kubectl delete crd subscriptions.dapr.io
-kubectl delete crd resiliencies.dapr.io
-```
+Alternatively, when creating the Dapr extension, you can configure these settings via `--configuration-settings`. This method is useful when you're automating the installation via Bash scripts, CI pipelines, and so on.
-3. Uninstall the Dapr namespace:
+If you don't have Dapr already installed on your cluster, set `skipExistingDaprCheck` to `true`:
-```bash
-kubectl delete namespace dapr-system
+```azurecli-interactive
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKScluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--configuration-settings "skipExistingDaprCheck=true"
```
-> [!NOTE]
-> `dapr-system` is the default namespace while doing a Helm install. If you created a custom namespace (`helm install dapr dapr/dapr --namespace <my-namespace>`), replace `dapr-system` with your namespace.
---
-## Register the `KubernetesConfiguration` service provider
-
-If you have not previously used cluster extensions, you may need to register the service provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
+If Dapr exists on your cluster, set the Helm release name and namespace (from `helm list -A`) via `--configuration-settings`:
```azurecli-interactive
-az provider list --query "[?contains(namespace,'Microsoft.KubernetesConfiguration')]" -o table
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKScluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--configuration-settings "existingDaprReleaseName=dapr" \
+--configuration-settings "existingDaprReleaseNamespace=dapr-system"
```
-The *Microsoft.KubernetesConfiguration* provider should report as *Registered*, as shown in the following example output:
+## Update HA mode or placement service settings
-```output
-Namespace RegistrationState RegistrationPolicy
- - --
-Microsoft.KubernetesConfiguration Registered RegistrationRequired
-```
+When you install the Dapr extension on top of an existing Dapr installation, you'll see the following prompt:
-If the provider shows as *NotRegistered*, register the provider using the [az provider register][az-provider-register] as shown in the following example:
+> ```The extension will be installed on your existing Dapr installation. Note, if you have updated the default values for global.ha.* or dapr_placement.* in your existing Dapr installation, you must provide them in the configuration settings. Failing to do so will result in an error, since Helm upgrade will try to modify the StatefulSet. See <link> for more information.```
-```azurecli-interactive
-az provider register --namespace Microsoft.KubernetesConfiguration
-```
+Kubernetes only allows a limited set of fields in StatefulSets to be patched, so the upgrade of the placement service fails if any of the mentioned settings are configured. Follow the steps below to update those settings:
-## Install Dapr via the AKS extension
+1. Delete the stateful set.
-Once you've uninstalled Dapr from your system, install the [Dapr extension for AKS and Arc-enabled Kubernetes](./dapr.md#create-the-extension-and-install-dapr-on-your-aks-or-arc-enabled-kubernetes-cluster).
+ ```azurecli-interactive
+ kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
+ ```
-```bash
-az k8s-extension create --cluster-type managedClusters \
--cluster-name <dapr-cluster-name> \
--resource-group <dapr-resource-group> \
--name <dapr-ext> \
--extension-type Microsoft.Dapr
-```
+1. Update the HA mode:
+
+ ```azurecli-interactive
+ az k8s-extension update --cluster-type managedClusters \
+ --cluster-name myAKSCluster \
+ --resource-group myResourceGroup \
+ --name dapr \
+ --extension-type Microsoft.Dapr \
+ --auto-upgrade-minor-version true \
+    --configuration-settings "global.ha.enabled=true"
+ ```
-## Apply your components
+For more information, see [Dapr Production Guidelines][dapr-prod-guidelines].
-```bash
-kubectl apply -f <component.yaml>
-```
-## Restart your applications that use Dapr
+## Next steps
-Restarting the deployment will create a new sidecar from the new Dapr installation.
+Learn more about [the cluster extension][dapr-overview] and [how to use it][dapr-howto].
-```bash
-kubectl rollout restart <deployment-name>
-```
-## Next steps
+<!-- LINKS INTERNAL -->
+[dapr-overview]: ./dapr-overview.md
+[dapr-howto]: ./dapr.md
+[dapr-create]: ./dapr.md#create-the-extension-and-install-dapr-on-your-aks-or-arc-enabled-kubernetes-cluster
-Learn more about [the cluster extension](./dapr-overview.md) and [how to use it](./dapr.md).
+<!-- LINKS EXTERNAL -->
+[dapr-prod-guidelines]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production/#enabling-high-availability-in-an-existing-dapr-deployment
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
Title: Abort an Azure Kubernetes Service (AKS) long running operation
+ Title: Abort an Azure Kubernetes Service (AKS) long running operation (preview)
description: Learn how to terminate a long running operation on an Azure Kubernetes Service cluster at the node pool or cluster level. Previously updated : 09/08/2022 Last updated : 11/23/2022
Last updated 09/08/2022
Sometimes deployments or other processes running within pods on nodes in a cluster can run longer than expected for various reasons. While it's important to allow those processes to gracefully terminate when they're no longer needed, there are circumstances where you need to release control of node pools and clusters with long running operations using an *abort* command.
-AKS now supports aborting a long running operation, allowing you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
+AKS now supports aborting a long running operation, which is currently in public preview. This feature allows you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
The abort operation supports the following scenarios:
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, start with reviewing our guidance on how to design, secure, and operate an AKS cluster to support your production-ready workloads. For more information, see [AKS architecture guidance](/azure/architecture/reference-architectures/containers/aks-start-here).
+- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+- The `aks-preview` extension version 0.5.102 or later.
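
As a sketch, you can install or update the extension with the following commands:

```azurecli
# Install the aks-preview extension
az extension add --name aks-preview

# Update to the latest version if already installed
az extension update --name aks-preview
```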
+ ## Abort a long running operation
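
As a hedged sketch of what aborting might look like with the `aks-preview` extension (the `operation-abort` command names and the cluster, resource group, and node pool names are assumptions):

```azurecli
# Abort the currently running operation on a node pool
az aks nodepool operation-abort -g myResourceGroup --cluster-name myAKSCluster --nodepool-name nodepool1

# Abort the currently running operation on the cluster
az aks operation-abort -g myResourceGroup -n myAKSCluster
```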
api-management Api Management Howto Use Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-azure-monitor.md
ApiManagementGatewayLogs
For more information about using resource logs for API Management, see:
-* [Get started with Azure Monitor Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md), or try the [Log Analytics Demo environment](https://portal.loganalytics.io/demo).
+* [Get started with Azure Monitor Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md), or try the [Log Analytics Demo environment](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade).
* [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
In this tutorial, you learned how to:
Advance to the next tutorial: > [!div class="nextstepaction"]
-> [Trace calls](api-management-howto-api-inspector.md)
+> [Trace calls](api-management-howto-api-inspector.md)
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
``` > [!NOTE]
- > Now Azure API management is able respond to cross origin requests from your JavaScript SPA apps, and it will perform throttling, rate-limiting and pre-validation of the JWT auth token being passed BEFORE forwarding the request on to the Function API.
+ > Now Azure API management is able to respond to cross origin requests from your JavaScript SPA apps, and it will perform throttling, rate-limiting and pre-validation of the JWT auth token being passed BEFORE forwarding the request on to the Function API.
> > Congratulations, you now have Azure AD B2C, API Management and Azure Functions working together to publish, secure AND consume an API!
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md
To use a Key Vault reference for an [app setting](configure-common.md#configure-
### Considerations for Azure Files mounting
-Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount Azure Files as the file system. This setting has additional validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests which modify these settings, the platform will attempt to validate if this content share exists, and it will attempt to create it if not. If it cannot locate or create the content share, the request is blocked.
+Apps can use the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` application setting to mount [Azure Files](../storage/files/storage-files-introduction.md) as the file system. This setting has additional validation checks to ensure that the app can be properly started. The platform relies on having a content share within Azure Files, and it assumes a default name unless one is specified via the `WEBSITE_CONTENTSHARE` setting. For any requests which modify these settings, the platform will attempt to validate if this content share exists, and it will attempt to create it if not. If it cannot locate or create the content share, the request is blocked.
When using Key Vault references for this setting, this validation check will fail by default, as the secret itself cannot be resolved while processing the incoming request. To avoid this issue, you can skip the validation by setting `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` to "1". This will bypass all checks, and the content share will not be created for you. You should ensure it is created in advance.
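
For example, a minimal sketch of setting this flag with the Azure CLI, assuming a function app named `myFunctionApp` (the app and resource group names are placeholders):

```azurecli
az functionapp config appsettings set \
    --name myFunctionApp \
    --resource-group myResourceGroup \
    --settings WEBSITE_SKIP_CONTENTSHARE_VALIDATION=1
```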
When using Key Vault references for this setting, this validation check will fai
As part of creating the site, it is also possible that attempted mounting of the content share could fail due to managed identity permissions not being propagated or the virtual network integration not being set up. You can defer setting up Azure Files until later in the deployment template to accommodate this. See [Azure Resource Manager deployment](#azure-resource-manager-deployment) to learn more. App Service will use a default file system until Azure Files is set up, and files are not copied over, so you will need to ensure that no deployment attempts occur during the interim period before Azure Files is mounted.
+### Considerations for Application Insights instrumentation
+
+Apps can use the `APPINSIGHTS_INSTRUMENTATIONKEY` or `APPLICATIONINSIGHTS_CONNECTION_STRING` application settings to integrate with [Application Insights](../azure-monitor/app/app-insights-overview.md). The portal experiences for App Service and Azure Functions also use these settings to surface telemetry data from the resource. If these values are referenced from Key Vault, these experiences are not available, and you instead need to work directly with the Application Insights resource to view the telemetry. However, these values are [not considered secrets](../azure-monitor/app/sdk-connection-string.md#is-the-connection-string-a-secret), so you might alternatively consider configuring them directly instead of using the Key Vault references feature.
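
As a sketch, configuring the connection string directly instead of through a Key Vault reference might look like this (the app name, resource group, and connection string are placeholders):

```azurecli
az webapp config appsettings set \
    --name myApp \
    --resource-group myResourceGroup \
    --settings APPLICATIONINSIGHTS_CONNECTION_STRING="<your-connection-string>"
```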
+ ### Azure Resource Manager deployment When automating resource deployments through Azure Resource Manager templates, you may need to sequence your dependencies in a particular order to make this feature work. Of note, you will need to define your application settings as their own resource, rather than using a `siteConfig` property in the site definition. This is because the site needs to be defined first so that the system-assigned identity is created with it and can be used in the access policy.
application-gateway Application Gateway Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-metrics.md
For Application Gateway, the following metrics are available:
- **Client TLS protocol**
- Count of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol.
+ Count of TLS and non-TLS requests initiated by the client that established a connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol. This metric includes requests served by the gateway, such as redirects.
- **Current capacity units**
For Application Gateway, the following metrics are available:
- **Total Requests**
- Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.
+ Count of successful requests that Application Gateway has served from the backend pool targets. Pages served directly by the gateway, such as redirects, aren't counted and are instead included in the Client TLS protocol metric. The Total Requests count metric can be further filtered to show the count per each specific backend pool and HTTP setting combination.
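
As a sketch, you can retrieve this metric with the Azure CLI; the resource ID is a placeholder for your application gateway, and the `TotalRequests` metric name is assumed here:

```azurecli
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/myAppGateway" \
    --metric "TotalRequests" \
    --interval PT1H \
    --aggregation Total
```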
### Backend metrics
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
Please refer to TLS offload and End-to-End TLS documentation for Application Gat
## Connection draining
-Connection draining helps you gracefully remove backend pool members during planned service updates. You can apply this setting to all members of a backend pool by enabling connection draining on the HTTP setting. It ensures that all deregistering instances of a backend pool continue to maintain existing connections and serve on-going requests for a configurable timeout and don't receive any new requests or connections. The only exception to this are requests bound for deregistering instances because of gateway-managed session affinity and will continue to be forwarded to the deregistering instances. Connection draining applies to backend instances that are explicitly removed from the backend pool.
+Connection draining helps you gracefully remove backend pool members during planned service updates. It applies to backend instances that are explicitly removed from the backend pool or during scale-in of backend instances. You can apply this setting to all members of a backend pool by enabling connection draining on the Backend Setting. It ensures that all deregistering instances of a backend pool continue to maintain existing connections and serve on-going requests for a configurable timeout and don't receive any new requests or connections.
+
+| Configuration Type | Value |
+| - | - |
+|Default value when Connection Draining is not enabled in Backend Setting| 30 seconds |
+|User-defined value when Connection Draining is enabled in Backend Setting | 1 to 3600 seconds |
+
+The only exception is requests bound for deregistering instances because of gateway-managed session affinity; these requests continue to be forwarded to the deregistering instances.
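
As a sketch of enabling a custom drain timeout on a backend setting with the Azure CLI (the gateway, resource group, and setting names are placeholders, and the `--connection-draining-timeout` parameter is assumed):

```azurecli
az network application-gateway http-settings update \
    --gateway-name myAppGateway \
    --resource-group myResourceGroup \
    --name myBackendSetting \
    --connection-draining-timeout 60
```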
## Protocol
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
Previously updated : 09/09/2020 Last updated : 11/23/2022
When you create a new listener, you choose between [*basic* and *multi-site*](./
- If you want all of your requests (for any domain) to be accepted and forwarded to backend pools, choose basic. Learn [how to create an application gateway with a basic listener](./quick-create-portal.md). -- If you want to forward requests to different backend pools based on the *host* header or host names, choose multi-site listener, where you must also specify a host name that matches with the incoming request. This is because Application Gateway relies on HTTP 1.1 host headers to host more than one website on the same public IP address and port. To learn more, see [hosting multiple sites using Application Gateway](multiple-site-overview.md).
+- If you want to forward requests to different backend pools based on the *host* header or host names, choose multi-site listener. Application Gateway relies on HTTP 1.1 host headers to host more than one website on the same public IP address and port. To differentiate requests on the same port, you must specify a host name that matches with the incoming request. To learn more, see [hosting multiple sites using Application Gateway](multiple-site-overview.md).
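
As a sketch of creating a multi-site listener with the Azure CLI (the gateway, frontend port, listener name, and host names are placeholders, and the `--host-names` parameter is assumed):

```azurecli
az network application-gateway http-listener create \
    --gateway-name myAppGateway \
    --resource-group myResourceGroup \
    --name contosoListener \
    --frontend-port appGatewayFrontendPort \
    --host-names "www.contoso.com" "*.fabrikam.com"
```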
### Order of processing listeners For the v1 SKU, requests are matched according to the order of the rules and the type of listener. If a rule with basic listener comes first in the order, it's processed first and will accept any request for that port and IP combination. To avoid this, configure the rules with multi-site listeners first and push the rule with the basic listener to the last in the list.
-For the v2 SKU, multi-site listeners are processed before basic listeners.
+For the v2 SKU, multi-site listeners are processed before basic listeners, unless rule priority is defined. If you use rule priority, assign wildcard listeners a priority number greater than non-wildcard listeners to ensure that non-wildcard listeners are evaluated before the wildcard listeners.
## Frontend IP address
Choose the frontend IP address that you plan to associate with this listener. Th
## Frontend port
-Choose the front-end port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners, however the same port cannot be used for both at the same time.
+Choose the frontend port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners, however the same port cannot be used for both at the same time.
## Protocol
Choose HTTP or HTTPS:
- If you choose HTTP, the traffic between the client and the application gateway is unencrypted. -- Choose HTTPS if you want [TLS termination](features.md#secure-sockets-layer-ssltls-termination) or [end-to-end TLS encryption](./ssl-overview.md). The traffic between the client and the application gateway is encrypted. And the TLS connection terminates at the application gateway. If you want end-to-end TLS encryption, you must choose HTTPS and configure the **backend HTTP** setting. This ensures that traffic is re-encrypted when it travels from the application gateway to the back end.-
+- Choose HTTPS if you want [TLS termination](features.md#secure-sockets-layer-ssltls-termination) or [end-to-end TLS encryption](./ssl-overview.md). The traffic between the client and the application gateway is encrypted, and the TLS connection is terminated at the application gateway. If you want end-to-end TLS encryption to the backend target, you must also choose HTTPS in the **backend HTTP setting**. This ensures that traffic is encrypted when the application gateway initiates a connection to the backend target.
To configure TLS termination, a TLS/SSL certificate must be added to the listener. This allows the Application Gateway to decrypt incoming traffic and encrypt response traffic to the client. The certificate provided to the Application Gateway must be in Personal Information Exchange (PFX) format, which contains both the private and public keys.
See [Overview of TLS termination and end to end TLS with Application Gateway](ss
### HTTP2 support
-HTTP/2 protocol support is available to clients that connect to application gateway listeners only. The communication to backend server pools is over HTTP/1.1. By default, HTTP/2 support is disabled. The following Azure PowerShell code snippet shows how to enable this:
+HTTP/2 protocol support is available to clients that connect to application gateway listeners only. Communication to backend server pools is always HTTP/1.1. By default, HTTP/2 support is disabled. The following Azure PowerShell code snippet shows how to enable this:
```azurepowershell $gw = Get-AzApplicationGateway -Name test -ResourceGroupName hm
WebSocket support is enabled by default. There's no user-configurable setting to
## Custom error pages
-You can define custom error at the global level or the listener level. But creating global-level custom error pages from the Azure portal is currently not supported. You can configure a custom error page for a 403 web application firewall error or a 502 maintenance page at the listener level. You must also specify a publicly accessible blob URL for the given error status code. For more information, see [Create Application Gateway custom error pages](./custom-error.md).
+You can define custom error pages at the global level or the listener level; however, creating global-level custom error pages from the Azure portal is currently not supported. You can configure a custom error page for a 403 web application firewall error or a 502 maintenance page at the listener level. You must specify a publicly accessible blob URL for the given error status code. For more information, see [Create Application Gateway custom error pages](./custom-error.md).
![Application Gateway error codes](/azure/application-gateway/media/custom-error/ag-error-codes.png)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## November 8, 2022
+
+### Image tag
+
+`v1.13.0_2022-11-08`
+
+For complete release version information, see [Version log](version-log.md#november-8-2022).
+
+New for this release:
+
+- Azure Arc data controller
+ - Support database as resource in Azure Arc data resource provider
+
+- Arc-enabled PostgreSQL server
+ - Add support for automated backups
+
+- `arcdata` Azure CLI extension
+ - CLI support for automated backups: Setting the `--storage-class-backups` parameter for the create command will enable automated backups
+ ## October 11, 2022 ### Image tag
azure-functions Create First Function Vs Code Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md
Before you get started, make sure you have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ [Node.js 14.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/) (preview). Use the `node --version` command to check your version.
++ [Node.js 16.x](https://nodejs.org/en/download/releases/) or [Node.js 18.x](https://nodejs.org/en/download/releases/) (preview). Use the `node --version` command to check your version. + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
azure-functions Python Scale Performance Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-scale-performance-reference.md
async def main(req: func.HttpRequest) -> func.HttpResponse:
A function without the `async` keyword is run automatically in a ThreadPoolExecutor thread pool: ```python
-# Runs in an ThreadPoolExecutor threadpool. Number of threads is defined by PYTHON_THREADPOOL_THREAD_COUNT.
-# The example is intended to show how default synchronous function are handled.
+# Runs in a ThreadPoolExecutor threadpool. Number of threads is defined by PYTHON_THREADPOOL_THREAD_COUNT.
+# The example is intended to show how default synchronous functions are handled.
def main(): some_blocking_socket_io()
Here are a few examples of client libraries that have implemented async patterns
##### Understanding async in Python worker
-When you define `async` in front of a function signature, Python will mark the function as a coroutine. When calling the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop, which allows the event loop to process the next task during the wait time.
+When you define `async` in front of a function signature, Python marks the function as a coroutine. When calling the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop, which allows the event loop to process the next task during the wait time.
In our Python Worker, the worker shares the event loop with the customer's `async` function and it's capable of handling multiple requests concurrently. We strongly encourage our customers to make use of asyncio compatible libraries, such as [aiohttp](https://pypi.org/project/aiohttp/) and [pyzmq](https://pypi.org/project/pyzmq/). Following these recommendations increases your function's throughput compared to those libraries when implemented synchronously. > [!NOTE]
-> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted since the event loop will be blocked which prohibit the Python worker to handle concurrent requests.
+> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted since the event loop will be blocked which prohibits the Python worker from handling concurrent requests.
#### Use multiple language worker processes
For CPU-bound apps, you should keep the setting to a low number, starting from 1
For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. The recommendation is to start with the Python default (the number of cores) + 4 and then tweak based on the throughput values you're seeing.
-For mix workloads apps, you should balance both `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize the throughput. To understand what your function apps spend the most time on, we recommend profiling them and set the values according to the behavior they present. Also refer to this [section](#use-multiple-language-worker-processes) to learn about FUNCTIONS_WORKER_PROCESS_COUNT application settings.
+For mixed workloads apps, you should balance both `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize the throughput. To understand what your function apps spend the most time on, we recommend profiling them and setting the values according to their behaviors. To learn about these application settings, see [Use multiple worker processes](#use-multiple-language-worker-processes).
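
As a sketch, both settings can be adjusted with the Azure CLI; the function app and resource group names are placeholders, and the values shown are only a starting point:

```azurecli
az functionapp config appsettings set \
    --name myFunctionApp \
    --resource-group myResourceGroup \
    --settings FUNCTIONS_WORKER_PROCESS_COUNT=2 PYTHON_THREADPOOL_THREAD_COUNT=8
```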
> [!NOTE] > Although these recommendations apply to both HTTP and non-HTTP triggered functions, you might need to adjust other trigger specific configurations for non-HTTP triggered functions to get the expected performance from your function apps. For more information about this, please refer to this [article](functions-best-practices.md).
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 11/9/2022 Last updated : 11/22/2022
In addition to the generally available data collection listed above, Azure Monit
| Azure service | Current support | Other extensions installed | More information | | : | : | : | : | | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: Preview</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/amadcr-privatepreviews)</li><li>No sign-up needed for Windows Forwarding Event (WEF), Windows Security Events and Windows DNS events</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
| [Change Tracking](../../automation/change-tracking/overview.md) | Change Tracking: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | | [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Preview | Azure NetworkWatcher extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To send data to Log Analytics, create the data collection rule in the *same regi
1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**: - **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.- - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types. [ ![Screenshot that shows the Basics tab of the Data Collection Rule screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
This capability is enabled as part of the Azure CLI monitor-control-service exte
For sample templates, see [Azure Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md). + ## Filter events using XPath queries
-You're charged for any data you collect in a Log Analytics workspace, so collect only the data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+Since you're charged for any data you collect in a Log Analytics workspace, you should limit data collection from your agent to only the event data that you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+ To specify more filters, use custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you might want to return only events from the Application event log with an event ID of 1035. The `XPathQuery` for these events would be `*[System[EventID=1035]]`. Because you want to retrieve the events from the Application event log, the XPath is `Application!*[System[EventID=1035]]`
Examples of using a custom XPath to filter events:
| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` | | Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` | + ## Next steps - [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).
azure-monitor Data Collection Rule Sample Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-sample-agent.md
The sample [data collection rule](../essentials/data-collection-rule-overview.md
- Sends all data to a Log Analytics workspace named centralWorkspace. > [!NOTE]
-> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries)
+> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
## Sample DCR
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Title: Create Azure Monitor alert rules
-description: Learn how to create a new alert rule.
+description: This article shows you how to create a new alert rule.
# Create a new alert rule
-This article shows you how to create an alert rule. Learn more about alerts [here](alerts-overview.md).
+This article shows you how to create an alert rule. To learn more about alerts, see [What are Azure Monitor alerts?](alerts-overview.md).
You create an alert rule by combining:
+ - The resources to be monitored.
+ - The signal or telemetry from the resource.
+ - Conditions.
-And then defining these elements for the resulting alert actions using:
+Then you define these elements for the resulting alert actions by using:
- [Alert processing rules](alerts-action-rules.md) - [Action groups](./action-groups.md) ## Create a new alert rule in the Azure portal
-1. In the [portal](https://portal.azure.com/), select **Monitor**, then **Alerts**.
-1. Expand the **+ Create** menu, and select **Alert rule**.
+1. In the [portal](https://portal.azure.com/), select **Monitor** > **Alerts**.
+1. Open the **+ Create** menu and select **Alert rule**.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule.png" alt-text="Screenshot showing steps to create new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule.png" alt-text="Screenshot that shows steps to create a new alert rule.":::
-1. In the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, **resource location**, or do a search.
+1. On the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, or **resource location**. You can also do a search.
- The **Available signal types** for your selected resource(s) are at the bottom right of the pane.
+ **Available signal types** for your selected resources are at the bottom right of the pane.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot showing the select resource pane for creating new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot that shows the select resource pane for creating a new alert rule.":::
1. Select **Include all future resources** to include any future resources added to the selected scope. 1. Select **Done**.
-1. Select **Next: Condition>** at the bottom of the page.
-1. In the **Select a signal** pane, filter the list of signals using the **Signal type** and **Monitor service**.
- - **Signal Type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
+1. Select **Next: Condition** at the bottom of the page.
+1. On the **Select a signal** pane, filter the list of signals by using the signal type and monitor service:
+ - **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
- **Monitor service**: The service sending the signal. This list is pre-populated based on the type of alert rule you selected. This table describes the services available for each type of alert rule: |Signal type |Monitor service |Description | ||||
- |Metrics|Platform |For metric signals, the monitor service is the metric namespace. ΓÇÿPlatformΓÇÖ means the metrics are provided by the resource provider, namely 'Azure'.|
+ |Metrics|Platform |For metric signals, the monitor service is the metric namespace. "Platform" means the metrics are provided by the resource provider, namely, Azure.|
| |Azure.ApplicationInsights|Customer-reported metrics, sent by the Application Insights SDK. |
- | |Azure.VM.Windows.GuestMetrics |VM guest metrics, collected by an extension running on the VM. Can include built-in operating system perf counters, and custom perf counters. |
+ | |Azure.VM.Windows.GuestMetrics |VM guest metrics, collected by an extension running on the VM. Can include built-in operating system perf counters and custom perf counters. |
| |\<your custom namespace\>|A custom metric namespace, containing custom metrics sent with the Azure Monitor Metrics API. |
- |Log |Log Analytics|The service that provides the ΓÇÿCustom log searchΓÇÖ and ΓÇÿLog (saved query)ΓÇÖ signals. |
- |Activity log|Activity Log ΓÇô Administrative|The service that provides the ΓÇÿAdministrativeΓÇÖ activity log events. |
- | |Activity Log ΓÇô Policy|The service that provides the 'Policy' activity log events. |
- | |Activity Log ΓÇô Autoscale|The service that provides the ΓÇÿAutoscaleΓÇÖ activity log events. |
- | |Activity Log ΓÇô Security|The service that provides the ΓÇÿSecurityΓÇÖ activity log events. |
+ |Log |Log Analytics|The service that provides the "Custom log search" and "Log (saved query)" signals. |
+ |Activity log|Activity log – Administrative|The service that provides the Administrative activity log events. |
+ | |Activity log – Policy|The service that provides the Policy activity log events. |
+ | |Activity log – Autoscale|The service that provides the Autoscale activity log events. |
+ | |Activity log – Security|The service that provides the Security activity log events. |
|Resource health|Resource health|The service that provides the resource-level health status. | |Service health|Service health|The service that provides the subscription-level health status. |
-
-1. Select the **Signal name**, and follow the steps in the tab below that corresponds to the type of alert you're creating.
+1. Select the **Signal name**, and follow the steps in the following tab that corresponds to the type of alert you're creating.
+ ### [Metric alert](#tab/metric)
- 1. In the **Configure signal logic** pane, you can preview the results of the selected metric signal. Select values for the following fields.
+ 1. On the **Configure signal logic** pane, you can preview the results of the selected metric signal. Select values for the following fields.
|Field |Description | ||| |Select time series|Select the time series to include in the results. |
- |Chart period|Select the time span to include in the results. Can be from the last 6 hours to the last week.|
+ |Chart period|Select the time span to include in the results. Can be from the last six hours to the last week.|
- 1. (Optional) Depending on the signal type, you may see the **Split by dimensions** section.
+ 1. (Optional) Depending on the signal type, you might see the **Split by dimensions** section.
- Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values. Dimensions can be either number or string columns.
+ Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values. Dimensions can be either number or string columns.
- If you select more than one dimension value, each time series that results from the combination will trigger its own alert, and will be charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data), or you can use dimensions to alert only when the number of transactions is high for specific APIs.
+ If you select more than one dimension value, each time series that results from the combination will trigger its own alert and be charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs.
|Field |Description | |||
And then defining these elements for the resulting alert actions using:
|Field |Description | |||
- |Threshold|Select if threshold should be evaluated based on a static value or a dynamic value.<br>A static threshold evaluates the rule using the threshold value that you configure.<br>Dynamic Thresholds use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#dynamic-thresholds). |
+ |Threshold|Select if the threshold should be evaluated based on a static value or a dynamic value.<br>A static threshold evaluates the rule by using the threshold value that you configure.<br>Dynamic thresholds use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#dynamic-thresholds). |
|Operator|Select the operator for comparing the metric value against the threshold. | |Aggregation type|Select the aggregation function to apply on the data points: Sum, Count, Average, Min, or Max. | |Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic. |
- |Unit|If the selected metric signal supports different units,such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|
- |Threshold sensitivity| If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern is required to trigger an alert. |
- |Aggregation granularity| Select the interval that is used to group the data points using the aggregation type function. Choose an **Aggregation granularity** (Period) that's greater than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation period of an added time series.|
- |Frequency of evaluation|Select how often the alert rule is be run. Select a frequency that is smaller than the aggregation granularity to generate a sliding window for the evaluation.|
+ |Unit|If the selected metric signal supports different units, such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|
+ |Threshold sensitivity| If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern that's required to trigger an alert. |
+ |Aggregation granularity| Select the interval that's used to group the data points by using the aggregation type function. Choose an **Aggregation granularity** (period) that's greater than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation period of an added time series.|
+ |Frequency of evaluation|Select how often the alert rule is to be run. Select a frequency that's smaller than the aggregation granularity to generate a sliding window for the evaluation.|
1. Select **Done**.+ ### [Log alert](#tab/log) > [!NOTE]
- > If you are creating a new log alert rule, note that current alert rule wizard is a little different from the earlier experience. For detailed information about the changes, see [changes to log alert rule creation experience](#changes-to-log-alert-rule-creation-experience).
-
- 1. In the **Logs** pane, write a query that will return the log events for which you want to create an alert.
- To use one of the predefined alert rule queries, expand the **Schema and filter pane** on the left of the **Logs** pane, then select the **Queries** tab, and select one of the queries.
+ > If you're creating a new log alert rule, note that the current alert rule wizard is different from the earlier experience. For more information, see [Changes to the log alert rule creation experience](#changes-to-the-log-alert-rule-creation-experience).
+
+ 1. On the **Logs** pane, write a query that will return the log events for which you want to create an alert.
+ To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot of the query pane when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule.":::
1. Select **Run** to run the alert. 1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**.
- 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last 5 minutes. If the system detects summarized query results, the rule is automatically updated with that information.
+ 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last five minutes. If the system detects summarized query results, the rule is automatically updated with that information.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot of the conditions tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log alert rule.":::
1. In the **Measurement** section, select values for these fields: |Field |Description | |||
- |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, syslog, application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage. |
- |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value using the aggregation granularity. For example: Total, Average, Minimum, or Maximum. |
+ |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. |
+ |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value by using the aggregation granularity. Examples are Total, Average, Minimum, or Maximum. |
|Aggregation granularity| The interval for aggregating multiple records to one numeric value.|
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot of the measurements tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log alert rule.":::
- 1. (Optional) In the **Split by dimensions** section, you can use dimensions to monitor the values of multiple instances of a resource with one rule. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. When you split by dimensions, alerts are split into separate alerts by grouping combinations of numerical or string columns to monitor for the same condition on multiple Azure resources. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually notifications are sent for each instance.
+ 1. (Optional) In the **Split by dimensions** section, you can use dimensions to monitor the values of multiple instances of a resource with one rule. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. When you split by dimensions, alerts are split into separate alerts by grouping combinations of numerical or string columns to monitor for the same condition on multiple Azure resources. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance.
- Splitting on **Azure Resource ID** column makes specified resource the target of the alert.
+ Splitting on the **Azure Resource ID** column makes the specified resource the target of the alert.
If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. The alert payload includes the combination that triggered the alert. You can select up to six more splittings for any columns that contain text or numbers.
- You can also decide **not** to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+ You can also decide *not* to split when you want a condition applied to multiple resources in the scope. An example would be if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80 percent.
Select values for these fields:
|Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. | |Include all future values| Select this field to include any future values added to the selected dimension. |
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot of the splitting by dimensions section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log alert rule.":::
1. In the **Alert logic** section, select values for these fields:
|Threshold value| A number value for the threshold. | |Frequency of evaluation|The interval in which the query is run. Can be set from a minute to a day. |
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot of alert logic section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule.":::
- 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set the **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy.
+ 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy.
Select values for these fields under **Number of violations to trigger the alert**:
||| |Number of violations|The number of violations that trigger the alert.| |Evaluation period|The time period within which the number of violations occur. |
- |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than 2 days, the 2 day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to 2 days of data.<br> If the query requires more data than the alert evaluation, and there's no **ago** command in the query, you can change the time range manually.|
+ |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data.<br> If the query requires more data than the alert evaluation, and there's no **ago** command in the query, you can change the time range manually.|
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of the advanced options section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log alert rule.":::
> [!NOTE]
- > If you, or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**, or the rule creation will fail because it won't meet the policy requirements.
+ > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**. If you don't, the rule creation will fail because it won't meet the policy requirements.
- 1. The **Preview** chart shows query evaluations results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
+ 1. The **Preview** chart shows query evaluation results over time. You can change the chart period or select different time series that resulted from splitting the alert by dimensions.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot of a preview of a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot that shows a preview of a new alert rule.":::
### [Activity log alert](#tab/activity-log)
- 1. In the **Conditions** pane, select the **Chart period**.
+ 1. On the **Conditions** pane, select the **Chart period**.
1. The **Preview** chart shows you the results of your selection. 1. Select values for each of these fields in the **Alert logic** section: |Field |Description | |||
- |Event level| Select the level of the events for this alert rule. Values are: **Critical**, **Error**, **Warning**, **Informational**, **Verbose** and **All**.|
+ |Event level| Select the level of the events for this alert rule. Values are **Critical**, **Error**, **Warning**, **Informational**, **Verbose**, and **All**.|
|Status|Select the status levels for the alert.| |Event initiated by|Select the user or service principal that initiated the event.| ### [Resource Health alert](#tab/resource-health)
- 1. In the **Conditions** pane, select values for each of these fields:
+ On the **Conditions** pane, select values for each of these fields:
- |Field |Description |
- |||
- |Event status| Select the statuses of Resource Health events. Values are: **Active**, **In Progress**, **Resolved**, and **Updated**.|
- |Current resource status|Select the current resource status. Values are: **Available**, **Degraded**, and **Unavailable**.|
- |Previous resource status|Select the previous resource status. Values are: **Available**, **Degraded**, **Unavailable**, and **Unknown**.|
- |Reason type|Select the cause(s) of the Resource Health events. Values are: **Platform Initiated**, **Unknown**, and **User Initiated**.|
+ |Field |Description |
+ |||
+ |Event status| Select the statuses of Resource Health events. Values are **Active**, **In Progress**, **Resolved**, and **Updated**.|
+ |Current resource status|Select the current resource status. Values are **Available**, **Degraded**, and **Unavailable**.|
+ |Previous resource status|Select the previous resource status. Values are **Available**, **Degraded**, **Unavailable**, and **Unknown**.|
+ |Reason type|Select the causes of the Resource Health events. Values are **Platform Initiated**, **Unknown**, and **User Initiated**.|
+
### [Service Health alert](#tab/service-health)
- 1. In the **Conditions** pane, select values for each of these fields:
+ On the **Conditions** pane, select values for each of these fields:
|Field |Description | ||| |Services| Select the Azure services.| |Regions|Select the Azure regions.|
- |Event types|Select the type(s) of Service Health events. Values are: **Service issue**, **Planned maintenance**, **Health advisories**, and **Security advisories**.|
+ |Event types|Select the types of Service Health events. Values are **Service issue**, **Planned maintenance**, **Health advisories**, and **Security advisories**.|
From this point on, you can select the **Review + create** button at any time.
-1. In the **Actions** tab, select or create the required [action groups](./action-groups.md).
+1. On the **Actions** tab, select or create the required [action groups](./action-groups.md).
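   If you'd rather create an action group from the command line and then pick it on this tab, a minimal sketch follows; the group name, short name, and email address are placeholder assumptions, not values from this article:

   ```azurecli
   # Create an action group that sends email notifications (all names are placeholders).
   az monitor action-group create \
       --name MyActionGroup \
       --resource-group MyResourceGroup \
       --short-name myag \
       --action email admin admin@contoso.com
   ```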
1. (Optional) If you want to make sure that the data processing for the action group takes place within a specific region, you can select an action group in one of these regions in which to process the action group: - Sweden Central - Germany West Central > [!NOTE]
- > We are continually adding more regions for regional data processing.
+ > We're continually adding more regions for regional data processing.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot of the actions tab when creating a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule.":::
-1. In the **Details** tab, define the **Project details**.
+1. On the **Details** tab, define the **Project details**.
- Select the **Subscription**. - Select the **Resource group**.
- - (Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the regions below, and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions:
+ - (Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the following regions and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions:
- North Europe - West Europe - Sweden Central - Germany West Central > [!NOTE]
- > We are continually adding more regions for regional data processing.
+ > We're continually adding more regions for regional data processing.
1. Define the **Alert rule details**. ### [Metric alert](#tab/metric)
||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.| |Automatically resolve alerts (preview) |Select to make the alert stateful. The alert is resolved when the condition isn't met anymore.|
- 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to the alert payload to add additional information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
-
+ 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot of the details tab when creating a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new alert rule.":::
### [Log alert](#tab/log) 1. Select the **Severity**. 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**.
- 1. (Optional) In the **Advanced options** section, you can set several options.
+ 1. (Optional) In the **Advanced options** section, you can set several options:
|Field |Description | |||
|Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.| |Check workspace linked storage|Select if logs workspace linked storage for alerts is configured. If no linked storage is configured, the rule isn't created.|
- 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to the alert payload to add additional information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
+ 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot of the details tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule.":::
### [Activity log alert](#tab/activity-log) 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**. 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
- 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to the alert payload to add additional information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
+ 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot of the actions tab when creating a new activity log alert rule.":::
### [Resource Health alert](#tab/resource-health) 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
+
### [Service Health alert](#tab/service-health) 1. Enter values for the **Alert rule name** and the **Alert rule description**.
-1. In the **Tags** tab, set any required tags on the alert rule resource.
+1. On the **Tags** tab, set any required tags on the alert rule resource.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot of the Tags tab when creating a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot that shows the Tags tab when creating a new alert rule.":::
-1. In the **Review + create** tab, a validation will run and inform you of any issues.
+1. On the **Review + create** tab, a validation will run and inform you of any issues.
1. When validation passes and you've reviewed the settings, select the **Create** button.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot of the Review and create tab when creating a new alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot that shows the Review and create tab when creating a new alert rule.":::
+## Create a new alert rule by using the CLI
-## Create a new alert rule using CLI
+You can create a new alert rule by using the [Azure CLI](/cli/azure/get-started-with-azure-cli). The following code examples use [Azure Cloud Shell](../../cloud-shell/overview.md). You can see the full list of the [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor#azure-monitor-references).
-You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-with-azure-cli). The code examples below are using [Azure Cloud Shell](../../cloud-shell/overview.md). You can see the full list of the [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor#azure-monitor-references).
+1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. At the prompt, use the commands that follow.
-1. In the [portal](https://portal.azure.com/), select **Cloud Shell**, and at the prompt, use the following commands:
### [Metric alert](#tab/metric)
- To create a metric alert rule, use the **az monitor metrics alert create** command. You can see detailed documentation on the metric alert rule create command in the **az monitor metrics alert create** section of the [CLI reference documentation for metric alerts](/cli/azure/monitor/metrics/alert).
+ To create a metric alert rule, use the `az monitor metrics alert create` command. You can see detailed documentation on the metric alert rule create command in the `az monitor metrics alert create` section of the [CLI reference documentation for metric alerts](/cli/azure/monitor/metrics/alert).
To create a metric alert rule that monitors if average Percentage CPU on a VM is greater than 90: ```azurecli
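   # Illustrative sketch only; {ResourceGroup} and {vm_id} are placeholders, and the
   # --window-size and --evaluation-frequency values are assumptions that correspond to
   # the portal's "Aggregation granularity" and "Frequency of evaluation" fields.
   az monitor metrics alert create -n "High CPU" -g {ResourceGroup} \
       --scopes {vm_id} \
       --condition "avg Percentage CPU > 90" \
       --window-size 5m \
       --evaluation-frequency 1m \
       --description "Average Percentage CPU is greater than 90"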
``` ### [Log alert](#tab/log)
- To create a log alert rule that monitors count of system event errors:
+ To create a log alert rule that monitors the count of system event errors:
```azurecli az monitor scheduled-query create -g {ResourceGroup} -n {nameofthealert} --scopes {vm_id} --condition "count \'union Event, Syslog | where TimeGenerated > ago(1h) | where EventLevelName == \"Error\" or SeverityLevel == \"err\"\' > 2" --description {descriptionofthealert} ``` > [!NOTE]
- > Azure CLI support is only available for the scheduledQueryRules API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch to use CLI. [Learn more about switching](./alerts-log-api-switch.md).
+ > Azure CLI support is only available for the `scheduledQueryRules` API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described in the following sections. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you must switch to use the CLI. [Learn more about switching](./alerts-log-api-switch.md).
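   The same command also accepts parameters that map to the portal fields described earlier, such as aggregation granularity, evaluation frequency, and severity. The following is a hedged sketch that reuses the query above; the `--window-size`, `--evaluation-frequency`, and `--severity` parameter names are assumed from the current `az monitor scheduled-query create` options:

   ```azurecli
   # Evaluate the query every 5 minutes over a 1-hour window at severity 2 (placeholders in braces).
   az monitor scheduled-query create -g {ResourceGroup} -n {nameofthealert} --scopes {vm_id} \
       --condition "count \'union Event, Syslog | where TimeGenerated > ago(1h) | where EventLevelName == \"Error\" or SeverityLevel == \"err\"\' > 2" \
       --window-size 1h \
       --evaluation-frequency 5m \
       --severity 2 \
       --description {descriptionofthealert}
   ```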
### [Activity log alert](#tab/activity-log)
- [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule. - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
- You can find detailed documentation on the activity log alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+ You can find detailed documentation on the activity log alert rule create command in the `az monitor activity-log alert create` section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
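   As an example, a minimal end-to-end sequence might look like the following sketch; the rule name, resource group, subscription ID, and action group are placeholder assumptions, and the parameter names are assumed to match the linked CLI reference pages:

   ```azurecli
   # Create the rule with an Administrative-category condition, then add a scope and an action group.
   az monitor activity-log alert create -n {alertname} -g {ResourceGroup} \
       --condition category=Administrative and level=Error
   az monitor activity-log alert scope add -n {alertname} -g {ResourceGroup} \
       --scope /subscriptions/{subscriptionId}
   az monitor activity-log alert action-group add -n {alertname} -g {ResourceGroup} \
       --action-group {actionGroupId}
   ```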
+ ### [Resource Health alert](#tab/resource-health)
- To create a new activity log alert rule, use the following commands using the `Resource Health` category:
+ To create a new activity log alert rule, use the following commands with the `Resource Health` category:
- [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource. - [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule. - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
- You can find detailed documentation on the alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+ You can find detailed documentation on the alert rule create command in the `az monitor activity-log alert create` section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
### [Service Health alert](#tab/service-health)
- To create a new activity log alert rule, use the following commands using the `Service Health` category:
- - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource .
+ To create a new activity log alert rule, use the following commands with the `Service Health` category:
+ - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
- [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule. - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
- You can find detailed documentation on the alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+ You can find detailed documentation on the alert rule create command in the `az monitor activity-log alert create` section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
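   As a brief sketch, the category value goes into the condition of the create command; the rule name, resource group, and subscription ID are placeholders:

   ```azurecli
   # Service Health alert rules use the ServiceHealth category in their condition.
   az monitor activity-log alert create -n {alertname} -g {ResourceGroup} \
       --scope /subscriptions/{subscriptionId} \
       --condition category=ServiceHealth
   ```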
-## Create a new alert rule using PowerShell
+## Create a new alert rule by using PowerShell
-- To create a metric alert rule using PowerShell, use this cmdlet: [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2)-- To create a log alert rule using PowerShell, use this cmdlet: [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule)-- To create an activity log alert rule using PowerShell, use this cmdlet: [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert)
+- To create a metric alert rule by using PowerShell, use the [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2) cmdlet.
+- To create a log alert rule by using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.
+- To create an activity log alert rule by using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet.
## Create an activity log alert rule from the Activity log pane
-You can also create an activity log alert on future events similar to an activity log event that already occurred.
+You can also create an activity log alert on future events similar to an activity log event that already occurred.
-1. In the [portal](https://portal.azure.com/), [go to the activity log pane](../essentials/activity-log.md#view-the-activity-log).
-1. Filter or find the desired event, and then create an alert by selecting **Add activity log alert**.
+1. In the [portal](https://portal.azure.com/), [go to the Activity log pane](../essentials/activity-log.md#view-the-activity-log).
+1. Filter or find the desired event. Then create an alert by selecting **Add activity log alert**.
- :::image type="content" source="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot of creating an alert rule from an activity log event." lightbox="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot that shows creating an alert rule from an activity log event." lightbox="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png":::
-2. The **Create alert rule** wizard opens, with the scope and condition already provided according to the previously selected activity log event. If necessary, you can edit and modify the scope and condition at this stage. By default, the exact scope and condition for the new rule are copied from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name who initiated the event, are both included by default in the new alert rule. If you want to make the alert rule more general, modify the scope, and condition accordingly (see steps 3-9 in the section "Create an alert rule from the Azure Monitor alerts pane").
+1. The **Create alert rule** wizard opens, with the scope and condition already provided according to the previously selected activity log event. If necessary, you can edit and modify the scope and condition at this stage. By default, the exact scope and condition for the new rule are copied from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name that initiated the event, are both included by default in the new alert rule.
-3. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal).
+ If you want to make the alert rule more general, modify the scope and condition accordingly. See steps 3-9 in the section "Create a new alert rule in the Azure portal."
-## Create an activity log alert rule using an Azure Resource Manager template
+1. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal).
-To create an activity log alert rule using an Azure Resource Manager template, create a `microsoft.insights/activityLogAlerts` resource, and fill in all related properties.
+## Create an activity log alert rule by using an ARM template
-> [!NOTE]
->The highest level that activity log alerts can be defined is the subscription level. Define the alert to alert per subscription. You can't define an alert on two subscriptions.
+To create an activity log alert rule by using an Azure Resource Manager template (ARM template), create a `microsoft.insights/activityLogAlerts` resource. Then fill in all related properties.
-The following fields are the options in the Azure Resource Manager template for the conditions fields. (The **Resource Health**, **Advisor** and **Service Health** fields have extra properties fields.)
+> [!NOTE]
+>The highest level at which activity log alerts can be defined is the subscription level. Define the alert so that it alerts per subscription. You can't define one alert on two subscriptions.
+The following fields are the options in the ARM template for the condition fields. The **Resource Health**, **Advisor**, and **Service Health** fields have extra properties fields.
|Field |Description | |||
-|resourceId|The resource ID of the impacted resource in the activity log event on which the alert is generated.|
-|category|The category of the activity log event. Possible values: `Administrative`, `ServiceHealth`, `ResourceHealth`, `Autoscale`, `Security`, `Recommendation`, or `Policy` |
+|resourceId|The resource ID of the affected resource in the activity log event on which the alert is generated.|
+|category|The category of the activity log event. Possible values are `Administrative`, `ServiceHealth`, `ResourceHealth`, `Autoscale`, `Security`, `Recommendation`, or `Policy`. |
|caller|The email address or Azure Active Directory identifier of the user who performed the operation of the activity log event. |
-|level |Level of the activity in the activity log event for the alert. Possible values: `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`.|
-|operationName |The name of the operation in the activity log event. Possible values: `Microsoft.Resources/deployments/write`. |
-|resourceGroup |Name of the resource group for the impacted resource in the activity log event. |
+|level |Level of the activity in the activity log event for the alert. Possible values are `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`.|
+|operationName |The name of the operation in the activity log event. An example is `Microsoft.Resources/deployments/write`. |
+|resourceGroup |Name of the resource group for the affected resource in the activity log event. |
|resourceProvider |For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](../../azure-resource-manager/management/resource-providers-and-types.md). |
-|status |String describing the status of the operation in the activity event. Possible values: `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved` |
+|status |String describing the status of the operation in the activity event. Possible values are `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved`. |
|subStatus |Usually, this field is the HTTP status code of the corresponding REST call. This field can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others. |
-|resourceType |The type of the resource that was affected by the event. For example: `Microsoft.Resources/deployments`. |
+|resourceType |The type of the resource that was affected by the event. An example is `Microsoft.Resources/deployments`. |
This example sets the condition to the **Administrative** category:
```
-This is an example template that creates an activity log alert rule using the **Administrative** condition:
+This example template creates an activity log alert rule by using the **Administrative** condition:
```json {
] } ```+ This sample JSON can be saved as, for example, *sampleActivityLogAlert.json*. You can deploy the sample by using [Azure Resource Manager in the Azure portal](../../azure-resource-manager/templates/deploy-portal.md). For more information about the activity log fields, see [Azure activity log event schema](../essentials/activity-log-schema.md). > [!NOTE]
-> It might take up to 5 minutes for the new activity log alert rule to become active.
+> It might take up to five minutes for the new activity log alert rule to become active.
-## Create a new activity log alert rule using the REST API
+## Create a new activity log alert rule by using the REST API
-The Azure Monitor Activity Log Alerts API is a REST API. It's fully compatible with the Azure Resource Manager REST API. You can use it with PowerShell, by using the Resource Manager cmdlet or the Azure CLI.
+The Azure Monitor Activity Log Alerts API is a REST API. It's fully compatible with the Azure Resource Manager REST API. You can use it with PowerShell by using the Resource Manager cmdlet or the Azure CLI.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-### Deploy the Resource Manager template with PowerShell
+### Deploy the ARM template with PowerShell
-To use PowerShell to deploy the sample Resource Manager template shown in the [previous section](#create-an-activity-log-alert-rule-using-an-azure-resource-manager-template) section, use the following command:
+To use PowerShell to deploy the sample ARM template shown in the [previous section](#create-an-activity-log-alert-rule-by-using-an-arm-template), use the following command:
```powershell New-AzResourceGroupDeployment -ResourceGroupName "myRG" -TemplateFile sampleActivityLogAlert.json -TemplateParameterFile sampleActivityLogAlert.parameters.json ```
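If you prefer the Azure CLI, an equivalent deployment should be possible with `az deployment group create`; the resource group and file names below mirror the PowerShell example:

```azurecli
# Deploy the sample template and its parameter file to an existing resource group.
az deployment group create --resource-group "myRG" \
    --template-file sampleActivityLogAlert.json \
    --parameters @sampleActivityLogAlert.parameters.json
```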
-The *sampleActivityLogAlert.parameters.json* file contains the values provided for the parameters needed for alert rule creation.
-## Changes to log alert rule creation experience
+The *sampleActivityLogAlert.parameters.json* file contains values for the parameters that you need for alert rule creation.
+
+## Changes to the log alert rule creation experience
-If you're creating a new log alert rule, note that current alert rule wizard is a little different from the earlier experience:
+The current alert rule wizard is different from the earlier experience:
-- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action:
- - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
- - When you need to investigate in the logs, use the link in the alert to the search results in Logs.
- - If you need the raw search results or for any other advanced customizations, use Logic Apps.
+- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1,000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action:
+ - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, which gives you context for why the alert fired and how to fix the issue.
+ - When you need to investigate in the logs, use the link in the alert to the search results in logs.
+ - If you need the raw search results or any other advanced customizations, use Azure Logic Apps.
- The new alert rule wizard doesn't support customization of the JSON payload. - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert. - For more advanced customizations, use Logic Apps. - The new alert rule wizard doesn't support customization of the email subject.
- - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert of the desired resource using the resource ID column.
+ - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert of the desired resource by using the resource ID column.
- For more advanced customizations, use Logic Apps. ## Next steps
+ [View and manage your alert instances](alerts-manage-alert-instances.md)
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
To create a new file, right-click under your timer trigger function (for example
} ```
-1. Copy the following code into the **run.csx** file. (You'll replace the preexisting code.)
+1. Define the `REGION_NAME` environment variable as a valid Azure availability location.
+
+ Run the following command in the [Azure CLI](https://learn.microsoft.com/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions.
+ ```azurecli
+ az account list-locations -o table
+ ```
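   Then set the variable as an app setting on the function app. A hedged sketch, assuming a hypothetical function app name and resource group and the `eastus` region:

   ```azurecli
   # Set REGION_NAME on the function app (app and resource group names are placeholders).
   az functionapp config appsettings set --name MyAvailabilityFunctionApp \
       --resource-group MyResourceGroup \
       --settings REGION_NAME=eastus
   ```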
+
+1. Copy the following code into the **run.csx** file. (You'll replace the preexisting code.)
+
```csharp #load "runAvailabilityTest.csx"
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
The table below displays the current state of auto-instrumentation availability.
Links are provided to additional information for each supported scenario.
-|Environment/Resource Provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python |
-|-||||-|-|
-|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: |
-|Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: |
-|Azure App Service on Linux | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
-|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> |
-|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[2](#Preview)</sup> | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |
-|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |
-|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
-|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
-|On-premises VMs Windows | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
-|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Environment/Resource Provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python |
+|-|||-|-|--|
+|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: |
+|Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: |
+|Azure App Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
+|Azure App Service on Linux - Publish as Docker | :x: | :x: | :x: | :x: | :x: |
+|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> |
+|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[2](#Preview)</sup> | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |
+|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |
+|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|On-premises VMs Windows | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
**Footnotes** - <a name="OnBD">1</a>: Application Insights is on by default and enabled automatically.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Title: 'Azure Monitor best practices: Cost management'
+ Title: Cost optimization and Azure Monitor
description: Guidance and recommendations for reducing your cost for Azure Monitor. - Previously updated : 03/31/2022 Last updated : 10/17/2022
-# Azure Monitor best practices: Cost management
+# Cost optimization and Azure Monitor
+You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
-This article provides guidance on reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. It explains how to take advantage of cost-saving features to help ensure that you're not paying for data collection that provides little value. It also provides guidance for regularly monitoring your usage so that you can proactively detect and identify sources responsible for excessive usage.
-
-## Understand Azure Monitor charges
-
-You should start by understanding the different ways that Azure Monitor charges and how to view your monthly bill. See [Azure Monitor cost and usage](usage-estimated-costs.md) for a complete description and the different tools available to analyze your charges.
-
-## Configure workspaces
-
-You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. You want to evaluate configuration options that allow you to reduce your monitoring costs.
-
-### Configure pricing tier or dedicated cluster
-
-By default, workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough amount of data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers). You commit to a daily minimum of data collected in exchange for a lower rate.
-
-[Dedicated clusters](logs/logs-dedicated-clusters.md) provide more functionality and cost savings if you ingest at least 500 GB per day collectively among multiple workspaces in the same region. Unlike commitment tiers, workspaces in a dedicated cluster don't need to individually reach 500 GB.
-
-See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers.
-
-### Optimize workspace configuration
-
-As your monitoring environment becomes more complex, you'll need to consider whether to create more Log Analytics workspaces. This need might surface as you place resources in more regions or as you implement more services that use workspaces such as Microsoft Sentinel and Microsoft Defender for Cloud.
-
-There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. For a description of these implications and guidance on determining the most cost-effective solution for your environment, see:
--- [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud)-
-## Configure tables in each workspace
-
-Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You might be collecting data that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by optimizing your data retention and archiving and configuring Basic Logs.
-
-### Configure data retention and archiving
-
-Data collected in a Log Analytics workspace is retained for 31 days at no charge. The time period is 90 days if Microsoft Sentinel is enabled on the workspace. You can retain data beyond the default for trending analysis or other reporting, but there's a charge for this retention.
-
-Your retention requirement might be for compliance reasons or for occasional investigation or analysis of historical data. In this case, you should configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost. There's a cost to search archived data or temporarily restore it for analysis. If you require infrequent access to this data, this cost is more than offset by the reduced retention cost.
-
-You can configure retention and archiving for all tables in a workspace or configure each table separately. The options allow you to optimize your costs by setting only the retention you require for each data type.
-
-### Configure Basic Logs
-
-You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as [Basic Logs](logs/basic-logs-configure.md).
-
-Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features. They can't be used for alerting, their retention is set to eight days, they support a limited version of the query language, and there's a cost for querying them. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.
-
-The decision whether to configure a table for Basic Logs is based on the following criteria:
--- The table currently supports Basic Logs.-- You don't require more than eight days of data retention for the table.-- You only require basic queries of the data using a limited version of the query language.-- The cost savings for data ingestion over a month exceed the expected cost for any expected queries-
-See [Query Basic Logs in Azure Monitor](.//logs/basic-logs-query.md) for information on query limitations. See [Configure Basic Logs in Azure Monitor](logs/basic-logs-configure.md) for more information about Basic Logs.
-
-## Reduce the amount of data collected
-
-The most straightforward strategy to reduce your costs for data ingestion and retention is to reduce the amount of data that you collect. Your goal should be to collect the minimal amount of data to meet your monitoring requirements. You might find that you're collecting data that's not being used for alerting or analysis. If so, you have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you don't need.
-
-The configuration change varies depending on the data source. The following sections provide guidance for configuring common data sources to reduce the data they send to the workspace.
-
-## Virtual machines
-
-Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. The following table lists the most common data collected from virtual machines and strategies for limiting them for each of the Azure Monitor agents.
+> [!NOTE]
+> This article describes [Cost optimization](/azure/architecture/framework/cost/) for Azure Monitor as part of the [Azure Well-Architected Framework](/azure/architecture/framework/). This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
+>
+> - Reliability
+> - Security
+> - Cost Optimization
+> - Operational Excellence
+> - Performance Efficiency
-| Source | Strategy | Log Analytics agent | Azure Monitor agent |
-|:|:|:|:|
-| Event logs | Collect only required event logs and levels. For example, *Information*-level events are rarely used and should typically not be collected. For the Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific event IDs. |
-| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [Syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific events. |
-| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For the Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific counters. |
+## Design considerations
-### Use transformations to filter events
+Azure Monitor includes the following design considerations related to cost:
-The bulk of data collection from virtual machines will be from Windows or Syslog events. While you can provide more filtering with the Azure Monitor agent, you still might be collecting records that provide little value. Use [transformations](essentials//data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
+- Log Analytics workspace architecture<br><br>You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. This may include trade-offs between functionality and cost depending on your particular priorities.<br><br>See [Design a Log Analytics workspace architecture](logs/workspace-design.md) for a list of criteria to consider when designing a workspace architecture.
-See the following section on filtering data with transformations for a summary on where to implement filtering and transformations for different data sources.
-### Multi-homing agents
+## Checklist
-You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces because you might be incurring charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace.
+**Log Analytics workspace configuration**
-You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. Continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data.
+> [!div class="checklist"]
+> - Configure pricing tier or dedicated cluster to optimize your cost depending on your usage.
+> - Configure tables used for debugging, troubleshooting, and auditing as Basic Logs.
+> - Configure data retention and archiving.
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data for the same machine.
+**Data collection**
-## Application Insights
+> [!div class="checklist"]
+> - Use diagnostic settings and transformations to collect only critical resource log data from Azure resources.
+> - Configure VM agents to collect only critical events.
+> - Use transformations to filter resource logs.
+> - Ensure that VMs aren't sending data to multiple workspaces.
-There are multiple methods that you can use to limit the amount of data collected by Application Insights:
+**Monitor usage**
-* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics.
-* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too.
-* **Disable unneeded modules**: [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.
-* **Pre-aggregate metrics**: If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs).
-* **Limit the use of custom metrics**: The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics.
-* **Ensure use of updated SDKs**: Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which were collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected).
+> [!div class="checklist"]
+> - Send alert when data collection is high.
+> - Analyze your collected data at regular intervals to determine if there are opportunities to further reduce your cost.
+> - Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget.
-## Resource logs
-The data volume for [resource logs](essentials/resource-logs.md) varies significantly between services, so you should only collect the categories that are required. You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
+## Configuration recommendations
-Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. In this case, use [transformations](essentials/data-collection-transformations.md) on the workspace to filter logs that you don't require. You can also filter out the value of certain columns that you don't require to save additional cost.
-## Other insights and services
-See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage:
+### Log Analytics workspace configuration
+You may be able to significantly reduce your costs by optimizing the configuration of your Log Analytics workspaces. You can commit to a minimum amount of data collection in exchange for a reduced rate, and optimize your costs for the functionality and retention of data in particular tables.
-- **Container insights**: [Understand monitoring costs for Container insights](containers/container-insights-cost.md#control-ingestion-to-reduce-cost)
-- **Microsoft Sentinel**: [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md)
-- **Defender for Cloud**: [Setting the security event option at the workspace level](../defender-for-cloud/working-with-log-analytics-agent.md#data-collection-tier)
+| Recommendation | Description |
+|:|:|
+| Configure pricing tier or dedicated cluster for your Log Analytics workspaces. | By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect a sufficient amount of data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers) or [dedicated cluster](logs/logs-dedicated-clusters.md), which allows you to commit to a daily minimum of data collected in exchange for a lower rate.<br><br>See [Azure Monitor Logs cost calculations and options](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. |
+| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.<br><br>See [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more information about Basic Logs and [Query Basic Logs in Azure Monitor (preview)](logs/basic-logs-query.md) for details on query limitations. |
+| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost.<br><br>See [Configure data retention and archive policies in Azure Monitor Logs](logs/data-retention-archive.md) for details on how to configure your workspace and how to work with archived data. |
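For example, a log query along these lines (a sketch that assumes the standard `Usage` table, where `Quantity` is reported in MB) can help you judge which tables are the best candidates for a commitment tier, Basic Logs, or shorter retention:

```kusto
// Billable data volume by data type over the last 30 days, in GB.
// Sketch only: assumes the standard Usage table, with Quantity reported in MB.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by DataType
| sort by IngestedGB desc
```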
-## Filter data with transformations (preview)
-You can use [data collection rule transformations in Azure Monitor](essentials//data-collection-transformations.md) to filter incoming data to reduce costs for data ingestion and retention. In addition to filtering records from the incoming data, you can filter out columns in the data, reducing its billable size as described in [Data size calculation](logs/cost-logs.md#data-size-calculation).
-Use ingestion-time transformations on the workspace to further filter data for workflows where you don't have granular control. For example, you can select categories in a [diagnostic setting](essentials/diagnostic-settings.md) to collect resource logs for a particular service, but that category might also send records that you don't need. Create a transformation for the table that service uses to filter out records you don't want.
+### Data collection
+Since Azure Monitor charges for the collection of data, your goal should be to collect the minimal amount of data required to meet your monitoring requirements. You have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you're not using for alerting or analysis.
-You can also use ingestion-time transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
+| Recommendation | Description |
+|:|:|
+| **Azure resources** ||
+| Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data. See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
+| **Virtual machines** ||
+| Configure VM agents to collect only critical events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-workloads.md#controlling-costs) for guidance on data to collect and strategies for using XPath queries and transformations to limit it.|
+| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. |
+| **Container insights** | |
+| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data you don't need. |
+| Limit Prometheus metrics collected. | If you configured Prometheus metric scraping, follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. |
+| Configure Basic Logs. | Convert your schema to ContainerLogV2, which is compatible with Basic Logs and can provide significant cost savings, as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
+| **Application Insights** ||
+| Use sampling to tune the amount of data collected. | [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. |
+| Limit the number of Ajax calls. | [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too. |
+| Disable unneeded modules. | [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
+| Pre-aggregate metrics from any calls to TrackMetric. | If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs). |
+| Limit the use of custom metrics. | The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics. |
+| Ensure use of updated SDKs. | Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which were collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected). |
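For the duplicate-data recommendation above, a query along the following lines can highlight machines that report through more than one agent. This is a sketch that assumes the `Heartbeat` table's `Category` column distinguishes agent types; verify the values in your own workspace.

```kusto
// Sketch: computers whose heartbeats arrive from more than one agent type,
// which can indicate the Log Analytics agent and Azure Monitor agent are collecting the same data.
Heartbeat
| where TimeGenerated > ago(1d)
| summarize AgentTypes = make_set(Category) by Computer
| where array_length(AgentTypes) > 1
```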
-The following table shows methods to apply transformations to different workflows.
-> [!NOTE]
-> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of *_CL* in their name.
-
-| Source | Target | Description | Filtering method |
-|:|:|:|:|
-| Azure Monitor agent | Azure tables | Collect data from standard sources such as Windows events, Syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use XPath in the data collection rule (DCR) to collect specific data from client machines. Ingestion-time transformations in the agent DCR aren't yet supported. |
-| Azure Monitor agent | Custom tables | Collecting data outside of standard data sources is not yet supported. | |
-| Log Analytics agent | Azure tables | Collect data from standard sources such as Windows events, Syslog, and performance data and send it to Azure tables in the Log Analytics workspace. | Configure data collection on the workspace. Optionally, create ingestion-time transformation in the workspace DCR to filter records and columns. |
-| Log Analytics agent | Custom tables | Configure [custom logs](agents/data-sources-custom-logs.md) on the workspace to collect file-based text logs. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new logs ingestion API. |
-| Data Collector API | Custom tables | Use the [Data Collector API](logs/data-collector-api.md) to send data to custom tables in the workspace by using the REST API. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new Logs ingestion API. |
-| Logs ingestion API | Custom tables<br>Azure tables | Use the [Logs ingestion API](logs/logs-ingestion-api-overview.md) to send data to the workspace using REST API. | Configure ingestion-time transformation in the DCR for the custom log. |
-| Other data sources | Azure tables | Includes resource logs from diagnostic settings and other Azure Monitor features such as Application insights, Container insights, and VM insights. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. |
## Monitor workspace and analyze usage
-After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to reduce your usage. For example, you might want to further filter out collected data that hasn't proven to be useful.
-
-### Set a daily cap
-
-A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. A daily cap shouldn't be used as a method to reduce costs but as a preventative measure to ensure that you don't exceed a particular budget. Daily caps are typically used by organizations that are particularly cost conscious.
+After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to further filter out collected data that hasn't proven to be useful.
-When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Instead of relying on the daily cap alone, you can configure an alert rule to notify you when data collection reaches some level before the daily cap. Notification allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources.
-See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for information on how the daily cap works and how to configure one.
-
-### Send alert when data collection is high
-
-To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period.
-
-The following example is a [log alert rule](alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule.
-
-| Setting | Value |
+| Recommendation | Description |
|:|:|
-| **Scope** | |
-| Target scope | Select your Log Analytics workspace. |
-| **Condition** | |
-| Query | `Usage \| where IsBillable \| summarize DataGB = sum(Quantity / 1000.)` |
-| Measurement | Measure: *DataGB*<br>Aggregation type: Total<br>Aggregation granularity: 1 day |
-| Alert Logic | Operator: Greater than<br>Threshold value: 50<br>Frequency of evaluation: 1 day |
-| Actions | Select or add an [action group](alerts/action-groups.md) to notify you when the threshold is exceeded. |
-| **Details** | |
-| Severity| Warning |
-| Alert rule name | Billable data volume greater than 50 GB in 24 hours. |
-
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for information on using log queries like the one used here to analyze billable usage in your workspace.
+| Send alert when data collection is high. | To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period. See [Send alert when data collection is high](logs/analyze-usage.md#send-alert-when-data-collection-is-high) for details. |
+| Analyze collected data. | Periodically analyze data collection using the methods in [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines, or when you onboard a new service. |
+| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. A daily cap shouldn't be used as a method to reduce costs, as described in [When to use a daily cap](logs/daily-cap.md). See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for information on how the daily cap works and how to configure one. |
-## Analyze your collected data
-When you detect an increase in data collection, you need methods to analyze your collected data to identify the source of the increase. You should also periodically analyze data collection to determine if there's additional configuration that can decrease your usage further. This practice is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service.
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for different methods to analyze your collected data and billable usage. This article includes various log queries that will help you identify the source of any data increases and to understand your basic usage patterns.
-## Next steps
+## Next step
-- See [Azure Monitor cost and usage](usage-estimated-costs.md) for a description of Azure Monitor and how to view and analyze your monthly bill.
-- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
-- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for information on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected.
-- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that can be ingested in a workspace.
+- [Get best practices for a complete deployment of Azure Monitor](best-practices.md).
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
This article provides pricing guidance for Container insights to help you unders
* Measure costs after Container insights has been enabled for one or more containers. * Control the collection of data and make cost reductions.
-Azure Monitor Logs collects, indexes, and stores data generated by your Kubernetes cluster.
-The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected. It's also dependent on the plan selected and how long you chose to store data generated from your clusters.
+
+The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected. It's also dependent on the plan selected and how long you choose to store data generated from your clusters.
>[!NOTE] >All sizes and pricing are for sample estimation only. See the Azure Monitor [pricing](https://azure.microsoft.com/pricing/details/monitor/) page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region.
The following types of data collected from a Kubernetes cluster with Container i
- Active scraping of Prometheus metrics - [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
-## What's collected from Kubernetes clusters?
-
-Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. All the metrics listed here are collected every minute.
-
-### Node metrics collected
-
-The 24 metrics per node that are collected:
-- cpuUsageNanoCores
-- cpuCapacityNanoCores
-- cpuAllocatableNanoCores
-- memoryRssBytes
-- memoryWorkingSetBytes
-- memoryCapacityBytes
-- memoryAllocatableBytes
-- restartTimeEpoch
-- used (disk)
-- free (disk)
-- used_percent (disk)
-- io_time (diskio)
-- writes (diskio)
-- reads (diskio)
-- write_bytes (diskio)
-- write_time (diskio)
-- iops_in_progress (diskio)
-- read_bytes (diskio)
-- read_time (diskio)
-- err_in (net)
-- err_out (net)
-- bytes_recv (net)
-- bytes_sent (net)
-- Kubelet_docker_operations (kubelet)
-
-### Container metrics
-
-The eight metrics per container that are collected:
-- cpuUsageNanoCores
-- cpuRequestNanoCores
-- cpuLimitNanoCores
-- memoryRssBytes
-- memoryWorkingSetBytes
-- memoryRequestBytes
-- memoryLimitBytes
-- restartTimeEpoch
-
-### Cluster inventory
-
-The cluster inventory data that's collected by default:
-- KubePodInventory: 1 per pod per minute
-- KubeNodeInventory: 1 per node per minute
-- Kube
-- ContainerInventory: 1 per container per minute
-
-## Estimate costs to monitor your AKS cluster
+## Estimating costs to monitor your AKS cluster
The following estimation is based on an AKS cluster with the following sizing example. The estimate applies only for metrics and inventory data collected. For container logs like stdout, stderr, and environmental variables, the estimate varies based on the log sizes generated by the workload. They're excluded from our estimation.
If you use [Prometheus metric scraping](container-insights-prometheus.md), make
### Configure Basic Logs
-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/basic-logs-configure.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
+## Data collected from Kubernetes clusters
+
+### Metric data
+Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. All metrics in the following table are collected every minute.
++
+| Type | Metrics |
+|:|:|
+| Node metrics | `cpuUsageNanoCores`<br>`cpuCapacityNanoCores`<br>`cpuAllocatableNanoCores`<br>`memoryRssBytes`<br>`memoryWorkingSetBytes`<br>`memoryCapacityBytes`<br>`memoryAllocatableBytes`<br>`restartTimeEpoch`<br>`used` (disk)<br>`free` (disk)<br>`used_percent` (disk)<br>`io_time` (diskio)<br>`writes` (diskio)<br>`reads` (diskio)<br>`write_bytes` (diskio)<br>`write_time` (diskio)<br>`iops_in_progress` (diskio)<br>`read_bytes` (diskio)<br>`read_time` (diskio)<br>`err_in` (net)<br>`err_out` (net)<br>`bytes_recv` (net)<br>`bytes_sent` (net)<br>`Kubelet_docker_operations` (kubelet)
+| Container metrics | `cpuUsageNanoCores`<br>`cpuRequestNanoCores`<br>`cpuLimitNanoCores`<br>`memoryRssBytes`<br>`memoryWorkingSetBytes`<br>`memoryRequestBytes`<br>`memoryLimitBytes`<br>`restartTimeEpoch`
+
+### Cluster inventory
+
+The following list is the cluster inventory data collected by default:
+
+- KubePodInventory – 1 per pod per minute
+- KubeNodeInventory – 1 per node per minute
+- KubeServices – 1 per service per minute
+- ContainerInventory – 1 per container per minute
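To see where container log volume is coming from before you adjust collection, a query such as the following sketch can break down billed log size by namespace. It assumes `ContainerLogV2` is populated and still on the Analytics plan, since `summarize` isn't available for tables configured as Basic Logs.

```kusto
// Sketch: approximate billed container log volume per Kubernetes namespace over the last day, in GB.
ContainerLogV2
| where TimeGenerated > ago(1d)
| summarize BilledGB = sum(_BilledSize) / 1e9 by PodNamespace
| sort by BilledGB desc
```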
## Next steps To help you understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
ms.reviwer: nikeist
# Data collection transformations in Azure Monitor (preview) Transformations in Azure Monitor allow you to filter or modify incoming data before it's sent to a Log Analytics workspace. This article provides a basic description of transformations and how they are implemented. It provides links to other content for actually creating a transformation.
-## When to use transformations
-Transformations are useful for a variety of scenarios, including those described below.
+## Why to use transformations
+The following table describes the different goals that transformations can be used to achieve.
-### Reduce data costs
-Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.
-- **Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.
-
-- **Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.
-
-- **Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original.
-
-
-### Remove sensitive data
-You may have a data source that sends information you don't want stored for privacy or compliancy reasons.
--- **Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.
-
-- **Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number.--
-### Enrich data with additional or calculated information
-Use a transformation to add information to data that provides business context or simplifies querying the data later.
+| Category | Details |
+|:|:|
+| Remove sensitive data | You may have a data source that sends information you don't want stored for privacy or compliancy reasons.<br><br>**Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.<br><br>**Obfuscate sensitive information**. For example, you might replace digits with a common character in an IP address or telephone number. |
+| Enrich data with additional or calculated information | Use a transformation to add information to data that provides business context or simplifies querying the data later.<br><br>**Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.<br><br>**Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns. |
+| Reduce data costs | Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.<br><br>**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.<br><br>**Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.<br><br>**Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original. |
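A transformation is written as a KQL statement that runs against a virtual table named `source`. The following sketch illustrates the first two goals; the column names (`ClientIp_s`, `Location_s`) are hypothetical stand-ins for whatever columns your table actually contains, and not every KQL function is available in transformations, so check the supported KQL features before reusing it.

```kusto
// Sketch of a transformation: obfuscate an IP address and add a calculated business column.
// "source" represents the incoming data; the column names are hypothetical.
source
| extend ClientIp_s = strcat(tostring(split(ClientIp_s, ".")[0]), ".x.x.x")
| extend Division = iff(Location_s == "Redmond" or Location_s == "Seattle", "US-West", "Other")
```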
-- **Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.
-- **Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns.
## Supported tables Transformations may be applied to the following tables in a Log Analytics workspace.
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The following table provides unique requirements for each destination including
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+## Controlling costs
+
+There is a cost for collecting data in a Log Analytics workspace, so you should only collect the categories you require for each service. The data volume for resource logs varies significantly between services.
+
+You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries.
+
+Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. Or you may want to remove unneeded columns from the data. In these cases, use [transformations](data-collection-transformations.md) on the workspace to filter logs that you don't require.
++
+You can also use transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
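For example, a workspace transformation along these lines keeps only the records you care about and drops a large column before the data is stored. This is a sketch: the `Level` value and the `RequestPayload_s` column are placeholders for whatever your resource log table actually contains.

```kusto
// Sketch of a workspace transformation for a resource log table.
// Keeps only error-level records and removes a verbose column; names are illustrative placeholders.
source
| where Level == "Error"
| project-away RequestPayload_s
```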
++ ## Create diagnostic settings You can create and edit diagnostic settings by using multiple methods.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 11/17/2022 Last updated : 11/21/2022
This latest update adds a new column and reorders the metrics to be alphabetical
> [!NOTE] > NumActiveWorkers is supported only if YARN is installed, and the Resource Manager is running.
+>
+> Alternatively, customers can use Log Analytics for Kafka to gain these insights and can write custom queries for the best monitoring experience. For more information, see [Use Azure Monitor logs to monitor HDInsight clusters](https://learn.microsoft.com/azure/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial).
## Microsoft.HealthcareApis/services
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/dns-analytics.md
description: Set up and use the DNS Analytics solution in Azure Monitor to gathe
Previously updated : 03/20/2018 Last updated : 11/23/2022
DNS Analytics helps you to:
The solution collects, analyzes, and correlates Windows DNS analytic and audit logs and other related data from your DNS servers.
+> [!IMPORTANT]
+> The Log Analytics agent will be **retired on 31 August 2024**. If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md).
+ ## Connected sources The following table describes the connected sources that are supported by this solution:
To provide feedback, visit the [Log Analytics UserVoice page](https://aka.ms/dns
## Next steps
-[Query logs](../logs/log-query-overview.md) to view detailed DNS log records.
+[Query logs](../logs/log-query-overview.md) to view detailed DNS log records.
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
Last updated 08/25/2022
# Analyze usage in a Log Analytics workspace Azure Monitor costs can vary significantly based on the volume of data being collected in your Log Analytics workspace. This volume is affected by the set of solutions using the workspace and the amount of data that each solution collects. This article provides guidance on analyzing your collected data to assist in controlling your data ingestion costs. It helps you determine the cause of higher-than-expected usage. It also helps you to predict your costs as you monitor more resources and configure different Azure Monitor features. + ## Causes for higher-than-expected usage Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the:
Each Log Analytics workspace is charged as a separate service and contributes to
An unexpected increase in any of these factors can result in increased charges for data retention. The rest of this article provides methods for detecting such a situation and then analyzing collected data to identify and mitigate the source of the increased usage.
+## Send alert when data collection is high
+
+To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period.
+
+The following example is a [log alert rule](../alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule.
+
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your Log Analytics workspace. |
+| **Condition** | |
+| Query | `Usage \| where IsBillable \| summarize DataGB = sum(Quantity / 1000.)` |
+| Measurement | Measure: *DataGB*<br>Aggregation type: Total<br>Aggregation granularity: 1 day |
+| Alert Logic | Operator: Greater than<br>Threshold value: 50<br>Frequency of evaluation: 1 day |
+| Actions | Select or add an [action group](../alerts/action-groups.md) to notify you when the threshold is exceeded. |
+| **Details** | |
+| Severity| Warning |
+| Alert rule name | Billable data volume greater than 50 GB in 24 hours. |
+ ## Usage analysis in Azure Monitor Start your analysis with existing tools in Azure Monitor. These tools require no configuration and can often provide the information you need with minimal effort. If you need deeper analysis into your collected data than existing Azure Monitor features, use any of the following [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md).
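For example, the following query (a sketch built on the standard `_IsBillable` and `_BilledSize` columns) can help identify which computers are responsible for an increase in billable data:

```kusto
// Sketch: top 10 computers by billable data over the last day, in GB.
find where TimeGenerated > ago(1d) project Computer, _BilledSize, _IsBillable
| where _IsBillable == true
| summarize BilledGB = sum(_BilledSize) / 1e9 by Computer
| top 10 by BilledGB
```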
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
The following table summarizes the two plans.
> [!NOTE] > The Basic log data plan isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
+## When should I use Basic Logs?
+The decision whether to configure a table for Basic Logs is based on the following criteria:
+
+- The table currently [supports Basic Logs](#which-tables-support-basic-logs).
+- You don't require more than eight days of data retention for the table.
+- You only require basic queries of the data using a limited version of the query language.
+- The cost savings for data ingestion over a month exceed the expected cost of any queries.
+ ## Which tables support Basic Logs? By default, all tables in your Log Analytics workspace are Analytics tables, and they're available for query and alerts. You can currently configure the following tables for Basic Logs:
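Queries against Basic Logs tables are limited to simple, single-table operations. A typical troubleshooting query against `ContainerLogV2` (assuming you've configured that table for Basic Logs; the namespace value is hypothetical) might look like this sketch:

```kusto
// Sketch of a simple troubleshooting query against a table configured for Basic Logs.
ContainerLogV2
| where TimeGenerated > ago(8h)
| where PodNamespace == "orders"          // hypothetical namespace
| where LogMessage has "error"
| project TimeGenerated, PodName, ContainerName, LogMessage
```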
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
ms.reviwer: dalek git
The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor don't have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs. ++ ## Pricing model The default pricing for Log Analytics is a pay-as-you-go model that's based on ingested data volume and data retention. Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. [Pricing for Log Analytics](https://azure.microsoft.com/pricing/details/monitor/) is set regionally. The amount of data ingestion can be considerable, depending on:
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
A daily cap on a Log Analytics workspace allows you to avoid unexpected increase
> [!IMPORTANT] > You should use care when setting a daily cap because when data collection stops, your ability to observe and receive alerts on the health conditions of your resources will be impacted. It can also impact other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace. Your goal shouldn't be to regularly hit the daily limit but rather use it as an infrequent method to avoid unplanned charges resulting from an unexpected increase in the volume of data collected.
-
+>
+> For strategies to reduce your Azure Monitor costs, see [Cost optimization and Azure Monitor](/azure/azure-monitor/best-practices-cost).
## How the daily cap works Each workspace has a daily cap that defines its own data volume limit. When the daily cap is reached, a warning banner appears across the top of the page for the selected Log Analytics workspace in the Azure portal, and an operation event is sent to the *Operation* table under the **LogManagement** category. You can optionally create an alert rule to send an alert when this event is created.
Data collection resumes at the reset time which is a different hour of the day f
> [!NOTE] > The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. See [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) for a query that is helpful in studying the daily cap behavior.
+## When to use a daily cap
+Daily caps are typically used by organizations that are particularly cost conscious. They shouldn't be used as a method to reduce costs, but rather as a preventative measure to ensure that you don't exceed a particular budget.
+
+When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Instead of relying on the daily cap alone, you can [create an alert rule](#alert-when-daily-cap-is-reached) to notify you when data collection reaches some level before the daily cap. Notification allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources.
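A sketch of the query behind such an alert rule, using the standard `Usage` table, might look like the following. It approximates "today" with UTC days, while the cap itself resets at a workspace-specific hour, so treat the threshold as an early warning rather than an exact measure.

```kusto
// Sketch: billable data ingested since the start of the current UTC day, in GB.
// Alert when this approaches a chosen fraction of your daily cap (for example, 80%).
Usage
| where TimeGenerated > startofday(now())
| where IsBillable == true
| summarize IngestedTodayGB = sum(Quantity) / 1000
```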
+ ## Application Insights You shouldn't create a daily cap for workspace-based Application Insights resources but instead create a daily cap for their workspace. You do need to create a separate daily cap for any classic Application Insights resources since their data doesn't reside in a Log Analytics workspace.
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
The following properties are reserved and shouldn't be used in a custom record t
- TimeGenerated - RawData + ## Data limits The data posted to the Azure Monitor Data collection API is subject to certain constraints:
The complete set of status codes that the service might return is listed in the
To query data submitted by the Azure Monitor HTTP Data Collector API, search for records whose **Type** is equal to the **LogType** value that you specified and appended with **_CL**. For example, if you used **MyCustomLog**, you would return all records with `MyCustomLog_CL`. ## Sample requests
-In the next sections, you'll find samples that demonstrate how to submit data to the Azure Monitor HTTP Data Collector API by using various programming languages.
+This section includes samples that demonstrate how to submit data to the Azure Monitor HTTP Data Collector API by using various programming languages.
For each sample, set the variables for the authorization header by doing the following:
For each sample, set the variables for the authorization header by doing the fol
Alternatively, you can change the variables for the log type and JSON data.
-### PowerShell sample
+### [PowerShell](#tab/powershell)
+ ```powershell # Replace with your Workspace ID $CustomerId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Function Post-LogAnalyticsData($customerId, $sharedKey, $body, $logType)
Post-LogAnalyticsData -customerId $customerId -sharedKey $sharedKey -body ([System.Text.Encoding]::UTF8.GetBytes($json)) -logType $logType ```
-### C# sample
+### [C#](#tab/c-sharp)
```csharp using System; using System.Net;
namespace OIAPIExample
```
-### Python sample
+### [Python](#tab/python)
>[!NOTE] > If using Python 2, you may need to change the line:
post_data(customer_id, shared_key, body, log_type)
```
-### Java sample
+### [Java](#tab/java)
```java
public class ApiExample {
``` + ## Alternatives and considerations
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
You can modify the target table and workspace by modifying the DCR without any c
## Supported tables
-The following tables are supported.
- ### Custom tables
-The Logs Ingestion API can send data to any custom table that you create and to certain built-in tables in your Log Analytics workspace. The target table must exist before you can send data to it.
+The Logs Ingestion API can send data to any custom table that you create and to certain built-in tables in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix.
### Built-in tables
-The Logs Ingestion API can send data to the following built-in tables. Other tables might be added to this list as support for them is implemented:
+The Logs Ingestion API can send data to the following built-in tables. Other tables may be added to this list as support for them is implemented. Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table don't need this suffix. Column names can consist of alphanumeric characters and the characters `_` and `-`, and they must start with a letter.
- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) - [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent) - [Syslog](/azure/azure-monitor/reference/tables/syslog) - [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)
-### Table limits
-
-Tables have the following limitations:
-* Custom tables must have the `_CL` suffix.
-* Column names can consist of alphanumeric characters and the characters `_` and `-`. They must start with a letter.
-* Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table don't need this suffix.
## Authentication
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Last updated 05/05/2022
This article describes the different ways that Azure Monitor charges for usage. It also explains how to evaluate charges on your Azure bill and how to estimate charges to monitor your entire environment. + ## Pricing model Azure Monitor uses consumption-based pricing, which is also known as pay-as-you-go pricing. With this billing model, you only pay for what you use. Features of Azure Monitor that are enabled by default don't incur any charge. These features include collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
Use the following basic guidance for common resources:
- **Virtual machines**: With typical monitoring enabled, a virtual machine generates from 1 GB to 3 GB of data per month. This range is highly dependent on the configuration of your agents.
- **Application Insights**: For different methods to estimate data from your applications, see the following section.
-- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimate-costs-to-monitor-your-aks-cluster)
+- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster).
The [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) includes data volume estimation calculators for these three cases.
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
For a list of the data sources available and details on how to configure them, s
> [!IMPORTANT] > Be careful to collect only the data that you require. Costs are associated with any data collected in your workspace. The data that you collect should only support particular analysis and alerting scenarios.
+## Controlling costs
+Be careful to collect only the data that you require. Costs are associated with any data collected in your workspace. The data that you collect should only support particular analysis and alerting scenarios.
+++
+Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. Since your Azure Monitor cost is dependent on how much data you collect, you want to ensure that you're not collecting any more data than you require to meet your monitoring requirements.
+
+Each data source that you collect may have a different method for filtering out unwanted data. You can also use transformations to implement more granular filtering and to remove data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
+
+The following table lists the different data sources on a VM and how to filter the data they collect.
+
+> [!NOTE]
+> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of _CL in their name.
+
+| Target | Description | Filtering method |
+|:|:|:|
+| Azure tables | [Collect data from standard sources](../agents/data-collection-rule-azure-monitor-agent.md) such as Windows events, Syslog, and performance data and send it to Azure tables in a Log Analytics workspace. | Use [XPath in the data collection rule (DCR)](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to collect specific data from client machines.<br><br>Use transformations to further filter specific events or remove unnecessary columns. |
+| Custom tables | [Create a data collection rule](../agents/data-collection-text-log.md) to collect file-based text logs from the agent. | Add a [transformation](../essentials/data-collection-transformations.md) to the data collection rule. |
++ ## Convert management pack logic A significant number of customers who implement Azure Monitor currently monitor their virtual machine workloads by using management packs in System Center Operations Manager. There are no migration tools to convert assets from Operations Manager to Azure Monitor because the platforms are fundamentally different. Your migration instead constitutes a standard Azure Monitor implementation while you continue to use Operations Manager. As you customize Azure Monitor to meet your requirements for different applications and components, and as it gains more features, you can start to retire different management packs and agents in Operations Manager.
azure-netapp-files Azure Netapp Files Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-sdk-cli.md
The table below lists the supported SDKs. You can find details about the suppor
||--| | .NET | [Azure/azure-sdk-for-net](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/netapp) | | Python | [Azure/azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/netapp) |
-| Go | [Azure/azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go/tree/main/services/netapp) |
+| Go | [Azure/azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/netapp) |
| Java | [Azure/azure-sdk-for-java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/netapp) | | JavaScript | [Azure/azure-sdk-for-js](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/netapp/arm-netapp) | | Ruby | [Azure/azure-sdk-for-ruby](https://github.com/Azure/azure-sdk-for-ruby/tree/master/management/azure_mgmt_netapp) |
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 11/10/2022 Last updated : 11/23/2022 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620) * [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161) * [SAP HANA Disaster Recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_Disaster_Recovery_with_Azure_NetApp_Files.pdf)
-* [SAP HANA backup and recovery on Azure NetApp Files with SnapCenter Service](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_backup_and_recovery_on_Azure_NetApp_Files_with_SnapCenter_Service.pdf)
### SAP AnyDB
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 11/22/2022 Last updated : 11/23/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* Alternatively, an AD domain user account with `msDS-SupportedEncryptionTypes` write permission on the AD connection admin account can also be used to set the Kerberos encryption type property on the AD connection admin account. >[!NOTE]
- >It's _not_ recommended nor required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the AD admin account.
+ >It's _not_ recommended or required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the Azure NetApp Files AD admin account.
If you set both AES-128 and AES-256 Kerberos encryption on the admin account of the AD connection, the highest level of encryption supported by your AD DS will be used.
azure-signalr Signalr Tutorial Build Blazor Server Chat App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built i
dotnet new blazorserver -o BlazorChat ```
-1. Add a new C# file called `BlazorChatSampleHub.cs` and create a new class `BlazorSampleHub` deriving from the `Hub` class for the chat app. For more information on creating hubs, see [Create and Use Hubs](/aspnet/core/signalr/hubs#create-and-use-hubs).
+1. Add a new C# file called `BlazorChatSampleHub.cs` and create a new class `BlazorChatSampleHub` deriving from the `Hub` class for the chat app. For more information on creating hubs, see [Create and Use Hubs](/aspnet/core/signalr/hubs#create-and-use-hubs).
```cs using System;
azure-sql-edge Resources Partners Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/resources-partners-security.md
This article highlights Microsoft partners companies with security solutions to
| Partner| Description | Links | |--|--|--|
-|![DH2i](media/resources/dh2i-logo.png)|DH2i takes an innovative new approach to networking connectivity by enabling organizations with its Software Defined Perimeter (SDP) Always-Secure and Always-On IT Infrastructure. DxOdyssey for IoT extends this to edge devices, allowing seamless access from the edge devices to the data center and cloud. This SDP module runs on any IoT device in a container on x64 and arm64 architecture. Once enabled, organizations can create secure, private application-level tunnels between devices and hubs without the requirement of a VPN or exposing public, open ports. This SDP module is purpose-built for IoT use cases where edge devices must communicate with any other devices, resources, applications, or clouds. Minimum hardware requirements: Linux x64 and arm64 OS, 1 GB of RAM, 100 Mb of storage| [Website](https://dh2i.com/) [Marketplace](https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) [Documentation](https://dh2i.com/dxodyssey-for-iot/) [Support](https://dh2i.com/support/)
+|![DH2i](media/resources/dh2i-logo.png)|DH2i takes an innovative new approach to networking connectivity by enabling organizations with its Software Defined Perimeter (SDP) Always-Secure and Always-On IT Infrastructure. DxOdyssey for IoT extends this to edge devices, allowing seamless access from the edge devices to the data center and cloud. This SDP module runs on any IoT device in a container on x64 and arm64 architecture. Once enabled, organizations can create secure, private application-level tunnels between devices and hubs without the requirement of a VPN or exposing public, open ports. This SDP module is purpose-built for IoT use cases where edge devices must communicate with any other devices, resources, applications, or clouds. Minimum hardware requirements: Linux x64 and arm64 OS, 1 GB of RAM, 100 Mb of storage| [Website](https://dh2i.com/) [Marketplace](https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) [Documentation](https://dh2i.com/dxodyssey-for-iot/) [Support](https://support.dh2i.com/)
## Next steps
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
When creating a new paid account, you need to connect the Azure Video Indexer ac
> [!NOTE] > It is recommended to use Azure Video Indexer ARM-based accounts.
-* [Create an ARM-based (paid) account in Azure portal](create-account-portal.md). To create an account with an API, see [Accounts](/rest/api/videoindexer/accounts?branch=videoindex)
+* [Create an ARM-based (paid) account in Azure portal](create-account-portal.md). To create an account with an API, see [Accounts](/rest/api/videoindexer/preview/accounts)
> [!TIP] > Make sure you are signed in with the correct domain to the [Azure Video Indexer website](https://www.videoindexer.ai/). For details, see [Switch tenants](switch-tenants-portal.md).
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
In this article, we demonstrate options of connecting your **existing** Azure Vi
Connecting a classic account to be ARM-based triggers a 30-day transition state. In the transition state, an existing account can be accessed by generating an access token using both: * Access token [generated through API Management](https://aka.ms/avam-dev-portal) (classic way)
-* Access token [generated through ARM](/rest/api/videoindexer/generate/access-token)
+* Access token [generated through ARM](/rest/api/videoindexer/preview/generate/access-token)
The transition state moves all account management functionality to be managed by ARM and will be handled by [Azure RBAC][docs-rbac-overview].
Before the end of the 30 days of transition state, you can remove access from us
## After connecting to ARM is complete
-After successfully connecting your account to ARM, it is recommended to make sure your account management APIs are replaced with [Azure Video Indexer REST API](/rest/api/videoindexer/accounts?branch=videoindex).
-As mentioned in the beginning of this article, during the 30 days of the transition state, "[Get-access-token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)" will be supported side by side the ARM-based "[Generate-Access token](/rest/api/videoindexer/generate/access-token)".
+After successfully connecting your account to ARM, it is recommended to make sure your account management APIs are replaced with [Azure Video Indexer REST API](/rest/api/videoindexer/preview/accounts).
+As mentioned in the beginning of this article, during the 30 days of the transition state, "[Get-access-token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)" will be supported side by side with the ARM-based "[Generate-Access token](/rest/api/videoindexer/preview/generate/access-token)".
Make sure to change to the new "Generate-Access token" by updating all your solutions that use the API. APIs to be changed:
APIs to be changed:
- Get accounts – List of all accounts in a region. - Create paid account – would create a classic account.
-For a full description of [Azure Video Indexer REST API](/rest/api/videoindexer/accounts?branch=videoindex) calls and documentation, follow the link.
+For a full description of [Azure Video Indexer REST API](/rest/api/videoindexer/preview/accounts) calls and documentation, follow the link.
For code sample generating an access token through ARM see [C# code sample](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ApiUsage/ArmBased/Program.cs).
azure-video-indexer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md
Previously updated : 02/02/2022 Last updated : 11/23/2022 # Customize a Language model with Azure Video Indexer Azure Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. For the list of languages supported by Azure Video Indexer, see [supported languages](language-support.md).
-Let's take a word that is highly specific, like "Kubernetes" (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure Video Indexer, it is recognized as "communities". You need to train the model to recognize it as "Kubernetes". In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, "container service" is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
+Let's take a word that is highly specific, like *"Kubernetes"* (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure Video Indexer, it is recognized as *"communities"*. You need to train the model to recognize it as *"Kubernetes"*. In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, *"container service"* is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
-You have the option to upload words without context in a list in a text file. This is considered partial adaptation. Alternatively, you can upload text file(s) of documentation or sentences related to your content for better adaptation.
+There are 2 ways to customize a language model:
+
+- **Option 1**: Edit the transcript that was generated by Azure Video Indexer. By editing and correcting the transcript, you are training a language model to provide improved results in the future.
+- **Option 2**: Upload text file(s) to train the language model. The upload file can either contain a list of words as you would like them to appear in the Video Indexer transcript, or the relevant words included naturally in sentences and paragraphs. Because better results are achieved with the latter approach, it's recommended that the upload file contain full sentences or paragraphs related to your content (a small example file follows the note below).
+
+> [!Important]
+> Do not include in the upload file the words or sentences as currently incorrectly transcribed (for example, *"communities"*) as this will negate the intended impact.
+> Only include the words as you would like them to appear (for example, *"Kubernetes"*).
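As an illustration of the second option, a small upload file might contain natural sentences that use the article's example terms; the wording below is purely hypothetical sample content.

```
We deploy the microservices to Kubernetes through the release pipeline.
The Kubernetes cluster hosts the container service that runs the ingestion jobs.
Scaling the container service is handled by Kubernetes based on CPU usage.
```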
You can use the Azure Video Indexer APIs or the website to create and edit custom Language models, as described in the [Next steps](#next-steps) section of this article.
azure-video-indexer Live Stream Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/live-stream-analysis.md
A solution described in this article, allows customers to use Azure Video Indexe
*Figure 1 ΓÇô Sample player displaying the Azure Video Indexer metadata on the live stream*
-The [stream analysis solution](https://aka.ms/livestreamanalysis) at hand, uses Azure Functions and two Logic Apps to process a live program from a live channel in Azure Media Services with Azure Video Indexer and displays the result with Azure Media Player showing the near real-time resulted stream.
+The [stream analysis solution](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/LiveStreamAnalysis/README.MD) at hand, uses Azure Functions and two Logic Apps to process a live program from a live channel in Azure Media Services with Azure Video Indexer and displays the result with Azure Media Player showing the near real-time resulted stream.
At a high level, the solution comprises two main steps. The first step runs every 60 seconds, takes a subclip of the last 60 seconds played, creates an asset from it, and indexes it via Azure Video Indexer. The second step is called once indexing is complete. The insights captured are processed, sent to Azure Cosmos DB, and the indexed subclip is deleted.
The sample player plays the live stream and gets the insights from Azure Cosmos
## Step-by-step guide
-The full code and a step-by-step guide to deploy the results can be found in [GitHub project for Live media analytics with Azure Video Indexer](https://aka.ms/livestreamanalysis).
+The full code and a step-by-step guide to deploy the results can be found in [GitHub project for Live media analytics with Azure Video Indexer](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/LiveStreamAnalysis/README.MD).
## Next steps
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
The following image shows the first flow:
1. <a name="access_token"></a>Generate an access token. > [!NOTE]
- > For details about the ARM API and the request/response examples, see [Generate an Azure Video Indexer access token](/rest/api/videoindexer/generate/access-token?tabs=HTTP).
+ > For details about the ARM API and the request/response examples, see [Generate an Azure Video Indexer access token](/rest/api/videoindexer/preview/generate/access-token).
> > Press **Try it** to get the correct values for your account.
azure-video-indexer Restricted Viewer Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/restricted-viewer-role.md
Users with this role are **unable** to perform the following tasks:
## Using an ARM API
-To generate a Video Indexer restricted viewer access token via API, see [documentation](/rest/api/videoindexer/generate/access-token).
+To generate a Video Indexer restricted viewer access token via API, see [documentation](/rest/api/videoindexer/preview/generate/access-token).
## Restricted Viewer Video Indexer website experience
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
In this scenario, the primary site is an Azure VMware Solution private cloud in
Currently, Zerto disaster recovery on Azure VMware Solution is in an Initial Availability (IA) phase. In the IA phase, you must contact Microsoft to request and qualify for IA support.
-To request IA support for Zerto on Azure VMware Solution, send an email request to zertoonavs@microsoft.com. In the IA phase, Azure VMware Solution only supports manual installation and onboarding of Zerto. However, Microsoft will work with you to ensure that you can manually install Zerto on your private cloud.
+To request IA support for Zerto on Azure VMware Solution, submit this [Install Zerto on AVS form](https://aka.ms/ZertoAVSinstall) with the required information. In the IA phase, Azure VMware Solution only supports manual installation and onboarding of Zerto. However, Microsoft will work with you to ensure that you can manually install Zerto on your private cloud.
> [!NOTE] > As part of the manual installation, Microsoft creates a new vCenter user account for Zerto. This user account is only for Zerto Virtual Manager (ZVM) to perform operations on the Azure VMware Solution vCenter. When installing ZVM on Azure VMware Solution, don't select the "Select to enforce roles and permissions using Zerto vCenter privileges" option.
azure-web-pubsub Reference Odata Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-odata-filter.md
Last updated 11/11/2022
# OData filter syntax in Azure Web PubSub service
-In Azure Web PubSub service, the **filter** parameter specifies inclusion or exclusion criteria for the connections to send messages to. This article describes the OData syntax of **filter** and provides examples.
+Azure Web PubSub's **filter** parameter defines inclusion or exclusion criteria for sending messages to connections. This parameter is used in the [Send to all](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-all), [Send to group](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-group), and [Send to user](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-user) operations.
-The complete syntax is described in the [formal grammar](#formal-grammar).
+This article provides the following resources:
-There is also a browsable [syntax diagram](https://aka.ms/awps/filter-syntax-diagram) that allows you to interactively explore the grammar and the relationships between its rules.
+- A description of the OData syntax of the **filter** parameter with examples.
+- A description of the complete [Extended Backus-Naur Form](#formal-grammar) grammar.
+- A browsable [syntax diagram](https://aka.ms/awps/filter-syntax-diagram) to interactively explore the syntax grammar rules.
## Syntax
-A filter in the OData language is a Boolean expression, which in turn can be one of several types of expression, as shown by the following EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)):
+A filter in the OData language is a boolean expression, which in turn can be one of several types of expression, as shown by the following EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) description:
``` /* Identifiers */
boolean_expression ::= logical_expression
| '(' boolean_expression ')' ```
-An interactive syntax diagram is also available:
+An interactive syntax diagram is available: [OData syntax diagram for Azure Web PubSub service](https://aka.ms/awps/filter-syntax-diagram).
-> [!div class="nextstepaction"]
-> [OData syntax diagram for Azure Web PubSub service](https://aka.ms/awps/filter-syntax-diagram)
-
-> [!NOTE]
-> See [formal grammar section](#formal-grammar) for the complete EBNF.
+For the complete EBNF, see the [formal grammar section](#formal-grammar).
### Identifiers
-The filter syntax is used to filter out the connections matching the filter expression to send messages to.
-
-Azure Web PubSub supports below identifiers:
+Using the filter syntax, you can control sending messages to connections matching the identifier criteria. Azure Web PubSub supports the following identifiers:
-| Identifier | Description | Note | Examples
-| | | -- | --
+| Identifier | Description | Note | Examples |
+| | |--| --
| `userId` | The userId of the connection. | Case insensitive. It can be used in [string operations](#supported-operations). | `userId eq 'user1'` | `connectionId` | The connectionId of the connection. | Case insensitive. It can be used in [string operations](#supported-operations). | `connectionId ne '123'` | `groups` | The collection of groups the connection is currently in. | Case insensitive. It can be used in [collection operations](#supported-operations). | `'group1' in groups`
-Identifiers are used to refer to the property value of a connection. Azure Web PubSub supports 3 identifiers matching the property name of the connection model. and supports identifiers `userId` and `connectionId` in string operations, supports identifier `groups` in [collection operations](#supported-operations). For example, to filter out connections with userId `user1`, we specify the filter as `userId eq 'user1'`. Read through the below sections for more samples using the filter.
+Identifiers refer to the property values of a connection. Azure Web PubSub supports three identifiers matching the property names of the connection model: it supports the identifiers `userId` and `connectionId` in string operations and the identifier `groups` in [collection operations](#supported-operations). For example, to filter connections with userId `user1`, specify the filter as `userId eq 'user1'`. Read through the following sections for more samples using the filter.
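As a minimal illustration of composing such a filter in application code, the following C# sketch builds the expression for a specific user and group, doubling any embedded single quotes per the usual OData string-literal escaping; the resulting string is what you'd supply as the **filter** parameter of the send operations. The helper name is hypothetical.

```cs
using System;

// Hypothetical helper: quote an OData string literal, doubling embedded single quotes.
static string ODataLiteral(string value) => "'" + value.Replace("'", "''") + "'";

string userId = "user1";
string group = "group1";

// Matches connections for the given user that are currently in the given group.
string filter = $"userId eq {ODataLiteral(userId)} and {ODataLiteral(group)} in groups";

Console.WriteLine(filter); // userId eq 'user1' and 'group1' in groups
```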
### Boolean expressions
-The expression for a filter is a boolean expression. When sending messages to connections, Azure Web PubSub sends messages to connections with filter expression evaluated to `true`.
+The expression for a filter is a boolean expression. Azure Web PubSub sends messages to connections with filter expressions evaluated to `true`.
-The types of Boolean expressions include:
+The types of boolean expressions include:
-- Logical expressions that combine other Boolean expressions using the operators `and`, `or`, and `not`.
+- Logical expressions that combine other boolean expressions using the operators `and`, `or`, and `not`.
- Comparison expressions, which compare fields or range variables to constant values using the operators `eq`, `ne`, `gt`, `lt`, `ge`, and `le`.-- The Boolean literals `true` and `false`. These constants can be useful sometimes when programmatically generating filters, but otherwise don't tend to be used in practice.-- Boolean expressions in parentheses. Using parentheses can help to explicitly determine the order of operations in a filter. For more information on the default precedence of the OData operators, see [operator precedence section](#operator-precedence).
+- The boolean literals `true` and `false`. These constants can be useful sometimes when programmatically generating filters, but otherwise don't tend to be used in practice.
+- Boolean expressions in parentheses. Using parentheses helps to explicitly determine the order of operations in a filter. For more information on the default precedence of the OData operators, see [operator precedence section](#operator-precedence).
### Supported operations+
+The filter syntax supports the following operations:
+ | Operator | Description | Example | | | | **Logical Operators**
The types of Boolean expressions include:
| `string substring(string p, int startIndex)`,</br>`string substring(string p, int startIndex, int length)` | Substring of the string | `substring(userId,5,2) eq 'ab'` can match connections for user `user-ab-de` | `bool endswith(string p0, string p1)` | Check if `p0` ends with `p1` | `endswith(userId,'de')` can match connections for user `user-ab-de` | `bool startswith(string p0, string p1)` | Check if `p0` starts with `p1` | `startswith(userId,'user')` can match connections for user `user-ab-de`
-| `int indexof(string p0, string p1)` | Get the index of `p1` in `p0`. Returns `-1` if `p0` does not contain `p1`. | `indexof(userId,'-ab-') ge 0` can match connections for user `user-ab-de`
+| `int indexof(string p0, string p1)` | Get the index of `p1` in `p0`. Returns `-1` if `p0` doesn't contain `p1`. | `indexof(userId,'-ab-') ge 0` can match connections for user `user-ab-de`
| `int length(string p)` | Get the length of the input string | `length(userId) gt 1` can match connections for user `user-ab-de` | **Collection Functions**
-| `int length(collection p)` | Get the length of the collection | `length(groups) gt 1` can match connections in 2 groups
+| `int length(collection p)` | Get the length of the collection | `length(groups) gt 1` can match connections in two groups
### Operator precedence
-If you write a filter expression with no parentheses around its sub-expressions, Azure Web PubSub service will evaluate it according to a set of operator precedence rules. These rules are based on which operators are used to combine sub-expressions. The following table lists groups of operators in order from highest to lowest precedence:
+If you write a filter expression with no parentheses around its subexpressions, Azure Web PubSub service will evaluate it according to a set of operator precedence rules. These rules are based on which operators are used to combine subexpressions. The following table lists groups of operators in order from highest to lowest precedence:
| Group | Operator(s) | | | |
length(userId) gt 0 and length(userId) lt 3 or length(userId) gt 7 and length(us
((length(userId) gt 0) and (length(userId) lt 3)) or ((length(userId) gt 7) and (length(userId) lt 10)) ```
-The `not` operator has the highest precedence of all -- even higher than the comparison operators. That's why if you try to write a filter like this:
+The `not` operator has the highest precedence of all, even higher than the comparison operators. If you write a filter like this:
```odata-filter-expr not length(userId) gt 5
not (length(userId) gt 5)
### Filter size limitations
-There are limits to the size and complexity of filter expressions that you can send to Azure Web PubSub service. The limits are based roughly on the number of clauses in your filter expression. A good guideline is that if you have over 100 clauses, you are at risk of exceeding the limit. We recommend designing your application in such a way that it doesn't generate filters of unbounded size.
+There are limits to the size and complexity of filter expressions that you can send to Azure Web PubSub service. The limits are based roughly on the number of clauses in your filter expression. A good guideline is that if you have over 100 clauses, you are at risk of exceeding the limit. To avoid exceeding the limit, design your application so that it doesn't generate filters of unbounded size.
## Examples
There are limits to the size and complexity of filter expressions that you can s
## Formal grammar
-We can describe the subset of the OData language supported by Azure Web PubSub service using an EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) grammar. Rules are listed "top-down", starting with the most complex expressions, and breaking them down into more primitive expressions. At the top is the grammar rule for `$filter` that correspond to specific parameter `filter` of the Azure Azure Web PubSub service `Send*` REST APIs:
+We can describe the subset of the OData language supported by Azure Web PubSub service using an EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) grammar. Rules are listed "top-down", starting with the most complex expressions, then breaking them down into more primitive expressions. At the top is the grammar rule for `$filter`, which corresponds to the `filter` parameter of the Azure Web PubSub service `Send*` REST APIs:
```
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Azure Cognitive Services containers provide the following set of Docker containe
| [Language service][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Language service][ta-containers-health] | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. | Generally available |
-| [Translator][tr-containers] | **Translator** | Translate text in several languages and dialects. | Gated preview - [request access](https://aka.ms/csgate-translator). <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Translator][tr-containers] | **Translator** | Translate text in several languages and dialects. | Generally available. Gated - [request access](https://aka.ms/csgate-translator). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
### Speech containers
communication-services Custom Teams Endpoint Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-authentication-overview.md
Before we begin:
- The Azure Communication Services resource admin needs to grant Alice permission to perform her role. Learn more about [Azure RBAC role assignment](../../../role-based-access-control/role-assignments-portal.md). Steps:
-1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get an access token for Alice: The application for Teams users performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling.
+1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A1' and an Object ID of an Azure AD user. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Get an access token for Alice: The application for Teams users performs control plane logic, using artifacts 'A1', 'A2', and 'A3'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are expected to be passed along with artifact 'A1' to validate that the Azure AD token was issued to the expected user and application, which prevents attackers from using Azure AD access tokens issued to other applications or users (a sketch of this exchange follows the artifact list below). For more information on how to get the 'A' artifacts, see [Receive the Azure AD user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id).
1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about [developing custom Teams clients](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). Artifacts: - Artifact A1 - Type: Azure AD access token - Audience: _`Azure Communication Services`_ — control plane
- - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+ - Source: Fabrikam's Azure AD tenant
- Permissions: _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_, _`https://auth.msft.communication.azure.com/Teams.ManageChats`_ - Artifact A2 - Type: Object ID of an Azure AD user
- - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+ - Source: Fabrikam's Azure AD tenant
- Artifact A3 - Type: Azure AD application ID
- - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+ - Source: Fabrikam's Azure AD tenant
- Artifact D - Type: Azure Communication Services access token - Audience: _`Azure Communication Services`_ — data plane
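The token exchange described above (producing access token 'D' from artifacts 'A1', 'A2', and 'A3') can be sketched with the Azure.Communication.Identity library that the referenced quickstart uses. This is a minimal, hedged example: the connection string, token, and IDs are placeholders supplied by your own MSAL sign-in and app registration.

```cs
using System;
using Azure.Communication.Identity;

// Placeholders: values come from your resource and from the MSAL sign-in (artifacts A1, A2, A3).
string connectionString = "<azure-communication-services-connection-string>";
string aadAccessToken = "<A1-azure-ad-access-token-from-msal>";
string userObjectId = "<A2-azure-ad-user-object-id>";
string clientId = "<A3-azure-ad-application-id>";

var identityClient = new CommunicationIdentityClient(connectionString);

// Exchange the Azure AD artifacts for an Azure Communication Services access token (artifact D).
var options = new GetTokenForTeamsUserOptions(aadAccessToken, clientId, userObjectId);
var accessToken = await identityClient.GetTokenForTeamsUserAsync(options);

Console.WriteLine($"Access token for the Teams user expires on {accessToken.Value.ExpiresOn}.");
```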
Before we begin:
- Alice or her Azure AD administrator needs to give Contoso's Azure Active Directory application consent before the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md). Steps:
-1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get an access token for Alice: The Contoso application performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling.
+1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Get an access token for Alice: The Contoso application, by using a custom authentication artifact with value 'B', performs authorization logic to decide whether Alice has permission to exchange the Azure AD access token for an Azure Communication Services access token. After successful authorization, the Contoso application performs control plane logic, using artifacts 'A1', 'A2', and 'A3'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are expected to be passed along with artifact 'A1' to validate that the Azure AD token was issued to the expected user and application, which prevents attackers from using Azure AD access tokens issued to other applications or users. For more information on how to get the 'A' artifacts, see [Receive the Azure AD user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id).
1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's application. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about developing custom, Teams apps [in this quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
Artifacts:
- Artifact A1 - Type: Azure AD access token - Audience: Azure Communication Services — control plane
- - Azure AD application ID: Contoso's _`Azure AD application ID`_
+ - Source: Contoso application registration's Azure AD tenant
- Permission: _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_, _`https://auth.msft.communication.azure.com/Teams.ManageChats`_ - Artifact A2 - Type: Object ID of an Azure AD user
- - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+ - Source: Fabrikam's Azure AD tenant
- Artifact A3 - Type: Azure AD application ID
- - Azure AD application ID: Contoso's _`Azure AD application ID`_
+ - Source: Contoso application registration's Azure AD tenant
- Artifact B
- - Type: Custom Contoso authentication artifact
+ - Type: Custom Contoso authorization artifact (issued either by Azure AD or a different authorization service)
- Artifact C - Type: Hash-based Message Authentication Code (HMAC) (based on Contoso's _`connection string`_) - Artifact D
Artifacts:
## Next steps
-The following articles may be of interest to you:
- - Learn more about [authentication](../authentication.md). - Try this [quickstart to authenticate Teams users](../../quickstarts/manage-teams-identity.md). - Try this [quickstart to call a Teams user](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).+
+The following sample apps may be of interest to you:
+
+- Try the [Sample App](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/manage-teams-identity-mobile-and-desktop), which showcases a process of acquiring Azure Communication Services access tokens for Teams users in mobile and desktop applications.
+
+- To see how the Azure Communication Services access tokens for Teams users are acquired in a single-page application, check out a [SPA sample app](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/manage-teams-identity-spa).
+
+- To learn more about a server implementation of an authentication service for Azure Communication Services, check out the [Authentication service hero sample](../../samples/trusted-auth-sample.md).
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
The certificate must have the SBC FQDN as the common name (CN) or the subject al
Alternatively, Communication Services direct routing supports a wildcard in the CN and/or SAN, and the wildcard must conform to standard [RFC HTTP Over TLS](https://tools.ietf.org/html/rfc2818#section-3.1). Customers who already use Office 365 and have a domain registered in Microsoft 365 Admin Center can use an SBC FQDN from the same domain.
-Domains that aren't previously used in O365 must be provisioned.
An example would be using `\*.contoso.com`, which would match the SBC FQDN `sbc.contoso.com`, but wouldn't match `sbc.test.contoso.com`.
On the leg between the Cloud Media Processor and Communication Services Calling
- [Telephony Concept](./telephony-concept.md) - [Phone number types in Azure Communication Services](./plan-solution.md) - [Pair the Session Border Controller and configure voice routing](./direct-routing-provisioning.md)
+- [Call Automation overview](../call-automation/call-automation.md)
- [Pricing](../pricing.md) ### Quickstarts -- [Call to Phone](../../quickstarts/telephony/pstn-call.md)
+- [Get a phone number](../../quickstarts/telephony/get-phone-number.md)
+- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
For information about whether Azure Communication Services direct routing is the
If everything is set up correctly, you should see an exchange of OPTIONS messages between Microsoft and your Session Border Controller. Use your SBC monitoring/logs to validate the connection.
-## Voice routing considerations
+## Outbound voice routing considerations
Azure Communication Services direct routing has a routing mechanism that allows a call to be sent to a specific SBC based on the called number pattern.
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
> [!NOTE] > In all the examples, if the dialed number does not match the pattern, the call will be dropped unless a purchased number exists for the communication resource and that number is used as `alternateCallerId` in the application.
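To see how a dialed number is evaluated against a route's number pattern, the following C# sketch tests a few example numbers against the pattern shown above using .NET regular expressions; the phone numbers are illustrative only.

```cs
using System;
using System.Text.RegularExpressions;

// The example route pattern from above: +1 numbers with a 425 or 206 area code.
var routePattern = new Regex(@"^\+1(425|206)(\d{7})$");

foreach (string dialedNumber in new[] { "+14251234567", "+12061234567", "+15551234567" })
{
    bool matches = routePattern.IsMatch(dialedNumber);
    string verdict = matches ? "matches this route" : "doesn't match this route";
    Console.WriteLine($"{dialedNumber}: {verdict}");
}
```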
-## Configure voice routing
+## Configure outbound voice routing
### Configure using Azure portal
For more information about regular expressions, see [.NET regular expressions ov
You can select multiple SBCs for a single pattern. In such a case, the routing algorithm will choose them in random order. You may also specify the exact number pattern more than once. The higher row has higher priority, and if all SBCs associated with that row aren't available, the next row is selected. In this way, you can create complex routing scenarios.
+## Managing inbound calls
+
+For general inbound call management, use the [Call Automation SDKs](../call-automation/incoming-call-notification.md) to build an application that listens for and manages inbound calls placed to a phone number or received via ACS direct routing.
+Omnichannel for Customer Service customers should refer to [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
+ ## Delete direct routing configuration ### Delete using Azure portal
You can select multiple SBCs for a single pattern. In such a case, the routing a
### Conceptual documentation - [Session Border Controllers certified for Azure Communication Services direct routing](./certified-session-border-controllers.md)
+- [Call Automation overview](../call-automation/call-automation.md)
- [Pricing](../pricing.md) ### Quickstarts -- [Call to Phone](../../quickstarts/telephony/pstn-call.md)
+- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
The Emergency service is temporarily free to use for Azure Communication Service
## Emergency calling with Azure Communication Services direct routing An emergency call is a regular call from a direct routing perspective. If you want to implement emergency calling with Azure Communication Services direct routing, you need to make sure that there is a routing rule for your emergency number (911, 112, etc.). You also need to make sure that your carrier processes emergency calls properly.
-There is also an option to use purchased number as a caller ID for direct routing calls, in such case if there is no voice routing rule for emergency number, the call will fall back to Microsoft network, and we will treat it as a regular emergency call. Learn more about [voice routing fall back](./direct-routing-provisioning.md#voice-routing-considerations).
+There's also an option to use a purchased number as a caller ID for direct routing calls. In that case, if there's no voice routing rule for the emergency number, the call falls back to the Microsoft network and is treated as a regular emergency call. Learn more about [voice routing fall back](./direct-routing-provisioning.md#outbound-voice-routing-considerations).
## Next steps
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You can use phone numbers [provided by Microsoft](./telephony-concept.md#voice-calling-pstn) and phone numbers supplied by [direct routing](./telephony-concept.md#azure-direct-routing).
-**Inbound calling with Dynamics 365 Omnichannel (OC)**
+**Inbound calling with Omnichannel for Customer Service**
-Supported in General Availability, to set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN) follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
+Supported in General Availability. To set up inbound calling in Omnichannel for Customer Service with direct routing or Voice Calling (PSTN), follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
**Inbound calling with ACS Call Automation SDK**
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/telephony-concept.md
This option requires:
### Quickstarts -- [Get a phone Number](../../quickstarts/telephony/get-phone-number.md)-- [Call to Phone](../../quickstarts/telephony/pstn-call.md)
+- [Get a phone number](../../quickstarts/telephony/get-phone-number.md)
+- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Redirect Inbound Telephony Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/redirect-inbound-telephony-calls.md
zone_pivot_groups: acs-csharp-java
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via Direct Routing.
+Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via ACS direct routing.
::: zone pivot="programming-language-csharp" [!INCLUDE [Redirect inbound call with .NET](./includes/redirect-inbound-telephony-calls-csharp.md)]
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
You can get started building your first container app [using the quickstarts](ge
[Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but is optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of their functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container-based compute platforms and allowing teams to reuse code as environment requirements change. ### Azure Spring Apps
-[Azure Spring Apps](../spring-apps/overview.md) is a platform as a service (PaaS) for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+[Azure Spring Apps](../spring-apps/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
### Azure Red Hat OpenShift [Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
container-apps Custom Domains Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/custom-domains-certificates.md
Azure Container Apps allows you to bind one or more custom domains to a containe
- [SNI domain certificates](https://wikipedia.org/wiki/Server_Name_Indication) are required. - Ingress must be enabled for the container app
+> [!NOTE]
+> To configure a custom DNS suffix for all container apps in an environment, see [Custom environment DNS suffix in Azure Container Apps](environment-custom-dns-suffix.md).
+ ## Add a custom domain and certificate > [!IMPORTANT]
container-apps Environment Custom Dns Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment-custom-dns-suffix.md
+
+ Title: Custom environment DNS suffix in Azure Container Apps
+description: Learn to manage custom DNS suffix and TLS certificate in Azure Container Apps environments
++++ Last updated : 10/13/2022+++
+# Custom environment DNS Suffix in Azure Container Apps
+
+By default, an Azure Container Apps environment provides a DNS suffix in the format `<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`. Each container app in the environment generates a domain name based on this DNS suffix. You can configure a custom DNS suffix for your environment.
+
+> [!NOTE]
+> To configure a custom domain for individual container apps, see [Custom domain names and certificates in Azure Container Apps](custom-domains-certificates.md).
+
+## Add a custom DNS suffix and certificate
+
+1. Go to your Container Apps environment in the [Azure portal](https://portal.azure.com)
+
+1. Under the *Settings* section, select **Custom DNS suffix**.
+
+1. In **DNS suffix**, enter the custom DNS suffix for the environment.
+
+ For example, if you enter `example.com`, the container app domain names will be in the format `<APP_NAME>.example.com`.
+
+1. In a new browser window, go to your domain provider's website and add the DNS records shown in the *Domain validation* section to your domain.
+
+ | Record type | Host | Value | Description |
+ | -- | -- | -- | -- |
+ | A | `*.<DNS_SUFFIX>` | Environment inbound IP address | Wildcard record configured to the IP address of the environment. |
+ | TXT | `asuid.<DNS_SUFFIX>` | Validation token | TXT record with the value of the validation token (not required for Container Apps environment with internal load balancer). |
+
+1. Back in the *Custom DNS suffix* window, in **Certificate file**, browse and select a certificate for the TLS binding.
+
+ > [!IMPORTANT]
+ > You must use an existing wildcard certificate that's valid for the custom DNS suffix you provided.
+
+1. In **Certificate password**, enter the password for the certificate.
+
+1. Select **Save**.
+
+Once the save operation is complete, the environment is updated with the custom DNS suffix and TLS certificate.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Custom domains in Azure Container Apps](custom-domains-certificates.md)
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
filteredItemsAsPages.map(page -> {
## Tune the buffer size
-Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps in overall latency improvement of a query. [setMaxBufferedItemCount](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxbuffereditemcount) in `CosmosQueryRequestOptions` limits the number of pre-fetched results. Setting setMaxBufferedItemCount to the expected number of results returned (or a higher number) enables the query to receive maximum benefit from pre-fetching (NOTE: This can also result in high memory consumption). If you set this value to 0, the system will automatically determine the number of items to buffer.
+Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. Pre-fetching helps improve the overall latency of a query. [setMaxBufferedItemCount](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxbuffereditemcount) in `CosmosQueryRequestOptions` limits the number of pre-fetched results. To maximize pre-fetching, set `maxBufferedItemCount` to a higher number than the `pageSize` (NOTE: This can also result in high memory consumption). To minimize pre-fetching, set `maxBufferedItemCount` equal to the `pageSize`. If you set this value to 0, the system will automatically determine the number of items to buffer.
```java CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 11/07/2022 Last updated : 11/22/2022
Data export is available for various Azure account types, including [Enterprise
- Owner - Can create, modify, or delete scheduled exports for a subscription. - Contributor - Can create, modify, or delete their own scheduled exports. Can modify the name of scheduled exports created by others. - Reader - Can schedule exports that they have permission to.-
-**For more information about scopes, including access needed to configure exports for Enterprise Agreement and Microsoft Customer agreement scopes, see [Understand and work with scopes](understand-work-scopes.md)**.
+ - **For more information about scopes, including access needed to configure exports for Enterprise Agreement and Microsoft Customer agreement scopes, see [Understand and work with scopes](understand-work-scopes.md)**.
For Azure Storage accounts: - Write permissions are required to change the configured storage account, independent of permissions on the export. - Your Azure storage account must be configured for blob or file storage. - The storage account must not have a firewall configured.
+- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
+ :::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing the From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
To create a mapping data flow using the SAP CDC connector as a source, complete
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-select-dataset.png" alt-text="Screenshot of the select dataset option in source settings of mapping data flow source.":::
-1. On the tab **Source options** select the option **Full on every run** if you want to load full snapshots on every execution of your mapping data flow, or **Full on the first run, then incremental** if you want to subscribe to a change feed from the SAP source system. In this case, the first run of your pipeline will do a delta initialization, which means it will return a current full data snapshot and create an ODP delta subscription in the source system so that with subsequent runs, the SAP source system will return incremental changes since the previous run only. In case of incremental loads it is required to specify the keys of the ODP source object in the **Key columns** property.
+1. On the **Source options** tab, select **Full on every run** if you want to load full snapshots on every execution of your mapping data flow, or **Full on the first run, then incremental** if you want to subscribe to a change feed from the SAP source system. In the latter case, the first run of your pipeline performs a delta initialization: it returns a current full data snapshot and creates an ODP delta subscription in the source system, so that subsequent runs return only the incremental changes since the previous run. You can also select **incremental changes only** if you want the first run of your pipeline to create an ODP delta subscription in the SAP source system without returning any data; subsequent runs then return only the incremental changes since the previous run. For incremental loads, you must specify the keys of the ODP source object in the **Key columns** property.
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-run-mode.png" alt-text="Screenshot of the run mode property in source options of mapping data flow source.":::
To create a mapping data flow using the SAP CDC connector as a source, complete
1. For the tabs **Projection**, **Optimize** and **Inspect**, please follow [mapping data flow](concepts-data-flow-overview.md).
-1. If **Run mode** is set to **Full on every run**, the tab **Optimize** offers additional selection and partitioning options. Each partition condition (the screenshot below shows an example with two conditions) will trigger a separate extraction process in the connected SAP system. Up to three of these extraction process are executed in parallel.
+1. If **Run mode** is set to **Full on every run** or **Full on the first run, then incremental**, the **Optimize** tab offers additional selection and partitioning options. Each partition condition (the screenshot below shows an example with two conditions) triggers a separate extraction process in the connected SAP system. Up to three of these extraction processes are executed in parallel.
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-mapping-data-flow-optimize-partition.png" alt-text="Screenshot of the partitioning options in optimize of mapping data flow source.":::
databox-online Azure Stack Edge Gpu Deploy Virtual Machine High Performance Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md
Previously updated : 11/18/2022 Last updated : 11/23/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
To maximize performance, processing, and transmitting on the same NUMA node, pro
To deploy HPN VMs on Azure Stack Edge, you must reserve vCPUs on NUMA nodes. The number of vCPUs reserved determines the available vCPUs that can be assigned to the HPN VMs.
-For the number of cores that each HPN VM size uses, see theΓÇ»[Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+For the number of cores that each HPN VM size uses, see [Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
-In version 2210, vCPUs are automatically reserved with the maximum number of vCPUs supported on each NUMA node. If the vCPUs were already reserved for HPN VMs in an earlier version, the existing reservation is carried forth to the 2210 version. If vCPUs weren't reserved for HPN VMs in an earlier version, upgrading to 2210 will still carry forth the existing configuration.
+In version 2210, vCPUs are automatically reserved with the maximum number of vCPUs supported on each NUMA node. If vCPUs were already reserved for HPN VMs in an earlier version, the existing reservation is carried forth to the 2210 version. If vCPUs weren't reserved for HPN VMs in an earlier version, upgrading to 2210 will still carry forth the existing configuration.
-For versions 2209 and earlier, you must reserve vCPUs on NUMA nodes before you deploy HPN VMs on your device. We recommend that the vCPU reservation is done on NUMA node 0, as this node has Mellanox high speed network interfaces, Port 5 and Port 6, attached to it.
+For versions 2209 and earlier, you must reserve vCPUs on NUMA nodes before you deploy HPN VMs on your device. We recommend NUMA node 0 for vCPU reservations because NUMA node 0 has Mellanox high speed network interfaces.
## HPN VM deployment workflow The high-level summary of the HPN deployment workflow is as follows:
-1. While configuring the network settings on your device, make sure that there's a virtual switch associated with a network interface on your device that can be used for the VM resources and VMs. We'll use the default virtual network created with the vswitch for this article. You have the option of creating and using a different virtual network, if desired.
+1. While configuring the network settings on your device, make sure that there's a virtual switch associated with a network interface on your device that can be used for VM resources and VMs. We'll use the default virtual network created with the vswitch for this article. You have the option of creating and using a different virtual network, if desired.
2. Enable cloud management of VMs from the Azure portal. Download a VHD onto your device, and create a VM image from the VHD.
The high-level summary of the HPN deployment workflow is as follows:
4. Use the resources created in the previous steps: 1. The VM image that you created.
- 2. The default virtual network associated with the virtual switch. The default virtual network has the same name as the name of the virtual switch.
+ 2. The default virtual network associated with the virtual switch. The default virtual network name is the same as the name of the virtual switch.
3. The default subnet for the default virtual network. 1. And create or specify the following resources:
- 1. Specify a VM name, choose a supported HPN VM size, and specify sign-in credentials for the VM.
+ 1. Specify a VM name and a supported HPN VM size, and specify sign-in credentials for the VM.
1. Create new data disks or attach existing data disks.
- 1. Configure static or dynamic IP for the VM. If you're providing a static IP, choose from a free IP in the subnet range of the default virtual network.
+ 1. Configure a static or dynamic IP for the VM. If you're providing a static IP, specify a free IP in the subnet range of the default virtual network.
1. Use the preceding resources to create an HPN VM.
Before you create and manage VMs on your device via the Azure portal, make sure
- The default vCPU reservation uses the SkuPolicy, which reserves all vCPUs that are available for HPN VMs.
- - If the vCPUs were already reserved for HPN VMs in an earlier version - for example, version 2009 or earlier, then the existing reservation is carried forth to the 2210 version.
+ - If the vCPUs were already reserved for HPN VMs in an earlier version - for example, in version 2209 or earlier, then the existing reservation is carried forth to the 2210 version.
- For most use cases, we recommend that you use the default configuration. If needed, you can also customize the NUMA configuration for HPN VMs. To customize the configuration, use the steps provided for 2209. - Use the following steps to get information about the SkuPolicy settings on your device: 1. [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
-
-
+ 1. Run the following command to see the available NUMA policies on your device: ```powershell
Before you create and manage VMs on your device via the Azure portal, make sure
This cmdlet will output: 1. HpnLpMapping: The NUMA logical processor indexes that are reserved on the machine. 1. HpnCapableLpMapping: The NUMA logical processor indexes that are capable for reservation.
- 1. HpnLpAvailable: The NUMA logical processor indexes that aren't available for new HPN VM deployments.
- 1. The NUMA logical processors used by HPN VMs and NUMA logical processors available for new HPN VM deployments on each NUMA node in the cluster.
+ 1. HpnLpAvailable: The NUMA logical processor indexes that are available for new HPN VM deployments.
```powershell Get-HcsNumaLpMapping
Before you create and manage VMs on your device via the Azure portal, make sure
```powershell Get-HcsNumaLpMapping ```-
- The output shouldn't show the indexes you set. If you see the indexes you set in the output, the `Set` command didn't complete successfully. Retry the command and if the problem persists, contact Microsoft Support.
-
- Here's an example output.
-
- ```powershell
- dbe-1csphq2.microsoftdatabox.com]: PS> Get-HcsNumaLpMapping -MapType MinRootAware -NodeName 1CSPHQ2
-
- { Numa Node #0 : CPUs [0, 1, 2, 3] }
-
- { Numa Node #1 : CPUs [20, 21, 22, 23] }
-
- [dbe-1csphq2.microsoftdatabox.com]:
-
- PS>
### [2209 and earlier](#tab/2209)
In addition to the above prerequisites that are used for VM creation, configure
Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName <Output of hostname command> ```
- Here's example output:
+ Here's an example output:
```powershell [dbe-1csphq2.microsoftdatabox.com]: PS>hostname 1CSPHQ2
- [dbe-1csphq2.microsoftdatabox.com]: P> Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName
[dbe-1csphq2.microsoftdatabox.com]: P> Get-HcsNumaLpMapping -MapType HighPerformanceCapable -NodeName 1CSPHQ2 { Numa Node #0 : CPUs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] } { Numa Node #1 : CPUs [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39] }
In addition to the above prerequisites that are used for VM creation, configure
[dbe-1csphq2.microsoftdatabox.com]: PS> ```
- 1. Reserve vCPUs for HPN VMs. The number of vCPUs reserved here determines the available vCPUs that could be assigned to the HPN VMs. For the number of cores that each HPN VM size uses, see theΓÇ»[Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes). On your device, Mellanox ports 5 and 6 are on NUMA node 0.
+ 1. Reserve vCPUs for HPN VMs. The number of vCPUs reserved here determines the available vCPUs that can be assigned to the HPN VMs. For the number of cores used by each HPN VM size, see [Supported HPN VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes). On your device, Mellanox ports 5 and 6 are on NUMA node 0.
```powershell Set-HcsNumaLpMapping -CpusForHighPerfVmsCommaSeperated <Logical indexes from the Get-HcsNumaLpMapping cmdlet> -AssignAllCpusToRoot $false
In addition to the above prerequisites that are used for VM creation, configure
``` > [!Note]
- > - You can choose to reserve all the logical indexes from both NUMA nodes shown in the example or a subset of the indexes. If you choose to reserve a subset of indexes, pick the indexes from the device node that has a Mellanox network interface attached to it, for best performance. For Azure Stack Edge Pro GPU, the NUMA node with Mellanox network interface is #0.
+ > - You can choose to reserve all the logical indexes from both NUMA nodes shown in the example, or a subset of the indexes. If you choose to reserve a subset of indexes, pick the indexes from the device node that has a Mellanox network interface attached to it, for best performance. For Azure Stack Edge Pro GPU, the NUMA node with Mellanox network interface is #0.
> - The list of logical indexes must contain a paired sequence of an odd number and an even number. For example, ((4,5)(6,7)(10,11)). Attempting to set a list of numbers such as `5,6,7` or pairs such as `4,6` will not work. > - Using two `Set-HcsNuma` commands consecutively to assign vCPUs will reset the configuration. Also, do not free the CPUs using the Set-HcsNuma cmdlet if you have deployed an HPN VM.
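
For example, a hypothetical reservation of the even-odd pairs (4,5), (6,7), and (10,11) from NUMA node 0 might look like the following sketch; take the actual indexes from the `Get-HcsNumaLpMapping` output on your own device:

```powershell
# Reserve three even-odd pairs of logical processors on NUMA node 0 for HPN VMs.
# The indexes below are illustrative; use values returned by Get-HcsNumaLpMapping.
Set-HcsNumaLpMapping -CpusForHighPerfVmsCommaSeperated "4,5,6,7,10,11" -AssignAllCpusToRoot $false
```

Because running `Set-HcsNuma` commands consecutively resets the configuration, build the complete list of indexes before you run the command once.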
In addition to the above prerequisites that are used for VM creation, configure
Get-HcsNumaLpMapping ```
- The output shouldn't show the indexes you set. If you see the indexes you set in the output, the `Set` command didn't complete successfully. Retry the command and if the problem persists, contact Microsoft Support.
+ The output shouldn't show the indexes you set. If you see the indexes you set in the output, the `Set` command didn't complete successfully. In this case, retry the command and, if the problem persists, contact Microsoft Support.
Here's an example output.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Aspect | Details | |--|--|
-| Registries and images | **Supported**<br> ΓÇó [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries requires access to Trusted Services) <br> ΓÇó Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> ΓÇó Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> ΓÇó "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> ΓÇó Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) <br> • Providing image tag information for [multi-architecture images](https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/) is currently unsupported|
| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35| | Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
Outbound proxy without authentication and outbound proxy with basic authenticati
| Aspect | Details | |--|--|
-| Registries and images | **Supported**<br> ΓÇó [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries requires access to Trusted Services) <br> ΓÇó Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> ΓÇó Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> ΓÇó "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> ΓÇó Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) <br> • Providing image tag information for [multi-architecture images](https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/) is currently unsupported |
| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35| | Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
Together with the new responsibilities, SOC teams deal with new challenges, incl
- **Siloed or inefficient communication and processes** between OT and SOC organizations. -- **Limited technology and tools**, including:
+- **Limited technology and tools**, such as a lack of visibility into OT networks, little automated security remediation, the need to evaluate and link information across multiple OT data sources, and costly integrations with existing SOC solutions.
- - Lack of visibility and insight into OT networks.
+However, without OT telemetry, context, and integration with existing SOC tools and workflows, OT security and operational threats may be handled incorrectly, or even go unnoticed.
- - Limited insight about events across enterprise IT/OT networks, including tools that don't allow SOC teams to evaluate and link information across data sources in IT/OT environments.
+## Integrate Defender for IoT and Microsoft Sentinel
- - Low level of automated security remediation for OT networks.
+Microsoft Sentinel is a scalable cloud service for security information and event management (SIEM) and security orchestration, automation, and response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
- - Costly and time-consuming effort needed to integrate OT security solutions into existing SOC solutions.
+In Microsoft Sentinel, the Defender for IoT data connector and solution bring out-of-the-box security content to SOC teams, helping them to view, analyze, and respond to OT security alerts, and to understand the generated incidents in the broader organizational threat context.
-Without OT telemetry, context and integration with existing SOC tools and workflows, OT security and operational threats may be handled incorrectly, or even go unnoticed.
+Install the Defender for IoT data connector alone to stream your OT network alerts to Microsoft Sentinel. Then, also install the **Microsoft Defender for IoT** solution for the extra value of IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS](https://collaborate.mitre.org/attackics/index.php/Overview).
-## Integrate Defender for IoT and Microsoft Sentinel
+### Integrated detection and response
-Microsoft Sentinel is a scalable cloud solution for security information event management (SIEM) security orchestration automated response (SOAR). SOC teams can use Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
+The following table shows how both the OT team, on the Defender for IoT side, and the SOC team, on the Microsoft Sentinel side, can detect and respond to threats fast across the entire attack timeline.
-The Defender for IoT and Microsoft Sentinel integration delivers out-of-the-box capabilities to SOC teams. This helps them to efficiently and effectively view, analyze, and respond to OT security alerts, and the incidents they generate in a broader organizational threat context.
+|Microsoft Sentinel |Step |Defender for IoT |
+||||
+| | **OT alert triggered** | High confidence OT alerts, powered by Defender for IoT's *Section 52* security research group, are triggered based on data ingested to Defender for IoT. |
+|Analytics rules automatically open incidents *only* for relevant use cases, avoiding OT alert fatigue | **OT incident created** | |
+|SOC teams map business impact, including data about the site, line, compromised assets, and OT owners | **OT incident business impact mapping** | |
+|SOC teams move the incident to *Active* and start investigating, using network connections and events, workbooks, and the OT device entity page | **OT incident investigation** | Alerts are moved to *Active*, and OT teams investigate using PCAP data, detailed reports, and other device details |
+|SOC teams respond with OT playbooks and notebooks | **OT incident response** | OT teams either suppress the alert or learn it for next time, as needed |
+|After the threat is mitigated, SOC teams close the incident | **OT incident closure** | After the threat is mitigated, OT teams close the alert |
-Bring Defender for IoT's rich telemetry into Microsoft Sentinel to bridge the gap between OT and SOC teams with the Microsoft Sentinel data connector for Defender for IoT and the **Microsoft Defender for IoT** solution.
+## Microsoft Sentinel incidents for Defender for IoT
-The **Microsoft Defender for IoT** solution installs out-of-the-box security content to your Microsoft Sentinel, including analytics rules to automatically open incidents, workbooks to visualize and monitor data, and playbooks to automate response actions.
+After you've configured the Defender for IoT data connector and have IoT/OT alert data streaming to Microsoft Sentinel, use one of the following methods to create incidents based on those alerts:
-Once Defender for IoT data is ingested into Microsoft Sentinel, security experts can work with IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS](https://collaborate.mitre.org/attackics/index.php/Overview).
+|Method |Description |
+|||
+|**Use the default data connector rule** | Use the default, **Create incidents based on all alerts generated in Microsoft Defender for IOT** analytics rule provided with the data connector. This rule creates a separate incident in Microsoft Sentinel for each alert streamed from Defender for IoT. |
+|**Use out-of-the-box solution rules** | Enable some or all of the [out-of-the-box analytics rules](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) provided with the **Microsoft Defender for IoT** solution.<br><br> These analytics rules help to reduce alert fatigue by creating incidents only in specific situations. For example, you might choose to create incidents for excessive login attempts, but not for multiple scans detected in the network. |
+|**Create custom rules** | Create custom analytics rules to create incidents based only on your specific needs. You can use the out-of-the-box analytics rules as a starting point, or create rules from scratch. <br><br>Add the following filter to prevent duplicate incidents for the same alert ID: `| where TimeGenerated <= ProcessingEndTime + 60m` |
-### Workbooks
+Regardless of the method you choose to create incidents, only one incident should be created for each Defender for IoT alert ID.
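
For example, a minimal sketch of a custom scheduled-rule query that combines the provider filter and the deduplication filter from this article might look like the following; the `arg_max` step is an illustrative way to keep one record per alert ID, not a required pattern:

```kql
SecurityAlert
| where ProviderName == 'IoTSecurity'
| where TimeGenerated <= ProcessingEndTime + 60m
// Illustrative: keep only the latest record per Defender for IoT alert ID
| summarize arg_max(TimeGenerated, *) by SystemAlertId
```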
+
+## Microsoft Sentinel workbooks for Defender for IoT
To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the **Microsoft Defender for IoT** solution. Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
-For example, workbooks can display alerts by any of the following dimensions:
--- Type, such as policy violation, protocol violation, malware, and so on-- Severity-- OT device type, such as PLC, HMI, engineering workstation, and so on-- OT equipment vendor-- Alerts over time-
-Workbooks also show the result of mapping alerts to MITRE ATT&CK for ICS tactics, plus the distribution of tactics by count and time period. For example:
+Workbooks can display alerts by type, severity, OT device type or vendor, or alerts over time. Workbooks also show the result of mapping alerts to MITRE ATT&CK for ICS tactics, plus the distribution of tactics by count and time period. For example:
:::image type="content" source="media/concept-sentinel-integration/mitre-attack.png" alt-text="Image of MITRE ATT&CK graph":::
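
As a rough, assumption-level illustration of the kind of breakdown these workbooks provide (not the workbooks' actual queries), you could chart Defender for IoT alerts by severity with a query such as:

```kql
SecurityAlert
| where ProviderName == 'IoTSecurity'
// Count alerts per severity level for a simple workbook-style visualization
| summarize AlertCount = count() by AlertSeverity
```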
-### SOAR playbooks
+## SOAR playbooks for Defender for IoT
Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response. It can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
For example, use SOAR playbooks to:
- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The mail may be sent to OT personnel, such as a control engineer responsible for the related production line.
-## Integrated incident timeline
-The following table shows how both the OT team, on the Defender for IoT side, and the SOC team, on the Microsoft Sentinel side, can detect and respond to threats fast across the entire attack timeline.
-|Microsoft Sentinel |Step |Defender for IoT |
-||||
-| | **OT alert triggered** | High confidence OT alerts, powered by Defender for IoT's *Section 52* security research group, are triggered based on data ingested to Defender for IoT. |
-|Analytics rules automatically open incidents *only* for relevant use cases, avoiding OT alert fatigue | **OT incident created** | |
-|SOC teams map business impact, including data about the site, line, compromised assets, and OT owners | **OT incident business impact mapping** | |
-|SOC teams move the incident to *Active* and start investigating, using network connections and events, workbooks, and the OT device entity page | **OT incident investigation** | Alerts are moved to *Active*, and OT teams investigate using PCAP data, detailed reports, and other device details |
-|SOC teams respond with OT playbooks and notebooks | **OT incident response** | OT teams either suppress the alert or learn it for next time, as needed |
-|After the threat is mitigated, SOC teams close the incident | **OT incident closure** | After the threat is mitigated, OT teams close the alert |
+## Comparing Defender for IoT events, alerts, and incidents
+
+This section clarifies the differences between Defender for IoT events, alerts, and incidents in Microsoft Sentinel. Use the listed queries to view a full list of the current events, alerts, and incidents for your OT networks.
+
+You'll typically see more Defender for IoT *events* in Microsoft Sentinel than *alerts*, and more Defender for IoT *alerts* than *incidents*.
++
+- **Events**: Each alert log that streams to Microsoft Sentinel from Defender for IoT is an *event*. If the alert log reflects a new or updated alert in Defender for IoT, a new record is added to the **SecurityAlert** table.
+
+ To view all Defender for IoT events in Microsoft Sentinel, run the following query on the **SecurityAlert** table:
+
+ ```kql
+ SecurityAlert
+ | where ProviderName == 'IoTSecurity' or ProviderName == 'CustomAlertRule'
+ ```
+
+- **Alerts**: Microsoft Sentinel creates alerts based on your current analytics rules and the alert logs listed in the **SecurityAlert** table. If you don't have any active analytics rules for Defender for IoT, Microsoft Sentinel considers each alert log as an *event*.
+
+ To view alerts in Microsoft Sentinel, run the following query on the **SecurityAlert** table:
+
+ ```kql
+ SecurityAlert
+ | where ProviderName == 'ASI Scheduled Alerts' or ProviderName == 'CustomAlertRule'
+ ```
+
+- **Incidents**: Microsoft Sentinel creates incidents based on your analytics rules. You might have several alerts grouped in the same incident, or you may have analytics rules configured to *not* create incidents for specific alert types.
+
+ To view incidents in Microsoft Sentinel, run the following query:
+ ```kql
+ SecurityIncident
+ ```
## Next steps
defender-for-iot Faqs Ot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md
For example:
## How can I change a user's password
-Learn how to [Change a user's password](how-to-create-and-manage-users.md#change-a-users-password) for either the sensor or the on-premises management console.
+You can change user passwords or recover access to privileged users on both the OT network sensor and the on-premises management console. For more information, see:
-You can also [Recover the password for the on-premises management console, or the sensor](how-to-create-and-manage-users.md#recover-the-password-for-the-on-premises-management-console-or-the-sensor).
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
## How do I activate the sensor and on-premises management console
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Before you start, make sure that you have:
- An Azure account. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/). -- Access to an Azure subscription with the subscription **Owner** or **Contributor** role.
+- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md).
If you're using a Defender for IoT sensor version earlier than 22.1.x, you must also have an Azure IoT Hub (Free or Standard tier) **Contributor** role, for cloud-connected management. Make sure that the **Microsoft Defender for IoT** feature is enabled.
-### Permissions
-
-Defender for IoT users require the following permissions:
-
-| Permission | Security reader | Security admin | Subscription contributor | Subscription owner |
-|--|--|--|--|--|
-| Onboard subscriptions and update committed devices | | Γ£ô | Γ£ô | Γ£ô |
-| Onboard sensors | | Γ£ô | Γ£ô | Γ£ô |
-| View details and access software, activation files and threat intelligence packages | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
-| Recover passwords | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
-
-For more information, see [Azure roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
- ### Supported service regions Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter.
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
You can access console tools from the side menu. Tools help you:
||| | System settings | Configure the system settings. For example, define DHCP settings, provide mail server details, or create port aliases. | | Custom alert rules | Use custom alert rules to more specifically pinpoint activity or traffic of interest to you. For more information, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules). |
-| Users | Define users and roles with various access levels. For more information, see [About Defender for IoT console users](how-to-create-and-manage-users.md#about-defender-for-iot-console-users). |
+| Users | Define users and roles with various access levels. For more information, see [Create and manage users on an OT network sensor](manage-users-sensor.md). |
| Forwarding | Forward alert information to partners that integrate with Defender for IoT, for example, Microsoft Sentinel, Splunk, ServiceNow. You can also send to email addresses, webhook servers, and more. <br /> See [Forward alert information](how-to-forward-alert-information-to-partners.md) for details. |
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
- Title: Create and manage users
-description: Create and manage users of sensors and the on-premises management console. Users can be assigned the role of Administrator, Security Analyst, or Read-only user.
Previously updated : 01/26/2022---
-# About Defender for IoT console users
-
-This article describes how to create and manage users of sensors and the on-premises management console. User roles include Administrator, Security Analyst, or Read-only users. Each role is associated with a range of permissions to tools for the sensor or on-premises management console. Roles are designed to facilitate granular, secure access to Microsoft Defender for IoT.
-
-Features are also available to track user activity and enable Active Directory sign in.
-
-By default, each sensor and on-premises management console is installed with the *cyberx* and *support* users. Sensors are also installed with the *cyberx_host* user. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for security analysts and read-only users.
-
-## Role-based permissions
-The following user roles are available:
--- **Read only**: Read-only users perform tasks such as viewing alerts and devices on the device map. These users have access to options displayed under **Discover**.--- **Security analyst**: Security Analysts have Read-only user permissions. They can also perform actions on devices, acknowledge alerts, and use investigation tools. These users have access to options displayed under **Discover** and **Analyze**.--- **Administrator**: Administrators have access to all tools, including system configurations, creating and managing users, and more. These users have access to options displayed under **Discover**, **Analyze**, and **Manage** sections of the console main screen.-
-### Role-based permissions to on-premises management console tools
-
-This section describes permissions available to Administrators, Security Analysts, and Read-only users for the on-premises management console.
-
-| Permission | Read-only | Security Analyst | Administrator |
-|--|--|--|--|
-| View and filter the enterprise map | Γ£ô | Γ£ô | Γ£ô |
-| Build a site | | | Γ£ô |
-| Manage a site (add and edit zones) | | | Γ£ô |
-| View and filter device inventory | Γ£ô | Γ£ô | Γ£ô |
-| View and manage alerts: acknowledge, learn, and pin | Γ£ô | Γ£ô | Γ£ô |
-| Generate reports | | Γ£ô | Γ£ô |
-| View risk assessment reports | | Γ£ô | Γ£ô |
-| Set alert exclusions | | Γ£ô | Γ£ô |
-| View or define access groups | | | Γ£ô |
-| Manage system settings | | | Γ£ô |
-| Manage users | | | Γ£ô |
-| Send alert data to partners | | | Γ£ô |
-| Manage certificates | | | Γ£ô |
-| Session timeout when users aren't active | 30 minutes | 30 minutes | 30 minutes |
-
-#### Assign users to access groups
-
-Administrators can enhance user access control in Defender for IoT by assigning users to specific *access groups*. Access groups are assigned to zones, sites, regions, and business units where a sensor is located. By assigning users to access groups, administrators gain specific control over where users manage and analyze device detections.
-
-Working this way accommodates large organizations where user permissions can be complex or determined by a global organizational security policy. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
-
-### Role-based permissions to sensor tools
-
-This section describes permissions available to sensor Administrators, Security Analysts, and Read-only users.
-
-| Permission | Read-only | Security Analyst | Administrator |
-|--|--|--|--|
-| View the dashboard | Γ£ô | Γ£ô | Γ£ô |
-| Control map zoom views | | | Γ£ô |
-| View alerts | Γ£ô | Γ£ô | Γ£ô |
-| Manage alerts: acknowledge, learn, and pin | | Γ£ô | Γ£ô |
-| View events in a timeline | | Γ£ô | Γ£ô |
-| Authorize devices, known scanning devices, programming devices | | Γ£ô | Γ£ô |
-| Merge and delete devices | | | Γ£ô |
-| View investigation data | Γ£ô | Γ£ô | Γ£ô |
-| Manage system settings | | | Γ£ô |
-| Manage users | | | Γ£ô |
-| DNS servers for reverse lookup | | | Γ£ô |
-| Send alert data to partners | | Γ£ô | Γ£ô |
-| Create alert comments | | Γ£ô | Γ£ô |
-| View programming change history | Γ£ô | Γ£ô | Γ£ô |
-| Create customized alert rules | | Γ£ô | Γ£ô |
-| Manage multiple notifications simultaneously | | Γ£ô | Γ£ô |
-| Manage certificates | | | Γ£ô |
-| Session timeout when users are not active | 30 minutes | 30 minutes | 30 minutes |
-
-## Define users
-
-This section describes how to define users. Cyberx, support, and administrator users can add, remove, and update other user definitions.
-
-**To define a user**:
-
-1. From the left pane for the sensor or the on-premises management console, select **Users**.
-
- :::image type="content" source="media/how-to-create-and-manage-users/users-pane.png" alt-text="Screenshot of the Users pane for creating users.":::
-1. In the **Users** window, select **Create User**.
-
-1. In the **Create User** pane, define the following parameters:
-
- - **Username**: Enter a username.
- - **Email**: Enter the user's email address.
- - **First Name**: Enter the user's first name.
- - **Last Name**: Enter the user's last name.
- - **Role**: Define the user's role. For more information, see [Role-based permissions](#role-based-permissions).
- - **Access Group**: If you're creating a user for the on-premises management console, define the user's access group. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
- - **Password**: Select the user type as follows:
- - **Local User**: Define a password for the user of a sensor or an on-premises management console. Password must have at least eight characters and contain lowercase and uppercase alphabetic characters, numbers, and symbols.
- - **Active Directory User**: You can allow users to sign in to the sensor or management console by using Active Directory credentials. Defined Active Directory groups can be associated with specific permission levels. For example, configure a specific Active Directory group and assign all users in the group to the Read-only user type.
--
-## User session timeout
-
-If users aren't active at the keyboard or mouse for a specific time, they're signed out of their session and must sign in again.
-
-When users haven't worked with their console mouse or keyboard for 30 minutes, a session sign-out is forced.
-
-This feature is enabled by default and on upgrade, but can be disabled. In addition, session counting times can be updated. Session times are defined in seconds. Definitions are applied per sensor and on-premises management console.
-
-A session timeout message appears at the console when the inactivity timeout has passed.
-
-### Control inactivity sign-out
-
-Administrator users can enable and disable inactivity sign-out and adjust the inactivity thresholds.
-
-**To access the command**:
-
-1. Sign in to the CLI for the sensor or on-premises management console by using Defender for IoT administrative credentials.
-
-1. Enter `sudo nano /var/cyberx/properties/authentication`.
-
-```azurecli-interactive
- infinity_session_expiration = true
- session_expiration_default_seconds = 0
- # half an hour in seconds (comment)
- session_expiration_admin_seconds = 1800
- # a day in seconds
- session_expiration_security_analyst_seconds = 1800
- # a week in seconds
- session_expiration_read_only_users_seconds = 1800
-```
-
-To disable the feature, change `infinity_session_expiration = true` to `infinity_session_expiration = false`.
-
-To update sign-out counting periods, adjust the `= <number>` value to the required time.
-
-## Track user activity
-
-Track user activity on a sensor's event timeline, or by viewing audit logs generated on an on-premises management console.
--- **The timeline** displays the event or affected device, and the time and date that the user carried out the activity.--- **Audit logs** record key activity data at the time of occurrence. Use audit logs generated on the on-premises management console to understand which changes were made, when, and by whom.-
-### View user activity on the sensor's Event Timeline
-
-Select **Event Timeline** from the sensor side menu. If needed, verify that **User Operations** filter is set to **Show**.
-
-For example:
--
-Use the filters or search using CTRL+F to find the information of interest to you.
-
-### View audit log data on the on-premises management console
-
-In the on-premises management console, select **System Settings > System Statistics**, and then select **Audit log**.
-
-The dialog displays data from the currently active audit log. For example:
-
-For example:
--
-New audit logs are generated at every 10 MB. One previous log is stored in addition to the current active log file.
-
-Audit logs include the following data:
-
-| Action | Information logged |
-|--|--|
-| **Learn, and remediation of alerts** | Alert ID |
-| **Password changes** | User, User ID |
-| **Login** | User |
-| **User creation** | User, User role |
-| **Password reset** | User name |
-| **Exclusion rules-Creation**| Rule summary |
-| **Exclusion rules-Editing**| Rule ID, Rule Summary |
-| **Exclusion rules-Deletion** | Rule ID |
-| **Management Console Upgrade** | The upgrade file used |
-| **Sensor upgrade retry** | Sensor ID |
-| **Uploaded TI package** | No additional information recorded. |
--
-> [!TIP]
-> You may also want to export your audit logs to send them to the support team for extra troubleshooting. For more information, see [Export audit logs for troubleshooting](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-audit-logs-for-troubleshooting)
->
-
-## Change a user's password
-
-User passwords can be changed for users created with a local password.
-
-**Administrator users**
-
-The Administrator can change the password for the Security Analyst and Read-only roles. The Administrator role user can't change their own password and must contact a higher-level role.
-
-**Security Analyst and Read-only users**
-
-The Security Analyst and Read-only roles can't reset any passwords. The Security Analyst and Read-only roles need to contact a user with a higher role level to have their passwords reset.
-
-**CyberX and Support users**
-
-CyberX role can change the password for all user roles. The Support role can change the password for a Support, Administrator, Security Analyst, and Read-only user roles.
-
-**To reset a user's password on the sensor**:
-
-1. Sign in to the sensor using a user with the role Administrator, Support, or CyberX.
-
-1. Select **Users** from the left-hand panel.
-
-1. Locate the local user whose password needs to be changed.
-
-1. On this row, select three dots (...) and then select **Edit**.
-
- :::image type="content" source="media/how-to-create-and-manage-users/change-password.png" alt-text="Screenshot of the Change password dialog for local sensor users.":::
-
-1. Enter and confirm the new password in the **Change Password** section.
-
- > [!NOTE]
- > Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers and one of the following symbols: #%*+,-./:=?@[]^_{}~
-
-1. Select **Update**.
-
-**To reset a user's password on the on-premises management console**:
-
-1. Sign in to the on-premises management console using a user with the role Administrator, Support, or CyberX.
-
-1. Select **Users** from the left-hand panel.
-
-1. Locate your user and select the edit icon :::image type="icon" source="media/password-recovery-images/edit-icon.png" border="false"::: .
-
-1. Enter the new password in the **New Password** and **Confirm New Password** fields.
-
- > [!NOTE]
- > Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers and one of the following symbols: #%*+,-./:=?@[]^_{}~
-
-1. Select **Update**.
-
-## Recover the password for the on-premises management console, or the sensor
-
-You can recover the password for the on-premises management console or the sensor with the Password recovery feature. Only the CyberX and Support users have access to the Password recovery feature.
-
-**To recover the password for the on-premises management console, or the sensor**:
-
-1. On the sign-in screen of either the on-premises management console or the sensor, select **Password recovery**. The **Password recovery** screen opens.
-
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Screenshot of the Select Password recovery from the sign-in screen of either the on-premises management console, or the sensor.":::
-
-1. Select either **CyberX** or **Support** from the drop-down menu, and copy the unique identifier code.
-
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery-screen.png" alt-text="Screenshot of selecting either the Defender for IoT user or the support user.":::
-
-1. Navigate to the Azure portal, and select **Sites and Sensors**.
-
-1. Select the **Subscription Filter** icon :::image type="icon" source="media/password-recovery-images/subscription-icon.png" border="false"::: from the top toolbar, and select the subscription your sensor is connected to.
-
-1. Select the **More Actions** drop down menu, and select **Recover on-premises management console password**.
-
- :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text="Screenshot of the recover on-premises management console password option.":::
-
-1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded.
-
- :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of entering enter the unique identifier and then selecting recover." lightbox="media/how-to-create-and-manage-users/enter-identifier.png":::
-
- [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-
-1. On the Password recovery screen, select **Upload**. **The Upload Password Recovery File** window will open.
-
-1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` to the window.
-
- > [!NOTE]
- > An error message may appear indicating the file is invalid. To fix this error message, ensure you selected the right subscription before downloading the `password_recovery.zip` and download it again.
-
-1. Select **Next**, and your user, and a system-generated password for your management console will then appear.
--
-## Next steps
--- [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)--- [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)--- [Track sensor activity](how-to-track-sensor-activity.md)--- [Integrate with Active Directory servers](integrate-with-active-directory.md)
defender-for-iot How To Define Global User Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-define-global-user-access-control.md
- Title: Define global user access control
-description: In large organizations, user permissions can be complex and might be determined by a global organizational structure, in addition to the standard site and zone structure.
Previously updated : 11/09/2021---
-# Define global access control
-
-In large organizations, user permissions can be complex and might be determined by a global organizational structure, in addition to the standard site and zone structure.
-
-To support the demand for user access permissions that are global and more complex, you can create a global business topology that's based on business units, regions, and sites. Then you can define user access permissions around these entities.
-
-Working with access tools for business topology helps organizations implement zero-trust strategies by better controlling where users manage and analyze devices in the Microsoft Defender for IoT platform.
-
-## About access groups
-
-Global access control is established through the creation of user access groups. Access groups consist of rules regarding which users can access specific business entities. Working with groups lets you control view and configuration access to Defender for IoT for specific user roles at relevant business units, regions, and sites.
-
-For example, allow security analysts from an Active Directory group to access all West European automotive and glass production lines, along with a plastics line in one region.
--
-Before you create access groups, we recommend that you:
--- Carefully set up your business topology. For more information about business topology, see [Work with site map views](how-to-gain-insight-into-global-regional-and-local-threats.md#work-with-site-map-views).--- Plan which users are associated with the access groups that you create. Two options are available for assigning users to access groups:-
- - **Assign groups of Active Directory groups**: Verify that you set up an Active Directory instance to integrate with the on-premises management console.
-
- - **Assign local users**: Verify that you created users. For more information, see [Define users](how-to-create-and-manage-users.md#define-users).
-
-Admin users can't be assigned to access groups. These users have access to all business topology entities by default.
-
-## Create access groups
-
-This section describes how to create access groups. Default global business units and regions are created for the first group that you create. You can edit the default entities when you define your first group.
-
-To create groups:
-
-1. Select **Access Groups** from the side menu of the on-premises management console.
-
-2. Select :::image type="icon" source="media/how-to-define-global-user-access-control/add-icon.png" border="false":::. In the **Add Access Group** dialog box, enter a name for the access group. The console supports 64 characters. Assign the name in a way that will help you easily distinguish this group from other groups.
-
- :::image type="content" source="media/how-to-define-global-user-access-control/add-access-group.png" alt-text="The Add Access Group dialog box where you create access groups.":::
-
-3. If the **Assign an Active Directory Group** option appears, you can assign one Active Directory group of users to this access group.
-
- :::image type="content" source="media/how-to-define-global-user-access-control/add-access-group.png" alt-text="Assign an Active Directory group in the Create Access Group dialog box.":::
-
- If the option doesn't appear, and you want to include Active Directory groups in access groups, select **System Settings**. On the **Integrations** pane, define the groups. Enter a group name exactly as it appears in the Active Directory configurations, and in lowercase.
-
-5. On the **Users** pane, assign as many users as required to the group. You can also assign users to different groups. If you work this way, you must create and save the access group and rules, and then assign users to the group from the **Users** pane.
-
- :::image type="content" source="media/how-to-define-global-user-access-control/role-management.png" alt-text="Manage your users' roles and assign them as needed.":::
-
-6. Create rules in the **Add Rules for *name*** dialog box based on your business topology's access requirements. Options that appear here are based on the topology that you designed in the **Enterprise View** and **Site Management** windows.
-
- You can create more than one rule per group. Multiple rules are typically needed when you're working with complex access granularity at multiple sites.
-
- :::image type="content" source="media/how-to-define-global-user-access-control/add-rule.png" alt-text="Add a rule to your system.":::
-
-The rules that you create appear in the **Add Access Group** dialog box. There, you can delete or edit them.
--
-### About rules
-
-When you're creating rules, be aware of the following information:
-- When an access group contains several rules, the rule logic aggregates all rules. That is, the rules use AND logic, not OR logic.
-
-- For a rule to be successfully applied, you must assign sensors to zones in the **Site Management** window.
-
-- You can assign only one element per rule. For example, you can assign one business unit, one region, and one site for each rule. Create more rules for the group if, for example, you want users in one Active Directory group to have access to different business units in different regions.
-
-- If you change an entity and the change affects the rule logic, the rule will be deleted. If changes made to a topology entity affect the rule logic such that all rules are deleted, the access group remains, but users can't sign in to the on-premises management console. Users are notified to contact their administrator to sign in.
-
-- If no business unit or region is selected, users will have access to all defined business units and regions.
-
-## Next steps
-
-For more information, see [About Defender for IoT console users](how-to-create-and-manage-users.md).
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
This procedure describes how to install OT sensor software on a physical or virt
Save the usernames and passwords listed, as the passwords are unique and this is the only time that the credentials appear. Copy the credentials to a safe place so that you can use them when signing in to the sensor for the first time.
+ For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+ Select `<Ok>` when you're ready to continue. The installation continues running, and then the machine reboots when the installation is complete. Upon reboot, you're prompted to enter credentials to sign in. For example:
This procedure describes how to install OT sensor software on a physical or virt
Make sure that your sensor is connected to your network, and then you can sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor). + # [On-premises management console](#tab/on-prem)
During the installation process, you can add a secondary NIC. If you choose not
1. Accept the settings and continue by typing `Y`.
-1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
+1. After about 10 minutes, the two sets of credentials appear. For example:
 :::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Copy these credentials as they won't be presented again."::: Save the usernames and passwords; you'll need these credentials to access the platform the first time you use it.
+ For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+ 1. Select **Enter** to continue. For information on how to find the physical port on your appliance, see [Find your port](#find-your-port).
This command will cause the light on the port to flash for the specified time pe
After you've finished installing OT monitoring software on your appliance, test your system to make sure that processes are running correctly. The same validation process applies to all appliance types.
-System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the **Support** and **CyberX** users.
+System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the *support* and *cyberx* users.
After installing OT monitoring software, make sure to run the following tests:
The interface between the IT firewall, on-premises management console, and the O
**To enable tunneling access for sensors**:
-1. Sign in to the on-premises management console's CLI with the **CyberX** or the **Support** user credentials.
+1. Sign in to the on-premises management console's CLI with the *cyberx* or the *support* user credentials. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
1. Enter `sudo cyberx-management-tunnel-enable`.
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
This section describes how to ensure connection between the sensor and the on-pr
8. In the on-premises management console, in the **Site Management** window, assign the sensor to a site and zone.
-Continue with additional configurations, such as adding users, configuring forwarding exclusion rules and more. For example, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md), [About Defender for IoT console users](how-to-create-and-manage-users.md), or [Forward alert information](how-to-forward-alert-information-to-partners.md).
+Continue with additional configurations, such as [adding users](manage-users-on-premises-management-console.md), [configuring forwarding exclusion rules](how-to-forward-alert-information-to-partners.md), and more. For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).
## Change the name of a sensor
Clearing data deletes all detected or learned data on the sensor. After clearing
**To clear system data**:
-1. Sign in to the sensor as the **cyberx** user.
+1. Sign in to the sensor as the *cyberx* user.
1. Select **Support** > **Clear data**.
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
If you're updating your OT sensor version from a legacy version to 22.1.x or hig
Make sure that you've started with the relevant update steps for this update. For more information, see [Update OT system software](update-ot-software.md). > [!NOTE]
-> After upgrading to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
+> After upgrading to version 22.1.x, the new upgrade log is available on the sensor at the following path: `/opt/sensor/logs/legacy-upgrade.log`. To access the upgrade log, sign in to the sensor via SSH as the *cyberx_host* user.
>
+> For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+ ## Understand sensor health (Public preview)
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
This section provides troubleshooting for common issues when preparing your netw
1. Connect a monitor and a keyboard to the appliance.
- 1. Use the **support** user and password to sign in.
+ 1. Use the *support* user and password to sign in.
1. Use the command **network list** to see the current IP address.
This section provides troubleshooting for common issues when preparing your netw
1. Connect with a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-2. Use the **support** credentials to sign in.
+2. Use the *support* credentials to sign in.
3. Use the **system sanity** command and check that all processes are running.
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Check your system health from the sensor or on-premises management console.
**To access the system health tool**:
-1. Sign in to the sensor or on-premises management console with the **Support** user credentials.
+1. Sign in to the sensor or on-premises management console with the *support* user credentials.
1. Select **System Statistics** from the **System Settings** window.
Verify that the system is up and running prior to testing the system's sanity.
**To test the system's sanity**:
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*.
1. Enter `system sanity`.
Verify that the correct version is used:
**To check the system's version**:
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*.
1. Enter `system version`.
Verify that all the input interfaces configured during the installation process
**To validate the system's network status**:
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the **Support** user.
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the *support* user.
1. Enter `network list` (the equivalent of the Linux command `ifconfig`).
Verify that you can access the console web GUI:
1. Connect a monitor and a keyboard to the appliance.
- 1. Use the **Support** user and password to sign in.
+ 1. Use the *support* user and password to sign in.
1. Use the command `network list` to see the current IP address.
Verify that you can access the console web GUI:
1. To apply the settings, select **Y**.
-1. After restart, connect with the **Support** user credentials and use the `network list` command to verify that the parameters were changed.
+1. After restart, connect with the *support* user credentials and use the `network list` command to verify that the parameters were changed.
1. Try to ping and connect from the GUI again.
Verify that you can access the console web GUI:
1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-1. Use the **Support** user credentials to sign in.
+1. Use the *support* user credentials to sign in.
1. Use the `system sanity` command and check that all processes are running. For example:
When signing into a preconfigured sensor for the first time, you'll need to perf
1. Select **Next**, and your user and system-generated password for your management console will then appear. > [!NOTE]
- > When you sign in to a sensor or on-premises management console for the first time it will be linked to the subscription you connected it to. If you need to reset the password for the CyberX, or Support user you will need to select that subscription. For more information on recovering a CyberX, or Support user password, see [Recover the password for the on-premises management console, or the sensor](how-to-create-and-manage-users.md#recover-the-password-for-the-on-premises-management-console-or-the-sensor).
+ > When you sign in to a sensor or on-premises management console for the first time, it's linked to your Azure subscription, which you'll need in order to recover the password for the *cyberx* or *support* user. For more information, see the relevant procedure for [sensors](manage-users-sensor.md#recover-privileged-access-to-a-sensor) or an [on-premises management console](manage-users-on-premises-management-console.md#recover-privileged-access-to-an-on-premises-management-console).
### Investigate a lack of traffic
You may also want to export your audit logs to send them to the support team for
1. Exported audit logs are encrypted for your security, and require a password to open. In the **Archived Files** list, select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button for your exported logs to view its password. If you're forwarding the audit logs to the support team, make sure to send the password to support separately from the exported logs.
-For more information, see [View audit log data on the on-premises management console](how-to-create-and-manage-users.md#view-audit-log-data-on-the-on-premises-management-console).
+For more information, see [Track on-premises user activity](track-user-activity.md).
## Next steps
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
A variety of map tools help you gain insight into devices and connections of int
- [Group highlight and filters tools](#group-highlight-and-filters-tools) - [Map display tools](#map-display-tools)
-Your user role determines which tools are available in the Device Map window. See [Create and manage users](how-to-create-and-manage-users.md) for details about user roles.
+Your user role determines which tools are available in the Device Map window. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md) and [Create and manage users on an OT network sensor](manage-users-sensor.md).
### Basic search tools
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
Title: Update threat intelligence data description: The threat intelligence data package is provided with each new Defender for IoT version, or if needed between releases. Previously updated : 06/02/2022 Last updated : 11/16/2022 # Threat intelligence research and packages+ ## Overview Security teams at Microsoft carry out proprietary ICS threat intelligence and vulnerability research. These teams include MSTIC (Microsoft Threat Intelligence Center), DART (Microsoft Detection and Response Team), DCU (Digital Crimes Unit), and Section 52 (IoT/OT/ICS domain experts that track ICS-specific zero-days, reverse-engineering malware, campaigns, and adversaries)
You can change the sensor threat intelligence update mode after initial onboardi
Packages can be downloaded from the Azure portal and manually uploaded to individual sensors. If the on-premises management console manages your sensors, you can download threat intelligence packages to the management console and push them to multiple sensors simultaneously. This option is available for both *cloud connected* and *locally managed* sensors. [!INCLUDE [root-of-trust](includes/root-of-trust.md)] - **To upload to a single sensor:**
-1. Go to the Microsoft Defender for IoT **Updates** page.
+1. In Defender for IoT on the Azure portal, go to the **Get started** > **Updates** tab.
-2. Download and save the **Threat Intelligence** package.
+1. In the **Sensor threat intelligence update** box, select **Download file** to download the latest threat intelligence package.
-3. Sign in to the sensor console.
+1. Sign in to the sensor console, and then select **System settings** > **Threat intelligence**.
-4. On the side menu, select **System Settings**.
+1. In the **Threat intelligence** pane, select **Upload file**. For example:
-5. Select **Threat Intelligence Data**, and then select **Update**.
+ :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/update-threat-intelligence-single-sensor.png" alt-text="Screenshot of where you can upload Threat Intelligence package to a single sensor." lightbox="media/how-to-work-with-threat-intelligence-packages/update-threat-intelligence-single-sensor.png":::
-6. Upload the new package.
+1. Browse to and select the package you'd downloaded from the Azure portal and upload it to the sensor.
**To upload to multiple sensors simultaneously:**
-1. Go to the Microsoft Defender for IoT **Updates** page.
+1. In Defender for IoT on the Azure portal, go to the **Get started** > **Updates** tab.
+
+1. In the **Sensor threat intelligence update** box, select **Download file** to download the latest threat intelligence package.
+
+1. Sign in to the management console and select **System settings**.
+
+1. In the **Sensor Engine Configuration** area, select the sensors that you want to receive the updated packages. For example:
-2. Download and save the **Threat Intelligence** package.
+ :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/update-threat-intelligence-multiple-sensors.png" alt-text="Screenshot of where you can select which sensors you want to make changes to." lightbox="media/how-to-work-with-threat-intelligence-packages/update-threat-intelligence-multiple-sensors.png":::
-3. Sign in to the management console.
+1. In the **Sensor Threat Intelligence Data** section, select the plus sign (**+**).
-4. On the side menu, select **System Settings**.
+1. In the **Upload File** dialog, select **BROWSE FILE...** to browse to and select the update package. For example:
-5. In the **Sensor Engine Configuration** section, select the sensors that should receive the updated packages.
+ :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/upload-threat-intelligence-to-management-console.png" alt-text="Screenshot of where you can upload a Threat Intelligence package to multiple sensors." lightbox="media/how-to-work-with-threat-intelligence-packages/upload-threat-intelligence-to-management-console.png":::
-6. In the **Select Threat Intelligence Data** section, select the plus sign (**+**).
+1. Select **CLOSE** and then **SAVE CHANGES** to push the threat intelligence update to all selected sensors.
-7. Upload the package.
+ :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/save-changes-management-console.png" alt-text="Screenshot of where you can save changes made to selected sensors on the management console." lightbox="media/how-to-work-with-threat-intelligence-packages/save-changes-management-console.png":::
## Review package update status on the sensor
Review the following information about threat intelligence packages for your clo
1. Review the **Threat Intelligence version** installed on each sensor. Version naming is based on the day the package was built by Defender for IoT.
-1. Review the **Threat Intelligence mode** . *Automatic* indicates that newly available packages will be automatically installed on sensors as they're released by Defender for IoT.
+1. Review the **Threat Intelligence mode**. *Automatic* indicates that newly available packages will be automatically installed on sensors as they're released by Defender for IoT.
*Manual* indicates that you can push newly available packages directly to sensors as needed.
Review the following information about threat intelligence packages for your clo
- Update Available - Ok
-If cloud connected threat intelligence updates fail, review connection information in the **Sensor status** and **Last connected UTC** columns in the **Sites and Sensors** page.
+If cloud connected threat intelligence updates fail, review connection information in the **Sensor status** and **Last connected UTC** columns in the **Sites and Sensors** page.
## Next steps
defender-for-iot Integrate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-with-active-directory.md
- Title: Integrate with Active Directory - Microsoft Defender for IoT
-description: Configure the sensor or on-premises management console to work with Active Directory.
Previously updated : 05/17/2022---
-# Integrate with Active Directory servers
-
-Configure the sensor or on-premises management console to work with Active Directory. This allows Active Directory users to access the Microsoft Defender for IoT consoles by using their Active Directory credentials.
-
-> [!Note]
-> LDAP v3 is supported.
-
-Two types of LDAP-based authentication are supported:
-- **Full authentication**: User details are retrieved from the LDAP server. Examples are the first name, last name, email, and user permissions.
-
-- **Trusted user**: Only the user password is retrieved. Other user details that are retrieved are based on users defined in the sensor.
-
-For more information, see [networking requirements](how-to-set-up-your-network.md#other-firewall-rules-for-external-services-optional).
-
-## Active Directory and Defender for IoT permissions
-
-You can associate Active Directory groups defined here with specific permission levels. For example, configure a specific Active Directory group and assign Read-only permissions to all users in the group.
-
-## Active Directory configuration guidelines
-- You must define the LDAP parameters here exactly as they appear in Active Directory.
-- For all the Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Active Directory use uppercase.
-- You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
-
-**To configure Active Directory**:
-
-1. From the left pane, select **System Settings**.
-1. Select **Integrations** and then select **Active Directory**.
-
-1. Enable the **Active Directory Integration Enabled** toggle.
-
-1. Set the Active Directory server parameters, as follows:
-
- | Server parameter | Description |
- |--|--|
- | Domain controller FQDN | Set the fully qualified domain name (FQDN) exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
- | Domain controller port | Define the port on which your LDAP is configured. |
- | Primary domain | Set the domain name (for example, `subdomain.domain.com`) |
- | Connection type | Set the authentication type: LDAPS/NTLMv3 (Recommended), LDAP/NTLMv3 or LDAP/SASL-MD5 |
- | Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. You can enter a group name that you'll associate with Admin, Security Analyst and Read-only permission levels. Use these groups when creating new sensor users.|
- | Trusted endpoints | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted endpoints only for users who were defined under users. |
-
- ### Active Directory groups for the on-premises management console
-
- If you're creating Active Directory groups for on-premises management console users, you must create an Access Group rule for each Active Directory group. On-premises management console Active Directory credentials won't work if an Access Group rule doesn't exist for the Active Directory user group. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
-
-1. Select **Save**.
-
-1. To add a trusted server, select **Add Server** and configure another server.
--
-## Next steps
-
-For more information, see [how to create and manage users](./how-to-create-and-manage-users.md).
defender-for-iot Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md
+
+ Title: Investigate and detect threats for IoT devices | Microsoft Docs
+description: This tutorial describes how to use the Microsoft Sentinel data connector and solution for Microsoft Defender for IoT to secure your entire OT environment. Detect and respond to OT threats, including multistage attacks that may cross IT and OT boundaries.
+ Last updated : 09/18/2022++
+# Tutorial: Investigate and detect threats for IoT devices
+
+The integration between Microsoft Defender for IoT and [Microsoft Sentinel](/azure/sentinel/) enables SOC teams to efficiently and effectively detect and respond to Operational Technology (OT) threats. Enhance your security capabilities with the [Microsoft Defender for IoT solution](/azure/sentinel/sentinel-solutions-catalog#domain-solutions), a set of bundled content configured specifically for Defender for IoT data that includes analytics rules, workbooks, and playbooks.
+
+While Defender for IoT supports both Enterprise IoT and OT networks, the **Microsoft Defender for IoT** solution supports OT networks only.
+
+In this tutorial, you:
+
+> [!div class="checklist"]
+>
+> * Install the **Microsoft Defender for IoT** solution in your Microsoft Sentinel workspace
+> * Learn how to investigate Defender for IoT alerts in Microsoft Sentinel incidents
+> * Learn about the analytics rules, workbooks, and playbooks deployed to your Microsoft Sentinel workspace with the **Microsoft Defender for IoT** solution
+
+> [!IMPORTANT]
+>
+> The Microsoft Sentinel content hub experience is currently in **PREVIEW**, as is the **Microsoft Defender for IoT** solution. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+Before you start, make sure you have:
+
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](/azure/sentinel/roles).
+
+- Completed [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md).
+
+## Install the Defender for IoT solution
+
+Microsoft Sentinel [solutions](/azure/sentinel/sentinel-solutions) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process.
+
+The **Microsoft Defender for IoT** solution integrates Defender for IoT data with Microsoft Sentinel's security orchestration, automation, and response (SOAR) capabilities by providing out-of-the-box and OT-optimized playbooks for automated response and prevention capabilities.
+
+**To install the solution**:
+
+1. In Microsoft Sentinel, under **Content management**, select **Content hub** and then locate the **Microsoft Defender for IoT** solution.
+
+1. At the bottom right, select **View details**, and then **Create**. Select the subscription, resource group, and workspace where you want to install the solution, and then review the related security content that will be deployed.
+
+1. When you're done, select **Review + Create** to install the solution.
+
+For more information, see [About Microsoft Sentinel content and solutions](/azure/sentinel/sentinel-solutions) and [Centrally discover and deploy out-of-the-box content and solutions](/azure/sentinel/sentinel-solutions-deploy).
+
+## Detect threats out-of-the-box with Defender for IoT data
+
+The **Microsoft Defender for IoT** data connector includes a default *Microsoft Security* rule named **Create incidents based on Azure Defender for IOT alerts**, which automatically creates new incidents for any new Defender for IoT alerts detected.
+
+The **Microsoft Defender for IoT** solution includes a more detailed set of out-of-the-box analytics rules, which are built specifically for Defender for IoT data and fine-tune the incidents created in Microsoft Sentinel for relevant alerts.
+
+**To use out-of-the-box Defender for IoT alerts**:
+
+1. On the Microsoft Sentinel **Analytics** page, search for and disable the **Create incidents based on Azure Defender for IOT alerts** rule. This step prevents duplicate incidents from being created in Microsoft Sentinel for the same alerts.
+
+1. Search for and enable any of the following out-of-the-box analytics rules, installed with the **Microsoft Defender for IoT** solution:
+
+ | Rule Name | Description|
+ | - | -|
+ | **Illegal function codes for ICS/SCADA traffic** | Illegal function codes in supervisory control and data acquisition (SCADA) equipment may indicate one of the following: <br><br>- Improper application configuration, such as due to a firmware update or reinstallation. <br>- Malicious activity. For example, a cyber threat that attempts to use illegal values within a protocol to exploit a vulnerability in the programmable logic controller (PLC), such as a buffer overflow. |
+ | **Firmware update** | Unauthorized firmware updates may indicate malicious activity on the network, such as a cyber threat that attempts to manipulate PLC firmware to compromise PLC function. |
+ | **Unauthorized PLC changes** | Unauthorized changes to PLC ladder logic code may be one of the following: <br><br>- An indication of new functionality in the PLC. <br>- Improper configuration of an application, such as due to a firmware update or reinstallation. <br>- Malicious activity on the network, such as a cyber threat that attempts to manipulate PLC programming to compromise PLC function. |
+ | **PLC insecure key state** | The new mode may indicate that the PLC is not secure. Leaving the PLC in an insecure operating mode may allow adversaries to perform malicious activities on it, such as a program download. <br><br>If the PLC is compromised, devices and processes that interact with it may be impacted, which may affect overall system security and safety. |
+ | **PLC stop** | The PLC stop command may indicate an improper configuration of an application that has caused the PLC to stop functioning, or malicious activity on the network. For example, a cyber threat that attempts to manipulate PLC programming to affect the functionality of the network. |
+ | **Suspicious malware found in the network** | Suspicious malware found on the network indicates that suspicious malware is trying to compromise production. |
+ | **Multiple scans in the network** | Multiple scans on the network can be an indication of one of the following: <br><br>- A new device on the network <br>- New functionality of an existing device <br>- Misconfiguration of an application, such as due to a firmware update or reinstallation <br>- Malicious activity on the network for reconnaissance |
+ | **Internet connectivity** | An OT device communicating with internet addresses may indicate an improper application configuration, such as anti-virus software attempting to download updates from an external server, or malicious activity on the network. |
+ | **Unauthorized device in the SCADA network** | An unauthorized device on the network may be a legitimate, new device recently installed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **Unauthorized DHCP configuration in the SCADA network** | An unauthorized DHCP configuration on the network may indicate a new, unauthorized device operating on the network. <br><br>This may be a legitimate, new device recently deployed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **Excessive login attempts** | Excessive sign in attempts may indicate improper service configuration, human error, or malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **High bandwidth in the network** | An unusually high bandwidth may be an indication of a new service/process on the network, such as backup, or an indication of malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **Denial of Service** | This alert detects attacks that would prevent the use or proper operation of the DCS system. |
+ | **Unauthorized remote access to the network** | Unauthorized remote access to the network can compromise the target device. <br><br> This means that if another device on the network is compromised, the target devices can be accessed remotely, increasing the attack surface. |
+ | **No traffic on Sensor Detected** | A sensor that no longer detects network traffic indicates that the system may be insecure. |
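+
+Before enabling these rules, you may find it helpful to check which Defender for IoT alert types actually appear in your workspace. The following query is a minimal sketch, assuming your alerts are already streaming into the **SecurityAlert** table as described in [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md):
+
+```kusto
+// Count Defender for IoT alerts by name and severity over the last 30 days
+SecurityAlert
+| where TimeGenerated > ago(30d)
+| where ProductName == "Azure Security Center for IoT"
+| summarize AlertCount = count() by AlertName, AlertSeverity
+| sort by AlertCount desc
+```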
+
+## Investigate Defender for IoT incidents
+
+After you've [configured your Defender for IoT data to trigger new incidents in Microsoft Sentinel](#detect-threats-out-of-the-box-with-defender-for-iot-data), start investigating those incidents in Microsoft Sentinel as you would other incidents.
+
+**To investigate Microsoft Defender for IoT incidents**:
+
+1. In Microsoft Sentinel, go to the **Incidents** page.
+
+1. Above the incident grid, select the **Product name** filter and clear the **Select all** option. Then, select **Microsoft Defender for IoT** to view only incidents triggered by Defender for IoT alerts. For example:
+
+ :::image type="content" source="media/iot-solution/filter-incidents-defender-for-iot.png" alt-text="Screenshot of filtering incidents by product name for Defender for IoT devices.":::
+
+1. Select a specific incident to begin your investigation.
+
+ In the incident details pane on the right, view details such as incident severity, a summary of the entities involved, any mapped MITRE ATT&CK tactics or techniques, and more.
+
+ :::image type="content" source="media/iot-solution/investigate-iot-incidents.png" alt-text="Screenshot of a Microsoft Defender for IoT incident in Microsoft Sentinel.":::
+
+ > [!TIP]
+ > To investigate the incident in Defender for IoT, select the **Investigate in Microsoft Defender for IoT** link at the top of the incident details pane.
+
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](/azure/sentinel/investigate-cases).
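+
+If you prefer to triage from the **Logs** page instead of the incident grid, you can approximate the same **Product name** filter in KQL. The following query is only a sketch, assuming the default `SecurityIncident` and `SecurityAlert` schemas, where the incident's `AlertIds` values map to `SystemAlertId` on the alert:
+
+```kusto
+// List the latest state of incidents whose alerts were generated by Defender for IoT
+SecurityIncident
+| summarize arg_max(TimeGenerated, *) by IncidentNumber    // keep only the latest record per incident
+| mv-expand AlertId = todynamic(AlertIds) to typeof(string)
+| join kind=inner (
+    SecurityAlert
+    | where ProductName == "Azure Security Center for IoT"
+    | project SystemAlertId, AlertName, AlertSeverity
+  ) on $left.AlertId == $right.SystemAlertId
+| project IncidentNumber, Title, Severity, Status, AlertName
+| sort by IncidentNumber desc
+```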
+
+### Investigate further with IoT device entities
+
+When investigating an incident in Microsoft Sentinel, in an incident details pane, select an IoT device entity from the **Entities** list to open its device entity page. You can identify an IoT device by the IoT device icon: :::image type="icon" source="media/iot-solution/iot-device-icon.png" border="false":::
+
+If you don't see your IoT device entity right away, select **View full details** under the entities listed to open the full incident page. In the **Entities** tab, select an IoT device to open its entity page. For example:
+
+ :::image type="content" source="media/iot-solution/incident-full-details-iot-device.png" alt-text="Screenshot of a full detail incident page.":::
+
+The IoT device entity page provides contextual device information, with basic device details and device owner contact information. The device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example:
++
+For more information on entity pages, see [Investigate entities with entity pages in Microsoft Sentinel](/azure/sentinel/entity-pages).
+
+You can also hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name:
++
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](/azure/sentinel/investigate-cases).
+
+## Visualize and monitor Defender for IoT data
+
+To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution.
+
+The Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
+
+View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](/azure/sentinel/get-visibility).
+
+The following table describes the workbooks included in the **Microsoft Defender for IoT** solution:
+
+|Workbook |Description |Logs |
+||||
+|**Overview** | Dashboard displaying a summary of key metrics for device inventory, threat detection and vulnerabilities. | Uses data from Azure Resource Graph (ARG) |
+|**Device Inventory** | Displays data such as: OT device name, type, IP address, MAC address, Model, OS, Serial Number, Vendor, Protocols, Open alerts, and CVEs and recommendations per device. Can be filtered by site, zone, and sensor. | Uses data from Azure Resource Graph (ARG) |
+|**Incidents** | Displays data such as: <br><br>- Incident Metrics, Topmost Incident, Incident over time, Incident by Protocol, Incident by Device Type, Incident by Vendor, and Incident by IP address.<br><br>- Incident by Severity, Incident Mean time to respond, Incident Mean time to resolve and Incident close reasons. | Uses data from the following log: `SecurityAlert` |
+|**Alerts** | Displays data such as: Alert Metrics, Top Alerts, Alert over time, Alert by Severity, Alert by Engine, Alert by Device Type, Alert by Vendor and Alert by IP address. | Uses data from Azure Resource Graph (ARG) |
+|**MITRE ATT&CK® for ICS** | Displays data such as: Tactic Count, Tactic Details, Tactic over time, Technique Count. | Uses data from the following log: `SecurityAlert` |
+|**Vulnerabilities** | Displays vulnerabilities and CVEs for vulnerable devices. Can be filtered by device site and CVE severity. | Uses data from Azure Resource Graph (ARG) |
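+
+To explore the same data outside the workbooks, you can run similar aggregations yourself in the **Logs** page. For example, the following query is a rough sketch of the tactic count shown in the **MITRE ATT&CK® for ICS** workbook, assuming the standard `SecurityAlert` schema where `Tactics` is a comma-separated list:
+
+```kusto
+// Count Defender for IoT alerts by MITRE ATT&CK tactic
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| where isnotempty(Tactics)
+| mv-expand RawTactic = split(Tactics, ",") to typeof(string)
+| extend Tactic = trim(" ", RawTactic)
+| summarize TacticCount = count() by Tactic
+| sort by TacticCount desc
+```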
+
+## Automate response to Defender for IoT alerts
+
+Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response; it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
+
+The [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution includes out-of-the-box playbooks that provide the following functionality:
+
+- [Automatically close incidents](#automatically-close-incidents)
+- [Send email notifications by production line](#send-email-notifications-by-production-line)
+- [Create a new ServiceNow ticket](#create-a-new-servicenow-ticket)
+- [Update alert statuses in Defender for IoT](#update-alert-statuses-in-defender-for-iot)
+- [Automate workflows for incidents with active CVEs](#automate-workflows-for-incidents-with-active-cves)
+- [Send email to the IoT/OT device owner](#send-email-to-the-iotot-device-owner)
+- [Triage incidents involving highly important devices](#triage-incidents-involving-highly-important-devices)
+
+Before using the out-of-the-box playbooks, make sure to perform the prerequisite steps as listed [below](#playbook-prerequisites).
+
+For more information, see:
+
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](/azure/sentinel/tutorial-respond-threats-playbook)
+- [Automate threat response with playbooks in Microsoft Sentinel](/azure/sentinel/automate-responses-with-playbooks)
+
+### Playbook prerequisites
+
+Before using the out-of-the-box playbooks, make sure you perform the following prerequisites, as needed for each playbook:
+
+- [Ensure valid playbook connections](#ensure-valid-playbook-connections)
+- [Add a required role to your subscription](#add-a-required-role-to-your-subscription)
+- [Connect your incidents, relevant analytics rules, and the playbook](#connect-your-incidents-relevant-analytics-rules-and-the-playbook)
+
+#### Ensure valid playbook connections
+
+This procedure helps ensure that each connection step in your playbook has valid connections, and is required for all solution playbooks.
+
+**To ensure valid connections**:
+
+1. In Microsoft Sentinel, open the playbook from **Automation** > **Active playbooks**.
+
+1. Select a playbook to open it as a Logic app.
+
+1. With the playbook opened as a Logic app, select **Logic app designer**. Expand each step in the logic app to check for invalid connections, which are indicated by an orange warning triangle. For example:
+
+ :::image type="content" source="media/iot-solution/connection-steps.png" alt-text="Screenshot of the default AD4IOT AutoAlertStatusSync playbook." lightbox="media/iot-solution/connection-steps.png":::
+
+ > [!IMPORTANT]
+ > Make sure to expand each step in the logic app. Invalid connections may be hiding inside other steps.
+
+1. Select **Save**.
+
+#### Add a required role to your subscription
+
+This procedure describes how to add a required role to the Azure subscription where the playbook is installed, and is required only for the following playbooks:
+
+- [AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot)
+- [AD4IoT-CVEAutoWorkflow](#automate-workflows-for-incidents-with-active-cves)
+- [AD4IoT-SendEmailtoIoTOwner](#send-email-to-the-iotot-device-owner)
+- [AD4IoT-AutoTriageIncident](#triage-incidents-involving-highly-important-devices)
+
+Required roles differ per playbook, but the steps remain the same.
+
+**To add a required role to your subscription**:
+
+1. In Microsoft Sentinel, open the playbook from **Automation** > **Active playbooks**.
+
+1. Select a playbook to open it as a Logic app.
+
+1. With the playbook opened as a Logic app, select **Identity > System assigned**, and then in the **Permissions** area, select the **Azure role assignments** button.
+
+1. In the **Azure role assignments** page, select **Add role assignment**.
+
+1. In the **Add role assignment** pane:
+
+ 1. Define the **Scope** as **Subscription**.
+
+ 1. From the dropdown, select the **Subscription** where your playbook is installed.
+
+ 1. From the **Role** dropdown, select one of the following roles, depending on the playbook you're working with:
+
+ |Playbook name |Role |
+ |||
+ |[AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot) |Security Admin |
+ |[AD4IoT-CVEAutoWorkflow](#automate-workflows-for-incidents-with-active-cves) |Reader |
+ |[AD4IoT-SendEmailtoIoTOwner](#send-email-to-the-iotot-device-owner) |Reader |
+ |[AD4IoT-AutoTriageIncident](#triage-incidents-involving-highly-important-devices) |Reader |
+
+1. When you're done, select **Save**.
+
+#### Connect your incidents, relevant analytics rules, and the playbook
+
+This procedure describes how to configure a Microsoft Sentinel analytics rule to automatically run your playbooks based on an incident trigger, and is required for all solution playbooks.
+
+**To add your analytics rule**:
+
+1. In Microsoft Sentinel, go to **Automation** > **Automation rules**.
+
+1. To create a new automation rule, select **Create** > **Automation rule**.
+
+1. In the **Trigger** field, select one of the following triggers, depending on the playbook you're working with:
+
+ - The [AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot) playbook: Select the **When an incident is updated** trigger
+ - All other solution playbooks: Select the **When an incident is created** trigger
+
+1. In the **Conditions** area, select **If > Analytic rule name > Contains**, and then select the specific analytics rules relevant for Defender for IoT in your organization.
+
+ For example:
+
+ :::image type="content" source="media/iot-solution/automate-playbook.png" alt-text="Screenshot of a Defender for IoT alert status sync automation rule." lightbox="media/iot-solution/automate-playbook.png":::
+
+ You may be using out-of-the-box analytics rules, or you may have modified the out-of-the-box content, or created your own. For more information, see [Detect threats out-of-the-box with Defender for IoT data](#detect-threats-out-of-the-box-with-defender-for-iot-data).
+
+1. In the **Actions** area, select **Run playbook** > *playbook name*.
+
+1. Select **Run**.
+
+> [!TIP]
+> You can also manually run a playbook on demand. This can be useful in situations where you want more control over orchestration and response processes. For more information, see [Run a playbook on demand](/azure/sentinel/tutorial-respond-threats-playbook#run-a-playbook-on-demand).
+
+### Automatically close incidents
+
+**Playbook name**: AD4IoT-AutoCloseIncidents
+
+In some cases, maintenance activities generate alerts in Microsoft Sentinel that can distract a SOC team from handling the real problems. This playbook automatically closes incidents created from such alerts during a specified maintenance period, explicitly parsing the IoT device entity fields.
+
+To use this playbook:
+
+- Enter the relevant time period when the maintenance is expected to occur, and the IP addresses of any relevant assets, for example as listed in an Excel file.
+- Create a watchlist that includes all the asset IP addresses on which alerts should be handled automatically.
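+
+If you want to confirm the watchlist contents before the playbook runs, you can query it from the Microsoft Sentinel **Logs** page. The following query is a minimal sketch; `MaintenanceAssets` is a hypothetical watchlist alias whose search key is the asset IP address:
+
+```kusto
+// Inspect the asset IP addresses in the maintenance watchlist (hypothetical alias)
+_GetWatchlist('MaintenanceAssets')
+| project AssetIP = SearchKey
+```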
+
+### Send email notifications by production line
+
+**Playbook name**: AD4IoT-MailByProductionLine
+
+This playbook sends mail to notify specific stakeholders about alerts and events that occur in your environment.
+
+For example, when you have specific security teams assigned to specific product lines or geographic locations, you'll want that team to be notified about alerts that are relevant to their responsibilities.
+
+To use this playbook, create a watchlist that maps between the sensor names and the mailing addresses of each of the stakeholders you want to alert.
+
+### Create a new ServiceNow ticket
+
+**Playbook name**: AD4IoT-NewAssetServiceNowTicket
+
+Typically, the entity authorized to program a PLC is the Engineering Workstation. Therefore, attackers might create new Engineering Workstations in order to create malicious PLC programming.
+
+This playbook opens a ticket in ServiceNow each time a new Engineering Workstation is detected, explicitly parsing the IoT device entity fields.
+
+### Update alert statuses in Defender for IoT
+
+**Playbook name**: AD4IoT-AutoAlertStatusSync
+
+This playbook updates alert statuses in Defender for IoT whenever a related alert in Microsoft Sentinel has a **Status** update.
+
+This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match the status of the related incident.
+
+### Automate workflows for incidents with active CVEs
+
+**Playbook name**: AD4IoT-CVEAutoWorkflow
+
+This playbook adds active CVEs into the incident comments of affected devices. An automated triage is performed if the CVE is critical, and an email notification is sent to the device owner, as defined on the site level in Defender for IoT.
+
+To add a device owner, edit the site owner on the **Sites and sensors** page in Defender for IoT. For more information, see [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Send email to the IoT/OT device owner
+
+**Playbook name**: AD4IoT-SendEmailtoIoTOwner
+
+This playbook sends an email with the incident details to the device owner as defined on the site level in Defender for IoT, so that they can start investigating, even responding directly from the automated email. Response options include:
+
+- **Yes this is expected**. Select this option to close the incident.
+
+- **No this is NOT expected**. Select this option to keep the incident active, increase the severity, and add a confirmation tag to the incident.
+
+The incident is automatically updated based on the response selected by the device owner.
+
+To add a device owner, edit the site owner on the **Sites and sensors** page in Defender for IoT. For more information, see [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Triage incidents involving highly important devices
+
+**Playbook name**: AD4IoT-AutoTriageIncident
+
+This playbook updates the incident severity according to the importance level of the devices involved.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Visualize data](/azure/sentinel/get-visibility)
+
+> [!div class="nextstepaction"]
+> [Create custom analytics rules](/azure/sentinel/detect-threats-custom)
+
+> [!div class="nextstepaction"]
+> [Investigate incidents](/azure/sentinel/investigate-cases)
+
+> [!div class="nextstepaction"]
+> [Investigate entities](/azure/sentinel/entity-pages)
+
+> [!div class="nextstepaction"]
+> [Use playbooks with automation rules](/azure/sentinel/tutorial-respond-threats-playbook)
+
+For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+
defender-for-iot Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md
+
+ Title: Connect Microsoft Defender for IoT with Microsoft Sentinel
+description: This tutorial describes how to integrate Microsoft Sentinel and Microsoft Defender for IoT with the Microsoft Sentinel data connector to secure your entire OT environment. Detect and respond to OT threats, including multistage attacks that may cross IT and OT boundaries.
+ Last updated : 06/20/2022++
+# Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel
+
+Microsoft Defender for IoT enables you to secure your entire OT and Enterprise IoT environment, whether you need to protect existing devices or build security into new innovations.
+
+Microsoft Sentinel and Microsoft Defender for IoT help to bridge the gap between IT and OT security challenges, and to empower SOC teams with out-of-the-box capabilities to efficiently and effectively detect and respond to OT threats. The integration between Microsoft Defender for IoT and Microsoft Sentinel helps organizations to quickly detect multistage attacks, which often cross IT and OT boundaries.
+
+This connector allows you to stream Microsoft Defender for IoT data into Microsoft Sentinel, so you can view, analyze, and respond to Defender for IoT alerts, and the incidents they generate, in a broader organizational threat context.
+
+The Microsoft Sentinel integration is supported only for OT networks.
+
+In this tutorial, you will learn how to:
+
+> [!div class="checklist"]
+>
+> * Connect Defender for IoT data to Microsoft Sentinel
+> * Use Log Analytics to query Defender for IoT alert data
+
+## Prerequisites
+
+Before you start, make sure you have the following requirements on your workspace:
+
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](/azure/sentinel/roles).
+
+- **Contributor** or **Owner** permissions on the subscription you want to connect to Microsoft Sentinel.
+
+- A Defender for IoT plan on your Azure subscription with data streaming into Defender for IoT. For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md).
+
+> [!IMPORTANT]
+> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](/azure/sentinel/data-connectors-reference#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
+>
+
+## Connect your data from Defender for IoT to Microsoft Sentinel
+
+Start by enabling the **Defender for IoT** data connector to stream all your Defender for IoT events into Microsoft Sentinel.
+
+**To enable the Defender for IoT data connector**:
+
+1. In Microsoft Sentinel, under **Configuration**, select **Data connectors**, and then locate the **Microsoft Defender for IoT** data connector.
+
+1. At the bottom right, select **Open connector page**.
+
+1. On the **Instructions** tab, under **Configuration**, select **Connect** for each subscription whose alerts and device alerts you want to stream into Microsoft Sentinel.
+
+ If you've made any connection changes, it can take 10 seconds or more for the **Subscription** list to update.
+
+For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](/azure/sentinel/connect-azure-windows-microsoft-services).
+
+## View Defender for IoT alerts
+
+After you've connected a subscription to Microsoft Sentinel, you'll be able to view Defender for IoT alerts in the Microsoft Sentinel **Logs** area.
+
+1. In Microsoft Sentinel, select **Logs > AzureSecurityOfThings > SecurityAlert**, or search for **SecurityAlert**.
+
+1. Use the following sample queries to filter the logs and view alerts generated by Defender for IoT:
+
+ **To see all alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert | where ProductName == "Azure Security Center for IoT"
+ ```
+
+ **To see specific sensor alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where tostring(parse_json(ExtendedProperties).SensorId) == "<sensor_name>"
+ ```
+
+ **To see specific OT engine alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "MALWARE"
+
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "ANOMALY"
+
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "PROTOCOL_VIOLATION"
+
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "POLICY_VIOLATION"
+
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where ProductComponentName == "OPERATIONAL"
+ ```
+
+ **To see high severity alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where AlertSeverity == "High"
+ ```
+
+ **To see specific protocol alerts generated by Defender for IoT**:
+
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center for IoT"
+ | where tostring(parse_json(ExtendedProperties).Protocol) == "<protocol_name>"
+ ```
+
+> [!NOTE]
+> The **Logs** page in Microsoft Sentinel is based on Azure Monitor's Log Analytics.
+>
+> For more information, see [Log queries overview](/azure/azure-monitor/logs/log-query-overview) in the Azure Monitor documentation and the [Write your first KQL query](/training/modules/write-first-query-kusto-query-language/) Learn module.
+>
+
+### Understand alert timestamps
+
+Defender for IoT alerts, in both the Azure portal and on the sensor console, track the time an alert was first detected, last detected, and last changed.
+
+The following table describes the Defender for IoT alert timestamp fields, with a mapping to the relevant fields from Log Analytics shown in Microsoft Sentinel.
+
+|Defender for IoT field |Description | Log Analytics field |
+||||
+|**First detection** |Defines the first time the alert was detected in the network. | `StartTime` |
+|**Last detection** | Defines the last time the alert was detected in the network, and replaces the **Detection time** column.| `EndTime` |
+|**Last activity** | Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication | `TimeGenerated` |
+
+In Defender for IoT on the Azure portal and the sensor console, the **Last detection** column is shown by default. Edit the columns on the **Alerts** page to show the **First detection** and **Last activity** columns as needed.
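+
+In Microsoft Sentinel, you can project these same timestamps directly from Log Analytics. The following query is a minimal sketch based on the mapping above:
+
+```kusto
+// Map Defender for IoT alert timestamps to their Log Analytics fields
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| project AlertName,
+          FirstDetection = StartTime,     // first time the alert was detected
+          LastDetection  = EndTime,       // last time the alert was detected
+          LastActivity   = TimeGenerated  // last time the alert record was changed
+| sort by LastActivity desc
+```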
+
+For more information, see [View alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md) and [View alerts on your sensor](how-to-view-alerts.md).
+
+### Understand multiple records per alert
+
+Defender for IoT alert data is streamed to Microsoft Sentinel and stored in your Log Analytics workspace, in the **SecurityAlert** table.
+
+Records in the **SecurityAlert** table are created or updated each time an alert is generated or updated in Defender for IoT. A single alert can therefore have multiple records, such as one from when the alert was first created and another from when it was later updated.
+
+In Microsoft Sentinel, use the following query to check the records added to the **SecurityAlert** table for a single alert:
+
+```kusto
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| where VendorOriginalId == "Defender for IoT Alert ID"
+| sort by TimeGenerated desc
+```
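+
+If you want to work with only the most recent record for each alert, you can deduplicate by the alert ID. For example, the following query is a minimal sketch of that approach:
+
+```kusto
+SecurityAlert
+| where ProductName == "Azure Security Center for IoT"
+| summarize arg_max(TimeGenerated, *) by VendorOriginalId
+```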
+
+The following types of updates generate new records in the **SecurityAlert** table:
+
+- Updates for alert status or severity
+- Updates in the last detection time, such as when the same alert is detected multiple times
+- A new device is added to an existing alert
+- The device properties for an alert are updated
+
+## Next steps
+
+[Install the **Microsoft Defender for IoT** solution](iot-advanced-threat-monitoring.md) in your Microsoft Sentinel workspace.
+
+The **Microsoft Defender for IoT** solution is a set of bundled, out-of-the-box content that's configured specifically for Defender for IoT data, and includes analytics rules, workbooks, and playbooks.
+
+For more information, see:
+
+- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)
+- [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+- [Microsoft Defender for IoT solution](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview)
+- [Microsoft Defender for IoT data connector](/azure/sentinel/data-connectors-reference.md#microsoft-defender-for-iot)
+
defender-for-iot Manage Users On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-on-premises-management-console.md
+
+ Title: Create and manage users on an on-premises management console - Microsoft Defender for IoT
+description: Create and manage users on a Microsoft Defender for IoT on-premises management console.
Last updated : 09/11/2022+++
+# Create and manage users on an on-premises management console
+
+Microsoft Defender for IoT provides tools for managing on-premises user access in the [OT network sensor](manage-users-sensor.md), and the on-premises management console. Azure users are managed [at the Azure subscription level](manage-users-overview.md) using Azure RBAC.
+
+This article describes how to manage on-premises users directly on an on-premises management console.
+
+## Default privileged users
+
+By default, each on-premises management console is installed with the privileged *cyberx* and *support* users, which have access to advanced tools for troubleshooting and setup.
+
+When setting up an on-premises management console for the first time, sign in with one of these privileged users, create an initial user with an **Admin** role, and then create extra users for security analysts and read-only users.
+
+For more information, see [Install OT monitoring software](how-to-install-software.md#install-ot-monitoring-software) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+## Add new on-premises management console users
+
+This procedure describes how to create new users for an on-premises management console.
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users, and any user with the **Admin** role.
+
+**To add a user**:
+
+1. Sign in to the on-premises management console and select **Users** > **+ Add user**.
+
+1. Select **Create user** and then define the following values:
+
+ |Name |Description |
+ |||
+ |**Username** | Enter a username. |
+ |**Email** | Enter the user's email address. |
+ |**First Name** | Enter the user's first name. |
+ |**Last Name** | Enter the user's last name. |
+ |**Role** | Select a user role. For more information, see [On-premises user roles](roles-on-premises.md#on-premises-user-roles). |
+ |**Remote Sites Access Group** | Available for the on-premises management console only. <br><br> Select either **All** to assign the user to all global access groups, or **Specific** to assign them to a specific group only, and then select the group from the drop-down list. <br><br>For more information, see [Define global access permission for on-premises users](#define-global-access-permission-for-on-premises-users). |
+    |**Password** | Select the user type, either **Local** or **Active Directory User**. <br><br>For local users, enter a password for the user. Password requirements include: <br>- At least eight characters<br>- Both lowercase and uppercase alphabetic characters<br>- At least one number<br>- At least one symbol|
+
+ > [!TIP]
+ > Integrating with Active Directory lets you associate groups of users with specific permission levels. If you want to create users using Active Directory, first configure [Active Directory on the on-premises management console](#integrate-users-with-active-directory) and then return to this procedure.
+ >
+
+1. Select **Save** when you're done.
+
+Your new user is added and is listed on the on-premises management console **Users** page.
+
+**To edit a user**, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: button for the user you want to edit, and change any values as needed.
+
+**To delete a user**, select the **Delete** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-delete.png" border="false"::: button for the user you want to delete.
+
+### Change a user's password
+
+This procedure describes how **Admin** users can change local user passwords. **Admin** users can change passwords for themselves or for other **Security Analyst** or **Read Only** users. [Privileged users](#default-privileged-users) can change their own passwords, and the passwords for **Admin** users.
+
+> [!TIP]
+> If you need to recover access to a privileged user account, see [Recover privileged access to an on-premises management console](#recover-privileged-access-to-an-on-premises-management-console).
+
+**Prerequisites**: This procedure is available only for the *cyberx* or *support* users, or for users with the **Admin** role.
+
+**To reset a user's password on the on-premises management console**:
+
+1. Sign into the on-premises management console and select **Users**.
+
+1. On the **Users** page, locate the user whose password needs to be changed.
+
+1. At the right of that user row, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: button.
+
+1. In the **Edit user** pane that appears, scroll down to the **Change password** section. Enter and confirm the new password.
+
+ Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers, and one of the following symbols: **#%*+,-./:=?@[]^_{}~**
+
+1. Select **Update** when you're done.
+
+### Recover privileged access to an on-premises management console
+
+This procedure describes how to recover either the *cyberx* or *support* user password on an on-premises management console. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users only.
+
+**To recover privileged access to an on-premises management console**:
+
+1. Start signing in to your on-premises management console. On the sign-in screen, under the **Username** and **Password** fields, select **Password recovery**.
+
+1. In the **Password Recovery** dialog, select either **CyberX** or **Support** from the drop-down menu, and copy the unique identifier code that's displayed to the clipboard.
+
+1. Go to the Defender for IoT **Sites and sensors** page in the Azure portal. You may want to open the Azure portal in a new browser tab or window, keeping your on-premises management console open.
+
+ In your Azure portal settings > **Directories + subscriptions**, make sure that you've selected the subscription where your sensors were onboarded to Defender for IoT.
+
+1. On the **Sites and sensors** page, select the **More Actions** drop-down menu > **Recover on-premises management console password**.
+
+ :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text="Screenshot of the recover on-premises management console password option.":::
+
+1. In the **Recover** dialog that opens, enter the unique identifier that you've copied to the clipboard from your on-premises management console and select **Recover**. A **password_recovery.zip** file is automatically downloaded.
+
+ [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
+
+1. Back on the on-premises management console tab, in the **Password recovery** dialog, select **Upload**. Browse to and upload the **password_recovery.zip** file you downloaded from the Azure portal.
+
+ > [!NOTE]
+ > If an error message appears, indicating that the file is invalid, you may have had an incorrect subscription selected in your Azure portal settings.
+ >
+ > Return to Azure, and select the settings icon in the top toolbar. On the **Directories + subscriptions** page, make sure that you've selected the subscription where your sensors were onboarded to Defender for IoT. Then repeat the steps in Azure to download the **password_recovery.zip** file and upload it on the on-premises management console again.
+
+1. Select **Next**. A system-generated password for your on-premises management console appears for you to use for the selected user. Make sure to write the password down as it won't be shown again.
+
+1. Select **Next** again to sign into your on-premises management console.
+
+## Integrate users with Active Directory
+
+Configure an integration between your on-premises management console and Active Directory to:
+
+- Allow Active Directory users to sign in to your on-premises management console
+- Use Active Directory groups, with collective permissions assigned to all users in the group
+
+For example, use Active Directory when you have a large number of users that you want to assign Read Only access to, and you want to manage those permissions at the group level.
+
+For more information, see [Active Directory support on sensors and on-premises management consoles](manage-users-overview.md#active-directory-support-on-sensors-and-on-premises-management-consoles).
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users only, or any user with an **Admin** role.
+
+**To integrate with Active Directory**:
+
+1. Sign in to your on-premises management console and select **System Settings**.
+
+1. Scroll down to the **Management console integrations** area on the right, and then select **Active Directory**.
+
+1. Select the **Active Directory Integration Enabled** option and enter the following values for an Active Directory server:
+
+ |Field |Description |
+ |||
+ |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
+ |**Domain Controller Port** | The port on which your LDAP is configured. |
+ |**Primary Domain** | The domain name, such as `subdomain.domain.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
+    |**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br>When you enter a group name, make sure that you enter the group name as it's defined in your Active Directory configuration on the LDAP server. Then, make sure to use these groups when creating new sensor users from Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**.<br><br> Add groups as **Trusted endpoints** in a separate row from the other Active Directory groups. To add a trusted domain, add the domain name and the connection type of a trusted domain. You can configure trusted endpoints only for users who were defined under users.|
+
+ Select **+ Add Server** to add another server and enter its values as needed, and **Save** when you're done.
+
+ > [!IMPORTANT]
+ > When entering LDAP parameters:
+ >
+    > - Define values exactly as they appear in Active Directory, except for the case.
+    > - Use lowercase characters only, even if the configuration in Active Directory uses uppercase.
+ > - LDAP and LDAPS can't be configured for the same domain. However, you can configure each in different domains and then use them at the same time.
+ >
+
+1. Create access group rules for on-premises management console users.
+
+ If you configure Active Directory groups for on-premises management console users, you must also create an access group rule for each Active Directory group. Active Directory credentials won't work for on-premises management console users without a corresponding access group rule.
+
+ For more information, see [Define global access permission for on-premises users](#define-global-access-permission-for-on-premises-users).
++
+## Define global access permission for on-premises users
+
+Large organizations often have a complex user permissions model based on global organizational structures. To manage your on-premises Defender for IoT users, we recommend that you use a global business topology that's based on business units, regions, and sites, and then define user access permissions around those entities.
+
+Create *user access groups* to establish global access control across Defender for IoT on-premises resources. Each access group includes rules about the users that can access specific entities in your business topology, including business units, regions, and sites.
+
+For more information, see [On-premises global access groups](manage-users-overview.md#on-premises-global-access-groups).
+
+**Prerequisites**:
+
+This procedure is available for the *cyberx* and *support* users, and any user with the **Admin** role.
+
+Before you create access groups, we also recommend that you:
+
+- Plan which users are associated with the access groups that you create. Two options are available for assigning users to access groups:
+
+    - **Assign Active Directory groups**: Verify that you [set up an Active Directory instance](#integrate-users-with-active-directory) to integrate with the on-premises management console.
+
+    - **Assign local users**: Verify that you've [created local users](#add-new-on-premises-management-console-users).
+
+ Users with **Admin** roles have access to all business topology entities by default, and can't be assigned to access groups.
+
+- Carefully set up your business topology. For a rule to be successfully applied, you must assign sensors to zones in the **Site Management** window. For more information, see:
+
+ - [Work with site map views](how-to-gain-insight-into-global-regional-and-local-threats.md#work-with-site-map-views)
+ - [Create enterprise zones](how-to-activate-and-set-up-your-on-premises-management-console.md#create-enterprise-zones)
+ - [Assign sensors to zones](how-to-activate-and-set-up-your-on-premises-management-console.md#assign-sensors-to-zones)
+
+**To create access groups**:
+
+1. Sign in to the on-premises management console as a user with an **Admin** role.
+
+1. Select **Access Groups** from the left navigation menu, and then select **Add** :::image type="icon" source="media/how-to-define-global-user-access-control/add-icon.png" border="false":::.
+
+1. In the **Add Access Group** dialog box, enter a meaningful name for the access group, with a maximum of 64 characters.
+
+1. Select **ADD RULE**, and then select the business topology options that you want to include in the access group. The options that appear in the **Add Rule** dialog are the entities that you'd created in the **Enterprise View** and **Site Management** pages. For example:
+
+ :::image type="content" source="media/how-to-define-global-user-access-control/add-rule.png" alt-text="Screenshot of the Add Rule dialog box." lightbox="media/how-to-define-global-user-access-control/add-rule.png":::
+
+    If they don't already exist, default global business units and regions are created for the first group you create. If you don't select any business units or regions, users in the access group will have access to all business topology entities.
+
+    Each rule can include only one element per type. For example, you can assign one business unit, one region, and one site for each rule. If you want the same users to have access to multiple business units in different regions, create more rules for the group. When an access group contains several rules, the rule logic aggregates all rules using AND logic.
+
+ Any rules you create are listed in the **Add Access Group** dialog box, where you can edit them further or delete them as needed. For example:
+
+ :::image type="content" source="media/how-to-define-global-user-access-control/edit-access-groups.png" alt-text="Screenshot of the Add Access Group dialog box." lightbox="media/how-to-define-global-user-access-control/edit-access-groups.png":::
+
+1. Add users with one or both of the following methods:
+
+ - If the **Assign an Active Directory Group** option appears, assign an Active Directory group of users to this access group as needed. For example:
+
+ :::image type="content" source="media/how-to-define-global-user-access-control/add-access-group.png" alt-text="Screenshot of adding an Active Directory group to a Global Access Group." lightbox="media/how-to-define-global-user-access-control/add-access-group.png":::
+
+ If the option doesn't appear, and you want to include Active Directory groups in access groups, make sure that you've included your Active Directory group in your Active Directory integration. For more information, see [Integrate on-premises users with Active Directory](#integrate-users-with-active-directory).
+
+ - Add local users to your groups by editing existing users from the **Users** page. On the **Users** page, select the **Edit** button for the user you want to assign to the group, and then update the **Remote Sites Access Group** value for the selected user. For more information, see [Add new on-premises management console users](#add-new-on-premises-management-console-users).
++
+### Changes to topology entities
+
+If you later modify a topology entity and the change affects the rule logic, the rule is automatically deleted.
+
+If modifications to topology entities affect rule logic so that all rules are deleted, the access group remains but users won't be able to sign in to the on-premises management console. Instead, users are notified to contact their on-premises management console administrator for help signing in. [Update the settings](#add-new-on-premises-management-console-users) for these users so that they're no longer part of the legacy access group.
+
+## Control user session timeouts
+
+By default, on-premises users are signed out of their sessions after 30 minutes of inactivity. Admin users can use the local CLI to either turn this feature on or off, or to adjust the inactivity thresholds.
+For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+
+> [!NOTE]
+> Any changes made to user session timeouts are reset to defaults when you [update the OT monitoring software](update-ot-software.md).
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users only.
+
+**To control user session timeouts on the on-premises management console**:
+
+1. Sign in to your on-premises management console via a terminal and run:
+
+ ```cli
+ sudo nano /var/cyberx/properties/authentication.properties
+ ```
+
+ The following output appears:
+
+ ```cli
+ infinity_session_expiration = true
+ session_expiration_default_seconds = 0
+ # half an hour in seconds
+ session_expiration_admin_seconds = 1800
+ session_expiration_security_analyst_seconds = 1800
+ session_expiration_read_only_users_seconds = 1800
+ certifcate_validation = true
+ CRL_timeout_secounds = 3
+ CRL_retries = 1
+
+ ```
+
+1. Do one of the following (see the example after this list):
+
+ - **To turn off user session timeouts entirely**, change `infinity_session_expiration = true` to `infinity_session_expiration = false`. Change it back to turn it back on again.
+
+ - **To adjust an inactivity timeout period**, adjust one of the following values to the required time, in seconds:
+
+ - `session_expiration_default_seconds` for all users
+ - `session_expiration_admin_seconds` for *Admin* users only
+ - `session_expiration_security_analyst_seconds` for *Security Analyst* users only
+ - `session_expiration_read_only_users_seconds` for *Read Only* users only
+
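+For example, to extend only the *Admin* inactivity timeout to one hour, the edited line might look like the following sketch, assuming the same file format shown above:
+
+```cli
+# Example only: extend the Admin inactivity timeout to one hour (3600 seconds)
+session_expiration_admin_seconds = 3600
+```
+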
+## Next steps
+
+For more information, see:
+
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Audit user activity](track-user-activity.md)
defender-for-iot Manage Users Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-overview.md
+
+ Title: User management for Microsoft Defender for IoT
+description: Learn about the different options for user and user role management for Microsoft Defender for IoT.
Last updated : 11/13/2022+++
+# Microsoft Defender for IoT user management
+
+Microsoft Defender for IoT provides tools both in the Azure portal and on-premises for managing user access across Defender for IoT resources.
+
+## Azure users for Defender for IoT
+
+In the Azure portal, users are managed at the subscription level with [Azure Active Directory (AAD)](/azure/active-directory/) and [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). Azure subscription users can have one or more user roles, which determine the data and actions they can access from the Azure portal, including in Defender for IoT.
+
+Use the [portal](/azure/role-based-access-control/quickstart-assign-role-user-portal) or [PowerShell](/azure/role-based-access-control/tutorial-role-assignments-group-powershell) to assign your Azure subscription users the specific roles they'll need to view data and take action, such as whether they'll be viewing alert or device data, or managing pricing plans and sensors.
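+
+For example, the following Azure CLI command is a hypothetical sketch that assigns the built-in **Security Reader** role at the subscription scope. The user and subscription ID values are placeholders:
+
+```azurecli
+az role assignment create \
+    --assignee "analyst@contoso.com" \
+    --role "Security Reader" \
+    --scope "/subscriptions/<subscription-id>"
+```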
+
+For more information, see [Azure user roles for OT and Enterprise IoT monitoring](roles-azure.md).
+
+## On-premises users for Defender for IoT
+
+When working with OT networks, Defender for IoT services and data are also available from on-premises OT network sensors and the on-premises sensor management console, in addition to the Azure portal.
+
+You'll need to define on-premises users on both your OT network sensors and the on-premises management console, in addition to Azure. Both the OT sensors and the on-premises management console are installed with a set of default, privileged users, which you can use to define additional administrators and other users.
+
+Sign into the OT sensors to [define sensor users](manage-users-sensor.md), and sign into the on-premises management console to [define on-premises management console users](manage-users-on-premises-management-console.md).
+
+For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+### Active Directory support on sensors and on-premises management consoles
+
+You might want to configure an integration between your sensor and Active Directory to allow Active Directory users to sign in to your sensor, or to use Active Directory groups, with collective permissions assigned to all users in the group.
+
+For example, use Active Directory when you have a large number of users that you want to assign **Read Only** access to, and you want to manage those permissions at the group level.
+
+Defender for IoT's integration with Active Directory supports LDAP v3 and the following types of LDAP-based authentication:
+
+- **Full authentication**: User details are retrieved from the LDAP server. Examples are the first name, last name, email, and user permissions.
+
+- **Trusted user**: Only the user password is retrieved. Other user details are based on the users defined in the sensor.
+
+For more information, see:
+
+- [Integrate OT sensor users with Active Directory](manage-users-sensor.md#integrate-ot-sensor-users-with-active-directory)
+- [Integrate on-premises management console users with Active Directory](manage-users-on-premises-management-console.md#integrate-users-with-active-directory)
+- [Other firewall rules for external services (optional)](how-to-set-up-your-network.md#other-firewall-rules-for-external-services-optional).
++
+### On-premises global access groups
+
+Large organizations often have a complex user permissions model based on global organizational structures. To manage your on-premises Defender for IoT users, use a global business topology that's based on business units, regions, and sites, and then define user access permissions around those entities.
+
+Create user access groups to establish global access control across Defender for IoT on-premises resources. Each access group includes rules about the users that can access specific entities in your business topology, including business units, regions, and sites.
+
+For example, the following diagram shows how you can allow security analysts from an Active Directory group to access all West European automotive and glass production lines, along with a plastics line in one region:
++
+For more information, see [Define global access permission for on-premises users](manage-users-on-premises-management-console.md#define-global-access-permission-for-on-premises-users).
+
+> [!TIP]
+> Access groups and rules help to implement zero-trust strategies by controlling where users manage and analyze devices on Defender for IoT sensors and the on-premises management console. For more information, see [Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md).
+>
+
+## Next steps
+
+- [Manage Azure subscription users](/azure/role-based-access-control/quickstart-assign-role-user-portal)
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+
+For more information, see:
+
+- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
+
+ Title: Create and manage users on an OT network sensor - Microsoft Defender for IoT
+description: Create and manage on-premises users on a Microsoft Defender for IoT OT network sensor.
Last updated : 09/28/2022+++
+# Create and manage users on an OT network sensor
+
+Microsoft Defender for IoT provides tools for managing on-premises user access in the [OT network sensor](manage-users-sensor.md), and the on-premises management console. Azure users are managed [at the Azure subscription level](manage-users-overview.md) using Azure RBAC.
+
+This article describes how to manage on-premises users directly on an OT network sensor.
+
+## Default privileged users
+
+By default, each OT network sensor is installed with the privileged *cyberx*, *support*, and *cyberx_host* users, which have access to advanced tools for troubleshooting and setup.
+
+When setting up a sensor for the first time, sign in with one of these privileged users, create an initial user with an **Admin** role, and then create extra users for security analysts and read-only users.
+
+For more information, see [Install OT monitoring software](how-to-install-software.md#install-ot-monitoring-software) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+## Add new OT sensor users
+
+This procedure describes how to create new users for a specific OT network sensor.
+
+**Prerequisites**: This procedure is available for the *cyberx*, *support*, and *cyberx_host* users, and any user with the **Admin** role.
+
+**To add a user**:
+
+1. Sign in to the sensor console and select **Users** > **+ Add user**.
+
+1. On the **Create a user | Users** page, enter the following details:
+
+ |Name |Description |
+ |||
+ |**User name** | Enter a meaningful username for the user. |
+ |**Email** | Enter the user's email address. |
+ |**First Name** | Enter the user's first name. |
+ |**Last Name** | Enter the user's last name. |
+ |**Role** | Select one of the following user roles: **Admin**, **Security Analyst**, or **Read Only**. For more information, see [On-premises user roles](roles-on-premises.md#on-premises-user-roles). |
+    |**Password** | Select the user type, either **Local** or **Active Directory User**. <br><br>For local users, enter a password for the user. Password requirements include: <br>- At least eight characters<br>- Both lowercase and uppercase alphabetic characters<br>- At least one number<br>- At least one symbol<br><br>Local user passwords can only be modified by **Admin** users.|
+
+ > [!TIP]
+ > Integrating with Active Directory lets you associate groups of users with specific permission levels. If you want to create users using Active Directory, first configure [Active Directory on the sensor](manage-users-sensor.md#integrate-ot-sensor-users-with-active-directory) and then return to this procedure.
+ >
+
+1. Select **Save** when you're done.
+
+Your new user is added and is listed on the sensor **Users** page.
+
+To edit a user, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: icon for the user you want to edit, and change any values as needed.
+
+To delete a user, select the **Delete** button for the user you want to delete.
+
+## Integrate OT sensor users with Active Directory
+
+Configure an integration between your sensor and Active Directory to:
+
+- Allow Active Directory users to sign in to your sensor
+- Use Active Directory groups, with collective permissions assigned to all users in the group
+
+For example, use Active Directory when you have a large number of users that you want to assign Read Only access to, and you want to manage those permissions at the group level.
+
+For more information, see [Active Directory support on sensors and on-premises management consoles](manage-users-overview.md#active-directory-support-on-sensors-and-on-premises-management-consoles).
+
+**Prerequisites**: This procedure is available for the *cyberx* and *support* users, and any user with the **Admin** role.
+
+**To integrate with Active Directory**:
+
+1. Sign in to your OT sensor and select **System Settings** > **Integrations** > **Active Directory**.
+
+1. Toggle on the **Active Directory Integration Enabled** option.
+
+1. Enter the following values for your Active Directory server:
+
+ |Name |Description |
+ |||
+ |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
+ |**Domain Controller Port** | The port on which your LDAP is configured. |
+ |**Primary Domain** | The domain name, such as `subdomain.domain.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
+ |**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br> When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. You'll use these group names when [adding new sensor users](#add-new-ot-sensor-users) with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. |
++
+ > [!IMPORTANT]
+ > When entering LDAP parameters:
+ >
+ > - Define values exactly as they appear in Active Directory, except for the case.
+    > - Use lowercase characters only, even if the configuration in Active Directory uses uppercase.
+ > - LDAP and LDAPS can't be configured for the same domain. However, you can configure each in different domains and then use them at the same time.
+ >
+
+1. To add another Active Directory server, select **+ Add Server** at the top of the page and define those server values.
+
+1. When you've added all your Active Directory servers, select **Save**.
++
+## Change a sensor user's password
+
+This procedure describes how **Admin** users can change local user passwords. **Admin** users can change passwords for themselves or for other **Security Analyst** or **Read Only** users. [Privileged users](#default-privileged-users) can change their own passwords, and the passwords for **Admin** users.
+
+> [!TIP]
+> If you need to recover access to a privileged user account, see [Recover privileged access to a sensor](#recover-privileged-access-to-a-sensor).
+
+**Prerequisites**: This procedure is available only for the *cyberx*, *support*, or *cyberx_host* users, or for users with the **Admin** role.
+
+**To change a user's password on a sensor**:
+
+1. Sign into the sensor and select **Users**.
+
+1. On the sensor's **Users** page, locate the user whose password needs to be changed.
+
+1. At the right of that user row, select the options (**...**) menu > **Edit** to open the user pane.
+
+1. In the user pane on the right, in the **Change password** area, enter and confirm the new password. If you're changing your own password, you'll also need to enter your current password.
+
+ Password requirements include:
+
+ - At least eight characters
+ - Both lowercase and uppercase alphabetic characters
+    - At least one number
+ - At least one symbol
+
+1. Select **Save** when you're done.
+
+## Recover privileged access to a sensor
+
+This procedure describes how to recover privileged access to a sensor, for the *cyberx*, *support*, or *cyberx_host* users. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+**Prerequisites**: This procedure is available only for the *cyberx*, *support*, or *cyberx_host* users.
+
+**To recover privileged access to a sensor**:
+
+1. Start signing in to the OT network sensor. On the sign-in screen, select the **Reset** link. For example:
+
+ :::image type="content" source="media/manage-users-sensor/reset-privileged-password.png" alt-text="Screenshot of the sensor sign-in screen with the Reset password link.":::
+
+1. In the **Reset password** dialog, from the **Choose user** menu, select the user whose password you're recovering, either **Cyberx**, **Support**, or **CyberX_host**.
+
+1. Copy the unique identifier code that's shown in the **Reset password identifier** to the clipboard. For example:
+
+ :::image type="content" source="media/manage-users-sensor/password-recovery-sensor.png" alt-text="Screenshot of the Reset password dialog on the OT sensor.":::
+
+1. Go to the Defender for IoT **Sites and sensors** page in the Azure portal. You may want to open the Azure portal in a new browser tab or window, keeping your sensor tab open.
+
+ In your Azure portal settings > **Directories + subscriptions**, make sure that you've selected the subscription where your sensor was onboarded to Defender for IoT.
+
+1. On the **Sites and sensors** page, locate the sensor that you're working with, and select the options menu (**...**) on the right > **Recover my password**. For example:
+
+ :::image type="content" source="media/manage-users-sensor/recover-my-password.png" alt-text="Screenshot of the Recover my password option on the Sites and sensors page." lightbox="media/manage-users-sensor/recover-my-password.png":::
+
+1. In the **Recover** dialog that opens, enter the unique identifier that you've copied to the clipboard from your sensor and select **Recover**. A **password_recovery.zip** file is automatically downloaded.
+
+ [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
+
+1. Back on the sensor tab, on the **Password recovery** screen, select **Select file**. Navigate to and upload the **password_recovery.zip** file you'd downloaded earlier from the Azure portal.
+
+ > [!NOTE]
+ > If an error message appears, indicating that the file is invalid, you may have had an incorrect subscription selected in your Azure portal settings.
+ >
+ > Return to Azure, and select the settings icon in the top toolbar. On the **Directories + subscriptions** page, make sure that you've selected the subscription where your sensor was onboarded to Defender for IoT. Then repeat the steps in Azure to download the **password_recovery.zip** file and upload it on the sensor again.
+
+1. Select **Next**. A system-generated password for your sensor appears for you to use for the selected user. Make sure to write the password down as it won't be shown again.
+
+1. Select **Next** again to sign into your sensor with the new password.
+
+## Control user session timeouts
+
+By default, on-premises users are signed out of their sessions after 30 minutes of inactivity. Admin users can use the local CLI to either turn this feature on or off, or to adjust the inactivity thresholds.
+For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+
+> [!NOTE]
+> Any changes made to user session timeouts are reset to defaults when you [update the OT monitoring software](update-ot-software.md).
+
+**Prerequisites**: This procedure is available for the *cyberx*, *support*, and *cyberx_host* users only.
+
+**To control sensor user session timeouts**:
+
+1. Sign in to your sensor via a terminal and run:
+
+ ```cli
+ sudo nano /var/cyberx/properties/authentication.properties
+ ```
+
+ The following output appears:
+
+ ```cli
+ infinity_session_expiration=true
+ session_expiration_default_seconds=0
+ session_expiration_admin_seconds=1800
+ session_expiration_security_analyst_seconds=1800
+ session_expiration_read_only_users_seconds=1800
+ certifcate_validation=false
+ crl_timeout_secounds=3
+ crl_retries=1
+ cm_auth_token=
+
+ ```
+
+1. Do one of the following (see the example after this list):
+
+ - **To turn off user session timeouts entirely**, change `infinity_session_expiration=true` to `infinity_session_expiration=false`. Change it back to turn it back on again.
+
+ - **To adjust an inactivity timeout period**, adjust one of the following values to the required time, in seconds:
+
+ - `session_expiration_default_seconds` for all users
+ - `session_expiration_admin_seconds` for *Admin* users only
+ - `session_expiration_security_analyst_seconds` for *Security Analyst* users only
+ - `session_expiration_read_only_users_seconds` for *Read Only* users only
+
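+For example, to turn off inactivity timeouts entirely, the edited line might look like the following sketch, assuming the same file format shown above:
+
+```cli
+# Example only: keep user sessions alive indefinitely
+infinity_session_expiration=false
+```
+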
+## Next steps
+
+For more information, see:
+
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Audit user activity](track-user-activity.md)
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
This article describes CLI commands for sensors and on-premises management consoles. The commands are accessible to the following users: -- CyberX -- Support-- cyberx_host
+- `cyberx`
+- `support`
+- `cyberx_host`
+
+For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+To start working in the CLI, connect using a terminal, such as PuTTY, with one of the privileged users.
-To start working in the CLI, connect using a terminal. For example, terminal name `Putty`, and `Support` user.
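+
+For example, a hypothetical connection from a Linux or macOS terminal, where the IP address is a placeholder for your appliance:
+
+```cli
+ssh support@<appliance-ip-address>
+```
+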
## Create local alert exclusion rules
defender-for-iot Release Notes Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-sentinel.md
Title: Release notes for the Microsoft Defender for IoT solution in Microsoft Sentinel
+ Title: Microsoft Defender for IoT solution versions in Microsoft Sentinel
description: Learn about the updates available in each version of the Microsoft Defender for IoT solution, available from the Microsoft Sentinel content hub. Last updated 09/22/2022
-# Release notes for the Microsoft Defender for IoT solution in Microsoft Sentinel
+# Microsoft Defender for IoT solution versions in Microsoft Sentinel
This article lists the updates to out-of-the-box security content available from each version of the **Microsoft Defender for IoT** solution. The **Microsoft Defender for IoT** solution is available from the Microsoft Sentinel content hub.
For more information about earlier versions of the **Microsoft Defender for IoT*
## Next steps
-Learn more in [What's new in Microsoft Defender for IoT?](whats-new.md) and the [Microsoft Sentinel documentation](../../sentinel/index.yml).
+Learn more in [What's new in Microsoft Defender for IoT?](whats-new.md) and the [Microsoft Sentinel documentation](../../sentinel/index.yml).
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
+
+ Title: Azure user roles and permissions for Microsoft Defender for IoT
+description: Learn about the Azure user roles and permissions available for OT and Enterprise IoT monitoring with Microsoft Defender for IoT on the Azure portal.
Last updated : 09/19/2022+++
+# Azure user roles and permissions for Defender for IoT
+
+Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](/azure/role-based-access-control/) to provide access to Enterprise IoT monitoring services and data on the Azure portal.
+
+The built-in Azure [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), and [Owner](../../role-based-access-control/built-in-roles.md#owner) roles are relevant for use in Defender for IoT.
+
+This article provides a reference of Defender for IoT actions available for each role in the Azure portal. For more information, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
+
+## Roles and permissions reference
+
+Roles for management actions are applied to user roles across an entire Azure subscription.
+
+| Action and scope|[Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) |[Security Admin](../../role-based-access-control/built-in-roles.md#security-admin) |[Contributor](../../role-based-access-control/built-in-roles.md#contributor) | [Owner](../../role-based-access-control/built-in-roles.md#owner) |
+||||||
+| **Grant permissions to others** | - | - | - | ✔ |
+| **Onboard OT or Enterprise IoT sensors** [*](#enterprise-iot-security) | - | ✔ | ✔ | ✔ |
+| **Download OT sensor and on-premises management console software** | ✔ | ✔ | ✔ | ✔ |
+| **Download sensor activation files** | - | ✔ | ✔ | ✔ |
+| **View values on the Pricing page** [*](#enterprise-iot-security) | ✔ | ✔ | ✔ | ✔ |
+| **Modify values on the Pricing page** [*](#enterprise-iot-security) | - | ✔ | ✔ | ✔ |
+| **View values on the Sites and sensors page** [*](#enterprise-iot-security) | ✔ | ✔ | ✔ | ✔ |
+| **Modify values on the Sites and sensors page** [*](#enterprise-iot-security) | - | ✔ | ✔ | ✔ |
+| **Recover on-premises management console passwords** | - | ✔ | ✔ | ✔ |
+| **Download OT threat intelligence packages** | ✔ | ✔ | ✔ | ✔ |
+| **Push OT threat intelligence updates** | - | ✔ | ✔ | ✔ |
+| **Onboard an Enterprise IoT plan from Microsoft 365 Defender** [*](#enterprise-iot-security) | - | ✔ | - | - |
+| **View Azure alerts** | ✔ | ✔ | ✔ | ✔ |
+| **Modify Azure alerts (write access)** | - | ✔ | ✔ | ✔ |
+| **View Azure device inventory** | ✔ | ✔ | ✔ | ✔ |
+| **Manage Azure device inventory (write access)** | - | ✔ | ✔ | ✔ |
+| **View Azure workbooks** | ✔ | ✔ | ✔ | ✔ |
+| **Manage Azure workbooks (write access)** | - | ✔ | ✔ | ✔ |
+
+## Enterprise IoT security
+
+You can add, edit, or cancel an Enterprise IoT plan with [Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) from Microsoft 365 Defender. Alerts, vulnerabilities, and recommendations for Enterprise IoT networks are likewise available only from Microsoft 365 Defender.
+
+In addition to the permissions listed above, Enterprise IoT security with Defender for IoT has the following requirements:
+
+- **To add an Enterprise IoT plan**, you'll need an E5 license and specific permissions in your Microsoft 365 Defender tenant.
+- **To view Enterprise IoT devices in your Azure device inventory**, you'll need an Enterprise IoT network sensor registered.
+
+For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md).
+
+## Next steps
+
+For more information, see:
+
+- [Microsoft Defender for IoT user management](manage-users-overview.md)
+- [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md)
defender-for-iot Roles On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-on-premises.md
+
+ Title: On-premises users and roles for Defender for IoT - Microsoft Defender for IoT
+description: Learn about the on-premises user roles available for OT monitoring with Microsoft Defender for IoT network sensors and on-premises management consoles.
Last updated : 09/19/2022+++
+# On-premises users and roles for OT monitoring with Defender for IoT
+
+When working with OT networks, Defender for IoT services and data is available from on-premises OT network sensors and the on-premises sensor management consoles, in addition to Azure.
+
+This article provides:
+
+- A description of the default, privileged users that come with Defender for IoT software installation
+- A reference of the actions available for each on-premises user role, on both OT network sensors and the on-premises management console
+
+## Default privileged on-premises users
+
+By default, each sensor and on-premises management console is [installed](how-to-install-software.md#install-ot-monitoring-software) with the *cyberx* and *support* privileged users. OT sensors are also installed with the *cyberx_host* privileged user.
+
+Privileged users have access to advanced tools for troubleshooting and setup, such as the CLI. When setting up your sensor or on-premises management console for the first time, sign in with one of the privileged users. Then create an initial user with an **Admin** role, and use that admin user to create other users with other roles.
+
+The following table describes each default privileged user in detail:
+
+|Username |Connects to |Permissions |
+||||
+|**cyberx** | The sensor or on-premises management console's `sensor_app` container | Serves as a root user within the main application. <br><br>Used for troubleshooting with advanced root access.<br><br>Can access the container filesystem, commands, and dedicated CLI commands for controlling OT monitoring. <br><br>Can recover or change passwords for users with any roles. |
+|**support** | The sensor or on-premises management console's `sensor_app` container | Serves as a locked-down user shell for dedicated CLI tools.<br><br>Has no filesystem access.<br><br>Can access only dedicated CLI commands for controlling OT monitoring. <br><br>Can recover or change passwords for the *support* user, and any user with the **Admin**, **Security Analyst**, and **Read-only** roles. |
+|**cyberx_host** | The on-premises management console's host OS | Serves as a root user in the on-premises management console's host OS.<br><br>Used for support scenarios with containers and filesystem access. |
+
+## On-premises user roles
+
+The following roles are available on OT network sensors and on-premises management consoles:
+
+|Role |Description |
+|||
+|**Admin** | Admin users have access to all tools, including system configurations, creating and managing users, and more. |
+|**Security Analyst** | Security Analysts don't have admin-level permissions for configurations, but can perform actions on devices, acknowledge alerts, and use investigation tools. <br><br>Security Analysts can access options on the sensor displayed in the **Discover** and **Analyze** menus on the sensor, and in the **NAVIGATION** and **ANALYSIS** menus on the on-premises management console. |
+|**Read Only** | Read-only users perform tasks such as viewing alerts and devices on the device map. <br><br>Read Only users can access options displayed in the **Discover** and **Analyze** menus on the sensor, in read-only mode, and in the **NAVIGATION** menu on the on-premises management console. |
+
+When first deploying an OT monitoring system, sign in to your sensors and on-premises management console with one of the [default, privileged users](#default-privileged-on-premises-users) described above. Create your first **Admin** user, and then use that user to create other users and assign them to roles.
+
+Permissions applied to each role differ between the sensor and the on-premises management console. For more information, see the tables below for the permissions available for each role, on the [sensor](#role-based-permissions-for-ot-network-sensors) and on the [on-premises management console](#role-based-permissions-for-the-on-premises-management-console).
+
+## Role-based permissions for OT network sensors
+
+| Permission | Read Only | Security Analyst | Admin |
+|--|--|--|--|
+| **View the dashboard** | ✔ | ✔ | ✔ |
+| **Control map zoom views** | - | - | ✔ |
+| **View alerts** | ✔ | ✔ | ✔ |
+| **Manage alerts**: acknowledge, learn, and pin | - | ✔ | ✔ |
+| **View events in a timeline** | - | ✔ | ✔ |
+| **Authorize devices**, known scanning devices, programming devices | - | ✔ | ✔ |
+| **Merge and delete devices** | - | - | ✔ |
+| **View investigation data** | ✔ | ✔ | ✔ |
+| **Manage system settings** | - | - | ✔ |
+| **Manage users** | - | - | ✔ |
+| **Change passwords** | - | - | ✔[*](#pw-sensor) |
+| **DNS servers for reverse lookup** | - | - | ✔ |
+| **Send alert data to partners** | - | ✔ | ✔ |
+| **Create alert comments** | - | ✔ | ✔ |
+| **View programming change history** | ✔ | ✔ | ✔ |
+| **Create customized alert rules** | - | ✔ | ✔ |
+| **Manage multiple notifications simultaneously** | - | ✔ | ✔ |
+| **Manage certificates** | - | - | ✔ |
+
+> [!NOTE]
+> <a name="pw-sensor"></a>**Admin** users can only change passwords for other users with the **Security Analyst** and **Read-only** roles. To change the password of an **Admin** user, sign in to your sensor as [a privileged user](#default-privileged-on-premises-users).
+
+## Role-based permissions for the on-premises management console
+
+| Permission | Read Only | Security Analyst | Admin |
+|--|--|--|--|
+| **View and filter the enterprise map** | ✔ | ✔ | ✔ |
+| **Build a site** | - | - | ✔ |
+| **Manage a site** (add and edit zones) | - | - | ✔ |
+| **View and filter device inventory** | ✔ | ✔ | ✔ |
+| **View and manage alerts**: acknowledge, learn, and pin | ✔ | ✔ | ✔ |
+| **Generate reports** | - | ✔ | ✔ |
+| **View risk assessment reports** | - | ✔ | ✔ |
+| **Set alert exclusions** | - | ✔ | ✔ |
+| **View or define access groups** | - | - | ✔ |
+| **Manage system settings** | - | - | ✔ |
+| **Manage users** | - | - | ✔ |
+| **Change passwords** | - | - | ✔[*](#pw-cm) |
+| **Send alert data to partners** | - | - | ✔ |
+| **Manage certificates** | - | - | ✔ |
+
+> [!NOTE]
+> <a name="pw-cm"></a>**Admin** users can only change passwords for other users with the **Security Analyst** and **Read-only** roles. To change the password of an **Admin** user, sign in to your on-premises management console as [a privileged user](#default-privileged-on-premises-users).
+
+## Next steps
+
+For more information, see:
+
+- [Microsoft Defender for IoT user management](manage-users-overview.md)
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
defender-for-iot Track User Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/track-user-activity.md
+
+ Title: Audit Microsoft Defender for IoT user activity
+description: Learn how to track and audit user activity across Microsoft Defender for IoT.
Last updated : 01/26/2022+++
+# Audit user activity
+
+After you've set up your user access for the [Azure portal](manage-users-overview.md), your [OT network sensors](manage-users-sensor.md), and your [on-premises management consoles](manage-users-on-premises-management-console.md), you'll want to be able to track and audit user activity across all of Microsoft Defender for IoT.
+
+## Audit Azure user activity
+
+Use Azure Active Directory (AAD) user auditing resources to audit Azure user activity across Defender for IoT. For more information, see:
+
+- [Audit logs in Azure Active directory](/azure/active-directory/reports-monitoring/concept-audit-logs)
+- [Azure AD audit activity reference](/azure/active-directory/reports-monitoring/reference-audit-activities)
+
+## Audit user activity on an OT network sensor
+
+Audit and track user activity on a sensor's **Event timeline**. The **Event timeline** displays events that occurred on the sensor, affected devices for each event, and the time and date that the event occurred.
+
+> [!NOTE]
+> This procedure is supported for users with an **Admin** role, and the *cyberx*, *support*, and *cyberx_host* users.
+>
+
+**To use the sensor's Event Timeline**:
+
+1. Sign into the sensor console as one of the following users:
+
+ - Any **Admin** user
+ - The *cyberx*, *support*, or *cyberx_host* user
+
+1. On the sensor, select **Event Timeline** from the left-hand menu. Make sure that the filter is set to show **User Operations**.
+
+ For example:
+
+ :::image type="content" source="media/manage-users-sensor/track-user-activity.png" alt-text="Screenshot of the Event Timeline on the sensor showing user activity.":::
+
+1. Use additional filters or search using **CTRL+F** to find the information of interest to you.
+
+## Audit user activity on an on-premises management console
+
+To audit and track user activity on an on-premises management console, use the on-premises management console audit logs, which record key activity data at the time of occurrence. Use on-premises management console audit logs to understand changes that were made on the on-premises management console, when, and by whom.
+
+**To access on-premises management console audit logs**:
+
+Sign in to the on-premises management console and select **System Settings > System Statistics** > **Audit log**.
+
+The dialog displays data from the currently active audit log. For example:
+
++
+A new audit log is generated when the current log reaches 10 MB. One previous log is stored in addition to the current active log file.
+
+Audit logs include the following data:
+
+| Action | Information logged |
+|--|--|
+| **Learn, and remediation of alerts** | Alert ID |
+| **Password changes** | User, User ID |
+| **Login** | User |
+| **User creation** | User, User role |
+| **Password reset** | User name |
+| **Exclusion rules-Creation**| Rule summary |
+| **Exclusion rules-Editing**| Rule ID, Rule Summary |
+| **Exclusion rules-Deletion** | Rule ID |
+| **Management Console Upgrade** | The upgrade file used |
+| **Sensor upgrade retry** | Sensor ID |
+| **Uploaded TI package** | No additional information recorded. |
++
+> [!TIP]
+> You may also want to export your audit logs to send them to the support team for additional troubleshooting. For more information, see [Export audit logs for troubleshooting](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-audit-logs-for-troubleshooting).
+>
+
+## Next steps
+
+For more information, see:
+
+- [Microsoft Defender for IoT user management](manage-users-overview.md)
+- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
+- [Create and manage users on an OT network sensor](manage-users-sensor.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Before you start, make sure that you have the following:
- Completed [Quickstart: Get started with Defender for IoT](getting-started.md) so that you have an Azure subscription added to Defender for IoT. -- Azure permissions of **Security admin**, **Subscription contributor**, or **Subscription owner** on your subscription
+- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md).
- At least one device to monitor, with the device connected to a SPAN port on a switch.
This procedure describes how to install the sensor software on your VM.
1. The following credentials are automatically generated and presented. Copy the usernames and passwords to a safe place, because they're required to sign in to and manage your sensor. The usernames and passwords won't be presented again.
- - **Support**: The administrative user for user management.
+ - **support**: The administrative user for user management.
- - **CyberX**: The equivalent of root for accessing the appliance.
+ - **cyberx**: The equivalent of root for accessing the appliance.
+
+ For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
1. When the appliance restarts, access the sensor via the IP address previously configured: `https://<ip_address>`. ### Post-installation validation
-This procedure describes how to validate your installation using the sensor's own system health checks, and is available to both the **Support** and **CyberX** sensor users.
+This procedure describes how to validate your installation using the sensor's own system health checks, and is available to both the *support* and *cyberx* sensor users.
**To validate your installation**:
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
The sensor update process won't succeed if you don't update the on-premises mana
> [!NOTE]
-> After upgrading to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
+> After upgrading to version 22.1.x, the new upgrade log is accessible by the *cyberx_host* user on the sensor at the following path: `/opt/sensor/logs/legacy-upgrade.log`. To access the update log, sign into the sensor via SSH with the *cyberx_host* user.
>-
+> For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
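As a minimal sketch of that step (the IP address below is a placeholder, not a value from this article), you could read the log from a machine that has SSH access to the sensor:

```powershell
# Sketch: view the end of the upgrade log over SSH as the cyberx_host user.
$sensorIp = "192.0.2.10"   # placeholder: replace with your sensor's IP address
ssh "cyberx_host@$sensorIp" "tail -n 100 /opt/sensor/logs/legacy-upgrade.log"
```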
## Download and apply a new activation file
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
Complete the following steps in the Azure CLI to create an environment and confi
1. Sign in to the Azure CLI: ```azurecli
- az login
+ az login
``` 1. List all the Azure Deployment Environments projects you have access to:
dev-box How To Customize Devbox Azure Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md
+
+ Title: Configure a Dev Box with Azure Image Builder
+
+description: 'Learn how to create a custom image with Azure Image Builder, then create a Dev box with the image.'
++++ Last updated : 11/17/2022+++
+# Configure a Dev Box with Azure Image Builder
+
+By using standardized virtual machine (VM) images, your organization can more easily migrate to the cloud and help ensure consistency in your deployments. Images ordinarily include predefined security, configuration settings, and any necessary software. Setting up your own imaging pipeline requires time, infrastructure, and many other details. With Azure VM Image Builder, you can create a configuration that describes your image and submit it to the service, where the image is built and then distributed to a dev box project. In this article, you will create a customized dev box using a template which includes a customization step to install Visual Studio Code.
+
+Although it's possible to create custom VM images by hand or by using other tools, the process can be cumbersome and unreliable. VM Image Builder, which is built on HashiCorp Packer, gives you the benefits of a managed service.
+
+To reduce the complexity of creating VM images, VM Image Builder:
+
+- Removes the need to use complex tooling, processes, and manual steps to create a VM image. VM Image Builder abstracts out all these details and hides Azure-specific requirements, such as the need to generalize the image (Sysprep). And it gives more advanced users the ability to override such requirements.
+
+- Can be integrated with existing image build pipelines for a click-and-go experience. To do so, you can either call VM Image Builder from your pipeline or use an Azure VM Image Builder service DevOps task (preview).
+
+- Can fetch customization data from various sources, which removes the need to collect them all from one place.
+
+- Can be integrated with Compute Gallery, which creates an image management system with which to distribute, replicate, version, and scale images globally. Additionally, you can distribute the same resulting image as a VHD or as one or more managed images, without having to rebuild them from scratch.
+
+## Prerequisites
+To provision a custom image you've created by using VM Image Builder, you need:
+- Owner or Contributor permissions on an Azure subscription or a specific resource group. (If you're not sure, see the role check sketch after this list.)
+- A resource group.
+- A dev center with an attached network connection.
+ If you don't have a dev center with an attached network connection, follow these steps to attach the network connection: [Create a network connection](./quickstart-configure-dev-box-service.md#create-a-network-connection).
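If you want to confirm the permissions prerequisite before continuing, here's a small sketch using Azure PowerShell. It assumes you're signed in interactively as a user account; `<Resource group>` is the same placeholder used later in this article.

```powershell
# Sketch: list your Owner or Contributor role assignments on the target resource group.
$rg = "<Resource group>"   # placeholder: the resource group you'll use for the dev center and images
Get-AzRoleAssignment -ResourceGroupName $rg -SignInName (Get-AzContext).Account.Id |
    Where-Object { $_.RoleDefinitionName -in 'Owner', 'Contributor' } |
    Select-Object RoleDefinitionName, Scope
```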
+
+## Create a Windows image and distribute it to an Azure Compute Gallery
+The next step is to use Azure VM Image Builder and Azure PowerShell to create an image version in an Azure Compute Gallery (formerly Shared Image Gallery) and then distribute the image globally. You can also do this by using the Azure CLI.
+
+1. To use VM Image Builder, you need to register the features.
+
+ Check your provider registrations. Make sure that each one returns Registered.
+
+ ```powershell
+ Get-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Storage | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Compute | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.Network | Format-table -Property ResourceTypes,RegistrationState
+ ```
+
+
+ If they don't return Registered, register the providers by running the following commands:
+ ```powershell
+ Register-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
+ Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
+ Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+ ```
+
+2. Install PowerShell modules:
+
+ ```powershell
+ 'Az.ImageBuilder', 'Az.ManagedServiceIdentity' | ForEach-Object {Install-Module -Name $_ -AllowPrerelease}
+ ```
+
+3. Create variables to store information that you'll use more than once.
+
+   Copy the sample code and replace `<Resource group>` with the name of the resource group you used to create the dev center.
+
+ ```powershell
+ # Get existing context
+ $currentAzContext = Get-AzContext
+ # Get your current subscription ID.
+ $subscriptionID=$currentAzContext.Subscription.Id
+ # Destination image resource group
+ $imageResourceGroup="<Resource group>"
+ # Location
+ $location="eastus2"
+ # Image distribution metadata reference name
+ $runOutputName="aibCustWinManImg01"
+ # Image template name
+ $imageTemplateName="vscodeWinTemplate"
+ ```
+
+4. Create a user-assigned identity and set permissions on the resource group
+
+ VM Image Builder uses the provided user-identity to inject the image into Azure Compute Gallery. In this example, you create an Azure role definition with specific actions for distributing the image. The role definition is then assigned to the user identity.
+
+ ```powershell
+ # setup role def names, these need to be unique
+ $timeInt=$(get-date -UFormat "%s")
+ $imageRoleDefName="Azure Image Builder Image Def"+$timeInt
+ $identityName="aibIdentity"+$timeInt
+
+ ## Add an Azure PowerShell module to support AzUserAssignedIdentity
+ Install-Module -Name Az.ManagedServiceIdentity
+
+ # Create an identity
+ New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName
+
+ $identityNameResourceId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id
+ $identityNamePrincipalId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
+ ```
+
+5. Assign permissions for the identity to distribute the images
+
+ Use this command to download an Azure role definition template, and then update it with the previously specified parameters.
+
+ ```powershell
+ $aibRoleImageCreationUrl="https://raw.githubusercontent.com/azure/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
+ $aibRoleImageCreationPath = "aibRoleImageCreation.json"
+
+ # Download the configuration
+ Invoke-WebRequest -Uri $aibRoleImageCreationUrl -OutFile $aibRoleImageCreationPath -UseBasicParsing
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<subscriptionID>',$subscriptionID) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<rgName>', $imageResourceGroup) | Set-Content -Path $aibRoleImageCreationPath
+ ((Get-Content -path $aibRoleImageCreationPath -Raw) -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName) | Set-Content -Path $aibRoleImageCreationPath
+
+ # Create a role definition
+ New-AzRoleDefinition -InputFile ./aibRoleImageCreation.json
+ # Grant the role definition to the VM Image Builder service principal
+ New-AzRoleAssignment -ObjectId $identityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+ ```
+
+## Create an Azure Compute Gallery
+
+To use VM Image Builder with an Azure Compute Gallery, you need to have an existing gallery and image definition. VM Image Builder doesn't create the gallery and image definition for you. The image definition created below uses Trusted Launch as the security type and meets the Windows 365 image requirements.
+
+```powershell
+# Gallery name
+$galleryName= "devboxGallery"
+
+# Image definition name
+$imageDefName ="vscodeImageDef"
+
+# Additional replication region
+$replRegion2="eastus"
+
+# Create the gallery
+New-AzGallery -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location
+
+$SecurityType = @{Name='SecurityType';Value='TrustedLaunch'}
+$features = @($SecurityType)
+
+# Create the image definition
+New-AzGalleryImageDefinition -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCompany' -Offer 'vscodebox' -Sku '1-0-0' -Feature $features -HyperVGeneration "V2"
+```
+
+1. Copy the ARM template for Azure Image Builder. This template indicates the source image and the customizations applied. With this template, you install Chocolatey and Visual Studio Code. It also indicates where the image will be distributed.
+
+ ```json
+ {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "imageTemplateName": {
+ "type": "string"
+ },
+ "api-version": {
+ "type": "string"
+ },
+ "svclocation": {
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "name": "[parameters('imageTemplateName')]",
+ "type": "Microsoft.VirtualMachineImages/imageTemplates",
+ "apiVersion": "[parameters('api-version')]",
+ "location": "[parameters('svclocation')]",
+ "dependsOn": [],
+ "tags": {
+ "imagebuilderTemplate": "win11multi",
+ "userIdentity": "enabled"
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<imgBuilderId>": {}
+ }
+ },
+ "properties": {
+ "buildTimeoutInMinutes": 100,
+ "vmProfile": {
+ "vmSize": "Standard_DS2_v2",
+ "osDiskSizeGB": 127
+ },
+ "source": {
+ "type": "PlatformImage",
+ "publisher": "MicrosoftWindowsDesktop",
+ "offer": "Windows-11",
+ "sku": "win11-21h2-avd",
+ "version": "latest"
+ },
+ "customize": [
+ {
+ "type": "PowerShell",
+ "name": "Install Choco and Vscode",
+ "inline": [
+ "Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))",
+ "choco install -y vscode"
+ ]
+ }
+ ],
+ "distribute":
+ [
+ {
+ "type": "SharedImage",
+ "galleryImageId": "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<sharedImageGalName>/images/<imageDefName>",
+ "runOutputName": "<runOutputName>",
+ "artifactTags": {
+ "source": "azureVmImageBuilder",
+ "baseosimg": "win11multi"
+ },
+ "replicationRegions": [
+ "<region1>",
+ "<region2>"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+2. Configure the template with your variables.
+ ```powershell
+    $templateFilePath = "<Template Path>"
+
+ (Get-Content -path $templateFilePath -Raw ) -replace '<subscriptionID>',$subscriptionID | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<rgName>',$imageResourceGroup | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<runOutputName>',$runOutputName | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<imageDefName>',$imageDefName | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<sharedImageGalName>',$galleryName| Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<region1>',$location | Set-Content -Path $templateFilePath
+ (Get-Content -path $templateFilePath -Raw ) -replace '<region2>',$replRegion2 | Set-Content -Path $templateFilePath
+ ((Get-Content -path $templateFilePath -Raw) -replace '<imgBuilderId>',$identityNameResourceId) | Set-Content -Path $templateFilePath
+ ```
+3. Create the image version
+
+ Your template must be submitted to the service. The following commands will download any dependent artifacts, such as scripts, and store them in the staging resource group, which is prefixed with IT_.
+
+ ```powershell
+ New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -Api-Version "2020-02-14" -imageTemplateName $imageTemplateName -svclocation $location
+ ```
+ To build the image, invoke 'Run' on the template.
+ ```powershell
+ Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2020-02-14" -Action Run
+ ```
+ Creating the image and replicating it to both regions can take a few moments. Before you begin creating a dev box definition, wait until this part is finished.
+ ```powershell
+ Get-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup | Select-Object -Property Name, LastRunStatusRunState, LastRunStatusMessage, ProvisioningState
+ ```
+   Alternatively, in the Azure portal, navigate to your compute gallery > image definition to view the provisioning state of your image.
+
+![Provisioning state of the customized image version](./media/how-to-customize-devbox-azure-image-builder/image-version-provisioning-state.png)
+
+## Configure the Azure Compute Gallery
+
+Once your custom image has been provisioned within the compute gallery, you can configure the gallery to use its images within the dev center. For more information, see:
+
+[Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
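If you'd rather script this step than use the portal, here's a rough sketch that attaches the gallery by creating a `Microsoft.DevCenter/devcenters/galleries` resource through the ARM REST API with `Invoke-AzRestMethod`. The API version, the dev center values, and the `galleryResourceId` property shown are assumptions for illustration; the linked article describes the supported steps.

```powershell
# Sketch only: attach the compute gallery created above to an existing dev center.
# The api-version and the dev center values below are placeholders/assumptions.
$subscriptionID = (Get-AzContext).Subscription.Id
$devCenterRg    = "<dev center resource group>"   # placeholder
$devCenterName  = "<dev center name>"             # placeholder
$galleryId      = (Get-AzGallery -ResourceGroupName $imageResourceGroup -Name $galleryName).Id

$path = "/subscriptions/$subscriptionID/resourceGroups/$devCenterRg/providers/Microsoft.DevCenter" +
        "/devcenters/$devCenterName/galleries/$galleryName" + "?api-version=2022-11-11-preview"
$body = @{ properties = @{ galleryResourceId = $galleryId } } | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```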
+
+## Set up the Dev Box service with a custom image
+
+Once the compute gallery images are available in the dev center, you can use the custom image with the dev box service. For more information, see:
+
+[Configure the Microsoft Dev Box Service](./quickstart-configure-dev-box-service.md)
+
+## Next steps
+- [Create dev box definitions](./quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
In this article, you will learn how to convert SEG-Y formatted data to the ZGY f
empty: none ```
-8. Run the following commands using **sdutil** to see its working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](/tutorials/tutorial-seismic-ddms-sdutil.md). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
+8. Run the following commands using **sdutil** to verify that it's working correctly. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run the `python3` command instead of `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](/azure/energy-data-services/tutorial-seismic-ddms-sdutil). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
> [!NOTE] > when running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
In this article, you will learn how to convert SEG-Y formatted data to the ZGY f
10. Create the manifest file (otherwise known as the records file)
- ZGY conversion uses a manifest file that you'll upload to your storage account in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve). For more information on Volve, such as where the dataset definitions come from, visit [their website](https://www.equinor.com/en/what-we-do/digitalisation-in-our-dna/volve-field-data-village-download.html). Complete the following steps in order to create the manifest file:
+ ZGY conversion uses a manifest file that you'll upload to your storage account in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve). For more information on Volve, such as where the dataset definitions come from, visit [their website](https://www.equinor.com/energy/volve-data-sharing). Complete the following steps in order to create the manifest file:
* Clone the [repo](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/) and navigate to the folder doc/sample-records/volve * Edit the values in the `prepare-records.sh` bash script. Recall that the format of the legal tag will be prefixed with the Microsoft Energy Data Services instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`.
OSDU&trade; is a trademark of The Open Group.
## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
+> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
Title: 'Quickstart: Create a Azure Front Door Standard/Premium profile using Terraform'
+ Title: 'Quickstart: Create an Azure Front Door Standard/Premium profile using Terraform'
description: This quickstart describes how to create an Azure Front Door Standard/Premium using Terraform.
The steps in this article were tested with the following Terraform and Terraform
1. Create a file named `providers.tf` and insert the following code:
- ```terraform
- # Configure the Azure provider
- terraform {
- required_providers {
- azurerm = {
- source = "hashicorp/azurerm"
- version = "~> 3.27.0"
- }
-
- random = {
- source = "hashicorp/random"
- }
- }
-
- required_version = ">= 1.1.0"
- }
-
- provider "azurerm" {
- features {}
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/providers.tf)]
1. Create a file named `resource-group.tf` and insert the following code:
- ```terraform
- resource "azurerm_resource_group" "my_resource_group" {
- name = var.resource_group_name
- location = var.location
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/resource-group.tf)]
1. Create a file named `app-service.tf` and insert the following code:
- ```terraform
- locals {
- app_name = "myapp-${lower(random_id.app_name.hex)}"
- app_service_plan_name = "AppServicePlan"
- }
-
- resource "azurerm_service_plan" "app_service_plan" {
- name = local.app_service_plan_name
- location = var.location
- resource_group_name = azurerm_resource_group.my_resource_group.name
-
- sku_name = var.app_service_plan_sku_name
- os_type = "Windows"
- worker_count = var.app_service_plan_capacity
- }
-
- resource "azurerm_windows_web_app" "app" {
- name = local.app_name
- location = var.location
- resource_group_name = azurerm_resource_group.my_resource_group.name
- service_plan_id = azurerm_service_plan.app_service_plan.id
-
- https_only = true
-
- site_config {
- ftps_state = "Disabled"
- minimum_tls_version = "1.2"
- ip_restriction = [ {
- service_tag = "AzureFrontDoor.Backend"
- ip_address = null
- virtual_network_subnet_id = null
- action = "Allow"
- priority = 100
- headers = [ {
- x_azure_fdid = [ azurerm_cdn_frontdoor_profile.my_front_door.resource_guid ]
- x_fd_health_probe = []
- x_forwarded_for = []
- x_forwarded_host = []
- } ]
- name = "Allow traffic from Front Door"
- } ]
- }
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/app-service.tf)]
1. Create a file named `front-door.tf` and insert the following code:
- ```terraform
- locals {
- front_door_profile_name = "MyFrontDoor"
- front_door_endpoint_name = "afd-${lower(random_id.front_door_endpoint_name.hex)}"
- front_door_origin_group_name = "MyOriginGroup"
- front_door_origin_name = "MyAppServiceOrigin"
- front_door_route_name = "MyRoute"
- }
-
- resource "azurerm_cdn_frontdoor_profile" "my_front_door" {
- name = local.front_door_profile_name
- resource_group_name = azurerm_resource_group.my_resource_group.name
- sku_name = var.front_door_sku_name
- }
-
- resource "azurerm_cdn_frontdoor_endpoint" "my_endpoint" {
- name = local.front_door_endpoint_name
- cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.my_front_door.id
- }
-
- resource "azurerm_cdn_frontdoor_origin_group" "my_origin_group" {
- name = local.front_door_origin_group_name
- cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.my_front_door.id
- session_affinity_enabled = true
-
- load_balancing {
- sample_size = 4
- successful_samples_required = 3
- }
-
- health_probe {
- path = "/"
- request_type = "HEAD"
- protocol = "Https"
- interval_in_seconds = 100
- }
- }
-
- resource "azurerm_cdn_frontdoor_origin" "my_app_service_origin" {
- name = local.front_door_origin_name
- cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.my_origin_group.id
-
- enabled = true
- host_name = azurerm_windows_web_app.app.default_hostname
- http_port = 80
- https_port = 443
- origin_host_header = azurerm_windows_web_app.app.default_hostname
- priority = 1
- weight = 1000
- certificate_name_check_enabled = true
- }
-
- resource "azurerm_cdn_frontdoor_route" "my_route" {
- name = local.front_door_route_name
- cdn_frontdoor_endpoint_id = azurerm_cdn_frontdoor_endpoint.my_endpoint.id
- cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.my_origin_group.id
- cdn_frontdoor_origin_ids = [azurerm_cdn_frontdoor_origin.my_app_service_origin.id]
-
- supported_protocols = ["Http", "Https"]
- patterns_to_match = ["/*"]
- forwarding_protocol = "HttpsOnly"
- link_to_default_domain = true
- https_redirect_enabled = true
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/front-door.tf)]
1. Create a file named `variables.tf` and insert the following code:
- ```terraform
- variable "location" {
- type = string
- default = "westus2"
- }
-
- variable "resource_group_name" {
- type = string
- default = "FrontDoor"
- }
-
- variable "app_service_plan_sku_name" {
- type = string
- default = "S1"
- }
-
- variable "app_service_plan_capacity" {
- type = number
- default = 1
- }
-
- variable "app_service_plan_sku_tier_name" {
- type = string
- default = "Standard"
- }
-
- variable "front_door_sku_name" {
- type = string
- default = "Standard_AzureFrontDoor"
- validation {
- condition = contains(["Standard_AzureFrontDoor", "Premium_AzureFrontDoor"], var.front_door_sku_name)
- error_message = "The SKU value must be Standard_AzureFrontDoor or Premium_AzureFrontDoor."
- }
- }
-
- resource "random_id" "app_name" {
- byte_length = 8
- }
-
- resource "random_id" "front_door_endpoint_name" {
- byte_length = 8
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/variables.tf)]
1. Create a file named `outputs.tf` and insert the following code:
- ```terraform
- output "frontDoorEndpointHostName" {
- value = azurerm_cdn_frontdoor_endpoint.my_endpoint.host_name
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-standard-premium/outputs.tf)]
## Initialize Terraform
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
For TLS1.2 the following cipher suites are supported:
* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
* TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 * TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 > [!NOTE]
-> For Windows 10 and later versions, we recommend enabling one or both of the ECDHE cipher suites for better security. Windows 8.1, 8, and 7 aren't compatible with these ECDHE cipher suites. The DHE cipher suites have been provided for compatibility with those operating systems.
+> For Windows 10 and later versions, we recommend enabling one or both of the ECDHE cipher suites for better security. CBC ciphers are enabled to support Windows 8.1, 8, and 7 operating systems. The DHE cipher suites will be disabled in the future.
Using custom domains with TLS1.0/1.1 enabled the following cipher suites are supported:
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-quickstart-template-samples.md
Title: Azure Resource Manager template samples - Azure Front Door
-description: Learn about Resource Manager template samples for Azure Front Door, including templates for creating a basic Front Door and configuring Front Door rate limiting.
+description: Learn about Resource Manager template samples for Azure Front Door, including templates for creating a basic Front Door profile and configuring Front Door rate limiting.
documentationcenter: ""
Last updated 03/10/2022
zone_pivot_groups: front-door-tiers
-# Azure Resource Manager deployment model templates for Front Door
+# Bicep and Azure Resource Manager deployment model templates for Front Door
-The following table includes links to Azure Resource Manager deployment model templates for Azure Front Door.
+The following table includes links to Bicep and Azure Resource Manager deployment model templates for Azure Front Door.
::: zone pivot="front-door-standard-premium"
The following table includes links to Azure Resource Manager deployment model te
| Template | Description | | | | | [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
-| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in ta backend pool and also across backend pools based on URL path. |
+| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in a backend pool and also across backend pools based on URL path. |
| [Onboard a custom domain and managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain)| Add a custom domain to your Front Door and use a Front Door-managed TLS certificate. | | [Onboard a custom domain and customer-managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain-customer-certificate)| Add a custom domain to your Front Door and use your own TLS certificate by using Key Vault. | | [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
frontdoor Quickstart Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-terraform.md
The steps in this article were tested with the following Terraform and Terraform
1. Create a file named `providers.tf` and insert the following code:
- ```terraform
- # Configure the Azure provider
- terraform {
- required_providers {
- azurerm = {
- source = "hashicorp/azurerm"
- version = "~> 3.27.0"
- }
-
- random = {
- source = "hashicorp/random"
- }
- }
-
- required_version = ">= 1.1.0"
- }
-
- provider "azurerm" {
- features {}
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/providers.tf)]
1. Create a file named `resource-group.tf` and insert the following code:
- ```terraform
- resource "azurerm_resource_group" "my_resource_group" {
- name = var.resource_group_name
- location = var.location
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/resource-group.tf)]
1. Create a file named `front-door.tf` and insert the following code:
- ```terraform
- locals {
- front_door_name = "afd-${lower(random_id.front_door_name.hex)}"
- front_door_frontend_endpoint_name = "frontEndEndpoint"
- front_door_load_balancing_settings_name = "loadBalancingSettings"
- front_door_health_probe_settings_name = "healthProbeSettings"
- front_door_routing_rule_name = "routingRule"
- front_door_backend_pool_name = "backendPool"
- }
-
- resource "azurerm_frontdoor" "my_front_door" {
- name = local.front_door_name
- resource_group_name = azurerm_resource_group.my_resource_group.name
-
- frontend_endpoint {
- name = local.front_door_frontend_endpoint_name
- host_name = "${local.front_door_name}.azurefd.net"
- session_affinity_enabled = false
- }
-
- backend_pool_load_balancing {
- name = local.front_door_load_balancing_settings_name
- sample_size = 4
- successful_samples_required = 2
- }
-
- backend_pool_health_probe {
- name = local.front_door_health_probe_settings_name
- path = "/"
- protocol = "Http"
- interval_in_seconds = 120
- }
-
- backend_pool {
- name = local.front_door_backend_pool_name
- backend {
- host_header = var.backend_address
- address = var.backend_address
- http_port = 80
- https_port = 443
- weight = 50
- priority = 1
- }
-
- load_balancing_name = local.front_door_load_balancing_settings_name
- health_probe_name = local.front_door_health_probe_settings_name
- }
-
- routing_rule {
- name = local.front_door_routing_rule_name
- accepted_protocols = ["Http", "Https"]
- patterns_to_match = ["/*"]
- frontend_endpoints = [local.front_door_frontend_endpoint_name]
- forwarding_configuration {
- forwarding_protocol = "MatchRequest"
- backend_pool_name = local.front_door_backend_pool_name
- }
- }
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/front-door.tf)]
1. Create a file named `variables.tf` and insert the following code:
- ```terraform
- variable "location" {
- type = string
- default = "westus2"
- }
-
- variable "resource_group_name" {
- type = string
- default = "FrontDoor"
- }
-
- variable "backend_address" {
- type = string
- }
-
- resource "random_id" "front_door_name" {
- byte_length = 8
- }
- ```
+ [!code-terraform[master](../../terraform/quickstart/101-front-door-classic/variables.tf)]
1. Create a file named `terraform.tfvars` and insert the following code, being sure to update the value to your own backend hostname:
frontdoor Terraform Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/terraform-samples.md
+
+ Title: Terraform samples - Azure Front Door
+description: Learn about Terraform samples for Azure Front Door, including samples for creating a basic Front Door profile.
+
+documentationcenter: ""
+++
+ na
+ Last updated : 11/22/2022+
+zone_pivot_groups: front-door-tiers
+
+# Terraform deployment model templates for Front Door
+
+The following table includes links to Terraform deployment model templates for Azure Front Door.
++
+| Sample | Description |
+|-|-|
+|**App Service origins**| **Description** |
+| [App Service with Private Link](https://github.com/Azure/terraform/tree/master/quickstart/101-front-door-standard-premium) | Creates an App Service app with a private endpoint, and a Front Door profile. |
+| | |
+++
+| Template | Description |
+| | |
+| [Create a basic Front Door](https://github.com/Azure/terraform/tree/master/quickstart/101-front-door-classic)| Creates a basic Front Door configuration with a single backend. |
+| | |
++
+## Next steps
++
+- Learn how to [create a Front Door profile](standard-premium/create-front-door-portal.md).
+++
+- Learn how to [create a Front Door](quickstart-create-front-door.md).
+
hdinsight Hive Llap Sizing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-llap-sizing-guide.md
Previously updated : 07/19/2022 Last updated : 11/23/2022 # Azure HDInsight Interactive Query Cluster (Hive LLAP) sizing guide
specific tuning.
| Node Type | Instance | Size | | : | :-: | : |
-| Head | D13 v2 | 8 vcpus, 56 GB RAM, 400 GB SSD |
+| Head | D13 v2 | 8 vcpus, 56-GB RAM, 400 GB SSD |
| Worker | **D14 v2** | **16 vcpus, 112 GB RAM, 800 GB SSD** | | ZooKeeper | A4 v2 | 4 vcpus, 8-GB RAM, 40 GB SSD |
specific tuning.
| yarn.scheduler.maximum-allocation-mb | 102400 (MB) | The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this value won't take effect | | yarn.scheduler.maximum-allocation-vcores | 12 |The maximum number of CPU cores for every container request at the Resource Manager. Requests higher than this value won't take effect. | | yarn.nodemanager.resource.cpu-vcores | 12 | Number of CPU cores per NodeManager that can be allocated for containers. |
-| yarn.scheduler.capacity.root.llap.capacity | 85 (%) | YARN capacity allocation for llap queue |
+| yarn.scheduler.capacity.root.llap.capacity | 85 (%) | YARN capacity allocation for LLAP queue |
| tez.am.resource.memory.mb | 4096 (MB) | The amount of memory in MB to be used by the tez AppMaster | | hive.server2.tez.sessions.per.default.queue | <number_of_worker_nodes> |The number of sessions for each queue named in the hive.server2.tez.default.queues. This number corresponds to number of query coordinators(Tez AMs) | | hive.tez.container.size | 4096 (MB) | Specified Tez container size in MB |
For D14 v2, the recommended value is **12**.
#### **4. Number of concurrent queries** Configuration: ***hive.server2.tez.sessions.per.default.queue***
-This configuration value determines the number of Tez sessions that can be launched in parallel. These Tez sessions will be launched for each of the queues specified by "hive.server2.tez.default.queues". It corresponds to the number of Tez AMs (Query Coordinators). It's recommended to be the same as the number of worker nodes. The number of Tez AMs can be higher than the number of LLAP daemon nodes. The Tez AM's primary responsibility is to coordinate the query execution and assign query plan fragments to corresponding LLAP daemons for execution. Keep this value as multiple of a number of LLAP daemon nodes to achieve higher throughput.
+This configuration value determines the number of Tez sessions that can be launched in parallel. These Tez sessions will be launched for each of the queues specified by "hive.server2.tez.default.queues". It corresponds to the number of Tez AMs (Query Coordinators). It's recommended to be the same as the number of worker nodes. The number of Tez AMs can be higher than the number of LLAP daemon nodes. The Tez AM's primary responsibility is to coordinate the query execution and assign query plan fragments to corresponding LLAP daemons for execution. Keep this value as a multiple of the number of LLAP daemon nodes to achieve higher throughput.
Default HDInsight cluster has four LLAP daemons running on four worker nodes, so the recommended value is **4**.
The recommended value is **4096 MB**.
#### **6. LLAP Queue capacity allocation** Configuration: ***yarn.scheduler.capacity.root.llap.capacity***
-This value indicates a percentage of capacity given to llap queue. The capacity allocations may have different values for different workloads depending on how the YARN queues are configured. If your workload is read-only operations, then setting it as high as 90% of the capacity should work. However, if your workload is mix of update/delete/merge operations using managed tables, it's recommended to give 85% of the capacity for llap queue. The remaining 15% capacity can be used by other tasks such as compaction etc. to allocate containers from default queue. That way tasks in default queue won't deprive of YARN resources.
+This value indicates a percentage of capacity given to the LLAP queue. The capacity allocations may have different values for different workloads depending on how the YARN queues are configured. If your workload is read-only operations, then setting it as high as 90% of the capacity should work. However, if your workload is a mix of update/delete/merge operations using managed tables, it's recommended to give 85% of the capacity to the LLAP queue. The remaining 15% capacity can be used by other tasks, such as compaction, to allocate containers from the default queue. That way, tasks in the default queue won't be deprived of YARN resources.
-For D14v2 worker nodes, the recommended value for llap queue is **85**.
+For D14v2 worker nodes, the recommended value for LLAP queue is **85**.
(For readonly workloads, it can be increased up to 90 as suitable.) #### **7. LLAP daemon container size**
LLAP daemon is run as a YARN container on each worker node. The total memory siz
* Total memory configured for all containers on a node and LLAP queue capacity Memory needed by Tez Application Masters(Tez AM) can be calculated as follows.
-Tez AM acts as a query coordinator and the number of Tez AMs should be configured based on a number of concurrent queries to be served. Theoretically, we can consider one Tez AM per worker node. However, its possible that you may see more than one Tez AM on a worker node. For calculation purpose, we assume uniform distribution of Tez AMs across all LLAP daemon nodes/worker nodes.
+Tez AM acts as a query coordinator and the number of Tez AMs should be configured based on the number of concurrent queries to be served. Theoretically, we can consider one Tez AM per worker node. However, it's possible that you may see more than one Tez AM on a worker node. For calculation purpose, we assume uniform distribution of Tez AMs across all LLAP daemon nodes/worker nodes.
It's recommended to have 4 GB of memory per Tez AM. Number of Tez AMs = value specified by Hive config ***hive.server2.tez.sessions.per.default.queue***.
For D14 v2, the default configuration has four Tez AMs and four LLAP daemon node
Tez AM memory per node = (ceil(4/4) x 4 GB) = 4 GB Total Memory available for LLAP queue per worker node can be calculated as follows:
-This value depends on the total amount of memory available for all YARN containers on a node(*yarn.nodemanager.resource.memory-mb*) and the percentage of capacity configured for llap queue(*yarn.scheduler.capacity.root.llap.capacity*).
-Total memory for LLAP queue on worker node = Total memory available for all YARN containers on a node x Percentage of capacity for llap queue.
+This value depends on the total amount of memory available for all YARN containers on a node(*yarn.nodemanager.resource.memory-mb*) and the percentage of capacity configured for LLAP queue(*yarn.scheduler.capacity.root.llap.capacity*).
+Total memory for LLAP queue on worker node = Total memory available for all YARN containers on a node x Percentage of capacity for LLAP queue.
For D14 v2, this value is (100 GB x 0.85) = 85 GB. The LLAP daemon container size is calculated as follows;
The LLAP daemon container size is calculated as follows;
**LLAP daemon container size = (Total memory for LLAP queue on a workernode) ΓÇô (Tez AM memory per node) - (Service Master container size)** There is only one Service Master (Application Master for LLAP service) on the cluster spawned on one of the worker nodes. For calculation purpose, we consider one service master per worker node. For D14 v2 worker node, HDI 4.0 - the recommended value is (85 GB - 4 GB - 1 GB)) = **80 GB**
-(For HDI 3.6, recommended value is **79 GB** because you should reserve additional ~2 GB for slider AM.)
+
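To make the arithmetic above easy to check, here's a short illustrative calculation (not an official script; the inputs simply reproduce the D14 v2 / HDI 4.0 example):

```powershell
# Illustrative only: reproduce the D14 v2 LLAP daemon container size calculation.
$yarnMemoryPerNodeGB = 100    # yarn.nodemanager.resource.memory-mb, expressed in GB
$llapQueueCapacity   = 0.85   # yarn.scheduler.capacity.root.llap.capacity (85%)
$tezAmSizeGB         = 4      # memory per Tez AM
$tezAmCount          = 4      # hive.server2.tez.sessions.per.default.queue
$llapDaemonNodes     = 4      # worker nodes running LLAP daemons
$serviceMasterGB     = 1      # LLAP Service Master container

$llapQueuePerNodeGB = $yarnMemoryPerNodeGB * $llapQueueCapacity                        # 85 GB
$tezAmPerNodeGB     = [math]::Ceiling($tezAmCount / $llapDaemonNodes) * $tezAmSizeGB   # 4 GB
$llapDaemonSizeGB   = $llapQueuePerNodeGB - $tezAmPerNodeGB - $serviceMasterGB         # 80 GB
"LLAP daemon container size: $llapDaemonSizeGB GB"
```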
#### **8. Determining number of executors per LLAP daemon** Configuration: ***hive.llap.daemon.num.executors***, ***hive.llap.io.threadpool.size***
Configuration: ***hive.llap.daemon.num.executors***, ***hive.llap.io.threadpool.
***hive.llap.daemon.num.executors***: This configuration controls the number of executors that can execute tasks in parallel per LLAP daemon. This value depends on the number of vcores, the amount of memory used per executor, and the amount of total memory available for LLAP daemon container. The number of executors can be oversubscribed to 120% of available vcores per worker node. However, it should be adjusted if it doesn't meet the memory requirements based on memory needed per executor and the LLAP daemon container size.
-Each executor is equivalent to a Tez container and can consume 4GB(Tez container size) of memory. All executors in LLAP daemon share the same heap memory. With the assumption that not all executors run memory intensive operations at the same time, you can consider 75% of Tez container size(4 GB) per executor. This way you can increase the number of executors by giving each executor less memory (e.g. 3 GB) for increased parallelism. However, it is recommended to tune this setting for your target workload.
+Each executor is equivalent to a Tez container and can consume 4 GB(Tez container size) of memory. All executors in LLAP daemon share the same heap memory. With the assumption that not all executors run memory intensive operations at the same time, you can consider 75% of Tez container size(4 GB) per executor. This way you can increase the number of executors by giving each executor less memory (for example, 3 GB) for increased parallelism. However, it is recommended to tune this setting for your target workload.
There are 16 vcores on D14 v2 VMs.
-For D14 v2, the recommended value for num of executors is (16 vcores x 120%) ~= **19** on each worker node considering 3GB per executor.
+For D14 v2, the recommended value for num of executors is (16 vcores x 120%) ~= **19** on each worker node considering 3 GB per executor.
***hive.llap.io.threadpool.size***: This value specifies the thread pool size for executors. Since executors are fixed as specified, it will be same as number of executors per LLAP daemon.
Setting *hive.llap.io.allocator.mmap* = true will enable SSD caching.
When SSD cache is enabled, some portion of the memory will be used to store metadata for the SSD cache. The metadata is stored in memory and it's expected to be ~8% of SSD cache size. SSD Cache in-memory metadata size = LLAP daemon container size - (Head room + Heap size) For D14 v2, with HDI 4.0, SSD cache in-memory metadata size = 80 GB - (4 GB + 57 GB) = **19 GB**
-For D14 v2, with HDI 3.6, SSD cache in-memory metadata size = 79 GB - (4 GB + 57 GB) = **18 GB**
+ Given the size of available memory for storing SSD cache metadata, we can calculate the size of SSD cache that can be supported. Size of in-memory metadata for SSD cache = LLAP daemon container size - (Head room + Heap size)
Size of in-memory metadata for SSD cache = LLAP daemon container size - (Head r
Size of SSD cache = size of in-memory metadata for SSD cache(19 GB) / 0.08 (8 percent) For D14 v2 and HDI 4.0, the recommended SSD cache size = 19 GB / 0.08 ~= **237 GB**
-For D14 v2 and HDI 3.6, the recommended SSD cache size = 18 GB / 0.08 ~= **225 GB**
+ #### **10. Adjusting Map Join memory** Configuration: ***hive.auto.convert.join.noconditionaltask.size*** Make sure you have *hive.auto.convert.join.noconditionaltask* enabled for this parameter to take effect.
-This configuration determine the threshold for MapJoin selection by Hive optimizer that considers oversubscription of memory from other executors to have more room for in-memory hash tables to allow more map join conversions. Considering 3GB per executor, this size can be oversubscribed to 3GB, but some heap memory may also be used for sort buffers, shuffle buffers, etc. by the other operations.
+This configuration determines the threshold for MapJoin selection by Hive optimizer that considers oversubscription of memory from other executors to have more room for in-memory hash tables to allow more map join conversions. Considering 3 GB per executor, this size can be oversubscribed to 3 GB, but some heap memory may also be used for sort buffers, shuffle buffers, etc. by the other operations.
So for D14 v2, with 3 GB memory per executor, it's recommended to set this value to **2048 MB**. (Note: This value may need adjustments that are suitable for your workload. Setting this value too low may not use autoconvert feature. And setting it too high may result into out of memory exceptions or GC pauses that can result into adverse performance.)
Ambari environment variables: ***num_llap_nodes, num_llap_nodes_for_llap_daemons
**num_llap_nodes** - specifies the number of nodes used by the Hive LLAP service; this includes nodes running the LLAP daemon, LLAP Service Master, and Tez Application Master(Tez AM). :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes.png " alt-text="`Number of Nodes for LLAP service`" border="true"::: - **num_llap_nodes_for_llap_daemons** - specifies the number of nodes used only for LLAP daemons. LLAP daemon container sizes are set to maximally fit the node, so it results in one LLAP daemon on each node. :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes_for_llap_daemons.png " alt-text="`Number of Nodes for LLAP daemons`" border="true":::
It's recommended to keep both values same as number of worker nodes in Interacti
### **Considerations for Workload Management** If you want to enable workload management for LLAP, make sure you reserve enough capacity for workload management to function as expected. The workload management requires configuration of a custom YARN queue, which is in addition to `llap` queue. Make sure you divide total cluster resource capacity between llap queue and workload management queue in accordance to your workload requirements. Workload management spawns Tez Application Masters(Tez AMs) when a resource plan is activated.
-Please note:
+
+**Note:**
* Tez AMs spawned by activating a resource plan consume resources from the workload management queue as specified by `hive.server2.tez.interactive.queue`. * The number of Tez AMs would depend on the value of `QUERY_PARALLELISM` specified in the resource plan.
-* Once the workload management is active, Tez AMs in llap queue will not used. Only Tez AMs from workload management queue are used for query coordination. Tez AMs in the `llap` queue are used when workload management is disabled.
+* Once the workload management is active, Tez AMs in LLAP queue will not be used. Only Tez AMs from workload management queue are used for query coordination. Tez AMs in the `llap` queue are used when workload management is disabled.
For example:
-Total cluster capacity = 100 GB memory, divided between LLAP, Workload Management, and Default queues as follows:
+Total cluster capacity = 100-GB memory, divided between LLAP, Workload Management, and Default queues as follows:
+ - LLAP queue capacity = 70 GB
- Workload management queue capacity = 20 GB - Default queue capacity = 10 GB
-With 20 GB in workload management queue capacity, a resource plan can specify `QUERY_PARALLELISM` value as five, which means workload management can launch five Tez AMs with 4 GB container size each. If `QUERY_PARALLELISM` is higher than the capacity, you may see some Tez AMs stop responding in `ACCEPTED` state. The Hiveserver2 Interactive cannot submit query fragments to the Tez AMs that are not in `RUNNING` state.
+With 20 GB in workload management queue capacity, a resource plan can specify `QUERY_PARALLELISM` value as five, which means workload management can launch five Tez AMs with 4 GB container size each. If `QUERY_PARALLELISM` is higher than the capacity, you may see some Tez AMs stop responding in `ACCEPTED` state. The Hiveserver2 Interactive cannot submit query fragments to the Tez AMs that are not in `RUNNING` state.
#### **Next Steps**
If setting these values didn't resolve your issue, visit one of the following...
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, please review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
* ##### **Other References:** * [Configure other LLAP properties](https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/performance-tuning/content/hive_setup_llap.html)
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
FHIR service of Azure Health Data Services provides the following roles:
* **FHIR Data Exporter**: Can read and export ($export operator) data. * **FHIR Data Contributor**: Can perform all data plane operations. * **FHIR Data Converter**: Can use the converter to perform data conversion.
+* **FHIR SMART User**: This role allows users to read and write FHIR data according to the [SMART IG V1.0.0 specifications](http://hl7.org/fhir/smart-app-launch/1.0.0/).
DICOM service of Azure Health Data Services provides the following roles:
iot-central Tutorial Industrial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-industrial-end-to-end.md
The IoT Edge deployment manifest defines four custom modules:
- [opcpublisher](https://github.com/Azure/Industrial-IoT/blob/main/docs/modules/publisher.md) - forwards OPC-UA data from an OPC-UA server to the **miabgateway**. - [miabgateway](https://github.com/iot-for-all/iotc-miab-gateway) - gateway to send OPC-UA data to your IoT Central app and handle commands sent from your IoT Central app.
-You can see the deployment manifest in the tool configuration file. The manifest is part of the device template that the tool adds to your IoT Central application.
+You can see the deployment manifest in the tool configuration file. The tool assigns the deployment manifest to the IoT Edge device it registers in your IoT Central application.
To learn more about how to use the REST API to deploy and configure the IoT Edge runtime, see [Run Azure IoT Edge on Ubuntu Virtual Machines](../../iot-edge/how-to-install-iot-edge-ubuntuvm.md).
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
Here are the main steps of the token service pattern:
2. When a device/module needs to access your IoT hub, it requests a signed token from your token service. The device can authenticate with your custom identity registry/authentication scheme to determine the device/module identity that the token service uses to create the token.
-3. The token service returns a token. The token is created by using `/devices/{deviceId}` or `/devices/{deviceId}/module/{moduleId}` as `resourceURI`, with `deviceId` as the device being authenticated or `moduleId` as the module being authenticated. The token service uses the shared access policy to construct the token.
+3. The token service returns a token. The token is created by using `/devices/{deviceId}` or `/devices/{deviceId}/modules/{moduleId}` as `resourceURI`, with `deviceId` as the device being authenticated or `moduleId` as the module being authenticated. The token service uses the shared access policy to construct the token.
4. The device/module uses the token directly with the IoT hub.
If you would like to try out some of the concepts described in this article, see
* [Get started with Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) * [How to send cloud-to-device messages with IoT Hub](iot-hub-csharp-csharp-c2d.md)
-* [How to process IoT Hub device-to-cloud messages](tutorial-routing.md)
+* [How to process IoT Hub device-to-cloud messages](tutorial-routing.md)
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Azure Machine Learning lets you bring data from a local machine or an existing c
> [!div class="checklist"]
> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud, making it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs: `uri_file` and `uri_folder`. To consume a file as a job input, define the job input by providing `type` as `uri_file` and `path` as the location of the file.
-> - [**MLTable**](#mltable) - `MLTable` helps you to abstract the schema definition for tabular data so it is more suitable for complex/changing schema or to be leveraged in automl. If you just want to create an data asset for a job or you want to write your own parsing logic in python you could use `uri_file`, `uri_folder`.
+> - [**MLTable**](#mltable) - `MLTable` helps you abstract the schema definition for tabular data, so it's more suitable for complex or changing schemas, or for use in AutoML. If you just want to create a data asset for a job, or you want to write your own parsing logic in Python, you can use `uri_file` or `uri_folder`.
> - [**Data asset**](#data-asset) - If you plan to share your data (URIs or MLTables) with team members in your workspace, or you want to track data versions or lineage, you can create data assets from the URIs or MLTables you have. If you don't create a data asset, you can still consume the data in jobs, but without lineage tracking, version management, and so on.
> - [**Datastore**](#datastore) - Azure Machine Learning datastores securely keep the connection information (storage container name, credentials) to your data storage on Azure, so you don't have to code it in your scripts. You can use an AzureML datastore URI plus a relative path to point to your data. You can also register files/folders in your AzureML datastore as data assets.
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
+
+ Title: "Input data for batch endpoints jobs"
+
+description: Learn how to access data from different sources in batch endpoints jobs.
++++++ Last updated : 10/10/2022++++
+# Input data for batch endpoints jobs
+
+Batch endpoints can be used to perform batch scoring on large amounts of data. Such data can live in different locations. In this tutorial, we'll cover the locations batch endpoints can read data from and how to reference them.
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. In particular, we're using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## Supported data inputs
+
+Batch endpoints support reading files located in the following storage options:
+
+* Azure Machine Learning Data Stores. The following stores are supported:
+ * Azure Blob Storage
+ * Azure Data Lake Storage Gen1
+ * Azure Data Lake Storage Gen2
+* Azure Machine Learning Data Assets. The following types are supported:
+ * Data assets of type Folder (`uri_folder`).
+ * Data assets of type File (`uri_file`).
+ * Datasets of type `FileDataset` (Deprecated).
+* Azure Storage Accounts. The following storage containers are supported:
+ * Azure Data Lake Storage Gen1
+ * Azure Data Lake Storage Gen2
+ * Azure Blob Storage
+
+> [!TIP]
+> Local data folders/files can be used when executing batch endpoints from the Azure ML CLI or Azure ML SDK for Python. However, that operation results in the local data being uploaded to the default Azure Machine Learning data store of the workspace you're working in.
+
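+For example, a minimal sketch with the Python SDK (the local folder path is illustrative, and `ml_client` and `endpoint` are assumed to be the authenticated client and endpoint objects used elsewhere in this article):
+
+```python
+# Sketch: invoke the batch endpoint with a local folder. The folder is uploaded to the
+# workspace's default data store before the job starts.
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+input = Input(type=AssetTypes.URI_FOLDER, path="./heart-classifier/data")  # local path (assumed)
+job = ml_client.batch_endpoints.invoke(endpoint_name=endpoint.name, input=input)
+```
+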
+> [!IMPORTANT]
+> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work, but batch endpoints created with the GA CLIv2 (2.4.0 and newer) or the GA REST API (2022-05-01 and newer) won't support V1 datasets.
++
+## Reading data from data stores
+
+Data from Azure Machine Learning registered data stores can be directly referenced by batch deployment jobs. In this example, we're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
+
+1. Let's get access to the default data store in the Azure Machine Learning workspace. If your data is in a different store, you can use that store instead. There's no requirement to use the default data store.
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ DATASTORE_ID=$(az ml datastore show -n workspaceblobstore | jq -r '.id')
+ ```
+
+ > [!NOTE]
+ > The data store ID looks like `azureml:/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>`.
+
+ # [Python](#tab/sdk)
+
+ ```python
+ default_ds = ml_client.datastores.get_default()
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the data store information.
+
+
+
+ > [!TIP]
+ > The default blob data store in a workspace is called __workspaceblobstore__. You can skip this step if you already know the resource ID of the default data store in your workspace.
+
+1. We'll need to upload some sample data to it. This example assumes you've uploaded the sample data included in the repo folder `sdk/python/endpoints/batch/heart-classifier/data` to the folder `heart-classifier/data` in the blob storage account. Ensure you've done that before moving forward.
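+
+ If you haven't, the following sketch shows one way to upload the folder with the `azure-storage-blob` package (the storage account URL and container name are placeholders for your workspace's default blob store, and the credential must have write access to it):
+
+ ```python
+ # Sketch only: upload the sample CSV files to the blob container backing the
+ # default data store. Account URL and container name are assumptions.
+ from pathlib import Path
+
+ from azure.identity import DefaultAzureCredential
+ from azure.storage.blob import ContainerClient
+
+ container = ContainerClient(
+     account_url="https://<storage-account>.blob.core.windows.net",
+     container_name="<container>",
+     credential=DefaultAzureCredential(),
+ )
+ for file in Path("sdk/python/endpoints/batch/heart-classifier/data").glob("*.csv"):
+     with open(file, "rb") as data:
+         container.upload_blob(name=f"heart-classifier/data/{file.name}", data=data, overwrite=True)
+ ```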
+
+1. Create a data input:
+
+ # [Azure CLI](#tab/cli)
+
+ Let's place the file path in the following variable:
+
+ ```azurecli
+ DATA_PATH="heart-classifier/data"
+ INPUT_PATH="$DATASTORE_ID/paths/$DATA_PATH"
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ data_path = "heart-classifier/data"
+ input = Input(type=AssetTypes.URI_FOLDER, path=f"{default_ds.id}/paths/{data_path}")
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the subscription ID, resource group, workspace, and name of the data store. You will need them later.
+
+
+
+ > [!NOTE]
+ > See how the path `paths` is appended to the resource id of the data store to indicate that what follows is a path inside of it.
+
+ > [!TIP]
+ > You can also use `azureml:/datastores/<data-store>/paths/<data-path>` as a way to indicate the input.
+
+1. Run the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ ```bash
+ INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $INPUT_PATH)
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __Request__
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "azureml:/subscriptions/<subscription>/resourceGroups/<resource-group/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>/paths/<data-path>"
+ }
+ }
+ }
+ }
+ ```
+
+## Reading data from a data asset
+
+Azure Machine Learning data assets (formerly known as datasets) are supported as inputs for jobs. Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
+
+> [!WARNING]
+> Data assets of type Table (`MLTable`) aren't currently supported.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset.
+
+ # [Azure CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-classifier-mlflow/data
+ ```
+
+ Then, create the data asset:
+
+ ```bash
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ data_path = "heart-classifier-mlflow/data"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ```
+
+ Then, create the data asset:
+
+ ```python
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+ To get the newly created data asset, use:
+
+ ```python
+ heart_dataset_unlabeled = ml_client.data.get(name=dataset_name)
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You will need them later.
++
+1. Create a data input:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ DATASET_ID=$(az ml data show -n heart-dataset-unlabeled --label latest --query id)
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ ```
+
+ # [REST](#tab/rest)
+
+ This step isn't required.
+
+
+
+ > [!NOTE]
+ > The data asset ID looks like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`.
++
+1. Run the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ ```bash
+ INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $DATASET_ID)
+ ```
+
+ > [!TIP]
+ > You can also use `--input azureml:/<dataasset_name>@latest` as a way to indicate the input.
+
+ # [Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __Request__
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "azureml://locations/<location>/workspaces/<workspace>/data/<dataset_name>/versions/labels/latest"
+ }
+ }
+ }
+ }
+ ```
+
+## Reading data from Azure Storage Accounts
+
+Azure Machine Learning batch endpoints can read data from cloud locations in Azure Storage Accounts, both public and private. Use the following steps to run a batch endpoint job using data stored in a storage account:
+
+> [!NOTE]
+> Check the section [Security considerations when reading data](#security-considerations-when-reading-data) to learn more about the additional configuration required to successfully read data from storage accounts.
+
+1. Create a data input:
+
+ # [Azure CLI](#tab/cli)
+
+ This step isn't required.
+
+ # [Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ ```
+
+ If your data is a file, change `type=AssetTypes.URI_FILE`:
+
+ ```python
+ input = Input(type=AssetTypes.URI_FILE, path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv")
+ ```
+
+ # [REST](#tab/rest)
+
+ This step isn't required.
++
+1. Run the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ ```bash
+ INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-type uri_folder --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data)
+ ```
+
+ If your data is a file, change `--input-type uri_file`:
+
+ ```bash
+ INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-type uri_file --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv)
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __Request__
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ }
+ }
+ }
+ }
+ ```
+
+ If your data is a file, change `JobInputType`:
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv"
+ }
+ }
+ }
+ }
+ ```
+
+
+## Security considerations when reading data
+
+Batch endpoints ensure that only authorized users are able to invoke batch deployments and generate jobs. However, depending on how the input data is configured, other credentials may be used to read the underlying data. Use the following table to understand which credentials are used and any additional requirements.
+
+| Data input type | Credential in store | Credentials used | Access granted by |
+| --- | --- | --- | --- |
+| Data store | Yes | Data store's credentials in the workspace | Credentials |
+| Data store | No | Identity of the job | Depends on type |
+| Data asset | Yes | Data store's credentials in the workspace | Credentials |
+| Data asset | No | Identity of the job | Depends on store |
+| Azure Blob Storage | Not applicable | Identity of the job + Managed identity of the compute cluster | RBAC |
+| Azure Data Lake Storage Gen1 | Not applicable | Identity of the job + Managed identity of the compute cluster | POSIX |
+| Azure Data Lake Storage Gen2 | Not applicable | Identity of the job + Managed identity of the compute cluster | POSIX and RBAC |
+
+The managed identity of the compute cluster is used for mounting and configuring the data store. That means that in order to successfully read data from external storage services, the managed identity of the compute cluster where the deployment is running must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
+
+> [!NOTE]
+> To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure ML and other services](how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same endpoint) can be configured to run under different clusters, so you can administer the permissions accordingly depending on your requirements.
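+
+A minimal sketch of creating such a cluster with a system-assigned managed identity using the Python SDK follows (the cluster name and VM size are illustrative, and `ml_client` is an authenticated `MLClient`):
+
+```python
+# Sketch only: create a compute cluster with a system-assigned managed identity.
+# The identity still needs Storage Blob Data Reader on the external storage account.
+from azure.ai.ml.entities import AmlCompute, IdentityConfiguration
+
+compute = AmlCompute(
+    name="cpu-cluster",                 # assumed cluster name
+    size="STANDARD_DS3_V2",             # assumed VM size
+    min_instances=0,
+    max_instances=2,
+    identity=IdentityConfiguration(type="system_assigned"),
+)
+ml_client.compute.begin_create_or_update(compute)
+```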
+
+## Next steps
+
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
+* [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
+
+ Title: "Authentication on batch endpoints"
+
+description: Learn how authentication works on Batch Endpoints.
++++++ Last updated : 10/10/2022++++
+# Authentication on batch endpoints
+
+Batch endpoints support Azure Active Directory authentication, or `aad_token`. That means that in order to invoke a batch endpoint, the user must present a valid Azure Active Directory authentication token to the batch endpoint URI. Authorization is enforced at the endpoint level. This article explains how to correctly interact with batch endpoints and the security requirements for doing so.
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## How authentication works
+
+To invoke a batch endpoint, the user must present a valid Azure Active Directory token representing a __security principal__. This principal can be a __user principal__ or a __service principal__. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. The identity needs the following permissions in order to successfully create a job:
+
+> [!div class="checklist"]
+> * Read batch endpoints/deployments.
+> * Create jobs in batch inference endpoints/deployments.
+> * Create experiments/runs.
+> * Read and write from/to data stores.
+> * List datastore secrets.
+
+You can either use one of the [built-in security roles](../role-based-access-control/built-in-roles.md) or create a new one. In any case, the identity used to invoke the endpoints must be granted the permissions explicitly. See [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md) for instructions on how to assign them.
+
+> [!IMPORTANT]
+> The identity used for invoking a batch endpoint may not be used to read the underlying data depending on how the data store is configured. Please see [Security considerations when reading data](how-to-access-data-batch-endpoints-jobs.md#security-considerations-when-reading-data) for more details.
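+
+For reference, a sketch of how a client can acquire such a token with the `azure-identity` package (the caller still needs the permissions listed above):
+
+```python
+# Sketch: get an Azure Active Directory token for the Azure Machine Learning resource
+# (https://ml.azure.com), which is the scope used when invoking batch endpoints.
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+token = credential.get_token("https://ml.azure.com/.default").token
+# Present this value as "Authorization: Bearer <token>" when calling the endpoint URI.
+```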
+
+## How to run jobs using different types of credentials
+
+The following examples show different ways to start batch deployment jobs using different types of credentials:
+
+> [!IMPORTANT]
+> When working on private link-enabled workspaces, batch endpoints can't be invoked from the UI in Azure ML studio. Please use the Azure ML CLI v2 instead for job creation.
+
+### Running jobs using user's credentials
+
+In this case, we want to execute a batch endpoint using the identity of the user currently logged in. Follow these steps:
+
+> [!NOTE]
+> When working in Azure ML studio, batch endpoints/deployments are always executed using the identity of the currently signed-in user.
+
+# [Azure ML CLI](#tab/cli)
+
+1. Use the Azure CLI to log in using either interactive or device code authentication:
+
+ ```azurecli
+ az login
+ ```
+
+1. Once authenticated, use the following command to run a batch deployment job:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
+ ```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+1. Use the Azure ML SDK for Python to log in using either interactive or device authentication:
+
+ ```python
+ from azure.ai.ml import MLClient, Input
+ from azure.identity import InteractiveBrowserCredential
+
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(InteractiveBrowserCredential(), subscription_id, resource_group, workspace)
+ ```
+
+1. Once authenticated, use the following command to run a batch deployment job:
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
+ )
+ ```
+
+# [REST](#tab/rest)
+
+When working with REST APIs, we recommend using either a [service principal](#running-jobs-using-a-service-principal) or a [managed identity](#running-jobs-using-a-managed-identity) to interact with the API.
+++
+### Running jobs using a service principal
+
+In this case, we want to execute a batch endpoint using a service principal already created in Azure Active Directory. To authenticate, you'll have to create a secret for the service principal. Follow these steps:
+
+# [Azure ML CLI](#tab/cli)
+
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. To authenticate using a service principal, use the following command. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+ ```bash
+ az login --service-principal -u <app-id> -p <password-or-cert> --tenant <tenant>
+ ```
+
+1. Once authenticated, use the following command to run a batch deployment job:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/
+ ```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. To authenticate using a service principal, indicate the tenant ID, client ID and client secret of the service principal using environment variables as demonstrated:
+
+ ```python
+ import os
+
+ from azure.ai.ml import MLClient, Input
+ from azure.identity import EnvironmentCredential
+
+ os.environ["AZURE_TENANT_ID"] = "<TENANT_ID>"
+ os.environ["AZURE_CLIENT_ID"] = "<CLIENT_ID>"
+ os.environ["AZURE_CLIENT_SECRET"] = "<CLIENT_SECRET>"
+
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(EnvironmentCredential(), subscription_id, resource_group, workspace)
+ ```
+
+1. Once authenticated, use the following command to run a batch deployment job:
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
+ )
+ ```
+
+# [REST](#tab/rest)
+
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+
+1. Use the login service from Azure to get an authorization token. Authorization tokens are issued to a particular scope. The resource type for Azure Machine Learning is `https://ml.azure.com`. The request would look as follows:
+
+ __Request__:
+
+ ```http
+ POST /{TENANT_ID}/oauth2/token HTTP/1.1
+ Host: login.microsoftonline.com
+ ```
+
+ __Body__:
+
+ ```
+ grant_type=client_credentials&client_id=<CLIENT_ID>&client_secret=<CLIENT_SECRET>&resource=https://ml.azure.com
+ ```
+
+ > [!IMPORTANT]
+ > Notice that the resource scope for invoking a batch endpoint (`https://ml.azure.com`) is different from the resource scope used to manage them. All management APIs in Azure use the resource scope `https://management.azure.com`, including Azure Machine Learning.
+
+3. Once authenticated, use the following request to run a batch deployment job:
+
+ __Request__:
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
+ __Body:__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
+ }
+ }
+ }
+ }
+ ```
+++
+### Running jobs using a managed identity
+
+You can use managed identities to invoke batch endpoints and deployments. Notice that this managed identity doesn't belong to the batch endpoint; it's the identity used to execute the endpoint and hence create a batch job. Both user-assigned and system-assigned identities can be used in this scenario.
+
+# [Azure ML CLI](#tab/cli)
+
+On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Signing in with the resource's identity is done through the `--identity` flag. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+```bash
+az login --identity
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Use the resource ID along with the `ManagedIdentityCredential` object as demonstrated in the following example:
+
+```python
+from azure.ai.ml import MLClient, Input
+from azure.identity import ManagedIdentityCredential
+
+subscription_id = "<subscription>"
+resource_group = "<resource-group>"
+workspace = "<workspace>"
+resource_id = "<resource-id>"
+
+ml_client = MLClient(ManagedIdentityCredential(resource_id), subscription_id, resource_group, workspace)
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci")
+ )
+```
+
+# [REST](#tab/rest)
+
+You can use the REST API of Azure Machine Learning to start a batch endpoint job using a managed identity. The steps vary depending on the underlying service being used. Some examples include (but aren't limited to):
+
+* [Managed identity for Azure Data Factory](../data-factory/data-factory-service-identity.md)
+* [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md).
+* [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
+
+You can also use the Azure CLI to get an authentication token for the managed identity and then pass it to the batch endpoint URI.
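+
+As a sketch in Python (assuming the code runs on an Azure resource with a system-assigned managed identity), the token can be acquired as follows and passed in the `Authorization` header of the request:
+
+```python
+# Sketch: acquire a token with the resource's managed identity.
+from azure.identity import ManagedIdentityCredential
+
+credential = ManagedIdentityCredential()  # system-assigned identity assumed
+token = credential.get_token("https://ml.azure.com/.default").token
+# Use it as "Authorization: Bearer <token>" in the POST request to the endpoint URI.
+```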
+++
+## Next steps
+
+* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md)
+* [Invoking batch endpoints from Event Grid events in storage](how-to-use-event-grid-batch.md).
+* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
+
+ Title: 'Author scoring scripts for batch deployments'
+
+description: In this article, learn how to author scoring scripts to perform batch inference in batch deployments.
+++++++ Last updated : 11/03/2022+++
+# Author scoring scripts for batch deployments
++
+Batch endpoints allow you to deploy models to perform inference at scale. Because how inference should be executed varies with the model's format, the model's type, and the use case, batch endpoints require a scoring script (also known as a batch driver script) to tell the deployment how to use the model over the provided data. In this article, you'll learn how to use scoring scripts in different scenarios, along with best practices.
+
+> [!TIP]
+> MLflow models don't require a scoring script, as it's autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md). Notice that this feature doesn't prevent you from writing a specific scoring script for MLflow models, as explained at [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script).
+
+> [!WARNING]
+> If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and isn't designed for batch execution. Please follow this guideline to learn how to create one depending on what your model does.
+
+## Understanding the scoring script
+
+The scoring script is a Python file (`.py`) that contains the logic for running the model and reading the input data submitted by the batch deployment executor. Each model deployment has to provide a scoring script; however, an endpoint may host multiple deployments that use different scoring script versions.
+
+The scoring script must contain two methods:
+
+#### The `init` method
+
+Use the `init()` method for any costly or common preparation. For example, use it to load the model into a global object. This function is called once at the beginning of the process. Your model's files are available in an environment variable called `AZUREML_MODEL_DIR`. Use this variable to locate the files associated with the model.
+
+```python
+import os
+
+def init():
+ global model
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ # The path "model" is the name of the registered model's folder
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model (load_model stands in for your framework's model loader)
+ model = load_model(model_path)
+```
+
+Notice that in this example we're placing the model in a global variable `model`. Use global variables to make any assets needed for inference available to your scoring function.
+
+#### The `run` method
+
+Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. This method is called once for each `mini_batch` generated from your input data. Batch deployments read data in batches according to how the deployment is configured.
+
+```python
+import pandas as pd
+
+def run(mini_batch):
+ results = []
+
+ for file in mini_batch:
+ (...)
+
+ return pd.DataFrame(results)
+```
+
+The method receives a list of file paths as a parameter (`mini_batch`). You can use this list to either iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option will depend on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
+
+> [!NOTE]
+> __How is work distributed?__:
+>
+> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file size distribution.
+
+The `run()` method should return a pandas DataFrame or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
+
+> [!IMPORTANT]
+> __How to write predictions?__:
+>
+> Use __arrays__ when you need to output a single prediction. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record; use a pandas DataFrame for this case. For file datasets, __we still recommend outputting a pandas DataFrame__, as it provides a more robust approach to reading the results.
+>
+> Although a pandas DataFrame may contain column names, they aren't included in the output file. If needed, please see [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+
+> [!WARNING]
+> Do not output complex data types (or lists of complex data types) in the `run` function. Those outputs will be transformed to strings and will be hard to read.
+
+The resulting DataFrame or array is appended to the indicated output file. There's no requirement on the cardinality of the results (one file can generate one or many rows/elements in the output). All elements in the resulting DataFrame or array are written to the output file as-is (provided the `output_action` isn't `summary_only`).
+
+## Writing predictions in a different way
+
+By default, the batch deployment writes the model's predictions in a single file as indicated in the deployment. However, there are some cases where you need to write the predictions in multiple files. For instance, if the input data is partitioned, you would typically want your output to be partitioned too. In those cases, you can [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) to indicate:
+
+> [!div class="checklist"]
+> * The file format used (CSV, Parquet, JSON, and so on).
+> * The way data is partitioned in the output.
+
+Read the article [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) for an example of how to achieve it.
+
+## Source control of scoring scripts
+
+It is highly advisable to put scoring scripts under source control.
+
+## Best practices for writing scoring scripts
+
+When writing scoring scripts that work with big amounts of data, you need to take into account several factors, including:
+
+* The size of each file.
+* The amount of data on each file.
+* The amount of memory required to read each file.
+* The amount of memory required to read an entire batch of files.
+* The memory footprint of the model.
+* The memory footprint of the model when running over the input data.
+* The available memory in your compute.
+
+Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file size distribution.
+
+### Running inference at the mini-batch, file or the row level
+
+Batch endpoints call the `run()` function in your scoring script once per mini-batch. However, you decide whether to run inference over the entire batch, over one file at a time, or over one row at a time (if your data happens to be tabular).
+
+#### Mini-batch level
+
+You'll typically want to run inference over the batch all at once when you want to achieve high throughput in your batch scoring process. This is the case, for instance, if you run inference on a GPU and want to achieve saturation of the inference device. You may also rely on a data loader that can handle the batching itself if the data doesn't fit in memory, like `TensorFlow` or `PyTorch` data loaders. In those cases, you may want to consider running inference on the entire batch.
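+
+As a sketch (assuming CSV inputs and a `model` object loaded in `init()` as shown earlier), scoring the whole mini-batch at once could look like this:
+
+```python
+# Sketch: batch-level scoring. All files of the mini-batch are read into a single
+# DataFrame and scored in one predict() call (assumes everything fits in memory).
+import pandas as pd
+
+def run(mini_batch):
+    data = pd.concat(pd.read_csv(f) for f in mini_batch)
+    data["prediction"] = model.predict(data)  # model is the global loaded in init()
+    return data
+```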
+
+> [!WARNING]
+> Running inference at the batch level may require having tight control over the input data size to correctly account for the memory requirements and avoid out-of-memory exceptions. Whether or not you're able to load the entire mini-batch in memory depends on the size of the mini-batch, the size of the instances in the cluster, and the number of workers on each node.
+
+For an example of how to achieve it, see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
+
+#### File level
+
+One of the easiest ways to perform inference is to iterate over all the files in the mini-batch and run the model over each one. In some cases, like image processing, this may be a good idea. If your data is tabular, you may need to make a good estimate of the number of rows in each file to determine whether your model can handle the memory requirements of not just loading the entire data into memory but also performing inference over it. Remember that some models (especially those based on recurrent neural networks) will unfold and present a memory footprint that may not be linear with the number of rows. If your model is expensive in terms of memory, consider running inference at the row level.
+
+> [!TIP]
+> If files are too big to be read even one at a time, consider breaking them down into multiple smaller files to allow for better parallelization.
+
+For an example of how to achieve it, see [Image processing with batch deployments](how-to-image-processing-batch.md).
+
+#### Row level (tabular)
+
+For models that present challenges with the size of their inputs, you may want to consider running inference at the row level. Your batch deployment still provides your scoring script with a mini-batch of files; however, you read one file at a time, one row at a time. This may look inefficient, but for some deep learning models it may be the only way to perform inference without scaling up your hardware requirements.
+
+For an example of how to achieve it, see [Text processing with batch deployments](how-to-nlp-processing-batch.md).
+
+### Relationship between the degree of parallelism and the scoring script
+
+Your deployment configuration controls the size of each mini-batch and the number of workers on each node. Take them into account when deciding whether to read the entire mini-batch to perform inference. When running multiple workers on the same instance, remember that memory is shared across all the workers. Usually, increasing the number of workers per node should be accompanied by a decrease in the mini-batch size or by a change in the scoring strategy (if the data size remains the same).
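+
+For illustration, a sketch of the deployment settings that interact (the values are arbitrary, and the remaining deployment properties are omitted):
+
+```python
+# Sketch: two workers per node share that node's memory, so the mini-batch is kept
+# small. Increasing max_concurrency_per_instance usually calls for decreasing
+# mini_batch_size or switching to file- or row-level scoring.
+from azure.ai.ml.entities import BatchDeployment
+
+deployment = BatchDeployment(
+    name="classifier-batch",             # illustrative names
+    endpoint_name="heart-classifier-batch",
+    model=model,
+    compute="cpu-cluster",
+    instance_count=2,                    # nodes used by the job
+    max_concurrency_per_instance=2,      # worker processes per node
+    mini_batch_size=5,                   # files handed to each run() call
+)
+```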
+
+## Next steps
+
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
+* [Use MLflow models in batch deployments](how-to-mlflow-batch.md).
+* [Image processing with batch deployments](how-to-image-processing-batch.md).
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-custom-output.md
+
+ Title: "Customize outputs in batch deployments"
+
+description: Learn how create deployments that generate custom outputs and files.
++++++ Last updated : 10/10/2022++++
+# Customize outputs in batch deployments
++
+Sometimes you need to execute inference with greater control over what is written as the output of the batch job. Those cases include:
+
+> [!div class="checklist"]
+> * You need to control how the predictions are being written in the output. For instance, you want to append the prediction to the original data (if data is tabular).
+> * You need to write your predictions in a different file format from the one supported out-of-the-box by batch deployments.
+> * Your model is a generative model that can't write the output in a tabular format. For instance, models that produce images as outputs.
+> * Your model produces multiple tabular files instead of a single one. This is the case, for instance, for models that perform forecasting over multiple scenarios.
+
+In any of those cases, batch deployments allow you to take control of the output of the jobs by letting you write directly to the output of the batch deployment job. In this tutorial, we'll see how to deploy a model to perform batch inference and write the outputs in `parquet` format by appending the predictions to the original input data.
+
+## About this sample
+
+This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence).
+
+The model has been trained using an `XGBoost` classifier, and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `cli/endpoints/batch` if you're using the Azure CLI, or `sdk/endpoints/batch` if you're using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [custom-output-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/custom-output-batch.ipynb).
+
+## Prerequisites
++
+* A model registered in the workspace. In this tutorial, we'll use an MLflow model. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `heart-classifier-batch`.
+* You must have a compute cluster created where the deployment can run. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+
+## Creating a batch deployment with a custom output
+
+In this example, we are going to create a deployment that can write directly to the output folder of the batch deployment job. The deployment will use this feature to write custom parquet files.
+
+### Registering the model
+
+Batch endpoints can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you're trying to deploy is already registered.
+
+# [Azure ML CLI](#tab/cli)
+
+```azurecli
+MODEL_NAME='heart-classifier'
+az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model"
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+model_name = 'heart-classifier'
+model = ml_client.models.create_or_update(
+ Model(name=model_name, path='heart-classifier-mlflow/model', type=AssetTypes.MLFLOW_MODEL)
+)
+```
++
+> [!NOTE]
+> The model used in this tutorial is an MLflow model. However, the steps apply for both MLflow models and custom models.
+
+### Creating a scoring script
+
+We need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. We're also going to write directly to the output folder of the job. In summary, the proposed scoring script does the following:
+
+1. Reads the input data as CSV files.
+2. Runs an MLflow model `predict` function over the input data.
+3. Appends the predictions to a `pandas.DataFrame` along with the input data.
+4. Writes the data to a file named after the input file, but in `parquet` format.
+
+__batch_driver_parquet.py__
+
+```python
+import os
+import mlflow
+import pandas as pd
+from pathlib import Path
+
+def init():
+ global model
+ global output_path
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ # It is the path to the model folder
+ # Please provide your model's folder name if there's one:
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+ output_path = os.environ['AZUREML_BI_OUTPUT_PATH']
+ model = mlflow.pyfunc.load_model(model_path)
+
+def run(mini_batch):
+ for file_path in mini_batch:
+ data = pd.read_csv(file_path)
+ pred = model.predict(data)
+
+ data['prediction'] = pred
+
+ output_file_name = Path(file_path).stem
+ output_file_path = os.path.join(output_path, output_file_name + '.parquet')
+ data.to_parquet(output_file_path)
+
+ return mini_batch
+```
+
+__Remarks:__
+* Notice how the environment variable `AZUREML_BI_OUTPUT_PATH` is used to get access to the output path of the deployment job.
+* The `init()` function is populating a global variable called `output_path` that can be used later to know where to write.
+* The `run` method returns a list of the processed files. It is required for the `run` function to return a `list` or a `pandas.DataFrame` object.
+
+> [!WARNING]
+> Take into account that all the batch executors have write access to this path at the same time. This means that you need to account for concurrency. In this case, we're ensuring each executor writes its own file by using the input file name as the name of the output file.
+
+### Creating the deployment
+
+Follow the next steps to create a deployment using the previous scoring script:
+
+1. First, let's create an environment where the scoring script can be executed:
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./heart-classifier-mlflow/environment/conda.yaml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+2. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments, as they're created for you. However, in this case we're going to indicate a scoring script and environment since we want to customize how inference is executed.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-parquet
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ environment:
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
+ conda_file: ./heart-classifier-mlflow/environment/conda.yaml
+ code_configuration:
+ code: ./heart-classifier-custom/code/
+ scoring_script: batch_driver_parquet.py
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: summary_only
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ DEPLOYMENT_NAME="classifier-xgboost-parquet"
+ az ml batch-deployment create -f endpoint.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, use the following script:
+
+ ```python
+ deployment = BatchDeployment(
+ name="classifier-xgboost-parquet",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./heart-classifier-mlflow/code/",
+ scoring_script="batch_driver_parquet.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.SUMMARY_ONLY,
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!IMPORTANT]
+ > Notice that now `output_action` is set to `SUMMARY_ONLY`.
+
+3. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+To test our endpoint, we're going to use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we're going to upload it to an Azure Machine Learning data store. Particularly, we're going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-dataset
+ ```
+
+ Then, create the data asset:
+
+ ```azurecli
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ data_path = "resources/heart-dataset/"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+1. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+ > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ deployment_name=deployment.name,
+ input=input,
+ )
+ ```
+
+1. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+## Analyzing the outputs
+
+The job generates a named output called `score` where all the generated files are placed. Since we wrote one file per input file directly into the directory, we can expect the same number of output files as input files. In this particular example, we decided to name the output files the same as the inputs, but they'll have a parquet extension.
+
+> [!NOTE]
+> Notice that a file `predictions.csv` is also included in the output folder. This file contains the summary of the processed files.
+
+You can download the results of the job by using the job name:
+
+# [Azure ML CLI](#tab/cli)
+
+To download the predictions, use the following command:
+
+```azurecli
+az ml job download --name $JOB_NAME --output-name score --download-path ./
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+```
++
+Once the file is downloaded, you can open it using your favorite tool. The following example loads the predictions into a `pandas` DataFrame.
+
+```python
+import pandas as pd
+import glob
+
+output_files = glob.glob("named-outputs/score/*.parquet")
+score = pd.concat((pd.read_parquet(f) for f in output_files))
+```
+
+The output looks as follows:
+
+| age | sex | ... | thal | prediction |
+| --- | --- | --- | --- | --- |
+| 63 | 1 | ... | fixed | 0 |
+| 67 | 1 | ... | normal | 1 |
+| 67 | 1 | ... | reversible | 0 |
+| 37 | 1 | ... | normal | 0 |
++
+## Next steps
+
+* [Using batch deployments for image file processing](how-to-image-processing-batch.md)
+* [Using batch deployments for NLP processing](how-to-nlp-processing-batch.md)
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
+
+ Title: "Image processing with batch deployments"
+
+description: Learn how to deploy a model in batch endpoints that process images
++++++ Last updated : 10/10/2022++++
+# Image processing with batch deployments
++
+Batch endpoints can be used for processing tabular data, but also any other file type, like images. Such deployments are supported for both MLflow and custom models. In this tutorial, we'll learn how to deploy a model that classifies images according to the ImageNet taxonomy.
+
+## About this sample
+
+The model we're going to work with was built using TensorFlow along with the ResNet architecture ([Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)). A sample of this model can be downloaded from `https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip`. The model has the following constraints that are important to keep in mind for deployment:
+
+* It works with images of size 244x244 (tensors of `(244, 244, 3)`).
+* It requires inputs to be scaled to the range `[0,1]`.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to `cli/endpoints/batch` if you're using the Azure CLI, or `sdk/endpoints/batch` if you're using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [imagenet-classifier-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/imagenet-classifier-batch.ipynb).
+
+## Prerequisites
++
+* You must have a batch endpoint already created. This example assumes the endpoint is named `imagenet-classifier-batch`. If you don't have one, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+* You must have a compute cluster created where the deployment can run. This example assumes the name of the compute is `cpu-cluster`. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute).
+
+## Image classification with batch deployments
+
+In this example, we are going to learn how to deploy a deep learning model that can classify a given image according to the [taxonomy of ImageNet](https://image-net.org/).
+
+### Registering the model
+
+Batch Endpoints can only deploy registered models, so we need to register ours first. You can skip this step if the model you are trying to deploy is already registered.
+
+1. Download a copy of the model:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ wget https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip
+ mkdir -p imagenet-classifier
+ unzip model.zip -d imagenet-classifier
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ import os
+ import urllib.request
+ from zipfile import ZipFile
+
+ response = urllib.request.urlretrieve('https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip', 'model.zip')
+
+    os.makedirs("imagenet-classifier", exist_ok=True)
+    with ZipFile(response[0], 'r') as zip:
+        zip.extractall(path="imagenet-classifier")
+
+    # The extracted model files live under the "model" subfolder
+    model_path = "imagenet-classifier/model"
+ ```
+
+2. Register the model:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ MODEL_NAME='imagenet-classifier'
+ az ml model create --name $MODEL_NAME --type "custom_model" --path "imagenet-classifier/model"
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ model_name = 'imagenet-classifier'
+ model = ml_client.models.create_or_update(
+ Model(name=model_name, path=model_path, type=AssetTypes.CUSTOM_MODEL)
+ )
+ ```
+
+### Creating a scoring script
+
+We need to create a scoring script that can read the images provided by the batch deployment and return the scores of the model. The following script:
+
+> [!div class="checklist"]
+> * Defines an `init` function that loads the model using the `keras` module in `tensorflow`.
+> * Defines a `run` function that is executed for each mini-batch the batch deployment provides.
+> * The `run` function reads the images one file at a time.
+> * The `run` method resizes the images to the size expected by the model.
+> * The `run` method rescales the images to the `[0,1]` range, which is what the model expects.
+> * It returns the classes and the probabilities associated with the predictions.
+
+__imagenet_scorer.py__
+
+```python
+import os
+import numpy as np
+import pandas as pd
+import tensorflow as tf
+from os.path import basename
+from PIL import Image
+from tensorflow.keras.models import load_model
++
+def init():
+ global model
+ global input_width
+ global input_height
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ model = load_model(model_path)
+ input_width = 244
+ input_height = 244
+
+def run(mini_batch):
+ results = []
+
+ for image in mini_batch:
+ data = Image.open(image).resize((input_width, input_height)) # Read and resize the image
+ data = np.array(data)/255.0 # Normalize
+ data_batch = tf.expand_dims(data, axis=0) # create a batch of size (1, 244, 244, 3)
+
+ # perform inference
+ pred = model.predict(data_batch)
+
+ # Compute probabilities, classes and labels
+ pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1)).numpy()
+ pred_class = tf.math.argmax(pred, axis=-1).numpy()
+
+ results.append([basename(image), pred_class[0], pred_prob])
+
+ return pd.DataFrame(results)
+```
+
+> [!TIP]
+> Although images are provided in mini-batches by the deployment, this scoring script processes one image at a time. This is a common pattern, as trying to load the entire batch and send it to the model at once may result in high memory pressure on the batch executor (OOM exceptions). However, there are certain cases where doing so enables high throughput in the scoring task. This is the case, for instance, for batch deployments over GPU hardware, where we want to achieve high GPU utilization. See [High throughput deployments](#high-throughput-deployments) for an example of a scoring script that takes advantage of it.
+
+> [!NOTE]
+> If you are trying to deploy a generative model (one that generates files), please read how to author a scoring script as explained at [Deployment of models that produce multiple files](how-to-deploy-model-custom-output.md).
+
+### Creating the deployment
+
+Once the scoring script is created, it's time to create a batch deployment for it. Follow these steps to create it:
+
+1. We need to indicate the environment in which we are going to run the deployment. In our case, our model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so we can reuse this environment. We are just going to add a couple of dependencies in a `conda.yml` file.
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./imagenet-classifier/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest",
+ )
+ ```
+
+1. Now, let's create the deployment.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `imagenet-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: imagenet-classifier-batch
+ name: imagenet-classifier-resnetv2
+ description: A ResNetV2 model architecture for performing ImageNet classification in batch
+ model: azureml:imagenet-classifier@latest
+ compute: azureml:cpu-cluster
+ environment:
+ image: mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest
+ conda_file: ./imagenet-classifier/environment/conda.yml
+ code_configuration:
+ code: ./imagenet-classifier/code/
+ scoring_script: imagenet_scorer.py
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 1
+ mini_batch_size: 5
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ DEPLOYMENT_NAME="imagenet-classifier-resnetv2"
+ az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment with the indicated environment and scoring script use the following code:
+
+ ```python
+ deployment = BatchDeployment(
+ name="imagenet-classifier-resnetv2",
+ description="A ResNetV2 model architecture for performing ImageNet classification in batch",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./imagenet-classifier/code/",
+ scoring_script="imagenet_scorer.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=1,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+1. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is named the "default" deployment. This gives you the possibility of changing the default deployment, and hence the model serving it, without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+1. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+For testing our endpoint, we are going to use a sample of 1000 images from the original ImageNet dataset. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. In particular, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's download the associated sample data:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
+ unzip imagenet-1000.zip -d /tmp/imagenet-1000
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ !wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
+ !unzip imagenet-1000.zip -d /tmp/imagenet-1000
+ ```
+
+2. Now, let's create the data asset from the data we just downloaded:
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __imagenet-sample-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: imagenet-sample-unlabeled
+ description: A sample of 1000 images from the original ImageNet dataset.
+ type: uri_folder
+ path: /tmp/imagenet-1000
+ ```
+
+ Then, create the data asset:
+
+ ```azurecli
+ az ml data create -f imagenet-sample-unlabeled.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ data_path = "/tmp/imagenet-1000"
+ dataset_name = "imagenet-sample-unlabeled"
+
+ imagenet_sample = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="A sample of 1000 images from the original ImageNet dataset",
+ name=dataset_name,
+ )
+ ml_client.data.create_or_update(imagenet_sample)
+ ```
+
+3. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+    JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:imagenet-sample-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+    > The utility `jq` may not be installed on your system. You can get installation instructions at [this link](https://stedolan.github.io/jq/download/).
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=imagenet_sample.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+
+ > [!TIP]
+    > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, that one is the default. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
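+
+    For example, with the SDK for Python, targeting the deployment explicitly could look like the following sketch (it reuses the `deployment` and `input` objects created earlier):
+
+    ```python
+    # Sketch: route the job to a specific deployment instead of the endpoint's default
+    job = ml_client.batch_endpoints.invoke(
+        endpoint_name=endpoint.name,
+        deployment_name=deployment.name,
+        input=input,
+    )
+    ```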
+
+4. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+5. Once the job is finished, we can download the predictions:
+
+ # [Azure ML CLI](#tab/cli)
+
+ To download the predictions, use the following command:
+
+ ```azurecli
+ az ml job download --name $JOB_NAME --output-name score --download-path ./
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+ ```
+
+6. The output predictions will look like the following. Notice that the predictions have been combined with the labels for the convenience of the reader. To learn more about how to achieve this, see the associated notebook.
+
+ ```python
+ import pandas as pd
+ score = pd.read_csv("named-outputs/score/predictions.csv", header=None, names=['file', 'class', 'probabilities'], sep=' ')
+ score['label'] = score['class'].apply(lambda pred: imagenet_labels[pred])
+ score
+ ```
+
+ | file | class | probabilities | label |
+    | -- | -- | -- | -- |
+ | n02088094_Afghan_hound.JPEG | 161 | 0.994745 | Afghan hound |
+ | n02088238_basset | 162 | 0.999397 | basset |
+ | n02088364_beagle.JPEG | 165 | 0.366914 | bluetick |
+ | n02088466_bloodhound.JPEG | 164 | 0.926464 | bloodhound |
+ | ... | ... | ... | ... |
+
+
+## High throughput deployments
+
+As mentioned before, the deployment we just created processes one image at a time, even when the batch deployment provides a batch of them. In most cases this is the best approach, as it simplifies how the model executes and avoids any possible out-of-memory problems. However, in certain other cases we may want to saturate the underlying hardware as much as possible. This is the case for GPUs, for instance.
+
+In those cases, we may want to perform inference on the entire batch of data. That implies loading the entire set of images into memory and sending them directly to the model. The following example uses `TensorFlow` to read a batch of images and score them all at once. It also uses `TensorFlow` ops to do the data preprocessing, so the entire pipeline happens on the same device being used (CPU/GPU).
+
+> [!WARNING]
+> Some models have a non-linear relationship between the size of their inputs and their memory consumption. Re-batch the data (as done in this example) or decrease the size of the batches created by the batch deployment to avoid out-of-memory exceptions.
+
+__imagenet_scorer_batch.py__
+
+```python
+import os
+import numpy as np
+import pandas as pd
+import tensorflow as tf
+from tensorflow.keras.models import load_model
+
+def init():
+ global model
+ global input_width
+ global input_height
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ model = load_model(model_path)
+ input_width = 244
+ input_height = 244
+
+def decode_img(file_path):
+ file = tf.io.read_file(file_path)
+ img = tf.io.decode_jpeg(file, channels=3)
+ img = tf.image.resize(img, [input_width, input_height])
+ return img/255.
+
+def run(mini_batch):
+ images_ds = tf.data.Dataset.from_tensor_slices(mini_batch)
+ images_ds = images_ds.map(decode_img).batch(64)
+
+ # perform inference
+ pred = model.predict(images_ds)
+
+    # Compute the top probability and class for each image in the mini-batch
+    pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1), axis=-1).numpy()
+    pred_class = tf.math.argmax(pred, axis=-1).numpy()
+
+    return pd.DataFrame(
+        {
+            "file": [os.path.basename(path) for path in mini_batch],
+            "probability": pred_prob,
+            "class": pred_class,
+        }
+    )
+```
+
+Remarks:
+* Notice that this script is constructing a tensor dataset from the mini-batch sent by the batch deployment. This dataset is preprocessed to obtain the expected tensors for the model using the `map` operation with the function `decode_img`.
+* The dataset is batched again (with a batch size of 64) before sending the data to the model. Use this parameter to control how much information you can load into memory and send to the model at once. If running on a GPU, you will need to carefully tune this parameter to achieve the maximum utilization of the GPU just before getting an OOM exception.
+* Once predictions are computed, the tensors are converted to `numpy.ndarray`.
++
+## Considerations for MLflow models processing images
+
+MLflow models in Batch Endpoints support reading images as input data. Remember that MLflow models don't require a scoring script. Keep the following considerations in mind when using them:
+
+> [!div class="checklist"]
+> * Supported image file types include: `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp` and `.gif`.
+> * MLflow models should expect to receive a `np.ndarray` as input that will match the dimensions of the input image. In order to support multiple image sizes on each batch, the batch executor will invoke the MLflow model once per image file.
+> * MLflow models are highly encouraged to include a signature, and if they do, it must be of type `TensorSpec`. Inputs are reshaped to match the tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred.
+> * For models that include a signature and are expected to handle variable image sizes, include a signature that can guarantee it. For instance, the following signature allows batches of 3-channel images of any size. Specify the signature when you register the model with `mlflow.<flavor>.log_model(..., signature=signature)`.
+
+```python
+import numpy as np
+import mlflow
+from mlflow.models.signature import ModelSignature
+from mlflow.types.schema import Schema, TensorSpec
+
+input_schema = Schema([
+ TensorSpec(np.dtype(np.uint8), (-1, -1, -1, 3)),
+])
+signature = ModelSignature(inputs=input_schema)
+```
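+
+As an illustration only, logging a model together with this signature could look like the following sketch (it assumes MLflow 2.x, where `mlflow.tensorflow.log_model` accepts the model object directly, and a `model` object that is already built):
+
+```python
+import mlflow
+
+# Sketch: `model` is assumed to be an already-built tf.keras model
+with mlflow.start_run():
+    mlflow.tensorflow.log_model(model, artifact_path="model", signature=signature)
+```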
+
+For more information about how to use MLflow models in batch deployments read [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## Next steps
+
+* [Using MLflow models in batch deployments](how-to-mlflow-batch.md)
+* [NLP tasks with batch deployments](how-to-nlp-processing-batch.md)
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
+
+ Title: "Using MLflow models in batch deployments"
+
+description: Learn how to deploy MLflow models in batch deployments
++++++ Last updated : 10/10/2022++++
+# Use MLflow models in batch deployments
++
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure ML for batch inference using batch endpoints. Azure Machine Learning supports no-code deployment of models created and logged with MLflow. This means that you don't have to provide a scoring script or an environment.
+
+For no-code-deployment, Azure Machine Learning
+
+* Provides an MLflow base image/curated environment that contains the required dependencies to run an Azure Machine Learning Batch job.
+* Creates a batch job pipeline with a scoring script for you that can be used to process data using parallelization.
+
+> [!NOTE]
+> For more information about the supported file types in batch endpoints with MLflow, view [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
+
+## About this example
+
+This example shows how you can deploy an MLflow model to a batch endpoint to perform batch predictions. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. The prediction is an integer value from 0 (no presence) to 1 (presence).
+
+The model has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along this sample in the following notebooks. In the cloned repository, open the notebook: [mlflow-for-batch-tabular.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/mlflow-for-batch-tabular.ipynb).
+
+## Prerequisites
++
+* You must have an MLflow model. If your model is not in MLflow format and you want to use this feature, you can [convert your custom ML model to MLflow format](how-to-convert-custom-model-to-mlflow.md).
+
+## Steps
+
+Follow these steps to deploy an MLflow model to a batch endpoint for running batch inference over new data:
+
+1. First, let's connect to the Azure Machine Learning workspace where we are going to work.
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ az account set --subscription <subscription>
+ az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+ ```
+
+ # [Python](#tab/sdk)
+
+ The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+ 1. Import the required libraries:
+
+ ```python
+ from azure.ai.ml import MLClient, Input
+ from azure.ai.ml.entities import BatchEndpoint, BatchDeployment, Model, AmlCompute, Data, BatchRetrySettings
+ from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
+ from azure.identity import DefaultAzureCredential
+ ```
+
+ 2. Configure workspace details and get a handle to the workspace:
+
+ ```python
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+ ```
+
+
+2. Batch Endpoints can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ MODEL_NAME='heart-classifier'
+ az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model"
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ model_name = 'heart-classifier'
+ model_local_path = "heart-classifier-mlflow/model"
+ model = ml_client.models.create_or_update(
+ Model(name=model_name, path=model_local_path, type=AssetTypes.MLFLOW_MODEL)
+ )
+ ```
+
+3. Before moving forward, we need to make sure the batch deployments we are about to create can run on some infrastructure (compute). Batch deployments can run on any Azure ML compute that already exists in the workspace. That means that multiple batch deployments can share the same compute infrastructure. In this example, we are going to work with an Azure ML compute cluster called `cpu-cluster`. Let's verify that the compute exists in the workspace, or create it otherwise.
+
+ # [Azure CLI](#tab/cli)
+
+ Create a compute definition `YAML` like the following one:
+
+ __cpu-cluster.yml__
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
+    name: cpu-cluster
+ type: amlcompute
+ size: STANDARD_DS3_v2
+ min_instances: 0
+ max_instances: 2
+ idle_time_before_scale_down: 120
+ ```
+
+ Create the compute using the following command:
+
+ ```azurecli
+ az ml compute create -f cpu-cluster.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+    To create a new compute cluster where the deployment will run, use the following script:
+
+ ```python
+ compute_name = "cpu-cluster"
+ if not any(filter(lambda m : m.name == compute_name, ml_client.compute.list())):
+ compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=2)
+ ml_client.begin_create_or_update(compute_cluster)
+ ```
+
+4. Now it is time to create the batch endpoint and deployment. Let's start with the endpoint first. Endpoints only require a name and a description to be created:
+
+ # [Azure CLI](#tab/cli)
+
+ To create a new endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
+ name: heart-classifier-batch
+ description: A heart condition classifier for batch inference
+ auth_mode: aad_token
+ ```
+
+ Then, create the endpoint with the following command:
+
+ ```azurecli
+ ENDPOINT_NAME='heart-classifier-batch'
+ az ml batch-endpoint create -n $ENDPOINT_NAME -f endpoint.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ To create a new endpoint, use the following script:
+
+ ```python
+ endpoint = BatchEndpoint(
+ name="heart-classifier-batch",
+ description="A heart condition classifier for batch inference",
+ )
+ ```
+
+ Then, create the endpoint with the following command:
+
+ ```python
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+5. Now, let's create the deployment. MLflow models don't require you to indicate an environment or a scoring script when creating the deployment, as they are created for you. However, you can specify them if you want to customize how the deployment does inference.
+
+ # [Azure CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-mlflow
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ DEPLOYMENT_NAME="classifier-xgboost-mlflow"
+    az ml batch-deployment create -n $DEPLOYMENT_NAME -f deployment.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, first define the deployment:
+
+ ```python
+ deployment = BatchDeployment(
+ name="classifier-xgboost-mlflow",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!NOTE]
+ > `scoring_script` and `environment` auto generation only supports `pyfunc` model flavor. To use a different flavor, see [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+6. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is named the "default" deployment. This gives you the possibility of changing the default deployment, and hence the model serving it, without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ endpoint = ml_client.batch_endpoints.get(endpoint.name)
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+7. At this point, our batch endpoint is ready to be used.
+
+## Testing out the deployment
+
+For testing our endpoint, we are going to use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. In particular, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or if you want to use a different input type.
+
+ # [Azure CLI](#tab/cli)
+
+ a. Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-classifier-mlflow/data
+ ```
+
+ b. Create the data asset:
+
+ ```azurecli
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ a. Create a data asset definition:
+
+ ```python
+ data_path = "heart-classifier-mlflow/data"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ```
+
+ b. Create the data asset:
+
+ ```python
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+ c. Refresh the object to reflect the changes:
+
+ ```python
+ heart_dataset_unlabeled = ml_client.data.get(name=dataset_name)
+ ```
+
+2. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+    JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
+ ```
+
+ > [!NOTE]
+    > The utility `jq` may not be installed on your system. You can get installation instructions at [this link](https://stedolan.github.io/jq/download/).
+
+ # [Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+
+ > [!TIP]
+    > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, that one is the default. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
+
+3. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ az ml job show --name $JOB_NAME
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ ml_client.jobs.get(job.name)
+ ```
+
+## Analyzing the outputs
+
+Output predictions are generated in the `predictions.csv` file as indicated in the deployment configuration. The job generates a named output called `score` where this file is placed. Only one file is generated per batch job.
+
+The file is structured as follows:
+
+* There is one row for each data point that was sent to the model. For tabular data, this means that one row is generated for each row in the input files, and hence the number of rows in the generated file (`predictions.csv`) equals the sum of all the rows in all the processed files. For other data types, there is one row for each processed file.
+* Two columns are indicated:
+ * The file name where the data was read from. In tabular data, use this field to know which prediction belongs to which input data. For any given file, predictions are returned in the same order they appear in the input file so you can rely on the row number to match the corresponding prediction.
+  * The prediction associated with the input data. This value is returned as-is, exactly as it was provided by the model's `predict()` function.
++
+You can download the results of the job by using the job name:
+
+# [Azure CLI](#tab/cli)
+
+To download the predictions, use the following command:
+
+```azurecli
+az ml job download --name $JOB_NAME --output-name score --download-path ./
+```
+
+# [Python](#tab/sdk)
+
+```python
+ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+```
++
+Once the file is downloaded, you can open it using your favorite tool. The following example loads the predictions into a `Pandas` dataframe.
+
+```python
+import pandas as pd
+from ast import literal_eval
+
+with open('named-outputs/score/predictions.csv', 'r') as f:
+ pd.DataFrame(literal_eval(f.read().replace('\n', ',')), columns=['file', 'prediction'])
+```
+
+> [!WARNING]
+> The file `predictions.csv` may not be a regular CSV file and can't be read correctly using the `pandas.read_csv()` method.
+
+The output looks as follows:
+
+| file | prediction |
+| -- | -- |
+| heart-unlabeled-0.csv | 0 |
+| heart-unlabeled-0.csv | 1 |
+| ... | 1 |
+| heart-unlabeled-3.csv | 0 |
+
+> [!TIP]
+> Notice that in this example the input data was tabular data in `CSV` format and there were 4 different input files (heart-unlabeled-0.csv, heart-unlabeled-1.csv, heart-unlabeled-2.csv and heart-unlabeled-3.csv).
+
+## Considerations when deploying to batch inference
+
+Azure Machine Learning supports no-code deployment for batch inference in [managed endpoints](concept-endpoints.md). This represents a convenient way to deploy models that need to process large amounts of data in a batch fashion.
+
+### How work is distributed on workers
+
+Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
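+
+As a purely illustrative example of how these settings interact (the numbers below are hypothetical and not taken from this article's deployments), you can reason about the parallelism as follows:
+
+```python
+# Hypothetical settings, for illustration only
+instance_count = 2                 # number of compute instances used by the deployment
+max_concurrency_per_instance = 2   # scoring processes per instance
+mini_batch_size = 10               # files handed to each scoring call
+total_files = 1000                 # files in the input data asset
+
+parallel_workers = instance_count * max_concurrency_per_instance  # 4 concurrent workers
+mini_batches = total_files // mini_batch_size                     # 100 mini-batches of 10 files
+print(f"{mini_batches} mini-batches distributed across {parallel_workers} workers")
+```
+
+Each call to the scoring logic receives one of those mini-batches of files.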
+
+> [!WARNING]
+> Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
+
+> [!WARNING]
+> Batch deployments will call the `predict` function of the MLflow model once per file. For CSV files containing multiple rows, this may impose memory pressure on the underlying compute. When sizing your compute, take into account not only the memory consumption of the data being read but also the memory footprint of the model itself. This is especially true for models that process text, like transformer-based models, where the memory consumption is not linear with the size of the input. If you encounter several out-of-memory exceptions, consider splitting the data into smaller files with fewer rows, or implement batching at the row level inside of the model/scoring script.
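+
+As a sketch of that row-level batching idea (this requires a custom scoring script, described later in [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script); the chunk size of 500 is an arbitrary placeholder):
+
+```python
+import os
+import mlflow
+import pandas as pd
+
+def init():
+    global model
+
+    # AZUREML_MODEL_DIR is an environment variable created during deployment
+    model = mlflow.pyfunc.load_model(os.path.join(os.environ["AZUREML_MODEL_DIR"], "model"))
+
+def run(mini_batch):
+    results = []
+
+    for file_path in mini_batch:
+        # Read each input CSV in chunks of 500 rows to bound memory usage
+        for chunk in pd.read_csv(file_path, chunksize=500):
+            pred = pd.DataFrame(model.predict(chunk), columns=["predictions"])
+            pred["file"] = os.path.basename(file_path)
+            results.append(pred)
+
+    return pd.concat(results)
+```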
+
+### Supported file types
+
+The following data types are supported for batch inference when deploying MLflow models without an environment and a scoring script:
+
+| File extension | Type returned as model's input | Signature requirement |
+| :- | :- | :- |
+| `.csv` | `pd.DataFrame` | `ColSpec`. If not provided, column typing is not enforced. |
+| `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.gif` | `np.ndarray` | `TensorSpec`. Input is reshaped to match the tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred. For additional guidance, read [Considerations for MLflow models processing images](how-to-image-processing-batch.md#considerations-for-mlflow-models-processing-images). |
+
+> [!WARNING]
+> Be advised that any unsupported file that may be present in the input data will make the job fail. You will see an error entry as follows: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.parquet'. File type 'parquet' is not supported."*.
+
+> [!TIP]
+> If you need to process a different file type, or execute inference in a different way than batch endpoints do by default, you can always create the deployment with a scoring script as explained in [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+### Signature enforcement for MLflow models
+
+Input data types are enforced by batch deployment jobs while reading the data, using the available MLflow model signature. This means that your data input should comply with the types indicated in the model signature. If the data can't be parsed as expected, the job will fail with an error message similar to the following one: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.csv'. Exception: invalid literal for int() with base 10: 'value'"*.
+
+> [!TIP]
+> Signatures in MLflow models are optional but they are highly encouraged, as they provide a convenient way to detect data compatibility issues early. For more information about how to log models with signatures, read [Logging models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).
+
+You can inspect the signature of your model by opening the `MLmodel` file associated with your MLflow model. For more details about how signatures work in MLflow, see [Signatures in MLflow](concept-mlflow-models.md#signatures).
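+
+As a quick alternative sketch, you can also load the model with the `pyfunc` flavor and print its input schema (the local path below assumes the layout of the examples repository):
+
+```python
+import mlflow
+
+# Assumes a local copy of the model as found in the examples repository
+model = mlflow.pyfunc.load_model("heart-classifier-mlflow/model")
+print(model.metadata.get_input_schema())
+```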
+
+### Flavor support
+
+Batch deployments only support deploying MLflow models with a `pyfunc` flavor. If you need to deploy a different flavor, see [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
+
+## Using MLflow models with a scoring script
+
+MLflow models can be deployed to batch endpoints without indicating a scoring script in the deployment definition. However, you can opt in to indicate this file (usually referred to as the *batch driver*) to customize how inference is executed.
+
+You will typically select this workflow when:
+> [!div class="checklist"]
+> * You need to process a file type not supported by MLflow batch deployments.
+> * You need to customize the way the model is run, for instance, use a specific flavor to load it with `mlflow.<flavor>.load_model()`.
+> * You need to do pre/post processing in your scoring routine when it is not done by the model itself.
+> * The output of the model can't be nicely represented in tabular data. For instance, it is a tensor representing an image.
+> * Your model can't process each file at once because of memory constraints and needs to read it in chunks.
+
+> [!IMPORTANT]
+> If you choose to indicate a scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
+
+> [!WARNING]
+> Customizing the scoring script for MLflow deployments is only available from the Azure CLI or SDK for Python. If you are creating a deployment using [Azure ML studio UI](https://ml.azure.com), please switch to the CLI or the SDK.
++
+### Steps
+
+Use the following steps to deploy an MLflow model with a custom scoring script.
+
+1. Create a scoring script:
+
+ __batch_driver.py__
+
+ ```python
+ import os
+ import mlflow
+ import pandas as pd
+
+ def init():
+ global model
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+ # It is the path to the model folder
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+ model = mlflow.pyfunc.load_model(model_path)
+
+ def run(mini_batch):
+ results = pd.DataFrame(columns=['file', 'predictions'])
+
+ for file_path in mini_batch:
+ data = pd.read_csv(file_path)
+ pred = model.predict(data)
+
+ df = pd.DataFrame(pred, columns=['predictions'])
+ df['file'] = os.path.basename(file_path)
+ results = pd.concat([results, df])
+
+ return results
+ ```
+
+1. Let's create an environment where the scoring script can be executed. Since our model is MLflow, the conda requirements are also specified in the model package (for more details about MLflow models and the files included in them, see [The MLmodel format](concept-mlflow-models.md#the-mlmodel-format)). We are then going to build the environment using the conda dependencies from the file. However, __we also need to include__ the package `azureml-core`, which is required for Batch Deployments.
+
+ > [!TIP]
+ > If your model is already registered in the model registry, you can download/copy the `conda.yml` file associated with your model by going to [Azure ML studio](https://ml.azure.com) > Models > Select your model from the list > Artifacts. Open the root folder in the navigation and select the `conda.yml` file listed. Click on Download or copy its content.
+
+ > [!IMPORTANT]
+ > This example uses a conda environment specified at `/heart-classifier-mlflow/environment/conda.yaml`. This file was created by combining the original MLflow conda dependencies file and adding the package `azureml-core`. __You can't use the `conda.yml` file from the model directly__.
+
+ # [Azure CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./heart-classifier-mlflow/environment/conda.yaml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+1. Let's create the deployment now:
+
+ # [Azure CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost-custom
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ environment:
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
+ conda_file: ./heart-classifier-mlflow/environment/conda.yaml
+ code_configuration:
+ code: ./heart-classifier-custom/code/
+ scoring_script: batch_driver.py
+ compute: azureml:cpu-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```azurecli
+ az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Python](#tab/sdk)
+
+ To create a new deployment under the created endpoint, use the following script:
+
+ ```python
+ deployment = BatchDeployment(
+ name="classifier-xgboost-custom",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./heart-classifier-mlflow/code/",
+ scoring_script="batch_driver.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+1. At this point, our batch endpoint is ready to be used.
+
+## Next steps
+
+* [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md)
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
+
+ Title: "Text processing with batch deployments"
+
+description: Learn how to use batch deployments to process text and output results.
++++++ Last updated : 10/10/2022++++
+# Text processing with batch deployments
++
+Batch Endpoints can be used for processing tabular data, but also any other file type, like text. Those deployments are supported for both MLflow and custom models. In this tutorial, we will learn how to deploy a model from HuggingFace that can perform text summarization of long sequences of text.
+
+## About this sample
+
+The model we are going to work with was built using the popular `transformers` library from HuggingFace along with [a pre-trained model from Facebook with the BART architecture](https://huggingface.co/facebook/bart-large-cnn). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation](https://arxiv.org/abs/1910.13461). This model has the following constraints that are important to keep in mind for deployment:
+
+* It can work with sequences up to 1024 tokens.
+* It is trained for summarization of text in English.
+* We are going to use TensorFlow as a backend.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [text-summarization-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/text-summarization-batch.ipynb).
+
+## Prerequisites
++
+* You must have an endpoint already created. If you don't, please follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `text-summarization-batch`.
+* You must have a compute created where the deployment will run. If you don't, please follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+* Due to the size of the model, it hasn't been included in this repository. Instead, you can generate a local copy with the following code. A local copy of the model will be placed at `bart-text-summarization/model`. We will use it during the course of this tutorial.
+
+ ```python
+ from transformers import pipeline
+
+    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
+ model_local_path = 'bart-text-summarization/model'
+ summarizer.save_pretrained(model_local_path)
+ ```
+
+## NLP tasks with batch deployments
+
+In this example, we are going to learn how to deploy a deep learning model based on the BART architecture that can perform text summarization over text in English. The text will be placed in CSV files for convenience.
+
+### Registering the model
+
+Batch Endpoints can only deploy registered models. In this case, we need to publish the model we have just downloaded from HuggingFace. You can skip this step if the model you are trying to deploy is already registered.
+
+# [Azure ML CLI](#tab/cli)
+
+```bash
+MODEL_NAME='bart-text-summarization'
+az ml model create --name $MODEL_NAME --type "custom_model" --path "bart-text-summarization/model"
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+model_name = 'bart-text-summarization'
+model = ml_client.models.create_or_update(
+ Model(name=model_name, path='bart-text-summarization/model', type=AssetTypes.CUSTOM_MODEL)
+)
+```
++
+### Creating a scoring script
+
+We need to create a scoring script that can read the CSV files provided by the batch deployment and return the scores of the model with the summaries. The following script:
+
+> [!div class="checklist"]
+> * Defines an `init` function that loads the model using `transformers`. Notice that the tokenizer of the model is loaded separately to account for the limitation in the sequence lengths of the model we are currently using.
+> * Defines a `run` function that is executed for each mini-batch the batch deployment provides.
+> * The `run` function reads the entire batch using the `datasets` library. The text we need to summarize is in the column `text`.
+> * The `run` method iterates over each of the rows of the text and runs the prediction. Since this is a very expensive model, running the prediction over entire files would result in an out-of-memory exception. Notice that the model is not executed with the `pipeline` object from `transformers`. This is done to account for long sequences of text and the limitation of 1024 tokens in the underlying model we are using.
+> * It returns the summary of the provided text.
+
+__transformer_scorer.py__
+
+```python
+import os
+import numpy as np
+from transformers import pipeline, AutoTokenizer, TFBartForConditionalGeneration
+from datasets import load_dataset
+
+def init():
+ global model
+ global tokenizer
+
+ # AZUREML_MODEL_DIR is an environment variable created during deployment
+    # Change "model" to the name of the folder used by your model, or the model file name.
+ model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+ # load the model
+ tokenizer = AutoTokenizer.from_pretrained(model_path, truncation=True, max_length=1024)
+ model = TFBartForConditionalGeneration.from_pretrained(model_path)
+
+def run(mini_batch):
+ resultList = []
+
+ ds = load_dataset('csv', data_files={ 'score': mini_batch})
+ for text in ds['score']['text']:
+ # perform inference
+ input_ids = tokenizer.batch_encode_plus([text], truncation=True, padding=True, max_length=1024)['input_ids']
+ summary_ids = model.generate(input_ids, max_length=130, min_length=30, do_sample=False)
+ summaries = [tokenizer.decode(s, skip_special_tokens=True, clean_up_tokenization_spaces=False) for s in summary_ids]
+
+ # Get results:
+ resultList.append(summaries[0])
+
+ return resultList
+```
+
+> [!TIP]
+> Although files are provided in mini-batches by the deployment, this scoring script processes one row at a time. This is a common pattern when dealing with expensive models (like transformers), as trying to load the entire batch and send it to the model at once may result in high memory pressure on the batch executor (OOM exceptions).
++
+### Creating the deployment
+
+Once the scoring script is created, it's time to create a batch deployment for it. Follow these steps to create it:
+
+1. We need to indicate the environment in which we are going to run the deployment. In our case, our model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so we can reuse this environment. We are just going to add a couple of dependencies in a `conda.yml` file, including the libraries `transformers` and `datasets`.
+
+ # [Azure ML CLI](#tab/cli)
+
+ No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ environment = Environment(
+ conda_file="./bart-text-summarization/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest",
+ )
+ ```
+
+2. Now, let's create the deployment.
+
+ > [!NOTE]
+ > This example assumes you have an endpoint created with the name `text-summarization-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: text-summarization-batch
+ name: text-summarization-hfbart
+ description: A text summarization deployment implemented with HuggingFace and BART architecture
+ model: azureml:bart-text-summarization@latest
+ compute: azureml:cpu-cluster
+ environment:
+ image: mcr.microsoft.com/azureml/tensorflow-2.4-ubuntu18.04-py37-cpu-inference:latest
+ conda_file: ./bart-text-summarization/environment/conda.yml
+ code_configuration:
+ code: ./bart-text-summarization/code/
+ scoring_script: transformer_scorer.py
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 1
+ mini_batch_size: 1
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 3000
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```bash
+ DEPLOYMENT_NAME="text-summarization-hfbart"
+    az ml batch-deployment create -f deployment.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new deployment with the indicated environment and scoring script use the following code:
+
+ ```python
+ deployment = BatchDeployment(
+ name="text-summarization-hfbart",
+ description="A text summarization deployment implemented with HuggingFace and BART architecture",
+ endpoint_name=endpoint.name,
+ model=model,
+ environment=environment,
+ code_configuration=CodeConfiguration(
+ code="./bart-text-summarization/code/",
+        scoring_script="transformer_scorer.py",
+ ),
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=1,
+ mini_batch_size=1,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=3000),
+ logging_level="info",
+ )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+ > [!IMPORTANT]
+    > You will notice in this deployment a high value for `timeout` in the parameter `retry_settings`. This is due to the nature of the model we are running. This is a very expensive model and inference on a single row may take up to 60 seconds. The `timeout` parameter controls how much time the Batch Deployment should wait for the scoring script to finish processing each mini-batch. Since our model runs predictions row by row, processing a long file may take time. Also notice that the number of files per batch is set to 1 (`mini_batch_size=1`). This is again related to the nature of the work we are doing. Processing one file at a time per batch is expensive enough to justify it. You will notice this is a pattern in NLP processing.
+
+3. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is named the "default" deployment. This gives you the possibility of changing the default deployment, and hence the model serving it, without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+4. At this point, our batch endpoint is ready to be used.
++
+## Considerations when deploying models that process text
+
+As mentioned in some of the notes throughout this tutorial, processing text has some peculiarities that require specific configuration for batch deployments. Take the following considerations into account when designing the batch deployment:
+
+> [!div class="checklist"]
+> * Some NLP models may be very expensive in terms of memory and compute time. If this is the case, consider decreasing the number of files included in each mini-batch. In the example above, the number was taken to the minimum, 1 file per batch. While this may not be your case, take into consideration how many files your model can score at a time. Keep in mind that the relationship between the size of the input and the memory footprint of your model may not be linear for deep learning models.
+> * If your model can't even handle one file at a time (like in this example), consider reading the input data in rows/chunks, and implement batching at the row level if you need to achieve higher throughput or hardware utilization (see the sketch after this list).
+> * Set the `timeout` value of your deployment according to how expensive your model is and how much data you expect to process. Remember that the `timeout` indicates the time the batch deployment waits for your scoring script to run for a given mini-batch. If your batches have many files, or files with many rows, this will impact the right value for this parameter.
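+
+As an illustration of the row-level batching mentioned in this list, a variant of the `run` function from `transformer_scorer.py` could score the text in small groups of rows instead of one row at a time (the group size of 8 is an arbitrary placeholder to tune against your memory budget; `tokenizer` and `model` are the globals created in `init`):
+
+```python
+from datasets import load_dataset
+
+def run(mini_batch):
+    results = []
+    rows_per_group = 8  # arbitrary placeholder; tune to the available memory
+
+    ds = load_dataset('csv', data_files={'score': mini_batch})
+    texts = ds['score']['text']
+
+    for start in range(0, len(texts), rows_per_group):
+        group = texts[start:start + rows_per_group]
+
+        # Tokenize and summarize several rows at once
+        inputs = tokenizer(group, truncation=True, padding=True, max_length=1024, return_tensors='tf')
+        summary_ids = model.generate(inputs['input_ids'], max_length=130, min_length=30, do_sample=False)
+
+        results.extend(
+            tokenizer.decode(s, skip_special_tokens=True, clean_up_tokenization_spaces=False)
+            for s in summary_ids
+        )
+
+    return results
+```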
+
+## Considerations for MLflow models that process text
+
+MLflow models in Batch Endpoints support reading CSVs as input data, which may contain long sequences of text. The same considerations mentioned above apply to MLflow models. However, since you are not required to provide a scoring script for an MLflow model deployment, some of those recommendations may be harder to follow.
+
+* Only `CSV` files are supported for MLflow deployments processing text. You will need to author a scoring script if you need to process other file types like `TXT`, `PARQUET`, and so on. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script) for details.
+* Batch deployments will call your MLflow model's predict function with the content of an entire file as a Pandas dataframe. If your input data contains many rows, running a complex model (like the one presented in this tutorial) is likely to result in an out-of-memory exception. If this is your case, consider the following options:
+  * Customize how your model runs predictions and implement batching. To learn how to customize an MLflow model's inference, see [Logging custom models](how-to-log-mlflow-models.md?#logging-custom-models).
+  * Author a scoring script and load your model using `mlflow.<flavor>.load_model()`. See [Using MLflow models with a scoring script](how-to-mlflow-batch.md#using-mlflow-models-with-a-scoring-script) for details, and see the sketch after this list.
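+
+As a minimal sketch (not this article's actual deployment code), a scoring script that loads an MLflow model itself and batches predictions by rows could look like the following. The layout under `AZUREML_MODEL_DIR`, the chunk size, and the expectation that the CSV columns match the model signature are assumptions:
+
+```python
+import glob
+import os
+from typing import List
+
+import mlflow
+import pandas as pd
+
+CHUNK_SIZE = 100  # hypothetical number of rows per prediction call
+
+
+def init():
+    global model
+    # AZUREML_MODEL_DIR points to the folder where the registered model is
+    # placed at runtime; the exact layout underneath depends on how the model
+    # was registered, so the glob below is an assumption.
+    model_path = glob.glob(os.path.join(os.environ["AZUREML_MODEL_DIR"], "*"))[0]
+    model = mlflow.pyfunc.load_model(model_path)
+
+
+def run(mini_batch: List[str]) -> pd.DataFrame:
+    results = []
+    for file_path in mini_batch:
+        data = pd.read_csv(file_path)
+        # Predict in chunks of rows instead of passing the whole file at once.
+        for start in range(0, len(data), CHUNK_SIZE):
+            chunk = data.iloc[start : start + CHUNK_SIZE]
+            results.append(pd.DataFrame({"prediction": model.predict(chunk)}))
+    return pd.concat(results, ignore_index=True)
+```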
++
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
+
+ Title: "Network isolation in batch endpoints"
+
+description: Learn how to deploy Batch Endpoints in private networks with isolation.
++++++ Last updated : 10/10/2022++++
+# Network isolation in batch endpoints
+
+When deploying a machine learning model to a batch endpoint, you can secure its communication using private networks. This article explains the requirements to use batch endpoints in an environment secured by private networks.
+
+## Prerequisites
+
+* A secure Azure Machine Learning workspace. For details about how to create one, see [Create a secure workspace](tutorial-create-secure-workspace.md).
+* For Azure Container Registry in private networks, note that there are [some prerequisites about its configuration](how-to-secure-workspace-vnet.md#prerequisites).
+
+ > [!WARNING]
+  > Azure Container Registries with the Quarantine feature enabled are not supported at the moment.
+
+* Ensure blob, file, queue, and table private endpoints are configured for the storage accounts as explained in [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). Batch deployments require all four of them to work properly.
+
+## Securing batch endpoints
+
+All batch endpoints created inside of a secure workspace are deployed as private batch endpoints by default. No further configuration is required.
+
+> [!IMPORTANT]
+> When working in a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure ML CLI v2 instead for job creation. For details about how to use it, see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-job).
+
+The following diagram shows what the networking looks like for batch endpoints when deployed in a private workspace:
++
+To give the jump host VM (or self-hosted agent VMs, if using [Azure Bastion](../bastion/bastion-overview.md)) access to the resources in the Azure Machine Learning VNet, the previous architecture uses virtual network peering to seamlessly connect the two virtual networks. The two virtual networks then appear as one for connectivity purposes. The traffic between VMs and Azure Machine Learning resources in peered virtual networks uses the Microsoft backbone infrastructure. Like traffic between VMs in the same network, it's routed through Microsoft's private network only.
+
+## Securing batch deployment jobs
+
+Azure Machine Learning batch deployments run on compute clusters. To secure batch deployment jobs, those compute clusters have to be deployed in a virtual network too.
+
+1. Create an Azure Machine Learning [compute cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster).
+2. Ensure all related services have private endpoints configured in the network. Private endpoints are used not only for the Azure Machine Learning workspace, but also for its associated resources such as Azure Storage, Azure Key Vault, or Azure Container Registry. Azure Container Registry is a required service. While securing the Azure Machine Learning workspace with virtual networks, note that there are [some prerequisites about Azure Container Registry](how-to-secure-workspace-vnet.md#prerequisites).
+3. If your compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+
+ > [!TIP]
+ > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
+
+4. An extra NSG may be required depending on your case. See [Limitations for Azure Machine Learning compute cluster](how-to-secure-training-vnet.md#azure-machine-learning-compute-clusterinstance-1).
+
+For more details about how to configure compute cluster networking, read [Secure an Azure Machine Learning training environment with virtual networks](how-to-secure-training-vnet.md#azure-machine-learning-compute-clusterinstance-1).
+
+## Using two-networks architecture
+
+There are cases where the input data is not in the same network as the Azure Machine Learning resources. In those cases, your Azure Machine Learning workspace may need to interact with more than one VNet. You can achieve this configuration by adding an extra set of private endpoints to the VNet where the rest of the resources are located.
+
+The following diagram shows the high level design:
++
+### Considerations
+
+Take the following considerations into account when using this architecture:
+
+* Put the second set of private endpoints in a different resource group and hence in different private DNS zones. This prevents a name resolution conflict between the set of IPs used for the workspace and the ones used by the client VNets. Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today. Note that DNS resolution against a private DNS zone works only from virtual networks that are linked to it. For more details, see [recommended zone names for Azure services](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
+* For your storage accounts, add 4 private endpoints in each VNet for blob, file, queue, and table as explained at [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
++
+## Recommended read
+
+* [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](how-to-network-security-overview.md)
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
+
+ Title: "Troubleshooting batch endpoints"
+
+description: Learn how to troubleshoot and diagnose errors with batch endpoint jobs
++++++ Last updated : 10/10/2022++++
+# Troubleshooting batch endpoints
++
+Learn how to troubleshoot and solve, or work around, common errors you may come across when using [batch endpoints](how-to-use-batch-endpoint.md) for batch scoring.
+
+## Understanding logs of a batch scoring job
+
+### Get logs
+
+After you invoke a batch endpoint using the Azure CLI or REST, the batch scoring job will run asynchronously. There are two options to get the logs for a batch scoring job.
+
+Option 1: Stream logs to local console
+
+You can run the following command to stream system-generated logs to your console. Only logs in the `azureml-logs` folder will be streamed.
+
+```azurecli
+az ml job stream --name <job_name>
+```
+
+Option 2: View logs in studio
+
+To get the link to the run in studio, run:
+
+```azurecli
+az ml job show --name <job_name> --query interaction_endpoints.Studio.endpoint -o tsv
+```
+
+1. Open the job in studio using the value returned by the above command.
+1. Choose __batchscoring__
+1. Open the __Outputs + logs__ tab
+1. Choose the log(s) you wish to review
+
+### Understand log structure
+
+There are two top-level log folders, `azureml-logs` and `logs`.
+
+The file `~/azureml-logs/70_driver_log.txt` contains information from the controller that launches the scoring script.
+
+Because of the distributed nature of batch scoring jobs, there are logs from several different sources. However, two combined files are created that provide high-level information:
+
+- `~/logs/job_progress_overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. As the mini-batches end, the log records the results of the job. If the job failed, it will show the error message and where to start the troubleshooting.
+
+- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job. This log provides information on task creation, progress monitoring, and the job result.
+
+For a concise understanding of errors in your script there is:
+
+- `~/logs/user/error.txt`: This file will try to summarize the errors in your script.
+
+For more information on errors in your script, there is:
+
+- `~/logs/user/error/`: This folder contains full stack traces of exceptions thrown while loading and running the entry script.
+
+When you need a full understanding of how each node executed the score script, look at the individual process logs for each node. The process logs can be found in the `sys/node` folder, grouped by worker nodes:
+
+- `~/logs/sys/node/<ip_address>/<process_name>.txt`: This file provides detailed info about each mini-batch as it's picked up or completed by a worker. For each mini-batch, this file includes:
+
+ - The IP address and the PID of the worker process.
+ - The total number of items, the number of successfully processed items, and the number of failed items.
+ - The start time, duration, process time, and run method time.
+
+You can also view the results of periodic checks of the resource usage for each node. The log files and setup files are in this folder:
+
+- `~/logs/perf`: Set `--resource_monitor_interval` to change the checking interval in seconds. The default interval is `600`, which is 10 minutes. To stop the monitoring, set the value to `0`. Each `<ip_address>` folder includes:
+
+ - `os/`: Information about all running processes in the node. One check runs an operating system command and saves the result to a file. On Linux, the command is `ps`.
+ - `%Y%m%d%H`: The sub folder name is the time to hour.
+ - `processes_%M`: The file ends with the minute of the checking time.
+ - `node_disk_usage.csv`: Detailed disk usage of the node.
+ - `node_resource_usage.csv`: Resource usage overview of the node.
+ - `processes_resource_usage.csv`: Resource usage overview of each process.
+
+### How to log in the scoring script
+
+You can use Python logging in your scoring script. Logs are stored in `logs/user/stdout/<node_id>/processNNN.stdout.txt`.
+
+```python
+import argparse
+import logging
+
+# Get logging_level
+arg_parser = argparse.ArgumentParser(description="Argument parser.")
+arg_parser.add_argument("--logging_level", type=str, help="logging level")
+args, unknown_args = arg_parser.parse_known_args()
+print(args.logging_level)
+
+# Initialize Python logger
+logger = logging.getLogger(__name__)
+logger.setLevel(args.logging_level.upper())
+logger.info("Info log statement")
+logger.debug("Debug log statement")
+```
+
+## Common issues
+
+The following section contains common problems and solutions you may see during batch endpoint development and consumption.
+
+### No module named 'azureml'
+
+__Message logged__: `No module named 'azureml'`.
+
+__Reason__: Azure Machine Learning Batch Deployments require the package `azureml-core` to be installed.
+
+__Solution__: Add `azureml-core` to your conda dependencies file.
+
+### Output already exists
+
+__Reason__: Azure Machine Learning Batch Deployment can't overwrite the `predictions.csv` file generated by the output.
+
+__Solution__: If you indicate an output location for the predictions, ensure the path points to a file that doesn't exist yet.
+
+### The run() function in the entry script timed out [number] times
+
+__Message logged__: `No progress update in [number] seconds. No progress update in this check. Wait [number] seconds since last update.`
+
+__Reason__: Batch deployments can be configured with a `timeout` value that indicates how long the deployment waits for a single mini-batch to be processed. If the execution of the mini-batch takes longer than this value, the task is aborted. Aborted tasks can be retried up to a configurable maximum number of times. If the `timeout` is reached on every retry, the deployment job fails. These properties can be configured for each deployment.
+
+__Solution__: Increase the `timeout` value by updating the deployment. These properties are configured in the parameter `retry_settings`. By default, `timeout=30` and `retries=3` are configured. When deciding the value of `timeout`, take into consideration the number of files being processed in each mini-batch and the size of each of those files. You can also decrease these values to work with more, smaller mini-batches that execute more quickly.
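+
+As a hedged sketch using the Python SDK (assuming you already have an `MLClient` named `ml_client`, and that the placeholder names and the `timeout` value of 300 are examples only), the retry settings could be updated like this:
+
+```python
+from azure.ai.ml.entities import BatchRetrySettings
+
+# Retrieve the deployment, give each mini-batch more time, and keep up to 3 retries.
+deployment = ml_client.batch_deployments.get(name="<deployment-name>", endpoint_name="<endpoint-name>")
+deployment.retry_settings = BatchRetrySettings(max_retries=3, timeout=300)
+ml_client.batch_deployments.begin_create_or_update(deployment)
+```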
+
+### Dataset initialization failed
+
+__Message logged__: Dataset initialization failed: UserErrorException: Message: Cannot mount Dataset(id='xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', name='None', version=None). Source of the dataset is either not accessible or does not contain any data.
+
+__Reason__: The compute cluster where the deployment is running can't mount the storage where the data asset is located. The managed identity of the compute doesn't have permissions to perform the mount.
+
+__Solution__: Ensure the identity associated with the compute cluster where your deployment is running has at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
+
+### Data set node [code] references parameter dataset_param which doesn't have a specified value or a default value
+
+__Message logged__: Data set node [code] references parameter dataset_param which doesn't have a specified value or a default value.
+
+__Reason__: The input data asset provided to the batch endpoint isn't supported.
+
+__Solution__: Ensure you are providing a data input that is supported for batch endpoints.
+
+### User program failed with Exception: Run failed, please check logs for details
+
+__Message logged__: User program failed with Exception: Run failed, please check logs for details. You can check logs/readme.txt for the layout of logs.
+
+__Reason__: There was an error while running the `init()` or `run()` function of the scoring script.
+
+__Solution__: Go to __Outputs + Logs__ and open the file at `logs > user > error > 10.0.0.X > process000.txt`. You will see the error message generated by the `init()` or `run()` method.
+
+### There is no succeeded mini batch item returned from run()
+
+__Message logged__: There is no succeeded mini batch item returned from run(). Please check 'response: run()' in https://aka.ms/batch-inference-documentation.
+
+__Reason__: The batch endpoint failed to provide data in the expected format to the `run()` method. This may be due to corrupted files being read or incompatibility of the input data with the signature of the model (MLflow).
+
+__Solution__: To understand what may be happening, go to __Outputs + Logs__ and open the file at `logs > user > stdout > 10.0.0.X > process000.stdout.txt`. Look for error entries like `Error processing input file`. You should find details there about why the input file can't be read correctly.
+
+### Audiences in JWT are not allowed
+
+__Context__: When invoking a batch endpoint using its REST APIs.
+
+__Reason__: The access token used to invoke the REST API for the endpoint/deployment was issued for a different audience/service. Azure Active Directory tokens are issued for specific actions.
+
+__Solution__: When generating an authentication token to be used with the batch endpoint REST API, ensure the `resource` parameter is set to `https://ml.azure.com`. Notice that this resource is different from the resource you need to indicate to manage the endpoint using the REST API. All Azure resources (including batch endpoints) use the resource `https://management.azure.com` for management operations. Ensure you use the right resource URI in each case. Notice that if you want to use the management API and the job invocation API at the same time, you will need two tokens. For details, see [Authentication on batch endpoints (REST)](how-to-authenticate-batch-endpoint.md?tabs=rest).
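+
+As a minimal sketch with the `azure-identity` Python library (an assumption; any Azure Active Directory client library can be used), the two different tokens could be obtained like this:
+
+```python
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+
+# Token for invoking the batch endpoint (job creation).
+invoke_token = credential.get_token("https://ml.azure.com/.default").token
+
+# Token for managing the endpoint through the Azure Resource Manager REST API.
+management_token = credential.get_token("https://management.azure.com/.default").token
+```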
machine-learning How To Use Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md
+
+ Title: "Invoking batch endpoints from Azure Data Factory"
+
+description: Learn how to use Azure Data Factory to invoke Batch Endpoints.
++++++ Last updated : 10/10/2022++++
+# Invoking batch endpoints from Azure Data Factory
++
+Big data requires a service that can orchestrate and operationalize processes to refine these enormous stores of raw data into actionable business insights. [Azure Data Factory](../data-factory/introduction.md) is a managed cloud service that's built for these complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.
+
+Azure Data Factory allows the creation of pipelines that can orchestrate multiple data transformations and manage them as a single unit. Batch endpoints are an excellent candidate to become a step in such processing workflow. In this example, learn how to use batch endpoints in Azure Data Factory activities by relying on the Web Invoke activity and the REST API.
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* An Azure Data Factory resource created and configured. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](../data-factory/quickstart-create-data-factory-portal.md) to create one.
+* After creating it, browse to the data factory in the Azure portal:
+
+ :::image type="content" source="../data-factory/media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot of the home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+* Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
+
+## Authenticating against batch endpoints
+
+Azure Data Factory can invoke the REST APIs of batch endpoints by using the [Web Invoke](../data-factory/control-flow-web-activity.md) activity. Batch endpoints support Azure Active Directory for authorization, and hence requests made to the APIs require proper authentication handling.
+
+You can use a service principal or a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate against Batch Endpoints. We recommend using a managed identity as it simplifies the use of secrets.
+
+> [!IMPORTANT]
+> When your data is stored in cloud locations instead of Azure Machine Learning Data Stores, the identity of the compute is used to read the data instead of the identity used to invoke the endpoint.
+
+# [Using a Managed Identity](#tab/mi)
+
+1. You can use Azure Data Factory managed identity to communicate with Batch Endpoints. In this case, you only need to make sure that your Azure Data Factory resource was deployed with a managed identity.
+2. If you don't have an Azure Data Factory resource, or it was deployed without a managed identity, follow these steps to create one: [Managed identity for Azure Data Factory](../data-factory/data-factory-service-identity.md#system-assigned-managed-identity).
+
+    > [!WARNING]
+    > Notice that changing the resource identity after deployment is not possible in Azure Data Factory. Once the resource is created, you will need to recreate it if you need to change its identity.
+
+3. Once deployed, grant access for the managed identity of the resource you created to your Azure Machine Learning workspace as explained at [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example, the managed identity will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
+ 2. Permissions to read in any cloud location (storage account) indicated as a data input.
+
+# [Using a Service Principal](#tab/sp)
+
+1. Create a service principal following the steps at [Register an application with Azure AD and create a service principal](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Take note of the `client secret` generated.
+1. Take note of the `client ID` and the `tenant id` as explained at [Get tenant and app ID values for signing in](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Grant access for the service principal you created to your workspace as explained at [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example the service principal will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
++
+## About the pipeline
+
+We are going to create a pipeline in Azure Data Factory that can invoke a given batch endpoint over some data. The pipeline will communicate with Azure Machine Learning batch endpoints using REST. To know more about how to use the REST API of batch endpoints read [Deploy models with REST for batch scoring](how-to-deploy-batch-with-rest.md).
+
+The pipeline will look as follows:
+
+# [Using a Managed Identity](#tab/mi)
++
+It is composed of the following activities:
+
+* __Run Batch-Endpoint__: It's a Web Activity that uses the batch endpoint URI to invoke it. It passes the input data URI where the data is located and the expected output file.
+* __Wait for job__: It's a loop activity that checks the status of the created job and waits for its completion, either as **Completed** or **Failed**. This activity, in turn, uses the following activities:
+  * __Check status__: It's a Web Activity that queries the status of the job resource that was returned as a response of the __Run Batch-Endpoint__ activity.
+  * __Wait__: It's a Wait Activity that controls the polling frequency of the job's status. We set a default of 120 seconds (2 minutes).
+
+The pipeline requires the following parameters to be configured:
+
+| Parameter | Description | Sample value |
+| | -|- |
+| `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+| `api_version` | The API version to use with REST API calls. Defaults to `2020-09-01-preview` | `2020-09-01-preview` |
+| `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` |
+| `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you are using for executing the job has access to the underlying location. Alternatively, if using Data Stores, ensure the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |
+| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. | `azureml://datastores/azureml/paths/batch/predictions.csv` |
+
+# [Using a Service Principal](#tab/sp)
++
+It is composed of the following activities:
+
+* __Authorize__: It's a Web Activity that uses the service principal created in [Authenticating against batch endpoints](#authenticating-against-batch-endpoints) to obtain an authorization token. This token will be used to invoke the endpoint later.
+* __Run Batch-Endpoint__: It's a Web Activity that uses the batch endpoint URI to invoke it. It passes the input data URI where the data is located and the expected output file.
+* __Wait for job__: It's a loop activity that checks the status of the created job and waits for its completion, either as **Completed** or **Failed**. This activity, in turn, uses the following activities:
+  * __Authorize Management__: It's a Web Activity that uses the service principal created in [Authenticating against batch endpoints](#authenticating-against-batch-endpoints) to obtain an authorization token to be used for the job's status query.
+  * __Check status__: It's a Web Activity that queries the status of the job resource that was returned as a response of the __Run Batch-Endpoint__ activity.
+  * __Wait__: It's a Wait Activity that controls the polling frequency of the job's status. We set a default of 120 seconds (2 minutes).
+
+The pipeline requires the following parameters to be configured:
+
+| Parameter | Description | Sample value |
+| | -|- |
+| `tenant_id` | Tenant ID where the endpoint is deployed | `00000000-0000-0000-00000000` |
+| `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-00000000` |
+| `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` |
+| `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+| `api_version` | The API version to use with REST API calls. Defaults to `2020-09-01-preview` | `2020-09-01-preview` |
+| `poll_interval` | The number of seconds to wait before checking the job status for completion. Defaults to `120`. | `120` |
+| `endpoint_input_uri` | The endpoint's input data. Multiple data input types are supported. Ensure that the managed identity you are using for executing the job has access to the underlying location. Alternatively, if using Data Stores, ensure the credentials are indicated there. | `azureml://datastores/.../paths/.../data/` |
+| `endpoint_output_uri` | The endpoint's output data file. It must be a path to an output file in a Data Store attached to the Machine Learning workspace. No other type of URI is supported. | `azureml://datastores/azureml/paths/batch/predictions.csv` |
+++
+> [!WARNING]
+> Remember that `endpoint_output_uri` should be the path to a file that doesn't exist yet. Otherwise, the job will fail with the error *the path already exists*.
+
+> [!IMPORTANT]
+> The input data URI can be a path to an Azure Machine Learning data store, data asset, or a cloud URI. Depending on the case, further configuration may be required to ensure the deployment can read the data properly. See [Accessing storage services](how-to-identity-based-service-authentication.md#accessing-storage-services) for details.
+
+## Steps
+
+To create this pipeline in your existing Azure Data Factory, follow these steps:
+
+1. Open Azure Data Factory Studio and under __Factory Resources__ click the plus sign.
+2. Select __Pipeline__ > __Import from pipeline template__
+3. You will be prompted to select a `zip` file. Use [the following template if using managed identities](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-MI.zip) or [the following one if using a service principal](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-SP.zip).
+4. A preview of the pipeline will show up in the portal. Click __Use this template__.
+5. The pipeline will be created for you with the name __Run-BatchEndpoint__.
+6. Configure the parameters of the batch deployment you are using:
+
+ # [Using a Managed Identity](#tab/mi)
+
+ :::image type="content" source="./media/how-to-use-batch-adf/pipeline-params-mi.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline.":::
+
+ # [Using a Service Principal](#tab/sp)
+
+ :::image type="content" source="./media/how-to-use-batch-adf/pipeline-params.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline.":::
+
+
+
+ > [!WARNING]
+ > Ensure that your batch endpoint has a default deployment configured before submitting a job to it. The created pipeline will invoke the endpoint and hence a default deployment needs to be created and configured.
+
+ > [!TIP]
+ > For best reusability, use the created pipeline as a template and call it from within other Azure Data Factory pipelines by leveraging the [Execute pipeline activity](../data-factory/control-flow-execute-pipeline-activity.md). In that case, do not configure the parameters in the created pipeline but pass them when you are executing the pipeline.
+ >
+ > :::image type="content" source="./media/how-to-use-batch-adf/pipeline-run.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline when invoked from another pipeline.":::
+
+7. Your pipeline is ready to be used.
++
+## Limitations
+
+When calling Azure Machine Learning batch deployments, consider the following limitations:
+
+* __Data inputs__:
+ * Only Azure Machine Learning data stores or Azure Storage Accounts (Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2) are supported as inputs. If your input data is in another source, use the Azure Data Factory Copy activity before the execution of the batch job to sink the data to a compatible store.
+ * Ensure the deployment has the required access to read the input data depending on the type of input you are using. See [Accessing storage services](how-to-identity-based-service-authentication.md#accessing-storage-services) for details.
+* __Data outputs__:
+ * Only registered Azure Machine Learning data stores are supported.
+ * Only Azure Blob Storage Accounts are supported for outputs. For instance, Azure Data Lake Storage Gen2 isn't supported as output in batch deployment jobs. If you need to output the data to a different location/sink, use the Azure Data Factory Copy activity after the execution of the batch job.
+
+## Considerations when reading and writing data
+
+When reading and writing data, take into account the following considerations:
+
+* Batch endpoint jobs don't explore nested folders and hence can't work with nested folder structures. If your data is distributed across multiple folders, you will have to flatten the structure.
+* Make sure that the scoring script provided in the deployment can handle the data as it is expected to be fed into the job. If the model is an MLflow model, read about the file types currently supported at [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* Batch endpoints distribute and parallelize the work across multiple workers at the file level. Make sure that each worker node has enough memory to load the entire data file at once and send it to the model. This is especially true for tabular data.
+* When estimating the memory consumption of your jobs, take into account the model memory footprint too. Some models, like transformers in NLP, don't have a linear relationship between the size of the inputs and the memory consumption. In those cases, consider further partitioning your data into multiple files to allow a greater degree of parallelization with smaller files (see the sketch after this list).
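+
+The following is a minimal sketch (not part of this article's pipeline) of pre-partitioning a large CSV with pandas before invoking the batch job; the file names and chunk size are assumptions:
+
+```python
+from pathlib import Path
+
+import pandas as pd
+
+ROWS_PER_FILE = 10_000  # hypothetical chunk size; tune it to your model's memory needs
+output_dir = Path("partitioned")
+output_dir.mkdir(exist_ok=True)
+
+# Split the large input file into smaller files so each worker handles less data.
+for i, chunk in enumerate(pd.read_csv("large_input.csv", chunksize=ROWS_PER_FILE)):
+    chunk.to_csv(output_dir / f"part_{i:05d}.csv", index=False)
+```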
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
+
+ Title: 'Use batch endpoints for batch scoring'
+
+description: In this article, learn how to create a batch endpoint to continuously batch score large data.
+++++++ Last updated : 11/04/2022+
+#Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
++
+# Use batch endpoints for batch scoring
++
+Batch endpoints provide a convenient way to run inference over large volumes of data. They simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints?](./concept-endpoints.md).
+
+Use batch endpoints when:
+
+> [!div class="checklist"]
+> * You have expensive models that require a longer time to run inference.
+> * You need to perform inference over large amounts of data, distributed in multiple files.
+> * You don't have low latency requirements.
+> * You can take advantage of parallelization.
+
+In this article, you will learn how to use batch endpoints to do batch scoring.
+
+> [!TIP]
+> We suggest you read the Scenarios sections (see the navigation bar on the left) to learn more about how to use batch endpoints in specific scenarios, including NLP and computer vision, and how to integrate them with other Azure services.
+
+## About this example
+
+In this example, we are going to deploy a model to solve the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem and perform batch inferencing over large amounts of data (image files). In the first section of this tutorial, we create a batch deployment with a model created using Torch. That deployment will become the default one in the endpoint. In the second half, [we see how to create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.
+
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli/endpoints/batch
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in the following notebook. In the cloned repository, open the notebook: [mnist-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/mnist-batch.ipynb).
+
+## Prerequisites
++
+### Connect to your workspace
+
+First, let's connect to the Azure Machine Learning workspace where we are going to work.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az account set --subscription <subscription>
+az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+```
+
+# [Python](#tab/python)
+
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+1. Import the required libraries:
+
+```python
+from azure.ai.ml import MLClient, Input
+from azure.ai.ml.entities import BatchEndpoint, BatchDeployment, Model, AmlCompute, Data, BatchRetrySettings, Environment
+from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
+from azure.identity import DefaultAzureCredential
+```
+
+2. Configure workspace details and get a handle to the workspace:
+
+```python
+subscription_id = "<subscription>"
+resource_group = "<resource-group>"
+workspace = "<workspace>"
+
+ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+```
+
+# [Studio](#tab/azure-studio)
+
+Open the [Azure ML studio portal](https://ml.azure.com) and log in using your credentials.
+++
+### Create compute
+
+Batch endpoints run on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource, so one cluster can host one or many batch deployments (along with other workloads if desired).
+
+Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`.
+
+# [Azure CLI](#tab/azure-cli)
++
+# [Python](#tab/python)
+
+```python
+compute_name = "batch-cluster"
+compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=5)
+ml_client.begin_create_or_update(compute_cluster)
+```
+
+# [Studio](#tab/azure-studio)
+
+*Create a compute cluster as explained in the following tutorial [Create an Azure Machine Learning compute cluster](./how-to-create-attach-compute-cluster.md?tabs=azure-studio).*
+++
+> [!NOTE]
+> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
++
+### Registering the model
+
+Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you are trying to deploy is already registered. In this case, we are registering a Torch model for the popular digit recognition problem (MNIST).
+
+> [!TIP]
+> Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions as long as they are deployed in different deployments.
+
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+MODEL_NAME='mnist'
+az ml model create --name $MODEL_NAME --type "custom_model" --path "./mnist/model/"
+```
+
+# [Python](#tab/python)
+
+```python
+model_name = 'mnist'
+model = ml_client.models.create_or_update(
+ Model(name=model_name, path='./mnist/model/', type=AssetTypes.CUSTOM_MODEL)
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Models__ tab on the side menu.
+1. Click on __Register__ > __From local files__.
+1. In the wizard, leave the option *Model type* as __Unspecified type__.
+1. Click on __Browse__ > __Browse folder__ > Select the folder `./mnist/model/` > __Next__.
+1. Configure the name of the model: `mnist`. You can leave the rest of the fields as they are.
+1. Click on __Register__.
+++
+## Create a batch endpoint
+
+A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](./concept-endpoints.md#what-are-batch-endpoints)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
+
+> [!TIP]
+> One of the batch deployments will serve as the default deployment for the endpoint. The default deployment will be used to do the actual batch scoring when the endpoint is invoked. Learn more about [batch endpoints and batch deployment](./concept-endpoints.md#what-are-batch-endpoints).
+
+### Steps
+
+1. Decide on the name of the endpoint. The name of the endpoint will end up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ ```azurecli
+ ENDPOINT_NAME="mnist-batch"
+ ```
+
+ # [Python](#tab/python)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ ```python
+ endpoint_name="mnist-batch"
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ *You will configure the name of the endpoint later in the creation wizard.*
+
+
+1. Configure your batch endpoint
+
+ # [Azure CLI](#tab/azure-cli)
+
+ The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/batch-endpoint.yml`.
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-endpoint.yml":::
+
+ The following table describes the key properties of the endpoint. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+ | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
+ | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
+
+ # [Python](#tab/python)
+
+ ```python
+ # create a batch endpoint
+ endpoint = BatchEndpoint(
+ name=endpoint_name,
+ description="A batch endpoint for scoring images from the MNIST dataset.",
+ )
+ ```
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+ | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
+ | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
+
+ # [Studio](#tab/azure-studio)
+
+ *You will create the endpoint in the same step you create the deployment.*
+
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/azure-cli)
+
+    Run the following code to create the batch endpoint.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_batch_endpoint" :::
+
+ # [Python](#tab/python)
+
+ ```python
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+ # [Studio](#tab/azure-studio)
+
+ *You will create the endpoint in the same step you are creating the deployment later.*
++
+## Create a scoring script
+
+Batch deployments require a scoring script that indicates how the given model should be executed and how input data must be processed. In this case, we are deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script looks as follows:
+
+> [!NOTE]
+> For MLflow models this scoring script is not required as it is automatically generated by Azure Machine Learning. If your model is an MLflow model, you can skip this step. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+> [!TIP]
+> For more information about how to write scoring scripts and best practices for it please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md).
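+
+The following is only a hedged sketch of the `init()`/`run()` structure such a script follows; the model file name, loading code, and preprocessing here are illustrative assumptions rather than the repository's actual `batch_driver.py`:
+
+```python
+import os
+from typing import List
+
+import pandas as pd
+import torch
+from PIL import Image
+from torchvision import transforms
+
+
+def init():
+    global model
+    # AZUREML_MODEL_DIR points to the registered model folder at runtime;
+    # the file name "model.pt" is an assumption for illustration.
+    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pt")
+    model = torch.load(model_path, map_location="cpu")
+    model.eval()
+
+
+def run(mini_batch: List[str]) -> pd.DataFrame:
+    # mini_batch is a list of file paths; score each image and return one row per file.
+    to_tensor = transforms.Compose([transforms.Grayscale(), transforms.ToTensor()])
+    results = []
+    with torch.no_grad():
+        for image_path in mini_batch:
+            image = to_tensor(Image.open(image_path)).unsqueeze(0)
+            digit = model(image).argmax(dim=1).item()
+            results.append({"file": os.path.basename(image_path), "digit": digit})
+    return pd.DataFrame(results)
+```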
++
+## Create a batch deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch deployment, you need all the following items:
+
+* A registered model in the workspace.
+* The code to score the model.
+* The environment in which the model runs.
+* The pre-created compute and resource settings.
+
+1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. You will also need to add the library `azureml-core` as it is required for batch deployments to work.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ *No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file as an anonymous environment.*
+
+ # [Python](#tab/python)
+
+ Let's get a reference to the environment:
+
+ ```python
+ env = Environment(
+ conda_file="./mnist/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Environments__ tab on the side menu.
+ 1. Select the tab __Custom environments__ > __Create__.
+ 1. Enter the name of the environment, in this case `torch-batch-env`.
+ 1. On __Select environment type__ select __Use existing docker image with conda__.
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+ 1. On __Customize__ section copy the content of the file `./mnist/environment/conda.yml` included in the repository into the portal. The conda file looks as follows:
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist/environment/conda.yml":::
+
+ 1. Click on __Next__ and then on __Create__.
+ 1. The environment is ready to be used.
+
+
+
+ > [!WARNING]
+ > Curated environments are not supported in batch deployments. You will need to indicate your own environment. You can always use the base image of a curated environment as yours to simplify the process.
+
+ > [!IMPORTANT]
+ > Do not forget to include the library `azureml-core` in your deployment as it is required by the executor.
+
+
+1. Create a deployment definition
+
+ # [Azure CLI](#tab/azure-cli)
+
+ __mnist-torch-deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-torch-deployment.yml":::
+
+ For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the deployment. |
+ | `endpoint_name` | The name of the endpoint to create the deployment under. |
+ | `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](./reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
+ | `code_configuration.code.path` | The local directory that contains all the Python source code to score the model. |
+ | `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](how-to-batch-scoring-script.md#understanding-the-scoring-script). |
+ | `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](./reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
+ | `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using `azureml:<compute-name>` syntax. |
+ | `resources.instance_count` | The number of instances to be used for each batch scoring job. |
+ | `max_concurrency_per_instance` | [Optional] The maximum number of parallel `scoring_script` runs per instance. |
+ | `mini_batch_size` | [Optional] The number of files the `scoring_script` can process in one `run()` call. |
+ | `output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and only calculate `error_threshold`. |
+ | `output_file_name` | [Optional] The name of the batch scoring output file for `append_row` `output_action`. |
+ | `retry_settings.max_retries` | [Optional] The number of max tries for a failed `scoring_script` `run()`. |
+ | `retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
+ | `error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
+ | `logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
+
+ # [Python](#tab/python)
+
+ ```python
+ deployment = BatchDeployment(
+ name="mnist-torch-dpl",
+ description="A deployment using Torch to solve the MNIST classification dataset.",
+        endpoint_name=endpoint_name,
+ model=model,
+ code_path="./mnist/code/",
+ scoring_script="batch_driver.py",
+ environment=env,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
+ logging_level="info",
+ )
+ ```
+
+ This class allows user to configure the following key aspects.
+ * `name` - Name of the deployment.
+ * `endpoint_name` - Name of the endpoint to create the deployment under.
+ * `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+ * `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+ * `code_path`- Path to the source code directory for scoring the model
+ * `scoring_script` - Relative path to the scoring file in the source code directory
+ * `compute` - Name of the compute target to execute the batch scoring jobs on
+ * `instance_count`- The number of nodes to use for each batch scoring job.
+ * `max_concurrency_per_instance`- The maximum number of parallel scoring_script runs per instance.
+    * `mini_batch_size` - The number of files the `code_configuration.scoring_script` can process in one `run()` call.
+ * `retry_settings`- Retry settings for scoring each mini batch.
+ * `max_retries`- The maximum number of retries for a failed or timed-out mini batch (default is 3)
+ * `timeout`- The timeout in seconds for scoring a mini batch (default is 30)
+ * `output_action`- Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row`
+ * `output_file_name`- Name of the batch scoring output file. Default is `predictions.csv`
+ * `environment_variables`- Dictionary of environment variable name-value pairs to set for each batch scoring job.
+ * `logging_level`- The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`.
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__ > __Create__.
+ 1. Give the endpoint a name, in this case `mnist-batch`. You can configure the rest of the fields or leave them blank.
+ 1. Click on __Next__.
+ 1. On the model list, select the model `mnist` and click on __Next__.
+ 1. On the deployment configuration page, give the deployment a name.
+ 1. On __Output action__, ensure __Append row__ is selected.
+ 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+    1. On __Mini batch size__, adjust the number of files that will be included in each mini-batch. This controls the amount of data your scoring script receives per batch.
+    1. On __Scoring timeout (seconds)__, ensure you are giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+    1. On __Max concurrency per instance__, configure the number of executors you want per compute instance in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+ 1. Once done, click on __Next__.
+ 1. On environment, go to __Select scoring file and dependencies__ and click on __Browse__.
+ 1. Select the scoring script file on `/mnist/code/batch_driver.py`.
+ 1. On the section __Choose an environment__, select the environment you created a previous step.
+ 1. Click on __Next__.
+ 1. On the section __Compute__, select the compute cluster you created in a previous step.
+
+    > [!WARNING]
+    > Azure Kubernetes clusters are supported in batch deployments, but only when created using the Azure ML CLI or Python SDK.
+
+ 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we will use 2.
+ 1. Click on __Next__.
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_batch_deployment_set_default" :::
+
+ > [!TIP]
+ > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#adding-deployments-to-an-endpoint) section.
+
+ # [Python](#tab/python)
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+ Once the deployment is completed, we need to ensure the new deployment is the default deployment in the endpoint:
+
+ ```python
+ endpoint = ml_client.batch_endpoints.get(endpoint_name)
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ In the wizard, click on __Create__ to start the deployment process.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
+
+
+
+ > [!NOTE]
+ > __How is work distributed?__:
+ >
+ > Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution.
+
+1. Check batch endpoint and deployment details.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Use `show` to check endpoint and deployment details. To check a batch deployment, run the following code:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="check_batch_deployment_detail" :::
+
+
+ # [Python](#tab/python)
+
+ To check a batch deployment, run the following code:
+
+ ```python
+ ml_client.batch_deployments.get(name=deployment.name, endpoint_name=endpoint.name)
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__.
+ 1. Select the batch endpoint you want to get details from.
+ 1. In the endpoint page, you will see all the details of the endpoint along with all the deployments available.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
+
+## Invoke the batch endpoint to start a batch job
+
+Invoking a batch endpoint triggers a batch scoring job. The invoke response returns a job `name` that can be used to track the batch scoring progress. The batch scoring job runs for a period of time. It splits the entire input into multiple mini-batches and processes them in parallel on the compute cluster. The batch scoring job outputs are stored in cloud storage, either in the workspace's default blob storage or in the storage you specified.
+
+# [Azure CLI](#tab/azure-cli)
+
+
+# [Python](#tab/python)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ inputs=Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+)
+```
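+
+The invoke call returns the created job object. As noted above, the job's `name` can be used to track the scoring run. A minimal sketch, reusing the same `ml_client` from earlier:
+
+```python
+# The invoke response includes the job name, which identifies the scoring run.
+print(job.name)
+
+# The job can be retrieved later by name to check its progress.
+scoring_job = ml_client.jobs.get(name=job.name)
+print(scoring_job.status)
+```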
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
+
+1. Click on __Next__.
+1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and, in the section __Path__, enter the full URL `https://pipelinedata.blob.core.windows.net/sampledata/mnist`.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
+
+1. Start the job.
+++
+### Configure job's inputs
+
+Batch endpoints support reading files or folders that are located in different locations. To learn more about the supported types and how to specify them, read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+
+> [!TIP]
+> Local data folders/files can be used when executing batch endpoints from the Azure ML CLI or Azure ML SDK for Python. However, that operation results in the local data being uploaded to the default Azure Machine Learning data store of the workspace you are working on.
+
+> [!IMPORTANT]
+> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work, but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) won't support V1 datasets.
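+
+As an illustration of the input types mentioned above, the following is a minimal sketch using the same `Input` and `AssetTypes` classes used elsewhere in this article. The datastore file path shown is a placeholder for illustration only:
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# A folder of files, as used in the MNIST example in this article
+folder_input = Input(
+    path="https://pipelinedata.blob.core.windows.net/sampledata/mnist",
+    type=AssetTypes.URI_FOLDER,
+)
+
+# A single file on a registered datastore (placeholder path)
+file_input = Input(
+    path="azureml://datastores/workspaceblobstore/paths/<path-to-file>/sample.png",
+    type=AssetTypes.URI_FILE,
+)
+```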
+
+### Configure the output location
+
+By default, the batch scoring results are stored in the workspace's default blob store within a folder named after the job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.
+
+# [Azure CLI](#tab/azure-cli)
+
+Use `--output-path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for `--output-path` is the same as for `--input` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `--set output_file_name=<your-file-name>` to configure a new output file name.
++
+# [Python](#tab/python)
+
+Use `output_path` to configure any folder in an Azure Machine Learning registered datastore. The syntax is the same as for inputs when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `output_file_name=<your-file-name>` to configure a new output file name.
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ inputs={
+ "input": Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+ },
+ output_path={
+ "score": Input(path=f"azureml://datastores/workspaceblobstore/paths/{endpoint_name}")
+ },
+ output_file_name="predictions.csv"
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+1. Click on __Next__.
+1. Check the option __Override deployment settings__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+
+1. You can now configure __Output file name__ and some extra properties of the deployment execution. Only this execution will be affected.
+1. On __Select data source__, select the data input you want to use.
+1. On __Configure output location__, check the option __Enable output configuration__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
+
+1. Configure the __Blob datastore__ where the outputs should be placed.
+++
+> [!WARNING]
+> You must use a unique output location. If the output file exists, the batch scoring job will fail.
+
+> [!IMPORTANT]
+> Unlike inputs, only Azure Machine Learning data stores running on blob storage accounts are supported for outputs.
+
+## Overwrite deployment configuration per each job
+
+Some settings can be overwritten at invoke time to make the best use of the compute resources and to improve performance. The following settings can be configured on a per-job basis:
+
+* Use __instance count__ to overwrite the number of instances to request from the compute cluster. For example, for a larger volume of data inputs, you may want to use more instances to speed up the end to end batch scoring.
+* Use __mini-batch size__ to overwrite the number of files to include in each mini-batch. The number of mini batches is determined by the total input file count and mini_batch_size. A smaller mini_batch_size generates more mini batches. Mini batches can be run in parallel, but there might be extra scheduling and invocation overhead.
+* Other settings that can be overwritten include __max retries__, __timeout__, and __error threshold__. These settings might impact the end to end batch scoring time for different workloads.
+
+# [Azure CLI](#tab/azure-cli)
++
+# [Python](#tab/python)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ input=Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist"),
+ params_override=[
+ { "mini_batch_size": "20" },
+ { "compute.instance_count": "5" }
+ ],
+)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+1. Click on __Next__.
+1. Check the option __Override deployment settings__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+
+1. Configure the job parameters. Only the current job execution will be affected by this configuration.
+++
+### Monitor batch scoring job execution progress
+
+Batch scoring jobs usually take some time to process the entire set of inputs.
+
+# [Azure CLI](#tab/azure-cli)
+
+You can use CLI `job show` to view the job. Run the following code to check job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`.
++
+# [Python](#tab/python)
+
+The following code checks the job status and outputs a link to the Azure ML studio for further details.
+
+```python
+ml_client.jobs.get(job.name)
+```
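+
+If you prefer to block until the job finishes, a minimal sketch that streams the job logs to the console until the run reaches a terminal state is:
+
+```python
+# Streams the run logs and waits for the job to complete.
+ml_client.jobs.stream(name=job.name)
+```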
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to monitor.
+1. Click on the tab __Jobs__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+
+1. You will see a list of the jobs created for the selected endpoint.
+1. Select the last job that is running.
+1. You will be redirected to the job monitoring page.
+++
+### Check batch scoring results
+
+Follow the steps below to view the scoring results in Azure Storage Explorer when the job is completed:
+
+1. Run the following code to open the batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="show_job_in_studio" :::
+
+1. In the graph of the job, select the `batchscoring` step.
+1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
+1. From __Data outputs__, select the icon to open __Storage Explorer__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
+
+ The scoring results in Storage Explorer are similar to the following sample page:
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="media/how-to-use-batch-endpoint/scoring-view.png":::
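+
+If you're working from the Python SDK instead, a minimal sketch for downloading the scoring results locally, assuming the job output is registered under the name `score` as in the `output_path` example earlier, is:
+
+```python
+# Downloads the named output of the batch scoring job to the local folder ./results
+ml_client.jobs.download(name=job.name, output_name="score", download_path="./results")
+```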
+
+## Adding deployments to an endpoint
+
+Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments don't affect one another.
+
+In this example, you will learn how to add a second deployment __that solves the same MNIST problem but uses a model built with Keras and TensorFlow__.
+
+### Adding a second deployment
+
+1. Create an environment where your batch deployment will run. Include in the environment any dependencies your code requires to run. You also need to add the library `azureml-core`, as it is required for batch deployments to work. The following environment definition includes the required libraries to run a model with TensorFlow.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ *No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file as an anonymous environment.*
+
+ # [Python](#tab/python)
+
+ Let's get a reference to the environment:
+
+ ```python
+ env = Environment(
+ conda_file="./mnist-keras/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Environments__ tab on the side menu.
+ 1. Select the tab __Custom environments__ > __Create__.
+ 1. Enter the name of the environment, in this case `keras-batch-env`.
+ 1. On __Select environment type__ select __Use existing docker image with conda__.
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+ 1. In the __Customize__ section, copy the content of the file `./mnist-keras/environment/conda.yml` (included in the repository) into the portal. The conda file looks as follows:
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras/environment/conda.yml":::
+
+ 1. Click on __Next__ and then on __Create__.
+ 1. The environment is ready to be used.
+
+
+
+ > [!WARNING]
+ > Curated environments are not supported in batch deployments. You need to specify your own environment. You can always use the base image of a curated environment as your own to simplify the process.
+
+ > [!IMPORTANT]
+ > Do not forget to include the library `azureml-core` in your deployment as it is required by the executor.
+
+1. Create a scoring script for the model:
+
+ __batch_driver.py__
+
+ :::code language="python" source="~/azureml-examples-main/sdk/python/endpoints/batch/mnist-keras/code/batch_driver.py" :::
+
+1. Create a deployment definition:
+
+ # [Azure CLI](#tab/azure-cli)
+
+ __mnist-keras-deployment__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras-deployment.yml":::
+
+ # [Python](#tab/python)
+
+ ```python
+ deployment = BatchDeployment(
+ name="non-mlflow-deployment",
+ description="this is a sample non-mlflow deployment",
+ endpoint_name=batch_endpoint_name,
+ model=model,
+ code_path="./mnist-keras/code/",
+ scoring_script="batch_driver.py",
+ environment=env,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
+ logging_level="info",
+ )
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__.
+ 1. Select the existing batch endpoint where you want to add the deployment.
+ 1. Click on __Add deployment__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
+
+ 1. On the model list, select the model `mnist` and click on __Next__.
+ 1. On the deployment configuration page, give the deployment a name.
+ 1. On __Output action__, ensure __Append row__ is selected.
+ 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+ 1. On __Mini batch size__, adjust the number of files that will be included in each mini-batch. This controls the amount of data your scoring script receives per batch.
+ 1. On __Scoring timeout (seconds)__, make sure you're giving your deployment enough time to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+ 1. On __Max concurrency per instance__, configure the number of executors you want per compute instance in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+ 1. Once done, click on __Next__.
+ 1. On environment, go to __Select scoring file and dependencies__ and click on __Browse__.
+ 1. Select the scoring script file on `/mnist-keras/code/batch_driver.py`.
+ 1. On the section __Choose an environment__, select the environment you created in a previous step.
+ 1. Click on __Next__.
+ 1. On the section __Compute__, select the compute cluster you created in a previous step.
+ 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we will use 2.
+ 1. Click on __Next__.
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Run the following code to create a batch deployment under the batch endpoint. This time, the deployment isn't set as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_new_deployment_not_default" :::
+
+ > [!TIP]
+ > The `--set-default` parameter is missing in this case. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later.
+
+ # [Python](#tab/python)
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+ # [Studio](#tab/azure-studio)
+
+ In the wizard, click on __Create__ to start the deployment process.
++
+### Test a non-default batch deployment
+
+To test the new non-default deployment, you will need to know the name of the deployment you want to run.
+
+# [Azure CLI](#tab/azure-cli)
++
+Notice `--deployment-name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
+
+# [Python](#tab/python)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ deployment_name=deployment.name,
+ endpoint_name=endpoint.name,
+ input=input,
+)
+```
+
+Notice `deployment_name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+1. On __Deployment__, select the deployment you want to execute. In this case, `mnist-keras`.
+1. Complete the job creation wizard to get the job started.
+++
+### Update the default batch deployment
+
+Although you can invoke a specific deployment inside an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is called the "default" deployment. This gives you the possibility of changing the default deployment, and hence changing the model serving the deployment, without changing the contract with the user invoking the endpoint. Use the following instructions to update the default deployment:
+
+# [Azure CLI](#tab/azure-cli)
++
+# [Python](#tab/python)
+
+```python
+endpoint = ml_client.batch_endpoints.get(endpoint_name)
+endpoint.defaults.deployment_name = deployment.name
+ml_client.batch_endpoints.begin_create_or_update(endpoint)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to configure.
+1. Click on __Update default deployment__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
+
+1. On __Select default deployment__, select the name of the deployment you want to be the default one.
+1. Click on __Update__.
+1. The selected deployment is now the default one.
+++
+## Delete the batch endpoint and the deployment
+
+# [Azure CLI](#tab/azure-cli)
+
+If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion.
++
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
++
+# [Python](#tab/python)
+
+Delete endpoint:
+
+```python
+ml_client.batch_endpoints.begin_delete(name=batch_endpoint_name)
+```
+
+Delete compute: optional, as you may choose to reuse your compute cluster with later deployments.
+
+```python
+ml_client.compute.begin_delete(name=compute_name)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to delete.
+1. Click on __Delete__.
+1. The endpoint, along with all its deployments, will be deleted.
+1. Notice that this won't affect the compute cluster where the deployment(s) run.
+++
+## Next steps
+
+* [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+* [Authentication on batch endpoints](how-to-authenticate-batch-endpoint.md).
+* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md).
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid-batch.md
+
+ Title: "Invoking batch endpoints from Event Grid events in storage"
+
+description: Learn how to use batch endpoints to be automatically triggered when new files are generated in storage.
++++++ Last updated : 10/10/2022++++
+# Invoking batch endpoints from Event Grid events in storage
++
+Event Grid is a fully managed service that enables you to easily manage events across many different Azure services and applications. It simplifies building event-driven and serverless applications. In this tutorial, you learn how to create a Logic App that subscribes to the Event Grid event associated with new files being created in a storage account and triggers a batch endpoint to process the file.
+
+The workflow will work in the following way:
+
+1. It will be triggered when a new blob is created in a specific storage account.
+2. Since the storage account can contain multiple data assets, event filtering will be applied to only react to events happening in a specific folder inside of it. Further filtering can be done if needed.
+3. It will get an authorization token to invoke batch endpoints using the credentials from a Service Principal.
+4. It will trigger the batch endpoint (default deployment) using the newly created file as input.
+
+> [!IMPORTANT]
+> When using a Logic App connected with Event Grid to invoke batch deployments, a job is generated for each file that triggers the *blob created* event. However, keep in mind that batch deployments distribute the work at the file level. Since this execution specifies only one file, there won't be any parallelization happening in the deployment. Instead, you'll be taking advantage of the capability of batch deployments to execute multiple scoring jobs on the same compute cluster. If you need to run jobs on entire folders in an automatic fashion, we recommend switching to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* This example assumes that your batch deployment runs in a compute cluster called `cpu-cluster`.
+* The Logic App we are creating will communicate with Azure Machine Learning batch endpoints using REST. To know more about how to use the REST API of batch endpoints read [Deploy models with REST for batch scoring](how-to-deploy-batch-with-rest.md).
+
+## Authenticating against batch endpoints
+
+Azure Logic Apps can invoke the REST APIs of batch endpoints by using the [HTTP](../connectors/connectors-native-http.md) activity. Batch endpoints support Azure Active Directory for authorization and hence the request made to the APIs require a proper authentication handling.
+
+We recommend using a service principal for authentication and interaction with batch endpoints in this scenario. A minimal sketch of the corresponding token request is shown after the following steps.
+
+1. Create a service principal following the steps at [Register an application with Azure AD and create a service principal](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Take note of the `client secret` generated.
+1. Take note of the `client ID` and the `tenant id` as explained at [Get tenant and app ID values for signing in](../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
+1. Grant access for the service principal you created to your workspace as explained at [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example the service principal will require:
+
+ 1. Permission in the workspace to read batch deployments and perform actions over them.
+ 1. Permissions to read/write in data stores.
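+
+As a sketch of the token request the Logic App performs later in this article, using the values noted in the previous steps (all placeholders), the call looks like this in Python:
+
+```python
+import requests
+
+# Placeholders: use the tenant ID, client ID, and client secret noted in the steps above.
+tenant_id = "<tenant-id>"
+client_id = "<client-id>"
+client_secret = "<client-secret>"
+
+token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
+payload = {
+    "grant_type": "client_credentials",
+    "client_id": client_id,
+    "client_secret": client_secret,
+    "resource": "https://ml.azure.com",
+}
+
+# The returned access token authorizes calls to the batch endpoint REST API.
+response = requests.post(token_url, data=payload)
+access_token = response.json()["access_token"]
+```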
+
+## Enabling data access
+
+We will be using the cloud URIs provided by Event Grid to indicate the input data to send to the deployment job. For external storage accounts, batch deployments use the identity of the compute to mount the data, and the identity of the job to read it once mounted. You need to assign a user-assigned managed identity to the compute cluster to ensure it has access to mount the underlying data. Follow these steps to ensure data access:
+
+1. Create a [managed identity resource](../active-directory/managed-identities-azure-resources/overview.md):
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ IDENTITY=$(az identity create -n azureml-cpu-cluster-idn --query id -o tsv)
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ # Use the Azure CLI to create the managed identity. Then copy the value of the variable IDENTITY into a Python variable
+ identity="/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/azureml-cpu-cluster-idn"
+ ```
+
+1. Update the compute cluster to use the managed identity we created:
+
+ > [!NOTE]
+ > This example assumes you have a compute cluster named `cpu-cluster` and that it is used for the default deployment in the endpoint.
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml compute update --name cpu-cluster --identity-type user_assigned --user-assigned-identities $IDENTITY
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import AmlCompute, ManagedIdentityConfiguration
+ from azure.ai.ml.constants import ManagedServiceIdentityType
+
+ compute_name = "cpu-cluster"
+ compute_cluster = ml_client.compute.get(name=compute_name)
+
+ compute_cluster.identity.type = ManagedServiceIdentityType.USER_ASSIGNED
+ compute_cluster.identity.user_assigned_identities = [
+ ManagedIdentityConfiguration(resource_id=identity)
+ ]
+
+ ml_client.compute.begin_create_or_update(compute_cluster)
+ ```
+
+1. Go to the [Azure portal](https://portal.azure.com) and ensure the managed identity has the right permissions to read the data. To access storage services, you must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
+
+## Create a Logic App
+
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account.
+
+1. On the Azure home page, select **Create a resource**.
+
+1. On the Azure Marketplace menu, select **Integration** > **Logic App**.
+
+ ![Screenshot that shows Azure Marketplace menu with "Integration" and "Logic App" selected.](../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/create-new-logic-app-resource.png)
+
+1. On the **Create Logic App** pane, on the **Basics** tab, provide the following information about your logic app resource.
+
+ ![Screenshot showing Azure portal, logic app creation pane, and info for new logic app resource.](../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/create-logic-app-settings.png)
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. This example uses **Pay-As-You-Go**. |
+ | **Resource Group** | Yes | **LA-TravelTime-RG** | The [Azure resource group](../azure-resource-manager/management/overview.md) where you create your logic app resource and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+ | **Name** | Yes | **LA-TravelTime** | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+
+1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Consumption** to show only the settings for a Consumption logic app workflow, which runs in multi-tenant Azure Logic Apps.
+
+ The **Plan type** property also specifies the billing model to use.
+
+ | Plan type | Description |
+ |--|-|
+ | **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](../logic-apps/logic-apps-pricing.md#standard-pricing). |
+ | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). |
+
+1. Now continue with the following selections:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Region** | Yes | **West US** | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <br><br>**Note**: If your subscription is associated with an integration service environment, this list includes those environments. |
+ | **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. Change this option only when you want to enable diagnostic logging. For this tutorial, keep the default selection. |
+
+1. When you're done, select **Review + create**. After Azure validates the information about your logic app resource, select **Create**.
+
+1. After Azure deploys your app, select **Go to resource**.
+
+ Azure opens the workflow template selection pane, which shows an introduction video, commonly used triggers, and workflow template patterns.
+
+1. Scroll down past the video and common triggers sections to the **Templates** section, and select **Blank Logic App**.
+
+ ![Screenshot that shows the workflow template selection pane with "Blank Logic App" selected.](../logic-apps/media/tutorial-build-scheduled-recurring-logic-app-workflow/select-logic-app-template.png)
++
+## Configure the workflow parameters
+
+This Logic App will use parameters to store specific pieces of information that you will need to run the batch deployment.
+
+1. On the workflow designer, under the tool bar, select the option __Parameters__ and configure them as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/parameters.png" alt-text="Screenshot of all the parameters required in the workflow.":::
+
+1. To create a parameter, use the __Add parameter__ option:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/parameter.png" alt-text="Screenshot showing how to add one parameter in designer.":::
+
+1. Create the following parameters.
+
+ | Parameter | Description | Sample value |
+ | | -|- |
+ | `tenant_id` | Tenant ID where the endpoint is deployed | `00000000-0000-0000-00000000` |
+ | `client_id` | The client ID of the service principal used to invoke the endpoint | `00000000-0000-0000-00000000` |
+ | `client_secret` | The client secret of the service principal used to invoke the endpoint | `ABCDEFGhijkLMNOPQRstUVwz` |
+ | `endpoint_uri` | The endpoint scoring URI | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
+
+ > [!IMPORTANT]
+ > `endpoint_uri` is the URI of the endpoint you are trying to execute. The endpoint must have a default deployment configured.
+
+ > [!TIP]
+ > Use the values configured at [Authenticating against batch endpoints](#authenticating-against-batch-endpoints).
+
+## Add the trigger
+
+We want to trigger the Logic App each time a new file is created in a given folder (data asset) of a Storage Account. The Logic App also uses the information from the event to invoke the batch endpoint, passing the specific file to be processed.
+
+1. On the workflow designer, under the search box, select **Built-in**.
+
+1. In the search box, enter **event grid**, and select the trigger named **When a resource event occurs**.
+
+1. Configure the trigger as follows:
+
+ | Property | Value | Description |
+ |-|-|-|
+ | **Subscription** | Your subscription name | The subscription where the Azure Storage Account is placed. |
+ | **Resource Type** | `Microsoft.Storage.StorageAccounts` | The resource type emitting the events. |
+ | **Resource Name** | Your storage account name | The name of the Storage Account where the files will be generated. |
+ | **Event Type Item** | `Microsoft.Storage.BlobCreated` | The event type. |
+
+1. Click on __Add new parameter__ and select __Prefix Filter__. Add the value `/blobServices/default/containers/<container_name>/blobs/<path_to_data_folder>`.
+
+ > [!IMPORTANT]
+ > __Prefix Filter__ allows Event Grid to only notify the workflow when a blob is created in the specific path we indicated. In this case, we are assuming that files will be created by some external process in the folder `<path_to_data_folder>` inside the container `<container_name>` in the selected Storage Account. Configure this parameter to match the location of your data. Otherwise, the event will be fired for any file created at any location of the Storage Account. See [Event filtering for Event Grid](../event-grid/event-filtering.md) for more details.
+
+ The trigger will look as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/create-trigger.png" alt-text="Screenshot of the trigger activity of the Logic App.":::
+
+## Configure the actions
+
+1. Click on __+ New step__.
+
+1. On the workflow designer, under the search box, select **Built-in** and then click on __HTTP__:
+
+1. Configure the action as follows:
+
+ | Property | Value | Notes |
+ |-|-|-|
+ | **Method** | `POST` | The HTTP method |
+ | **URI** | `concat('https://login.microsoftonline.com/', parameters('tenant_id'), '/oauth2/token')` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
+ | **Headers** | `Content-Type` with value `application/x-www-form-urlencoded` | |
+ | **Body** | `concat('grant_type=client_credentials&client_id=', parameters('client_id'), '&client_secret=', parameters('client_secret'), '&resource=https://ml.azure.com')` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
+
+ The action will look as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/authorize.png" alt-text="Screenshot of the authorize activity of the Logic App.":::
+
+1. Click on __+ New step__.
+
+1. On the workflow designer, under the search box, select **Built-in** and then click on __HTTP__:
+
+1. Configure the action as follows:
+
+ | Property | Value | Notes |
+ |-|-|-|
+ | **Method** | `POST` | The HTTP method |
+ | **URI** | `endpoint_uri` | Click on __Add dynamic context__, then select it under `parameters`. |
+ | **Headers** | `Content-Type` with value `application/json` | |
+ | **Headers** | `Authorization` with value `concat('Bearer ', body('Authorize')['access_token'])` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
+
+1. In the parameter __Body__, click on __Add dynamic context__, then __Expression__, to enter the following expression:
+
+ ```fx
+ replace('{
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFile",
+ "Uri" : "<JOB_INPUT_URI>"
+ }
+ }
+ }
+ }', '<JOB_INPUT_URI>', triggerBody()?[0]['data']['url'])
+ ```
+
+ The action will look as follows:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/invoke.png" alt-text="Screenshot of the invoke activity of the Logic App.":::
+
+ > [!NOTE]
+ > Notice that this last action will trigger the batch deployment job, but it will not wait for its completion. Azure Logic Apps is not designed for long-running applications. If you need to wait for the job to complete, we recommend switching to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+
+1. Click on __Save__.
+
+1. The Logic App is ready to be executed, and it will trigger automatically each time a new file is created under the indicated path. You can confirm that the app has successfully received the event by checking its __Run history__:
+
+ :::image type="content" source="./media/how-to-use-event-grid-batch/invoke-history.png" alt-text="Screenshot of the invoke history of the Logic App.":::
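+
+For reference only, the two HTTP actions configured above are roughly equivalent to the following minimal Python sketch. The endpoint URI, access token, and blob URL are placeholders:
+
+```python
+import requests
+
+# Placeholders: the scoring URI stored in the `endpoint_uri` parameter and a token from the authorize action.
+endpoint_uri = "https://<endpoint_name>.<region>.inference.ml.azure.com/jobs"
+access_token = "<access-token-from-authorize-action>"
+
+headers = {
+    "Content-Type": "application/json",
+    "Authorization": f"Bearer {access_token}",
+}
+
+# Mirrors the body expression used in the Logic App: the blob URL from the event is passed as a UriFile input.
+body = {
+    "properties": {
+        "InputData": {
+            "mnistinput": {
+                "JobInputType": "UriFile",
+                "Uri": "<JOB_INPUT_URI>",
+            }
+        }
+    }
+}
+
+response = requests.post(endpoint_uri, headers=headers, json=body)
+print(response.status_code)
+```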
+
+## Next steps
+
+* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md)
machine-learning How To Use Low Priority Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-low-priority-batch.md
+
+ Title: "Using low priority VMs in batch deployments"
+
+description: Learn how to use low priority VMs to save costs when running batch jobs.
++++++ Last updated : 10/10/2022++++
+# Using low priority VMs in batch deployments
++
+Azure Machine Learning batch deployments support low priority VMs to reduce the cost of batch inference workloads. Low priority VMs enable a large amount of compute power to be used for a low cost. Low priority VMs take advantage of surplus capacity in Azure. When you specify low priority VMs in your pools, Azure can use this surplus, when available.
+
+The tradeoff for using them is that those VMs may not always be available to be allocated, or may be preempted at any time, depending on available capacity. For this reason, __they are most suitable for batch and asynchronous processing workloads__ where the job completion time is flexible and the work is distributed across many VMs.
+
+Low priority VMs are offered at a significantly reduced price compared with dedicated VMs. For pricing details, see [Azure Machine Learning pricing](https://azure.microsoft.com/pricing/details/machine-learning/).
+
+## How batch deployment works with low priority VMs
+
+Azure Machine Learning Batch Deployments provides several capabilities that make it easy to consume and benefit from low priority VMs:
+
+- Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs. Once a deployment is associated with a low priority VM cluster, all the jobs produced by that deployment will use low priority VMs. Per-job configuration isn't possible.
+- Batch deployment jobs automatically seek the target number of VMs in the available compute cluster based on the number of tasks to submit. If VMs are preempted or unavailable, batch deployment jobs attempt to replace the lost capacity by queuing the failed tasks to the cluster.
+- When a job is interrupted, it is resubmitted to run again. Rescheduling is done at the mini batch level, regardless of the progress. No checkpointing capability is provided.
+- Low priority VMs have a separate vCPU quota that differs from the one for dedicated VMs. Low-priority cores per region have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families. See [Azure Machine Learning compute quotas](how-to-manage-quotas.md#azure-machine-learning-compute).
+
+## Considerations and use cases
+
+Many batch workloads are a good fit for low priority VMs. However, using them may introduce further execution delays when deallocation of VMs occurs. If there is flexibility in the time jobs have to complete, potential drops in capacity can be tolerated in exchange for running at a lower cost.
+
+## Creating batch deployments with low priority VMs
+
+Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs.
+
+> [!NOTE]
+> Once a deployment is associated with a low priority VM cluster, all the jobs produced by that deployment will use low priority VMs. Per-job configuration isn't possible.
+
+You can create a low priority Azure Machine Learning compute cluster as follows:
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a compute definition `YAML` like the following one:
+
+ __low-pri-cluster.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
+ name: low-pri-cluster
+ type: amlcompute
+ size: STANDARD_DS3_v2
+ min_instances: 0
+ max_instances: 2
+ idle_time_before_scale_down: 120
+ tier: low_priority
+ ```
+
+ Create the compute using the following command:
+
+ ```bash
+ az ml compute create -f low-pri-cluster.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create a new compute cluster with low priority VMs on which to run the deployment, use the following script:
+
+ ```python
+ # Assumes an MLClient instance named `ml_client` is already configured.
+ from azure.ai.ml.entities import AmlCompute
+
+ compute_name = "low-pri-cluster"
+ compute_cluster = AmlCompute(
+ name=compute_name,
+ description="Low priority compute cluster",
+ min_instances=0,
+ max_instances=2,
+ tier='LowPriority'
+ )
+
+ ml_client.begin_create_or_update(compute_cluster)
+ ```
+
+
+
+Once you have the new compute created, you can create or update your deployment to use the new cluster:
+
+ # [Azure ML CLI](#tab/cli)
+
+ To create or update a deployment under the new compute cluster, create a `YAML` configuration like the following, for example in a file named `deployment.yml`:
+
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
+ endpoint_name: heart-classifier-batch
+ name: classifier-xgboost
+ description: A heart condition classifier based on XGBoost
+ model: azureml:heart-classifier@latest
+ compute: azureml:low-pri-cluster
+ resources:
+ instance_count: 2
+ max_concurrency_per_instance: 2
+ mini_batch_size: 2
+ output_action: append_row
+ output_file_name: predictions.csv
+ retry_settings:
+ max_retries: 3
+ timeout: 300
+ error_threshold: -1
+ logging_level: info
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```bash
+ az ml batch-deployment create -f deployment.yml --endpoint-name heart-classifier-batch
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To create or update a deployment under the new compute cluster, use the following script:
+
+ ```python
+ from azure.ai.ml.entities import BatchDeployment, BatchRetrySettings
+ from azure.ai.ml.constants import BatchDeploymentOutputAction
+
+ # `endpoint`, `model`, and `ml_client` are assumed to have been created earlier.
+ deployment = BatchDeployment(
+ name="classifier-xgboost",
+ description="A heart condition classifier based on XGBoost",
+ endpoint_name=endpoint.name,
+ model=model,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=2,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
+ logging_level="info",
+ )
+
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+
+## View and monitor node deallocation
+
+New metrics are available in the [Azure portal](https://portal.azure.com) to monitor low priority VMs. These metrics are:
+
+- Preempted nodes
+- Preempted cores
+
+To view these metrics in the Azure portal:
+
+1. Navigate to your Azure Machine Learning workspace in the [Azure portal](https://portal.azure.com).
+2. Select **Metrics** from the **Monitoring** section.
+3. Select the metrics you desire from the **Metric** list.
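+
+If you prefer to query these metrics programmatically, the following is a minimal sketch using the `azure-monitor-query` package. The workspace resource ID is a placeholder, and the metric names are assumed to match the display names listed above; verify them against the metrics list in the portal:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import MetricsQueryClient
+
+# Placeholder: the full resource ID of your Azure Machine Learning workspace.
+workspace_id = (
+    "/subscriptions/<subscription>/resourceGroups/<resource-group>"
+    "/providers/Microsoft.MachineLearningServices/workspaces/<workspace>"
+)
+
+client = MetricsQueryClient(DefaultAzureCredential())
+
+# Assumption: the metric names match the display names shown in the portal.
+response = client.query_resource(workspace_id, metric_names=["Preempted Nodes", "Preempted Cores"])
+
+for metric in response.metrics:
+    for series in metric.timeseries:
+        for point in series.data:
+            print(metric.name, point.timestamp, point.total)
+```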
++
+## Limitations
+
+- Once a deployment is associated with a low priority VM cluster, all the jobs produced by that deployment will use low priority VMs. Per-job configuration isn't possible.
+- Rescheduling is done at the mini-batch level, regardless of the progress. No checkpointing capability is provided.
+
+> [!WARNING]
+> In cases where the entire cluster is preempted (or when running on a single-node cluster), the job will be cancelled, as there is no capacity available for it to run. Resubmitting the job will be required in this case.
++
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
For more information, see the documentation here:
* [steps in SDK v1](/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py&preserve-view=true) * [Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md)
-* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/using-pipelines/image-classification.ipynb)
+* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/using-pipelines/image-classification.ipynb)
* [OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py&preserve-view=true) * [`mldesigner`](https://pypi.org/project/mldesigner/)
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-your-marketplace-benefits.md
description: Go-To-Market Services - Describes Microsoft resources that publishe
Previously updated : 08/31/2022 Last updated : 11/16/2022
Once your offer is live on Microsoft AppSource or Azure Marketplace, go to [Part
> [!NOTE] > Currencies ($) referenced in images in this article are Marketplace Reward benefit tiers, which are based on cumulative billed sales or seats sold through Microsoft AppSource and Azure Marketplace.
-## Marketplace Rewards
+## ISV Success program and Marketplace rewards
+
+Microsoft continues its strong commitment to the growth and success of ISVs and supporting them throughout the entire journey of building, publishing, and selling apps through the Microsoft commercial marketplace. To further this mission, Marketplace Rewards is now included in the ISV Success program, available at no cost to all participants of the program.
Marketplace Rewards is designed to support you at your specific stage of growth, starting with awareness activities to help you get your first customers. As you grow through the commercial marketplace, you unlock new benefits designed to help you convert customers and close deals.
Each time you publish on Microsoft AppSource or Azure Marketplace, you will have
The table below summarizes the eligibility requirements for list, trial, and consulting offers:
-[![Go-To-Market benefits](media/marketplace-publishers-guide/go-to-market-gtm-eligibility-requirements.png)](media/marketplace-publishers-guide/go-to-market-gtm-eligibility-requirements.png#lightbox)
Detailed descriptions for all these benefits can be found in the [Marketplace Rewards program deck](https://aka.ms/marketplacerewards).
All partners who have a live transactable offer get to work with a dedicated eng
### Marketing benefits for transact offers
-[![Marketing benefits](media/marketplace-publishers-guide/marketing-benefit.png)](media/marketplace-publishers-guide/marketing-benefit.png#lightbox)
### Sales benefits for transact offers
-[![Sales benefits](media/marketplace-publishers-guide/sales-benefit.png)](media/marketplace-publishers-guide/sales-benefit.png#lightbox)
### Technical benefits for transact offers
-[![Technical benefits](media/marketplace-publishers-guide/technical-benefit.png)](media/marketplace-publishers-guide/technical-benefit.png#lightbox)
Detailed descriptions for all these benefits can be found in the [Marketplace Rewards program deck](https://aka.ms/marketplacerewards).
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-audit-logs.md
Title: Audit logs - Azure Database for MySQL - Flexible Server description: Describes the audit logs available in Azure Database for MySQL Flexible Server.+++ Last updated : 11/21/2022 -- Previously updated : 9/21/2020 # Track database activity with Audit Logs in Azure Database for MySQL Flexible Server
Azure Database for MySQL Flexible Server provides users with the ability to conf
## Configure audit logging
->[!IMPORTANT]
+> [!IMPORTANT]
> It is recommended to only log the event types and users required for your auditing purposes to ensure your server's performance is not heavily impacted and the minimum amount of data is collected.
-By default, audit logs are disabled. To enable them, set the `audit_log_enabled` server parameter to *ON*. This can be configured using the Azure portal or Azure CLI <!-- add link to server parameter-->.
+By default, audit logs are disabled. To enable them, set the `audit_log_enabled` server parameter to *ON*. This can be configured using the Azure portal or Azure CLI.
Other parameters you can adjust to control audit logging behavior include:
Other parameters you can adjust to control audit logging behavior include:
- `audit_log_include_users`: MySQL users to be included for logging. The default value for this parameter is empty, which will include all the users for logging. This has higher priority over `audit_log_exclude_users`. Max length of the parameter is 512 characters. - `audit_log_exclude_users`: MySQL users to be excluded from logging. Max length of the parameter is 512 characters.
-> [!NOTE]
+> [!NOTE]
> `audit_log_include_users` has higher priority over `audit_log_exclude_users`. For example, if `audit_log_include_users` = `demouser` and `audit_log_exclude_users` = `demouser`, the user will be included in the audit logs because `audit_log_include_users` has higher priority. | **Event** | **Description** |
-|||
-| `CONNECTION` | - Connection initiation (successful or unsuccessful) <br> - User re-authentication with different user/password during session <br> - Connection termination |
-| `DML_SELECT`| SELECT queries |
+| | |
+| `CONNECTION` | - Connection initiation (successful or unsuccessful)<br />- User reauthentication with different user/password during session<br />- Connection termination |
+| `DML_SELECT` | SELECT queries |
| `DML_NONSELECT` | INSERT/DELETE/UPDATE queries | | `DML` | DML = DML_SELECT + DML_NONSELECT | | `DDL` | Queries like "DROP DATABASE" | | `DCL` | Queries like "GRANT PERMISSION" | | `ADMIN` | Queries like "SHOW STATUS" | | `GENERAL` | All in DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN |
-| `TABLE_ACCESS` | - Table read statements, such as SELECT or INSERT INTO ... SELECT <br> - Table delete statements, such as DELETE or TRUNCATE TABLE <br> - Table insert statements, such as INSERT or REPLACE <br> - Table update statements, such as UPDATE |
+| `TABLE_ACCESS` | - Table read statements, such as SELECT or INSERT INTO ... SELECT<br/>- Table delete statements, such as DELETE or TRUNCATE TABLE<br/>- Table insert statements, such as INSERT or REPLACE<br/>- Table update statements, such as UPDATE |
## Access audit logs Audit logs are integrated with Azure Monitor diagnostic settings. Once you've enabled audit logs on your MySQL flexible server, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about diagnostic settings, see the [diagnostic logs documentation](../../azure-monitor/essentials/platform-logs-overview.md). To learn more about how to enable diagnostic settings in the Azure portal, see the [audit log portal article](tutorial-configure-audit.md#set-up-diagnostics).
->[!Note]
->Premium Storage accounts are not supported if you sending the logs to Azure storage via diagnostics and settings
+> [!NOTE]
+> Premium Storage accounts are not supported if you send the logs to Azure storage via diagnostics and settings.
The following sections describe the output of MySQL audit logs based on the event type. Depending on the output method, the fields included and the order in which they appear may vary. ### Connection | **Property** | **Description** |
-|||
+| | |
| `TenantId` | Your tenant ID | | `SourceSystem` | `Azure` | | `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
The following sections describe the output of MySQL audit logs based on the even
Schema below applies to GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and ADMIN event types.
-> [!NOTE]
+> [!NOTE]
> For `sql_text_s`, log will be truncated if it exceeds 2048 characters. | **Property** | **Description** |
-|||
+| | |
| `TenantId` | Your tenant ID | | `SourceSystem` | `Azure` | | `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
Schema below applies to GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and A
### Table access
-> [!NOTE]
+> [!NOTE]
> For `sql_text_s`, log will be truncated if it exceeds 2048 characters. | **Property** | **Description** |
-|||
+| | |
| `TenantId` | Your tenant ID | | `SourceSystem` | `Azure` | | `TimeGenerated [UTC]` | Time stamp when the log was recorded in UTC |
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
``` ## Next steps+ - Learn more about [slow query logs](concepts-slow-query-logs.md) - Configure [auditing](tutorial-query-performance-insights.md) <!-
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-azure-ad-authentication.md
description: Learn about the concepts of Azure Active Directory for authenticati
Previously updated : 09/21/2022 Last updated : 11/21/2022
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for MySQL Flexible server using identities defined in Azure AD. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
+Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for MySQL Flexible server using identities defined in Azure AD. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, simplifying permission management.
## Benefits -- Authentication of users across Azure Services in a uniform way -- Management of password policies and password rotation in a single place -- Multiple forms of authentication supported by Azure Active Directory, which can eliminate the need to store passwords -- Customers can manage database permissions using external (Azure AD) groups. -- Azure AD authentication uses MySQL database users to authenticate identities at the database level
+- Authentication of users across Azure Services in a uniform way
+- Management of password policies and password rotation in a single place
+- Multiple forms of authentication supported by Azure Active Directory, which can eliminate the need to store passwords
+- Customers can manage database permissions using external (Azure AD) groups.
+- Azure AD authentication uses MySQL database users to authenticate identities at the database level
- Support of token-based authentication for applications connecting to Azure Database for MySQL Flexible server ## Use the steps below to configure and use Azure AD authentication
-1. Select your preferred authentication method for accessing the MySQL flexible server. By default, the authentication selected will be MySQL authentication only. Select Azure Active Directory authentication only or MySQL and Azure Active Directory authentication to enabled Azure AD authentication.
-2. Select the user managed identity (UMI) with the following privileges to configure Azure AD authentication:
+1. Select your preferred authentication method for accessing the MySQL flexible server. By default, the authentication selected is set to MySQL authentication only. Select Azure Active Directory authentication only or MySQL and Azure Active Directory authentication to enable Azure AD authentication.
+1. Select the user managed identity (UMI) with the following privileges to configure Azure AD authentication:
- [User.Read.All](/graph/permissions-reference#user-permissions): Allows access to Azure AD user information. - [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information. - [Application.Read.ALL](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information.
-3. Add Azure AD Admin. It can be Azure AD Users or Groups, which will have access to Azure Database for MySQL flexible server.
-4. Create database users in your database mapped to Azure AD identities.
-5. Connect to your database by retrieving a token for an Azure AD identity and logging in.
+1. Add Azure AD Admin. It can be Azure AD Users or Groups, which have access to Azure Database for MySQL flexible server.
+1. Create database users in your database mapped to Azure AD identities.
+1. Connect to your database by retrieving a token for an Azure AD identity and logging in (a token retrieval sketch follows these steps).
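The token in step 5 can be obtained with any Azure AD client library. The following is a minimal sketch, assuming the `azure-identity` Python package and an identity that is already configured as an Azure AD database user; the token is then used as the password when signing in.

```python
# Minimal sketch of step 5: request an Azure AD access token to use as the MySQL password.
# Assumes azure-identity is installed and the signed-in identity (user, service principal,
# or managed identity) has been mapped to an Azure AD database user.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
# Scope used for Azure Database for MySQL/PostgreSQL Azure AD tokens.
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

print("token prefix:", token.token[:40], "...")   # pass token.token as the password
print("expires on (epoch seconds):", token.expires_on)
```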
-> [!Note]
+> [!NOTE]
> For detailed, step-by-step instructions about how to configure Azure AD authentication with Azure Database for MySQL flexible server, see [Learn how to set up Azure Active Directory authentication for Azure Database for MySQL flexible Server](how-to-azure-ad.md) ## Architecture
-User-managed identities are required for Azure Active Directory authentication. When a User-Assigned Identity is linked to the flexible server, the Managed Identity Resource Provider (MSRP) issues a certificate internally to that identity, and when the managed identity is deleted, the corresponding service principal is automatically removed.
+User-managed identities are required for Azure Active Directory authentication. When a User-Assigned Identity is linked to the flexible server, the Managed Identity Resource Provider (MSRP) issues a certificate internally to that identity. When the managed identity is deleted, the corresponding service principal is automatically removed.
-The service then uses the managed identity to request access tokens for services that support Azure AD authentication. Only a User-assigned Managed Identity (UMI) is currently supported by Azure Database for MySQL-Flexible Server. For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure.
+The service then uses the managed identity to request access tokens for services that support Azure AD authentication. Azure Database for MySQL - Flexible Server currently supports only a user-assigned managed identity (UMI). For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure.
The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for MySQL. The arrows indicate communication pathways.
-1. Your application can request a token from the Azure Instance Metadata Service identity endpoint.
-2. Using the client ID and certificate, a call is made to Azure AD to request an access token.
-3. A JSON Web Token (JWT) access token is returned by Azure AD. Your application sends the access token on a call to Azure Database for MySQL flexible server.
-4. MySQL flexible server validates the token with Azure AD.
+1. Your application can request a token from the Azure Instance Metadata Service identity endpoint.
+1. When you use the client ID and certificate, a call is made to Azure AD to request an access token.
+1. A JSON Web Token (JWT) access token is returned by Azure AD. Your application sends the access token on a call to Azure Database for MySQL flexible server.
+1. MySQL flexible server validates the token with Azure AD.
## Administrator structure
-
-When using Azure AD authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Azure AD administrator.
-Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL Flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server. Only one Azure AD administrator (a user or group) can be configured at a time.
+There are two Administrator accounts for the MySQL server when using Azure AD authentication: the original MySQL administrator and the Azure AD administrator.
+Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator sign-in can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server. Only one Azure AD administrator (a user or group) can be configured at a time.
-Methods of authentication for accessing the MySQL flexible server include:
-- MySQL Authentication only - This is the default option. This is the default option. Only native MySQL Authentication with a MySQL login and password will be used to access Azure Database for MySQL flexible server. -- Only Azure AD authentication - MySQL Native authentication will be disabled, and users will be able to authenticate using only their Azure AD user and token. To enable this mode, the server parameter **aad_auth_only** will be _enabled_. -- Authentication with MySQL and Azure AD - Both native MySQL authentication and Azure AD authentication are supported. To enable this mode, the server parameter **aad_auth_only** will be _disabled_. +
+Methods of authentication for accessing the MySQL flexible server include:
+- MySQL Authentication only - This is the default option. Only the native MySQL Authentication with a MySQL sign-in and password can be used to access Azure Database for MySQL flexible server.
+- Only Azure AD authentication - MySQL native authentication is disabled, and users are able to authenticate using only their Azure AD user and token. To enable this mode, the server parameter **aad_auth_only** is set to _enabled_.
+- Authentication with MySQL and Azure AD - Both native MySQL authentication and Azure AD authentication are supported. To enable this mode, the server parameter **aad_auth_only** is set to _disabled_.
## Permissions
-To allow the UMI to read from Microsoft Graph as the server identity, the following permissions are required. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
+The following permissions are required to allow the UMI to read from the Microsoft Graph as the server identity. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Only a [Global Administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or [Privileged Role Administrator](../../active-directory/roles/permissions-reference.md#privileged-role-administrator) can grant these permissions. - [User.Read.All](/graph/permissions-reference#user-permissions): Allows access to Azure AD user information. - [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information. - [Application.Read.ALL](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information.
-For guidance about how to grant and use the permissions, refer [Microsoft Graph permissions](/graph/permissions-reference)
+For guidance about how to grant and use the permissions, refer to [Microsoft Graph permissions](/graph/permissions-reference).
-After you grant the permissions to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a server identity.
+After you grant the permissions to the UMI, they're enabled for all servers or instances created with the UMI assigned as a server identity.
## Token Validation
-Azure AD authentication in Azure Database for MySQL flexible server ensures that the user exists in the MySQL server, and it checks the validity of the token by validating the contents of the token. The following token validation steps are performed:
+Azure AD authentication in Azure Database for MySQL flexible server ensures that the user exists in the MySQL server and checks the token's validity by validating the token's contents. The following token validation steps are performed:
-- Token is signed by Azure AD and has not been tampered with.-- Token was issued by Azure AD for the tenant associated with the server.-- Token has not expired.-- Token is for the Azure Database for MySQL flexible server resource (and not another Azure resource).
+- Token is signed by Azure AD and hasn't been tampered with.
+- Token was issued by Azure AD for the tenant associated with the server.
+- Token hasn't expired.
+- Token is for the Azure Database for MySQL flexible server resource (and not another Azure resource).
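If a sign-in fails, it can help to look at the claims the server checks. The sketch below is a client-side inspection only (no signature verification) and assumes a token string obtained as described later in this article.

```python
# Minimal sketch: decode a JWT access token payload (without verifying the signature)
# to inspect the claims the server validates: audience (aud), issuer/tenant (iss, tid),
# and expiry (exp). Signature validation itself is performed by the server.
import base64
import json

def jwt_claims(token: str) -> dict:
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)      # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

claims = jwt_claims("<access-token>")          # placeholder: token from azure-identity
print(claims.get("aud"), claims.get("iss"), claims.get("tid"), claims.get("exp"))
```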
-## Connecting using Azure AD identities
+## Connect using Azure AD identities
Azure Active Directory authentication supports the following methods of connecting to a database using Azure AD identities:
Azure Active Directory authentication supports the following methods of connecti
- Using Active Directory Application certificates or client secrets - Managed Identity
-Once you have authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in.
+Once you authenticate against the Active Directory, you retrieve a token. This token is your password for logging in.
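As an illustration, the following is a minimal sketch of connecting with a token, assuming the `azure-identity` and `mysql-connector-python` packages; the host name, user name, database, and CA bundle path are placeholders. Depending on your driver and version, you may also need to enable its cleartext authentication plugin, which should only be used over an encrypted connection.

```python
# Minimal sketch: use an Azure AD access token as the password over a TLS connection.
# Host, user, database, and CA path are placeholders.
import mysql.connector
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token(
    "https://ossrdbms-aad.database.windows.net/.default"
)

conn = mysql.connector.connect(
    host="<servername>.mysql.database.azure.com",
    user="<azure-ad-user@yourtenant.onmicrosoft.com>",  # Azure AD database user
    password=token.token,                               # the token is the password
    database="<database>",
    ssl_ca="<path-to-ca-certificate.pem>",              # assumption: CA bundle on disk
    ssl_verify_identity=True,
)
print("connected:", conn.is_connected())
conn.close()
```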
-Please note that management operations, such as adding new users, are only supported for Azure AD user roles at this point.
+> [!NOTE]
+> Management operations, such as adding new users, are only supported for Azure AD user roles at this point.
-> [!NOTE]
-> For more details on how to connect with an Active Directory token, see [Configure and sign in with Azure AD for Azure Database for MySQL flexible server](how-to-azure-ad.md).
+> [!NOTE]
+> For more information on how to connect with an Active Directory token, see [Configure and sign in with Azure AD for Azure Database for MySQL flexible server](how-to-azure-ad.md).
-## Additional considerations
+## Other considerations
-- Only one Azure AD administrator can be configured for an Azure Database for MySQL Flexible server at any time.
+- Only one Azure AD administrator can be configured for an Azure Database for MySQL Flexible server at any time.
-- Only an Azure AD administrator for MySQL can initially connect to the Azure Database for MySQL Flexible server using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL Flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server.
+- Only an Azure AD administrator for MySQL can initially connect to the Azure Database for MySQL Flexible server using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL Flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server.
-- If a user is deleted from Azure AD, that user will no longer be able to authenticate with Azure AD, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it will not be possible to connect to the server with that user.
+- If a user is deleted from Azure AD, that user can no longer authenticate with Azure AD. Therefore, acquiring an access token for that user is no longer possible. Although the matching user is still in the database, connecting to the server with that user isn't possible.
-> [!NOTE]
-> Login with the deleted Azure AD user can still be done till the token expires (up to 60 minutes from token issuing). If you also remove the user from Azure Database for MySQL this access will be revoked immediately.
+> [!NOTE]
+> Signing in with the deleted Azure AD user is still possible until the token expires (up to 60 minutes from token issuance). If you also remove the user from Azure Database for MySQL, this access is revoked immediately.
-- If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will re-enable Azure AD logins.
+- If the Azure AD admin is removed from the server, the server is no longer associated with an Azure AD tenant, and therefore all Azure AD logins are disabled for the server. Adding a new Azure AD admin from the same tenant re-enables Azure AD logins.
-- Azure Database for MySQL Flexible server matches access tokens to the Azure Database for MySQL user using the userΓÇÖs unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user created with the same name, Azure Database for MySQL considers that a different user. Therefore, if a user is deleted from Azure AD and then a new user with the same name added, the new user will not be able to connect with the existing user.
+- Azure Database for MySQL flexible server matches access tokens to Azure Database for MySQL users by using the user's unique Azure AD user ID instead of the username. If an Azure AD user is deleted and a new user is created with the same name, Azure Database for MySQL considers it a different user. Therefore, if a user is deleted from Azure AD and a new user with the same name is added, the new user can't connect as the existing database user.
## Next steps -- To learn how to configure Azure AD with Azure Database for MySQL, see [Set up Azure Active Directory authentication for Azure Database for MySQL flexible server](how-to-azure-ad.md)
+- To learn how to configure Azure AD with Azure Database for MySQL, see [Set up Azure Active Directory authentication for Azure Database for MySQL flexible server](how-to-azure-ad.md)
mysql Concepts Networking Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-public.md
Title: Public Network Access overview - Azure Database for MySQL Flexible Server description: Learn about public access networking option in the Flexible Server deployment option for Azure Database for MySQL+++ Last updated : 11/21/2022 -- Previously updated : 8/6/2021 # Public Network Access for Azure Database for MySQL - Flexible Server [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-This article describes public connectivity option for your server. You will learn in detail the concepts to create Azure Database for MySQL Flexible server accessible securely through internet.
+This article describes the public connectivity option for your server. You learn in detail the concepts to create an Azure Database for MySQL flexible server that is accessible securely through the internet.
## Public access (allowed IP addresses)
-Configuring public access on your flexible server will allow the server to be accessed through a public endpoint that is the server will be accessible through the internet. The public endpoint is a publicly resolvable DNS address. The phrase ΓÇ£allowed IP addressesΓÇ¥ refers to a range of IPs you choose to give permission to access your server. These permissions are called firewall rules.
+Configuring public access on your flexible server allows access to the server through a public endpoint. That is, the server is accessible through the internet. The public endpoint is a publicly resolvable DNS address. The phrase *allowed IP addresses* refers to a range of IPs you choose to permit to access your server. These permissions are called firewall rules.
Characteristics of the public access method include:
-* Only the IP addresses you allow have permission to access your MySQL flexible server. By default no IP addresses are allowed. You can add IP addresses during server creation or afterwards.
-* Your MySQL server has a publicly resolvable DNS name
-* Your flexible server is not in one of your Azure virtual networks
-* Network traffic to and from your server does not go over a private network. The traffic uses the general internet pathways.
+- Only the IP addresses you allow have permission to access your MySQL flexible server. By default, no IP addresses are allowed. You can add IP addresses when initially setting up your server or after your server has been created.
+- Your MySQL server has a publicly resolvable DNS name
+- Your flexible server isn't in one of your Azure virtual networks
+- Network traffic to and from your server doesn't go over a private network. The traffic uses the general internet pathways.
### Firewall rules
-Granting permission to an IP address is called a firewall rule. If a connection attempt comes from an IP address you have not allowed, the originating client will see an error.
+Granting permission to an IP address is called a firewall rule. If a connection attempt comes from an IP address you haven't allowed, the originating client sees an error.
-### Allowing all Azure IP addresses
+### Allow all Azure IP addresses
-If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses.
+You can consider enabling connections from all Azure data center IP addresses if a fixed outgoing IP address isn't available for your Azure service.
-> [!IMPORTANT]
-> The **Allow public access from Azure services and resources within Azure** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+> [!IMPORTANT]
+> The **Allow public access from Azure services and resources within Azure** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, ensure your login and user permissions limit access to only authorized users.
Learn how to enable and manage public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md).
-### Troubleshooting public access issues
-
-Consider the following points when access to the Microsoft Azure Database for MySQL Server service does not behave as you expect:
+### Troubleshoot public access issues
-* **Changes to the allowlist have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL Server firewall configuration to take effect.
+Consider the following points when access to the Microsoft Azure Database for MySQL Server service doesn't behave as you expect:
-* **Authentication failed:** If a user does not have permissions on the Azure Database for MySQL server or the password used is incorrect, the connection to the Azure Database for MySQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server. Each client must still provide the necessary security credentials.
+- **Changes to the allowlist haven't taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for MySQL Server firewall configuration to take effect.
-* **Dynamic client IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you could try one of the following solutions:
+- **Authentication failed:** If a user doesn't have permissions on the Azure Database for MySQL server or the password used is incorrect, the connection to the Azure Database for MySQL server is denied. Creating a firewall setting only allows clients to attempt to connect to your server. Each client must still provide the necessary security credentials.
- * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for MySQL Server, and then add the IP address range as a firewall rule.
- * Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule.
+- **Dynamic client IP address:** If you have an Internet connection with dynamic IP addressing and you're having trouble getting through the firewall, you could try one of the following solutions (a small sketch for checking your current outbound IP follows this list):
+ - Ask your internet service provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for MySQL Server, and then add the IP address range as a firewall rule.
+ - Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule.
-* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, it will show the validation error.
+- **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, a validation error is shown.
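For the dynamic client IP case above, it can be handy to check which public IP your connections currently come from before adding or updating a firewall rule in the portal or the Azure CLI. The sketch below assumes a reachable public echo service; the service URL is an illustrative choice.

```python
# Minimal helper: print the public IPv4 address your outbound traffic currently uses,
# so you can add it as a firewall rule. The echo service used here is an assumption.
import urllib.request

with urllib.request.urlopen("https://api.ipify.org") as response:
    print("Current outbound IP address:", response.read().decode())
```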
## Next steps
-* Learn how to enable public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md)
-* Learn how to [use TLS](how-to-connect-tls-ssl.md)
+- Learn how to enable public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md)
+- Learn how to [use TLS](how-to-connect-tls-ssl.md)
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
Title: Private Network Access overview - Azure Database for MySQL Flexible Server description: Learn about private access networking option in the Flexible Server deployment option for Azure Database for MySQL+++ Last updated : 11/21/2022 -- Previously updated : 8/6/2021 # Private Network Access for Azure Database for MySQL - Flexible Server [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-This article describes the private connectivity option for Azure MySQL Flexible Server. You will learn in detail the Virtual network concepts for Azure Database for MySQL Flexible server to create a server securely in Azure.
-
+This article describes the private connectivity option for Azure MySQL Flexible Server. You learn in detail the virtual network concepts for Azure Database for MySQL Flexible server to create a server securely in Azure.
## Private access (VNet Integration)
-[Azure Virtual Network (VNet)](../../virtual-network/virtual-networks-overview.md) is the fundamental building block for your private network in Azure. Virtual Network (VNet) integration with Azure Database for MySQL - Flexible Server brings the Azure's benefits of network security and isolation.
+[Azure Virtual Network (VNet)](../../virtual-network/virtual-networks-overview.md) is the fundamental building block for your private network in Azure. Virtual Network (VNet) integration with Azure Database for MySQL - Flexible Server brings Azure's benefits of network security and isolation.
-Virtual Network (VNet) integration for an Azure Database for MySQL - Flexible Server enables you to lock down access to the server to only your virtual network infrastructure. Your virtual network(VNet) can include all your application and database resources in a single virtual network or may stretch across different VNets in the same or different regions. Seamless connectivity between various Virtual networks can be established by [peering](../../virtual-network/virtual-network-peering-overview.md), which uses Microsoft's low latency, high-bandwidth private backbone infrastructure backbone infrastructure. The virtual networks appear as one for connectivity purposes.
+Virtual Network (VNet) integration for an Azure Database for MySQL - Flexible Server enables you to lock down access to the server to only your virtual network infrastructure. Your virtual network(VNet) can include all your application and database resources in a single virtual network or may stretch across different VNets in the same region or a different region. Seamless connectivity between various Virtual networks can be established by [peering](../../virtual-network/virtual-network-peering-overview.md), which uses Microsoft's low latency, high-bandwidth private backbone infrastructure. The virtual networks appear as one for connectivity purposes.
Azure Database for MySQL - Flexible Server supports client connectivity from:
-* Virtual networks within the same Azure region. (locally peered VNets)
-* Virtual networks across Azure regions. (Global peered VNets)
-
-Subnets enable you to segment the virtual network into one or more sub-networks and allocate a portion of the virtual network's address space to which you can then deploy Azure resources. Azure Database for MySQL - Flexible Server requires a [delegated subnet](../../virtual-network/subnet-delegation-overview.md). A delegated subnet is an explicit identifier that a subnet can host a only Azure Database for MySQL - Flexible Servers. By delegating the subnet, the service gets explicit permissions to create service-specific resources in the subnet to seamlessly manage your Azure Database for MySQL - Flexible Server.
+- Virtual networks within the same Azure region. (locally peered VNets)
+- Virtual networks across Azure regions. (Global peered VNets)
-> [!NOTE]
-> The smallest range you can specify for a subnet is /29, which provides eight IP addresses, of which five will be utilized by Azure internally, whereas a Azure Database for MySQL - Flexible server you would require one IP address per node to be allocated from the delegated subnet when private access is enabled. HA enabled servers would need two and Non-HA server would need one IP address. Recommendation is to reserve at least 2 IP address per flexible server keeping in mind that we can enable high availability options later.
+Subnets enable you to segment the virtual network into one or more subnetworks and allocate a portion of the virtual network's address space to which you can then deploy Azure resources. Azure Database for MySQL - Flexible Server requires a [delegated subnet](../../virtual-network/subnet-delegation-overview.md). Delegating a subnet explicitly marks it as able to host only Azure Database for MySQL - Flexible Servers. By delegating the subnet, the service gets explicit permissions to create service-specific resources in the subnet to manage your Azure Database for MySQL - Flexible Server seamlessly.
-Azure Database for MySQL - Flexible Server integrates with Azure [Private DNS zones](../../dns/private-dns-privatednszone.md) to provide a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. Private DNS zone can be linked to one or more virtual networks by creating [virtual network links](../../dns/private-dns-virtual-network-links.md)
+> [!NOTE]
+> The smallest range you can specify for a subnet is /29, which provides eight IP addresses, of which five are utilized by Azure internally. In contrast, an Azure Database for MySQL - Flexible server requires one IP address per node to be allocated from the delegated subnet when private access is enabled. HA-enabled servers need two IP addresses, and non-HA servers need one. The recommendation is to reserve at least two IP addresses per flexible server, keeping in mind that you can enable high availability options later.
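The arithmetic in the note is easy to sanity-check. The following is a small sketch using an example /29 address range; the five reserved addresses are the Azure-internal reservation mentioned above.

```python
# Minimal sketch of the subnet sizing note: a /29 yields eight addresses, Azure reserves
# five internally, which leaves three for flexible server nodes (two for an HA pair,
# one for a non-HA server).
import ipaddress

AZURE_RESERVED = 5                               # per the note above
subnet = ipaddress.ip_network("10.0.1.0/29")     # example delegated subnet

usable = subnet.num_addresses - AZURE_RESERVED
print(f"{subnet}: {subnet.num_addresses} addresses, {usable} usable for server nodes")
```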
+Azure Database for MySQL - Flexible Server integrates with Azure [Private DNS zones](../../dns/private-dns-privatednszone.md) to provide a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. A private DNS zone can be linked to one or more virtual networks by creating [virtual network links](../../dns/private-dns-virtual-network-links.md).
:::image type="content" source="./media/concepts-networking/vnet-diagram.png" alt-text="Flexible server MySQL VNET"::: In the above diagram, 1. Flexible servers are injected into a delegated subnet - 10.0.1.0/24 of VNET **VNet-1**.
-2. Applications that are deployed on different subnets within the same vnet can access the Flexible servers directly.
-3. Applications that are deployed on a different VNET **VNet-2** do not have direct access to flexible servers. You have to perform [private DNS zone VNET peering](#private-dns-zone-and-vnet-peering) before they can access the flexible server.
+1. Applications deployed on different subnets within the same vnet can access the Flexible servers directly.
+1. Applications deployed on a different VNET **VNet-2** don't have direct access to flexible servers. Before they can access the flexible server, you must perform a [private DNS zone VNET peering](#private-dns-zone-and-vnet-peering).
## Virtual network concepts Here are some concepts to be familiar with when using virtual networks with MySQL flexible servers.
-* **Virtual network** -
+- **Virtual network** -
- An Azure Virtual Network (VNet) contains a private IP address space that is configured for your use. Visit the [Azure Virtual Network overview](../../virtual-network/virtual-networks-overview.md) to learn more about Azure virtual networking.
+ An Azure Virtual Network (VNet) contains a private IP address space configured for your use. Visit the [Azure Virtual Network overview](../../virtual-network/virtual-networks-overview.md) to learn more about Azure virtual networking.
- Your virtual network must be in the same Azure region as your flexible server.
+ Your virtual network must be in the same Azure region as your flexible server.
-* **Delegated subnet** -
+- **Delegated subnet** -
- A virtual network contains subnets (sub-networks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
+ A virtual network contains subnets (subnetworks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
Your MySQL flexible server must be in a subnet that is **delegated** for MySQL flexible server use only. This delegation means that only Azure Database for MySQL Flexible Servers can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as Microsoft.DBforMySQL/flexibleServers.
-* **Network security groups (NSG)**
+- **Network security groups (NSG)**
Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. Review the [network security group overview](../../virtual-network/network-security-groups-overview.md) for more information.
-* **Private DNS zone integration** -
+- **Private DNS zone integration**
Azure private DNS zone integration allows you to resolve the private DNS within the current VNET or any in-region peered VNET where the private DNS Zone is linked.
-* **Virtual network peering**
+- **Virtual network peering**
- Virtual network peering enables you to seamlessly connect two or more Virtual Networks in Azure. The peered virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. The traffic between client application and flexible server in peered VNets is routed through Microsoft's private network only and is isolated to that network only.
+ A virtual network peering enables you to connect two or more Virtual Networks in Azure seamlessly. The peered virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. The traffic between the client application and the flexible server in peered VNets is routed only through Microsoft's private network and is isolated to that network.
-## Using Private DNS Zone
+## Use Private DNS Zone
-* If you use the Azure portal or the Azure CLI to create flexible servers with VNET, a new private DNS zone is auto-provisioned per server in your subscription using the server name provided. Alternatively, if you want to setup your own private DNS zone to use with the flexible server, please see the [private DNS overview](../../dns/private-dns-overview.md) documentation.
-* If you use Azure API, an Azure Resource Manager template (ARM template), or Terraform, please create private DNS zones that end with `mysql.database.azure.com` and use them while configuring flexible servers with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md).
+- If you use the Azure portal or the Azure CLI to create flexible servers with VNET, a new private DNS zone is auto-provisioned per server in your subscription using the server name provided. Alternatively, if you want to set up your own private DNS zone with the flexible server, see the [private DNS overview](../../dns/private-dns-overview.md) documentation.
+- If you use Azure API, an Azure Resource Manager template (ARM template), or Terraform, create private DNS zones that end with `mysql.database.azure.com` and use them while configuring flexible servers with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md).
- > [!IMPORTANT]
- > Private DNS zone names must end with `mysql.database.azure.com`. If you are connecting to the Azure Database for MySQL - Flexible sever with SSL and are using an option to perform full verification (sslmode=VERTIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
+ > [!IMPORTANT]
+ > Private DNS zone names must end with `mysql.database.azure.com`. If you are connecting to an Azure Database for MySQL flexible server with SSL and you're using an option to perform full verification (ssl-mode=VERIFY_IDENTITY) with the certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
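After the private DNS zone is linked, a quick way to confirm name resolution from a VM inside the (peered) virtual network is to resolve the server FQDN and check that it returns a private IP address. The sketch below uses only the Python standard library; the server name is a placeholder.

```python
# Minimal check: resolve the flexible server FQDN and report whether the returned
# addresses are private (expected when the private DNS zone link is in place).
import ipaddress
import socket

fqdn = "<servername>.mysql.database.azure.com"   # placeholder
addresses = {info[4][0] for info in socket.getaddrinfo(fqdn, 3306)}

for address in sorted(addresses):
    kind = "private" if ipaddress.ip_address(address).is_private else "public"
    print(address, "->", kind)
```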
-## Integration with custom DNS server
+## Integration with a custom DNS server
-If you are using the custom DNS server then you must **use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server**. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the VNet or reachable via the VNET's DNS Server setting. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+If you're using the custom DNS server, then you must **use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server**. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the VNet or reachable via the VNET's DNS Server setting. Refer to [name resolution that uses your DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
-> [!IMPORTANT]
- > For successful provisioning of the Flexible Server, even if you are using a custom DNS server, **you must not block DNS traffic to [AzurePlatformDNS](../../virtual-network/service-tags-overview.md) using [NSG](../../virtual-network/network-security-groups-overview.md)**.
+> [!IMPORTANT]
+> For successful provisioning of the flexible server, even if you are using a custom DNS server, **you must not block DNS traffic to [AzurePlatformDNS](../../virtual-network/service-tags-overview.md) using [NSG](../../virtual-network/network-security-groups-overview.md)**.
## Private DNS zone and VNET peering
-Private DNS zone settings and VNET peering are independent of each other. Please refer to the [Using Private DNS Zone](concepts-networking-vnet.md#using-private-dns-zone) section above for more details on creating and using Private DNS zones.
+Private DNS zone settings and VNET peering are independent of each other. For more information on creating and using Private DNS zones, see the [Use Private DNS Zone](#use-private-dns-zone) section.
If you want to connect to the flexible server from a client that is provisioned in another VNET from the same region or a different region, you have to link the private DNS zone with the VNET. See [how to link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) documentation.
-> [!NOTE]
+> [!NOTE]
> Only private DNS zones with names that end with `mysql.database.azure.com` can be linked.
-## Connecting from on-premises to flexible server in Virtual Network using ExpressRoute or VPN
+## Connect from an on-premises server to a flexible server in a Virtual Network using ExpressRoute or VPN
-For workloads requiring access to flexible server in virtual network from on-premises network, you will require [ExpressRoute](/azure/architecture/reference-architectures/hybrid-networking/expressroute/) or [VPN](/azure/architecture/reference-architectures/hybrid-networking/vpn/) and virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/). With this setup in place, you will require a DNS forwarder to resolve the flexible servername if you would like to connect from client application (like MySQL Workbench) running on on-premises virtual network. This DNS forwarder is responsible for resolving all the DNS queries via a server-level forwarder to the Azure-provided DNS service [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
+For workloads requiring access to a flexible server in a virtual network from an on-premises network, you need an [ExpressRoute](/azure/architecture/reference-architectures/hybrid-networking/expressroute/) or [VPN](/azure/architecture/reference-architectures/hybrid-networking/vpn/) and a virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/). With this setup in place, you need a DNS forwarder to resolve the flexible server name if you want to connect from client applications (like MySQL Workbench) running on the on-premises network. This DNS forwarder is responsible for resolving all the DNS queries via a server-level forwarder to the Azure-provided DNS service [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
-To configure properly, you need the following resources:
+To configure correctly, you need the following resources:
-* On-premises network
-* MySQL Flexible Server provisioned with private access (VNet integration)
-* Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
-* Use DNS forwarder [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md) deployed in Azure
+- On-premises network
+- MySQL Flexible Server provisioned with private access (VNet integration)
+- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
+- Use DNS forwarder [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md) deployed in Azure
You can then use the flexible server name (FQDN) to connect from the client application in the peered virtual network or on-premises network to the flexible server. ## Unsupported virtual network scenarios
-* Public endpoint (or public IP or DNS) - A flexible server deployed to a virtual network cannot have a public endpoint
-* After the flexible server is deployed to a virtual network and subnet, you cannot move it to another virtual network or subnet. You cannot move the virtual network into another resource group or subscription.
-* Subnet size (address spaces) cannot be increased once resources exist in the subnet
-* Flexible server doesn't support Private Link. Instead, it uses VNet injection to make flexible server available within a VNet.
+- Public endpoint (or public IP or DNS) - A flexible server deployed to a virtual network can't have a public endpoint
+- After the flexible server is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet. You can't move the virtual network into another resource group or subscription.
+- Subnet size (address spaces) can't be increased once resources exist in the subnet
+- Flexible server doesn't support Private Link. Instead, it uses VNet injection to make a flexible server available within a VNet.
+ ## Next steps
-* Learn how to enable private access (vnet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md)
-* Learn how to [use TLS](how-to-connect-tls-ssl.md)
+- Learn how to enable private access (VNet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md)
+- Learn how to [use TLS](how-to-connect-tls-ssl.md)
mysql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking.md
Title: Networking overview - Azure Database for MySQL Flexible Server description: Learn about connectivity and networking options in the Flexible Server deployment option for Azure Database for MySQL+++ Last updated : 11/21/2022 -- Previously updated : 9/23/2020 # Connectivity and networking concepts for Azure Database for MySQL - Flexible Server [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-This article introduces the concepts to control connectivity to your Azure MySQL Flexible Server. You will learn in detail the networking concepts for Azure Database for MySQL Flexible server to create and access a server securely in Azure.
+This article introduces the concepts to control connectivity to your Azure MySQL Flexible Server. You learn in detail the networking concepts for Azure Database for MySQL Flexible server to create and access a server securely in Azure.
Azure Database for MySQL - Flexible server supports two ways to configure connectivity to your servers:
-> [!NOTE]
+> [!NOTE]
> Your networking option cannot be changed after the server is created.
- * **Private access (VNet Integration)** [Private access](./concepts-networking-vnet.md) You can deploy your flexible server into your [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses.
-
- * **Public access (allowed IP addresses)** [Public access](./concepts-networking-public.md) Your flexible server is accessed through a public endpoint. The public endpoint is a publicly resolvable DNS address. The phrase ΓÇ£allowed IP addressesΓÇ¥ refers to a range of IPs you choose to give permission to access your server. These permissions are called **firewall rules**
+ - **Private access (VNet Integration)** [Private access](./concepts-networking-vnet.md) You can deploy your flexible server into your [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses.
+
+ - **Public access (allowed IP addresses)** [Public access](./concepts-networking-public.md) Your flexible server is accessed through a public endpoint. The public endpoint is a publicly resolvable DNS address. The phrase "allowed IP addresses" refers to a range of IPs you choose to give permission to access your server. These permissions are called **firewall rules**.
-## Choosing a networking option
+## Choose a networking option
Choose **private access (VNet integration)** if you want the following capabilities:
- * Connect from Azure resources in the same virtual network or [peered virtual network](../../virtual-network/virtual-network-peering-overview.md) to your flexible server
- * Use VPN or ExpressRoute to connect from non-Azure resources to your flexible server
- * No public endpoint
+ - Connect from Azure resources in the same virtual network or [peered virtual network](../../virtual-network/virtual-network-peering-overview.md) to your flexible server
+ - Use VPN or ExpressRoute to connect from non-Azure resources to your flexible server
+ - No public endpoint
Choose **public access (allowed IP addresses)** method if you want the following capabilities:
- * Connect from Azure resources that do not support virtual networks
- * Connect from resources outside of an Azure that are not connected by VPN or ExpressRoute
- * The flexible server has a public endpoint
+ - Connect from Azure resources that don't support virtual networks
+ - Connect from resources outside of an Azure that aren't connected by VPN or ExpressRoute
+ - The flexible server has a public endpoint
The following characteristics apply whether you choose to use the private access or the public access option:
-* Connections from allowed IP addresses need to authenticate to the MySQL server with valid credentials
-* [Connection encryption](#tls-and-ssl) is available for your network traffic
-* The server has a fully qualified domain name (fqdn). For the hostname property in connection strings, we recommend using the fqdn instead of an IP address.
-* Both options control access at the server-level, not at the database- or table-level. You would use MySQLΓÇÖs roles properties to control database, table, and other object access.
-
+- Connections from allowed IP addresses need to authenticate to the MySQL server with valid credentials
+- [Connection encryption](#tls-and-ssl) is available for your network traffic
+- The server has a fully qualified domain name (fqdn). We recommend using the fqdn instead of an IP address for the hostname property in connection strings.
+- Both options control access at the server level, not at the database or table level. You would use MySQL's roles and privileges to control database, table, and other object access.
### Unsupported virtual network scenarios
-* Public endpoint (or public IP or DNS) - A flexible server deployed to a virtual network cannot have a public endpoint
-* After the flexible server is deployed to a virtual network and subnet, you cannot move it to another virtual network or subnet. * After the flexible server is deployed, you cannot move the virtual network used by flexible server into another resource group or subscription.
-* Subnet size (address spaces) cannot be increased once resources exist in the subnet
-* Change from Public to Private access is not allowed after the server is created. Recommended way is to use Point-in-time restore
+- Public endpoint (or public IP or DNS) - A flexible server deployed to a virtual network can't have a public endpoint.
+- After the flexible server is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet.
+- After the flexible server is deployed, you can't move the virtual network the flexible server uses into another resource group or subscription.
+- Subnet size (address spaces) can't be increased once resources exist in the subnet.
+- Change from Public to Private access isn't allowed after the server is created. The recommended way is to use point-in-time restore.
-Learn how to enable private access (vnet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md).
+Learn how to enable private access (VNet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md).
-> [!NOTE]
-> If you are using the custom DNS server then you must use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+> [!NOTE]
+> If you are using the custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server. Refer to [name resolution that uses your DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
## Hostname
-Regardless of the networking option you choose, we recommend you to use fully qualified domain name (FQDN) `<servername>.mysql.database.azure.com` in connection strings when connecting to your flexible server.
+
+Regardless of your networking option, we recommend you use the fully qualified domain name (FQDN) `<servername>.mysql.database.azure.com` in connection strings when connecting to your flexible server.
## TLS and SSL
-Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL server using Secure Sockets Layer (SSL) with Transport layer security(TLS) encryption. TLS is an industry standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-Azure Database for MySQL Flexible Server supports encrypted connections using Transport Layer Security (TLS 1.2) by default and all incoming connections with TLS 1.0 and TLS 1.1 will be denied by default. The encrypted connection enforcement or TLS version configuration on your flexible server can be configured and changed.
+Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL server using Secure Sockets Layer (SSL) with Transport Layer Security (TLS) encryption. TLS is an industry-standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-Following are the different configurations of SSL and TLS settings you can have for your flexible server:
+Azure Database for MySQL Flexible Server supports encrypted connections using Transport Layer Security (TLS 1.2) by default, and all incoming connections with TLS 1.0 and TLS 1.1 are denied by default. The encrypted connection enforcement or TLS version configuration on your flexible server can be configured and changed.
-| Scenario | Server parameter settings | Description |
-||--||
-|Disable SSL (encrypted connections) | require_secure_transport = OFF |If your legacy application doesn't support encrypted connections to MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF.|
-|Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLSV1 or TLSV1.1| If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections but configure your flexible server to allow connections with the tls version (v1.0 or v1.1) supported by your application|
-|Enforce SSL with TLS version = 1.2(Default configuration)|require_secure_transport = ON and tls_version = TLSV1.2| This is the recommended and default configuration for flexible server.|
-|Enforce SSL with TLS version = 1.3(Supported with MySQL v8.0 and above)| require_secure_transport = ON and tls_version = TLSV1.3| This is useful and recommended for new applications development|
+Following are the different configurations of SSL and TLS settings you can have for your flexible server:
-> [!Note]
-> Changes to SSL Cipher on flexible server is not supported. FIPS cipher suites is enforced by default when tls_version is set to TLS version 1.2 . For TLS versions other than version 1.2, SSL Cipher is set to default settings which comes with MySQL community installation.
+| Scenario | Server parameter settings | Description |
+| | | |
+| Disable SSL (encrypted connections) | require_secure_transport = OFF | If your legacy application doesn't support encrypted connections to the MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF. |
+| Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLSv1 or TLSv1.1 | If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections but configure your flexible server to allow connections with the TLS version (v1.0 or v1.1) supported by your application |
+| Enforce SSL with TLS version = 1.2 (default configuration) | require_secure_transport = ON and tls_version = TLSv1.2 | This is the recommended and default configuration for a flexible server. |
+| Enforce SSL with TLS version = 1.3 (supported with MySQL v8.0 and above) | require_secure_transport = ON and tls_version = TLSv1.3 | This is useful and recommended for new application development |
-Review how to [connect using SSL/TLS](how-to-connect-tls-ssl.md) to learn more.
+> [!NOTE]
+> Changes to the SSL cipher on the flexible server aren't supported. FIPS cipher suites are enforced by default when tls_version is set to TLS version 1.2. For TLS versions other than 1.2, the SSL cipher is set to the default settings that come with the MySQL community installation.
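These settings are ordinary server parameters, so they can also be changed with the Azure CLI. A minimal sketch, assuming a server named mydemoserver in a resource group named myresourcegroup (both placeholders):

```azurecli-interactive
# Disable enforcement of encrypted connections (a dynamic parameter, so it takes effect immediately).
az mysql flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name require_secure_transport --value OFF
```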
+Review how to [connect using SSL/TLS](how-to-connect-tls-ssl.md) to learn more.
## Next steps
-* Learn how to enable private access (vnet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md)
-* Learn how to enable public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md)
-* Learn how to [use TLS](how-to-connect-tls-ssl.md)
+
+- Learn how to enable private access (VNet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md)
+- Learn how to enable public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md)
+- Learn how to [use TLS](how-to-connect-tls-ssl.md)
mysql How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md
description: Learn how to set up Azure Active Directory authentication for Azure
Previously updated : 09/21/2022 Last updated : 11/21/2022
In this tutorial, you learn how to:
- Configure the Azure AD Admin - Connect to Azure Database for MySQL flexible server using Azure AD
+## Prerequisites
+
+- An Azure account with an active subscription.
+
+- If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free) before you begin.
+
+ > [!NOTE]
+ > With an Azure free account, you can now try Azure Database for MySQL - Flexible Server for free for 12 months. For more information, see [Try Flexible Server for free](how-to-deploy-on-azure-free-account.md).
+
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
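For example, you can check the installed version and upgrade in place (the az upgrade command is available in Azure CLI 2.11.0 and later):

```azurecli-interactive
# Show the currently installed Azure CLI version, then upgrade if needed.
az version
az upgrade
```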
+ ## Configure the Azure AD Admin
-
-To create an Azure AD Admin user, please follow the following steps.
-- In the Azure portal, select the instance of Azure Database for MySQL Flexible server that you want to enable for Azure AD.
-
-- Under Security pane, select **Authentication**:
+To create an Azure AD Admin user, follow these steps.
+
+- In the Azure portal, select the instance of Azure Database for MySQL flexible server that you want to enable for Azure AD.
-- There are three types of authentication available:
+- Under the Security pane, select **Authentication**:
- - **MySQL authentication only** ΓÇô By default, MySQL uses the built-in mysql_native_password authentication plugin, which performs authentication using the native password hashing method
+- There are three types of authentication available:
- - **Azure Active Directory authentication only** ΓÇô Only allows authentication with an Azure AD account. Disables mysql_native_password authentication and turns _ON_ the server parameter aad_auth_only
 + - **MySQL authentication only** – By default, MySQL uses the built-in mysql_native_password authentication plugin, which performs authentication using the native password hashing method
- - **MySQL and Azure Active Directory authentication** ΓÇô Allows authentication using a native MySQL password or an Azure AD account. Turns _OFF_ the server parameter aad_auth_only
 + - **Azure Active Directory authentication only** – Only allows authentication with an Azure AD account. Disables mysql_native_password authentication and turns _ON_ the server parameter aad_auth_only
-- **Select Identity** ΓÇô Select/Add User assigned managed identity. To allow the UMI to read from Microsoft Graph as the server identity, the following permissions are required. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
 + - **MySQL and Azure Active Directory authentication** – Allows authentication using a native MySQL password or an Azure AD account. Turns _OFF_ the server parameter aad_auth_only
+
+- **Select Identity** – Select or add a user-assigned managed identity. The following permissions are required to allow the UMI to read from Microsoft Graph as the server identity. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
  - [User.Read.All](/graph/permissions-reference#user-permissions): Allows access to Azure AD user information.
  - [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information.
  - [Application.Read.All](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information.
-For guidance about how to grant and use the permissions, refer [Microsoft Graph permissions](/graph/permissions-reference)
+For guidance about how to grant and use the permissions, refer to [Microsoft Graph permissions](/graph/permissions-reference)
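As an illustrative sketch only (the identity display name test-identity is a placeholder, and calling Microsoft Graph through `az rest` is just one possible way to assign these app roles), the assignment might look like the following:

```azurecli-interactive
# Object ID of the Microsoft Graph service principal in your tenant
# (00000003-0000-0000-c000-000000000000 is the well-known Microsoft Graph application ID).
graphId=$(az ad sp show --id 00000003-0000-0000-c000-000000000000 --query id -o tsv)

# Object ID of the service principal backing your user-assigned managed identity (placeholder name).
umiId=$(az ad sp list --display-name "test-identity" --query "[0].id" -o tsv)

# Look up the app role ID for one permission and assign it to the UMI; repeat for the other permissions.
roleId=$(az ad sp show --id 00000003-0000-0000-c000-000000000000 --query "appRoles[?value=='User.Read.All'].id | [0]" -o tsv)
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/$umiId/appRoleAssignments" \
  --headers "Content-Type=application/json" \
  --body "{\"principalId\":\"$umiId\",\"resourceId\":\"$graphId\",\"appRoleId\":\"$roleId\"}"
```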
-After you grant the permissions to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a server identity.
+After you grant the permissions to the UMI, they're enabled for all servers or instances created with the UMI assigned as a server identity.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Only a [Global Administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or [Privileged Role Administrator](../../active-directory/roles/permissions-reference.md#privileged-role-administrator) can grant these permissions. -- Select a valid Azure AD user or an Azure AD group in the customer tenant to be **Azure AD administrator**. Once Azure AD authentication support has been enabled, Azure AD Admins can be added as security principals with permissions to add Azure AD Users to the MySQL server.
+- Select a valid Azure AD user or an Azure AD group in the customer tenant to be **Azure AD administrator**. Once Azure AD authentication support has been enabled, Azure AD Admins can be added as security principals with permission to add Azure AD Users to the MySQL server.
- > [!NOTE]
- > Only one Azure AD admin can be created per MySQL server and selection of another one will overwrite the existing Azure AD admin configured for the server.
+ > [!NOTE]
+ > Only one Azure AD admin can be created per MySQL server, and selecting another overwrites the existing Azure AD admin configured for the server.
## Connect to Azure Database for MySQL flexible server using Azure AD
-#### Prerequisites
--- An Azure account with an active subscription.--- If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free) before you begin.
-
- > [!Note]
- > With an Azure free account, you can now try Azure Database for MySQL - Flexible Server for free for 12 months. For more information, see [Try Flexible Server for free](how-to-deploy-on-azure-free-account.md).
+### 1 - Authenticate with Azure AD
-- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+Start by authenticating with Azure AD using the Azure CLI tool.
+_(This step isn't required in Azure Cloud Shell.)_
-**Step 1: Authenticate with Azure AD**
-
-Start by authenticating with Azure AD using the Azure CLI tool.
-_(This step is not required in Azure Cloud Shell.)_
--- Log in to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the ID property, which refers to Subscription ID for your Azure account:
+- Sign in to your Azure account using the [az login](/cli/azure/reference-index#az-login) command. Note the ID property, which refers to the Subscription ID for your Azure account:
```azurecli-interactive az login ```
-The command will launch a browser window to the Azure AD authentication page. It requires you to give your Azure AD user ID and the password.
+The command launches a browser window to the Azure AD authentication page. It requires you to enter your Azure AD user ID and password.
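If a browser isn't available on the machine you're signing in from (for example, over SSH), device code sign-in is an alternative:

```azurecli-interactive
az login --use-device-code
```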
- If you have multiple subscriptions, choose the appropriate subscription using the az account set command:
The command will launch a browser window to the Azure AD authentication page. It
az account set --subscription \<subscription id\> ```
-**Step 2: Retrieve Azure AD access token**
+### 2 - Retrieve Azure AD access token
-Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for MySQL flexible server.
+Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for MySQL flexible server.
- Example (for Public Cloud):
-
+ ```azurecli-interactive az account get-access-token --resource https://ossrdbms-aad.database.windows.net ``` -- The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
+- The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using the following:
```azurecli-interactive az cloud show ``` -- For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
+- For Azure CLI version 2.0.71 and later, the command can be specified in the following more convenient version for all clouds:
```azurecli-interactive az account get-access-token --resource-type oss-rdbms ```
-
-- Using PowerShell, you can use the following command to acquire access token: +
+- Using PowerShell, you can use the following command to acquire an access token:
```powershell
- $accessToken = Get-AzAccessToken -ResourceUrl https://ossrdbms-aad.database.windows.net
+ $accessToken = Get-AzAccessToken -ResourceUrl https://ossrdbms-aad.database.windows.net
$accessToken.Token | out-file C:\temp\MySQLAccessToken.txt ```
-
-After authentication is successful, Azure AD will return an access token:
+
+After authentication is successful, Azure AD returns an access token:
```json
-{
- "accessToken": "TOKEN",
- "expiresOn": "...",
- "subscription": "...",
- "tenant": "...",
- "tokenType": "Bearer"
+{
+ "accessToken": "TOKEN",
+ "expiresOn": "...",
+ "subscription": "...",
+ "tenant": "...",
+ "tokenType": "Bearer"
} ```
-The token is a Base 64 string that encodes all the information about the authenticated user, and which is targeted to the Azure Database for MySQL service.
+The token is a Base 64 string that encodes all the information about the authenticated user and is targeted to the Azure Database for MySQL service.
-The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token just before initiating the login to Azure Database for MySQL Flexible server.
+The access token is valid for anywhere between 5 and 60 minutes. We recommend that you get the access token shortly before initiating the sign-in to Azure Database for MySQL Flexible Server.
-- You can use the following PowerShell command to see the token validity.
+- You can use the following PowerShell command to see the token validity.
```powershell $accessToken.ExpiresOn.DateTime ```
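With the Azure CLI, the same information is available from the expiresOn field of the token response, for example:

```azurecli-interactive
az account get-access-token --resource-type oss-rdbms --query expiresOn --output tsv
```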
-**Step 3: Use token as password for logging in with MySQL**
+### 3 - Use a token as a password for logging in with MySQL
+
+When connecting, use the access token as the MySQL user password. For GUI clients such as MySQL Workbench, you can use the method described above to retrieve the token.
-When connecting you need to use the access token as the MySQL user password. When using GUI clients such as MySQLWorkbench, you can use the method described above to retrieve the token.
+## Connect to Azure Database for MySQL flexible server using MySQL CLI
-#### Using MySQL CLI
-When using the CLI, you can use this short-hand to connect:
+When using the CLI, you can use this shorthand to connect:
**Example (Linux/macOS):** ```
-mysql -h mydb.mysql.database.azure.com \
- --user user@tenant.onmicrosoft.com \
- --enable-cleartext-plugin \
+mysql -h mydb.mysql.database.azure.com \
+ --user user@tenant.onmicrosoft.com \
+ --enable-cleartext-plugin \
--password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken`
-```
-#### Using MySQL Workbench
+```
+
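If you prefer, capture the token in a shell variable first. A small sketch, using the same placeholder server and user names as the example above:

```bash
# Acquire the token once, then pass it as the password.
TOKEN=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken)
mysql -h mydb.mysql.database.azure.com \
      --user user@tenant.onmicrosoft.com \
      --enable-cleartext-plugin \
      --password="$TOKEN"
```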
+## Connect to Azure Database for MySQL flexible server using MySQL Workbench
-* Launch MySQL Workbench and Click the Database option, then click "Connect to database"
-* In the hostname field, enter the MySQL FQDN eg. mysql.database.azure.com
-* In the username field, enter the MySQL Azure Active Directory administrator name and append this with MySQL server name, not the FQDN e.g. user@tenant.onmicrosoft.com
-* In the password field, click "Store in Vault" and paste in the access token from file e.g. C:\temp\MySQLAccessToken.txt
-* Click the advanced tab and ensure that you check "Enable Cleartext Authentication Plugin"
-* Click OK to connect to the database
+- Launch MySQL Workbench, select the **Database** option, and then select **Connect to database**.
+- In the hostname field, enter the MySQL FQDN, for example, mysql.database.azure.com.
+- In the username field, enter the MySQL Azure Active Directory administrator name and append it with the MySQL server name, not the FQDN, for example, user@tenant.onmicrosoft.com.
+- In the password field, select **Store in Vault** and paste in the access token from the file, for example, C:\temp\MySQLAccessToken.txt.
+- Select the **Advanced** tab and ensure that **Enable Cleartext Authentication Plugin** is checked.
+- Select **OK** to connect to the database.
-#### Important considerations when connecting:
+## Important considerations when connecting
-* `user@tenant.onmicrosoft.com` is the name of the Azure AD user or group you are trying to connect as
-* Make sure to use the exact way the Azure AD user or group name is spelled
-* Azure AD user and group names are case sensitive
-* When connecting as a group, use only the group name (e.g. `GroupName`)
-* If the name contains spaces, use `\` before each space to escape it
+- `user@tenant.onmicrosoft.com` is the name of the Azure AD user or group you're trying to connect as
+- Make sure the Azure AD user or group name is spelled exactly as it appears in Azure AD
+- Azure AD user and group names are case sensitive
+- When connecting as a group, use only the group name (for example, `GroupName`)
+- If the name contains spaces, use `\` before each space to escape it
-> [!Note]
-> The ΓÇ£enable-cleartext-pluginΓÇ¥ setting ΓÇô you need to use a similar configuration with other clients to make sure the token gets sent to the server without being hashed.
+> [!NOTE]
+> The `enable-cleartext-plugin` setting is required so that the token is sent to the server without being hashed; use a similar configuration with other clients.
-You are now authenticated to your MySQL flexible server using Azure AD authentication.
+You're now authenticated to your MySQL flexible server using Azure AD authentication.
-## Additional Azure AD Admin commands
+## Other Azure AD admin commands
-- Manage server Active Directory administrator
+- Manage server Active Directory administrator
```azurecli-interactive az mysql flexible-server ad-admin ``` -- Create an Active Directory administrator
+- Create an Active Directory administrator
```azurecli-interactive az mysql flexible-server ad-admin create ```
- _Example: Create Active Directory administrator with user 'john@contoso.com', administrator ID '00000000-0000-0000-0000-000000000000' and identity 'test-identity'_
+ _Example: Create Active Directory administrator with user 'john@contoso.com', administrator ID '00000000-0000-0000-0000-000000000000' and identity 'test-identity'_
```azurecli-interactive az mysql flexible-server ad-admin create -g testgroup -s testsvr -u john@contoso.com -i 00000000-0000-0000-0000-000000000000 --identity test-identity
- ```
+ ```
-- Delete an Active Directory administrator
+- Delete an Active Directory administrator
```azurecli-interactive az mysql flexible-server ad-admin delete ```
- _Example: Delete Active Directory administrator_
+ _Example: Delete Active Directory administrator_
```azurecli-interactive az mysql flexible-server ad-admin delete -g testgroup -s testsvr
You are now authenticated to your MySQL flexible server using Azure AD authentic
```azurecli-interactive az mysql flexible-server ad-admin list ```
- _Example: List Active Directory administrators_
+ _Example: List Active Directory administrators_
```azurecli-interactive az mysql flexible-server ad-admin list -g testgroup -s testsvr
You are now authenticated to your MySQL flexible server using Azure AD authentic
az mysql flexible-server ad-admin show ```
- _Example: Get Active Directory administrator_
+ _Example: Get Active Directory administrator_
```azurecli-interactive az mysql flexible-server ad-admin show -g testgroup -s testsvr ``` -- Wait for the Active Directory administrator to satisfy certain conditions
+- Wait for the Active Directory administrator to satisfy certain conditions
```azurecli-interactive az mysql flexible-server ad-admin wait ```
- _Examples:_
- - _Wait until the Active Directory administrator exists_
+ _Examples:_
+ - _Wait until the Active Directory administrator exists_
```azurecli-interactive az mysql flexible-server ad-admin wait -g testgroup -s testsvr --exists ```
- - _Wait for the Active Directory administrator to be deleted_
+ - _Wait for the Active Directory administrator to be deleted_
```azurecli-interactive az mysql flexible-server ad-admin wait -g testgroup -s testsvr --deleted ```
-## Creating Azure AD users in Azure Database for MySQL
+## Create Azure AD users in Azure Database for MySQL
To add an Azure AD user to your Azure Database for MySQL database, perform the following steps after connecting: 1. First ensure that the Azure AD user `<user>@yourtenant.onmicrosoft.com` is a valid user in Azure AD tenant.
-2. Sign in to your Azure Database for MySQL instance as the Azure AD Admin user.
-3. Create user `<user>@yourtenant.onmicrosoft.com` in Azure Database for MySQL.
+1. Sign in to your Azure Database for MySQL instance as the Azure AD Admin user.
+1. Create user `<user>@yourtenant.onmicrosoft.com` in Azure Database for MySQL.
_Example:_ ```sql CREATE AADUSER 'user1@yourtenant.onmicrosoft.com'; ```
-For user names that exceed 32 characters, it is recommended you use an alias instead, to be used when connecting:
+For user names that exceed 32 characters, it's recommended that you use an alias instead when connecting:
_Example:_ ```sql
-CREATE AADUSER 'userWithLongName@yourtenant.onmicrosoft.com' as 'userDefinedShortName';
+CREATE AADUSER 'userWithLongName@yourtenant.onmicrosoft.com' as 'userDefinedShortName';
```
-> [!NOTE]
-> 1. MySQL ignores leading and trailing spaces so user name should not have any leading or trailing spaces.
+> [!NOTE]
+> 1. MySQL ignores leading and trailing spaces, so the user name should not have any leading or trailing spaces.
> 2. Authenticating a user through Azure AD does not give the user any permissions to access objects within the Azure Database for MySQL database. You must grant the user the required permissions manually.
-## Creating Azure AD groups in Azure Database for MySQL
+## Create Azure AD groups in Azure Database for MySQL
-To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
+To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
_Example:_
_Example:_
CREATE AADUSER 'Prod_DB_Readonly'; ```
-When logging in, members of the group will use their personal access tokens, but sign with the group name specified as the username.
+When logging in, group members use their personal access tokens but sign in with the group name specified as the username.
## Compatibility with application drivers
-Most drivers are supported, however make sure to use the settings for sending the password in clear-text, so the token gets sent without modification.
+Most drivers are supported; however, make sure to use the settings for sending the password in clear text, so the token gets sent without modification.
- C/C++ - libmysqlclient: Supported
Most drivers are supported, however make sure to use the settings for sending th
- mysql-net/MySqlConnector: Supported - Node.js
- - mysqljs: Not supported (does not send token in cleartext without patch)
+ - mysqljs: Not supported (doesn't send the token in cleartext without patch)
- node-mysql2: Supported - Perl
Most drivers are supported, however make sure to use the settings for sending th
## Next steps -- Review the concepts for [Azure Active Directory authentication with Azure Database for MySQL flexible server](concepts-azure-ad-authentication.md)
+- Review the concepts for [Azure Active Directory authentication with Azure Database for MySQL flexible server](concepts-azure-ad-authentication.md)
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
Title: Encrypted connectivity using TLS/SSL in Azure Database for MySQL - Flexible Server description: Instructions and information on how to connect using TLS/SSL in Azure Database for MySQL - Flexible Server.+++ Last updated : 11/21/2022 -- Previously updated : 05/24/2022
+ms.devlang: csharp, golang, java, javascript, php, python, ruby
# Connect to Azure Database for MySQL - Flexible Server with encrypted connections
Last updated 05/24/2022
Azure Database for MySQL Flexible Server supports connecting your client applications to the MySQL server using Secure Sockets Layer (SSL) with Transport Layer Security (TLS) encryption. TLS is an industry-standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-Azure Database for MySQL Flexible Server supports encrypted connections using Transport Layer Security (TLS 1.2) by default and all incoming connections with TLS 1.0 and TLS 1.1 will be denied by default. The encrypted connection enforcement or TLS version configuration on your flexible server can be changed as discussed in this article.
+Azure Database for MySQL Flexible Server supports encrypted connections using Transport Layer Security (TLS 1.2) by default and all incoming connections with TLS 1.0 and TLS 1.1 are denied by default. The encrypted connection enforcement or TLS version configuration on your flexible server can be changed as discussed in this article.
Following are the different configurations of SSL and TLS settings you can have for your flexible server:
-| Scenario | Server parameter settings | Description |
-||--||
-|Disable SSL enforcement | require_secure_transport = OFF |If your legacy application doesn't support encrypted connections to MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF.|
-|Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLSV1 or TLSV1.1| If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections but configure your flexible server to allow connections with the tls version (v1.0 or v1.1) supported by your application. Supported only with Azure Database for MySQL ΓÇô Flexible Server version v5.7|
-|Enforce SSL with TLS version = 1.2(Default configuration)|require_secure_transport = ON and tls_version = TLSV1.2| This is the recommended and default configuration for flexible server.|
-|Enforce SSL with TLS version = 1.3| require_secure_transport = ON and tls_version = TLSV1.3| This is useful and recommended for new applications development. Supported only with Azure Database for MySQL ΓÇô Flexible Server version v8.0|
+| Scenario | Server parameter settings | Description |
+| | | |
+| Disable SSL enforcement | require_secure_transport = OFF | If your legacy application doesn't support encrypted connections to MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF. |
+| Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLS 1.0 or TLS 1.1 | If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections, but configure your flexible server to allow connections with the TLS version (1.0 or 1.1) supported by your application. Supported only with Azure Database for MySQL – Flexible Server version v5.7 |
+| Enforce SSL with TLS version = 1.2 (default configuration) | require_secure_transport = ON and tls_version = TLS 1.2 | This is the recommended and default configuration for a flexible server. |
+| Enforce SSL with TLS version = 1.3 | require_secure_transport = ON and tls_version = TLS 1.3 | This is useful and recommended for new application development. Supported only with Azure Database for MySQL – Flexible Server version v8.0 |
-> [!Note]
-> * Changes to SSL Cipher on flexible server is not supported. FIPS cipher suites is enforced by default when tls_version is set to TLS version 1.2 . For TLS versions other than version 1.2, SSL Cipher is set to default settings which comes with MySQL community installation.
-> * MySQL open-source community editions starting with the release of MySQL versions 8.0.26 and 5.7.35, the TLSv1 and TLSv1.1 protocols are deprecated. These protocols released in 1996 and 2006, respectively to encrypt data in motion, are considered weak, outdated, and vulnerable to security threats. For more information, see [Removal of Support for the TLSv1 and TLSv1.1 Protocols.](https://dev.mysql.com/doc/refman/8.0/en/encrypted-connection-protocols-ciphers.html#encrypted-connection-deprecated-protocols).Azure Database for MySQL – Flexible Server will also stop supporting TLS versions once the community stops the support for the protocol, to align with modern security standards.
+> [!NOTE]
+> - Changes to the SSL cipher on the flexible server aren't supported. FIPS cipher suites are enforced by default when tls_version is set to TLS version 1.2. For TLS versions other than 1.2, the SSL cipher is set to the default settings that come with the MySQL community installation.
+> - Starting with the MySQL open-source community editions 8.0.26 and 5.7.35, the TLS 1.0 and TLS 1.1 protocols are deprecated. These protocols, released in 1996 and 2006 respectively to encrypt data in motion, are considered weak, outdated, and vulnerable to security threats. For more information, see [Removal of Support for the TLS 1.0 and TLS 1.1 Protocols](https://dev.mysql.com/doc/refman/8.0/en/encrypted-connection-protocols-ciphers.html#encrypted-connection-deprecated-protocols). Azure Database for MySQL – Flexible Server also stops supporting a TLS version once the community stops supporting the protocol, to align with modern security standards.
In this article, you learn how to:
-* Configure your flexible server
- * With SSL disabled
- * With SSL enforced with TLS version
-* Connect to your flexible server using mysql command-line
- * With encrypted connections disabled
- * With encrypted connections enabled
-* Verify encryption status for your connection
-* Connect to your flexible server with encrypted connections using various application frameworks
+- Configure your flexible server
+ - With SSL disabled
+ - With SSL enforced with TLS version
+- Connect to your flexible server using mysql command-line
+ - With encrypted connections disabled
+ - With encrypted connections enabled
+- Verify encryption status for your connection
+- Connect to your flexible server with encrypted connections using various application frameworks
## Disable SSL enforcement on your flexible server
-If your client application doesn't support encrypted connections, you'll need to disable encrypted connections enforcement on your flexible server. To disable encrypted connections enforcement, you'll need to set require_secure_transport server parameter to OFF as shown in the screenshot and save the server parameter configuration for it to take effect. require_secure_transport is a **dynamic server parameter** which takes effect immediately and doesn't require server restart to take effect.
+If your client application doesn't support encrypted connections, you need to disable enforcement of encrypted connections on your flexible server. To do so, set the require_secure_transport server parameter to OFF, as shown in the screenshot, and save the server parameter configuration for it to take effect. require_secure_transport is a **dynamic server parameter** that takes effect immediately and doesn't require a server restart.
> :::image type="content" source="./media/how-to-connect-tls-ssl/disable-ssl.png" alt-text="Screenshot showing how to disable SSL with Azure Database for MySQL flexible server.":::
The following example shows how to connect to your server using the mysql comman
mysql.exe -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=DISABLED ```
-It's important to note that setting require_secure_transport to OFF doesn't mean encrypted connections won't supported on server side. If you set require_secure_transport to OFF on flexible server but if the client connects with encrypted connection, it will still be accepted. The following connection using mysql client to a flexible server configured with require_secure_transport=OFF will also work as shown below.
+> [!IMPORTANT]
+> Setting require_secure_transport to OFF doesn't mean encrypted connections aren't supported on the server side. If you set require_secure_transport to OFF on the flexible server and a client connects with an encrypted connection, the connection is still accepted. The following connection using the mysql client to a flexible server configured with require_secure_transport=OFF also works, as shown below.
```bash mysql.exe -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=REQUIRED
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show global variables like '%require_secure_transport%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| require_secure_transport | OFF   |
+--------------------------+-------+
1 row in set (0.02 sec)
```
In summary, the require_secure_transport=OFF setting relaxes the enforcement of encrypted connections on the flexible server and allows unencrypted connections from clients in addition to encrypted connections.
-## Enforce SSL with TLS version
+## Enforce SSL with TLS version
-To set TLS versions on your flexible server, you'll need to set *tls_version* server parameter. The default setting for TLS protocol is TLSv1.2. If your application supports connections to MySQL server with SSL, but require any protocol other than TLSv1.2, you'll require to set the TLS versions in [server parameter](./how-to-configure-server-parameters-portal.md). *tls_version* is a **static server parameter** which will require a server restart for the parameter to take effect. Following are the Supported protocols for the available versions of Azure Database for MySQL ΓÇô Flexible Server
+To set TLS versions on your flexible server, you need to set the *tls_version* server parameter. The default setting for the TLS protocol is TLS 1.2. If your application supports connections to the MySQL server with SSL but requires a protocol other than TLS 1.2, you need to set the TLS versions in the [server parameter](./how-to-configure-server-parameters-portal.md). *tls_version* is a **static server parameter** that requires a server restart for the parameter to take effect. The following are the supported protocols for the available versions of Azure Database for MySQL – Flexible Server:
-| Flexible Server version | Supported Values of tls_version | Default Setting |
-||--||
-|MySQL 5.7 |TLSv1, TLSv1.1, TLSv1.2 | TLSv1.2|
-|MySQL 8.0 | TLSv1.2, TLSv1.3 | TLSv1.2|
+| Flexible Server version | Supported Values of tls_version | Default Setting |
+| | | |
+| MySQL 5.7 | TLS 1.0, TLS 1.1, TLS 1.2 | TLS 1.2 |
+| MySQL 8.0 | TLS 1.2, TLS 1.3 | TLS 1.2 |
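Because tls_version is a static parameter, restart the server after changing it. A sketch with the Azure CLI (server and resource group names are placeholders, and the value format should match the allowed values listed for the tls_version parameter on your server):

```azurecli-interactive
# Set the TLS version, then restart so the static parameter takes effect.
az mysql flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name tls_version --value TLSv1.2
az mysql flexible-server restart --resource-group myresourcegroup --name mydemoserver
```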
## Connect using mysql command-line client with TLS/SSL ### Download the public SSL certificate
-To use encrypted connections with your client applications,you'll need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) which is also available in Azure portal Networking blade as shown in the screenshot below.
+To use encrypted connections with your client applications, you need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), which is also available in the Azure portal Networking pane, as shown in the screenshot below.
-> :::image type="content" source="./media/how-to-connect-tls-ssl/download-ssl.png" alt-text="Screenshot showing how to download public SSL certificate from Azure portal.":::
-Save the certificate file to your preferred location. For example, this tutorial uses `c:\ssl` or `\var\www\html\bin` on your local environment or the client environment where your application is hosted. This will allow applications to connect securely to the database over SSL.
+Save the certificate file to your preferred location. For example, this tutorial uses `c:\ssl` or `\var\www\html\bin` on your local environment or the client environment where your application is hosted. This allows applications to connect securely to the database over SSL.
-If you created your flexible server with *Private access (VNet Integration)*, you'll need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your flexible server.
+If you created your flexible server with *Private access (VNet Integration)*, you need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your flexible server.
If you created your flexible server with *Public access (allowed IP addresses)*, you can add your local IP address to the list of firewall rules on your server.
wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA
mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUIRED --ssl-ca=DigiCertGlobalRootCA.crt.pem ```
-> [!Note]
+> [!NOTE]
> Confirm that the value passed to `--ssl-ca` matches the file path for the certificate you saved.
->If you are connecting to the Azure Database for MySQL- Flexible with SSL and are using an option to perform full verification (sslmode=VERTIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
--
+> If you are connecting to Azure Database for MySQL - Flexible Server with SSL and are using an option to perform full verification (ssl-mode=VERIFY_IDENTITY) with the certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
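For example, a connection that performs full verification might look like the following sketch, reusing the placeholder server name from the earlier examples:

```bash
mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=VERIFY_IDENTITY --ssl-ca=DigiCertGlobalRootCA.crt.pem
```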
If you try to connect to your server with unencrypted connections, you'll see an error stating that connections using insecure transport are prohibited, similar to the one below:
You can run the command SHOW GLOBAL VARIABLES LIKE 'tls_version'; and check the
```sql mysql> SHOW GLOBAL VARIABLES LIKE 'tls_version'; ```
-**How to find which TLS protocol are being used by my clients to connect to the server ?**
-You can run the below command and look at tls_version for the session to identify which TLS version is used to connect
+**How do I find which TLS protocol is being used by my clients to connect to the server?**
+
+You can run the following command and look at tls_version for the session to identify which TLS version is used to connect.
+ ```sql SELECT sbt.variable_value AS tls_version, t2.variable_value AS cipher, processlist_user AS user, processlist_host AS host
To establish an encrypted connection to your flexible server over TLS/SSL from y
Download [SSL public certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and add the following lines in wp-config.php after the line ```// **MySQL settings - You can get this info from your web host** //```. ```php
-//** Connect with SSL** //
+//** Connect with SSL ** //
define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL); //** SSL CERT **// define('MYSQL_SSL_CERT','/FULLPATH/on-client/to/DigiCertGlobalRootCA.crt.pem');
conn.connect(function(err) {
## Next steps
-* [Use MySQL Workbench to connect and query data in Azure Database for MySQL Flexible Server](./connect-workbench.md)
-* [Use PHP to connect and query data in Azure Database for MySQL Flexible Server](./connect-php.md)
-* [Create and manage Azure Database for MySQL Flexible Server virtual network using Azure CLI](./how-to-manage-virtual-network-cli.md).
-* Learn more about [networking in Azure Database for MySQL Flexible Server](./concepts-networking.md)
-* Understand more about [Azure Database for MySQL Flexible Server firewall rules](./concepts-networking-public.md#public-access-allowed-ip-addresses)
+- [Use MySQL Workbench to connect and query data in Azure Database for MySQL Flexible Server](./connect-workbench.md)
+- [Use PHP to connect and query data in Azure Database for MySQL Flexible Server](./connect-php.md)
+- [Create and manage Azure Database for MySQL Flexible Server virtual network using Azure CLI](./how-to-manage-virtual-network-cli.md).
+- Learn more about [networking in Azure Database for MySQL Flexible Server](./concepts-networking.md)
+- Understand more about [Azure Database for MySQL Flexible Server firewall rules](./concepts-networking-public.md#public-access-allowed-ip-addresses)
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-cli.md
description: Learn how to set up and manage data encryption for your Azure Datab
Previously updated : 09/15/2022 Last updated : 11/21/2022
This tutorial shows you how to set up and manage data encryption for your Azure Database for MySQL - Flexible Server using Azure CLI preview.
-In this tutorial you'll learn how to:
+In this tutorial, you learn how to:
- Create a MySQL flexible server with data encryption - Update an existing MySQL flexible server with data encryption
In this tutorial you'll learn how to:
- An Azure account with an active subscription. - If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free) before you begin.
-
- > [!Note]
+
+ > [!NOTE]
> With an Azure free account, you can now try Azure Database for MySQL - Flexible Server for free for 12 months. For more information, see [Try Flexible Server for free](how-to-deploy-on-azure-free-account.md). - Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli). -- Log in to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the ID property, which refers to Subscription ID for your Azure account:
+- Sign in to your Azure account using the [az login](/cli/azure/reference-index#az-login) command. Note the ID property, which refers to the Subscription ID for your Azure account:
```azurecli-interactive az login
az mysql flexible-server update --resource-group testGroup --name testserver --d
## Create flexible server with geo redundant backup and data encryption enabled
-```azurecli-interactive
+```azurecli-interactive
az mysql flexible-server create -g testGroup -n testServer --location testLocation \\ --geo-redundant-backup Enabled \\ --key <key identifier of testKey> --identity testIdentity \\
The params **identityUri** and **primaryKeyUri** are the resource ID of the user
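As an illustration (the vault name testVault is a placeholder; the group, identity, and key names follow the examples above), these values can be looked up with the Azure CLI:

```azurecli-interactive
# Resource ID of the user-assigned managed identity.
az identity show --resource-group testGroup --name testIdentity --query id --output tsv

# Key identifier (URI) of the Key Vault key.
az keyvault key show --vault-name testVault --name testKey --query key.kid --output tsv
```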
- [Customer managed keys data encryption (Preview)](concepts-customer-managed-key.md) - [Data encryption with Azure portal (Preview)](how-to-data-encryption-portal.md)-
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md
description: Learn how to set up and manage data encryption for your Azure Datab
Previously updated : 09/15/2022 Last updated : 11/21/2022
This tutorial shows you how to set up and manage data encryption for your Azure
In this tutorial, you learn how to: - Set data encryption for Azure Database for MySQL flexible server.-- Configure data encryption for restore.
+- Configure data encryption for restoration.
- Configure data encryption for replica servers. ## Prerequisites - An Azure account with an active subscription. - If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free) before you begin.
-
- > [!Note]
+
+ > [!NOTE]
> With an Azure free account, you can now try Azure Database for MySQL - Flexible Server for free for 12 months. For more information, see [Try Flexible Server for free](how-to-deploy-on-azure-free-account.md).
-## Set the right permissions for key operations
+## Set the proper permissions for key operations
1. In Key Vault, select **Access policies**, and then select **Create**. :::image type="content" source="media/how-to-data-encryption-portal/1-mysql-key-vault-access-policy.jpeg" alt-text="Screenshot of Key Vault Access Policy in the Azure portal.":::
-2. On the **Permissions** tab, select the following **Key permissions - Get** , **List** , **Wrap Key** , **Unwrap Key**.
+1. On the **Permissions** tab, select the following **Key permissions - Get** , **List** , **Wrap Key** , **Unwrap Key**.
+
+1. On the **Principal** tab, select the User-assigned Managed Identity.
-3. On the **Principal** tab, select the User-assigned Managed Identity.
-
:::image type="content" source="media/how-to-data-encryption-portal/2-mysql-principal-tab.jpeg" alt-text="Screenshot of the principal tab in the Azure portal.":::
-4. Select **Create**.
+1. Select **Create**.
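The same access policy can also be granted from the command line. A sketch, assuming a vault named mykeyvault and with the identity's object ID left as a placeholder:

```azurecli-interactive
az keyvault set-policy --name mykeyvault \
  --object-id <object-id-of-the-user-assigned-managed-identity> \
  --key-permissions get list wrapKey unwrapKey
```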
## Configure customer managed key
To set up the customer managed key, perform the following steps.
:::image type="content" source="media/how-to-data-encryption-portal/3-mysql-data-encryption.jpeg" alt-text="Screenshot of the data encryption page.":::
-2. On the **Data encryption** page, under **No identity assigned** , select **Change identity** ,
+1. On the **Data encryption** page, under **No identity assigned** , select **Change identity** ,
-3. In the **Select user assigned**** managed identity **dialog box, select the** demo-umi **identity, and then select** Add**.
+1. In the **Select user assigned managed identity** dialog box, select the **demo-umi** identity, and then select **Add**.
:::image type="content" source="media/how-to-data-encryption-portal/4-mysql-assigned-managed-identity-demo-uni.jpeg" alt-text="Screenshot of selecting the demo-umi from the assigned managed identity page.":::
-4. To the right of **Key selection method** , either **Select a key** and specify a key vault and key pair, or select **Enter a key identifier**.
+1. To the right of **Key selection method** , either **Select a key** and specify a key vault and key pair, or select **Enter a key identifier**.
:::image type="content" source="media/how-to-data-encryption-portal/5-mysql-select-key.jpeg" alt-text="Screenshot of the Select Key page in the Azure portal.":::
-5. Select **Save**.
+1. Select **Save**.
-## Using Data encryption for restore
+## Use Data encryption for restore
To use data encryption as part of a restore operation, perform the following steps.
To use data encryption as part of a restore operation, perform the following ste
:::image type="content" source="media/how-to-data-encryption-portal/6-mysql-navigate-overview-page.jpeg" alt-text="Screenshot of overview page.":::
-2. Select **Change identity** and select the **User assigned managed identity** and select on **Add**
+1. Select **Change identity**, select the **User assigned managed identity**, and then select **Add**.
**To select the Key** , you can either select a **key vault** and **key pair** or enter a **key identifier** :::image type="content" source="media/how-to-data-encryption-portal/7-mysql-change-identity.jpeg" alt-text="SCreenshot of the change identity page.":::
-## Using Data encryption for replica servers
+## Use Data encryption for replica servers
-After your Azure Database for MySQL flexible server is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server will also be encrypted.
+After your Azure Database for MySQL flexible server is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted.
1. To configuration replication, under **Settings** , select **Replication** , and then select **Add replica**. :::image type="content" source="media/how-to-data-encryption-portal/8-mysql-replication.jpeg" alt-text="Screenshot of the Replication page.":::
-2. In the Add Replica server to Azure Database for MySQL dialog box, select the appropriate **Compute + storage** option, and then select **OK**.
+1. In the Add Replica server to Azure Database for MySQL dialog box, select the appropriate **Compute + storage** option, and then select **OK**.
:::image type="content" source="media/how-to-data-encryption-portal/9-mysql-compute-storage.jpeg" alt-text="Screenshot of the Compute + Storage page.":::
- > [!Important]
+ > [!IMPORTANT]
> When trying to encrypt Azure Database for MySQL flexible server with a customer managed key that already has a replica(s), we recommend configuring the replica(s) as well by adding the managed identity and key. ## Next steps
mysql How To Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-firewall-cli.md
Title: Manage firewall rules - Azure CLI - Azure Database for MySQL - Flexible Server description: Create and manage firewall rules for Azure Database for MySQL - Flexible Server using Azure CLI command line.+++ Last updated : 11/21/2022 -- Previously updated : 9/21/2020
+ms.devlang: azurecli
# Manage firewall rules for Azure Database for MySQL - Flexible Server using Azure CLI [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] -
-Azure Database for MySQL Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
+Azure Database for MySQL Flexible Server supports two mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
- Public access (allowed IP addresses) - Private access (VNet Integration)
-In this article, we will focus on creation of MySQL server with **Public access (allowed IP addresses)** using Azure CLI and will provide an overview on Azure CLI commands you can use to create, update, delete, list, and show firewall rules after creation of server. With *Public access (allowed IP addresses)*, the connections to the MySQL server are restricted to allowed IP addresses only. The client IP addresses need to be allowed in firewall rules. To learn more about it, refer to [Public access (allowed IP addresses)](./concepts-networking-public.md#public-access-allowed-ip-addresses). The firewall rules can be defined at the time of server creation (recommended) but can be added later as well.
+In this article, we focus on the creation of a MySQL server with **Public access (allowed IP addresses)** using Azure CLI. This article provides an overview of Azure CLI commands you can use to create, update, delete, list, and show firewall rules after creating a server. With *Public access (allowed IP addresses)*, the connections to the MySQL server are restricted to allowed IP addresses only. The client IP addresses need to be allowed in the firewall rules. To learn more about it, refer to [Public access (allowed IP addresses)](./concepts-networking-public.md#public-access-allowed-ip-addresses). The firewall rules can be defined at the time of server creation (recommended) but can be added later as well.
## Launch Azure Cloud Shell
If you prefer to install and use the CLI locally, this quickstart requires Azure
## Prerequisites
-You'll need to sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **ID** property, which refers to **Subscription ID** for your Azure account.
+You must sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **ID** property, which refers to **Subscription ID** for your Azure account.
```azurecli-interactive az login ```
-Select the specific subscription under your account using [az account set](/cli/azure/account#az-account-set) command. Make a note of the **ID** value from the **az login** output to use as the value for **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscription, use [az account list](/cli/azure/account#az-account-list).
+Select the specific subscription under your account using the [az account set](/cli/azure/account#az-account-set) command. Note the **ID** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
```azurecli az account set --subscription <subscription id> ```
-## Create firewall rule during flexible server create using Azure CLI
+## Create a firewall rule during flexible server creation using Azure CLI
-You can use the `az mysql flexible-server --public access` command to create the flexible server with *Public access (allowed IP addresses)* and configure the firewall rules during creation of flexible server. You can use the **--public-access** switch to provide the allowed IP addresses that will be able to connect to the server. You can provide single or range of IP addresses to be included in the allowed list of IPs. IP address range must be dash separated and does not contain any spaces. There are various options to create a flexible server using CLI as shown in the example below.
+You can use the `az mysql flexible-server --public access` command to create the flexible server with *Public access (allowed IP addresses)* and configure the firewall rules while creating the flexible server. You can use the **--public-access** switch to provide the allowed IP addresses that can connect to the server. You can provide single or range of IP addresses to be included in the allowed list of IPs. IP address range must be dash-separated and doesn't contain any spaces. There are various options to create a flexible server using CLI as shown in the example below.
-Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-server) for the complete list of configurable CLI parameters. For example, in the below commands you can optionally specify the resource group.
+Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-server) for the complete list of configurable CLI parameters. For example, you can optionally specify the resource group in the below commands.
-- Create a flexible server with public access and add client IP address to have access to the server
+- Create a flexible server with public access and add the client IP address to have access to the server
```azurecli-interactive az mysql flexible-server create --public-access <my_client_ip>
Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-serve
az mysql flexible-server create --public-access 0.0.0.0 ```
- > [!IMPORTANT]
- > This option configures the firewall to allow public access from Azure services and resources within Azure to this server including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+ > [!IMPORTANT]
+ > This option configures the firewall to allow public access from Azure services and resources within Azure to this server, including connections from the subscriptions of other customers. When selecting this option, ensure your login and user permissions limit access to only authorized users.
- Create a flexible server with public access and allow all IP address
Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-serve
az mysql flexible-server create --public-access all ```
- > [!Note]
- > The above command will create a firewall rule with start IP address=0.0.0.0, end IP address=255.255.255.255 and no IP addresses will be blocked. Any host on the Internet can access this server. It is strongly recommended to use this rule only temporarily and only on test servers that do not contain sensitive data.
+ > [!NOTE]
+ > The above command creates a firewall rule with start IP address=0.0.0.0, end IP address=255.255.255.255 and no IP addresses are blocked. Any host on the Internet can access this server. It is strongly recommended to use this rule only temporarily and only on test servers that do not contain sensitive data.
- Create a flexible server with public access and with no IP address
Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-serve
az mysql flexible-server create --public-access none ```
- >[!Note]
- > we do not recommend to create a server without any firewall rules. If you do not add any firewall rules then no client will be able to connect to the server.
+ > [!NOTE]
+> We don't recommend creating a server without any firewall rules. If you don't add any firewall rules, no client can connect to the server.
## Create and manage firewall rule after server create
The **az mysql flexible-server firewall-rule** command is used from the Azure CL
Commands: -- **create**: Create an flexible server firewall rule.
+- **create**: Create a flexible server firewall rule.
- **list**: List the flexible server firewall rules.-- **update**: Update an flexible server firewall rule.-- **show**: Show the details of an flexible server firewall rule.-- **delete**: Delete an flexible server firewall rule.
+- **update**: Update a flexible server firewall rule.
+- **show**: Show the details of a flexible server firewall rule.
+- **delete**: Delete a flexible server firewall rule.
-Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-server) for the complete list of configurable CLI parameters. For example, in the below commands you can optionally specify the resource group.
+Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-server) for the complete list of configurable CLI parameters. For example, in the below commands, you can optionally specify the resource group.
### Create a firewall rule Use the `az mysql flexible-server firewall-rule create` command to create new firewall rule on the server.
-To allow access to a range of IP addresses, provide the IP address as the Start IP address and End IP address, as in this example.
+To allow access to a range of IP addresses, provide the IP address as the Start and End IP addresses, as in this example.
```azurecli-interactive az mysql flexible-server firewall-rule create --name mydemoserver --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.15 ```
-To allow access for a single IP address, just provide single IP address, as in this example.
+To allow access for a single IP address, provide the single IP address, as in this example.
```azurecli-interactive az mysql flexible-server firewall-rule create --name mydemoserver --start-ip-address 1.1.1.1
To allow applications from Azure IP addresses to connect to your flexible server
az mysql flexible-server firewall-rule create --name mydemoserver --start-ip-address 0.0.0.0 ```
-> [!IMPORTANT]
-> This option configures the firewall to allow public access from Azure services and resources within Azure to this server including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+> [!IMPORTANT]
+> This option configures the firewall to allow public access from Azure services and resources within Azure to this server, including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
-Upon success, each create command output lists the details of the firewall rule you have created, in JSON format (by default). If there is a failure, the output shows error message text instead.
+Upon success, each create command output lists the details of the firewall rule you've created in JSON format (by default). If there's a failure, the output shows an error message instead.
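If you prefer tabular output, you can append the global `--output table` option used elsewhere in this article; for example:

```azurecli-interactive
az mysql flexible-server firewall-rule create --name mydemoserver --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.15 --output table
```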
### List firewall rules
az mysql flexible-server firewall-rule list --name mydemoserver --output table
### Update a firewall rule
-Use the `az mysql flexible-server firewall-rule update` command to update an existing firewall rule on the server. Provide the name of the existing firewall rule as input, as well as the start IP address and end IP address attributes to update.
+Use the `az mysql flexible-server firewall-rule update` command to update an existing firewall rule on the server. Provide the name of the existing firewall rule as input, along with the start and end IP address attributes to update.
```azurecli-interactive az mysql flexible-server firewall-rule update --name mydemoserver --rule-name FirewallRule1 --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.1 ```
-Upon success, the command output lists the details of the firewall rule you have updated, in JSON format (by default). If there is a failure, the output shows error message text instead.
+Upon success, the command output lists the details of the firewall rule you've updated in JSON format (by default). If there's a failure, the output shows an error message instead.
-> [!NOTE]
-> If the firewall rule does not exist, the rule is created by the update command.
+> [!NOTE]
+> If the firewall rule does not exist, the update command creates the rule.
### Show firewall rule details
Use the `az mysql flexible-server firewall-rule show` command to show the existi
az mysql flexible-server firewall-rule show --name mydemoserver --rule-name FirewallRule1 ```
-Upon success, the command output lists the details of the firewall rule you have specified, in JSON format (by default). If there is a failure, the output shows error message text instead.
+Upon success, the command output lists the details of the firewall rule you've specified in JSON format (by default). If there's a failure, the output shows an error message instead.
### Delete a firewall rule
-Use the `az mysql flexible-server firewall-rule delete` command to delete an existing firewall rule from the server. Provide the name of the existing firewall rule.
+Use the `az mysql flexible-server firewall-rule delete` command to delete an existing firewall rule from the server. Provide the name of the existing firewall rule.
```azurecli-interactive az mysql flexible-server firewall-rule delete --name mydemoserver --rule-name FirewallRule1 ```
-Upon success, there is no output. Upon failure, error message text displays.
+Upon success, there's no output. Upon failure, an error message is displayed.
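To confirm that the rule was removed, you can list the remaining firewall rules; for example:

```azurecli-interactive
az mysql flexible-server firewall-rule list --name mydemoserver --output table
```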
## Next steps
mysql How To Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-firewall-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for MySQL - Flexible Server description: Create and manage firewall rules for Azure Database for MySQL - Flexible Server using the Azure portal+++ Last updated : 11/21/2022 -- Previously updated : 9/21/2020 # Manage firewall rules for Azure Database for MySQL - Flexible Server using the Azure portal [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
+This article provides an overview of managing firewall rules after creating a flexible server. With *Public access (allowed IP addresses)*, the connections to the MySQL server are restricted to allowed IP addresses only. The client IP addresses need to be allowed in firewall rules.
-Azure Database for MySQL Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
+This article focuses on creating a MySQL server with **Public access (allowed IP addresses)** using the Azure portal.
-1. Public access (allowed IP addresses)
-2. Private access (VNet Integration)
+To learn more about it, refer to [Public access (allowed IP addresses)](./concepts-networking-public.md#public-access-allowed-ip-addresses). The firewall rules can be defined at the time of server creation (recommended) but can be added later.
-In this article, we will focus on creation of MySQL server with **Public access (allowed IP addresses)** using Azure portal and will provide an overview of managing firewall rules after creation of flexible server. With *Public access (allowed IP addresses)*, the connections to the MySQL server are restricted to allowed IP addresses only. The client IP addresses need to be allowed in firewall rules. To learn more about it, refer to [Public access (allowed IP addresses)](./concepts-networking-public.md#public-access-allowed-ip-addresses). The firewall rules can be defined at the time of server creation (recommended) but can be added later as well. In this article, we will provide an overview on how to create and manage firewall rules using public access (allowed IP addresses).
+Azure Database for MySQL Flexible Server supports two mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
+
+1. Public access (allowed IP addresses)
+1. Private access (VNet Integration)
## Create a firewall rule when creating a server 1. Select **Create a resource** (+) in the upper-left corner of the portal.
-2. Select **Databases** > **Azure Database for MySQL**. You can also enter **MySQL** in the search box to find the service.
-3. Select **Flexible server** as the deployment option.
-4. Fill out the **Basics** form.
-5. Go to the **Networking** tab to configure how you want to connect to your server.
-6. In the **Connectivity method**, select *Public access (allowed IP addresses)*. To create the **Firewall rules**, specify the Firewall rule name and single IP address, or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP address and End IP address. Opening the firewall enables administrators, users, and applications to access any database on the MySQL server to which they have valid credentials.
- > [!Note]
- > Azure Database for MySQL Flexible Server creates a firewall at the server level. It prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for specific IP addresses.
+1. Select **Databases** > **Azure Database for MySQL**. You can also enter **MySQL** in the search box to find the service.
+1. Select **Flexible server** as the deployment option.
+1. Fill out the **Basics** form.
+1. Go to the **Networking** tab to configure how you want to connect to your server.
+1. In the **Connectivity method**, select *Public access (allowed IP addresses)*. To create the **Firewall rules**, specify the Firewall rule name and a single IP address or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for the Start IP address and End IP address. Opening the firewall enables administrators, users, and applications to access any database on the MySQL server to which they have valid credentials.
-7. Select **Review + create** to review your flexible server configuration.
-8. Select **Create** to provision the server. Provisioning can take a few minutes.
+ > [!NOTE]
+ > Azure Database for MySQL Flexible Server creates a firewall at the server level. It prevents external applications and tools from connecting to the server and any databases on the server unless you create a rule to open the firewall for specific IP addresses.
-## Create a firewall rule after server is created
+1. Select **Review + create** to review your flexible server configuration.
+1. Select **Create** to provision the server. Provisioning can take a few minutes.
+
+## Create a firewall rule after the server is created
1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for MySQL Flexible Server on which you want to add firewall rules.
-2. On the flexible server page, under **Settings** heading, click **Networking** to open the Networking page for flexible server.
- <!--:::image type="content" source="./media/howto-manage-firewall-portal/1-connection-security.png" alt-text="Azure portal - click Connection Security":::-->
+1. On the flexible server page, under the **Settings** heading, select **Networking** to open the Networking page for the flexible server.
+
+ :::image type="content" source="./media/how-to-manage-firewall-portal/1-connection-security.png" alt-text="Azure portal - select Connection Security":::
-3. Click **Add current client IP address** in the firewall rules. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
+1. Select **Add current client IP address** in the firewall rules. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
- <!--:::image type="content" source="./media/howto-manage-firewall-portal/2-add-my-ip.png" alt-text="Azure portal - click Add My IP":::-->
+ :::image type="content" source="./media/how-to-manage-firewall-portal/2-add-my-ip.png" alt-text="Azure portal - select Add My IP":::
-4. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Therefore, you may need to change the Start IP address and End IP address to make the rule function as expected.
+1. Verify your IP address before saving the configuration. In some situations, the IP address observed by the Azure portal differs from the IP address used when accessing the internet and Azure servers. Therefore, you may need to change the Start and End IP addresses to make the rule function as expected.
You can use a search engine or other online tool to check your own IP address. For example, search for "what is my IP."
- <!--:::image type="content" source="./media/howto-manage-firewall-portal/3-what-is-my-ip.png" alt-text="Bing search for What is my IP":::-->
+ :::image type="content" source="./media/how-to-manage-firewall-portal/3-what-is-my-ip.png" alt-text="Bing search for What is my IP":::
-5. Add additional address ranges. In the firewall rules for the Azure Database for MySQL Flexible Server, you can specify a single IP address, or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP address and End IP address. Opening the firewall enables administrators, users, and applications to access any database on the MySQL server to which they have valid credentials.
+1. Add more address ranges. In the firewall rules for the Azure Database for MySQL Flexible Server, you can specify a single IP address or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for the Start IP address and End IP address. Opening the firewall enables administrators, users, and applications to access any database on the MySQL server to which they have valid credentials.
- <!--:::image type="content" source="./media/howto-manage-firewall-portal/4-specify-addresses.png" alt-text="Azure portal - firewall rules":::-->
+ :::image type="content" source="./media/how-to-manage-firewall-portal/4-specify-addresses.png" alt-text="Azure portal - firewall rules":::
-6. Click **Save** on the toolbar to save this firewall rule. Wait for the confirmation that the update to the firewall rules was successful.
+1. Select **Save** on the toolbar to save this firewall rule. Wait for the confirmation that the update to the firewall rules was successful.
- <!--:::image type="content" source="./media/howto-manage-firewall-portal/5-save-firewall-rule.png" alt-text="Azure portal - click Save":::-->
+ :::image type="content" source="./media/how-to-manage-firewall-portal/5-save-firewall-rule.png" alt-text="Azure portal - select Save":::
## Connect from Azure
-You may want to enable resources or applications deployed in Azure to connect to your flexible server. This includes web applications hosted in Azure App Service, running on an Azure VM, an Azure Data Factory data management gateway and many more.
+You can enable resources or applications deployed in Azure to connect to your flexible server. These include web applications hosted in Azure App Service, applications running on an Azure VM, an Azure Data Factory data management gateway, and many more.
-When an application within Azure attempts to connect to your server, the firewall verifies that Azure connections are allowed. You can enable this setting by selecting the **Allow public access from Azure services and resources within Azure to this server** option in the portal from the **Networking** tab and hit **Save**.
+When an application within Azure attempts to connect to your server, the firewall verifies that Azure connections are allowed. You can enable this setting by selecting the **Allow public access from Azure services and resources within Azure to this server** option in the portal from the **Networking** tab and selecting **Save**.
-The resources do not need to be in the same virtual network (VNet) or resource group for the firewall rule to enable those connections. If the connection attempt is not allowed, the request does not reach the Azure Database for MySQL Flexible Server.
+The resources don't need to be in the same virtual network (VNet) or resource group for the firewall rule to enable those connections. The request doesn't reach the Azure Database for MySQL Flexible Server if the connection attempt isn't allowed.
-> [!IMPORTANT]
->This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
+> [!IMPORTANT]
+> This option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+>
> We recommend choosing the **Private access (VNet Integration)** to securely access flexible server.
->
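If you prefer to script this setting, the same effect can be achieved with the Azure CLI by creating a firewall rule whose start IP address is 0.0.0.0, as shown earlier in this document; a minimal sketch, assuming a server named mydemoserver:

```azurecli-interactive
az mysql flexible-server firewall-rule create --name mydemoserver --start-ip-address 0.0.0.0
```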
## Manage existing firewall rules through the Azure portal Repeat the following steps to manage the firewall rules. -- To add the current computer, click + **Add current client IP address** in the firewall rules. Click **Save** to save the changes.-- To add additional IP addresses, type in the Rule Name, Start IP Address, and End IP Address. Click **Save** to save the changes.-- To modify an existing rule, click any of the fields in the rule and modify. Click **Save** to save the changes.-- To delete an existing rule, click the ellipsis […] and click **Delete** to remove the rule. Click **Save** to save the changes.
+- To add the current computer, select + **Add current client IP address** in the firewall rules. Select **Save** to save the changes.
+- To add more IP addresses, type in the Rule Name, Start IP Address, and End IP Address. Select **Save** to save the changes.
+- To modify an existing rule, select any of the fields in the rule and modify them. Select **Save** to save the changes.
+- To delete an existing rule, select the ellipsis […] and select **Delete** to remove the rule. Select **Save** to save the changes.
## Next steps
mysql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-connect-server-vnet.md
Title: 'Connect to Azure Database for MySQL flexible server with private access in the Azure portal'
+ Title: "Connect to Azure Database for MySQL flexible server with private access in the Azure portal"
description: This article walks you through using the Azure portal to create and connect to an Azure Database for MySQL flexible server in private access.+++ Last updated : 11/21/2022 --- Previously updated : 04/18/2021+
+ - mvc
+ - mode-ui
# Connect Azure Database for MySQL Flexible Server with private access connectivity method
-Azure Database for MySQL Flexible Server is a managed service that you can use to run, manage, and scale highly available MySQL servers in the cloud. This quickstart shows you how to create a flexible server in a virtual network by using the Azure portal.
--
+Azure Database for MySQL Flexible Server is a managed service that runs, manages, and scales highly available MySQL servers in the cloud. This quickstart shows you how to create a flexible server in a virtual network by using the Azure portal.
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)] - ## Sign in to the Azure portal+ Go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard. ## Create an Azure Database for MySQL flexible server
Complete these steps to create a flexible server:
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/search-flexible-server-in-portal.png" alt-text="Screenshot that shows a search for Azure Database for MySQL servers." lightbox="./media/quickstart-create-connect-server-vnet/search-flexible-server-in-portal.png":::
-2. Select **Add**.
+1. Select **Add**.
-3. On the **Select Azure Database for MySQL deployment option** page, select **Flexible server** as the deployment option:
+1. On the **Select Azure Database for MySQL deployment option** page, select **Flexible server** as the deployment option:
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/deployment-option.png" alt-text="Screenshot that shows the Flexible server option." lightbox="./media/quickstart-create-connect-server-vnet/deployment-option.png":::
-4. On the **Basics** tab, enter the **subscription**, **resource group** , **region**, **administrator username** and **administrator password**. With the default values, this will provision a MySQL server of version 5.7 with Burstable Sku using 1 vCore, 2GiB Memory and 32GiB storage. The backup retention is 7 days. You can change the configuration.
+1. On the **Basics** tab, enter the **subscription**, **resource group**, **region**, **administrator username**, and **administrator password**. With the default values, this provisions a MySQL server of version 5.7 with the Burstable SKU using 1 vCore, 2 GiB memory, and 32 GiB storage. The backup retention is seven days. You can change the configuration.
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/mysql-flexible-server-create-portal.png" alt-text="Screenshot that shows the Basics tab of the Flexible server page." lightbox="./media/quickstart-create-connect-server-vnet/mysql-flexible-server-create-portal.png":::
- > [!TIP]
- > For faster data loads during migration, it is recommended to increase the IOPS to the maximum size supported by compute size and later scale it back to save cost.
+ > [!TIP]
+ > For faster data loads during migration, we recommend increasing the IOPS to the maximum supported by your compute size and scaling it back later to save cost.
-5. Go to the **Networking** tab, select **private access**.You can't change the connectivity method after you create the server. Select **Create virtual network** to create new virtual network **vnetenvironment1**.
+1. Go to the **Networking** tab and select **Private access**. You can't change the connectivity method after you create the server. Select **Create virtual network** to create a new virtual network, **vnetenvironment1**.
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/create-new-vnet-for-mysql-server.png" alt-text="Screenshot that shows the Networking tab with new VNET." lightbox="./media/quickstart-create-connect-server-vnet/create-new-vnet-for-mysql-server.png":::
-6. Select **OK** once you have provided the virtual network name and subnet information.
+1. Select **OK** once you've provided the virtual network name and subnet information.
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/show-server-vnet-information.png" alt-text="review VNET information":::
-7. Select **Review + create** to review your flexible server configuration.
+1. Select **Review + create** to review your flexible server configuration.
-8. Select **Create** to provision the server. Provisioning can take a few minutes.
+1. Select **Create** to provision the server. Provisioning can take a few minutes.
-9. Wait until the deployment is complete and successful.
+1. Wait until the deployment is complete and successful.
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/deployment-success.png" alt-text="Screenshot that shows the Networking settings with new VNET." lightbox="./media/quickstart-create-connect-server-vnet/deployment-success.png":::
-9. Select **Go to resource** to view the server's **Overview** page opens.
+1. Select **Go to resource** to open the server's **Overview** page.
## Create Azure Linux virtual machine
-Since the server is in virtual network, you can only connect to the server from other Azure services in the same virtual network as the server. To connect and manage the server, let's create a Linux virtual machine. The virtual machine must be created in the **same region** and **same subscription**. The Linux virtual machine can be used as SSH tunnel to manage your database server.
+Since the server is in a virtual network, you can only connect to the server from other Azure services in the same virtual network as the server. To connect and manage the server, let's create a Linux virtual machine. The virtual machine must be created in the **same region** and **same subscription**. The Linux virtual machine can be used as an SSH tunnel to manage your database server.
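If you prefer the Azure CLI, a comparable Ubuntu VM on the same virtual network can be created with `az vm create`. The following is only a sketch; the image alias and the subnet name `vmsubnet` are assumptions, and you'd substitute the subnet you actually create for the VM.

```azurecli-interactive
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys \
  --vnet-name vnetenvironment1 \
  --subnet vmsubnet
```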
-1. Go to you resource group in which the server was created. Select **Add**.
-2. Select **Ubuntu Server 18.04 LTS**
-3. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myResourceGroup* for the name.
+1. Go to the resource group in which the server was created. Select **Add**.
+1. Select **Ubuntu Server 18.04 LTS**.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myResourceGroup* for the name.
> :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/project-details.png" alt-text="Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine" lightbox="../../virtual-machines/linux/media/quick-create-portal/project-details.png":::
-2. Under **Instance details**, type *myVM* for the **Virtual machine name**, choose the same **Region** as your database server.
+1. Under **Instance details**, type *myVM* for the **Virtual machine name**, choose the same **Region** as your database server.
> :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size]" lightbox="../../virtual-machines/linux/media/quick-create-portal/instance-details.png":::
-3. Under **Administrator account**, select **SSH public key**.
+1. Under **Administrator account**, select **SSH public key**.
-4. In **Username** type *azureuser*.
+1. In **Username** type *azureuser*.
-5. For **SSH public key source**, leave the default of **Generate new key pair**, and then type *myKey* for the **Key pair name**.
+1. For **SSH public key source**, leave the default of **Generate new key pair**, and then type *myKey* for the **Key pair name**.
> :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/administrator-account.png" alt-text="Screenshot of the Administrator account section where you select an authentication type and provide the administrator credentials" lightbox="../../virtual-machines/linux/media/quick-create-portal/administrator-account.png":::
-6. Under **Inbound port rules** > **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** and **HTTP (80)** from the drop-down.
+1. Under **Inbound port rules** > **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** and **HTTP (80)** from the drop-down.
> :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/inbound-port-rules.png" alt-text="Screenshot of the inbound port rules section where you select what ports inbound connections are allowed on" lightbox="../../virtual-machines/linux/media/quick-create-portal/inbound-port-rules.png":::
-7. Select **Networking** page to configure the virtual network. For virtual network, choose the **vnetenvironment1** created for the database server.
+1. Select the **Networking** page to configure the virtual network. For the virtual network, choose **vnetenvironment1**, which you created for the database server.
- > :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-vnet-configuration.png" alt-text="Screenshot of select existing virtual network of the database server" lightbox="./media/quickstart-create-connect-server-vnet/vm-vnet-configuration.png":::
+ > :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-vnet-configuration.png" alt-text="Screenshot of the select existing virtual network of the database server" lightbox="./media/quickstart-create-connect-server-vnet/vm-vnet-configuration.png":::
-8. Select **Manage subnet configuration** to create a new subnet for the server.
+1. Select **Manage subnet configuration** to create a new subnet for the server.
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-manage-subnet-integration.png" alt-text="Screenshot of manage subnet" lightbox="./media/quickstart-create-connect-server-vnet/vm-manage-subnet-integration.png":::
-9. Add new subnet for the virtual machine.
+1. Add a new subnet for the virtual machine.
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-add-new-subnet.png" alt-text="Screenshot of adding a new subnet for virtual machine" lightbox="./media/quickstart-create-connect-server-vnet/vm-add-new-subnet.png":::
-10. After the subnet has been created successfully , close the page.
+1. After the subnet has been created successfully, close the page.
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/subnetcreate-success.png" alt-text="Screenshot of success with adding a new subnet for virtual machine" lightbox="./media/quickstart-create-connect-server-vnet/subnetcreate-success.png":::
-11. Select **Review + Create**.
-12. Select **Create**. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be download as **myKey.pem**.
+1. Select **Review + Create**.
+1. Select **Create**. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file is downloaded as **myKey.pem**.
- >[!IMPORTANT]
- > Make sure you know where the `.pem` file was downloaded, you will need the path to it in the next step.
+ > [!IMPORTANT]
+ > Make sure you know where the `.pem` file was downloaded; you'll need the path to it in the next step.
-13. When the deployment is finished, select **Go to resource**.
+1. When the deployment is finished, select **Go to resource**.
> :::image type="content" source="./media/quickstart-create-connect-server-vnet/vm-create-success.png" alt-text="Screenshot of deployment success" lightbox="./media/quickstart-create-connect-server-vnet/vm-create-success.png":::
-11. On the page for your new VM, select the public IP address and copy it to your clipboard.
+1. On the page for your new VM, select the public IP address and copy it to your clipboard.
> :::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/ip-address.png" alt-text="Screenshot showing how to copy the IP address for the virtual machine" lightbox="../../virtual-machines/linux/media/quick-create-portal/ip-address.png"::: ## Install MySQL client tools
Create an SSH connection with the VM using Bash or PowerShell. At your prompt, o
ssh -i .\Downloads\myKey1.pem azureuser@10.111.12.123 ```
-> [!TIP]
-> The SSH key you created can be used the next time your create a VM in Azure. Just select the **Use a key stored in Azure** for **SSH public key source** the next time you create a VM. You already have the private key on your computer, so you won't need to download anything.
+> [!TIP]
+> The SSH key you created can be used the next time you create a VM in Azure. Just select **Use a key stored in Azure** for **SSH public key source**. You already have the private key on your computer, so you won't need to download anything.
-You need to install mysql-client tool to be able to connect to the server.
+You need to install the mysql-client tool to connect to the server.
```bash sudo apt-get update sudo apt-get install mysql-client ```
-Connections to the database are enforced with SSL, hence you need to download the public SSL certificate.
+Connections to the database enforce SSL, so you need to download the public SSL certificate.
```bash wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem ``` ## Connect to the server from Azure Linux virtual machine
-With [mysql.exe](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) client tool installed, we can now connect to the server from your local environment.
+
+With the [mysql](https://dev.mysql.com/doc/refman/8.0/en/mysql.html) client tool installed, you can now connect to the server from the Linux virtual machine.
```bash mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUIRED --ssl-ca=DigiCertGlobalRootCA.crt.pem ``` ## Clean up resources
-You have now created an Azure Database for MySQL flexible server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the MySQL server. To delete the resource group, complete these steps:
-1. In the Azure portal, search for and select **Resource groups**.
+You've created an Azure Database for MySQL flexible server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group or the MySQL server. To delete the resource group, complete these steps:
+
+1. Search for and select **Resource groups** in the Azure portal.
1. In the list of resource groups, select the name of your resource group. 1. In the **Overview** page for your resource group, select **Delete resource group**.
-1. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+1. In the confirmation dialog box, type the name of your resource group and then select **Delete**.
## Next steps > [!div class="nextstepaction"]
-> [Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
+> [Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
mysql How To Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md
You can migrate an instance of Azure Database for MySQL ΓÇô Single Server to Azu
> [!NOTE] > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-Data-in replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as ΓÇ£eventsΓÇ¥ to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and to execute the events in the binary log on the replica's local database.
+Data-in replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as "events" to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and to execute the events in the binary log on the replica's local database.
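As a quick check before configuring replication, you can confirm that the source server is writing a binary log; the following is a minimal sketch using the mysql client, where the host and user names are placeholders.

```bash
# log_bin should report ON if binary logging is enabled on the source.
mysql -h <primary_server>.mysql.database.azure.com -u <username>@<primary_server> -p \
  -e "SHOW VARIABLES LIKE 'log_bin';"
```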
-If you set up Data-in replication to synchronize data from one instance of Azure Database for MySQL to another, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
+If you set up Data-in replication to synchronize data from one instance of Azure Database for MySQL to another, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
In this tutorial, you'll use mydumper/myloader and Data-in replication to migrate a sample database ([classicmodels](https://www.mysqltutorial.org/mysql-sample-database.aspx)) from an instance of Azure Database for MySQL - Single Server to an instance of Azure Database for MySQL - Flexible Server, and then synchronize data. In this tutorial, you learn how to:
-* Configure Network Settings for Data-in replication for different scenarios.
-* Configure Data-in replication between the primary and replica.
-* Test the replication.
-* Cutover to complete the migration.
+- Configure Network Settings for Data-in replication for different scenarios.
+- Configure Data-in replication between the primary and replica.
+- Test the replication.
+- Cutover to complete the migration.
## Prerequisites To complete this tutorial, you need:
-* An instance of Azure Database for MySQL Single Server running version 5.7 or 8.0.
+- An instance of Azure Database for MySQL Single Server running version 5.7 or 8.0.
+ > [!Note] > If you're running Azure Database for MySQL Single Server version 5.6, upgrade your instance to 5.7 and then configure data in replication. To learn more, see [Major version upgrade in Azure Database for MySQL - Single Server](../single-server/how-to-major-version-upgrade.md).
-* An instance of Azure Database for MySQL Flexible Server. For more information, see the article [Create an instance in Azure Database for MySQL Flexible Server](../flexible-server/quickstart-create-server-portal.md).
+
+- An instance of Azure Database for MySQL Flexible Server. For more information, see the article [Create an instance in Azure Database for MySQL Flexible Server](../flexible-server/quickstart-create-server-portal.md).
+
> [!Note] > Configuring Data-in replication for zone redundant high availability servers is not supported. If you would like to have zone redundant HA for your target server, then perform these steps: >
To complete this tutorial, you need:
> > *Make sure that **[GTID_Mode](../flexible-server/concepts-read-replicas.md#global-transaction-identifier-gtid)** has the same setting on the source and target servers.*
-* To connect and create a database using MySQL Workbench. For more information, see the article [Use MySQL Workbench to connect and query data](../flexible-server/connect-workbench.md).
-* To ensure that you have an Azure VM running Linux in same region (or on the same VNet, with private access) that hosts your source and target databases.
-* To install mysql client or MySQL Workbench (the client tools) on your Azure VM. Ensure that you can connect to both the primary and replica server. For the purposes of this article, mysql client is installed.
-* To install mydumper/myloader on your Azure VM. For more information, see the article [mydumper/myloader](concepts-migrate-mydumper-myloader.md).
-* To download and run the sample database script for the [classicmodels](https://www.mysqltutorial.org/wp-content/uploads/2018/03/mysqlsampledatabase.zip) database on the source server.
-* Configure [binlog_expire_logs_seconds](../flexible-server/concepts-server-parameters.md#binlog_expire_logs_seconds) on the source server to ensure that binlogs arenΓÇÖt purged before the replica commit the changes. Post successful cut over you can reset the value.
+- To connect and create a database using MySQL Workbench. For more information, see the article [Use MySQL Workbench to connect and query data](../flexible-server/connect-workbench.md).
- To ensure that you have an Azure VM running Linux in the same region (or on the same VNet, with private access) that hosts your source and target databases.
+- To install mysql client or MySQL Workbench (the client tools) on your Azure VM. Ensure that you can connect to both the primary and replica server. For the purposes of this article, mysql client is installed.
+- To install mydumper/myloader on your Azure VM. For more information, see the article [mydumper/myloader](concepts-migrate-mydumper-myloader.md).
+- To download and run the sample database script for the [classicmodels](https://www.mysqltutorial.org/wp-content/uploads/2018/03/mysqlsampledatabase.zip) database on the source server.
- Configure [binlog_expire_logs_seconds](../flexible-server/concepts-server-parameters.md#binlog_expire_logs_seconds) on the source server to ensure that binlogs aren't purged before the replica commits the changes (see the sketch after this list). After a successful cutover, you can reset the value.
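The following is a minimal sketch of setting this parameter with the Azure CLI; the server and resource group names are placeholders, the 86400-second (24-hour) retention is only an example value, and it assumes the source server exposes the `binlog_expire_logs_seconds` parameter.

```azurecli-interactive
az mysql server configuration set --resource-group myresourcegroup --server-name mysourceserver --name binlog_expire_logs_seconds --value 86400
```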
## Configure networking requirements To configure Data-in replication, you need to ensure that the target can connect to the source over port 3306. Based on the type of endpoint set up on the source, perform the appropriate steps that follow.
-* If a public endpoint is enabled on the source, then ensure that the target can connect to the source by enabling ΓÇ£Allow access to Azure servicesΓÇ¥ in the firewall rule. To learn more, see [Firewall rules - Azure Database for MySQL](../single-server/concepts-firewall-rules.md#connecting-from-azure).
-* If a private endpoint and *[Deny public access](../single-server/concepts-data-access-security-private-link.md#deny-public-access-for-azure-database-for-mysql)* is enabled on the source, then install the private link in the same VNet that hosts the target. To learn more, see [Private Link - Azure Database for MySQL](../single-server/concepts-data-access-security-private-link.md).
+- If a public endpoint is enabled on the source, then ensure that the target can connect to the source by enabling "Allow access to Azure services" in the firewall rule (see the sketch after this list). To learn more, see [Firewall rules - Azure Database for MySQL](../single-server/concepts-firewall-rules.md#connecting-from-azure).
+- If a private endpoint and *[Deny public access](../single-server/concepts-data-access-security-private-link.md#deny-public-access-for-azure-database-for-mysql)* is enabled on the source, then install the private link in the same VNet that hosts the target. To learn more, see [Private Link - Azure Database for MySQL](../single-server/concepts-data-access-security-private-link.md).
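One common way to script the Azure services rule on a single server source is a firewall rule with 0.0.0.0 as both the start and end IP address, which the service treats as allowing access from Azure services; the server, resource group, and rule names below are placeholders.

```azurecli-interactive
az mysql server firewall-rule create --resource-group myresourcegroup --server-name mysourceserver --name AllowAllAzureIPs --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
```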
## Configure Data-in replication
To configure Data in replication, perform the following steps:
$ mydumper --host=<primary_server>.mysql.database.azure.com --user=<username>@<primary_server> --password=<Password> --outputdir=./backup --rows=100 -G -E -R -z --trx-consistency-only --compress --build-empty-files --threads=16 --compress-protocol --ssl --regex '^(classicmodels\.)' -L mydumper-logs.txt ```
- > [!Tip]
+ > [!TIP]
> The option **--trx-consistency-only** is required for transactional consistency while taking the backup. >
- > * The mydumper equivalent of mysqldumpΓÇÖs --single-transaction.
- > * Useful if all your tables are InnoDB.
- > * The ΓÇ£mainΓÇ¥ thread only needs to hold the global lock until the ΓÇ£dumpΓÇ¥ threads can start a transaction.
- > * Offers the shortest duration of global locking
+ > - The mydumper equivalent of mysqldump's --single-transaction.
+ > - Useful if all your tables are InnoDB.
+ > - The "main" thread only needs to hold the global lock until the "dump" threads can start a transaction.
+ > - Offers the shortest duration of global locking.
The "main" thread only needs to hold the global lock until the "dump" threads can start a transaction. The variables in this command are explained below:
- * **--host:** Name of the primary server
- * **--user:** Name of a user (in the format username@servername since the primary server is running Azure Database for MySQL - Single Server). You can use server admin or a user having SELECT and RELOAD permissions.
- * **--Password:** Password of the user above
+ - **--host:** Name of the primary server
+ - **--user:** Name of a user (in the format username@servername since the primary server is running Azure Database for MySQL - Single Server). You can use server admin or a user having SELECT and RELOAD permissions.
+ - **--Password:** Password of the user above
- For more information about using mydumper, see [mydumper/myloader](../single-server/concepts-migrate-mydumper-myloader.md)
+ For more information about using mydumper, see [mydumper/myloader](../single-server/concepts-migrate-mydumper-myloader.md)
6. Read the metadata file to determine the binary log file name and offset by running the following command:
To configure Data in replication, perform the following steps:
The variables in this command are explained below:
- * **--host:** Name of the replica server
- * **--user:** Name of a user. You can use server admin or a user with read\write permission capable of restoring the schemas and data to the database
- * **--Password:** Password for the user above
+ - **--host:** Name of the replica server
+ - **--user:** Name of a user. You can use server admin or a user with read\write permission capable of restoring the schemas and data to the database
+ - **--Password:** Password for the user above
8. Depending on the SSL enforcement on the primary server, connect to the replica server using the mysql client tool and perform the following steps.
- * If SSL enforcement is enabled, then:
+ - If SSL enforcement is enabled, then:
i. Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem).
To configure Data in replication, perform the following steps:
> [!Note] > Determine the position and file name from the information obtained in step 6.
- * If SSL enforcement isn't enabled, then run the following command:
+ - If SSL enforcement isn't enabled, then run the following command:
```sql CALL mysql.az_replication_change_master('<Primary_server>.mysql.database.azure.com', '<username>@<primary_server>', '<Password>', 3306, '<File_Name>', <Position>, '');
To confirm that Data-in replication is working properly, you can verify that the
1. Identify a table to use for testing, for example, the Customers table, and then confirm that the number of entries it contains is the same on the primary and replica servers by running the following command on each:
- ```
+ ```sql
select count(*) from customers; ```
To confirm that Data-in replication is working properly, you can verify that the
To ensure a successful cutover, perform the following tasks:
-1. Configure the appropriate server-level firewall and virtual network rules to connect to target Server. You can compare the firewall rules for the source and [target](../flexible-server/how-to-manage-firewall-portal.md#create-a-firewall-rule-after-server-is-created) from the portal.
-2. Configure appropriate logins and database level permissions in the target server. You can run *SELECT * FROM mysql.user;* on the source and target servers to compare.
+1. Configure the appropriate server-level firewall and virtual network rules to connect to target Server. You can compare the firewall rules for the source and [target](../flexible-server/how-to-manage-firewall-portal.md#create-a-firewall-rule-when-creating-a-server) from the portal.
+2. Configure appropriate logins and database level permissions in the target server. You can run `SELECT * FROM mysql.user;` on the source and target servers to compare.
3. Make sure that all the incoming connections to Azure Database for MySQL Single Server are stopped. > [!Tip] > You can set the Azure Database for MySQL Single Server to read only.
At this point, your applications are connected to the new Azure Database for MyS
[Create and manage Azure Database for MySQL firewall rules by using the Azure portal](../single-server/how-to-manage-firewall-using-portal.md) ## Next steps
-* Learn more about Data-in replication [Replicate data into Azure Database for MySQL Flexible Server](../flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](../flexible-server/how-to-data-in-replication.md)
-* Learn more about [troubleshooting common errors in Azure Database for MySQL](../single-server/how-to-troubleshoot-common-errors.md).
-* Learn more about [migrating MySQL to Azure Database for MySQL offline using Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md).
+- Learn more about Data-in replication [Replicate data into Azure Database for MySQL Flexible Server](../flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](../flexible-server/how-to-data-in-replication.md)
+- Learn more about [troubleshooting common errors in Azure Database for MySQL](../single-server/how-to-troubleshoot-common-errors.md).
+- Learn more about [migrating MySQL to Azure Database for MySQL offline using Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md).
mysql How To Migrate Rds Mysql Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-data-in-replication.md
Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using Data-in Replication. -++ Last updated : 11/22/2022 Previously updated : 06/20/2022 # Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication [!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)]
-> [!NOTE]
+> [!NOTE]
> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-You can use methods such as MySQL dump and restore, MySQL Workbench Export and Import, or Azure Database Migration Service to migrate your MySQL databases to Azure Database for MySQL. By using a combination of open-source tools such as mysqldump or mydumper and myloader with Data-in Replication, you can migrate your workloads with minimum downtime.
+You can use methods such as MySQL dump and restore, MySQL Workbench Export and Import, or Azure Database Migration Service to migrate your MySQL databases to Azure Database for MySQL. You can migrate your workloads with minimum downtime by using a combination of open-source tools such as mysqldump or mydumper and myloader with Data-in Replication.
Data-in Replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as *events* to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and execute the events in the binary log on the replica's local database.
-If you set up [Data-in Replication](../flexible-server/concepts-data-in-replication.md) to synchronize data from a source MySQL server to a target MySQL server, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
+Set up [Data-in Replication](../flexible-server/concepts-data-in-replication.md) to synchronize data from a source MySQL server to a target MySQL server. You can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
In this tutorial, you'll learn how to set up Data-in Replication between a source server that runs Amazon Relational Database Service (RDS) for MySQL and a target server that runs Azure Database for MySQL. ## Performance considerations
-Before you begin this tutorial, consider the performance implications of the location and capacity of the client computer you'll use to perform the operation.
+Before you begin this tutorial, consider the performance implications of the location and capacity of the client computer you'll use to perform the operation.
### Client location Perform dump or restore operations from a client computer that's launched in the same location as the database server: -- For Azure Database for MySQL servers, the client machine should be in the same virtual network and the same availability zone as the target database server.
+- For Azure Database for MySQL servers, the client machine should be in the same virtual network and availability zone as the target database server.
- For source Amazon RDS database instances, the client instance should exist in the same Amazon Virtual Private Cloud and availability zone as the source database server. In the preceding case, you can move dump files between client machines by using file transfer protocols like FTP or SFTP or upload them to Azure Blob Storage. To reduce the total migration time, compress files before you transfer them. ### Client capacity
-No matter where the client computer is located, it requires adequate compute, I/O, and network capacity to perform the requested operations. The general recommendations are:
+No matter where the client computer is located, it requires adequate compute, I/O, and network capacity to perform the requested operations. The general recommendations are:
- If the dump or restore involves real-time processing of data, for example, compression or decompression, choose an instance class with at least one CPU core per dump or restore thread. - Ensure there's enough network bandwidth available to the client instance. Use instance types that support the accelerated networking feature. For more information, see the "Accelerated Networking" section in the [Azure Virtual Machine Networking Guide](../../virtual-network/create-vm-accelerated-networking-cli.md).
To complete this tutorial, you need to:
- Install the [mysqlclient](https://dev.mysql.com/downloads/) on your client computer to create a dump, and perform a restore operation on your target Azure Database for MySQL server. - For larger databases, install [mydumper and myloader](https://centminmod.com/mydumper.html) for parallel dumping and restoring of databases.
- > [!NOTE]
+ > [!NOTE]
> Mydumper can only run on Linux distributions. For more information, see [How to install mydumper](https://github.com/maxbube/mydumper#how-to-install-mydumpermyloader). - Create an instance of Azure Database for MySQL server that runs version 5.7 or 8.0.
- > [!IMPORTANT]
+ > [!IMPORTANT]
> If your target is Azure Database for MySQL Flexible Server with zone-redundant high availability (HA), note that Data-in Replication isn't supported for this configuration. As a workaround, during server creation set up zone-redundant HA: > > 1. Create the server with zone-redundant HA enabled.
Ensure that several parameters and features are configured and set up properly,
- Make sure the character set of the source and the target database are the same. - Set the `wait_timeout` parameter to a reasonable time. The time depends on the amount of data or workload you want to import or migrate. - Verify that all your tables use InnoDB. The Azure Database for MySQL server only supports the InnoDB storage engine.-- For tables with many secondary indexes or for tables that are large, the effects of performance overhead are visible during restore. Modify the dump files so that the `CREATE TABLE` statements don't include secondary key definitions. After you import the data, re-create secondary indexes to avoid the performance penalty during the restore process.
+- For tables with many secondary indexes or for large tables, performance overhead is noticeable during the restore. Modify the dump files so that the `CREATE TABLE` statements don't include secondary key definitions. After you import the data, re-create secondary indexes to avoid the performance penalty during the restore process.
Finally, to prepare for Data-in Replication:
Finally, to prepare for Data-in Replication:
- Make sure you provide [site-to-site connectivity](../../vpn-gateway/tutorial-site-to-site-portal.md) to your source server by using either [Azure ExpressRoute](../../expressroute/expressroute-introduction.md) or [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Azure Virtual Network documentation](../../virtual-network/index.yml). Also see the quickstart articles with step-by-step details. - Configure your source database server's network security groups to allow the target Azure Database for MySQL's server IP address.
-> [!IMPORTANT]
+> [!IMPORTANT]
> If the source Amazon RDS for MySQL instance has GTID_mode set to ON, the target instance of Azure Database for MySQL Flexible Server must also have GTID_mode set to ON. ## Configure the target instance of Azure Database for MySQL
To configure the target instance of Azure Database for MySQL, which is the targe
1. Set the `max_allowed_packet` parameter value to the maximum of **1073741824**, which is 1 GB. This value prevents any overflow issues related to long rows. 1. Set the `slow_query_log`, `general_log`, `audit_log_enabled`, and `query_store_capture_mode` parameters to **OFF** during the migration to help eliminate any overhead related to query logging.
-1. Scale up the compute size of the target Azure Database for MySQL server to the maximum of 64 vCores. This size provides more compute resources when you restore the database dump from the source server.
+1. Scale up the compute size of the target Azure Database for MySQL server to the maximum of 64 vCores. This size provides more compute resources when restoring the source server's database dump.
You can always scale back the compute to meet your application demands after the migration is complete. 1. Scale up the storage size to get more IOPS during the migration or increase the maximum IOPS for the migration.
- > [!NOTE]
+ > [!NOTE]
> Available maximum IOPS are determined by compute size. For more information, see the IOPS section in [Compute and storage options in Azure Database for MySQL - Flexible Server](../flexible-server/concepts-compute-storage.md#iops). ## Configure the source Amazon RDS for MySQL server To prepare and configure the MySQL server hosted in Amazon RDS, which is the *source* for Data-in Replication:
-1. Confirm that binary logging is enabled on the source Amazon RDS for MySQL server. Check that automated backups are enabled, or ensure that a read replica exists for the source Amazon RDS for MySQL server.
+1. Confirm that binary logging is enabled on the source Amazon RDS for MySQL server. Check that automated backups are enabled, or ensure a read replica exists for the source Amazon RDS for MySQL server.
1. Ensure that the binary log files on the source server are retained until after the changes are applied on the target instance of Azure Database for MySQL.
To prepare and configure the MySQL server hosted in Amazon RDS, which is the *so
    ```
    mysql> call mysql.rds_show_configuration;
    +------------------------+-------+------------------------------------------------------------------------------------------------------------+
    | name                   | value | description                                                                                                  |
    +------------------------+-------+------------------------------------------------------------------------------------------------------------+
    | binlog retention hours | 24    | binlog retention hours specifies the duration in hours before binary logs are automatically deleted.        |
    | source delay           | 0     | source delay specifies replication delay in seconds between current instance and its master.                |
    | target delay           | 0     | target delay specifies replication delay in seconds between current instance and its future read-replica.   |
    +------------------------+-------+------------------------------------------------------------------------------------------------------------+
    3 rows in set (0.00 sec)
    ```
-1. To configure the binary log retention period, run the `rds_set_configuration` stored procedure to ensure that the binary logs are retained on the source server for the desired length of time. For example:
+1. To configure the binary log retention period, run the `rds_set_configuration` stored procedure to ensure that the binary logs are retained on the source server for the desired time. For example:
 ``` Mysql> Call mysql.rds_set_configuration('binlog retention hours', 96); ```
- If you're creating a dump and then restoring, the preceding command helps you to quickly catch up with the delta changes.
+ If you're creating a dump and restoring, the preceding command helps you catch up with the delta changes quickly.
- > [!NOTE]
- > Ensure there's ample disk space to store the binary logs on the source server based on the retention period defined.
+ > [!NOTE]
+ > Ensure ample disk space to store the binary logs on the source server based on the defined retention period.
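   As a quick way to act on the preceding note, you can list the binary logs and their sizes on the source server before and during the migration. A minimal sketch, assuming you can reach the Amazon RDS for MySQL endpoint with an administrative login (the host and user below are placeholders):

   ```bash
   # List the binary logs and their sizes (in bytes) on the source Amazon RDS for MySQL server.
   # <source-rds-endpoint> and <admin-user> are placeholders for your own values.
   mysql -h <source-rds-endpoint> -u <admin-user> -p -e "SHOW BINARY LOGS;"
   ```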
There are two ways to capture a dump of data from the source Amazon RDS for MySQL server. One approach involves capturing a dump of data directly from the source server. The other approach involves capturing a dump from an Amazon RDS for MySQL read replica.
There are two ways to capture a dump of data from the source Amazon RDS for MySQ
1. Create an Amazon MySQL read replica with the same configuration as the source server. Then create the dump there. 1. Let the Amazon RDS for MySQL read replica catch up with the source Amazon RDS for MySQL server.
- 1. When the replica lag reaches **0** on the read replica, stop replication by calling the `mysql.rds_stop_replication` stored procedure.
+ 1. When the replica lag reaches **0** on the read replica, stop replication by calling the stored procedure `mysql.rds_stop_replication`.
``` Mysql> call mysql.rds_stop_replication;
There are two ways to capture a dump of data from the source Amazon RDS for MySQ
 $ mysqldump -h hostname -u username -p --single-transaction --databases dbnames --order-by-primary > dumpname.sql ```
- > [!NOTE]
+ > [!NOTE]
> You can also use mydumper for capturing a parallelized dump of your data from your source Amazon RDS for MySQL database. For more information, see [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md). ## Link source and replica servers to start Data-in Replication
There are two ways to capture a dump of data from the source Amazon RDS for MySQ
$ mysql -h <target_server> -u <targetuser> -p < dumpname.sql ```
- > [!NOTE]
+ > [!NOTE]
> If you're instead using myloader, see [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md). 1. Sign in to the source Amazon RDS for MySQL server, and set up a replication user. Then grant the necessary privileges to this user.
There are two ways to capture a dump of data from the source Amazon RDS for MySQ
``` Mysql> CREATE USER 'syncuser'@'%' IDENTIFIED BY 'userpassword';
- Mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT on *.* to 'syncuser'@'%' REQUIRE SSL;
+ Mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT on *.* to 'syncuser'@'%' REQUIRE SSL;
Mysql> SHOW GRANTS FOR syncuser@'%'; ```
There are two ways to capture a dump of data from the source Amazon RDS for MySQ
``` Mysql> CREATE USER 'syncuser'@'%' IDENTIFIED BY 'userpassword';
- Mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT on *.* to 'syncuser'@'%';
+ Mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT on *.* to 'syncuser'@'%';
Mysql> SHOW GRANTS FOR syncuser@'%'; ```
- All Data-in Replication functions are done by stored procedures. For information about all procedures, see [Data-in Replication stored procedures](reference-stored-procedures.md#data-in-replication-stored-procedures). You can run these stored procedures in the MySQL shell or MySQL Workbench.
+ Stored procedures do all Data-in Replication functions. For information about all procedures, see [Data-in Replication stored procedures](../single-server/reference-stored-procedures.md#data-in-replication-stored-procedures). You can run these stored procedures in the MySQL shell or MySQL Workbench.
1. To link the Amazon RDS for MySQL source server and the Azure Database for MySQL target server, sign in to the target Azure Database for MySQL server. Set the Amazon RDS for MySQL server as the source server by running the following command:
There are two ways to capture a dump of data from the source Amazon RDS for MySQ
Mysql> CALL mysql.az_replication_start; ```
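   For reference, here's a hedged sketch of the linking step that precedes `az_replication_start`, assuming the `mysql.az_replication_change_master` stored procedure covered in the Data-in Replication stored procedures reference, with placeholder values for the source host, replication user, and the binary log coordinates captured with the dump:

   ```bash
   # Run against the target Azure Database for MySQL server; all angle-bracket values are placeholders.
   # The binary log file name and position correspond to the point at which the dump was taken.
   # The last argument is the CA certificate content when SSL is required, or an empty string otherwise.
   mysql -h <target-server>.mysql.database.azure.com -u <admin-user> -p -e "
     CALL mysql.az_replication_change_master('<source-rds-endpoint>', 'syncuser', '<password>', 3306, '<binlog-file-name>', <binlog-position>, '<ssl-ca-certificate-or-empty-string>');
     CALL mysql.az_replication_start;"
   ```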
-1. To check the status of the replication, on the replica server, run the following command:
+1. To check the status of the replication on the replica server, run the following command:
``` Mysql> show slave status\G
At this point, the migration is complete. Your applications are connected to the
## Next steps - For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).-- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201). It contains a demo that shows how to migrate MySQL apps to Azure Database for MySQL.
+- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201). It contains a demo that shows how to migrate MySQL apps to Azure Database for MySQL.
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
In this article, you learn how to create and manage read replicas in Azure Datab
## Prerequisites
-An [Azure Database for PostgreSQL server](/azure/postgresql/flexible-server/quickstart-create-server-database-portal.md) to be the primary server.
+An [Azure Database for PostgreSQL server](/azure/postgresql/flexible-server/quickstart-create-server-portal) to be the primary server.
> [!NOTE] > When deploying read replicas for persistent heavy write-intensive primary workloads, the replication lag could continue to grow and may never be able to catch-up with the primary. This may also increase storage usage at the primary as the WAL files are not deleted until they are received at the replica.
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
To verify if you are using SSL connection to connect to the server refer [SSL ve
No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
-### 13. What if I have further questions?
+### 13. How can I check the certificate that is sent by the server?
+
+There are many tools that you can use. For example, DigiCert has a handy [tool](https://www.digicert.com/help/) that shows you the certificate chain of any server name. (This tool only works with publicly accessible servers; it can't connect to a server that's contained in a virtual network (VNet).)
+Another tool you can use is OpenSSL on the command line, with the following syntax:
+```bash
+openssl s_client -showcerts -connect <your-postgresql-server-name>:443
+```
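As a related, hedged check: once you've downloaded the **DigiCertGlobalRootG2** certificate, you can confirm that your client actually validates the server against it by connecting with certificate verification enabled. A minimal `psql` sketch for single server (the server name, user, and certificate file path are placeholders):

```bash
# Connect with full certificate verification against the downloaded root certificate (placeholder values).
psql "host=<your-postgresql-server-name>.postgres.database.azure.com port=5432 dbname=postgres user=<user>@<your-postgresql-server-name> sslmode=verify-full sslrootcert=DigiCertGlobalRootG2.crt.pem"
```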
+
+### 14. What if I have further questions?
If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, please create a [support request](https://learn.microsoft.com/azure/azure-portal/supportability/how-to-create-azure-support-request): * For *Issue type*, select *Technical*. * For *Subscription*, select your *subscription*.
postgresql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connection-libraries.md
Most language client libraries used to connect to PostgreSQL server are external
| C\#/ .NET | [Npgsql](https://www.npgsql.org/) | ADO.NET Data Provider | [Download](https://dotnet.microsoft.com/download) | | ODBC | [psqlODBC](https://odbc.postgresql.org/) | ODBC Driver | [Download](https://www.postgresql.org/ftp/odbc/versions/) | | C | [libpq](https://www.postgresql.org/docs/9.6/static/libpq.html) | Primary C language interface | Included |
-| C++ | [libpqxx](http://pqxx.org/) | New-style C++ interface | [Download](http://pqxx.org/download/software/) |
+| C++ | [libpqxx](http://pqxx.org/) | New-style C++ interface | [Download](https://pqxx.org/libpqxx/) |
## Next steps
private-link Tutorial Private Endpoint Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-storage-portal.md
Previously updated : 06/22/2022 Last updated : 11/23/2022
Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to privately and securely communicate with Private Link resources such as Azure Storage.
-In this tutorial, you learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"] > * Create a virtual network and bastion host.
In this tutorial, you learn how to:
> * Create a storage account with a private endpoint. > * Test connectivity to the storage account private endpoint.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- ## Prerequisites
-* An Azure subscription
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com).
## Create a virtual network and bastion host
-In this section, you'll create a virtual network, subnet, and bastion host.
+Create a virtual network, subnet, and bastion host. The virtual network and subnet will contain the private endpoint that connects to the Azure Storage Account.
The bastion host will be used to connect securely to the virtual machine for testing the private endpoint.
-1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **+ Create**.
-2. In **Create virtual network**, enter or select this information in the **Basics** tab:
+3. In **Create virtual network**, enter or select this information in the **Basics** tab:
| Setting | Value | ||| | **Project Details** | | | Subscription | Select your Azure subscription. |
- | Resource Group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
+ | Resource Group | Select **Create new**. </br> Enter **TutorPEstorage-rg** in **Name**. </br> Select **OK**. |
| **Instance details** | | | Name | Enter **myVNet**. | | Region | Select **East US**. |
-3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+4. Select the **IP Addresses** tab or select **Next: IP Addresses**.
-4. In the **IP Addresses** tab, enter this information:
+5. In the **IP Addresses** tab, enter this information:
| Setting | Value | |--|-| | IPv4 address space | Enter **10.1.0.0/16**. |
-5. Under **Subnet name**, select the word **default**.
+6. Under **Subnet name**, select the word **default**. If a subnet isn't listed, select **+ Add subnet**.
-6. In **Edit subnet**, enter this information:
+7. In **Edit subnet**, enter this information:
| Setting | Value | |--|-| | Subnet name | Enter **mySubnet**. | | Subnet address range | Enter **10.1.0.0/24**. |
-7. Select **Save**.
+8. Select **Save**.
-8. Select the **Security** tab.
+9. Select the **Security** tab.
-9. Under **BastionHost**, select **Enable**. Enter this information:
+10. Under **BastionHost**, select **Enable**. Enter this information:
| Setting | Value | |--|-| | Bastion name | Enter **myBastionHost**. |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26**. |
| Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
-8. Select the **Review + create** tab or select the **Review + create** button.
+11. Select the **Review + create** tab or select the **Review + create** button.
-9. Select **Create**.
+12. Select **Create**.
+
+It will take a few minutes for the virtual network and Azure Bastion host to deploy. Proceed to the next steps when the virtual network is created.
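If you prefer to script this step, a rough Azure CLI equivalent of the resource group, virtual network, and subnet above might look like the following sketch (names match the portal steps; the Bastion host and its **AzureBastionSubnet** would still need to be created separately):

```bash
# Create the resource group, virtual network, and subnet used in this tutorial.
az group create --name TutorPEstorage-rg --location eastus

az network vnet create \
  --resource-group TutorPEstorage-rg \
  --name myVNet \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name mySubnet \
  --subnet-prefixes 10.1.0.0/24
```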
## Create a virtual machine In this section, you'll create a virtual machine that will be used to test the private endpoint. -
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine** or search for **Virtual machine** in the search box.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+2. Select **+ Create** > **Azure virtual machine**.
+
+3. In **Create a virtual machine**, enter or select the following in the **Basics** tab:
| Setting | Value | |--|-| | **Project Details** | | | Subscription | Select your Azure subscription. |
- | Resource Group | Select **myResourceGroup**. |
+ | Resource Group | Select **TutorPEstorage-rg**. |
| **Instance details** | | | Virtual machine name | Enter **myVM**. | | Region | Select **(US) East US**. | | Availability Options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
- | Azure Spot instance | Select **No**. |
- | Size | Choose VM size or take default setting. |
+ | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2**. |
+ | Size | Choose a size or leave the default setting. |
| **Administrator account** | | | Username | Enter a username. | | Password | Enter a password. | | Confirm password | Reenter password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-4. In the Networking tab, select or enter:
+5. In the Networking tab, enter or select the following information:
| Setting | Value | |-|-|
In this section, you'll create a virtual machine that will be used to test the p
| NIC network security group | **Basic**. | | Public inbound ports | Select **None**. |
-5. Select **Review + create**.
+6. Select **Review + create**.
-6. Review the settings, and then select **Create**.
+7. Review the settings, and then select **Create**.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)] ## Create storage account with a private endpoint
-In this section, you'll create a storage account and configure the private endpoint.
+Create a storage account and configure the private endpoint. The private endpoint uses a network interface assigned an IP address in the virtual network you created previously.
-1. In the left-hand menu, select **Create a resource** > **Storage** > **Storage account**, or search for **Storage account** in the search box.
+1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results.
+
+2. Select **+ Create**.
-2. In the **Basics** tab of **Create storage account** enter or select the following information:
+3. In the **Basics** tab of **Create a storage account** enter or select the following information:
| Setting | Value | |--|-| | **Project Details** | | | Subscription | Select your Azure subscription. |
- | Resource Group | Select **myResourceGroup**. |
+ | Resource Group | Select **TutorPEstorage-rg**. |
| **Instance details** | | | Storage account name | Enter **mystorageaccount**. If the name is unavailable, enter a unique name. | | Location | Select **(US) East US**. | | Performance | Leave the default **Standard**. | | Redundancy | Select **Locally-redundant storage (LRS)**. |
-3. Select the **Networking** tab or select the **Next: Networking** button.
+4. Select the **Networking** tab or select **Next: Advanced** then **Next: Networking**.
-4. In the **Networking** tab, under **Network connectivity** select **Disable public access and use private access**.
+5. In the **Networking** tab, under **Network connectivity** select **Disable public access and use private access**.
-5. In **Private endpoint**, select **+ Add private endpoint**.
+6. In **Private endpoint**, select **+ Add private endpoint**.
-6. In **Create private endpoint** enter or select the following information:
+7. In **Create private endpoint** enter or select the following information:
| Setting | Value | |--|-| | Subscription | Select your Azure subscription. |
- | Resource Group | Select **myResourceGroup**. |
+ | Resource Group | Select **TutorPEstorage-rg**. |
| Location | Select **East US**. | | Name | Enter **myPrivateEndpoint**. |
- | Storage sub-resource | Leave the default **blob**. |
+ | Storage subresource | Leave the default **blob**. |
| **Networking** | | | Virtual network | Select **myVNet**. |
- | Subnet | Select **mySubnet**. |
+ | Subnet | Select **myVNet/mySubnet(10.1.0.0/24)**. |
| **Private DNS integration**. | | Integrate with private DNS zone | Leave the default **Yes**. | | Private DNS Zone | Leave the default **(New) privatelink.blob.core.windows.net**. |
-7. Select **OK**.
+8. Select **OK**.
+
+9. Select **Review**.
-8. Select **Review + create**.
+10. Select **Create**.
-9. Select **Create**.
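If you'd rather script this part, an approximate Azure CLI sketch of adding the blob private endpoint is shown below (the storage account name is a placeholder, and the private DNS zone integration from the portal flow would need additional `az network private-dns` commands):

```bash
# Look up the storage account resource ID, then create the private endpoint for its blob sub-resource.
storage_id=$(az storage account show \
  --name <mystorageaccount> \
  --resource-group TutorPEstorage-rg \
  --query id --output tsv)

az network private-endpoint create \
  --resource-group TutorPEstorage-rg \
  --name myPrivateEndpoint \
  --location eastus \
  --vnet-name myVNet \
  --subnet mySubnet \
  --private-connection-resource-id "$storage_id" \
  --group-id blob \
  --connection-name myPrivateEndpointConnection
```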
+### Storage access key
-10. Select **Resource groups** in the left-hand navigation pane.
+The storage access key is required for the later steps. You'll go to the storage account you created previously and copy the connection string with the access key for the storage account.
-11. Select **myResourceGroup**.
+1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results.
-12. Select the storage account you created in the previous steps.
+2. Select the storage account you created in the previous steps.
-13. In the **Security + networking** section of the storage account, select **Access keys**.
+3. In the **Security + networking** section of the storage account, select **Access keys**.
-14. Select **Show keys**, then select copy on the **Connection string** for **key1**.
+4. Select **Show**, then select copy on the **Connection string** for **key1**.
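An equivalent Azure CLI sketch for retrieving the same connection string (the storage account name is a placeholder):

```bash
# Print the connection string, which includes key1, for the storage account.
az storage account show-connection-string \
  --name <mystorageaccount> \
  --resource-group TutorPEstorage-rg \
  --output tsv
```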
-### Add a container
+### Add a blob container
-1. Select **Go to resource**, or in the left-hand menu of the Azure portal, select **All Resources** > **mystorageaccount**.
+1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results.
-2. Under the **Data storage** section, select **Containers**.
+2. Select the storage account you created in the previous steps.
-3. Select **+ Container** to create a new container.
+3. In the **Data storage** section, select **Containers**.
-4. Enter **mycontainer** in **Name** and select **Private (no anonymous access)** under **Public access level**.
+4. Select **+ Container** to create a new container.
-5. Select **Create**.
+5. Enter **mycontainer** in **Name** and select **Private (no anonymous access)** under **Public access level**.
+
+6. Select **Create**.
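The same container can also be created from the CLI; a minimal sketch, assuming the connection string copied in the previous section:

```bash
# Create the blob container using the connection string copied earlier (placeholder value).
az storage container create \
  --name mycontainer \
  --connection-string "<connection-string-copied-earlier>"
```

Because public network access is disabled on the account, this call has to come from a network path that can reach the private endpoint, for example from the **myVM** virtual machine.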
## Test connectivity to private endpoint In this section, you'll use the virtual machine you created in the previous steps to connect to the storage account across the private endpoint using **Microsoft Azure Storage Explorer**.
-1. Select **Resource groups** in the left-hand navigation pane.
-
-2. Select **myResourceGroup**.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-3. Select **myVM**.
+2. Select **myVM**.
-4. On the overview page for **myVM**, select **Connect** then **Bastion**.
+3. On the overview page for **myVM**, select **Connect** then **Bastion**.
-5. Enter the username and password that you entered during the virtual machine creation.
+4. Enter the username and password that you entered during the virtual machine creation.
-6. Select **Connect** button.
+5. Select **Connect**.
-7. Open Windows PowerShell on the server after you connect.
+6. Open Windows PowerShell on the server after you connect.
-8. Enter `nslookup <storage-account-name>.blob.core.windows.net`. Replace **\<storage-account-name>** with the name of the storage account you created in the previous steps. You'll receive a message similar to what is displayed below:
+7. Enter `nslookup <storage-account-name>.blob.core.windows.net`. Replace **\<storage-account-name>** with the name of the storage account you created in the previous steps. You'll receive a message similar to what is displayed below:
```powershell Server: UnKnown
In this section, you'll use the virtual machine you created in the previous step
A private IP address of **10.1.0.5** is returned for the storage account name. This address is in the **mySubnet** subnet of the **myVNet** virtual network that you created previously.
-9. Install [Microsoft Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows&toc=%2fazure%2fstorage%2fblobs%2ftoc.json) on the virtual machine.
+8. Install [Microsoft Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows&toc=%2fazure%2fstorage%2fblobs%2ftoc.json) on the virtual machine.
-10. Select **Finish** after the **Microsoft Azure Storage Explorer** is installed. Leave the box checked to open the application.
+9. Select **Finish** after the **Microsoft Azure Storage Explorer** is installed. Leave the box checked to open the application.
-11. In the **Select Resource** screen, select **Storage account or service** to add a connection in **Microsoft Azure Storage Explorer** to your storage account that you created in the previous steps.
+10. Select the **Power plug** symbol to open the **Select Resource** dialog box.
+
+11. In **Select Resource** , select **Storage account or service** to add a connection in **Microsoft Azure Storage Explorer** to your storage account that you created in the previous steps.
12. In the **Select Connection Method** screen, select **Connection string**, and then **Next**.
In this section, you'll use the virtual machine you created in the previous step
15. Verify the settings are correct in **Summary**.
-16. Select **Connect**, then select **mystorageaccount** from the **Storage Accounts** left-hand menu.
+16. Select **Connect**.
+
+17. Select your storage account from the **Storage Accounts** in the explorer menu.
-17. Under **Blob Containers**, you see **mycontainer** that you created in the previous steps.
+18. Expand the storage account and then **Blob Containers**.
-18. Close the connection to **myVM**.
+19. The **mycontainer** you created previously is displayed.
+
+20. Close the connection to **myVM**.
## Clean up resources
If you're not going to continue to use this application, delete the virtual netw
1. From the left-hand menu, select **Resource groups**.
-2. Select **myResourceGroup**.
+2. Select **TutorPEstorage-rg**.
3. Select **Delete resource group**.
-4. Enter **myResourceGroup** in **TYPE THE RESOURCE GROUP NAME**.
+4. Enter **TutorPEstorage-rg** in **TYPE THE RESOURCE GROUP NAME**.
5. Select **Delete**. ## Next steps In this tutorial, you learned how to create: * Virtual network and bastion host. * Virtual machine. * Storage account and a container. Learn how to connect to an Azure Cosmos DB account via Azure Private Endpoint:
public-multi-access-edge-compute-mec Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/key-concepts.md
Azure public MEC supports specific compute and GPU VM SKUs. The following table
| Type | Series | VM size | | - | | - |
-| VM | D-series | D2s_v3, D4s_v3, D8s_v3 |
+| VM | D-series | D1s_v2, D2s_v2, D2s_v3, D4s_v3, D8s_v3 |
| VM | E-series | E4s_v3, E8s_v3 | | GPU | NCasT4_v3-series | Standard_NC4asT4_v3, Standard_NC8asT4_v3 |
By default, all services running in the Azure public MEC use the DNS infrastruct
To learn about considerations for deployment in the Azure public MEC, advance to the following article: > [!div class="nextstepaction"]
-> [Considerations for deployment in the Azure public MEC](considerations-for-deployment.md)
+> [Considerations for deployment in the Azure public MEC](considerations-for-deployment.md)
purview How To Policies Data Owner Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-arc-sql-server.md
Previously updated : 10/31/2022 Last updated : 11/23/2022 # Provision access by data owner for SQL Server on Azure Arc-enabled servers (preview)
Register each data source with Microsoft Purview to later define access policies
1. Enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server. Come back to this screen and hit the refresh button on the side of it to refresh, in case the association between the Arc-enabled SQL server and the App Registration changes in the future.
+1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server if one has been configured. Come back to this screen and hit the refresh button on the side of it to refresh, in case the association between the Arc-enabled SQL server and the App Registration changes in the future.
1. Select **Register** or **Apply** at the bottom Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture. ![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-arc-sql.png)
+## Enable policies in Arc-enabled SQL Server
-## Create and publish a data owner policy
+## Create and publish a Data owner policy
Execute the steps in the **Create a new policy** and **Publish a policy** sections of the [data-owner policy authoring tutorial](./how-to-policies-data-owner-authoring-generic.md#create-a-new-policy). The result will be a data owner policy similar to the example:
purview How To Policies Devops Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md
The Arc-enabled SQL Server data source needs to be registered first with Microso
1. Enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server. Come back to this screen and hit the refresh button on the side of it to refresh, in case the association between the Arc-enabled SQL server and the App Registration changes in the future.
+1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server if one has been configured. Come back to this screen and hit the refresh button on the side of it to refresh, in case the association between the Arc-enabled SQL server and the App Registration changes in the future.
1. Select **Register** or **Apply** at the bottom
-Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture.
+Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture.
![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-arc-sql.png)
+## Enable policies in Arc-enabled SQL Server
## Create a new DevOps policy Follow this link for the steps to [create a new DevOps policy in Microsoft Purview](how-to-policies-devops-authoring-generic.md#create-a-new-devops-policy).
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Previously updated : 11/16/2022 Last updated : 11/23/2022 # What's available in the Microsoft Purview governance portal?
Microsoft Purview automates data discovery by providing data scanning and classi
|App |Description | |-|--|
-|[Data Catalog](#data-catalog) | Finds trusted data sources by browsing and searching your data assets. The data catalog aligns your assets with friendly business terms and data classification to identify data sources. |
-|[Data Estate Insights](#data-estate-insights) | Gives you an overview of your data estate to help you discover what kinds of data you have and where. |
-|[Data Sharing](#data-sharing) | Allows you to securely share data internally or cross organizations with business partners and customers. |
-|[Data Policy](#data-policy) | A set of central, cloud-based experiences that help you provision access to data securely and at scale. |
+|[Data Catalog](#data-catalog-app) | Finds trusted data sources by browsing and searching your data assets. The data catalog aligns your assets with friendly business terms and data classification to identify data sources. |
+|[Data Estate Insights](#data-estate-insights-app) | Gives you an overview of your data estate to help you discover what kinds of data you have and where. |
+|[Data Sharing](#data-sharing-app) | Allows you to securely share data internally or cross organizations with business partners and customers. |
+|[Data Policy](#data-policy-app) | A set of central, cloud-based experiences that help you provision access to data securely and at scale. |
Microsoft Purview Data Map provides the foundation for data discovery and data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Microsoft Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the data map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.2 APIs.
Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microso
For more information, see our [introduction to Data Map](concept-elastic-data-map.md).
-## Data Catalog
+## Data Catalog app
With the Microsoft Purview Data Catalog, business and technical users can quickly and easily find relevant data using a search experience with filters based on lenses such as glossary terms, classifications, sensitivity labels and more. For subject matter experts, data stewards and officers, the Microsoft Purview Data Catalog provides data curation features such as business glossary management and the ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets: for example, starting from operational systems on-premises, through movement, transformation & enrichment with various data storage and processing systems in the cloud, to consumption in an analytics system like Power BI. For more information, see our [introduction to search using Data Catalog](how-to-search-catalog.md).
-## Data Estate Insights
+## Data Estate Insights app
With the Microsoft Purview Data Estate Insights, the chief data officers and other governance stakeholders can get a birdΓÇÖs eye view of their data estate and can gain actionable insights into the governance gaps that can be resolved from the experience itself. For more information, see our [introduction to Data Estate Insights](concept-insights.md).
-## Data Sharing
+## Data Sharing app
Microsoft Purview Data Sharing enables organizations to securely share data both within your organization or cross organizations with business partners and customers. You can share or receive data with just a few clicks. Data providers can centrally manage and monitor data sharing relationships, and revoke sharing at any time. Data consumers can access received data with their own analytics tools and turn data into insights. For more information, see our [introduction to Data Sharing](concept-data-share.md).
-## Data Policy
+## Data Policy app
Microsoft Purview Data Policy is a set of central, cloud-based experiences that help you manage access to data sources and datasets securely and at scale. - Manage access to data sources from a single-pane of glass, cloud-based experience - At-scale access provisioning
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
Previously updated : 11/07/2022 Last updated : 11/23/2022
Before you can create policies, you must register the Azure Arc-enabled SQL Serv
1. Enable **Data use management**. **Data use management** needs certain permissions and can affect the security of your data, because it delegates to certain Microsoft Purview roles to manage access to the data sources. Go through the secure practices related to **Data use management** in this guide: [Enable Data use management on your Microsoft Purview sources](./how-to-enable-data-use-management.md).
-1. After you enable **Data use management**, Microsoft Purview automatically captures the application ID of the app registration that's related to this Azure Arc-enabled SQL Server instance. Come back to this screen and select the refresh button, in case the association between Azure Arc-enabled SQL Server and the app registration changes in the future.
+1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server if one has been configured. Come back to this screen and hit the refresh button on the side of it to refresh, in case the association between the Arc-enabled SQL server and the App Registration changes in the future.
1. Select **Register** or **Apply**. ![Screenshot that shows selections for registering a data source for a policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-arc-sql.png)
+### Enable policies in Arc-enabled SQL Server
+ ### Create a policy To create an access policy for Azure Arc-enabled SQL Server, follow these guides:
remote-rendering Holographic Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/unity/holographic-remoting.md
# Use Holographic Remoting and Remote Rendering in Unity
-[Holographic Remoting](/windows/mixed-reality/holographic-remoting-player) and Azure Remote Rendering are mutually exclusive within one application. As such, [Unity play mode](/windows/mixed-reality/unity-play-mode) is also not available.
+[Holographic Remoting](/windows/mixed-reality/holographic-remoting-player) and Azure Remote Rendering are mutually exclusive within one application. As such, [Unity play mode](/windows/mixed-reality/develop/unity/preview-and-debug-your-app) is also not available.
For each run of the Unity editor only one of the two can be used. To use the other one, restart Unity first.
For each run of the Unity editor only one of the two can be used. To use the oth
## Use a WMR VR headset to preview on desktop
-If a Windows Mixed Reality VR headset is present, it can be used to preview inside Unity. In this case, it is fine to initialize ARR, however it will not be possible to connect to a session while the WMR headset is used.
+If a Windows Mixed Reality VR headset is present, it can be used to preview inside Unity. In this case, it is fine to initialize ARR, however it will not be possible to connect to a session while the WMR headset is used.
search Tutorial Csharp Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md
Continue to build your Search-enabled website by:
## Create an Azure Search resource
-Create a new Search resource with the [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch) extension for Visual Studio Code.
+Create a new Search resource with the Azure Cognitive Search extension for Visual Studio Code.
1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md)
search Tutorial Csharp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-overview.md
Install the following for your local development environment.
- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0) - [Git](https://git-scm.com/downloads) - [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
- - [Azure Cognitive Search 0.2.0+](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
- [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - Optional: - This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash#install-the-azure-functions-core-tools).
Forking the sample repository is critical to be able to deploy the Static Web Ap
## Next steps * [Create a Search Index and load with documents](tutorial-csharp-create-load-index.md)
-* [Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md)
+* [Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md)
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Continue to build your Search-enabled website by:
## Create an Azure Search resource
-Create a new Search resource with the [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch) extension for Visual Studio Code.
+Create a new Search resource with the Azure Cognitive Search extension for Visual Studio Code.
1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
search Tutorial Javascript Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-overview.md
Install the following for your local development environment.
- If you have a different version of Node.js installed on your local computer, consider using [Node Version Manager](https://github.com/nvm-sh/nvm) (nvm) or a Docker container. - [Git](https://git-scm.com/downloads) - [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
- - [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
- [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - Optional: - This tutorial doesn't run the Azure Function API locally. If you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash) globally with the following bash command:
Forking the sample repository is critical to be able to deploy the Static Web Ap
## Next steps * [Create a Search Index and load with documents](tutorial-javascript-create-load-index.md)
-* [Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
+* [Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
search Tutorial Javascript Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-search-query-integration.md
The Suggest function API is called in the React app at `\src\components\SearchBa
The `Lookup` [API](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website-functions-v4/api/Lookup/index.js) takes an ID and returns the document object from the Search Index.
-Routing for the Lookup API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/master/search-website/api/Lookup/function.json) bindings.
+Routing for the Lookup API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-javascript-samples/blob/main/search-website-functions-v4/api/Lookup/function.json) bindings.
:::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/api/Lookup/index.js" highlight="4-9, 17" :::
search Tutorial Python Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-create-load-index.md
Continue to build your Search-enabled website by:
## Create an Azure Cognitive Search resource
-Create a new Search resource with the [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch) extension for Visual Studio Code.
+Create a new Search resource with the Azure Cognitive Search extension for Visual Studio Code.
1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-python-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-python-deploy-static-web-app.md)
search Tutorial Python Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-overview.md
Install the following for your local development environment.
- [Python 3.9](https://www.python.org/downloads/) - [Git](https://git-scm.com/downloads) - [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
- - [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
- [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - Optional: - This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash).
Forking the sample repository is critical to be able to deploy the static web ap
## Next steps * [Create a Search Index and load with documents](tutorial-python-create-load-index.md)
-* [Deploy your Static Web App](tutorial-python-deploy-static-web-app.md)
+* [Deploy your Static Web App](tutorial-python-deploy-static-web-app.md)
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
Select the machines on which you want to install the AMA. These machines are VMs
1. Run this command to launch the installation script: ```bash
- sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure- Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py
+ sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py
``` The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon.
sentinel Create Custom Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-custom-connector.md
While calling a RESTful endpoint directly requires more programming, it also pro
For more information, see the [Log Analytics Data collector API](../azure-monitor/logs/data-collector-api.md), especially the following examples: -- [C#](../azure-monitor/logs/data-collector-api.md#c-sample)-- [Python](../azure-monitor/logs/data-collector-api.md#python-sample)
+- [C#](../azure-monitor/logs/data-collector-api.md#sample-requests)
+- [Python](../azure-monitor/logs/data-collector-api.md#sample-requests)
## Connect with Azure Functions
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft
**Operating system** | **Details** | Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b)
-CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 (4.18.0-348.5.1.el8_5.x86_64 or higher), 8.6
+CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or later), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6
Ubuntu 14.04 LTS Server | Includes support for all 14.04.*x* versions; [Supported kernel versions](#supported-ubuntu-kernel-versions-for-azure-virtual-machines); Ubuntu 16.04 LTS Server | Includes support for all 16.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloudinit configuration). Password-based sign in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu (of the failed over VM in the Azure portal. Ubuntu 18.04 LTS Server | Includes support for all 18.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloudinit configuration). Password-based sign in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu (of the failed over VM in the Azure portal.
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Guest/server hot add/remove disk | No
Guest/server - exclude disk | Yes Guest/server multipath (MPIO) | No ReFS | Resilient File System is supported with Mobility service version 9.23 or higher
-Guest/server EFI/UEFI boot | - Supported for all [Azure Marketplace UEFI operating systems](../virtual-machines/generation-2.md#generation-2-vm-images-in-azure-marketplace). <br/> - Secure UEFI boot type is not supported. [Learn more.](../virtual-machines/generation-2.md#on-premises-vs-azure-generation-2-vms)
+Guest/server EFI/UEFI boot | - Supported for all [Azure Marketplace UEFI operating systems](../virtual-machines/generation-2.md#generation-2-vm-images-in-azure-marketplace). <br/> - Secure UEFI boot type is not supported. [Learn more.](../virtual-machines/generation-2.md#on-premises-vs-azure-generation-2-vms) <br/> - Windows 2008 R2 SP1 & Windows 2008 SP2 servers with UEFI are not supported.
RAID disk| No ## Replication channels
spatial-anchors Create Locate Anchors Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-locate-anchors-java.md
Learn more about the [TokenRequiredListener](/java/api/com.microsoft.azure.spati
[!INCLUDE [Setup](../../../includes/spatial-anchors-create-locate-anchors-setup-non-ios.md)]
-Learn more about the [start](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.start) method.
```java mCloudSession.setSession(mSession);
Learn more about the [start](/java/api/com.microsoft.azure.spatialanchors.clouds
[!INCLUDE [Frames](../../../includes/spatial-anchors-create-locate-anchors-frames.md)]
-Learn more about the [processFrame](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.processframe) method.
```java mCloudSession.processFrame(mSession.update());
Learn more about the [CloudSpatialAnchor](/java/api/com.microsoft.azure.spatiala
[!INCLUDE [Session Status](../../../includes/spatial-anchors-create-locate-anchors-session-status.md)]
-Learn more about the [getSessionStatusAsync](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.getsessionstatusasync) method.
```java Future<SessionStatus> sessionStatusFuture = mCloudSession.getSessionStatusAsync();
Learn more about the [getSessionStatusAsync](/java/api/com.microsoft.azure.spati
[!INCLUDE [Setting Properties](../../../includes/spatial-anchors-create-locate-anchors-setting-properties.md)]
-Learn more about the [getAppProperties](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchor.getappproperties) method.
```java CloudSpatialAnchor cloudAnchor = new CloudSpatialAnchor();
Learn more about the [getAppProperties](/java/api/com.microsoft.azure.spatialanc
[!INCLUDE [Update Anchor Properties](../../../includes/spatial-anchors-create-locate-anchors-updating-properties.md)]
-Learn more about the [updateAnchorPropertiesAsync](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.updateanchorpropertiesasync) method.
```java CloudSpatialAnchor anchor = /* locate your anchor */;
Learn more about the [updateAnchorPropertiesAsync](/java/api/com.microsoft.azure
[!INCLUDE [Getting Properties](../../../includes/spatial-anchors-create-locate-anchors-getting-properties.md)]
-Learn more about the [getAnchorPropertiesAsync](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.getanchorpropertiesasync) method.
```java Future<CloudSpatialAnchor> getAnchorPropertiesFuture = mCloudSession.getAnchorPropertiesAsync("anchorId");
Learn more about the [getAnchorPropertiesAsync](/java/api/com.microsoft.azure.sp
[!INCLUDE [Expiration](../../../includes/spatial-anchors-create-locate-anchors-expiration.md)]
-Learn more about the [setExpiration](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchor.setexpiration) method.
```java Date now = new Date();
Learn more about the [setExpiration](/java/api/com.microsoft.azure.spatialanchor
[!INCLUDE [Locate](../../../includes/spatial-anchors-create-locate-anchors-locating.md)]
-Learn more about the [createWatcher](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.createwatcher) method.
```java AnchorLocateCriteria criteria = new AnchorLocateCriteria();
Learn more about the [AnchorLocatedListener](/java/api/com.microsoft.azure.spati
[!INCLUDE [Deleting](../../../includes/spatial-anchors-create-locate-anchors-deleting.md)]
-Learn more about the [deleteAnchorAsync](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.deleteanchorasync) method.
```java Future deleteAnchorFuture = mCloudSession.deleteAnchorAsync(cloudAnchor);
Learn more about the [deleteAnchorAsync](/java/api/com.microsoft.azure.spatialan
[!INCLUDE [Stopping](../../../includes/spatial-anchors-create-locate-anchors-stopping.md)]
-Learn more about the [stop](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.stop) method.
```java mCloudSession.stop();
Learn more about the [stop](/java/api/com.microsoft.azure.spatialanchors.cloudsp
[!INCLUDE [Resetting](../../../includes/spatial-anchors-create-locate-anchors-resetting.md)]
-Learn more about the [reset](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.reset) method.
```java mCloudSession.reset();
Learn more about the [reset](/java/api/com.microsoft.azure.spatialanchors.clouds
[!INCLUDE [Cleanup](../../../includes/spatial-anchors-create-locate-anchors-cleanup-java.md)]
-Learn more about the [close](/java/api/com.microsoft.azure.spatialanchors.cloudspatialanchorsession.close) method.
```java mCloudSession.close(); ```
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
The following clients have compatible algorithm support with SFTP for Azure Blob
- WinSCP 5.10+ - Workday - XFB.Gateway
+- JSCH 0.1.54+
The supported client list above isn't exhaustive and may change over time.
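For example, with the standard OpenSSH client a connection typically looks like the following sketch (the storage account, container, and local user names are placeholders; the exact user name format depends on how the local user's home directory is configured):

```bash
# Connect to the Blob Storage SFTP endpoint with OpenSSH (placeholder account, container, and local user).
sftp <storage-account>.<container>.<local-user>@<storage-account>.blob.core.windows.net
```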
See the [limitations and known issues article](secure-file-transfer-protocol-kno
## Pricing and billing
-Enabling the SFTP endpoint has an hourly cost. We will start applying this hourly cost on or after January 1, 2023. For the latest pricing information, see [Azure Blob Storage pricing](/pricing/details/storage/blobs/).
+Enabling the SFTP endpoint has an hourly cost. We will start applying this hourly cost on or after January 1, 2023. For the latest pricing information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
Transaction, storage, and networking prices for the underlying storage account apply. To learn more, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md#understand-the-full-billing-model-for-azure-blob-storage).
synapse-analytics Quickstart Transform Data Using Spark Job Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md
On this panel, you can reference to the Spark job definition to run.
|Main class name| The fully qualified identifier or the main class that is in the main definition file. <br> Sample: `WordCount`| |Command-line arguments| You can add command-line arguments by clicking the **New** button. It should be noted that adding command-line arguments will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> | |Apache Spark pool| You can select Apache Spark pool from the list.|
- |Python code reference| Additional python code files used for reference in the main definition file. |
+ |Python code reference| Additional Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property, and it overrides the "pyFiles" property defined in the Spark job definition. <br>|
|Reference files | Additional files used for reference in the main definition file. |
|Dynamically allocate executors| This setting maps to the dynamic allocation property in the Spark configuration for Spark application executor allocation.|
|Min executors| Minimum number of executors to be allocated in the specified Spark pool for the job.|
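For comparison, roughly the same settings can be supplied when submitting a batch job directly with the Azure CLI. This is a hedged sketch: the workspace, pool, storage paths, and class name are placeholders, and the exact `az synapse spark job submit` parameter names should be checked against your CLI version.

```azurecli
az synapse spark job submit \
  --workspace-name mySynapseWorkspace \
  --spark-pool-name mySparkPool \
  --name WordCountJob \
  --main-definition-file abfss://mycontainer@mystorage.dfs.core.windows.net/samples/wordcount.jar \
  --main-class-name WordCount \
  --arguments abfss://mycontainer@mystorage.dfs.core.windows.net/shakespeare.txt abfss://mycontainer@mystorage.dfs.core.windows.net/result \
  --executors 2 \
  --executor-size Small
```

Unlike the pipeline activity, everything the job needs (main file, class, arguments) is specified explicitly at submission time.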
synapse-analytics Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-overview.md
Spark pools in Azure Synapse Analytics enable the following key scenarios:
- Data Engineering/Data Preparation
- Apache Spark includes language features to support preparation and processing of large volumes of data so that it can be made more valuable and then consumed by other services within Azure Synapse Analytics. This approach is enabled through multiple languages, including C#, Scala, PySpark, and Spark SQL, and supplied libraries for processing and connectivity.
+Apache Spark includes many language features to support preparation and processing of large volumes of data so that it can be made more valuable and then consumed by other services within Azure Synapse Analytics. This is enabled through multiple languages (C#, Scala, PySpark, Spark SQL) and supplied libraries for processing and connectivity.
- Machine Learning
- Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), a machine learning library built on top of Spark that you can use from a Spark pool in Azure Synapse Analytics. Spark pools in Azure Synapse Analytics also include Anaconda, a Python distribution with various packages for data science including machine learning. When combined with built-in support for notebooks, you have an environment for creating machine learning applications.
+Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), a machine learning library built on top of Spark that you can use from a Spark pool in Azure Synapse Analytics. Spark pools in Azure Synapse Analytics also include Anaconda, a Python distribution with a variety of packages for data science including machine learning. When combined with built-in support for notebooks, you have an environment for creating machine learning applications.
+
+- Streaming Data
+
+Synapse Spark supports Spark structured streaming as long as you are running a supported version of the Azure Synapse Spark runtime. Jobs can run for a maximum of seven days; this limit applies to both batch and streaming jobs, so customers generally automate the restart process by using Azure Functions.
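As a rough illustration of that pattern, the commands below sketch what a scheduled restart (for example, from a timer-triggered Azure Function) might do. The `az synapse spark job` subcommands, the Livy ID value, and the file paths are assumptions to validate against your environment; a streaming query that writes checkpoints can resume from them after resubmission.

```azurecli
# List Spark jobs on the pool to find the streaming job's Livy ID (assumed to be 42 below)
az synapse spark job list --workspace-name mySynapseWorkspace --spark-pool-name mySparkPool --output table

# Cancel the job before it reaches the seven-day limit
az synapse spark job cancel --livy-id 42 --workspace-name mySynapseWorkspace --spark-pool-name mySparkPool

# Resubmit it; a checkpointed structured streaming query picks up where it left off
az synapse spark job submit \
  --workspace-name mySynapseWorkspace \
  --spark-pool-name mySparkPool \
  --name myStreamingJob \
  --main-definition-file abfss://mycontainer@mystorage.dfs.core.windows.net/jobs/streaming.jar \
  --main-class-name com.contoso.StreamingApp \
  --executors 2 \
  --executor-size Small
```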
+ ## Where do I start
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
Database account master key is placed in server-level credential or database sco
The examples in this article are based on data from the [European Centre for Disease Prevention and Control (ECDC) COVID-19 Cases](../../open-datasets/dataset-ecdc-covid-cases.md) and [COVID-19 Open Research Dataset (CORD-19), doi:10.5281/zenodo.3715505](https://azure.microsoft.com/services/open-datasets/catalog/covid-19-open-research/).
-You can see the license and the structure of data on these pages. You can also download sample data for the [ECDC](https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.json) and [CORD-19](https://azureopendatastorage.blob.core.windows.net/covid19temp/comm_use_subset/pdf_json/000b7d1517ceebb34e1e3e817695b6de03e2fa78.json) datasets.
+You can see the license and the structure of data on these pages. You can also download sample data for the [ECDC](https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.json) and CORD-19 datasets.
To follow along with this article showcasing how to query Azure Cosmos DB data with a serverless SQL pool, make sure that you create the following resources:
virtual-desktop Connect Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-web.md
When you sign in to the Remote Desktop Web client, you'll see your workspaces. A
| Azure environment | Workspace URL |
|--|--|
- | Azure cloud *(most common)* | `https://client.wvd.microsoft.com/arm/webclient/` |
- | Azure cloud (classic) | `https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html` |
- | Azure US Gov | `https://rdweb.wvd.azure.us/arm/webclient/` |
- | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/arm/webclient/` |
+ | Azure cloud *(most common)* | [https://client.wvd.microsoft.com/arm/webclient/](https://client.wvd.microsoft.com/arm/webclient/) |
+ | Azure cloud (classic) | [https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html](https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html) |
+ | Azure US Gov | [https://rdweb.wvd.azure.us/arm/webclient/](https://rdweb.wvd.azure.us/arm/webclient/) |
+ | Azure China 21Vianet | [https://rdweb.wvd.azure.cn/arm/webclient/](https://rdweb.wvd.azure.cn/arm/webclient/) |
1. Sign in with your user account. Once you've signed in successfully, your workspaces should show the desktops and applications that have been made available to you by your admin.
virtual-desktop Remote Desktop Clients Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/remote-desktop-clients-overview.md
Title: Remote Desktop clients for Azure Virtual Desktop - Azure Virtual Desktop
description: Overview of the Remote Desktop clients you can use to connect to Azure Virtual Desktop. Previously updated : 10/04/2022 Last updated : 11/22/2022
There are many features you can use to enhance your remote experience, such as:
Some features are only available with certain clients, so it's important to check [Compare the features of the Remote Desktop clients](../compare-remote-desktop-clients.md?toc=%2Fazure%2Fvirtual-desktop%2Fusers%2Ftoc.json) to understand the differences when connecting to Azure Virtual Desktop. > [!TIP]
-> You can also use most versions of the Remote Desktop client to also connect to [Remote Desktop Services](/windows-server/remote/remote-desktop-services/welcome-to-rds) in Windows Server or to a remote PC, as well as to Azure Virtual Desktop. If you want information on Remote Desktop Services instead, see [Remote Desktop clients for Remote Desktop Services](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
+> You can use most versions of the Remote Desktop client to connect to [Remote Desktop Services](/windows-server/remote/remote-desktop-services/welcome-to-rds) in Windows Server or to a remote PC, as well as to Azure Virtual Desktop. If you'd prefer to use Remote Desktop Services instead, learn more at [Remote Desktop clients for Remote Desktop Services](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
Here's a list of the Remote Desktop client apps and our documentation for connecting to Azure Virtual Desktop, where you can find download links, what's new, and learn how to install and use each client.
virtual-machine-scale-sets Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/azure-hybrid-benefit-linux.md
Title: Azure Hybrid Benefit for Linux virtual machine scale sets
-description: Learn how Azure Hybrid Benefit can apply to virtual machine scale sets and save you money on Linux virtual machines in Azure.
+ Title: Azure Hybrid Benefit for Linux Virtual Machine Scale Sets
+description: Learn how Azure Hybrid Benefit can apply to Virtual Machine Scale Sets and save you money on Linux virtual machines in Azure.
documentationcenter: ''
Previously updated : 06/16/2022 Last updated : 11/22/2022
-# Explore Azure Hybrid Benefit for Linux virtual machine scale sets
+# Explore Azure Hybrid Benefit for Linux Virtual Machine Scale Sets
-Azure Hybrid Benefit can reduce the cost of running your Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) [virtual machine scale sets](./overview.md). Azure Hybrid Benefit for Linux virtual machine scale sets is generally available now. It's available for all RHEL and SLES pay-as-you-go images from Azure Marketplace.
+Azure Hybrid Benefit can reduce the cost of running your Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) [Virtual Machine Scale Sets](./overview.md). Azure Hybrid Benefit for Linux Virtual Machine Scale Sets is generally available now. It's available for all RHEL and SLES pay-as-you-go images from Azure Marketplace.
When you enable Azure Hybrid Benefit, the only fee that you incur is the cost of your scale set infrastructure. > [!NOTE]
-> This article focuses on virtual machine scale sets running in Uniform orchestration mode. We recommend using Flexible orchestration for new workloads. For more information, see [Orchestration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This article focuses on Virtual Machine Scale Sets running in Uniform orchestration mode. We recommend using Flexible orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-## What is Azure Hybrid Benefit for Linux virtual machine scale sets?
-Azure Hybrid Benefit allows you to switch your virtual machine scale sets to *bring-your-own-subscription (BYOS)* billing. You can use your cloud access licenses from Red Hat or SUSE for this. You can also switch pay-as-you-go instances to BYOS without the need to redeploy.
+## What is Azure Hybrid Benefit for Linux Virtual Machine Scale Sets?
+Azure Hybrid Benefit allows you to switch your Virtual Machine Scale Sets to *bring-your-own-subscription (BYOS)* billing. You can use your cloud access licenses from Red Hat or SUSE for this. You can also switch pay-as-you-go instances to BYOS without the need to redeploy.
-A virtual machine scale set deployed from pay-as-you-go Azure Marketplace images is charged both infrastructure and software fees when Azure Hybrid Benefit is enabled.
+A Virtual Machine Scale Set deployed from pay-as-you-go Azure Marketplace images is charged both infrastructure and software fees when Azure Hybrid Benefit is enabled.
:::image type="content" source="./media/azure-hybrid-benefit-linux/azure-hybrid-benefit-linux-cost.png" alt-text="Diagram that shows the effect of Azure Hybrid Benefit on costs for Linux virtual machines.":::
Azure dedicated host instances and SQL hybrid benefits are not eligible for Azur
## Get started
-### Enable Azure Hybrid Benefit for Red Hat virtual machine scale sets
+### Enable Azure Hybrid Benefit for Red Hat Virtual Machine Scale Sets
Azure Hybrid Benefit for RHEL is available to Red Hat customers who meet the following criteria:
To start using Azure Hybrid Benefit for Red Hat:
1. Enable your eligible RHEL subscriptions in Azure by using the [Red Hat Cloud Access customer interface](https://access.redhat.com/management/cloud). The Azure subscriptions that you provide during the Red Hat Cloud Access enablement process are permitted to use Azure Hybrid Benefit.
-1. Apply Azure Hybrid Benefit to any of your new or existing RHEL pay-as-you-go virtual machine scale sets. You can use the Azure portal or the Azure CLI to enable Azure Hybrid Benefit.
+1. Apply Azure Hybrid Benefit to any of your new or existing RHEL pay-as-you-go Virtual Machine Scale Sets. You can use the Azure portal or the Azure CLI to enable Azure Hybrid Benefit.
1. Follow the recommended [next steps](https://access.redhat.com/articles/5419341) to configure update sources for your RHEL virtual machines and for RHEL subscription compliance guidelines.
-### Enable Azure Hybrid Benefit for SUSE virtual machine scale sets
+### Enable Azure Hybrid Benefit for SUSE Virtual Machine Scale Sets
To start using Azure Hybrid Benefit for SUSE: 1. Register with the SUSE public cloud program.
-1. Apply Azure Hybrid Benefit to your newly created or existing virtual machine scale sets via the Azure portal or the Azure CLI.
+1. Apply Azure Hybrid Benefit to your newly created or existing Virtual Machine Scale Sets via the Azure portal or the Azure CLI.
1. Register your virtual machines that are receiving Azure Hybrid Benefit with a separate source of updates. ## Enable Azure Hybrid Benefit in the Azure portal
-### Enable Azure Hybrid Benefit during virtual machine scale set creation
+### Enable Azure Hybrid Benefit during Virtual Machine Scale Set creation
1. Go to the [Azure portal](https://portal.azure.com/).
-1. Go to **Create a virtual machine scale set**.
+1. Go to **Create a Virtual Machine Scale Set**.
- :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb.png" alt-text="Screenshot of the portal page for creating a virtual machine scale set.":::
+ :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb.png" alt-text="Screenshot of the portal page for creating a Virtual Machine Scale Set.":::
1. In the **Licensing** section, select the checkbox that asks if you want to use an existing RHEL subscription and the checkbox to confirm that your subscription is eligible. :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb-checkbox.png" alt-text="Screenshot of the Azure portal that shows checkboxes selected for licensing.":::
-1. Create a virtual machine scale set by following the next set of instructions.
+1. Create a Virtual Machine Scale Set by following the next set of instructions.
1. On the **Operating system** pane, confirm that the option is enabled. :::image type="content" source="./media/azure-hybrid-benefit-linux/create-vmss-ahb-os-blade.png" alt-text="Screenshot of the Azure Hybrid Benefit pane for the operating system after you create a virtual machine.":::
-### Enable Azure Hybrid Benefit in an existing virtual machine scale set
+### Enable Azure Hybrid Benefit in an existing Virtual Machine Scale Set
1. Go to the [Azure portal](https://portal.azure.com/).
-1. Open the page for the virtual machine scale set on which you want to apply the conversion.
+1. Open the page for the Virtual Machine Scale Set on which you want to apply the conversion.
1. Go to **Operating system** > **Licensing**. To enable the Azure Hybrid Benefit conversion, select **Yes**, and then select the confirmation checkbox. ![Screenshot of the Azure portal that shows the Licensing section of the pane for the operating system.](./media/azure-hybrid-benefit-linux/create-vmss-ahb-os-blade.png)
To start using Azure Hybrid Benefit for SUSE:
In the Azure CLI, you can use the `az vmss update` command to enable Azure Hybrid Benefit. For RHEL virtual machines, run the command with a `--license-type` parameter of `RHEL_BYOS`. For SLES virtual machines, run the command with a `--license-type` parameter of `SLES_BYOS`. ```azurecli
-# This will enable Azure Hybrid Benefit on a RHEL virtual machine scale set
+# This will enable Azure Hybrid Benefit on a RHEL Virtual Machine Scale Set
az vmss update --resource-group myResourceGroup --name myVmName --license-type RHEL_BYOS
-# This will enable Azure Hybrid Benefit on a SLES virtual machine scale set
+# This will enable Azure Hybrid Benefit on a SLES Virtual Machine Scale Set
az vmss update --resource-group myResourceGroup --name myVmName --license-type SLES_BYOS ```
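To confirm the change, you can read the license type back from the scale set model; a minimal sketch (the property path is an assumption based on the scale set model shape):

```azurecli
# Show the license type currently applied to the scale set model
az vmss show --resource-group myResourceGroup --name myVmName --query virtualMachineProfile.licenseType --output tsv
```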
az vmss update -g myResourceGroup -n myVmName --license-type None
> If your scale sets have a **Manual** upgrade policy, you'll have to manually upgrade your virtual machines by using the Azure CLI: > > ```azurecli
-> # This will bring virtual machine scale set instances up to date with the latest virtual machine scale set model
+> # This will bring Virtual Machine Scale Set instances up to date with the latest Virtual Machine Scale Set model
> az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids {instanceIds} > ```
-## Apply Azure Hybrid Benefit to virtual machine scale sets at creation time
-In addition to applying Azure Hybrid Benefit to existing pay-as-you-go virtual machine scale sets, you can invoke it when you create virtual machine scale sets. The benefits of doing so are threefold:
+## Apply Azure Hybrid Benefit to Virtual Machine Scale Sets at creation time
+In addition to applying Azure Hybrid Benefit to existing pay-as-you-go Virtual Machine Scale Sets, you can invoke it when you create Virtual Machine Scale Sets. The benefits of doing so are threefold:
-- You can provision both pay-as-you-go and BYOS virtual machine scale sets by using the same image and process.
+- You can provision both pay-as-you-go and BYOS Virtual Machine Scale Sets by using the same image and process.
- It enables future licensing mode changes. These changes aren't available with a BYOS-only image.-- The virtual machine scale sets will be connected to Red Hat Update Infrastructure (RHUI) by default, to help keep it up to date and secure. You can change the updated mechanism after deployment at any time.
+- The Virtual Machine Scale Sets will be connected to Red Hat Update Infrastructure (RHUI) by default, to help keep them up to date and secure. You can change the update mechanism after deployment at any time.
-To apply Azure Hybrid Benefit to virtual machine scale sets at creation time by using the Azure CLI, use one of the following commands:
+To apply Azure Hybrid Benefit to Virtual Machine Scale Sets at creation time by using the Azure CLI, use one of the following commands:
```azurecli
-# This will enable Azure Hybrid Benefit while creating a RHEL virtual machine scale set
+# This will enable Azure Hybrid Benefit while creating a RHEL Virtual Machine Scale Set
az vmss create --name myVmName --resource-group myResourceGroup --vnet-name myVnet --subnet mySubnet --image myRedHatImageURN --admin-username myAdminUserName --admin-password myPassword --instance-count myInstanceCount --license-type RHEL_BYOS
-# This will enable Azure Hybrid Benefit while creating a SLES virtual machine scale set
+# This will enable Azure Hybrid Benefit while creating a SLES Virtual Machine Scale Set
az vmss create --name myVmName --resource-group myResourceGroup --vnet-name myVnet --subnet mySubnet --image mySlesImageURN --admin-username myAdminUserName --admin-password myPassword --instance-count myInstanceCount --license-type SLES_BYOS ```
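The image values above are placeholders. One way to find a concrete pay-as-you-go image URN to substitute is to list Azure Marketplace images for the relevant publisher; `RedHat` and `SUSE` are the usual publisher names, but treat the offers and SKUs returned as something to verify for your region:

```azurecli
# List RHEL images from Red Hat; use a value from the Urn column with --image
az vm image list --publisher RedHat --offer RHEL --all --output table

# List SLES images from SUSE
az vm image list --publisher SUSE --all --output table
```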
virtual-machine-scale-sets Disk Encryption Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md
Previously updated : 10/10/2019 Last updated : 11/22/2022
-# Encrypt virtual machine scale sets with Azure Resource Manager
+# Encrypt Virtual Machine Scale Sets with Azure Resource Manager
-You can encrypt or decrypt Linux virtual machine scale sets using Azure Resource Manager templates.
+You can encrypt or decrypt Linux Virtual Machine Scale Sets using Azure Resource Manager templates.
## Deploying templates First, select the template that fits your scenario. -- [Enable disk encryption on a running Linux virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-vmss-linux)
+- [Enable disk encryption on a running Linux Virtual Machine Scale Set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-vmss-linux)
-- [Enable disk encryption on a running Windows virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-vmss-windows)
+- [Enable disk encryption on a running Windows Virtual Machine Scale Set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-vmss-windows)
- - [Deploy a virtual machine scale set of Linux VMs with a jumpbox and enables encryption on Linux virtual machine scale sets](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-vmss-linux-jumpbox)
 + - [Deploy a Virtual Machine Scale Set of Linux VMs with a jumpbox and enable encryption on Linux Virtual Machine Scale Sets](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-vmss-linux-jumpbox)
- - [Deploy a virtual machine scale set of Windows VMs with a jumpbox and enables encryption on Windows virtual machine scale sets](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-vmss-windows-jumpbox)
 + - [Deploy a Virtual Machine Scale Set of Windows VMs with a jumpbox and enable encryption on Windows Virtual Machine Scale Sets](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-vmss-windows-jumpbox)
-- [Disable disk encryption on a running Linux virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-vmss-linux)
+- [Disable disk encryption on a running Linux Virtual Machine Scale Set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-vmss-linux)
-- [Disable disk encryption on a running Windows virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-vmss-windows)
+- [Disable disk encryption on a running Windows Virtual Machine Scale Set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-vmss-windows)
Then follow these steps:
Then follow these steps:
3. Click **Purchase** to deploy the template. > [!NOTE]
-> Virtual machine scale set encryption is supported with API version `2017-03-30` onwards. If you are using templates to enable scale set encryption, update the API version for virtual machine scale sets and the ADE extension inside the template. See this [sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/encrypt-running-vmss-windows/azuredeploy.json) for more information.
+> Virtual Machine Scale Set encryption is supported with API version `2017-03-30` onwards. If you are using templates to enable scale set encryption, update the API version for Virtual Machine Scale Sets and the ADE extension inside the template. See this [sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/encrypt-running-vmss-windows/azuredeploy.json) for more information.
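If you'd rather script the deployment than use the portal, the same quickstart templates can be deployed with the Azure CLI. This is a sketch under assumptions: the raw-content URL pattern and the parameter names are illustrative, so check the chosen template's `azuredeploy.json` for the exact parameters it expects.

```azurecli
# Deploy the "encrypt a running Linux scale set" quickstart template into an existing resource group.
# The parameter names below are illustrative; check the template's azuredeploy.json for the real ones.
az deployment group create \
  --resource-group myResourceGroup \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/encrypt-running-vmss-linux/azuredeploy.json \
  --parameters vmssName=myScaleSet keyVaultName=myKeyVault keyVaultResourceGroup=myResourceGroup
```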
## Next steps -- [Azure Disk Encryption for virtual machine scale sets](disk-encryption-overview.md)-- [Encrypt a virtual machine scale sets using the Azure CLI](disk-encryption-cli.md)-- [Encrypt a virtual machine scale sets using the Azure PowerShell](disk-encryption-powershell.md)
+- [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md)
+- [Encrypt a Virtual Machine Scale Set using the Azure CLI](disk-encryption-cli.md)
+- [Encrypt a Virtual Machine Scale Set using Azure PowerShell](disk-encryption-powershell.md)
- [Create and configure a key vault for Azure Disk Encryption](disk-encryption-key-vault.md)-- [Use Azure Disk Encryption with virtual machine scale set extension sequencing](disk-encryption-extension-sequencing.md)
+- [Use Azure Disk Encryption with Virtual Machine Scale Set extension sequencing](disk-encryption-extension-sequencing.md)
virtual-machine-scale-sets Disk Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-cli.md
Title: Encrypt disks for Azure scale sets with Azure CLI
-description: Learn how to use Azure CLI to encrypt VM instances and attached disks in a Windows virtual machine scale set
+description: Learn how to use Azure CLI to encrypt VM instances and attached disks in a Windows Virtual Machine Scale Set
Previously updated : 10/15/2019 Last updated : 11/22/2022
-# Encrypt OS and attached data disks in a virtual machine scale set with the Azure CLI
+# Encrypt OS and attached data disks in a Virtual Machine Scale Set with the Azure CLI
-The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the Azure CLI to create and encrypt a virtual machine scale set. For more information on applying Azure Disk encryption to a virtual machine scale set, see [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md).
+The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the Azure CLI to create and encrypt a Virtual Machine Scale Set. For more information on applying Azure Disk encryption to a Virtual Machine Scale Set, see [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md).
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
Before you can create a scale set, create a resource group with [az group create
az group create --name myResourceGroup --location eastus ```
-Now create a virtual machine scale set with [az vmss create](/cli/azure/vmss). The following example creates a scale set named *myScaleSet* that is set to automatically update as changes are applied, and generates SSH keys if they don't exist in *~/.ssh/id_rsa*. A 32-Gb data disk is attached to each VM instance, and the Azure [Custom Script Extension](../virtual-machines/extensions/custom-script-linux.md) is used to prepare the data disks with [az vmss extension set](/cli/azure/vmss/extension):
+Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). The following example creates a scale set named *myScaleSet* that is set to automatically update as changes are applied, and generates SSH keys if they don't exist in *~/.ssh/id_rsa*. A 32-GB data disk is attached to each VM instance, and the Azure [Custom Script Extension](../virtual-machines/extensions/custom-script-linux.md) is used to prepare the data disks with [az vmss extension set](/cli/azure/vmss/extension):
```azurecli-interactive # Create a scale set with attached data disk
As the scale set is upgrade policy on the scale set created in an earlier step i
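The basic enable step, without a key encryption key, looks roughly like the following; a minimal sketch that assumes the key vault created earlier in this quickstart and encrypts data volumes only (the supported scenario for Linux scale sets):

```azurecli
# Enable Azure Disk Encryption on the scale set, storing keys in the existing key vault
az vmss encryption enable \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --disk-encryption-keyvault myKeyVault \
  --volume-type DATA
```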
### Enable encryption using KEK to wrap the key
-You can also use a Key Encryption Key for added security when encrypting the virtual machine scale set.
+You can also use a Key Encryption Key for added security when encrypting the Virtual Machine Scale Set.
```azurecli-interactive # Get the resource ID of the Key Vault
az vmss encryption disable --resource-group myResourceGroup --name myScaleSet
## Next steps -- In this article, you used the Azure CLI to encrypt a virtual machine scale set. You can also use [Azure PowerShell](disk-encryption-powershell.md) or [Azure Resource Manager templates](disk-encryption-azure-resource-manager.md).
+- In this article, you used the Azure CLI to encrypt a Virtual Machine Scale Set. You can also use [Azure PowerShell](disk-encryption-powershell.md) or [Azure Resource Manager templates](disk-encryption-azure-resource-manager.md).
- If you wish to have Azure Disk Encryption applied after another extension is provisioned, you can use [extension sequencing](virtual-machine-scale-sets-extension-sequencing.md). -- An end-to-end batch file example for Linux scale set data disk encryption can be found [here](https://gist.githubusercontent.com/ejarvi/7766dad1475d5f7078544ffbb449f29b/raw/03e5d990b798f62cf188706221ba6c0c7c2efb3f/enable-linux-vmss.bat). This example creates a resource group, Linux scale set, mounts a 5-GB data disk, and encrypts the virtual machine scale set.
+- An end-to-end batch file example for Linux scale set data disk encryption can be found [here](https://gist.githubusercontent.com/ejarvi/7766dad1475d5f7078544ffbb449f29b/raw/03e5d990b798f62cf188706221ba6c0c7c2efb3f/enable-linux-vmss.bat). This example creates a resource group and a Linux scale set, mounts a 5-GB data disk, and encrypts the Virtual Machine Scale Set.
virtual-machine-scale-sets Disk Encryption Extension Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-extension-sequencing.md
Title: Azure Disk Encryption and Azure virtual machine scale sets extension sequencing
+ Title: Azure Disk Encryption and Azure Virtual Machine Scale Sets extension sequencing
description: In this article, learn how to enable Microsoft Azure Disk Encryption for Linux IaaS VMs. Previously updated : 10/10/2019 Last updated : 11/22/2022
-# Use Azure Disk Encryption with virtual machine scale set extension sequencing
+# Use Azure Disk Encryption with Virtual Machine Scale Set extension sequencing
Extensions such as Azure Disk Encryption can be added to an Azure virtual machine scale set in a specified order. To do so, use [extension sequencing](virtual-machine-scale-sets-extension-sequencing.md).
For a more in-depth template, see:
## Next steps-- Learn more about extension sequencing: [Sequence extension provisioning in virtual machine scale sets](virtual-machine-scale-sets-extension-sequencing.md).
+- Learn more about extension sequencing: [Sequence extension provisioning in Virtual Machine Scale Sets](virtual-machine-scale-sets-extension-sequencing.md).
- Learn more about the `provisionAfterExtensions` property: [Microsoft.Compute virtualMachineScaleSets/extensions template reference](/azure/templates/microsoft.compute/2018-10-01/virtualmachinescalesets/extensions).-- [Azure Disk Encryption for virtual machine scale sets](disk-encryption-overview.md)-- [Encrypt a virtual machine scale sets using the Azure CLI](disk-encryption-cli.md)-- [Encrypt a virtual machine scale sets using the Azure PowerShell](disk-encryption-powershell.md)
+- [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md)
+- [Encrypt a Virtual Machine Scale Set using the Azure CLI](disk-encryption-cli.md)
+- [Encrypt a Virtual Machine Scale Set using Azure PowerShell](disk-encryption-powershell.md)
- [Create and configure a key vault for Azure Disk Encryption](disk-encryption-key-vault.md)
virtual-machine-scale-sets Disk Encryption Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-key-vault.md
Previously updated : 05/23/2022 Last updated : 11/22/2022 ms.devlang: azurecli
Connect-AzAccount
## Next steps - [Azure Disk Encryption overview](disk-encryption-overview.md)-- [Encrypt a virtual machine scale sets using the Azure CLI](disk-encryption-cli.md)-- [Encrypt a virtual machine scale sets using the Azure PowerShell](disk-encryption-powershell.md)
+- [Encrypt a Virtual Machine Scale Set using the Azure CLI](disk-encryption-cli.md)
+- [Encrypt a Virtual Machine Scale Set using Azure PowerShell](disk-encryption-powershell.md)
virtual-machine-scale-sets Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-overview.md
Previously updated : 10/10/2019 Last updated : 11/22/2022
Azure Disk Encryption provides volume encryption for the OS and data disks of your virtual machines, helping protect and safeguard your data to meet organizational security and compliance commitments. To learn more, see [Azure Disk Encryption: Linux VMs](../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption: Windows VMs](../virtual-machines/windows/disk-encryption-overview.md)
-Azure Disk Encryption can also be applied to Windows and Linux virtual machine scale sets, in these instances:
+Azure Disk Encryption can also be applied to Windows and Linux Virtual Machine Scale Sets, in these instances:
- Scale sets created with managed disks. Azure Disk Encryption is not supported for native (or unmanaged) disk scale sets.
- OS and data volumes in Windows scale sets.
- Data volumes in Linux scale sets. OS disk encryption is NOT supported at present for Linux scale sets.
-You can learn the fundamentals of Azure Disk Encryption for virtual machine scale sets in just a few minutes with the [Encrypt a virtual machine scale sets using the Azure CLI](disk-encryption-cli.md) or the [Encrypt a virtual machine scale sets using the Azure PowerShell](disk-encryption-powershell.md) tutorials.
+You can learn the fundamentals of Azure Disk Encryption for Virtual Machine Scale Sets in just a few minutes with the [Encrypt a Virtual Machine Scale Set using the Azure CLI](disk-encryption-cli.md) or the [Encrypt a Virtual Machine Scale Set using Azure PowerShell](disk-encryption-powershell.md) tutorials.
## Next steps -- [Encrypt a virtual machine scale sets using the Azure Resource Manager](disk-encryption-azure-resource-manager.md)
+- [Encrypt a Virtual Machine Scale Set using Azure Resource Manager](disk-encryption-azure-resource-manager.md)
- [Create and configure a key vault for Azure Disk Encryption](disk-encryption-key-vault.md)-- [Use Azure Disk Encryption with virtual machine scale set extension sequencing](disk-encryption-extension-sequencing.md)
+- [Use Azure Disk Encryption with Virtual Machine Scale Set extension sequencing](disk-encryption-extension-sequencing.md)
virtual-machine-scale-sets Disk Encryption Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-powershell.md
Title: Encrypt disks for Azure scale sets with Azure PowerShell
-description: Learn how to use Azure PowerShell to encrypt VM instances and attached disks in a Windows virtual machine scale set
+description: Learn how to use Azure PowerShell to encrypt VM instances and attached disks in a Windows Virtual Machine Scale Set
Previously updated : 10/15/2019 Last updated : 11/22/2022
-# Encrypt OS and attached data disks in a virtual machine scale set with Azure PowerShell
+# Encrypt OS and attached data disks in a Virtual Machine Scale Set with Azure PowerShell
-The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line or in scripts. This article shows you how to use Azure PowerShell to create and encrypt a virtual machine scale set. For more information on applying Azure Disk Encryption to a virtual machine scale set, see [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md).
+The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line or in scripts. This article shows you how to use Azure PowerShell to create and encrypt a Virtual Machine Scale Set. For more information on applying Azure Disk Encryption to a Virtual Machine Scale Set, see [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md).
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
First, set an administrator username and password for the VM instances with [Get
$cred = Get-Credential ```
-Now create a virtual machine scale set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80, as well as allow remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985:
+Now create a Virtual Machine Scale Set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80, as well as allow remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985:
```azurepowershell-interactive $vmssName="myScaleSet"
When prompted, type *y* to continue the disk encryption process on the scale set
### Enable encryption using KEK to wrap the key
-You can also use a Key Encryption Key for added security when encrypting the virtual machine scale set.
+You can also use a Key Encryption Key for added security when encrypting the Virtual Machine Scale Set.
```azurepowershell-interactive $diskEncryptionKeyVaultUrl=(Get-AzKeyVault -ResourceGroupName $rgName -Name $vaultName).VaultUri
Disable-AzVmssDiskEncryption -ResourceGroupName $rgName -VMScaleSetName $vmssNam
## Next steps -- In this article, you used Azure PowerShell to encrypt a virtual machine scale set. You can also use the [Azure CLI](disk-encryption-cli.md) or [Azure Resource Manager templates](disk-encryption-azure-resource-manager.md).
+- In this article, you used Azure PowerShell to encrypt a Virtual Machine Scale Set. You can also use the [Azure CLI](disk-encryption-cli.md) or [Azure Resource Manager templates](disk-encryption-azure-resource-manager.md).
- If you wish to have Azure Disk Encryption applied after another extension is provisioned, you can use [extension sequencing](virtual-machine-scale-sets-extension-sequencing.md).
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-cli.md
description: Learn how to create a Virtual Machine Scale Set in Flexible orchest
-- Previously updated : 11/01/2022+ Last updated : 11/22/2022
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-migration-resources.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-portal.md
description: Learn how to create a Virtual Machine Scale Set in Flexible orchest
-- Previously updated : 11/01/2022+ Last updated : 11/22/2022
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-powershell.md
description: Learn how to create a Virtual Machine Scale Set in Flexible orchest
-- Previously updated : 11/01/2022+ Last updated : 11/22/2022
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-rest-api.md
description: Learn how to create a Virtual Machine Scale Set in Flexible orchest
-- Previously updated : 11/01/2022+ Last updated : 11/22/2022
virtual-machine-scale-sets Instance Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Instance Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Orchestration Modes Api Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/orchestration-modes-api-comparison.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/overview.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machine Scale Sets description: Lists Azure Policy built-in policy definitions for Azure Virtual Machine Scale Sets. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/04/2022 Last updated : 11/22/2022+ # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machine-scale-sets Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/proximity-placement-groups.md
Previously updated : 11/01/2022-- Last updated : 11/22/2022++
virtual-machine-scale-sets Quick Create Bicep Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-bicep-windows.md
Previously updated : 11/01/2022 Last updated : 11/22/2022+
virtual-machine-scale-sets Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-cli.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-powershell.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Quick Create Template Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-linux.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Quick Create Template Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-windows.md
Title: Quickstart - Create a Windows virtual machine scale set with an Azure template
+ Title: Quickstart - Create a Windows Virtual Machine Scale Set with an Azure template
description: Learn how to quickly create a Windows virtual machine scale set with an Azure Resource Manager template that deploys a sample app and configures autoscale rules Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/share-images-across-tenants.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
You can refer to this [ARM template example](https://paste.microsoft.com/f84d2f8
You can set your Spot Priority Mix in the Scaling tab of the Virtual Machine Scale Sets creation process in the Azure portal. The following steps will instruct you on how to access this feature during that process. 1. Log in to the [Azure portal](https://portal.azure.com).
-1. In the search bar, search for and select **Virtual machine scale sets**.
-1. Select **Create** on the **Virtual machine scale sets** page.
+1. In the search bar, search for and select **Virtual Machine Scale Sets**.
+1. Select **Create** on the **Virtual Machine Scale Sets** page.
1. In the **Basics** tab, fill out the required fields and select **Flexible** as the **Orchestration** mode. 1. Fill out the **Disks** and **Networking** tabs. 1. In the **Scaling** tab, select the checkbox next to the *Scale with VMs and Spot VMs* option under the **Scale with VMs and discounted Spot VMs** section.
virtual-machine-scale-sets Spot Vm Size Recommendation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-vm-size-recommendation.md
Previously updated : 11/01/2022 Last updated : 11/22/2022
The Spot VM size recommendations tool is an easy way to view and select alternat
## Azure portal
-You can access Azure's size recommendations through the virtual machine scale sets creation process in the Azure portal. The following steps will instruct you on how to access this tool during that process.
+You can access Azure's size recommendations through the Virtual Machine Scale Sets creation process in the Azure portal. The following steps will instruct you on how to access this tool during that process.
1. Log in to the [Azure portal](https://portal.azure.com).
-1. In the search bar, search for and select **Virtual machine scale sets**.
-1. Select **Create** on the **Virtual machine scale sets** page.
+1. In the search bar, search for and select **Virtual Machine Scale Sets**.
+1. Select **Create** on the **Virtual Machine Scale Sets** page.
1. In the **Basics** tab, fill out the required fields. 1. Under **Instance details**, select **Run with Azure Spot discount**.
You can access Azure's size recommendations through the virtual machine scale se
:::image type="content" source="./media/spot-vm-size-recommendation/size-recommendations-tab.png" alt-text="Screenshot of the Size recommendations tab with a list of alternative VM sizes."::: 1. Make your selection and click **Save**.
-1. Continue through the virtual machine scale set creation process.
+1. Continue through the Virtual Machine Scale Set creation process.
## Next steps
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
Title: Tutorial - Autoscale a scale set with the Azure CLI
-description: Learn how to use the Azure CLI to automatically scale a virtual machine scale set as CPU demands increases and decreases
+description: Learn how to use the Azure CLI to automatically scale a Virtual Machine Scale Set as CPU demand increases and decreases
Previously updated : 05/18/2018- Last updated : 11/22/2022+
-# Tutorial: Automatically scale a virtual machine scale set with the Azure CLI
+# Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
When you create a scale set, you define the number of VM instances that you wish to run. As your application demand changes, you can automatically increase or decrease the number of VM instances. The ability to autoscale lets you keep up with customer demand or respond to application performance changes throughout the lifecycle of your app. In this tutorial you learn how to:
Create a resource group with [az group create](/cli/azure/group) as follows:
az group create --name myResourceGroup --location eastus ```
-Now create a virtual machine scale set with [az vmss create](/cli/azure/vmss). The following example creates a scale set with an instance count of *2*, and generates SSH keys if they do not exist:
+Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). The following example creates a scale set with an instance count of *2*, and generates SSH keys if they do not exist:
```azurecli-interactive az vmss create \
virtual-machine-scale-sets Tutorial Autoscale Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-powershell.md
Previously updated : 03/27/2018- Last updated : 11/22/2022+
-# Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell
+# Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
[!INCLUDE [requires-azurerm](../../includes/requires-azurerm.md)]
$myScaleSet = "myScaleSet"
$myLocation = "East US" ```
-Now create a virtual machine scale set with [New-AzureRmVmss](/powershell/module/azurerm.compute/new-azurermvmss). To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80, as well as allow remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985. When prompted, provide your own desired administrative credentials for the VM instances in the scale set:
+Now create a Virtual Machine Scale Set with [New-AzureRmVmss](/powershell/module/azurerm.compute/new-azurermvmss). To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80, as well as allow remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985. When prompted, provide your own desired administrative credentials for the VM instances in the scale set:
```azurepowershell-interactive New-AzureRmVmss `
virtual-machine-scale-sets Tutorial Autoscale Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-template.md
Title: Tutorial - Autoscale a scale set with Azure templates
-description: Learn how to use Azure Resource Manager templates to automatically scale a virtual machine scale set as CPU demands increases and decreases
+description: Learn how to use Azure Resource Manager templates to automatically scale a Virtual Machine Scale Set as CPU demand increases and decreases
Previously updated : 03/27/2018- Last updated : 11/22/2022+
-# Tutorial: Automatically scale a virtual machine scale set with an Azure template
+# Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
When you create a scale set, you define the number of VM instances that you wish to run. As your application demand changes, you can automatically increase or decrease the number of VM instances. The ability to autoscale lets you keep up with customer demand or respond to application performance changes throughout the lifecycle of your app. In this tutorial you learn how to:
First, create a resource group with [az group create](/cli/azure/group). The fol
az group create --name myResourceGroup --location eastus ```
-Now create a virtual machine scale set with [az deployment group create](/cli/azure/deployment/group). When prompted, provide your own username, such as *azureuser*, and password that is used as the credentials for each VM instance:
+Now create a Virtual Machine Scale Set with [az deployment group create](/cli/azure/deployment/group). When prompted, provide your own username, such as *azureuser*, and password that is used as the credentials for each VM instance:
```azurecli-interactive az deployment group create \
virtual-machine-scale-sets Tutorial Create And Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-cli.md
Title: 'Tutorial: Create & manage a virtual machine scale set ΓÇô Azure CLI'
-description: Learn how to use the Azure CLI to create a virtual machine scale set, along with some common management tasks such as how to start and stop an instance, or change the scale set capacity.
+ Title: 'Tutorial: Create & manage a Virtual Machine Scale Set ΓÇô Azure CLI'
+description: Learn how to use the Azure CLI to create a Virtual Machine Scale Set, along with some common management tasks such as how to start and stop an instance, or change the scale set capacity.
Previously updated : 03/27/2018 Last updated : 11/22/2022
-# Tutorial: Create and manage a virtual machine scale set with the Azure CLI
+# Tutorial: Create and manage a Virtual Machine Scale Set with the Azure CLI
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-A virtual machine scale set allows you to deploy and manage a set of identical, auto-scaling virtual machines. Throughout the lifecycle of a virtual machine scale set, you may need to run one or more management tasks. In this tutorial you learn how to:
+A Virtual Machine Scale Set allows you to deploy and manage a set of identical, auto-scaling virtual machines. Throughout the lifecycle of a Virtual Machine Scale Set, you may need to run one or more management tasks. In this tutorial you learn how to:
> [!div class="checklist"]
-> * Create and connect to a virtual machine scale set
+> * Create and connect to a Virtual Machine Scale Set
> * Select and use VM images > * View and use specific VM instance sizes > * Manually scale a scale set
A virtual machine scale set allows you to deploy and manage a set of identical,
## Create a resource group
-An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a virtual machine scale set. Create a resource group with the [az group create](/cli/azure/group) command. In this example, a resource group named *myResourceGroup* is created in the *eastus* region.
+An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a Virtual Machine Scale Set. Create a resource group with the [az group create](/cli/azure/group) command. In this example, a resource group named *myResourceGroup* is created in the *eastus* region.
```azurecli-interactive az group create --name myResourceGroup --location eastus
The resource group name is specified when you create or modify a scale set throu
## Create a scale set
-You create a virtual machine scale set with the [az vmss create](/cli/azure/vmss) command. The following example creates a scale set named *myScaleSet*, and generates SSH keys if they do not exist:
+You create a Virtual Machine Scale Set with the [az vmss create](/cli/azure/vmss) command. The following example creates a scale set named *myScaleSet*, and generates SSH keys if they do not exist:
```azurecli-interactive az vmss create \
az group delete --name myResourceGroup --no-wait --yes
In this tutorial, you learned how to perform some basic scale set creation and management tasks with the Azure CLI: > [!div class="checklist"]
-> * Create and connect to a virtual machine scale set
+> * Create and connect to a Virtual Machine Scale Set
> * Select and use VM images > * View and use specific VM sizes > * Manually scale a scale set
virtual-machine-scale-sets Tutorial Create And Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-powershell.md
Title: 'Tutorial: Create and manage a virtual machine scale set with Azure PowerShell'
-description: Learn how to use Azure PowerShell to create a virtual machine scale set, along with some common management tasks such as how to start and stop an instance, or change the scale set capacity.
+ Title: 'Tutorial: Create and manage a Virtual Machine Scale Set with Azure PowerShell'
+description: Learn how to use Azure PowerShell to create a Virtual Machine Scale Set, along with some common management tasks such as how to start and stop an instance, or change the scale set capacity.
Previously updated : 05/18/2018 Last updated : 11/22/2022
-# Tutorial: Create and manage a virtual machine scale set with Azure PowerShell
+# Tutorial: Create and manage a Virtual Machine Scale Set with Azure PowerShell
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-A virtual machine scale set allows you to deploy and manage a set of identical, auto-scaling virtual machines. Throughout the lifecycle of a virtual machine scale set, you may need to run one or more management tasks. In this tutorial you learn how to:
+A Virtual Machine Scale Set allows you to deploy and manage a set of identical, auto-scaling virtual machines. Throughout the lifecycle of a Virtual Machine Scale Set, you may need to run one or more management tasks. In this tutorial you learn how to:
> [!div class="checklist"]
-> * Create and connect to a virtual machine scale set
+> * Create and connect to a Virtual Machine Scale Set
> * Select and use VM images > * View and use specific VM instance sizes > * Manually scale a scale set
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a resource group
-An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a virtual machine scale set. Create a resource group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command. In this example, a resource group named *myResourceGroup* is created in the *EastUS* region.
+An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a Virtual Machine Scale Set. Create a resource group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command. In this example, a resource group named *myResourceGroup* is created in the *EastUS* region.
```azurepowershell-interactive New-AzResourceGroup -ResourceGroupName "myResourceGroup" -Location "EastUS"
First, set an administrator username and password for the VM instances with [Get
$cred = Get-Credential ```
-Now create a virtual machine scale set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80, as well as allow remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985:
+Now create a Virtual Machine Scale Set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80, as well as allow remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985:
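A minimal sketch of the quick-create call is shown below; the resource names are placeholders, and `$cred` is the credential object created with the earlier `Get-Credential` call.

```azurepowershell-interactive
# Sketch: quick-create a scale set plus a virtual network, public IP, and load balancer
New-AzVmss `
  -ResourceGroupName "myResourceGroup" `
  -Location "EastUS" `
  -VMScaleSetName "myScaleSet" `
  -VirtualNetworkName "myVnet" `
  -SubnetName "mySubnet" `
  -PublicIpAddressName "myPublicIPAddress" `
  -LoadBalancerName "myLoadBalancer" `
  -UpgradePolicyMode "Automatic" `
  -Credential $cred
```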
```azurepowershell-interactive New-AzVmss `
Get-AzVmssVM -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -
>[!IMPORTANT] >Exposing the RDP port 3389 is only recommended for testing. For production environments, we recommend using a VPN or private connection.
-To allow access using remote desktop, create a network security group with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) and [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). For more information, see [Networking for Azure virtual machine scale sets](virtual-machine-scale-sets-networking.md).
+To allow access using remote desktop, create a network security group with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) and [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). For more information, see [Networking for Azure Virtual Machine Scale Sets](virtual-machine-scale-sets-networking.md).
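As a rough sketch of that pattern (the rule name, priority, and NSG name are placeholders, and associating the NSG with the scale set's subnet is omitted here):

```azurepowershell-interactive
# Allow inbound RDP (TCP 3389) - for test environments only
$rdpRule = New-AzNetworkSecurityRuleConfig `
  -Name "AllowRDP" `
  -Protocol Tcp `
  -Direction Inbound `
  -Priority 1000 `
  -SourceAddressPrefix * `
  -SourcePortRange * `
  -DestinationAddressPrefix * `
  -DestinationPortRange 3389 `
  -Access Allow

# Create a network security group that contains the RDP rule
$nsg = New-AzNetworkSecurityGroup `
  -ResourceGroupName "myResourceGroup" `
  -Location "EastUS" `
  -Name "myNsg" `
  -SecurityRules $rdpRule
```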
```azurepowershell-interactive # Get information about the scale set
Remove-AzResourceGroup -Name "myResourceGroup" -Force -AsJob
In this tutorial, you learned how to perform some basic scale set creation and management tasks with Azure PowerShell: > [!div class="checklist"]
-> * Create and connect to a virtual machine scale set
+> * Create and connect to a Virtual Machine Scale Set
> * Select and use VM images > * View and use specific VM sizes > * Manually scale a scale set
virtual-machine-scale-sets Tutorial Install Apps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-cli.md
Title: Tutorial - Install applications in a scale set with Azure CLI
-description: Learn how to use the Azure CLI to install applications into virtual machine scale sets with the Custom Script Extension
+description: Learn how to use the Azure CLI to install applications into Virtual Machine Scale Sets with the Custom Script Extension
Previously updated : 03/27/2018 Last updated : 11/22/2022
-# Tutorial: Install applications in virtual machine scale sets with the Azure CLI
+# Tutorial: Install applications in Virtual Machine Scale Sets with the Azure CLI
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
To run applications on virtual machine (VM) instances in a scale set, you first need to install the application components and required files. In a previous tutorial, you learned how to create and use a custom VM image to deploy your VM instances. This custom image included manual application installs and configurations. You can also automate the install of applications to a scale set after each VM instance is deployed, or update an application that already runs on a scale set. In this tutorial you learn how to:
Create a resource group with [az group create](/cli/azure/group). The following
az group create --name myResourceGroup --location eastus ```
-Now create a virtual machine scale set with [az vmss create](/cli/azure/vmss). The following example creates a scale set named *myScaleSet*, and generates SSH keys if they do not exist:
+Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). The following example creates a scale set named *myScaleSet*, and generates SSH keys if they do not exist:
```azurecli-interactive az vmss create \
virtual-machine-scale-sets Tutorial Install Apps Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-powershell.md
Title: Tutorial - Install applications in a scale set with Azure PowerShell
-description: Learn how to use Azure PowerShell to install applications into virtual machine scale sets with the Custom Script Extension
+description: Learn how to use Azure PowerShell to install applications into Virtual Machine Scale Sets with the Custom Script Extension
Previously updated : 11/08/2018 Last updated : 11/22/2022
-# Tutorial: Install applications in virtual machine scale sets with Azure PowerShell
+# Tutorial: Install applications in Virtual Machine Scale Sets with Azure PowerShell
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
To run applications on virtual machine (VM) instances in a scale set, you first need to install the application components and required files. In a previous tutorial, you learned how to create and use a custom VM image to deploy your VM instances. This custom image included manual application installs and configurations. You can also automate the install of applications to a scale set after each VM instance is deployed, or update an application that already runs on a scale set. In this tutorial you learn how to:
To see the Custom Script Extension in action, create a scale set that installs t
## Create a scale set
-Now create a virtual machine scale set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80. It also allows remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985. When prompted, you can set your own administrative credentials for the VM instances in the scale set:
+Now create a Virtual Machine Scale Set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80. It also allows remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985. When prompted, you can set your own administrative credentials for the VM instances in the scale set:
```azurepowershell-interactive New-AzVmss `
Each VM instance in the scale set downloads and runs the script from GitHub. In
## Allow traffic to application
-To allow access to the basic web application, create a network security group with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) and [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). For more information, see [Networking for Azure virtual machine scale sets](virtual-machine-scale-sets-networking.md).
+To allow access to the basic web application, create a network security group with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) and [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). For more information, see [Networking for Azure Virtual Machine Scale Sets](virtual-machine-scale-sets-networking.md).
```azurepowershell-interactive
virtual-machine-scale-sets Tutorial Install Apps Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-template.md
Title: Tutorial - Install apps in a scale set with Azure templates
-description: Learn how to use Azure Resource Manager templates to install applications into virtual machine scale sets with the Custom Script Extension
+description: Learn how to use Azure Resource Manager templates to install applications into Virtual Machine Scale Sets with the Custom Script Extension
Previously updated : 03/27/2018 Last updated : 11/22/2022
-# Tutorial: Install applications in virtual machine scale sets with an Azure template
+# Tutorial: Install applications in Virtual Machine Scale Sets with an Azure template
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
To run applications on virtual machine (VM) instances in a scale set, you first need to install the application components and required files. In a previous tutorial, you learned how to create and use a custom VM image to deploy your VM instances. This custom image included manual application installs and configurations. You can also automate the install of applications to a scale set after each VM instance is deployed, or update an application that already runs on a scale set. In this tutorial you learn how to:
To see the Custom Script Extension in action, create a scale set that installs t
## Create Custom Script Extension definition
-When you define a virtual machine scale set with an Azure template, the *Microsoft.Compute/virtualMachineScaleSets* resource provider can include a section on extensions. The *extensionsProfile* details what is applied to the VM instances in a scale set. To use the Custom Script Extension, you specify a publisher of *Microsoft.Azure.Extensions* and a type of *CustomScript*.
+When you define a Virtual Machine Scale Set with an Azure template, the *Microsoft.Compute/virtualMachineScaleSets* resource provider can include a section on extensions. The *extensionsProfile* details what is applied to the VM instances in a scale set. To use the Custom Script Extension, you specify a publisher of *Microsoft.Azure.Extensions* and a type of *CustomScript*.
The *fileUris* property is used to define the source install scripts or packages. To start the install process, the required scripts are defined in *commandToExecute*. The following example defines a sample script from GitHub that installs and configures the NGINX web server:
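A sketch of what such an extension definition generally looks like follows; the script URL, command, and handler version are placeholders and may differ from the original sample template.

```json
{
  "name": "customScript",
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "CustomScript",
    "typeHandlerVersion": "2.1",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "https://example.com/automate_nginx.sh"
      ],
      "commandToExecute": "bash automate_nginx.sh"
    }
  }
}
```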
Let's use the sample template to create a scale set and apply the Custom Script
az group create --name myResourceGroup --location eastus ```
-Now create a virtual machine scale set with [az deployment group create](/cli/azure/deployment/group). When prompted, provide your own username and password that is used as the credentials for each VM instance:
+Now create a Virtual Machine Scale Set with [az deployment group create](/cli/azure/deployment/group). When prompted, provide your own username and password that is used as the credentials for each VM instance:
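A minimal sketch of the deployment call is shown here; `azuredeploy.json` is a placeholder for the sample template, which the original tutorial references by URI.

```azurecli-interactive
# Deploy the scale set template into the resource group created earlier;
# required template parameters such as the admin username and password
# are prompted for interactively if not supplied.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json
```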
```azurecli-interactive az deployment group create \
virtual-machine-scale-sets Tutorial Use Custom Image Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-cli.md
Title: Tutorial - Use a custom VM image in a scale set with Azure CLI
-description: Learn how to use the Azure CLI to create a custom VM image that you can use to deploy a virtual machine scale set
+description: Learn how to use the Azure CLI to create a custom VM image that you can use to deploy a Virtual Machine Scale Set
Previously updated : 05/01/2020 Last updated : 11/22/2022+ -
-# Tutorial: Create and use a custom image for virtual machine scale sets with the Azure CLI
+
+# Tutorial: Create and use a custom image for Virtual Machine Scale Sets with the Azure CLI
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
When you create a scale set, you specify an image to be used when the VM instances are deployed. To reduce the number of tasks after VM instances are deployed, you can use a custom VM image. This custom VM image includes any required application installs or configurations. Any VM instances created in the scale set use the custom VM image and are ready to serve your application traffic. In this tutorial you learn how to:
virtual-machine-scale-sets Tutorial Use Custom Image Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-powershell.md
Title: Tutorial - Use a custom VM image in a scale set with Azure PowerShell
-description: Learn how to use Azure PowerShell to create a custom VM image that you can use to deploy a virtual machine scale set
+description: Learn how to use Azure PowerShell to create a custom VM image that you can use to deploy a Virtual Machine Scale Set
Previously updated : 05/04/2020 Last updated : 11/22/2022
-# Tutorial: Create and use a custom image for virtual machine scale sets with Azure PowerShell
+# Tutorial: Create and use a custom image for Virtual Machine Scale Sets with Azure PowerShell
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
When you create a scale set, you specify an image to be used when the VM instances are deployed. To reduce the number of tasks after VM instances are deployed, you can use a custom VM image. This custom VM image includes any required application installs or configurations. Any VM instances created in the scale set use the custom VM image and are ready to serve your application traffic. In this tutorial you learn how to:
virtual-machine-scale-sets Tutorial Use Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-disks-cli.md
Title: Tutorial - Create and use disks for scale sets with Azure CLI
-description: Learn how to use the Azure CLI to create and use Managed Disks with virtual machine scale set, including how to add, prepare, list, and detach disks.
+description: Learn how to use the Azure CLI to create and use Managed Disks with Virtual Machine Scale Set, including how to add, prepare, list, and detach disks.
Previously updated : 03/27/2018 Last updated : 11/22/2022
-# Tutorial: Create and use disks with virtual machine scale set with the Azure CLI
+# Tutorial: Create and use disks with Virtual Machine Scale Set with the Azure CLI
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-Virtual machine scale sets use disks to store the VM instance's operating system, applications, and data. As you create and manage a scale set, it is important to choose a disk size and configuration appropriate to the expected workload. This tutorial covers how to create and manage VM disks. In this tutorial you learn how to:
+Virtual Machine Scale Sets use disks to store the VM instance's operating system, applications, and data. As you create and manage a scale set, it is important to choose a disk size and configuration appropriate to the expected workload. This tutorial covers how to create and manage VM disks. In this tutorial you learn how to:
> [!div class="checklist"] > * OS disks and temporary disks
While the above table identifies max IOPS per disk, a higher level of performanc
## Create and attach disks You can create and attach disks when you create a scale set, or with an existing scale set.
-As of API version `2019-07-01`, you can set the size of the OS disk in a virtual machine scale set with the [storageProfile.osDisk.diskSizeGb](/rest/api/compute/virtualmachinescalesets/createorupdate#virtualmachinescalesetosdisk) property. After provisioning, you may have to expand or repartition the disk to make use of the whole space. Learn more about how to expand the volume in your OS in either [Windows](../virtual-machines/windows/expand-os-disk.md#expand-the-volume-in-the-operating-system) or [Linux](../virtual-machines/linux/expand-disks.md#expand-a-disk-partition-and-filesystem).
+As of API version `2019-07-01`, you can set the size of the OS disk in a Virtual Machine Scale Set with the [storageProfile.osDisk.diskSizeGb](/rest/api/compute/virtualmachinescalesets/createorupdate#virtualmachinescalesetosdisk) property. After provisioning, you may have to expand or repartition the disk to make use of the whole space. Learn more about how to expand the volume in your OS in either [Windows](../virtual-machines/windows/expand-os-disk.md#expand-the-volume-in-the-operating-system) or [Linux](../virtual-machines/linux/expand-disks.md#expand-a-disk-partition-and-filesystem).
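For orientation, the property sits under the scale set's virtual machine profile; a fragment of the model might look like the following. The 128 GB value is illustrative, and note that the REST property itself is cased `diskSizeGB`.

```json
"storageProfile": {
  "osDisk": {
    "createOption": "FromImage",
    "caching": "ReadWrite",
    "diskSizeGB": 128
  }
}
```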
### Attach disks at scale set creation First, create a resource group with the [az group create](/cli/azure/group) command. In this example, a resource group named *myResourceGroup* is created in the *eastus* region.
First, create a resource group with the [az group create](/cli/azure/group) comm
az group create --name myResourceGroup --location eastus ```
-Create a virtual machine scale set with the [az vmss create](/cli/azure/vmss) command. The following example creates a scale set named *myScaleSet*, and generates SSH keys if they do not exist. Two disks are created with the `--data-disk-sizes-gb` parameter. The first disk is *64* GB in size, and the second disk is *128* GB:
+Create a Virtual Machine Scale Set with the [az vmss create](/cli/azure/vmss) command. The following example creates a scale set named *myScaleSet*, and generates SSH keys if they do not exist. Two disks are created with the `--data-disk-sizes-gb` parameter. The first disk is *64* GB in size, and the second disk is *128* GB:
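A minimal sketch of the full invocation follows; the image alias and admin username are assumptions, and the two sizes passed to `--data-disk-sizes-gb` create the 64 GB and 128 GB data disks described above.

```azurecli-interactive
# Sketch: create the scale set with two empty data disks per instance
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --data-disk-sizes-gb 64 128
```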
```azurecli-interactive az vmss create \
virtual-machine-scale-sets Tutorial Use Disks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-disks-powershell.md
Title: Tutorial - Create and use disks for scale sets with Azure PowerShell
-description: Learn how to use Azure PowerShell to create and use Managed Disks with virtual machine scale sets, including how to add, prepare, list, and detach disks.
+description: Learn how to use Azure PowerShell to create and use Managed Disks with Virtual Machine Scale Sets, including how to add, prepare, list, and detach disks.
Previously updated : 03/27/2018 Last updated : 11/22/2022
-# Tutorial: Create and use disks with virtual machine scale set with Azure PowerShell
+# Tutorial: Create and use disks with Virtual Machine Scale Set with Azure PowerShell
> [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-Virtual machine scale sets use disks to store the VM instance's operating system, applications, and data. As you create and manage a scale set, it is important to choose a disk size and configuration appropriate to the expected workload. This tutorial covers how to create and manage VM disks. In this tutorial you learn how to:
+Virtual Machine Scale Sets use disks to store the VM instance's operating system, applications, and data. As you create and manage a scale set, it is important to choose a disk size and configuration appropriate to the expected workload. This tutorial covers how to create and manage VM disks. In this tutorial you learn how to:
> [!div class="checklist"] > * OS disks and temporary disks
While the above table identifies max IOPS per disk, a higher level of performanc
## Create and attach disks You can create and attach disks when you create a scale set, or with an existing scale set.
-As of API version `2019-07-01`, you can set the size of the OS disk in a virtual machine scale set with the [storageProfile.osDisk.diskSizeGb](/rest/api/compute/virtualmachinescalesets/createorupdate#virtualmachinescalesetosdisk) property. After provisioning, you may have to expand or repartition the disk to make use of the whole space. Learn more about how to expand the volume in your OS in either [Windows](../virtual-machines/windows/expand-os-disk.md#expand-the-volume-in-the-operating-system) or [Linux](../virtual-machines/linux/expand-disks.md#expand-a-disk-partition-and-filesystem).
+As of API version `2019-07-01`, you can set the size of the OS disk in a Virtual Machine Scale Set with the [storageProfile.osDisk.diskSizeGb](/rest/api/compute/virtualmachinescalesets/createorupdate#virtualmachinescalesetosdisk) property. After provisioning, you may have to expand or repartition the disk to make use of the whole space. Learn more about how to expand the volume in your OS in either [Windows](../virtual-machines/windows/expand-os-disk.md#expand-the-volume-in-the-operating-system) or [Linux](../virtual-machines/linux/expand-disks.md#expand-a-disk-partition-and-filesystem).
### Attach disks at scale set creation
-Create a virtual machine scale set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). When prompted, provide a username and password for the VM instances. To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80, as well as allow remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985.
+Create a Virtual Machine Scale Set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). When prompted, provide a username and password for the VM instances. To distribute traffic to the individual VM instances, a load balancer is also created. The load balancer includes rules to distribute traffic on TCP port 80, as well as allow remote desktop traffic on TCP port 3389 and PowerShell remoting on TCP port 5985.
Two disks are created with the `-DataDiskSizeGb` parameter. The first disk is *64* GB in size, and the second disk is *128* GB. When prompted, provide your own desired administrative credentials for the VM instances in the scale set:
virtual-machine-scale-sets Use Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/use-spot.md
Title: Create a scale set that uses Azure Spot Virtual Machines
-description: Learn how to create Azure virtual machine scale sets that use Azure Spot Virtual Machines to save on costs.
+description: Learn how to create Azure Virtual Machine Scale Sets that use Azure Spot Virtual Machines to save on costs.
Previously updated : 10/22/2021- Last updated : 11/22/2022+
-# Azure Spot Virtual Machines for virtual machine scale sets
+# Azure Spot Virtual Machines for Virtual Machine Scale Sets
Using Azure Spot Virtual Machines on scale sets allows you to take advantage of our unused capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict Azure Spot Virtual Machine instances. Therefore, Azure Spot Virtual Machine instances are great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large compute workloads, and more.
-The amount of available capacity can vary based on size, region, time of day, and more. When deploying Azure Spot Virtual Machine instances on scale sets, Azure will allocate the instance only if there is capacity available, but there is no SLA for these instances. An Azure Spot Virtual machine scale set is deployed in a single fault domain and offers no high availability guarantees.
+The amount of available capacity can vary based on size, region, time of day, and more. When deploying Azure Spot Virtual Machine instances on scale sets, Azure will allocate the instance only if there is capacity available, but there is no SLA for these instances. An Azure Spot Virtual Machine Scale Set is deployed in a single fault domain and offers no high availability guarantees.
## Limitations
When creating a scale set using Azure Spot Virtual Machines, you can set the evi
The *Deallocate* policy moves your evicted instances to the stopped-deallocated state allowing you to redeploy evicted instances. However, there is no guarantee that the allocation will succeed. The deallocated VMs will count against your scale set instance quota and you will be charged for your underlying disks.
-If you would like your instances to be deleted when they are evicted, you can set the eviction policy to *delete*. With the eviction policy set to delete, you can create new VMs by increasing the scale set instance count property. The evicted VMs are deleted together with their underlying disks, and therefore you will not be charged for the storage. You can also use the auto-scaling feature of scale sets to automatically try and compensate for evicted VMs, however, there is no guarantee that the allocation will succeed. It is recommended you only use the autoscale feature on Azure Spot Virtual machine scale sets when you set the eviction policy to delete to avoid the cost of your disks and hitting quota limits.
+If you would like your instances to be deleted when they are evicted, you can set the eviction policy to *delete*. With the eviction policy set to delete, you can create new VMs by increasing the scale set instance count property. The evicted VMs are deleted together with their underlying disks, so you are not charged for the storage. You can also use the autoscale feature of scale sets to automatically try to compensate for evicted VMs; however, there is no guarantee that the allocation will succeed. It is recommended that you use the autoscale feature on Azure Spot Virtual Machine Scale Sets only when the eviction policy is set to delete, to avoid the cost of your disks and the risk of hitting quota limits.
Users can opt in to receive in-VM notifications through [Azure Scheduled Events](../virtual-machines/linux/scheduled-events.md). This will notify you if your VMs are being evicted and you will have 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction.
For more information, see [Testing a simulated eviction notification](../virtual
**A:** Yes, you will be able to submit the request to increase your quota for Azure Spot Virtual Machines through the [standard quota request process](../azure-portal/supportability/per-vm-quota-requests.md).
-**Q:** Can I convert existing scale sets to Azure Spot Virtual machine scale sets?
+**Q:** Can I convert existing scale sets to Azure Spot Virtual Machine Scale Sets?
**A:** No, setting the `Spot` flag is only supported at creation time.
For more information, see [Testing a simulated eviction notification](../virtual
**A:** No, a scale set cannot support more than one priority type.
-**Q:** Can I use autoscale with Azure Spot Virtual machine scale sets?
+**Q:** Can I use autoscale with Azure Spot Virtual Machine Scale Sets?
-**A:** Yes, you can set autoscaling rules on your Azure Spot Virtual machine scale set. If your VMs are evicted, autoscale can try to create new Azure Spot Virtual Machines. Remember, you are not guaranteed this capacity though.
+**A:** Yes, you can set autoscaling rules on your Azure Spot Virtual Machine Scale Set. If your VMs are evicted, autoscale can try to create new Azure Spot Virtual Machines. Remember, you are not guaranteed this capacity though.
**Q:** Does autoscale work with both eviction policies (deallocate and delete)?
-**A:** Yes, however it is recommended that you set your eviction policy to delete when using autoscale. This is because deallocated instances are counted against your capacity count on the scale set. When using autoscale, you will likely hit your target instance count quickly due to the deallocated, evicted instances. Also, your scaling operations could be impacted by spot evictions. For example, virtual machine scale set instances could fall below the set min count due to multiple spot evictions during scaling operations.
+**A:** Yes; however, it is recommended that you set your eviction policy to delete when using autoscale, because deallocated instances count against your scale set capacity. When using autoscale, you will likely hit your target instance count quickly due to the deallocated, evicted instances. Your scaling operations can also be impacted by Spot evictions; for example, Virtual Machine Scale Set instances could fall below the configured minimum count due to multiple Spot evictions during scaling operations.
**Q:** Where can I post questions?
For more information, see [Testing a simulated eviction notification](../virtual
## Next steps
-Check out the [virtual machine scale set pricing page](https://azure.microsoft.com/pricing/details/virtual-machine-scale-sets/linux/) for pricing details.
+Check out the [Virtual Machine Scale Set pricing page](https://azure.microsoft.com/pricing/details/virtual-machine-scale-sets/linux/) for pricing details.
virtual-machine-scale-sets Virtual Machine Scale Sets Attached Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks.md
Title: Azure Virtual Machine Scale Sets Attached Data Disks
-description: Learn how to use attached data disks with virtual machine scale sets through outlines of specific use cases.
+description: Learn how to use attached data disks with Virtual Machine Scale Sets through outlines of specific use cases.
Previously updated : 4/25/2017 Last updated : 11/22/2022
-# Azure virtual machine scale sets and attached data disks
+# Azure Virtual Machine Scale Sets and attached data disks
-To expand your available storage, Azure [virtual machine scale sets](./index.yml) support VM instances with attached data disks. You can attach data disks when the scale set is created, or to an existing scale set.
+To expand your available storage, Azure [Virtual Machine Scale Sets](./index.yml) support VM instances with attached data disks. You can attach data disks when the scale set is created, or to an existing scale set.
> [!NOTE] > When you create a scale set with attached data disks, you need to mount and format the disks from within a VM to use them (just like for standalone Azure VMs). A convenient way to complete this process is to use a Custom Script Extension that calls a script to partition and format all the data disks on a VM. For examples of this, see [Azure CLI](tutorial-use-disks-cli.md#prepare-the-data-disks) or [Azure PowerShell](tutorial-use-disks-powershell.md#prepare-the-data-disks).
The rest of this article outlines specific use cases such as Service Fabric clus
## Create a Service Fabric cluster with attached data disks
-Each [node type](../service-fabric/service-fabric-cluster-nodetypes.md) in a [Service Fabric](../service-fabric/index.yml) cluster running in Azure is backed by a virtual machine scale set. Using an Azure Resource Manager template, you can attach data disks to the scale set(s) that make up the Service Fabric cluster. You can use an [existing template](https://github.com/Azure-Samples/service-fabric-cluster-templates) as a starting point. In the template, include a _dataDisks_ section in the _storageProfile_ of the _Microsoft.Compute/virtualMachineScaleSets_ resource(s) and deploy the template. The following example attaches a 128 GB data disk:
+Each [node type](../service-fabric/service-fabric-cluster-nodetypes.md) in a [Service Fabric](../service-fabric/index.yml) cluster running in Azure is backed by a Virtual Machine Scale Set. Using an Azure Resource Manager template, you can attach data disks to the scale set(s) that make up the Service Fabric cluster. You can use an [existing template](https://github.com/Azure-Samples/service-fabric-cluster-templates) as a starting point. In the template, include a _dataDisks_ section in the _storageProfile_ of the _Microsoft.Compute/virtualMachineScaleSets_ resource(s) and deploy the template. The following example attaches a 128 GB data disk:
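A sketch of such a *dataDisks* entry for a 128 GB managed disk is shown below; the LUN and storage account type are illustrative choices rather than values from the original template.

```json
"dataDisks": [
  {
    "lun": 0,
    "createOption": "Empty",
    "diskSizeGB": 128,
    "managedDisk": {
      "storageAccountType": "Standard_LRS"
    }
  }
]
```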
```json "dataDisks": [
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
Previously updated : 10/19/2022 Last updated : 11/22/2022
Automatic repairs doesn't currently support scenarios where a VM instance is mar
## How do automatic instance repairs work?
-Automatic instance repair feature relies on health monitoring of individual instances in a scale set. VM instances in a scale set can be configured to emit application health status using either the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). If an instance is found to be unhealthy, then the scale set performs repair action by deleting the unhealthy instance and creating a new one to replace it. The latest virtual machine scale set model is used to create the new instance. This feature can be enabled in the virtual machine scale set model by using the *automaticRepairsPolicy* object.
+The automatic instance repair feature relies on health monitoring of individual instances in a scale set. VM instances in a scale set can be configured to emit application health status using either the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). If an instance is found to be unhealthy, the scale set performs a repair action by deleting the unhealthy instance and creating a new one to replace it. The latest Virtual Machine Scale Set model is used to create the new instance. This feature can be enabled in the Virtual Machine Scale Set model by using the *automaticRepairsPolicy* object.
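In the scale set model that object is a small block; a sketch with a 30-minute grace period (an ISO 8601 duration) looks like this:

```json
"automaticRepairsPolicy": {
  "enabled": true,
  "gracePeriod": "PT30M"
}
```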
### Batching
When an instance goes through a state change operation because of a PUT, PATCH,
### Suspension of Repairs
-Virtual Machine Scale Sets provide the capability to temporarily suspend automatic instance repairs if needed. The *serviceState* for automatic repairs under the property *orchestrationServices* in instance view of virtual machine scale set shows the current state of the automatic repairs. When a scale set is opted into automatic repairs, the value of parameter *serviceState* is set to *Running*. When the automatic repairs are suspended for a scale set, the parameter *serviceState* is set to *Suspended*. If *automaticRepairsPolicy* is defined on a scale set but the automatic repairs feature isn't enabled, then the parameter *serviceState* is set to *Not Running*.
+Virtual Machine Scale Sets provide the capability to temporarily suspend automatic instance repairs if needed. The *serviceState* for automatic repairs under the property *orchestrationServices* in instance view of Virtual Machine Scale Set shows the current state of the automatic repairs. When a scale set is opted into automatic repairs, the value of parameter *serviceState* is set to *Running*. When the automatic repairs are suspended for a scale set, the parameter *serviceState* is set to *Suspended*. If *automaticRepairsPolicy* is defined on a scale set but the automatic repairs feature isn't enabled, then the parameter *serviceState* is set to *Not Running*.
If newly created instances for replacing the unhealthy ones in a scale set continue to remain unhealthy even after repeatedly performing repair operations, then as a safety measure the platform updates the *serviceState* for automatic repairs to *Suspended*. You can resume the automatic repairs again by setting the value of *serviceState* for automatic repairs to *Running*. Detailed instructions are provided in the section on [viewing and updating the service state of automatic repairs policy](#viewing-and-updating-the-service-state-of-automatic-instance-repairs-policy) for your scale set.
If the [terminate notification](./virtual-machine-scale-sets-terminate-notificat
## Enabling automatic repairs policy when creating a new scale set
-For enabling automatic repairs policy while creating a new scale set, ensure that all the [requirements](#requirements-for-using-automatic-instance-repairs) for opting in to this feature are met. The application endpoint should be correctly configured for scale set instances to avoid triggering unintended repairs while the endpoint is getting configured. For newly created scale sets, any instance repairs are performed only after the grace period completes. To enable the automatic instance repair in a scale set, use *automaticRepairsPolicy* object in the virtual machine scale set model.
+To enable the automatic repairs policy while creating a new scale set, ensure that all the [requirements](#requirements-for-using-automatic-instance-repairs) for opting in to this feature are met. The application endpoint should be correctly configured for scale set instances to avoid triggering unintended repairs while the endpoint is being configured. For newly created scale sets, any instance repairs are performed only after the grace period completes. To enable automatic instance repairs in a scale set, use the *automaticRepairsPolicy* object in the Virtual Machine Scale Set model.
-You can also use this [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-automatic-repairs-slb-health-probe) to deploy a virtual machine scale set. The scale set has a load balancer health probe and automatic instance repairs enabled with a grace period of 30 minutes.
+You can also use this [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-automatic-repairs-slb-health-probe) to deploy a Virtual Machine Scale Set. The scale set has a load balancer health probe and automatic instance repairs enabled with a grace period of 30 minutes.
### Azure portal The following steps enable the automatic repairs policy when creating a new scale set.
-1. Go to **Virtual machine scale sets**.
+1. Go to **Virtual Machine Scale Sets**.
1. Select **+ Add** to create a new scale set. 1. Go to the **Health** tab. 1. Locate the **Health** section.
PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupNa
### Azure PowerShell
-The automatic instance repair feature can be enabled while creating a new scale set by using the [New-AzVmssConfig](/powershell/module/az.compute/new-azvmssconfig) cmdlet. This sample script walks through the creation of a scale set and associated resources using the configuration file: [Create a complete virtual machine scale set](./scripts/powershell-sample-create-complete-scale-set.md). You can configure automatic instance repairs policy by adding the parameters *EnableAutomaticRepair* and *AutomaticRepairGracePeriod* to the configuration object for creating the scale set. The following example enables the feature with a grace period of 30 minutes.
+The automatic instance repair feature can be enabled while creating a new scale set by using the [New-AzVmssConfig](/powershell/module/az.compute/new-azvmssconfig) cmdlet. This sample script walks through the creation of a scale set and associated resources using the configuration file: [Create a complete Virtual Machine Scale Set](./scripts/powershell-sample-create-complete-scale-set.md). You can configure automatic instance repairs policy by adding the parameters *EnableAutomaticRepair* and *AutomaticRepairGracePeriod* to the configuration object for creating the scale set. The following example enables the feature with a grace period of 30 minutes.
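A sketch of the configuration call with the two repair parameters added is shown below; the SKU, capacity, and location values are assumptions, not necessarily the sample's exact values.

```azurepowershell-interactive
# Sketch: build a scale set configuration with automatic instance repairs enabled
$vmssConfig = New-AzVmssConfig `
  -Location "EastUS" `
  -SkuCapacity 3 `
  -SkuName "Standard_DS2_v2" `
  -UpgradePolicyMode "Automatic" `
  -EnableAutomaticRepair $true `
  -AutomaticRepairGracePeriod "PT30M"
```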
```azurepowershell-interactive New-AzVmssConfig `
The above example uses an existing load balancer and health probe for monitoring
## Enabling automatic repairs policy when updating an existing scale set
-Before enabling automatic repairs policy in an existing scale set, ensure that all the [requirements](#requirements-for-using-automatic-instance-repairs) for opting in to this feature are met. The application endpoint should be correctly configured for scale set instances to avoid triggering unintended repairs while the endpoint is getting configured. To enable the automatic instance repair in a scale set, use *automaticRepairsPolicy* object in the virtual machine scale set model.
+Before enabling the automatic repairs policy in an existing scale set, ensure that all the [requirements](#requirements-for-using-automatic-instance-repairs) for opting in to this feature are met. The application endpoint should be correctly configured for scale set instances to avoid triggering unintended repairs while the endpoint is being configured. To enable automatic instance repairs in a scale set, use the *automaticRepairsPolicy* object in the Virtual Machine Scale Set model.
After updating the model of an existing scale set, ensure that the latest model is applied to all the instances of the scale. Refer to the instruction on [how to bring VMs up-to-date with the latest scale set model](./virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model).
After updating the model of an existing scale set, ensure that the latest model
You can modify the automatic repairs policy of an existing scale set through the Azure portal.
-1. Go to an existing virtual machine scale set.
+1. Go to an existing Virtual Machine Scale Set.
1. Under **Settings** in the menu on the left, select **Health and repair**. 1. Enable the **Monitor application health** option. 1. Locate the **Automatic repair policy** section.
az vmss update \
### REST API
-Use [Get Instance View](/rest/api/compute/virtualmachinescalesets/getinstanceview) with API version 2019-12-01 or higher for virtual machine scale set to view the *serviceState* for automatic repairs under the property *orchestrationServices*.
+Use [Get Instance View](/rest/api/compute/virtualmachinescalesets/getinstanceview) with API version 2019-12-01 or higher for Virtual Machine Scale Set to view the *serviceState* for automatic repairs under the property *orchestrationServices*.
```http GET '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/instanceView?api-version=2019-12-01'
GET '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/provider
} ```
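The portion of the instance view response that is relevant to automatic repairs has roughly the following shape; the values shown are illustrative.

```json
"orchestrationServices": [
  {
    "serviceName": "AutomaticRepairs",
    "serviceState": "Running"
  }
]
```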
-Use *setOrchestrationServiceState* API with API version 2019-12-01 or higher on a virtual machine scale set to set the state of automatic repairs. Once the scale set is opted into the automatic repairs feature, you can use this API to suspend or resume automatic repairs for your scale set.
+Use *setOrchestrationServiceState* API with API version 2019-12-01 or higher on a Virtual Machine Scale Set to set the state of automatic repairs. Once the scale set is opted into the automatic repairs feature, you can use this API to suspend or resume automatic repairs for your scale set.
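Based on the service states described above, the request body is a small JSON payload along these lines; use `"action": "Resume"` to re-enable repairs after a suspension.

```json
{
  "serviceName": "AutomaticRepairs",
  "action": "Suspend"
}
```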
```http POST '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/setOrchestrationServiceState?api-version=2019-12-01'
Set-AzVmssOrchestrationServiceState `
**Failure to enable automatic repairs policy**
-If you get a 'BadRequest' error with a message stating "Couldn't find member 'automaticRepairsPolicy' on object of type 'properties'", then check the API version used for virtual machine scale set. API version 2018-10-01 or higher is required for this feature.
+If you get a 'BadRequest' error with a message stating "Couldn't find member 'automaticRepairsPolicy' on object of type 'properties'", then check the API version used for Virtual Machine Scale Set. API version 2018-10-01 or higher is required for this feature.
**Instance not getting repaired even when policy is enabled**
The instance could be in grace period. This period is the amount of time to wait
**Viewing application health status for scale set instances**
-You can use the [Get Instance View API](/rest/api/compute/virtualmachinescalesetvms/getinstanceview) for instances in a virtual machine scale set to view the application health status. With Azure PowerShell, you can use the cmdlet [Get-AzVmssVM](/powershell/module/az.compute/get-azvmssvm) with the *-InstanceView* flag. The application health status is provided under the property *vmHealth*.
+You can use the [Get Instance View API](/rest/api/compute/virtualmachinescalesetvms/getinstanceview) for instances in a Virtual Machine Scale Set to view the application health status. With Azure PowerShell, you can use the cmdlet [Get-AzVmssVM](/powershell/module/az.compute/get-azvmssvm) with the *-InstanceView* flag. The application health status is provided under the property *vmHealth*.
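As a quick sketch of the PowerShell path (instance ID `0` is an assumption; substitute any instance in your scale set):

```azurepowershell-interactive
# Read the reported application health for a single scale set instance;
# the health status surfaces under the vmHealth property of the instance view.
$instanceView = Get-AzVmssVM `
  -ResourceGroupName "myResourceGroup" `
  -VMScaleSetName "myScaleSet" `
  -InstanceId "0" `
  -InstanceView

$instanceView.VmHealth.Status
```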
In the Azure portal, you can see the health status as well. Go to an existing scale set, select **Instances** from the menu on the left, and look at the **Health state** column for the health status of each scale set instance.
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Title: Automatic OS image upgrades with Azure virtual machine scale sets
+ Title: Automatic OS image upgrades with Azure Virtual Machine Scale Sets
description: Learn how to automatically upgrade the OS image on VM instances in a scale set Previously updated : 08/24/2022- Last updated : 11/22/2022+
-# Azure virtual machine scale set automatic OS image upgrades
+# Azure Virtual Machine Scale Set automatic OS image upgrades
Enabling automatic OS image upgrades on your scale set helps ease update management by safely and automatically upgrading the OS disk for all instances in the scale set.
The availability-first model for platform orchestrated updates described below e
**Within a 'set':** - All VMs in a common scale set are not updated concurrently. -- VMs in a common virtual machine scale set are grouped in batches and updated within Update Domain boundaries as described below.
+- VMs in a common Virtual Machine Scale Set are grouped in batches and updated within Update Domain boundaries as described below.
The platform orchestrated updates process is followed for rolling out supported OS platform image upgrades every month. For custom images through Azure Compute Gallery, an image upgrade is only kicked off for a particular Azure region when the new image is published and [replicated](../virtual-machines/azure-compute-gallery.md#replication) to the region of that scale set.
The recommended steps to recover VMs and re-enable automatic OS upgrade if there
* Deploy the updated scale set, which will update all VM instances including the failed ones. ## Using Application Health extension
-The Application Health extension is deployed inside a virtual machine scale set instance and reports on VM health from inside the scale set instance. You can configure the extension to probe on an application endpoint and update the status of the application on that instance. This instance status is checked by Azure to determine whether an instance is eligible for upgrade operations.
+The Application Health extension is deployed inside a Virtual Machine Scale Set instance and reports on VM health from inside the scale set instance. You can configure the extension to probe on an application endpoint and update the status of the application on that instance. This instance status is checked by Azure to determine whether an instance is eligible for upgrade operations.
As the extension reports health from within a VM, the extension can be used in situations where external probes such as Application Health Probes (that utilize custom Azure Load Balancer [probes](../load-balancer/load-balancer-custom-probe-overview.md)) can't be used.
For specific cases where you do not want to wait for the orchestrator to apply t
> Manual trigger of OS image upgrades does not provide automatic rollback capabilities. If an instance does not recover its health after an upgrade operation, its previous OS disk can't be restored. ### REST API
-Use the [Start OS Upgrade](/rest/api/compute/virtualmachinescalesetrollingupgrades/startosupgrade) API call to start a rolling upgrade to move all virtual machine scale set instances to the latest available image OS version. Instances that are already running the latest available OS version are not affected. The following example details how you can start a rolling OS upgrade on a scale set named *myScaleSet* in the resource group named *myResourceGroup*:
+Use the [Start OS Upgrade](/rest/api/compute/virtualmachinescalesetrollingupgrades/startosupgrade) API call to start a rolling upgrade to move all Virtual Machine Scale Set instances to the latest available image OS version. Instances that are already running the latest available OS version are not affected. The following example details how you can start a rolling OS upgrade on a scale set named *myScaleSet* in the resource group named *myResourceGroup*:
``` POST on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet/osRollingUpgrade?api-version=2021-03-01`
virtual-machine-scale-sets Virtual Machine Scale Sets Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md
Title: Overview of autoscale with Azure virtual machine scale sets
-description: Learn about the different ways that you can automatically scale an Azure virtual machine scale set based on performance or on a fixed schedule
+ Title: Overview of autoscale with Azure Virtual Machine Scale Sets
+description: Learn about the different ways that you can automatically scale an Azure Virtual Machine Scale Set based on performance or on a fixed schedule
Previously updated : 06/30/2020 Last updated : 11/22/2022
-# Overview of autoscale with Azure virtual machine scale sets
+# Overview of autoscale with Azure Virtual Machine Scale Sets
-An Azure virtual machine scale set can automatically increase or decrease the number of VM instances that run your application. This automated and elastic behavior reduces the management overhead to monitor and optimize the performance of your application. You create rules that define the acceptable performance for a positive customer experience. When those defined thresholds are met, autoscale rules take action to adjust the capacity of your scale set. You can also schedule events to automatically increase or decrease the capacity of your scale set at fixed times. This article provides an overview of which performance metrics are available and what actions autoscale can perform.
+An Azure Virtual Machine Scale Set can automatically increase or decrease the number of VM instances that run your application. This automated and elastic behavior reduces the management overhead to monitor and optimize the performance of your application. You create rules that define the acceptable performance for a positive customer experience. When those defined thresholds are met, autoscale rules take action to adjust the capacity of your scale set. You can also schedule events to automatically increase or decrease the capacity of your scale set at fixed times. This article provides an overview of which performance metrics are available and what actions autoscale can perform.
## Benefits of autoscale
You can create autoscale rules that use host-based metrics with one of the follo
- [Azure CLI](tutorial-autoscale-cli.md) - [Azure template](tutorial-autoscale-template.md)
-For information on how to manage your VM instances, see [Manage virtual machine scale sets with Azure PowerShell](./virtual-machine-scale-sets-manage-powershell.md).
+For information on how to manage your VM instances, see [Manage Virtual Machine Scale Sets with Azure PowerShell](./virtual-machine-scale-sets-manage-powershell.md).
To learn how to generate alerts when your autoscale rules trigger, see [Use autoscale actions to send email and webhook alert notifications in Azure Monitor](../azure-monitor/autoscale/autoscale-webhook-email.md). You can also [Use audit logs to send email and webhook alert notifications in Azure Monitor](../azure-monitor/alerts/alerts-log-webhook.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Autoscale Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-portal.md
Title: Autoscale virtual machine scale sets in the Azure portal
-description: How to create autoscale rules for virtual machine scale sets in the Azure portal
+ Title: Autoscale Virtual Machine Scale Sets in the Azure portal
+description: How to create autoscale rules for Virtual Machine Scale Sets in the Azure portal
Previously updated : 05/29/2018- Last updated : 11/22/2022+
-# Automatically scale a virtual machine scale set in the Azure portal
+# Automatically scale a Virtual Machine Scale Set in the Azure portal
When you create a scale set, you define the number of VM instances that you wish to run. As your application demand changes, you can automatically increase or decrease the number of VM instances. The ability to autoscale lets you keep up with customer demand or respond to application performance changes throughout the lifecycle of your app.
This article shows you how to create autoscale rules in the Azure portal that mo
## Prerequisites
-To create autoscale rules, you need an existing virtual machine scale set. You can create a scale set with the [Azure portal](quick-create-portal.md), [Azure PowerShell](quick-create-powershell.md), or [Azure CLI](quick-create-cli.md).
+To create autoscale rules, you need an existing Virtual Machine Scale Set. You can create a scale set with the [Azure portal](quick-create-portal.md), [Azure PowerShell](quick-create-powershell.md), or [Azure CLI](quick-create-cli.md).
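If you don't have a scale set yet, a minimal Azure CLI sketch such as the following can create one; the resource names, location, and image alias are illustrative and may need to be adjusted for your environment:
```
# Create a resource group and a scale set with two instances
az group create --name myResourceGroup --location eastus

az vmss create \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --image Ubuntu2204 \
    --instance-count 2 \
    --admin-username azureuser \
    --generate-ssh-keys
```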
## Create a rule to automatically scale out
To see how your autoscale rules are applied, select **Run history** across the t
## Next steps
-In this article, you learned how to use autoscale rules to scale horizontally and increase or decrease the *number* of VM instances in your scale set. You can also scale vertically to increase or decrease the VM instance *size*. For more information, see [Vertical autoscale with Virtual Machine Scale sets](virtual-machine-scale-sets-vertical-scale-reprovision.md).
+In this article, you learned how to use autoscale rules to scale horizontally and increase or decrease the *number* of VM instances in your scale set. You can also scale vertically to increase or decrease the VM instance *size*. For more information, see [Vertical autoscale with Virtual Machine Scale Sets](virtual-machine-scale-sets-vertical-scale-reprovision.md).
-For information on how to manage your VM instances, see [Manage virtual machine scale sets with Azure PowerShell](./virtual-machine-scale-sets-manage-powershell.md).
+For information on how to manage your VM instances, see [Manage Virtual Machine Scale Sets with Azure PowerShell](./virtual-machine-scale-sets-manage-powershell.md).
To learn how to generate alerts when your autoscale rules trigger, see [Use autoscale actions to send email and webhook alert notifications in Azure Monitor](../azure-monitor/autoscale/autoscale-webhook-email.md). You can also [Use audit logs to send email and webhook alert notifications in Azure Monitor](../azure-monitor/alerts/alerts-log-webhook.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md
Title: Deploy an application to an Azure virtual machine scale set
+ Title: Deploy an application to an Azure Virtual Machine Scale Set
description: Learn how to deploy applications to Linux and Windows virtual machine instances in a scale set Previously updated : 05/29/2018- Last updated : 11/22/2022+ ms.devlang: azurecli
-# Deploy your application on virtual machine scale sets
+# Deploy your application on Virtual Machine Scale Sets
To run applications on virtual machine (VM) instances in a scale set, you first need to install the application components and required files. This article introduces ways to build a custom VM image for instances in a scale set, or automatically run install scripts on existing VM instances. You also learn how to manage application or OS updates across a scale set.
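As one hedged example of the install-script approach, the Custom Script Extension can be applied to all instances in a scale set with the Azure CLI; the script URL and command below are placeholders, not part of this article:
```
# Run an install script on every Linux instance in the scale set
az vmss extension set \
    --resource-group myResourceGroup \
    --vmss-name myScaleSet \
    --name CustomScript \
    --publisher Microsoft.Azure.Extensions \
    --settings '{"fileUris":["https://example.com/install_app.sh"],"commandToExecute":"bash install_app.sh"}'
```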
virtual-machine-scale-sets Virtual Machine Scale Sets Design Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md
Title: Design Considerations for Azure Virtual Machine Scale Sets description: Learn about the design considerations for your Azure Virtual Machine Scale Sets. Compare scale sets features with VM features.
-keywords: linux virtual machine,virtual machine scale sets
+keywords: linux virtual machine,Virtual Machine Scale Sets
Previously updated : 06/25/2020 Last updated : 11/22/2022
While overprovisioning does improve provisioning success rates, it can cause con
If your scale set uses user-managed storage, and you turn off overprovisioning, you can have more than 20 VMs per storage account, but it is not recommended to go above 40 for IO performance reasons. ## Limits
-A scale set built on a Marketplace image (also known as a platform image) and configured to use Azure Managed Disks supports a capacity of up to 1,000 VMs. If you configure your scale set to support more than 100 VMs, not all scenarios work the same (for example load balancing). For more information, see [Working with large virtual machine scale sets](virtual-machine-scale-sets-placement-groups.md).
+A scale set built on a Marketplace image (also known as a platform image) and configured to use Azure Managed Disks supports a capacity of up to 1,000 VMs. If you configure your scale set to support more than 100 VMs, not all scenarios work the same (for example load balancing). For more information, see [Working with large Virtual Machine Scale Sets](virtual-machine-scale-sets-placement-groups.md).
A scale set configured with user-managed storage accounts is currently limited to 100 VMs (and 5 storage accounts are recommended for this scale).
virtual-machine-scale-sets Virtual Machine Scale Sets Dsc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-dsc.md
Previously updated : 6/25/2020 Last updated : 11/22/2022 # Using Virtual Machine Scale Sets with the Azure DSC Extension
-[Virtual Machine Scale Sets](./overview.md) can be used with the [Azure Desired State Configuration (DSC)](../virtual-machines/extensions/dsc-overview.md?toc=/azure/virtual-machines/windows/toc.json) extension handler. Virtual machine scale sets provide a way to deploy and manage large numbers of virtual machines, and can elastically scale in and out in response to load. DSC is used to configure the VMs as they come online so they are running the production software.
+[Virtual Machine Scale Sets](./overview.md) can be used with the [Azure Desired State Configuration (DSC)](../virtual-machines/extensions/dsc-overview.md?toc=/azure/virtual-machines/windows/toc.json) extension handler. Virtual Machine Scale Sets provide a way to deploy and manage large numbers of virtual machines, and can elastically scale in and out in response to load. DSC is used to configure the VMs as they come online so they are running the production software.
## Differences between deploying to Virtual Machines and Virtual Machine Scale Sets
-The underlying template structure for a virtual machine scale set is slightly different from a single VM. Specifically, a single VM deploys extensions under the "virtualMachines" node. There is an entry of type "extensions" where DSC is added to the template
+The underlying template structure for a Virtual Machine Scale Set is slightly different from a single VM. Specifically, a single VM deploys extensions under the "virtualMachines" node. There is an entry of type "extensions" where DSC is added to the template
``` "resources": [
The underlying template structure for a virtual machine scale set is slightly di
] ```
-A virtual machine scale set node has a "properties" section with the "VirtualMachineProfile", "extensionProfile" attribute. DSC is added under "extensions"
+A Virtual Machine Scale Set node has a "properties" section containing a "virtualMachineProfile" with an "extensionProfile" attribute. DSC is added under "extensions"
``` "extensionProfile": {
A virtual machine scale set node has a "properties" section with the "VirtualMac
``` ## Behavior for a Virtual Machine Scale Set
-The behavior for a virtual machine scale set is identical to the behavior for a single VM. When a new VM is created, it is automatically provisioned with the DSC extension. If a newer version of the WMF is required by the extension, the VM reboots before coming online. Once it is online, it downloads the DSC configuration .zip and provision it on the VM. More details can be found in [the Azure DSC Extension Overview](../virtual-machines/extensions/dsc-overview.md?toc=/azure/virtual-machines/windows/toc.json).
+The behavior for a Virtual Machine Scale Set is identical to the behavior for a single VM. When a new VM is created, it is automatically provisioned with the DSC extension. If a newer version of the WMF is required by the extension, the VM reboots before coming online. Once it is online, it downloads the DSC configuration .zip and provisions it on the VM. More details can be found in [the Azure DSC Extension Overview](../virtual-machines/extensions/dsc-overview.md?toc=/azure/virtual-machines/windows/toc.json).
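Outside of templates, the DSC extension can also be added to an existing scale set with the Azure CLI. This is a minimal sketch; the configuration package URL, script, and function names are placeholders and the settings shape is assumed to follow the DSC extension schema:
```
# Add the Azure DSC extension to an existing scale set
az vmss extension set \
    --resource-group myResourceGroup \
    --vmss-name myScaleSet \
    --name DSC \
    --publisher Microsoft.Powershell \
    --settings '{"configuration":{"url":"https://example.com/myConfig.ps1.zip","script":"myConfig.ps1","function":"Main"}}'
```
If the scale set uses a Manual upgrade policy, existing instances pick up the new extension only after they are brought up to date with the latest scale set model.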
## Next steps Examine the [Azure Resource Manager template for the DSC extension](../virtual-machines/extensions/dsc-template.md?toc=/azure/virtual-machines/windows/toc.json).
virtual-machine-scale-sets Virtual Machine Scale Sets Extension Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-extension-sequencing.md
Title: Use extension sequencing with Azure virtual machine scale sets
-description: Learn how to sequence extension provisioning when deploying multiple extensions on virtual machine scale sets.
+ Title: Use extension sequencing with Azure Virtual Machine Scale Sets
+description: Learn how to sequence extension provisioning when deploying multiple extensions on Virtual Machine Scale Sets.
Previously updated : 01/30/2019 Last updated : 11/22/2022
-# Sequence extension provisioning in virtual machine scale sets
+# Sequence extension provisioning in Virtual Machine Scale Sets
Azure virtual machine extensions provide capabilities such as post-deployment configuration and management, monitoring, security, and more. Production deployments typically use a combination of multiple extensions configured for the VM instances to achieve desired results. When using multiple extensions on a virtual machine, it's important to ensure that extensions requiring the same OS resources aren't trying to acquire these resources at the same time. Some extensions also depend on other extensions to provide required configurations such as environment settings and secrets. Without the correct ordering and sequencing in place, dependent extension deployments can fail.
-This article details how you can sequence extensions to be configured for the VM instances in virtual machine scale sets.
+This article details how you can sequence extensions to be configured for the VM instances in Virtual Machine Scale Sets.
## Prerequisites This article assumes that you're familiar with: - Azure virtual machine [extensions](../virtual-machines/extensions/overview.md)-- [Modifying](virtual-machine-scale-sets-upgrade-scale-set.md) virtual machine scale sets
+- [Modifying](virtual-machine-scale-sets-upgrade-scale-set.md) Virtual Machine Scale Sets
## When to use extension sequencing Sequencing extensions is not mandatory for scale sets, and unless specified, extensions can be provisioned on a scale set instance in any order.
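When ordering does matter, a dependency can be declared through `provisionAfterExtensions`. The following Azure CLI sketch is illustrative only: the scale set and extension names are placeholders, and the referenced extension must already be defined on the scale set:
```
# Provision the Custom Script Extension only after the Application Health extension
az vmss extension set \
    --resource-group myResourceGroup \
    --vmss-name myScaleSet \
    --name CustomScript \
    --publisher Microsoft.Azure.Extensions \
    --provision-after-extensions ApplicationHealthLinux \
    --settings '{"commandToExecute":"echo application configured"}'
```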
ExtensionA -> ExtensionB -> ExtensionC -> ExtensionA
Ensure that the extensions being removed are not listed under provisionAfterExtensions for any other extensions. ## Next steps
-Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on virtual machine scale sets.
+Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on Virtual Machine Scale Sets.
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
Title: Use Application Health extension with Azure virtual machine scale sets
-description: Learn how to use the Application Health extension to monitor the health of your applications deployed on virtual machine scale sets.
+ Title: Use Application Health extension with Azure Virtual Machine Scale Sets
+description: Learn how to use the Application Health extension to monitor the health of your applications deployed on Virtual Machine Scale Sets.
Previously updated : 05/06/2020 Last updated : 11/22/2022
-# Using Application Health extension with virtual machine scale sets
+# Using Application Health extension with Virtual Machine Scale Sets
Monitoring your application health is an important signal for managing and upgrading your deployment. Azure Virtual Machine Scale Sets provide support for [Rolling Upgrades](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) including [Automatic OS-Image Upgrades](virtual-machine-scale-sets-automatic-upgrade.md) and [Automatic VM Guest Patching](https://learn.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching), which rely on health monitoring of the individual instances to upgrade your deployment. You can also use Application Health Extension to monitor the application health of each instance in your scale set and perform instance repairs using [Automatic Instance Repairs](virtual-machine-scale-sets-automatic-instance-repairs.md).
-This article describes how you can use the Application Health extension to monitor the health of your applications deployed on virtual machine scale sets.
+This article describes how you can use the Application Health extension to monitor the health of your applications deployed on Virtual Machine Scale Sets.
## Prerequisites This article assumes that you are familiar with: - Azure virtual machine [extensions](../virtual-machines/extensions/overview.md)-- [Modifying](virtual-machine-scale-sets-upgrade-scale-set.md) virtual machine scale sets
+- [Modifying](virtual-machine-scale-sets-upgrade-scale-set.md) Virtual Machine Scale Sets
## When to use the Application Health extension
-The Application Health extension is deployed inside a virtual machine scale set instance and reports on VM health from inside the scale set instance. You can configure the extension to probe on an application endpoint and update the status of the application on that instance. This instance status is checked by Azure to determine whether an instance is eligible for upgrade operations.
+The Application Health extension is deployed inside a Virtual Machine Scale Set instance and reports on VM health from inside the scale set instance. You can configure the extension to probe on an application endpoint and update the status of the application on that instance. This instance status is checked by Azure to determine whether an instance is eligible for upgrade operations.
As the extension reports health from within a VM, the extension can be used in situations where external probes such as Application Health Probes (that utilize custom Azure Load Balancer [probes](../load-balancer/load-balancer-custom-probe-overview.md)) can't be used.
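A minimal Azure CLI sketch for adding the Linux variant of the extension follows; the probe settings shown are placeholders for your application's own health endpoint:
```
# Add the Application Health extension, probing an HTTP endpoint on each instance
az vmss extension set \
    --resource-group myResourceGroup \
    --vmss-name myScaleSet \
    --name ApplicationHealthLinux \
    --publisher Microsoft.ManagedServices \
    --version 1.0 \
    --settings '{"protocol": "http", "port": 80, "requestPath": "/health"}'
```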
C:\WindowsAzure\Logs\Plugins\Microsoft.ManagedServices.ApplicationHealthWindows\
The logs also periodically capture the application health status. ## Next steps
-Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on virtual machine scale sets.
+Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on Virtual Machine Scale Sets.
virtual-machine-scale-sets Virtual Machine Scale Sets Instance Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md
Previously updated : 02/22/2018 Last updated : 11/22/2022
virtual-machine-scale-sets Virtual Machine Scale Sets Instance Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-protection.md
Title: Instance Protection for Azure virtual machine scale set instances
-description: Learn how to protect Azure virtual machine scale set instances from scale-in and scale-set operations.
--
+ Title: Instance Protection for Azure Virtual Machine Scale Set instances
+description: Learn how to protect Azure Virtual Machine Scale Set instances from scale-in and scale-set operations.
++ Previously updated : 02/26/2020- Last updated : 11/22/2022+
-# Instance Protection for Azure virtual machine scale set instances
+# Instance Protection for Azure Virtual Machine Scale Set instances
**Applies to:** :heavy_check_mark: Uniform scale sets
-Azure virtual machine scale sets enable better elasticity for your workloads through [Autoscale](virtual-machine-scale-sets-autoscale-overview.md), so you can configure when your infrastructure scales-out and when it scales-in. Scale sets also enable you to centrally manage, configure, and update a large number of VMs through different [upgrade policy](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) settings. You can configure an update on the scale set model and the new configuration is applied automatically to every scale set instance if you've set the upgrade policy to Automatic or Rolling.
+Azure Virtual Machine Scale Sets enable better elasticity for your workloads through [Autoscale](virtual-machine-scale-sets-autoscale-overview.md), so you can configure when your infrastructure scales out and when it scales in. Scale sets also enable you to centrally manage, configure, and update a large number of VMs through different [upgrade policy](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) settings. You can configure an update on the scale set model, and the new configuration is applied automatically to every scale set instance if you've set the upgrade policy to Automatic or Rolling.
As your application processes traffic, there can be situations where you want specific instances to be treated differently from the rest of the scale set instances. For example, certain instances in the scale set could be performing long-running operations, and you don't want these instances to be scaled in until the operations complete. You might also have a few specialized instances in the scale set that perform additional or different tasks than the other members of the scale set. You need these 'special' VMs to be left unmodified when the other instances in the scale set are updated. Instance protection provides the additional controls to enable these and other scenarios for your application.
There are multiple ways of applying scale-in protection on your scale set instan
You can apply scale-in protection through the Azure portal to an instance in the scale set. You cannot adjust more than one instance at a time. Repeat the steps for each instance you want to protect.
-1. Go to an existing virtual machine scale set.
+1. Go to an existing Virtual Machine Scale Set.
1. Select **Instances** from the menu on the left, under **Settings**. 1. Select the name of the instance you want to protect. 1. Select the **Protection Policy** tab.
There are multiple ways of applying scale set actions protection on your scale s
You can apply protection from scale set actions through the Azure portal to an instance in the scale set. You cannot adjust more than one instance at a time. Repeat the steps for each instance you want to protect.
-1. Go to an existing virtual machine scale set.
+1. Go to an existing Virtual Machine Scale Set.
1. Select **Instances** from the menu on the left, under **Settings**. 1. Select the name of the instance you want to protect. 1. Select the **Protection Policy** tab.
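Both protection settings can also be applied with the Azure CLI rather than the portal; a minimal sketch, where the scale set name and instance ID are placeholders (protection from scale set actions requires protection from scale-in to be enabled as well):
```
# Protect instance 0 from scale-in and from scale set actions
az vmss update \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --instance-id 0 \
    --protect-from-scale-in true \
    --protect-from-scale-set-actions true
```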
You can apply instance protection to scale set instances after the instances are
Instance protection is only supported with API version 2019-03-01 and above. Check the API version being used and update as required. You might also need to update your PowerShell or CLI to the latest version. ## Next steps
-Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on virtual machine scale sets.
+Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on Virtual Machine Scale Sets.
virtual-machine-scale-sets Virtual Machine Scale Sets Maintenance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-maintenance-notifications.md
Title: Maintenance notifications for virtual machine scale sets in Azure
-description: View maintenance notifications and start self-service maintenance for virtual machine scale sets in Azure.
+ Title: Maintenance notifications for Virtual Machine Scale Sets in Azure
+description: View maintenance notifications and start self-service maintenance for Virtual Machine Scale Sets in Azure.
Previously updated : 04/26/2021 Last updated : 11/22/2022
-# Planned maintenance notifications for virtual machine scale sets
+# Planned maintenance notifications for Virtual Machine Scale Sets
Azure periodically performs updates to improve the reliability, performance, and security of the host infrastructure for virtual machines (VMs). Updates might include patching the hosting environment or upgrading and decommissioning hardware. Most updates don't affect the hosted VMs. However, updates affect VMs in these scenarios:
Planned maintenance that requires a reboot is scheduled in waves. Each wave has
The goal in having two windows is to give you enough time to start maintenance and reboot your VM while knowing when Azure will automatically start maintenance.
-You can use the Azure portal, PowerShell, the REST API, and the Azure CLI to query for maintenance windows for your virtual machine scale set VMs, and to start self-service maintenance.
+You can use the Azure portal, PowerShell, the REST API, and the Azure CLI to query for maintenance windows for your Virtual Machine Scale Set VMs, and to start self-service maintenance.
## Should you start maintenance during the self-service window?
It's best to use self-service maintenance in the following cases:
- You need more than 30 minutes of VM recovery time between two update domains. To control the time between update domains, you must trigger maintenance on your VMs one update domain at a time.
-## View virtual machine scale sets that are affected by maintenance in the portal
+## View Virtual Machine Scale Sets that are affected by maintenance in the portal
-When a planned maintenance wave is scheduled, you can view the list of virtual machine scale sets that are affected by the upcoming maintenance wave by using the Azure portal.
+When a planned maintenance wave is scheduled, you can view the list of Virtual Machine Scale Sets that are affected by the upcoming maintenance wave by using the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the left menu, select **All services**, and then select **Virtual machine scale sets**.
-3. Under **Virtual machine scale sets**, select **Edit columns** to open the list of available columns.
+2. In the left menu, select **All services**, and then select **Virtual Machine Scale Sets**.
+3. Under **Virtual Machine Scale Sets**, select **Edit columns** to open the list of available columns.
4. In the **Available columns** section, select **Self-service maintenance**, and then move it to the **Selected columns** list. Select **Apply**. To make the **Self-service maintenance** item easier to find, you can change the drop-down option in the **Available columns** section from **All** to **Properties**.
-The **Self-service maintenance** column now appears in the list of virtual machine scale sets. Each virtual machine scale set can have one of the following values for the self-service maintenance column:
+The **Self-service maintenance** column now appears in the list of Virtual Machine Scale Sets. Each Virtual Machine Scale Set can have one of the following values for the self-service maintenance column:
| Value | Description | |-|-|
-| Yes | At least one VM in your virtual machine scale set is in a self-service window. You can start maintenance at any time during this self-service window. |
-| No | No VMs are in a self-service window in the affected virtual machine scale set. |
+| Yes | At least one VM in your Virtual Machine Scale Set is in a self-service window. You can start maintenance at any time during this self-service window. |
+| No | No VMs are in a self-service window in the affected Virtual Machine Scale Set. |
| - | Your Virtual Machine Scale Sets aren't part of a planned maintenance wave.|
Azure communicates a schedule for planned maintenance by sending an email to the
To learn more about how to configure Activity Log alerts, see [Create Activity Log alerts](../azure-monitor/alerts/activity-log-alerts.md)
-## Start maintenance on your virtual machine scale set from the portal
+## Start maintenance on your Virtual Machine Scale Set from the portal
-You can see more maintenance-related details in the overview of virtual machine scale sets. If at least one VM in the virtual machine scale set is included in the planned maintenance wave, a new notification ribbon is added near the top of the page. Select the notification ribbon to go to the **Maintenance** page.
+You can see more maintenance-related details in the overview of Virtual Machine Scale Sets. If at least one VM in the Virtual Machine Scale Set is included in the planned maintenance wave, a new notification ribbon is added near the top of the page. Select the notification ribbon to go to the **Maintenance** page.
On the **Maintenance** page, you can see which VM instance is affected by the planned maintenance. To start maintenance, select the check box that corresponds to the affected VM. Then, select **Start maintenance**.
-After you start maintenance, the affected VMs in your virtual machine scale set undergo maintenance and are temporarily unavailable. If you missed the self-service window, you can still see the time window when your virtual machine scale set will be maintained by Azure.
+After you start maintenance, the affected VMs in your Virtual Machine Scale Set undergo maintenance and are temporarily unavailable. If you missed the self-service window, you can still see the time window when your Virtual Machine Scale Set will be maintained by Azure.
## Check maintenance status by using PowerShell
-You can use Azure PowerShell to see when VMs in your virtual machine scale sets are scheduled for maintenance. Planned maintenance information is available by using the [Get-AzVmssVM](/powershell/module/az.compute/get-azvmssvm) cmdlet when you use the `-InstanceView` parameter.
+You can use Azure PowerShell to see when VMs in your Virtual Machine Scale Sets are scheduled for maintenance. Planned maintenance information is available by using the [Get-AzVmssVM](/powershell/module/az.compute/get-azvmssvm) cmdlet when you use the `-InstanceView` parameter.
Maintenance information is returned only if maintenance is planned. If no maintenance is scheduled that affects the VM instance, the cmdlet doesn't return any maintenance information.
az vmss perform-maintenance -g rgName -n vmssName --instance-ids id
**Q: If I follow your recommendations for high availability by using an availability set, am I safe?**
-**A:** Virtual machines deployed in an availability set or in virtual machine scale sets use update domains. When performing maintenance, Azure honors the update domain constraint and doesn't reboot VMs from a different update domain (within the same availability set). Azure also waits for at least 30 minutes before moving to the next group of VMs.
+**A:** Virtual machines deployed in an availability set or in Virtual Machine Scale Sets use update domains. When performing maintenance, Azure honors the update domain constraint and doesn't reboot VMs from a different update domain (within the same availability set). Azure also waits for at least 30 minutes before moving to the next group of VMs.
For more information about high availability, see [Regions and availability for virtual machines in Azure](../virtual-machines/availability.md).
For more information about high availability, see [Regions and availability for
**Q: How long will it take you to reboot my VM?**
-**A:** Depending on the size of your VM, reboot might take up to several minutes during the self-service maintenance window. During the Azure-initiated reboots in the scheduled maintenance window, the reboot typically takes about 25 minutes. If you use Cloud Services (Web/Worker Role), virtual machine scale sets, or availability sets, you are given 30 minutes between each group of VMs (update domain) during the scheduled maintenance window.
+**A:** Depending on the size of your VM, reboot might take up to several minutes during the self-service maintenance window. During the Azure-initiated reboots in the scheduled maintenance window, the reboot typically takes about 25 minutes. If you use Cloud Services (Web/Worker Role), Virtual Machine Scale Sets, or availability sets, you are given 30 minutes between each group of VMs (update domain) during the scheduled maintenance window.
**Q: I don't see any maintenance information on my VMs. What went wrong?**
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-cli.md
Previously updated : 05/29/2018 Last updated : 11/22/2022
-# Manage a virtual machine scale set with the Azure CLI
+# Manage a Virtual Machine Scale Set with the Azure CLI
-Throughout the lifecycle of a virtual machine scale set, you may need to run one or more management tasks. Additionally, you may want to create scripts that automate various lifecycle-tasks. This article details some of the common Azure CLI commands that let you perform these tasks.
+Throughout the lifecycle of a Virtual Machine Scale Set, you may need to run one or more management tasks. Additionally, you may want to create scripts that automate various lifecycle tasks. This article details some of the common Azure CLI commands that let you perform these tasks.
-To complete these management tasks, you need the latest Azure CLI. For information, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you need to create a virtual machine scale set, you can [create a scale set with the Azure CLI](quick-create-cli.md).
+To complete these management tasks, you need the latest Azure CLI. For information, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you need to create a Virtual Machine Scale Set, you can [create a scale set with the Azure CLI](quick-create-cli.md).
## View information about a scale set
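For example, the scale set model and its individual instances can be listed as shown in the following sketch; the resource names are placeholders:
```
# Show the scale set model and list its VM instances
az vmss show \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --output table

az vmss list-instances \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --output table
```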
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Fault Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md
Title: Manage fault domains in Azure virtual machine scale sets
-description: Learn how to choose the right number of FDs while creating a virtual machine scale set.
+ Title: Manage fault domains in Azure Virtual Machine Scale Sets
+description: Learn how to choose the right number of FDs while creating a Virtual Machine Scale Set.
Previously updated : 12/18/2018 Last updated : 11/22/2022
-# Choosing the right number of fault domains for virtual machine scale set
+# Choosing the right number of fault domains for a Virtual Machine Scale Set
-Virtual machine scale sets are created with five fault domains by default in Azure regions with no zones. For the regions that support zonal deployment of virtual machine scale sets and this option is selected, the default value of the fault domain count is 1 for each of the zones. FD=1 in this case implies that the VM instances belonging to the scale set will be spread across many racks on a best effort basis.
+Virtual Machine Scale Sets are created with five fault domains by default in Azure regions with no zones. In regions that support zonal deployment of Virtual Machine Scale Sets, when that option is selected, the default fault domain count is 1 for each of the zones. FD=1 in this case implies that the VM instances belonging to the scale set will be spread across many racks on a best effort basis.
You can also consider aligning the number of scale set fault domains with the number of Managed Disks fault domains. This alignment can help prevent loss of quorum if an entire Managed Disks fault domain goes down. The FD count can be set to less than or equal to the number of Managed Disks fault domains available in each of the regions. Refer to this [document](../virtual-machines/availability.md) to learn about the number of Managed Disks fault domains by region.
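The fault domain count can be set when the scale set is created. The following Azure CLI sketch assumes a regional (non-zonal) deployment and a count that does not exceed the Managed Disks fault domains available in that region; names and values are placeholders:
```
# Create a scale set with an explicit fault domain count
az vmss create \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --image Ubuntu2204 \
    --instance-count 2 \
    --platform-fault-domain-count 2 \
    --admin-username azureuser \
    --generate-ssh-keys
```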
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-powershell.md
Previously updated : 05/29/2018 Last updated : 11/22/2022
-# Manage a virtual machine scale set with Azure PowerShell
+# Manage a Virtual Machine Scale Set with Azure PowerShell
-Throughout the lifecycle of a virtual machine scale set, you may need to run one or more management tasks. Additionally, you may want to create scripts that automate various lifecycle-tasks. This article details some of the common Azure PowerShell cmdlets that let you perform these tasks.
+Throughout the lifecycle of a Virtual Machine Scale Set, you may need to run one or more management tasks. Additionally, you may want to create scripts that automate various lifecycle tasks. This article details some of the common Azure PowerShell cmdlets that let you perform these tasks.
-If you need to create a virtual machine scale set, you can [create a scale set with Azure PowerShell](quick-create-powershell.md).
+If you need to create a Virtual Machine Scale Set, you can [create a scale set with Azure PowerShell](quick-create-powershell.md).
[!INCLUDE [updated-for-az.md](../../includes/updated-for-az.md)]
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-custom-image.md
Previously updated : 04/26/2018 Last updated : 11/22/2022
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Existing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-existing-vnet.md
Previously updated : 03/30/2021 Last updated : 11/22/2022
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Guest Based Autoscale Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-guest-based-autoscale-linux.md
Previously updated : 04/26/2019- Last updated : 11/22/2022+
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-start.md
Title: Learn about virtual machine scale set templates
-description: Learn how to create a basic scale set template for Azure virtual machine scale sets through several simple steps.
+ Title: Learn about Virtual Machine Scale Set templates
+description: Learn how to create a basic scale set template for Azure Virtual Machine Scale Sets through several simple steps.
Previously updated : 04/26/2019 Last updated : 11/22/2022
-# Learn about virtual machine scale set templates
+# Learn about Virtual Machine Scale Set templates
[Azure Resource Manager templates](../azure-resource-manager/templates/overview.md#template-deployment-process) are a great way to deploy groups of related resources. This tutorial series shows how to create a basic scale set template and how to modify this template to suit various scenarios. All examples come from this [GitHub repository](https://github.com/gatneil/mvss).
virtual-machine-scale-sets Virtual Machine Scale Sets Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md
Title: Networking for Azure virtual machine scale sets
-description: How to configuration some of the more advanced networking properties for Azure virtual machine scale sets.
+ Title: Networking for Azure Virtual Machine Scale Sets
+description: How to configure some of the more advanced networking properties for Azure Virtual Machine Scale Sets.
Previously updated : 06/25/2020 Last updated : 11/22/2022
-# Networking for Azure virtual machine scale sets
+# Networking for Azure Virtual Machine Scale Sets
-When you deploy an Azure virtual machine scale set through the portal, certain network properties are defaulted, for example an Azure Load Balancer with inbound NAT rules. This article describes how to use some of the more advanced networking features that you can configure with scale sets.
+When you deploy an Azure Virtual Machine Scale Set through the portal, certain network properties are set to default values, for example, an Azure Load Balancer with inbound NAT rules. This article describes how to use some of the more advanced networking features that you can configure with scale sets.
You can configure all of the features covered in this article using Azure Resource Manager templates. Azure CLI and PowerShell examples are also included for selected features.
Azure Accelerated Networking improves network performance by enabling single roo
} ```
-## Azure virtual machine scale sets with Azure Load Balancer
+## Azure Virtual Machine Scale Sets with Azure Load Balancer
See [Azure Load Balancer and Virtual Machine Scale Sets](../load-balancer/load-balancer-standard-virtual-machine-scale-sets.md) to learn more about how to configure your Standard Load Balancer with Virtual Machine Scale Sets based on your scenario. ## Create a scale set that references an Application Gateway
To configure custom DNS servers in an Azure template, add a dnsSettings property
``` ### Creating a scale set with configurable virtual machine domain names
-To create a scale set with a custom DNS name for virtual machines using the CLI, add the **--vm-domain-name** argument to the **virtual machine scale set create** command, followed by a string representing the domain name.
+To create a scale set with a custom DNS name for virtual machines using the CLI, add the **--vm-domain-name** argument to the **Virtual Machine Scale Set create** command, followed by a string representing the domain name.
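A minimal sketch of that command follows, assuming instance-level public IPs are enabled so that each VM receives a fully qualified domain name; all names are placeholders:
```
# Create a scale set whose instances get per-instance public IPs and a custom DNS domain name
az vmss create \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --image Ubuntu2204 \
    --public-ip-per-vm \
    --vm-domain-name myappdomain \
    --admin-username azureuser \
    --generate-ssh-keys
```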
To set the domain name in an Azure template, add a **dnsSettings** property to the scale set **networkInterfaceConfigurations** section. For example:
To create a scale set using an Azure template, make sure the API version of the
} } ```
-Note when virtual machine scale sets with public IPs per instance are created with a load balancer in front, the of the instance IPs is determined by the SKU of the Load Balancer (i.e. Basic or Standard). If the virtual machine scale set is created without a load balancer, the SKU of the instance IPs can be set directly by using the SKU section of the template as shown above.
+Note that when Virtual Machine Scale Sets with public IPs per instance are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (that is, Basic or Standard). If the Virtual Machine Scale Set is created without a load balancer, the SKU of the instance IPs can be set directly by using the SKU section of the template as shown above.
Example template using a Basic Load Balancer: [vmss-public-ip-linux](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-public-ip-linux)
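Once instance-level public IPs are assigned, they can be listed directly without inspecting each instance; a short Azure CLI sketch with placeholder resource names:
```
# List the public IP addresses assigned to individual scale set instances
az vmss list-instance-public-ips \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --output table
```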
-Alternatively, a [Public IP Prefix](../virtual-network/ip-services/public-ip-address-prefix.md) (a contiguous block of Standard SKU Public IPs) can be used to generate instance-level IPs in a virtual machine scale set. The zonal properties of the prefix will be passed to the instance IPs, though they will not be shown in the output.
+Alternatively, a [Public IP Prefix](../virtual-network/ip-services/public-ip-address-prefix.md) (a contiguous block of Standard SKU Public IPs) can be used to generate instance-level IPs in a Virtual Machine Scale Set. The zonal properties of the prefix will be passed to the instance IPs, though they will not be shown in the output.
Example template using a Public IP Prefix: [vmms-with-public-ip-prefix](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-with-public-ip-prefix)
az vmss show \
## Make networking updates to specific instances
-You can make networking updates to specific virtual machine scale set instances.
+You can make networking updates to specific Virtual Machine Scale Set instances.
You can `PUT` against the instance to update the network configuration. This can be used to do things like add or remove network interface cards (NICs), or remove an instance from a backend pool.
PUT https://management.azure.com/subscriptions/.../resourceGroups/vmssnic/provid
The following example shows how to add a second IP Configuration to your NIC.
-1. `GET` the details for a specific virtual machine scale set instance.
+1. `GET` the details for a specific Virtual Machine Scale Set instance.
``` GET https://management.azure.com/subscriptions/.../resourceGroups/vmssnic/providers/Microsoft.Compute/virtualMachineScaleSets/vmssnic/virtualMachines/1/?api-version=2019-07-01
The following example shows how to add a second IP Configuration to your NIC.
## Explicit network outbound connectivity for Flexible scale sets
-In order to enhance default network security, [virtual machine scale sets with Flexible orchestration](..\virtual-machines\flexible-virtual-machine-scale-sets.md) will require that instances created implicitly via the autoscaling profile have outbound connectivity defined explicitly through one of the following methods:
+In order to enhance default network security, [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machines/flexible-virtual-machine-scale-sets.md) will require that instances created implicitly via the autoscaling profile have outbound connectivity defined explicitly through one of the following methods:
- For most scenarios, we recommend [NAT Gateway attached to the subnet](../virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md). - For scenarios with high security requirements or when using Azure Firewall or Network Virtual Appliance (NVA), you can specify a custom User Defined Route as next hop through firewall. - Instances are in the backend pool of a Standard SKU Azure Load Balancer. - Attach a Public IP Address to the instance network interface.
-With single instance VMs and Virtual machine scale sets with Uniform orchestration, outbound connectivity is provided automatically.
+With single instance VMs and Virtual Machine Scale Sets with Uniform orchestration, outbound connectivity is provided automatically.
Common scenarios that will require explicit outbound connectivity include:
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
Previously updated : 08/05/2021 Last updated : 11/22/2022
Scale set orchestration modes allow you to have greater control over how virtual
## Scale sets with Uniform orchestration Optimized for large-scale stateless workloads with identical instances.
-Virtual Machine Scale Sets with Uniform orchestration use a virtual machine profile or template to scale up to desired capacity. While there is some ability to manage or customize individual virtual machine instances, Uniform uses identical VM instances. Individual Uniform VM instances are exposed via the virtual machine scale set VM API commands. Individual instances aren't compatible with the standard Azure IaaS VM API commands, Azure management features such as Azure Resource Manager resource tagging RBAC permissions, Azure Backup, or Azure Site Recovery. Uniform orchestration provides fault domain high availability guarantees when configured with fewer than 100 instances. Uniform orchestration is generally available and supports a full range of scale set management and orchestration, including metrics-based autoscaling, instance protection, and automatic OS upgrades.
+Virtual Machine Scale Sets with Uniform orchestration use a virtual machine profile or template to scale up to desired capacity. While there is some ability to manage or customize individual virtual machine instances, Uniform uses identical VM instances. Individual Uniform VM instances are exposed via the Virtual Machine Scale Set VM API commands. Individual instances aren't compatible with the standard Azure IaaS VM API commands, Azure management features such as Azure Resource Manager resource tagging, RBAC permissions, Azure Backup, or Azure Site Recovery. Uniform orchestration provides fault domain high availability guarantees when configured with fewer than 100 instances. Uniform orchestration is generally available and supports a full range of scale set management and orchestration, including metrics-based autoscaling, instance protection, and automatic OS upgrades.
## Scale sets with Flexible orchestration
With Flexible orchestration, Azure provides a unified experience across the Azur
## What has changed with Flexible orchestration mode?
-One of the main advantages of Flexible orchestration is that it provides orchestration features over standard Azure IaaS VMs, instead of scale set child virtual machines. This means you can use all of the standard VM APIs when managing Flexible orchestration instances, instead of the virtual machine scale set VM APIs you use with Uniform orchestration. There are several differences between managing instances in Flexible orchestration versus Uniform orchestration. In general, we recommend that you use the standard Azure IaaS VM APIs when possible. In this section, we highlight examples of best practices for managing VM instances with Flexible orchestration.
+One of the main advantages of Flexible orchestration is that it provides orchestration features over standard Azure IaaS VMs, instead of scale set child virtual machines. This means you can use all of the standard VM APIs when managing Flexible orchestration instances, instead of the Virtual Machine Scale Set VM APIs you use with Uniform orchestration. There are several differences between managing instances in Flexible orchestration versus Uniform orchestration. In general, we recommend that you use the standard Azure IaaS VM APIs when possible. In this section, we highlight examples of best practices for managing VM instances with Flexible orchestration.
Flexible orchestration mode can be used with all VM sizes. Flexible orchestration mode provides the highest scale and configurability for VM sizes that support memory preserving updates or live migration, such as the B, D, E, and F-series, or when the scale set is configured for maximum spreading between instances (`platformFaultDomainCount=1`). Currently, the Flexible orchestration mode has additional constraints for VM sizes that don't support memory preserving updates, including the G, H, L, M, and N-series VMs, and when instances are spread across multiple fault domains. You can use the Compute Resource SKUs API to determine whether a specific VM SKU supports memory preserving updates.
Flexible orchestration mode can be used with all VM sizes. Flexible orchestratio
| Single Placement Group | Optional. This will be set to false based on first VM deployed | Optional. This will be set to true based on first VM deployed | ### Scale out with standard Azure virtual machines
-Virtual Machine Scale Sets in Flexible Orchestration mode manage standard Azure VMs. You have full control over the virtual machine lifecycle, as well as network interfaces and disks using the standard Azure APIs and commands. Virtual machines created with Uniform orchestration mode are exposed and managed via the virtual machine scale set VM API commands. Individual instances aren't compatible with the standard Azure IaaS VM API commands, Azure management features such as Azure Resource Manager resource tagging RBAC permissions, Azure Backup, or Azure Site Recovery.
+Virtual Machine Scale Sets in Flexible Orchestration mode manage standard Azure VMs. You have full control over the virtual machine lifecycle, as well as network interfaces and disks using the standard Azure APIs and commands. Virtual machines created with Uniform orchestration mode are exposed and managed via the Virtual Machine Scale Set VM API commands. Individual instances aren't compatible with the standard Azure IaaS VM API commands, Azure management features such as Azure Resource Manager resource tagging, RBAC permissions, Azure Backup, or Azure Site Recovery.
### Assign fault domain during VM creation You can choose the number of fault domains for the Flexible orchestration scale set. By default, when you add a VM to a Flexible scale set, Azure evenly spreads instances across fault domains. While it is recommended to let Azure assign the fault domain, for advanced or troubleshooting scenarios you can override this default behavior and specify the fault domain where the instance will land.
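A minimal Azure CLI sketch of both steps follows, creating a Flexible scale set with three fault domains and then pinning a standard VM to fault domain 0; the names, image alias, and counts are placeholders:
```
# Create a Flexible orchestration scale set spread across three fault domains
az vmss create \
    --resource-group myResourceGroup \
    --name myFlexScaleSet \
    --orchestration-mode Flexible \
    --platform-fault-domain-count 3 \
    --image Ubuntu2204 \
    --instance-count 3 \
    --admin-username azureuser \
    --generate-ssh-keys

# Add a standard Azure VM to the scale set and pin it to fault domain 0
az vm create \
    --resource-group myResourceGroup \
    --name myPinnedVm \
    --vmss myFlexScaleSet \
    --platform-fault-domain 0 \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys
```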
The following table compares the Flexible orchestration mode, Uniform orchestrat
| List VMs in Set | Yes | Yes | Yes, list VMs in AvSet | | Automatic Scaling (manual, metrics based, schedule based) | Yes | Yes | No | | Auto-Remove NICs and Disks when deleting VM instances | Yes | Yes | No |
-| Upgrade Policy (virtual machine scale set) | No, upgrade policy must be null or [] during create | Automatic, Rolling, Manual | N/A |
-| Automatic OS Updates (virtual machine scale set) | No | Yes | N/A |
+| Upgrade Policy (Virtual Machine Scale Set) | No, upgrade policy must be null or [] during create | Automatic, Rolling, Manual | N/A |
+| Automatic OS Updates (Virtual Machine Scale Set) | No | Yes | N/A |
| In Guest Security Patching | Yes, read [Auto VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) | No | Yes |
-| Terminate Notifications (virtual machine scale set) | Yes, read [Terminate Notifications documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md) | Yes, read [Terminate Notifications documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md) | N/A |
+| Terminate Notifications (Virtual Machine Scale Set) | Yes, read [Terminate Notifications documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md) | Yes, read [Terminate Notifications documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md) | N/A |
| Monitor Application Health | Application health extension | Application health extension or Azure load balancer probe | Application health extension |
-| Instance Repair (virtual machine scale set) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | N/A |
+| Instance Repair (Virtual Machine Scale Set) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | N/A |
| Instance Protection | No, use [Azure resource lock](../azure-resource-manager/management/lock-resources.md) | Yes | No | | Scale In Policy | No | Yes | No | | VMSS Get Instance View | No | Yes | N/A |
The following table compares the Flexible orchestration mode, Uniform orchestrat
### Unsupported parameters
-The following virtual machine scale set parameters aren't currently supported with Virtual Machine Scale Sets in Flexible orchestration mode:
+The following Virtual Machine Scale Set parameters aren't currently supported with Virtual Machine Scale Sets in Flexible orchestration mode:
- Single placement group - you must choose `singlePlacementGroup=False` - Ultra disk configuration: `diskIOPSReadWrite`, `diskMBpsReadWrite`-- Virtual machine scale set Overprovisioning
+- Virtual Machine Scale Set Overprovisioning
- Image-based Automatic OS Upgrades - Application health via SLB health probe - use Application Health Extension on instances-- Virtual machine scale set upgrade policy - must be null or empty
+- Virtual Machine Scale Set upgrade policy - must be null or empty
- Deployment onto Azure Dedicated Host - Unmanaged disks-- Virtual machine scale set Scale in Policy-- Virtual machine scale set Instance Protection
+- Virtual Machine Scale Set Scale in Policy
+- Virtual Machine Scale Set Instance Protection
- Basic Load Balancer - Port Forwarding via Standard Load Balancer NAT Pool - you can configure NAT rules to specific instances
OutboundConnectivityNotEnabledOnVM. No outbound connectivity configured for virt
``` **Cause:** Trying to create a Virtual Machine Scale Set in Flexible Orchestration Mode with no outbound internet connectivity.
-**Solution:** Enable secure outbound access for your virtual machine scale set in the manner best suited for your application. Outbound access can be enabled with a NAT Gateway on your subnet, adding instances to a Load Balancer backend pool, or adding an explicit public IP per instance. For highly secure applications, you can specify custom User Defined Routes through your firewall or virtual network applications. See [Default Outbound Access](../virtual-network/ip-services/default-outbound-access.md) for more details.
+**Solution:** Enable secure outbound access for your Virtual Machine Scale Set in the manner best suited for your application. Outbound access can be enabled with a NAT Gateway on your subnet, adding instances to a Load Balancer backend pool, or adding an explicit public IP per instance. For highly secure applications, you can specify custom User Defined Routes through your firewall or virtual network applications. See [Default Outbound Access](../virtual-network/ip-services/default-outbound-access.md) for more details.
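As one hedged example of the NAT Gateway option, the following Azure CLI sketch creates a NAT gateway and attaches it to the subnet used by the scale set; all resource names are placeholders:
```
# Create a Standard public IP and a NAT gateway, then attach the gateway to the subnet
az network public-ip create \
    --resource-group myResourceGroup \
    --name myNatIp \
    --sku Standard

az network nat gateway create \
    --resource-group myResourceGroup \
    --name myNatGateway \
    --public-ip-addresses myNatIp

az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name mySubnet \
    --nat-gateway myNatGateway
```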
## Get started with Flexible orchestration mode
virtual-machine-scale-sets Virtual Machine Scale Sets Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md
Title: Virtual machine scale sets and placement groups
-description: What you need to know about large Azure virtual machine scale sets in order to use them in your application.
+ Title: Virtual Machine Scale Sets and placement groups
+description: What you need to know about large Azure Virtual Machine Scale Sets in order to use them in your application.
Previously updated : 06/25/2020 Last updated : 11/22/2022
-# Virtual machine scale sets and placement groups
+# Virtual Machine Scale Sets and placement groups
> [!NOTE]
-> This document covers virtual machine scale sets running in Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This document covers Virtual Machine Scale Sets running in Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-You can now create Azure [virtual machine scale sets](./index.yml) with a capacity of up to 1,000 VMs. In this document, a _large virtual machine scale set_ is defined as a scale set capable of scaling to greater than 100 VMs. This capability is set by a scale set property (_singlePlacementGroup=False_).
+You can now create Azure [Virtual Machine Scale Sets](./index.yml) with a capacity of up to 1,000 VMs. In this document, a _large Virtual Machine Scale Set_ is defined as a scale set capable of scaling to greater than 100 VMs. This capability is set by a scale set property (_singlePlacementGroup=False_).
Certain aspects of large scale sets, such as load balancing and fault domains, behave differently from a standard scale set. This document explains the characteristics of large scale sets, and describes what you need to know to successfully use them in your applications.
-A common approach for deploying cloud infrastructure at large scale is to create a set of _scale units_, for example by creating multiple VMs scale sets across multiple VNETs and storage accounts. This approach provides easier management compared to single VMs, and multiple scale units are useful for many applications, particularly those that require other stackable components like multiple virtual networks and endpoints. If your application requires a single large cluster however, it can be more straightforward to deploy a single scale set of up to 1,000 VMs. Example scenarios include centralized big data deployments, or compute grids requiring simple management of a large pool of worker nodes. Combined with virtual machine scale set [attached data disks](virtual-machine-scale-sets-attached-disks.md), large scale sets enable you to deploy a scalable infrastructure consisting of thousands of vCPUs and petabytes of storage, as a single operation.
+A common approach for deploying cloud infrastructure at large scale is to create a set of _scale units_, for example by creating multiple Virtual Machine Scale Sets across multiple VNETs and storage accounts. This approach provides easier management compared to single VMs, and multiple scale units are useful for many applications, particularly those that require other stackable components like multiple virtual networks and endpoints. If your application requires a single large cluster, however, it can be more straightforward to deploy a single scale set of up to 1,000 VMs. Example scenarios include centralized big data deployments, or compute grids requiring simple management of a large pool of worker nodes. Combined with Virtual Machine Scale Set [attached data disks](virtual-machine-scale-sets-attached-disks.md), large scale sets enable you to deploy a scalable infrastructure consisting of thousands of vCPUs and petabytes of storage, as a single operation.
## Placement groups What makes a _large_ scale set special is not the number of VMs, but the number of _placement groups_ it contains. A placement group is a construct similar to an Azure availability set, with its own fault domains and upgrade domains. By default, a scale set consists of a single placement group with a maximum size of 100 VMs. If a scale set property called _singlePlacementGroup_ is set to _false_, the scale set can be composed of multiple placement groups and has a range of 0-1,000 VMs. When set to the default value of _true_, a scale set is composed of a single placement group, and has a range of 0-100 VMs.
When you create a scale set in the Azure portal, just specify the *Instance coun
![This image shows the instances blade of the Azure Portal. Options to select the Instance Count and Instance size are available.](./media/virtual-machine-scale-sets-placement-groups/portal-large-scale.png)
-You can create a large virtual machine scale set using the [Azure CLI](https://github.com/Azure/azure-cli) _az vmss create_ command. This command sets intelligent defaults such as subnet size based on the _instance-count_ argument:
+You can create a large Virtual Machine Scale Set using the [Azure CLI](https://github.com/Azure/azure-cli) _az vmss create_ command. This command sets intelligent defaults such as subnet size based on the _instance-count_ argument:
```azurecli az group create -l southcentralus -n biginfra
If you are creating a large scale set by composing an Azure Resource Manager tem
For a complete example of a large scale set template, refer to [https://github.com/gbowerman/azure-myriad/blob/main/bigtest/bigbottle.json](https://github.com/gbowerman/azure-myriad/blob/main/bigtest/bigbottle.json). ## Converting an existing scale set to span multiple placement groups
-To make an existing virtual machine scale set capable of scaling to more than 100 VMs, you need to change the _singlePlacementGroup_ property to _false_ in the scale set model. You can test changing this property with the [Azure Resource Explorer](https://resources.azure.com/). Find an existing scale set, select _Edit_ and change the _singlePlacementGroup_ property. If you do not see this property, you may be viewing the scale set with an older version of the Microsoft.Compute API.
+To make an existing Virtual Machine Scale Set capable of scaling to more than 100 VMs, you need to change the _singlePlacementGroup_ property to _false_ in the scale set model. You can test changing this property with the [Azure Resource Explorer](https://resources.azure.com/). Find an existing scale set, select _Edit_ and change the _singlePlacementGroup_ property. If you do not see this property, you may be viewing the scale set with an older version of the Microsoft.Compute API.
> [!NOTE] > You can change a scale set from supporting a single placement group only (the default behavior) to supporting multiple placement groups, but you cannot convert the other way around. Therefore, make sure you understand the properties of large scale sets before converting.
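As a rough sketch, the same property change can also be made with the Azure CLI generic update command. This is a hedged example with placeholder names (`myResourceGroup`, `myScaleSet`); because the conversion cannot be reverted, try it in a test environment first.

```azurecli
# Allow the existing scale set to span multiple placement groups (irreversible)
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --set singlePlacementGroup=false
```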
virtual-machine-scale-sets Virtual Machine Scale Sets Scale In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md
Title: Use custom scale-in policies with Azure virtual machine scale sets
-description: Learn how to use custom scale-in policies with Azure virtual machine scale sets that use autoscale configuration to manage instance count
+ Title: Use custom scale-in policies with Azure Virtual Machine Scale Sets
+description: Learn how to use custom scale-in policies with Azure Virtual Machine Scale Sets that use autoscale configuration to manage instance count
Previously updated : 02/26/2020- Last updated : 11/22/2022+
-# Use custom scale-in policies with Azure virtual machine scale sets
+# Use custom scale-in policies with Azure Virtual Machine Scale Sets
-A virtual machine scale set deployment can be scaled-out or scaled-in based on an array of metrics, including platform and user-defined custom metrics. While a scale-out creates new virtual machines based on the scale set model, a scale-in affects running virtual machines that may have different configurations and/or functions as the scale set workload evolves.
+A Virtual Machine Scale Set deployment can be scaled-out or scaled-in based on an array of metrics, including platform and user-defined custom metrics. While a scale-out creates new virtual machines based on the scale set model, a scale-in affects running virtual machines that may have different configurations and/or functions as the scale set workload evolves.
The scale-in policy feature provides users a way to configure the order in which virtual machines are scaled-in, by way of three scale-in configurations:
The scale-in policy feature provides users a way to configure the order in which
3. OldestVM > [!IMPORTANT]
-> Flexible orchestration for virtual machine scale sets does not currently support scale-in policy.
+> Flexible orchestration for Virtual Machine Scale Sets does not currently support scale-in policy.
### Default scale-in policy
-By default, virtual machine scale set applies this policy to determine which instance(s) will be scaled in. With the *Default* policy, VMs are selected for scale-in in the following order:
+By default, a Virtual Machine Scale Set applies this policy to determine which instance(s) will be scaled in. With the *Default* policy, VMs are selected for scale-in in the following order:
1. Balance virtual machines across availability zones (if the scale set is deployed in zonal configuration) 2. Balance virtual machines across fault domains (best effort)
Note that balancing across availability zones or fault domains does not move ins
### NewestVM scale-in policy
-This policy will delete the newest created virtual machine in the scale set, after balancing VMs across availability zones (for zonal deployments). Enabling this policy requires a configuration change on the virtual machine scale set model.
+This policy will delete the newest created virtual machine in the scale set, after balancing VMs across availability zones (for zonal deployments). Enabling this policy requires a configuration change on the Virtual Machine Scale Set model.
### OldestVM scale-in policy
-This policy will delete the oldest created virtual machine in the scale set, after balancing VMs across availability zones (for zonal deployments). Enabling this policy requires a configuration change on the virtual machine scale set model.
+This policy will delete the oldest created virtual machine in the scale set, after balancing VMs across availability zones (for zonal deployments). Enabling this policy requires a configuration change on the Virtual Machine Scale Set model.
## Enabling scale-in policy
-A scale-in policy is defined in the virtual machine scale set model. As noted in the sections above, a scale-in policy definition is needed when using the 'NewestVM' and 'OldestVM' policies. Virtual machine scale set will automatically use the 'Default' scale-in policy if there is no scale-in policy definition found on the scale set model.
+A scale-in policy is defined in the Virtual Machine Scale Set model. As noted in the sections above, a scale-in policy definition is needed when using the 'NewestVM' and 'OldestVM' policies. A Virtual Machine Scale Set will automatically use the 'Default' scale-in policy if there is no scale-in policy definition found on the scale set model.
-A scale-in policy can be defined on the virtual machine scale set model in the following ways:
+A scale-in policy can be defined on the Virtual Machine Scale Set model in the following ways:
### Azure portal The following steps define the scale-in policy when creating a new scale set.
-1. Go to **Virtual machine scale sets**.
+1. Go to **Virtual Machine Scale Sets**.
1. Select **+ Add** to create a new scale set. 1. Go to the **Scaling** tab. 1. Locate the **Scale-in policy** section.
The following steps define the scale-in policy when creating a new scale set.
### Using API
-Execute a PUT on the virtual machine scale set using API 2019-03-01:
+Execute a PUT on the Virtual Machine Scale Set using API 2019-03-01:
``` PUT
In your template, under "properties", add the following:
} ```
-The above blocks specify that the virtual machine scale set will delete the Oldest VM in a zone-balanced scale set, when a scale-in is triggered (through Autoscale or manual delete).
+The above blocks specify that the Virtual Machine Scale Set will delete the oldest VM in a zone-balanced scale set when a scale-in is triggered (through autoscale or manual delete).
-When a virtual machine scale set is not zone balanced, the scale set will first delete VMs across the imbalanced zone(s). Within the imbalanced zones, the scale set will use the scale-in policy specified above to determine which VM to scale in. In this case, within an imbalanced zone, the scale set will select the Oldest VM in that zone to be deleted.
+When a Virtual Machine Scale Set is not zone balanced, the scale set will first delete VMs across the imbalanced zone(s). Within the imbalanced zones, the scale set will use the scale-in policy specified above to determine which VM to scale in. In this case, within an imbalanced zone, the scale set will select the Oldest VM in that zone to be deleted.
-For non-zonal virtual machine scale set, the policy selects the oldest VM across the scale set for deletion.
+For non-zonal Virtual Machine Scale Sets, the policy selects the oldest VM across the scale set for deletion.
The same process applies when using 'NewestVM' in the above scale-in policy.
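If you prefer the Azure CLI over the raw API call, recent CLI versions expose the same setting through a `--scale-in-policy` parameter on `az vmss create`. The sketch below is an assumption-laden illustration, not part of this article: resource names are placeholders, the image alias may differ by CLI version, and Uniform orchestration is specified because Flexible does not support scale-in policy.

```azurecli
# Create a Uniform scale set that uses the OldestVM scale-in policy (placeholder names)
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --orchestration-mode Uniform \
  --image Ubuntu2204 \
  --instance-count 3 \
  --scale-in-policy OldestVM \
  --admin-username azureuser \
  --generate-ssh-keys
```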
Modifying the scale-in policy follows the same process as applying the scale-in
You can modify the scale-in policy of an existing scale set through the Azure portal.
-1. In an existing virtual machine scale set, select **Scaling** from the menu on the left.
+1. In an existing Virtual Machine Scale Set, select **Scaling** from the menu on the left.
1. Select the **Scale-In Policy** tab. 1. Select a scale-in policy from the drop-down. 1. When you are done, select **Save**. ### Using API
-Execute a PUT on the virtual machine scale set using API 2019-03-01:
+Execute a PUT on the Virtual Machine Scale Set using API 2019-03-01:
``` PUT
The same process will apply if you decide to change 'NewestVM' to 'Default'
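A hedged CLI equivalent of the PUT above, assuming your CLI version supports `--scale-in-policy` on `az vmss update` and using placeholder resource names:

```azurecli
# Change the scale-in policy on an existing scale set back to Default
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --scale-in-policy Default
```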
## Instance protection and scale-in policy
-Virtual machine scale sets provide two types of [instance protection](./virtual-machine-scale-sets-instance-protection.md#types-of-instance-protection):
+Virtual Machine Scale Sets provide two types of [instance protection](./virtual-machine-scale-sets-instance-protection.md#types-of-instance-protection):
1. Protect from scale-in 2. Protect from scale-set actions
A protected virtual machine can be manually deleted by the user at any time, reg
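To illustrate how protection interacts with scale-in, the following CLI sketch applies both protection types to a single instance. The instance ID and resource names are placeholders, and the flags assume a CLI version that supports per-instance protection on Uniform scale sets.

```azurecli
# Protect instance 1 from scale-in and from other scale set actions (placeholder names)
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-id 1 \
  --protect-from-scale-in true \
  --protect-from-scale-set-actions true
```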
## Usage examples
-The below examples demonstrate how a virtual machine scale set will select VMs to be deleted when a scale-in event is triggered. Virtual machines with the highest instance IDs are assumed to be the newest VMs in the scale set and the VMs with the smallest instance IDs are assumed to be the oldest VMs in the scale set.
+The following examples demonstrate how a Virtual Machine Scale Set will select VMs to be deleted when a scale-in event is triggered. Virtual machines with the highest instance IDs are assumed to be the newest VMs in the scale set, and the VMs with the smallest instance IDs are assumed to be the oldest VMs in the scale set.
### OldestVM scale-in policy
The below examples demonstrate how a virtual machine scale set will select VMs t
| Scale-in | 5, 10 | ***6***, 9, 11 | 7, 8 | Choose Zone 2 even though Zone 1 has the oldest VM. Delete VM6 in Zone 1 as it is the oldest VM in that zone. | | Scale-in | ***5***, 10 | 9, 11 | 7, 8 | Zones are balanced. Delete VM5 in Zone 1 as it is the oldest VM in the scale set. |
-For non-zonal virtual machine scale sets, the policy selects the oldest VM across the scale set for deletion. Any "protected" VM will be skipped for deletion.
+For non-zonal Virtual Machine Scale Sets, the policy selects the oldest VM across the scale set for deletion. Any "protected" VM will be skipped for deletion.
### NewestVM scale-in policy
For non-zonal virtual machine scale sets, the policy selects the oldest VM acros
| Scale-in | 3, 4, ***5*** | 2, 6 | 1, 7 | Choose Zone 1 even though Zone 3 has the newest VM. Delete VM5 in Zone 1 as it is the newest VM in that Zone. | | Scale-in | 3, 4 | 2, 6 | 1, ***7*** | Zones are balanced. Delete VM7 in Zone 3 as it is the newest VM in the scale set. |
-For non-zonal virtual machine scale sets, the policy selects the newest VM across the scale set for deletion. Any "protected" VM will be skipped for deletion.
+For non-zonal Virtual Machine Scale Sets, the policy selects the newest VM across the scale set for deletion. Any "protected" VM will be skipped for deletion.
## Troubleshoot 1. Failure to enable scaleInPolicy
   If you get a 'BadRequest' error with an error message stating "Could not find member 'scaleInPolicy' on object of type 'properties'", then check the API version used for the virtual machine scale set. API version 2019-03-01 or higher is required for this feature.
   If you get a 'BadRequest' error with an error message stating "Could not find member 'scaleInPolicy' on object of type 'properties'", then check the API version used for the Virtual Machine Scale Set. API version 2019-03-01 or higher is required for this feature.
2. Wrong selection of VMs for scale-in
- Refer to the examples above. If your virtual machine scale set is a Zonal deployment, scale-in policy is applied first to the imbalanced Zones and then across the scale set once it is zone balanced. If the order of scale-in is not consistent with the examples above, raise a query with the virtual machine scale set team for troubleshooting.
   Refer to the examples above. If your Virtual Machine Scale Set is a zonal deployment, scale-in policy is applied first to the imbalanced zones and then across the scale set once it is zone balanced. If the order of scale-in is not consistent with the examples above, raise a query with the Virtual Machine Scale Set team for troubleshooting.
## Next steps
-Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on virtual machine scale sets.
+Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on Virtual Machine Scale Sets.
virtual-machine-scale-sets Virtual Machine Scale Sets Terminate Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md
Title: Terminate notification for Azure virtual machine scale set instances
-description: Learn how to enable termination notification for Azure virtual machine scale set instances
--
+ Title: Terminate notification for Azure Virtual Machine Scale Set instances
+description: Learn how to enable termination notification for Azure Virtual Machine Scale Set instances
++ Previously updated : 02/26/2020- Last updated : 11/22/2022+
-# Terminate notification for Azure virtual machine scale set instances
+# Terminate notification for Azure Virtual Machine Scale Set instances
Scale set instances can opt in to receive instance termination notifications and set a pre-defined delay timeout to the terminate operation. The termination notification is sent through Azure Metadata Service [Scheduled Events](../virtual-machines/windows/scheduled-events.md), which provides notifications for and delaying of impactful operations such as reboots and redeploy. The solution adds another event, Terminate, to the list of Scheduled Events, and the associated delay of the terminate event will depend on the delay limit as specified by users in their scale set model configurations.
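If you prefer to script this rather than use the portal, newer Azure CLI versions expose terminate notification parameters on `az vmss update`. The sketch below is a hedged example with placeholder names; parameter availability depends on your CLI version.

```azurecli
# Enable terminate notification with a 10-minute delay on an existing scale set
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --enable-terminate-notification true \
  --terminate-notification-time 10
```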
There are multiple ways of enabling termination notifications on your scale set
The following steps enable terminate notification when creating a new scale set.
-1. Go to **Virtual machine scale sets**.
+1. Go to **Virtual Machine Scale Sets**.
1. Select **+ Add** to create a new scale set. 1. Go to the **Management** tab. 1. Locate the **Instance termination** section.
After enabling *scheduledEventsProfile* on the scale set model and setting the *
### Azure PowerShell When creating a new scale set, you can enable termination notifications on the scale set by using the [New-AzVmssConfig](/powershell/module/az.compute/new-azvmssconfig) cmdlet.
-This sample script walks through the creation of a scale set and associated resources using the configuration file: [Create a complete virtual machine scale set](./scripts/powershell-sample-create-complete-scale-set.md). You can provide configure terminate notification by adding the parameters *TerminateScheduledEvents* and *TerminateScheduledEventNotBeforeTimeoutInMinutes* to the configuration object for creating scale set. The following example enables the feature with a delay timeout of 10 minutes.
+This sample script walks through the creation of a scale set and associated resources using the configuration file: [Create a complete Virtual Machine Scale Set](./scripts/powershell-sample-create-complete-scale-set.md). You can configure terminate notification by adding the parameters *TerminateScheduledEvents* and *TerminateScheduledEventNotBeforeTimeoutInMinutes* to the configuration object for creating the scale set. The following example enables the feature with a delay timeout of 10 minutes.
```azurepowershell-interactive New-AzVmssConfig `
If you are not getting any **Terminate** events through Scheduled Events, then c
After enabling *scheduledEventsProfile* on the scale set model and setting the *notBeforeTimeout*, update the individual instances to the [latest model](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) to reflect the changes. ## Next steps
-Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on virtual machine scale sets.
+Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on Virtual Machine Scale Sets.
virtual-machine-scale-sets Virtual Machine Scale Sets Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md
Title: Troubleshoot autoscale with Virtual Machine Scale Sets description: Troubleshoot autoscale with Virtual Machine Scale Sets. Understand typical problems encountered and how to resolve them.-+ Previously updated : 06/25/2020
-ms.reviwer: jushiman
Last updated : 11/22/2022+ # Troubleshooting autoscale with Virtual Machine Scale Sets
-**Problem** - you've created an autoscaling infrastructure in Azure Resource Manager using virtual machine scale sets - for example, by deploying a template like this one: https://github.com/Azure/azure-quickstart-templates/blob/master/application-workloads/python/vmss-bottle-autoscale/azuredeploy.parameters.json - you have your scale rules defined and it works great, except no matter how much load you put on the VMs, it doesn't autoscale.
+**Problem:** You've created an autoscaling infrastructure in Azure Resource Manager using Virtual Machine Scale Sets, for example by deploying a template like this one: https://github.com/Azure/azure-quickstart-templates/blob/master/application-workloads/python/vmss-bottle-autoscale/azuredeploy.parameters.json. You have your scale rules defined and it works great, except that no matter how much load you put on the VMs, it doesn't autoscale.
## Troubleshooting steps Some things to consider include: * How many vCPUs does each VM have, and are you loading each vCPU? The preceding sample Azure Quickstart template has a do_work.php script, which loads a single vCPU. If you're using a VM bigger than a single-vCPU VM size like Standard_A1 or D1, you'd need to run this load multiple times. Check how many vCPUs for your VMs by reviewing [Sizes for Windows virtual machines in Azure](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)
-* How many VMs in the virtual machine scale set, are you doing work on each one?
+* How many VMs are in the Virtual Machine Scale Set, and are you doing work on each one?
   A scale-out event only takes place when the average CPU across **all** the VMs in a scale set exceeds the threshold value, over the time interval defined in the autoscale rules. * Did you miss any scale events?
Some things to consider include:
It is easy to make mistakes, so start with a template like the one above which is proven to work, and make small incremental changes. * Can you manually scale in or out?
   Try redeploying the virtual machine scale set resource with a different "capacity" setting to change the number of VMs manually. An example template is here: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-scale-existing - you might need to edit the template to make sure it has the same machine size as your Scale Set uses. If you can successfully change the number of VMs manually, you then know the problem is isolated to autoscale.
   Try redeploying the Virtual Machine Scale Set resource with a different "capacity" setting to change the number of VMs manually. An example template is here: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-scale-existing - you might need to edit the template to make sure it has the same machine size as your scale set uses. If you can successfully change the number of VMs manually, you then know the problem is isolated to autoscale (a CLI sketch for this check follows this list).
* Check your Microsoft.Compute/virtualMachineScaleSet, and Microsoft.Insights resources in the [Azure Resource Explorer](https://resources.azure.com/)
- The Azure Resource Explorer is an indispensable troubleshooting tool that shows you the state of your Azure Resource Manager resources. Click on your subscription and look at the Resource Group you are troubleshooting. Under the Compute resource provider, look at the virtual machine scale set you created and check the Instance View, which shows you the state of a deployment. Also, check the instance view of VMs in the virtual machine scale set. Then, go into the Microsoft.Insights resource provider and check that the autoscale rules look right.
+ The Azure Resource Explorer is an indispensable troubleshooting tool that shows you the state of your Azure Resource Manager resources. Click on your subscription and look at the Resource Group you are troubleshooting. Under the Compute resource provider, look at the Virtual Machine Scale Set you created and check the Instance View, which shows you the state of a deployment. Also, check the instance view of VMs in the Virtual Machine Scale Set. Then, go into the Microsoft.Insights resource provider and check that the autoscale rules look right.
* Is the Diagnostic extension working and emitting performance data? **Update:** Azure autoscale has been enhanced to use a host-based metrics pipeline, which no longer requires a diagnostics extension to be installed. The next few paragraphs no longer apply if you create an autoscaling application using the new pipeline. An example of Azure templates that have been converted to use the host pipeline is available here: https://github.com/Azure/azure-quickstart-templates/blob/master/application-workloads/python/vmss-bottle-autoscale/azuredeploy.parameters.json.
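The following sketch shows one way to script the manual-scaling check mentioned above and to inspect the autoscale rules from the command line. It is illustrative only; `myResourceGroup`, `myScaleSet`, and `myAutoscaleSetting` are placeholder names for whatever your deployment uses.

```azurecli
# Manually change the instance count; if this works, the issue is isolated to autoscale
az vmss scale \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --new-capacity 5

# List the autoscale settings in the resource group, then inspect the rules and thresholds
az monitor autoscale list --resource-group myResourceGroup --output table
az monitor autoscale show --resource-group myResourceGroup --name myAutoscaleSetting
```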
virtual-machine-scale-sets Virtual Machine Scale Sets Upgrade Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md
Title: Modify an Azure virtual machine scale set
-description: Learn how to modify and update an Azure virtual machine scale set with the REST APIs, Azure PowerShell, and Azure CLI
+ Title: Modify an Azure Virtual Machine Scale Set
+description: Learn how to modify and update an Azure Virtual Machine Scale Set with the REST APIs, Azure PowerShell, and Azure CLI
Previously updated : 03/10/2020 Last updated : 11/22/2022
-# Modify a virtual machine scale set
+# Modify a Virtual Machine Scale Set
-Throughout the lifecycle of your applications, you may need to modify or update your virtual machine scale set. These updates may include how to update the configuration of the scale set, or change the application configuration. This article describes how to modify an existing scale set with the REST APIs, Azure PowerShell, or Azure CLI.
+Throughout the lifecycle of your applications, you may need to modify or update your Virtual Machine Scale Set. These updates may include changing the configuration of the scale set or changing the application configuration. This article describes how to modify an existing scale set with the REST APIs, Azure PowerShell, or Azure CLI.
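As a quick, hedged illustration of that workflow, the CLI commands below change a property on the scale set model and then apply the latest model to existing instances. The names are placeholders, and the tag update is only an example of a model change.

```azurecli
# Update a property on the scale set model (a tag, as a harmless example)
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --set tags.environment=test

# Bring existing instances up to date with the latest model (needed when the upgrade policy is Manual)
az vmss update-instances \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-ids "*"
```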
## Fundamental concepts
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
Title: Create an Azure scale set that uses Availability Zones
-description: Learn how to create Azure virtual machine scale sets that use Availability Zones for increased redundancy against outages
+description: Learn how to create Azure Virtual Machine Scale Sets that use Availability Zones for increased redundancy against outages
Previously updated : 08/08/2018 Last updated : 11/22/2022
-# Create a virtual machine scale set that uses Availability Zones
+# Create a Virtual Machine Scale Set that uses Availability Zones
-To protect your virtual machine scale sets from datacenter-level failures, you can create a scale set across Availability Zones. Azure regions that support Availability Zones have a minimum of three separate zones, each with their own independent power source, network, and cooling. For more information, see [Overview of Availability Zones](../availability-zones/az-overview.md).
+To protect your Virtual Machine Scale Sets from datacenter-level failures, you can create a scale set across Availability Zones. Azure regions that support Availability Zones have a minimum of three separate zones, each with their own independent power source, network, and cooling. For more information, see [Overview of Availability Zones](../availability-zones/az-overview.md).
## Availability considerations
With max spreading, the scale set spreads your VMs across as many fault domains
### Placement groups > [!IMPORTANT]
-> Placement groups only apply to virtual machine scale sets running in Uniform orchestration mode.
+> Placement groups only apply to Virtual Machine Scale Sets running in Uniform orchestration mode.
When you deploy a scale set, you also have the option to deploy with a single [placement group](./virtual-machine-scale-sets-placement-groups.md) per Availability Zone, or with multiple per zone. For regional (non-zonal) scale sets, the choice is to have a single placement group in the region or to have multiple in the region. If the scale set property called `singlePlacementGroup` is set to false, the scale set can be composed of multiple placement groups and has a range of 0-1,000 VMs. When set to the default value of true, the scale set is composed of a single placement group, and has a range of 0-100 VMs. For most workloads, we recommend multiple placement groups, which allows for greater scale. In API version *2017-12-01*, scale sets default to multiple placement groups for single-zone and cross-zone scale sets, but they default to single placement group for regional (non-zonal) scale sets.
To use best-effort zone balance, set *zoneBalance* to *false*. This setting is t
## Single-zone and zone-redundant scale sets
-When you deploy a virtual machine scale set, you can choose to use a single Availability Zone in a region, or multiple zones.
+When you deploy a Virtual Machine Scale Set, you can choose to use a single Availability Zone in a region, or multiple zones.
When you create a scale set in a single zone, you control which zone all those VM instances run in, and the scale set is managed and autoscales only within that zone. A zone-redundant scale set lets you create a single scale set that spans multiple zones. As VM instances are created, by default they are evenly balanced across zones. Should an interruption occur in one of the zones, a scale set does not automatically scale out to increase capacity. A best practice would be to configure autoscale rules based on CPU or memory usage. The autoscale rules would allow the scale set to respond to a loss of the VM instances in that one zone by scaling out new instances in the remaining operational zones.
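For reference, a minimal CLI sketch of a zone-redundant scale set follows. It assumes a region with three Availability Zones, uses placeholder names, and the image alias may differ by CLI version.

```azurecli
# Create a zone-redundant scale set spread across zones 1, 2, and 3 (placeholder names)
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image Ubuntu2204 \
  --instance-count 6 \
  --zones 1 2 3 \
  --admin-username azureuser \
  --generate-ssh-keys
```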
For a complete example of a zone-redundant scale set and network resources, see
## Next steps
-Now that you have created a scale set in an Availability Zone, you can learn how to [Deploy applications on virtual machine scale sets](tutorial-install-apps-cli.md) or [Use autoscale with virtual machine scale sets](tutorial-autoscale-cli.md).
+Now that you have created a scale set in an Availability Zone, you can learn how to [Deploy applications on Virtual Machine Scale Sets](tutorial-install-apps-cli.md) or [Use autoscale with Virtual Machine Scale Sets](tutorial-autoscale-cli.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Vs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-vs-create.md
Previously updated : 09/09/2019 Last updated : 11/22/2022
virtual-machine-scale-sets Vmss Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/vmss-support-help.md
Title: Azure virtual machine scale sets support and help options
-description: How to obtain help and support for questions or problems when you create solutions using Azure virtual machine scale sets.
+ Title: Azure Virtual Machine Scale Sets support and help options
+description: How to obtain help and support for questions or problems when you create solutions using Azure Virtual Machine Scale Sets.
Previously updated : 4/28/2021 Last updated : 11/22/2022+
-# Support and troubleshooting for Azure virtual machine scale sets
+# Support and troubleshooting for Azure Virtual Machine Scale Sets
-Here are suggestions for where you can get help when developing your Azure virtual machine scale sets solutions.
+Here are suggestions for where you can get help when developing your Azure Virtual Machine Scale Sets solutions.
## Self help troubleshooting
-Various articles explain how to determine, diagnose, and fix issues that you might encounter when using [Azure Virtual Machines](../virtual-machines/index.yml) and [virtual machine scale sets](overview.md).
+Various articles explain how to determine, diagnose, and fix issues that you might encounter when using [Azure Virtual Machines](../virtual-machines/index.yml) and [Virtual Machine Scale Sets](overview.md).
-- [Azure Virtual Machine scale set troubleshooting documentation](/troubleshoot/azure/virtual-machine-scale-sets/welcome-virtual-machine-scale-sets) -- [Frequently asked questions about Azure virtual machine scale sets](virtual-machine-scale-sets-faq.yml)
+- [Azure Virtual Machine Scale Set troubleshooting documentation](/troubleshoot/azure/virtual-machine-scale-sets/welcome-virtual-machine-scale-sets)
+- [Frequently asked questions about Azure Virtual Machine Scale Sets](virtual-machine-scale-sets-faq.yml)
## Post a question on Microsoft Q&A
If you can't find an answer to your problem using search, submit a new question
| Area | Tag | |-|-|
-| [Azure virtual machine scale sets](overview.md) | [azure-virtual-machine-scale-set](/answers/topics/azure-virtual-machines-scale-set.html) |
+| [Azure Virtual Machine Scale Sets](overview.md) | [azure-virtual-machine-scale-set](/answers/topics/azure-virtual-machines-scale-set.html) |
| [Azure Virtual Machines](../virtual-machines/linux/overview.md) | [azure-virtual-machines](/answers/topics/azure-virtual-machines.html) | | [Azure SQL Virtual Machines](/azure/azure-sql/virtual-machines/index) | [azure-sql-virtual-machines](/answers/topics/azure-sql-virtual-machines.html)| | [Azure Virtual Machine backup](../virtual-machines/backup-recovery.md) | [azure-virtual-machine-backup](/answers/questions/36892/azure-virtual-machine-backups.html) |
Explore the range of [Azure support options and choose the plan](https://azure.m
## Create a GitHub issue
-If you need help with the language and tools used to develop and manage Azure virtual machine scale sets, open an issue in its repository on GitHub.
+If you need help with the language and tools used to develop and manage Azure Virtual Machine Scale Sets, open an issue in its repository on GitHub.
| Library | GitHub issues URL| | | |
If you need help with the language and tools used to develop and manage Azure vi
Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute).
-News and information about Azure virtual machine scale sets is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/).
+News and information about Azure Virtual Machine Scale Sets is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/).
## Next steps
-Learn more about [Azure virtual machine scale sets](overview.md)
+Learn more about [Azure Virtual Machine Scale Sets](overview.md)
virtual-machine-scale-sets Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/whats-new.md
description: Learn about what's new for Virtual Machine Scale Sets in Azure.
Previously updated : 10/12/2022 Last updated : 11/22/2022+
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set-flex.md
Previously updated : 03/28/2022 Last updated : 11/22/2022
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set.md
Previously updated : 08/09/2021 Last updated : 11/22/2022
virtual-machines Capacity Reservation Associate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-vm.md
Previously updated : 01/03/2022 Last updated : 11/22/2022
virtual-machines Capacity Reservation Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-create.md
Previously updated : 08/09/2021 Last updated : 11/22/2022
virtual-machines Capacity Reservation Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-modify.md
Previously updated : 08/09/2021 Last updated : 11/22/2022
virtual-machines Capacity Reservation Overallocate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overallocate.md
Previously updated : 08/09/2021 Last updated : 11/22/2022
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
Previously updated : 08/09/2021 Last updated : 11/22/2022
virtual-machines Capacity Reservation Remove Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-remove-virtual-machine-scale-set.md
Title: Remove a virtual machine scale set association from a Capacity Reservation group
-description: Learn how to remove a virtual machine scale set from a Capacity Reservation group.
+ Title: Remove a Virtual Machine Scale Set association from a Capacity Reservation group
+description: Learn how to remove a Virtual Machine Scale Set from a Capacity Reservation group.
Previously updated : 08/09/2021 Last updated : 11/22/2022
-# Remove a virtual machine scale set association from a Capacity Reservation group
+# Remove a Virtual Machine Scale Set association from a Capacity Reservation group
**Applies to:** :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
-This article walks you through removing a virtual machine scale set association from a Capacity Reservation group. To learn more about capacity reservations, see the [overview article](capacity-reservation-overview.md).
+This article walks you through removing a Virtual Machine Scale Set association from a Capacity Reservation group. To learn more about capacity reservations, see the [overview article](capacity-reservation-overview.md).
Because both the VM and the underlying Capacity Reservation logically occupy capacity, Azure imposes some constraints on this process to avoid ambiguous allocation states and unexpected errors. There are two ways to change an association: -- Option 1: Deallocate the Virtual machine scale set, change the Capacity Reservation group property at the scale set level, and then update the underlying VMs
+- Option 1: Deallocate the Virtual Machine Scale Set, change the Capacity Reservation group property at the scale set level, and then update the underlying VMs
- Option 2: Update the reserved quantity to zero and then change the Capacity Reservation group property
-## Deallocate the Virtual machine scale set
+## Deallocate the Virtual Machine Scale Set
-The first option is to deallocate the virtual machine scale set, change the Capacity Reservation group property at the scale set level, and then update the underlying VMs.
+The first option is to deallocate the Virtual Machine Scale Set, change the Capacity Reservation group property at the scale set level, and then update the underlying VMs.
Go to [upgrade policies](#upgrade-policies) for more information about automatic, rolling, and manual upgrades. ### [API](#tab/api1)
-1. Deallocate the virtual machine scale set
+1. Deallocate the Virtual Machine Scale Set
```rest POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMScaleSetName}/deallocate?api-version=2021-04-01 ```
-1. Update the virtual machine scale set to remove association with the Capacity Reservation group
+1. Update the Virtual Machine Scale Set to remove association with the Capacity Reservation group
```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMScaleSetName}/update?api-version=2021-04-01 ```
- In the request body, set the `capacityReservationGroup` property to null to remove the virtual machine scale set association to the group:
+ In the request body, set the `capacityReservationGroup` property to null to remove the Virtual Machine Scale Set association to the group:
```json {
Go to [upgrade policies](#upgrade-policies) for more information about automatic
### [CLI](#tab/cli1)
-1. Deallocate the virtual machine scale set. The following command will deallocate all virtual machines within the scale set:
+1. Deallocate the Virtual Machine Scale Set. The following command will deallocate all virtual machines within the scale set:
```azurecli-interactive az vmss deallocate
Go to [upgrade policies](#upgrade-policies) for more information about automatic
### [PowerShell](#tab/powershell1)
-1. Deallocate the virtual machine scale set. The following command will deallocate all virtual machines within the scale set:
+1. Deallocate the Virtual Machine Scale Set. The following command will deallocate all virtual machines within the scale set:
```powershell-interactive Stop-AzVmss
Go to [upgrade policies](#upgrade-policies) for more information about automatic
 Note that the `capacity` property is set to 0.
-1. Update the virtual machine scale set to remove the association with the Capacity Reservation group
+1. Update the Virtual Machine Scale Set to remove the association with the Capacity Reservation group
```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMScaleSetName}/update?api-version=2021-04-01
To learn more, go to Azure PowerShell commands [New-AzCapacityReservation](/powe
- **Automatic Upgrade** - In this mode, the scale set VM instances are automatically dissociated from the Capacity Reservation group without any further action from you. - **Rolling Upgrade** - In this mode, the scale set VM instances are dissociated from the Capacity Reservation group without any further action from you. However, they are updated in batches with an optional pause time between them.-- **Manual Upgrade** - In this mode, nothing happens to the scale set VM instances when the virtual machine scale set is updated. You will need to individually remove each scale set VM by [upgrading it with the latest Scale Set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model).
+- **Manual Upgrade** - In this mode, nothing happens to the scale set VM instances when the Virtual Machine Scale Set is updated. You will need to individually remove each scale set VM by [upgrading it with the latest Scale Set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model); a CLI sketch follows this list.
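For the manual upgrade case, a hedged CLI sketch looks like the following. The resource names are placeholders and the instance IDs depend on your scale set.

```azurecli
# Bring specific instances up to date with the latest scale set model,
# which removes their association with the Capacity Reservation group
az vmss update-instances \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-ids 0 1 2
```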
## Next steps
virtual-machines Capacity Reservation Remove Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-remove-vm.md
Previously updated : 08/09/2021 Last updated : 11/22/2022
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-In this article, you will learn how to prepare a Red Hat Enterprise Linux (RHEL) virtual machine for use in Azure. The versions of RHEL that are covered in this article are 6.7+ and 7.1+. The hypervisors for preparation that are covered in this article are Hyper-V, kernel-based virtual machine (KVM), and VMware. For more information about eligibility requirements for participating in Red Hat's Cloud Access program, see [Red Hat's Cloud Access website](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) and [Running RHEL on Azure](https://access.redhat.com/ecosystem/ccsp/microsoft-azure). For ways to automate building RHEL images, see [Azure Image Builder](../image-builder-overview.md).
+In this article, you'll learn how to prepare a Red Hat Enterprise Linux (RHEL) virtual machine for use in Azure. The versions of RHEL that are covered in this article are 6.7+ and 7.1+. The hypervisors for preparation that are covered in this article are Hyper-V, kernel-based virtual machine (KVM), and VMware. For more information about eligibility requirements for participating in Red Hat's Cloud Access program, see [Red Hat's Cloud Access website](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) and [Running RHEL on Azure](https://access.redhat.com/ecosystem/ccsp/microsoft-azure). For ways to automate building RHEL images, see [Azure Image Builder](../image-builder-overview.md).
## Hyper-V Manager This section shows you how to prepare a [RHEL 6](#rhel-6-using-hyper-v-manager), [RHEL 7](#rhel-7-using-hyper-v-manager), or [RHEL 8](#rhel-8-using-hyper-v-manager) virtual machine using Hyper-V Manager. ### Prerequisites
-This section assumes that you have already obtained an ISO file from the Red Hat website and installed the RHEL image to a virtual hard disk (VHD). For more details about how to use Hyper-V Manager to install an operating system image, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
+This section assumes that you've already obtained an ISO file from the Red Hat website and installed the RHEL image to a virtual hard disk (VHD). For more details about how to use Hyper-V Manager to install an operating system image, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
**RHEL installation notes**
-* Azure does not support the VHDX format. Azure supports only fixed VHD. You can use Hyper-V Manager to convert the disk to VHD format, or you can use the convert-vhd cmdlet. If you use VirtualBox, select **Fixed size** as opposed to the default dynamically allocated option when you create the disk.
+* Azure doesn't support the VHDX format. Azure supports only fixed VHD. You can use Hyper-V Manager to convert the disk to VHD format, or you can use the convert-vhd cmdlet. If you use VirtualBox, select **Fixed size** as opposed to the default dynamically allocated option when you create the disk.
* Azure supports Gen1 (BIOS boot) and Gen2 (UEFI boot) virtual machines. * The maximum size that's allowed for the VHD is 1,023 GB.
-* Logical Volume Manager (LVM) is supported and may be used on the OS disk or data disks in Azure virtual machines. However, in general it is recommended to use standard partitions on the OS disk rather than LVM. This practice will avoid LVM name conflicts with cloned virtual machines, particularly if you ever need to attach an operating system disk to another identical virtual machine for troubleshooting. See also [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) and [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) documentation.
+* The vfat kernel module must be enabled in the kernel.
+* Logical Volume Manager (LVM) is supported and may be used on the OS disk or data disks in Azure virtual machines. However, in general it's recommended to use standard partitions on the OS disk rather than LVM. This practice will avoid LVM name conflicts with cloned virtual machines, particularly if you ever need to attach an operating system disk to another identical virtual machine for troubleshooting. See also [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) and [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) documentation.
* **Kernel support for mounting Universal Disk Format (UDF) file systems is required**. At first boot on Azure, the UDF-formatted media that is attached to the guest passes the provisioning configuration to the Linux virtual machine. The Azure Linux Agent must be able to mount the UDF file system to read its configuration and provision the virtual machine; without this, provisioning will fail.
-* Do not configure a swap partition on the operating system disk. More information about this can be found in the following steps.
+* Don't configure a swap partition on the operating system disk. More information about this can be found in the following steps.
* All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you must ensure that the raw disk size is a multiple of 1 MB before conversion. More details can be found in the steps below, and a qemu-img sketch follows the note below. See also [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information. +
+> [!NOTE]
+> **(_Cloud-init >= 21.2 removes the udf requirement._)** However, without the udf module enabled, the cdrom will not mount during provisioning, which prevents custom data from being applied. A workaround is to apply custom data by using user data. However, unlike custom data, user data is not encrypted. For more information, see https://cloudinit.readthedocs.io/en/latest/topics/format.html.
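When preparing the image outside Hyper-V (for example, from a raw KVM or VirtualBox disk), one common way to satisfy both the fixed-VHD and the 1 MB alignment requirements is the qemu-img sketch below. The file names and the 16 GiB size are placeholders; round your own disk size up to the nearest MiB before converting.

```console
# Round the raw disk up to a 1 MiB-aligned size (16 GiB shown as a placeholder)
qemu-img resize -f raw rhel.raw 16G

# Convert to a fixed-size VHD (vpc format) and keep qemu-img from changing the size
qemu-img convert -f raw -o subformat=fixed,force_size -O vpc rhel.raw rhel.vhd
```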
++ ### RHEL 6 using Hyper-V Manager 1. In Hyper-V Manager, select the virtual machine.
This section assumes that you have already obtained an ISO file from the Red Hat
rhgb quiet crashkernel=auto ```
- Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more. This configuration might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more. This configuration might be problematic on smaller virtual machine sizes.
1. Ensure that the secure shell (SSH) server is installed and configured to start at boot time, which is usually the default. Modify /etc/ssh/sshd_config to include the following line:
This section assumes that you have already obtained an ISO file from the Red Hat
# sudo chkconfig waagent on ```
- Installing the WALinuxAgent package removes the NetworkManager and NetworkManager-gnome packages if they were not already removed in step 3.
+ Installing the WALinuxAgent package removes the NetworkManager and NetworkManager-gnome packages if they weren't already removed in step 3.
-1. Do not create swap space on the operating system disk.
+1. Don't create swap space on the operating system disk.
The Azure Linux Agent can automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. Note that the local resource disk is a temporary disk and that it might be emptied if the virtual machine is deprovisioned. After you install the Azure Linux Agent in the previous step, modify the following parameters in /etc/waagent.conf appropriately:
This section assumes that you have already obtained an ISO file from the Red Hat
1. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure: ```console
- # Note: if you are migrating a specific virtual machine and do not wish to create a generalized image,
+ # Note: if you're migrating a specific virtual machine and don't wish to create a generalized image,
# skip the deprovision step # sudo waagent -force -deprovision
This section assumes that you have already obtained an ISO file from the Red Hat
rhgb quiet crashkernel=auto ```
- Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
-1. After you are done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
+1. After you're done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
```console # sudo grub2-mkconfig -o /boot/grub2/grub.cfg
This section assumes that you have already obtained an ISO file from the Red Hat
1. Configure waagent for cloud-init: ```console
- sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=cloud-init/g' /etc/waagent.conf
+ sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=auto/g' /etc/waagent.conf
sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf ``` > [!NOTE]
- > If you are migrating a specific virtual machine and do not wish to create a generalized image, set `Provisioning.Agent=disabled` on the `/etc/waagent.conf` config.
+ > If you are migrating a specific virtual machine and don't wish to create a generalized image, set `Provisioning.Agent=disabled` on the `/etc/waagent.conf` config.
1. Configure mounts:
This section assumes that you have already obtained an ISO file from the Red Hat
``` 1. Swap configuration
- Do not create swap space on the operating system disk.
+ Don't create swap space on the operating system disk.
   Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init. You **must not** use the Linux Agent to format the resource disk or create the swap file. Modify the following parameters in `/etc/waagent.conf` appropriately:
This section assumes that you have already obtained an ISO file from the Red Hat
Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure: > [!CAUTION]
- > If you are migrating a specific virtual machine and do not wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable, this step is intended only to create a generalized image.
+ > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable, this step is intended only to create a generalized image.
```console # sudo rm -f /var/log/waagent.log # sudo cloud-init clean
This section assumes that you have already obtained an ISO file from the Red Hat
# grub2-editenv - unset kernelopts ```
- 1. Edit `/etc/default/grub` in a text editor, and add the following paramters:
+ 1. Edit `/etc/default/grub` in a text editor, and add the following parameters:
```config-grub GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 earlyprintk=ttyS0 net.ifnames=0"
This section assumes that you have already obtained an ISO file from the Red Hat
rhgb quiet crashkernel=auto ```
- Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
1. After you are done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
This section assumes that you have already obtained an ISO file from the Red Hat
sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf ``` > [!NOTE]
- > If you are migrating a specific virtual machine and do not wish to create a generalized image, set `Provisioning.Agent=disabled` on the `/etc/waagent.conf` config.
+ > If you are migrating a specific virtual machine and don't wish to create a generalized image, set `Provisioning.Agent=disabled` on the `/etc/waagent.conf` config.
1. Configure mounts:
This section assumes that you have already obtained an ISO file from the Red Hat
``` 1. Swap configuration
- Do not create swap space on the operating system disk.
+ Don't create swap space on the operating system disk.
Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Modify the following parameters in `/etc/waagent.conf` appropriately:
This section assumes that you have already obtained an ISO file from the Red Hat
# logout ``` > [!CAUTION]
- > If you are migrating a specific virtual machine and do not wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable, this step is intended only to create a generalized image.
+ > If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable; this step is intended only to create a generalized image.
1. Click **Action** > **Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [**uploaded to Azure**](./upload-vhd.md#option-1-upload-a-vhd).
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
rhgb quiet crashkernel=auto ```
- Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
1. Add Hyper-V modules to initramfs:
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
1. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure: ```console
- # Note: if you are migrating a specific virtual machine and do not wish to create a generalized image,
+ # Note: if you are migrating a specific virtual machine and don't wish to create a generalized image,
# skip the deprovision step # sudo rm -rf /var/lib/waagent/ # sudo rm -f /var/log/waagent.log
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
rhgb quiet crashkernel=auto ```
- Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
1. After you are done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
Follow the steps in 'Prepare a RHEL 7 virtual machine from Hyper-V Manager', ste
1. Swap configuration
- Do not create swap space on the operating system disk.
+ Don't create swap space on the operating system disk.
Follow the steps in 'Prepare a RHEL 7 virtual machine from Hyper-V Manager', step 13, 'Swap configuration'
This section shows you how to prepare a [RHEL 6](#rhel-6-using-vmware) or [RHEL
This section assumes that you have already installed a RHEL virtual machine in VMware. For details about how to install an operating system in VMware, see [VMware Guest Operating System Installation Guide](https://partnerweb.vmware.com/GOSIG/home.html). * When you install the Linux operating system, we recommend that you use standard partitions rather than LVM, which is often the default for many installations. This will avoid LVM name conflicts with cloned virtual machines, particularly if an operating system disk ever needs to be attached to another virtual machine for troubleshooting. LVM or RAID can be used on data disks if preferred.
-* Do not configure a swap partition on the operating system disk. You can configure the Linux agent to create a swap file on the temporary resource disk. You can find more information about this in the steps that follow.
+* Don't configure a swap partition on the operating system disk. You can configure the Linux agent to create a swap file on the temporary resource disk. You can find more information about this in the steps that follow.
* When you create the virtual hard disk, select **Store virtual disk as a single file**. ### RHEL 6 using VMware
This section assumes that you have already installed a RHEL virtual machine in V
rhgb quiet crashkernel=auto ```
- Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
1. Add Hyper-V modules to initramfs:
This section assumes that you have already installed a RHEL virtual machine in V
# sudo chkconfig waagent on ```
-1. Do not create swap space on the operating system disk.
+1. Don't create swap space on the operating system disk.
The Azure Linux Agent can automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. Note that the local resource disk is a temporary disk, and it might be emptied if the virtual machine is deprovisioned. After you install the Azure Linux Agent in the previous step, modify the following parameters in `/etc/waagent.conf` appropriately:
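The exact parameters aren't shown in this digest. As a sketch under the assumption of typical agent-managed swap settings (the mount point and the 2048 MB size are examples only), the relevant entries in `/etc/waagent.conf` would look like this:

```config
# Example values only: enable agent-managed swap on the temporary resource disk.
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048
```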
This section assumes that you have already installed a RHEL virtual machine in V
1. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure: ```console
- # Note: if you are migrating a specific virtual machine and do not wish to create a generalized image,
+ # Note: if you are migrating a specific virtual machine and don't wish to create a generalized image,
# skip the deprovision step # sudo rm -rf /var/lib/waagent/ # sudo rm -f /var/log/waagent.log
This section assumes that you have already installed a RHEL virtual machine in V
rhgb quiet crashkernel=auto ```
- Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
1. After you are done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
This section assumes that you have already installed a RHEL virtual machine in V
1. Swap configuration
- Do not create swap space on the operating system disk.
+ Don't create swap space on the operating system disk.
Follow the steps in 'Prepare a RHEL 7 virtual machine from Hyper-V Manager', step 13, 'Swap configuration' 1. If you want to unregister the subscription, run the following command:
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
# Use graphical install text
- # Do not run the Setup Agent on first boot
+ # Don't run the Setup Agent on first boot
firstboot --disable # Keyboard layouts
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
1. Wait for the installation to finish. When it's finished, the virtual machine will be shut down automatically. Your Linux VHD is now ready to be uploaded to Azure. ## Known issues
-### The Hyper-V driver could not be included in the initial RAM disk when using a non-Hyper-V hypervisor
+### The Hyper-V driver couldn't be included in the initial RAM disk when using a non-Hyper-V hypervisor
In some cases, Linux installers might not include the drivers for Hyper-V in the initial RAM disk (initrd or initramfs) unless Linux detects that it is running in a Hyper-V environment.
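As a hedged sketch of a common remediation on dracut-based distributions such as RHEL (the drop-in file name is an assumption), you can force the Hyper-V drivers into the initial RAM disk and rebuild it:

```console
# Sketch only: include the Hyper-V drivers when rebuilding the initial RAM disk.
# The drop-in file name is an example.
sudo tee /etc/dracut.conf.d/hyperv.conf > /dev/null <<'EOF'
add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
EOF
sudo dracut -f -v
```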
virtual-machines Virtual Machine Scale Sets Maintenance Control Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-cli.md
Title: Maintenance control for OS image upgrades on Azure virtual machine scale sets using Azure CLI
-description: Learn how to control when automatic OS image upgrades are rolled out to your Azure virtual machine scale sets using Maintenance control and Azure CLI.
+ Title: Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using Azure CLI
+description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control and Azure CLI.
Previously updated : 06/01/2021 Last updated : 11/22/2022 ms.devlang: azurecli #pmcontact: PPHILLIPS
-# Maintenance control for OS image upgrades on Azure virtual machine scale sets using Azure CLI
+# Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using Azure CLI
-Maintenance control lets you decide when to apply automatic guest OS image upgrades to your virtual machine scale sets. This topic covers the Azure CLI options for Maintenance control. For more information on using Maintenance control, see [Maintenance control for Azure virtual machine scale sets](virtual-machine-scale-sets-maintenance-control.md).
+Maintenance control lets you decide when to apply automatic guest OS image upgrades to your Virtual Machine Scale Sets. This topic covers the Azure CLI options for Maintenance control. For more information on using Maintenance control, see [Maintenance control for Azure Virtual Machine Scale Sets](virtual-machine-scale-sets-maintenance-control.md).
## Create a maintenance configuration
az maintenance configuration create \
## Assign the configuration
-Use `az maintenance assignment create` to assign the configuration to your virtual machine scale set.
+Use `az maintenance assignment create` to assign the configuration to your Virtual Machine Scale Set.
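A hedged sketch of that assignment follows; the resource group, scale set name, configuration name, and maintenance configuration ID are placeholders, not values from the article:

```azurecli-interactive
# Placeholder names and IDs; substitute your own values.
az maintenance assignment create \
   --resource-group myMaintenanceRG \
   --location eastus2 \
   --resource-name myVMSS \
   --resource-type virtualMachineScaleSets \
   --provider-name Microsoft.Compute \
   --configuration-assignment-name myMaintenanceConfig \
   --maintenance-configuration-id "/subscriptions/<subscription-id>/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myMaintenanceConfig"
```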
## Enable automatic OS upgrade
-You can enable automatic OS upgrades for each virtual machine scale set that is going to use maintenance control. For more information about enabling automatic OS upgrades on your virtual machine scale set, see [Azure virtual machine scale set automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md).
+You can enable automatic OS upgrades for each Virtual Machine Scale Set that is going to use maintenance control. For more information about enabling automatic OS upgrades on your Virtual Machine Scale Set, see [Azure Virtual Machine Scale Set automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md).
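For example, one way to turn this on for an existing scale set is sketched below with placeholder names; the linked article remains the authoritative guidance:

```azurecli-interactive
# Placeholder names; enables automatic OS image upgrades on an existing scale set.
az vmss update \
   --resource-group myMaintenanceRG \
   --name myVMSS \
   --set UpgradePolicy.AutomaticOSUpgradePolicy.EnableAutomaticOSUpgrade=true
```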
## Next steps
virtual-machines Virtual Machine Scale Sets Maintenance Control Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-portal.md
Title: Maintenance control for OS image upgrades on Azure virtual machine scale sets using Azure portal
-description: Learn how to control when automatic OS image upgrades are rolled out to your Azure virtual machine scale sets using Maintenance control and Azure portal.
+ Title: Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using Azure portal
+description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control and Azure portal.
Previously updated : 06/01/2021 Last updated : 11/22/2022 #pmcontact: PPHILLIPS
-# Maintenance control for OS image upgrades on Azure virtual machine scale sets using Azure portal
+# Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using Azure portal
-Maintenance control lets you decide when to apply automatic guest OS image upgrades to your virtual machine scale sets. This topic covers the Azure portal options for Maintenance control. For more information on using Maintenance control, see [Maintenance control for Azure virtual machine scale sets](virtual-machine-scale-sets-maintenance-control.md).
+Maintenance control lets you decide when to apply automatic guest OS image upgrades to your Virtual Machine Scale Sets. This topic covers the Azure portal options for Maintenance control. For more information on using Maintenance control, see [Maintenance control for Azure Virtual Machine Scale Sets](virtual-machine-scale-sets-maintenance-control.md).
## Create a maintenance configuration
On the details page of the maintenance configuration, select **Assignments** and
![Screenshot showing how to assign a resource](media/virtual-machine-scale-sets-maintenance-control-portal/maintenance-configurations-add-assignment.png)
-Select the virtual machine scale set resources that you want the maintenance configuration assigned to and select **Ok**.
+Select the Virtual Machine Scale Set resources that you want the maintenance configuration assigned to and select **Ok**.
## Next steps
virtual-machines Virtual Machine Scale Sets Maintenance Control Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-powershell.md
Title: Maintenance control for OS image upgrades on Azure virtual machine scale sets using PowerShell
-description: Learn how to control when automatic OS image upgrades are rolled out to your Azure virtual machine scale sets using Maintenance control and PowerShell.
+ Title: Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using PowerShell
+description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control and PowerShell.
Previously updated : 09/11/2020 Last updated : 11/22/2022 #pmcontact: PPHILLIPS
-# Maintenance control for OS image upgrades on Azure virtual machine scale sets using PowerShell
+# Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using PowerShell
**Applies to:** :heavy_check_mark: Uniform scale sets
-Maintenance control lets you decide when to apply automatic guest OS image upgrades to your virtual machine scale sets. This topic covers the Azure PowerShell options for Maintenance control. For more information on using Maintenance control, see [Maintenance control for Azure virtual machine scale sets](virtual-machine-scale-sets-maintenance-control.md).
+Maintenance control lets you decide when to apply automatic guest OS image upgrades to your Virtual Machine Scale Sets. This topic covers the Azure PowerShell options for Maintenance control. For more information on using Maintenance control, see [Maintenance control for Azure Virtual Machine Scale Sets](virtual-machine-scale-sets-maintenance-control.md).
## Enable the PowerShell module
You can query for available maintenance configurations using [Get-AzMaintenanceC
Get-AzMaintenanceConfiguration | Format-Table -Property Name,Id ```
-## Associate your virtual machine scale set to the maintenance configuration
+## Associate your Virtual Machine Scale Set to the maintenance configuration
-A virtual machine scale set can be associated to any Maintenance configuration regardless of the region and subscription of the Maintenance configuration. By opting in to the Maintenance configuration, new OS image updates for the scale set will be automatically scheduled on the next available maintenance window.
+A Virtual Machine Scale Set can be associated with any Maintenance configuration regardless of the region and subscription of the Maintenance configuration. By opting in to the Maintenance configuration, new OS image updates for the scale set will be automatically scheduled in the next available maintenance window.
-Use [New-AzConfigurationAssignment](/powershell/module/az.maintenance/new-azconfigurationassignment) to associate your virtual machine scale set the maintenance configuration.
+Use [New-AzConfigurationAssignment](/powershell/module/az.maintenance/new-azconfigurationassignment) to associate your Virtual Machine Scale Set with the maintenance configuration.
```azurepowershell-interactive New-AzConfigurationAssignment `
New-AzConfigurationAssignment `
## Enable automatic OS upgrade
-You can enable automatic OS upgrades for each virtual machine scale set that is going to use maintenance control. For more information about enabling automatic OS upgrades on your virtual machine scale set, see [Azure virtual machine scale set automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md).
+You can enable automatic OS upgrades for each Virtual Machine Scale Set that is going to use maintenance control. For more information about enabling automatic OS upgrades on your Virtual Machine Scale Set, see [Azure Virtual Machine Scale Set automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md).
## Next steps
virtual-machines Virtual Machine Scale Sets Maintenance Control Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-template.md
Title: Maintenance control for OS image upgrades on Azure virtual machine scale sets using an Azure Resource Manager template
-description: Learn how to control when automatic OS image upgrades are rolled out to your Azure virtual machine scale sets using Maintenance control and an Azure Resource Manager (ARM) template.
+ Title: Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using an Azure Resource Manager template
+description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control and an Azure Resource Manager (ARM) template.
Previously updated : 08/31/2022 Last updated : 11/22/2022 #pmcontact: PPHILLIPS
-# Maintenance control for OS image upgrades on Azure virtual machine scale sets using an ARM template
+# Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using an ARM template
-Maintenance control lets you decide when to apply automatic OS image upgrades to your virtual machine scale sets. For more information on using Maintenance control, see [Maintenance control for Azure virtual machine scale sets](virtual-machine-scale-sets-maintenance-control.md).
+Maintenance control lets you decide when to apply automatic OS image upgrades to your Virtual Machine Scale Sets. For more information on using Maintenance control, see [Maintenance control for Azure Virtual Machine Scale Sets](virtual-machine-scale-sets-maintenance-control.md).
This article explains how you can use an Azure Resource Manager (ARM) template to create a maintenance configuration. You will learn how to:
virtual-machines Virtual Machine Scale Sets Maintenance Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control.md
Title: Overview of Maintenance control for OS image upgrades on Azure virtual machine scale sets
-description: Learn how to control when automatic OS image upgrades are rolled out to your Azure virtual machine scale sets using Maintenance control.
+ Title: Overview of Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets
+description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control.
Previously updated : 09/11/2020 Last updated : 11/22/2022 #pmcontact: PPHILLIPS
-# Maintenance control for Azure virtual machine scale sets
+# Maintenance control for Azure Virtual Machine Scale Sets
**Applies to:** :heavy_check_mark: Uniform scale sets
-Manage [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) for your virtual machine scale sets using maintenance control.
+Manage [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) for your Virtual Machine Scale Sets using maintenance control.
-Maintenance control lets you decide when to apply updates to OS disks in your virtual machine scale sets through an easier and more predictable experience.
+Maintenance control lets you decide when to apply updates to OS disks in your Virtual Machine Scale Sets through an easier and more predictable experience.
Maintenance configurations work across subscriptions and resource groups. The entire workflow comes down to these steps: - Create a maintenance configuration.-- Associate a virtual machine scale set to a maintenance configuration.
+- Associate a Virtual Machine Scale Set to a maintenance configuration.
- Enable automatic OS upgrades.
You can create and manage maintenance configurations using any of the following
## Next steps > [!div class="nextstepaction"]
-> [Virtual machine scale set maintenance control by using PowerShell](virtual-machine-scale-sets-maintenance-control-powershell.md)
+> [Virtual Machine Scale Set maintenance control by using PowerShell](virtual-machine-scale-sets-maintenance-control-powershell.md)
virtual-machines Disaster Recovery Sap Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/disaster-recovery-sap-guide.md
Previously updated : 11/21/2022 Last updated : 11/22/2022 # Disaster recovery guidelines for SAP application
-To configure Disaster Recovery (DR) for SAP workload on Azure, you need to test, fine tune and update the process regularly. Testing disaster recovery helps to identify the sequence of the dependent services that are required before you trigger SAP workload DR failover or start the system on the secondary site. Organizations usually have their SAP systems connected to Active Directory (AD) and Domain Name System (DNS) services to function correctly. When you set up DR for your SAP workload, you ensure AD and DNS services are functioning before your recover SAP and other non-SAP systems, to ensure the application functions correctly. For guidance on protecting Active Directory and DNS, learn [how to protect Active Directory and DNS](../../../site-recovery/site-recovery-active-directory.md). The recommendation for SAP application described in this document is at abstract level, you need to design your DR strategy based on your specific setup and document the end-to-end DR scenario.
+To configure Disaster Recovery (DR) for SAP workload on Azure, you need to test, fine-tune, and update the process regularly. Testing disaster recovery helps in identifying the sequence of dependent services that are required before you can trigger SAP workload DR failover or start the system on the secondary site. Organizations usually have their SAP systems connected to Active Directory (AD) and Domain Name System (DNS) services to function correctly. When you set up DR for your SAP workload, ensure AD and DNS services are functioning before you recover SAP and other non-SAP systems, so that the application functions correctly. For guidance on protecting Active Directory and DNS, learn [how to protect Active Directory and DNS](../../../site-recovery/site-recovery-active-directory.md). The DR recommendations for SAP applications described in this document are at an abstract level; you need to design your DR strategy based on your specific setup and document the end-to-end scenario.
## DR recommendation for SAP workloads
For SAP systems running on virtual machines, you can use [Azure Site Recovery](.
SAP Web Dispatcher component works as a load balancer for SAP traffic among SAP application servers. You have different options to achieve high availability of SAP Web Dispatcher component in the primary region. For more information about this option, see [High Availability of the SAP Web Dispatcher](https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/683d6a1797a34730a6e005d1e8de6f22/489a9a6b48c673e8e10000000a42189b.html) and [SAP Web dispatcher HA setup on Azure](https://blogs.sap.com/2022/04/02/sap-on-azure-sap-web-dispatcher-highly-availability-setup-and-virtual-hostname-ip-configuration-with-azure-load-balancer/). -- Option 1: High availability using cluster solution-- Option 2: High availability with several parallel web SAP Web Dispatchers.
+- Option 1: High availability using cluster solution.
+- Option 2: High availability with parallel SAP Web Dispatchers.
-To achieve DR for highly available SAP Web Dispatcher setup in primary region, you can use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md). In the above reference architecture, parallel web dispatchers (option 2) are running in the primary region and Azure Site Recovery is used to achieve DR. If you have configured SAP Web Dispatcher using option 1 in primary region, you need to make some additional changes after failover to have similar HA setup on the DR region. As the configuration of SAP Web Dispatcher high availability with cluster solution is configured in similar manner to SAP central services. Follow the same guidelines as mentioned for SAP Central Services.
+To achieve DR for a highly available SAP Web Dispatcher setup in the primary region, you can use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md). For parallel web dispatchers (option 2) running in the primary region, you can configure Azure Site Recovery to achieve DR. But if you have configured SAP Web Dispatcher using option 1 in the primary region, you need to make some additional changes after failover to have a similar HA setup in the DR region. SAP Web Dispatcher high availability with a cluster solution is configured in a similar manner to SAP Central Services, so follow the same guidelines as mentioned for SAP Central Services.
### SAP Central Services
-The SAP central services, which contain the enqueue and the message server is one of the SPOFs of your SAP application. In an SAP system, there can be only one such instance, and it can be configured for high availability. Read [High Availability for SAP Central Service](sap-planning-supported-configurations.md#high-availability-for-sap-central-service) to understand the different high availability solution for SAP workload on Azure.
+The SAP Central Services instance, which contains the enqueue and message servers, is one of the SPOFs of your SAP application. In an SAP system, there can be only one such instance, and it can be configured for high availability. Read [High Availability for SAP Central Service](sap-planning-supported-configurations.md#high-availability-for-sap-central-service) to understand the different high availability solutions for SAP workload on Azure.
-Configuring high availability for SAP Central Services protects resources and processes from local incidents. To achieve DR for SAP Central Services, you can use Azure Site Recovery. Alongside Azure Site Recovery to replicate VMs and local disk, there are additional considerations for your DR strategy. Check the section below for more information, based on the operating system used for SAP central services.
+Configuring high availability for SAP Central Services protects resources and processes from local incidents. To achieve DR for SAP Central Services, you can use Azure Site Recovery. Azure Site Recovery replicates VMs and the attached managed disks, but there are additional considerations for the DR strategy. Check the section below for more information, based on the operating system used for SAP Central Services.
#### [Linux](#tab/linux)
-For SAP system, the redundancy of SPOF component in the primary region is achieved by configuring high availability. To achieve similar high availability setup in the disaster recovery region after failover, you need to consider additional points like cluster reconfiguration, SAP shared directories availability, alongside Azure Site Recovery to replicate VMs to DR site. On Linux, the high availability of SAP application can be achieved using pacemaker cluster solution. The diagram below shows the different components involved in configuring high availability for SAP central services with Pacemaker. Each component must be taken into consideration to have similar high availability set up on the DR site. If you have configured SAP Web Dispatcher using pacemaker cluster solution, similar consideration would apply as well.
+For an SAP system, the redundancy of SPOF components in the primary region is achieved by configuring high availability. To achieve a similar high availability setup in the disaster recovery region after failover, you need to consider additional points like cluster reconfiguration and the availability of SAP shared directories, alongside replicating VMs and attached managed disks to the DR site using Azure Site Recovery. On Linux, high availability of the SAP application can be achieved using the Pacemaker cluster solution. The diagram below shows the different components involved in configuring high availability for SAP Central Services with Pacemaker. Each component must be taken into consideration to have a similar high availability setup in the DR site. If you have configured SAP Web Dispatcher using the Pacemaker cluster solution, similar considerations apply as well.
![SAP system Linux architecture](media/disaster-recovery/disaster-recovery-sap-linux-architecture.png)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 11/18/2022 Last updated : 11/22/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- November 22, 2022: Release of Disaster Recovery guidelines for SAP workload on Azure - [Disaster Recovery overview and infrastructure guidelines for SAP workload](disaster-recovery-overview-guide.md) and [Disaster Recovery recommendation for SAP workload](disaster-recovery-sap-guide.md).
- November 22, 2022: Update of [SAP workloads on Azure: planning and deployment checklist](sap-deployment-checklist.md) to add latest recommendations - November 18, 2022: Add a recommendation to use Pacemaker simple mount configuration for new implementations on SLES 15 in [Azure VMs HA for SAP NW on SLES with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs HA for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs HA for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md) and [Azure VMs HA for SAP NW on SLES](high-availability-guide-suse.md) - November 15, 2022: Change in [HA for SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add recommendation to use mount option `nconnect` for workloads with higher throughput requirements
virtual-machines Sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-deployment-checklist.md
We recommend that you set up and validate a full HADR solution and security desi
- Use [Azure premium storage](/azure/virtual-machines/disks-types#premium-ssds), [premium storage v2](/azure/virtual-machines/disks-types#premium-ssd-v2) for all production grade SAP environments and when ensuring high SLA. For some DBMS, Azure NetApp Files can be used for [large parts of the overall storage requirements](./planning-guide-storage.md#azure-netapp-files-anf). - At a minimum, use [Azure standard SSD](/azure/virtual-machines/disks-types#standard-ssds) storage for VMs that represent SAP application layers and for deployment of DBMSs that aren't performance sensitive. Keep in mind different Azure storage types influence the [single VM availability SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines). - In general, we don't recommend the use of [Azure standard HDD](./planning-guide-storage.md#azure-standard-hdd-storage) disks for SAP.
- - For the different DBMS types, check the [generic SAP-related DBMS documentation](./dbms_guide_general.md) and DBMS-specific documentation that the first document points to. Use disk striping over multiple disks with premium storage (v1 or v2) for database data and log area.
+ - For the different DBMS types, check the [generic SAP-related DBMS documentation](./dbms_guide_general.md) and DBMS-specific documentation that the first document points to. Use disk striping over multiple disks with premium storage (v1 or v2) for the database data and log area. On Linux, verify that LVM disk striping is active and uses the correct stripe size by running `lvs -a -o+lv_layout,lv_role,stripes,stripe_size,devices`; on Windows, check the Storage Spaces properties. A sketch is shown after this list.
- For optimal storage configuration with SAP HANA, see [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md). - Use LVM for all disks on Linux VMs, as it allows easier management and online expansion. This includes volumes on single disks, for example /usr/sap.
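A sketch of the striping and verification mentioned in the list above; device names, volume names, stripe count, and stripe size are illustrative assumptions, so follow the DBMS-specific documentation for the actual values:

```console
# Sketch only: create a striped LVM volume across several premium data disks and verify it.
sudo pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo vgcreate datavg /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo lvcreate --name datalv --stripes 4 --stripesize 256 --extents 100%FREE datavg
# Confirm that striping is active and the stripe size matches what you intended.
sudo lvs -a -o+lv_layout,lv_role,stripes,stripe_size,devices
```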
We recommend that you set up and validate a full HADR solution and security desi
- **High availability and disaster recovery deployments** - Always use standard load balancer for clustered environments. Basic load balancer will be [retired](/azure/load-balancer/skus).
- - If you deploy the SAP application layer without defining a specific availability zone, make sure that all VMs that run SAP dialog instances or middleware instances of a single SAP system are deployed in an [availability set](/azure/virtual-machines/availability-set-overview).
- - If you don't need high availability for SAP Central Services and the DBMS, you can deploy these VMs into the same availability set as the SAP application layer.
- - When you protect SAP Central Services and the DBMS layer for high availability by using passive replication, place the two nodes for SAP Central Services in one separate availability set and the two DBMS nodes in another availability set.
+ - If you deploy the SAP application layer without defining a specific availability zone, make sure that all VMs that run SAP application instances of a single SAP system are deployed in an [availability set](/azure/virtual-machines/availability-set-overview).
+ - When you protect SAP Central Services and the DBMS layer for high availability by using passive replication, place the two nodes for SAP Central Services in one separate availability set and the two DBMS nodes in another availability set. Do not mix application VM roles inside an availability set.
- If you deploy into [availability zones](./sap-ha-availability-zones.md), you can't combine with availability sets. But you do need to make sure you deploy the active and passive central services nodes into two different availability zones. Use two availability zones that have the lowest latency between them. - If you're using Azure Load Balancer together with Linux guest operating systems, check that the Linux network parameter net.ipv4.tcp_timestamps is set to 0. This recommendation conflicts with recommendations in older versions of [SAP note 2382421](https://launchpad.support.sap.com/#/notes/2382421). The SAP note is now updated to state that this parameter needs to be set to 0 to work with Azure load balancers.
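For the `net.ipv4.tcp_timestamps` recommendation above, a minimal sketch of checking and persisting the value (the sysctl drop-in file name is an assumption):

```console
# Check the current value.
sysctl net.ipv4.tcp_timestamps
# Persist the recommended value; the file name is an example.
echo 'net.ipv4.tcp_timestamps = 0' | sudo tee /etc/sysctl.d/99-azure-lb.conf
sudo sysctl --system
```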
See these articles:
> [!div class="checklist"] > * [Azure planning and implementation for SAP NetWeaver](./planning-guide.md) > * [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md)
-> * [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
+> * [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
You can specify the following next hop types when creating a user-defined route:
* The [private IP address](./ip-services/private-ip-addresses.md) of a network interface attached to a virtual machine. Any network interface attached to a virtual machine that forwards network traffic to an address other than its own must have the Azure *Enable IP forwarding* option enabled for it. The setting disables Azure's check of the source and destination for a network interface. Learn more about how to [enable IP forwarding for a network interface](virtual-network-network-interface.md#enable-or-disable-ip-forwarding). Though *Enable IP forwarding* is an Azure setting, you may also need to enable IP forwarding within the virtual machine's operating system for the appliance to forward traffic between private IP addresses assigned to Azure network interfaces (a brief sketch follows the note below). If the appliance must route traffic to a public IP address, it must either proxy the traffic, or network address translate the source's private IP address to its own private IP address, which Azure then network address translates to a public IP address, before sending the traffic to the Internet. To determine required settings within the virtual machine, see the documentation for your operating system or network application. To understand outbound connections in Azure, see [Understanding outbound connections](../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json). > [!NOTE]
- > Deploy a virtual appliance into a different subnet then the resources that route through the virtual appliance are deployed in. Deploying the virtual appliance to the same subnet, then applying a route table to the subnet that routes traffic through the virtual appliance, can result in routing loops, where traffic never leaves the subnet.
+ > Deploy a virtual appliance into a different subnet than the resources that route through the virtual appliance. Deploying the virtual appliance to the same subnet and then applying a route table to the subnet that routes traffic through the virtual appliance can result in routing loops, where traffic never leaves the subnet.
> > A next hop private IP address must have direct connectivity without having to route through ExpressRoute Gateway or Virtual WAN. Setting the next hop to an IP address without direct connectivity results in an invalid user-defined routing configuration.
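A hedged sketch of the *Enable IP forwarding* setting described above, with placeholder resource names; the in-guest step is shown only as a comment because the exact procedure depends on the appliance's operating system:

```azurecli-interactive
# Placeholder names; enables Azure IP forwarding on the appliance's network interface.
az network nic update \
   --resource-group myResourceGroup \
   --name myNvaNic \
   --ip-forwarding true

# Inside a Linux guest, forwarding typically also needs to be enabled, for example:
#   echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
#   sudo sysctl --system
```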