Updates from: 02/02/2024 02:15:09
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c B2c Global Identity Funnel Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-funnel-based-design.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I need to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
+# Customer intent: I'm a developer, and I need to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
# Build a global identity solution with funnel-based approach
active-directory-b2c B2c Global Identity Proof Of Concept Funnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-funnel.md
Last updated 01/26/2024
-#customer intent: As a developer, I want to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
+# Customer intent: As a developer, I want to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
# Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration
active-directory-b2c B2c Global Identity Proof Of Concept Regional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-regional.md
Last updated 01/24/2024
-#customer intent: I'm a developer implementing Azure Active Directory B2C, and I want to configure region-based sign-up, sign-in, and password reset journeys. My goal is for users to be directed to the correct region and their data managed accordingly.
+# Customer intent: I'm a developer implementing Azure Active Directory B2C, and I want to configure region-based sign-up, sign-in, and password reset journeys. My goal is for users to be directed to the correct region and their data managed accordingly.
# Azure Active Directory B2C global identity framework proof of concept for region-based configuration
active-directory-b2c B2c Global Identity Region Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-region-based-design.md
Last updated 01/26/2024
-#customer intent: I'm a developer implementing a global identity solution. I need to understand the different scenarios and workflows for region-based design approach in Azure AD B2C. My goal is to design and implement the authentication and sign-up processes effectively for users from different regions.
+# Customer intent: I'm a developer implementing a global identity solution. I need to understand the different scenarios and workflows for region-based design approach in Azure AD B2C. My goal is to design and implement the authentication and sign-up processes effectively for users from different regions.
# Build a global identity solution with region-based approach
active-directory-b2c B2c Global Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-solutions.md
Last updated 01/26/2024
-#customer intent: I'm a developer building a customer-facing application. I need to understand the different approaches to implement an identity platform using Azure AD B2C tenants for a globally operating business model. I want to make an informed decision about the architecture that best suits my application's requirements.
+# Customer intent: I'm a developer building a customer-facing application. I need to understand the different approaches to implement an identity platform using Azure AD B2C tenants for a globally operating business model. I want to make an informed decision about the architecture that best suits my application's requirements.
# Azure Active Directory B2C global identity framework
active-directory-b2c External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/external-identities-videos.md
Last updated 01/26/2024
-#customer intent: I'm a developers working with Azure Active Directory B2C. I need videos that provide a deep-dive into the architecture and features of the service. My goal is to gain a better understanding of how to implement and utilize Azure AD B2C in my applications.
+# Customer intent: I'm a developer working with Azure Active Directory B2C. I need videos that provide a deep-dive into the architecture and features of the service. My goal is to gain a better understanding of how to implement and utilize Azure AD B2C in my applications.
# Microsoft Azure Active Directory B2C external identity video series
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure AD B2C, and I want to configure an identity verification and proofing provider. I need to combat identity fraud and create a trusted user experience for account registration.
+# Customer intent: I'm a developer integrating Azure AD B2C, and I want to configure an identity verification and proofing provider. I need to combat identity fraud and create a trusted user experience for account registration.
# Identity verification and proofing partners
active-directory-b2c Partner Akamai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai.md
Last updated 01/26/2024
-#customer intent: I'm an IT admin, and I want to configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access. I want to enable Azure AD B2C authentication for end users accessing private applications secured by Akamai Enterprise Application Access.
+# Customer intent: I'm an IT admin, and I want to configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access. I want to enable Azure AD B2C authentication for end users accessing private applications secured by Akamai Enterprise Application Access.
# Configure Azure Active Directory B2C with Akamai Web Application Protector
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with the Arkose Labs platform. I need to configure the integration, so I can protect against bot attacks, account takeover, and fraudulent account openings.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with the Arkose Labs platform. I need to configure the integration, so I can protect against bot attacks, account takeover, and fraudulent account openings.
active-directory-b2c Partner Asignio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm a developer integrating Asignio with Azure AD B2C for multifactor authentication. I want to configure an application with Asignio and set it up as an identity provider (IdP) in Azure AD B2C, so I can provide a passwordless, soft biometric, and multifactor authentication experience to customers.
+# Customer intent: I'm a developer integrating Asignio with Azure AD B2C for multifactor authentication. I want to configure an application with Asignio and set it up as an identity provider (IdP) in Azure AD B2C, so I can provide a passwordless, soft biometric, and multifactor authentication experience to customers.
# Configure Asignio with Azure Active Directory B2C for multifactor authentication
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm a developer integrating Azure Active Directory B2C with Transmit Security BindID. I need instructions to configure integration, so I can enable passwordless authentication using FIDO2 biometrics for my application.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with Transmit Security BindID. I need instructions to configure integration, so I can enable passwordless authentication using FIDO2 biometrics for my application.
# Configure Transmit Security with Azure Active Directory B2C for passwordless authentication
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure AD B2C authentication with BioCatch technology. I need to configure the custom UI, policies, and user journey. My goal is to enhance the security of my Customer Identity and Access Management (CIAM) system by analyzing user physical and cognitive behaviors.
+# Customer intent: I'm a developer integrating Azure AD B2C authentication with BioCatch technology. I need to configure the custom UI, policies, and user journey. My goal is to enhance the security of my Customer Identity and Access Management (CIAM) system by analyzing user physical and cognitive behaviors.
# Tutorial: Configure BioCatch with Azure Active Directory B2C
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bloksec.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm a developer integrating Azure Active Directory B2C with BlokSec for passwordless authentication. I need to configure integration, so I can simplify user sign-in and protect against identity-related attacks.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with BlokSec for passwordless authentication. I need to configure integration, so I can simplify user sign-in and protect against identity-related attacks.
# Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication
active-directory-b2c Partner Cloudflare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-cloudflare.md
Last updated 01/26/2024
-#customer intent: I'm a developer configuring Azure AD B2C with Cloudflare WAF. I need to enable and configure the Web Application Firewall, so I can protect my application from malicious attacks such as SQL Injection and cross-site scripting (XSS).
+# Customer intent: I'm a developer configuring Azure AD B2C with Cloudflare WAF. I need to enable and configure the Web Application Firewall, so I can protect my application from malicious attacks such as SQL Injection and cross-site scripting (XSS).
# Tutorial: Configure Cloudflare Web Application Firewall with Azure Active Directory B2C
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C with Datawiza Access Proxy (DAP). My goal is to enable single sign-on (SSO) and granular access control for on-premises legacy applications, without rewriting them.
+# Customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C with Datawiza Access Proxy (DAP). My goal is to enable single sign-on (SSO) and granular access control for on-premises legacy applications, without rewriting them.
# Tutorial: Configure Azure Active Directory B2C with Datawiza to provide secure hybrid access
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
-#customer intent: As an Azure AD B2C administrator, I want to integrate Deduce with Azure AD B2C authentication. I want to combat identity fraud and create a trusted user experience for my organization.
+# Customer intent: As an Azure AD B2C administrator, I want to integrate Deduce with Azure AD B2C authentication. I want to combat identity fraud and create a trusted user experience for my organization.
# Configure Azure Active Directory B2C with Deduce to combat identity fraud and create a trusted user experience
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I want to integrate Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C. I need to assess risk during attempts to create fraudulent accounts and sign-ins, and then block or challenge suspicious attempts.
+# Customer intent: I'm a developer, and I want to integrate Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C. I need to assess risk during attempts to create fraudulent accounts and sign-ins, and then block or challenge suspicious attempts.
# Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm an Azure AD B2C administrator, and I want to configure eID-Me as an identity provider (IdP). My goal is to enable users to verify their identity and sign in using eID-Me.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to configure eID-Me as an identity provider (IdP). My goal is to enable users to verify their identity and sign in using eID-Me.
# Configure Azure Active Directory B2C with Bluink eID-Me for identity verification
active-directory-b2c Partner Experian https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md
Last updated 01/26/2024
-#customer intent: I'm an Azure AD B2C administrator, and I want to integrate Experian CrossCore with Azure AD B2C. I need to verify user identification and perform risk analysis based on user attributes during sign-up.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to integrate Experian CrossCore with Azure AD B2C. I need to verify user identification and perform risk analysis based on user attributes during sign-up.
# Tutorial: Configure Experian with Azure Active Directory B2C
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md
Last updated 01/26/2024
-#customer intent: As an IT admin responsible for securing applications, I want to integrate Azure Active Directory B2C with F5 BIG-IP Access Policy Manager. I want to expose legacy applications securely to the internet with preauthentication, Conditional Access, and single sign-on (SSO) capabilities.
+# Customer intent: As an IT admin responsible for securing applications, I want to integrate Azure Active Directory B2C with F5 BIG-IP Access Policy Manager. I want to expose legacy applications securely to the internet with preauthentication, Conditional Access, and single sign-on (SSO) capabilities.
# Tutorial: Enable secure hybrid access for applications with Azure Active Directory B2C and F5 BIG-IP
active-directory-b2c Partner Grit App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-app-proxy.md
-#customer intent: I'm an application developer using header-based authentication, and I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I need to enable modern authentication experiences, enhance security, and save on licensing costs.
+# Customer intent: I'm an application developer using header-based authentication, and I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I need to enable modern authentication experiences, enhance security, and save on licensing costs.
# Migrate applications using header-based authentication to Azure Active Directory B2C with Grit's app proxy
active-directory-b2c Partner Grit Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-authentication.md
-#customer intent: As an application developer using header-based authentication, I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I want to enable modern authentication experiences, enhance security, and save on licensing costs.
+# Customer intent: As an application developer using header-based authentication, I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I want to enable modern authentication experiences, enhance security, and save on licensing costs.
# Configure Grit's biometric authentication with Azure Active Directory B2C
active-directory-b2c Partner Grit Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-editor.md
-#customer intent: I'm an Azure AD B2C administrator, and I want to use the Visual IEF Editor tool to create, modify, and deploy Azure AD B2C policies, without writing code.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to use the Visual IEF Editor tool to create, modify, and deploy Azure AD B2C policies, without writing code.
active-directory-b2c Partner Grit Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-iam.md
-#customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C authentication with the Grit IAM B2B2C solution. I need to provide secure and user-friendly identity and access management for my customers.
+# Customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C authentication with the Grit IAM B2B2C solution. I need to provide secure and user-friendly identity and access management for my customers.
# Tutorial: Configure the Grit IAM B2B2C solution with Azure Active Directory B2C
active-directory-b2c Partner Haventec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-haventec.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Haventec Authenticate with Azure AD B2C. I need instructions to configure integration, so I can enable single-step, multi-factor passwordless authentication for my web and mobile applications.
+# Customer intent: I'm a developer integrating Haventec Authenticate with Azure AD B2C. I need instructions to configure integration, so I can enable single-step, multi-factor passwordless authentication for my web and mobile applications.
# Tutorial: Configure Haventec Authenticate with Azure Active Directory B2C for single-step, multi-factor passwordless authentication
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating HYPR with Azure AD B2C. I want a tutorial to configure the Azure AD B2C policy to enable passwordless authentication using HYPR for my customer applications.
+# Customer intent: I'm a developer integrating HYPR with Azure AD B2C. I want a tutorial to configure the Azure AD B2C policy to enable passwordless authentication using HYPR for my customer applications.
# Tutorial for configuring HYPR with Azure Active Directory B2C
active-directory-b2c Partner Idemia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idemia.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm an Azure AD B2C administrator, and I want to configure IDEMIA Mobile ID integration with Azure AD B2C. I want users to authenticate using biometric authentication services and benefit from a trusted, government-issued digital ID.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to configure IDEMIA Mobile ID integration with Azure AD B2C. I want users to authenticate using biometric authentication services and benefit from a trusted, government-issued digital ID.
# Tutorial: Configure IDEMIA Mobile ID with Azure Active Directory B2C
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md
Last updated 01/26/2024
-#customer intent: I'm an Azure AD B2C administrator, and I want to integrate Jumio with Azure AD B2C. I need to enable real-time automated ID verification for user accounts and protect customer data.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to integrate Jumio with Azure AD B2C. I need to enable real-time automated ID verification for user accounts and protect customer data.
# Tutorial for configuring Jumio with Azure Active Directory B2C
active-directory-b2c Partner Keyless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure AD B2C with Keyless for passwordless authentication. I need to configure Keyless with Azure AD B2C, so I can provide a secure and convenient passwordless authentication experience for my customer applications.
+# Customer intent: I'm a developer integrating Azure AD B2C with Keyless for passwordless authentication. I need to configure Keyless with Azure AD B2C, so I can provide a secure and convenient passwordless authentication experience for my customer applications.
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with LexisNexis ThreatMetrix. I want to configure the API and UI components, so I can verify user identities and perform risk analysis based on user attributes and device profiling information.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with LexisNexis ThreatMetrix. I want to configure the API and UI components, so I can verify user identities and perform risk analysis based on user attributes and device profiling information.
# Tutorial for configuring LexisNexis with Azure Active Directory B2C
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
Last updated 01/26/2024
-#customer intent: As an administrator managing customer accounts in Azure AD B2C, I want to configure TheAccessHub Admin Tool with Azure AD B2C. My goal is to migrate customer accounts, administer CSR requests, synchronize data, and customize notifications.
+# Customer intent: As an administrator managing customer accounts in Azure AD B2C, I want to configure TheAccessHub Admin Tool with Azure AD B2C. My goal is to migrate customer accounts, administer CSR requests, synchronize data, and customize notifications.
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I want to configure Nevis with Azure Active Directory B2C for passwordless authentication. I need to enable customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
+# Customer intent: I'm a developer, and I want to configure Nevis with Azure Active Directory B2C for passwordless authentication. I need to enable customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
# Tutorial to configure Nevis with Azure Active Directory B2C for passwordless authentication
active-directory-b2c Partner Nok Nok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nok-nok.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party authentication provider. I want to learn how to configure Nok Nok Passport as an identity provider (IdP) in Azure AD B2C. My goal is to enable passwordless FIDO authentication for my users.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party authentication provider. I want to learn how to configure Nok Nok Passport as an identity provider (IdP) in Azure AD B2C. My goal is to enable passwordless FIDO authentication for my users.
# Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication
active-directory-b2c Partner Onfido https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with Onfido. I need to configure the Onfido service to verify identity in the sign-up or sign-in flow. My goal is to meet Know Your Customer and identity requirements and provide a reliable onboarding experience, while reducing fraud.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with Onfido. I need to configure the Onfido service to verify identity in the sign-up or sign-in flow. My goal is to meet Know Your Customer and identity requirements and provide a reliable onboarding experience, while reducing fraud.
# Tutorial for configuring Onfido with Azure Active Directory B2C
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I want to learn how to configure Ping Identity with Azure Active Directory B2C for secure hybrid access (SHA). I need to extend the capabilities of Azure AD B2C and enable secure hybrid access using PingAccess and PingFederate.
+# Customer intent: I'm a developer, and I want to learn how to configure Ping Identity with Azure Active Directory B2C for secure hybrid access (SHA). I need to extend the capabilities of Azure AD B2C and enable secure hybrid access using PingAccess and PingFederate.
# Tutorial: Configure Ping Identity with Azure Active Directory B2C for secure hybrid access
active-directory-b2c Partner Saviynt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md
Last updated 01/26/2024
-#customer intent: As a security manager, I want to integrate Azure Active Directory B2C with Saviynt. I need visibility, security, and governance over user life-cycle management and access control.
+# Customer intent: As a security manager, I want to integrate Azure Active Directory B2C with Saviynt. I need visibility, security, and governance over user life-cycle management and access control.
# Tutorial to configure Saviynt with Azure Active Directory B2C
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md
Last updated 01/26/2024
-#customer intent: As an IT admin, I want to integrate Azure Active Directory B2C with StrataMaverics Identity Orchestrator. I need to protect on-premises applications and enable customer single sign-on (SSO) to hybrid apps.
+# Customer intent: As an IT admin, I want to integrate Azure Active Directory B2C with Strata Maverics Identity Orchestrator. I need to protect on-premises applications and enable customer single sign-on (SSO) to hybrid apps.
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm a developer integrating Azure AD B2C authentication with Trusona Authentication Cloud. I want to configure Trusona Authentication Cloud as an identity provider (IdP) in Azure AD B2C, so I can enable passwordless authentication and provide a better user experience for my web application users.
+# Customer intent: I'm a developer integrating Azure AD B2C authentication with Trusona Authentication Cloud. I want to configure Trusona Authentication Cloud as an identity provider (IdP) in Azure AD B2C, so I can enable passwordless authentication and provide a better user experience for my web application users.
# Configure Trusona Authentication Cloud with Azure Active Directory B2C
active-directory-b2c Partner Typingdna https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-typingdna.md
Last updated 01/26/2024
-#customer intent: I'm an Azure AD B2C administrator, and I want to integrate TypingDNA with Azure AD B2C. I need to comply with Payment Services Directive 2 (PSD2) transaction requirements through keystroke dynamics and strong customer authentication.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to integrate TypingDNA with Azure AD B2C. I need to comply with Payment Services Directive 2 (PSD2) transaction requirements through keystroke dynamics and strong customer authentication.
# Tutorial for configuring TypingDNA with Azure Active Directory B2C
active-directory-b2c Partner Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-web-application-firewall.md
Last updated 01/26/2024
-#customer intent: I'm a developer configuring Azure Active Directory B2C with Azure Web Application Firewall. I want to enable the WAF service for my B2C tenant with a custom domain, so I can protect my web applications from common exploits and vulnerabilities.
+# Customer intent: I'm a developer configuring Azure Active Directory B2C with Azure Web Application Firewall. I want to enable the WAF service for my B2C tenant with a custom domain, so I can protect my web applications from common exploits and vulnerabilities.
active-directory-b2c Partner Whoiam Rampart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md
-#customer intent: I'm a developer integrating WhoIAM Rampart with Azure AD B2C. I need to configure and integrate Rampart with Azure AD B2C using custom policies. My goal is to enable an integrated helpdesk and invitation-gated user registration experience for my application.
+# Customer intent: I'm a developer integrating WhoIAM Rampart with Azure AD B2C. I need to configure and integrate Rampart with Azure AD B2C using custom policies. My goal is to enable an integrated helpdesk and invitation-gated user registration experience for my application.
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party identity management system. I need a tutorial to configure WhoIAM Branded Identity Management System (BRIMS) with Azure AD B2C. My goal is to enable user verification with voice, SMS, and email in my application.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party identity management system. I need a tutorial to configure WhoIAM Branded Identity Management System (BRIMS) with Azure AD B2C. My goal is to enable user verification with voice, SMS, and email in my application.
# Tutorial to configure Azure Active Directory B2C with WhoIAM
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
Last updated 01/26/2024
-#customer intent: As an Azure AD B2C administrator, I want to configure xID as an identity provider, so users can sign in using xID and authenticate with their digital identity on their device.
+# Customer intent: As an Azure AD B2C administrator, I want to configure xID as an identity provider, so users can sign in using xID and authenticate with their digital identity on their device.
# Configure xID with Azure Active Directory B2C for passwordless authentication
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
Last updated 01/26/2024
-#customer intent: As an IT admin, I want to integrate Azure Active Directory B2C authentication with Zscaler Private Access. I need to provide secure access to private applications and assets without the need for a virtual private network (VPN).
+# Customer intent: As an IT admin, I want to integrate Azure Active Directory B2C authentication with Zscaler Private Access. I need to provide secure access to private applications and assets without the need for a virtual private network (VPN).
# Tutorial: Configure Zscaler Private Access with Azure Active Directory B2C
ai-services Liveness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/liveness.md
The liveness detection solution successfully defends against a variety of spoof
- Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- You need the key and endpoint from the resource you create to connect your application to the Face service. You'll paste your key and endpoint into the code later in the quickstart.
- You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-- Access to the Azure AI Vision SDK for mobile (IOS and Android). To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page.
+- Access to the Azure AI Vision Face Client SDK for mobile (iOS and Android). To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page.
## Perform liveness detection
The high-level steps involved in liveness with verification orchestration are il
```json
Request:
- curl --location '<insert-api-endpoint>/face/v1.1-preview.1/detectlivenesswithverify/singlemodal' \
+ curl --location '<insert-api-endpoint>/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/3847ffd3-4657-4e6c-870c-8e20de52f567' \
--header 'Content-Type: multipart/form-data' \
--header 'apim-recognition-model-preview-1904: true' \
--header 'Authorization: Bearer.<session-authorization-token> \
ai-services Concept Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-image-retrieval.md
Multi-modal embedding has a variety of applications in different fields, includi
## What are vector embeddings?
-Vector embeddings are a way of representing content&mdash;text or images&mdash;as vectors of real numbers in a high-dimensional space. Vector embeddings are often learned from large amounts of textual and visual data using machine learning algorithms, such as neural networks. Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or context in which it commonly appears.
+Vector embeddings are a way of representing content&mdash;text or images&mdash;as vectors of real numbers in a high-dimensional space. Vector embeddings are often learned from large amounts of textual and visual data using machine learning algorithms, such as neural networks.
+
+Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or context in which it commonly appears. In Azure AI Vision, image and text vector embeddings have 1024 dimensions.
> [!NOTE]
> Vector embeddings can only be meaningfully compared if they are from the same model type.
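As a concrete illustration of how such vectors are typically compared, here is a minimal sketch that computes the cosine similarity of two 1024-dimensional embeddings. The vectors are random placeholders; in a real application they would be produced by the same Multi-modal embeddings model, since (per the note above) vectors from different model types can't be meaningfully compared.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors of the same length."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder 1024-dimensional vectors. In practice, these would come from the
# image and text vectorization operations of the same Multi-modal embeddings model.
image_vector = np.random.rand(1024)
text_vector = np.random.rand(1024)

print(f"Cosine similarity: {cosine_similarity(image_vector, text_vector):.4f}")
```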
The image and video retrieval services return a field called "relevance." The te
> [!IMPORTANT]
> The relevance score is a good measure to rank results such as images or video frames with respect to a single query. However, the relevance score cannot be accurately compared across queries. Therefore, it's not possible to easily map the relevance score to a confidence level. It's also not possible to trivially create a threshold algorithm to eliminate irrelevant results based solely on the relevance score.
+## Input requirements
+
+**Image input**
+- The file size of the image must be less than 20 megabytes (MB)
+- The dimensions of the image must be greater than 10 x 10 pixels and less than 16,000 x 16,000 pixels
+
+**Text input**
+- The text string must be between one and 70 words (inclusive).
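For readers who want to validate inputs on the client before calling the service, here is a rough sketch of these checks. It assumes Pillow is installed for reading image dimensions and uses a placeholder file name; the limits themselves are taken from the lists above.

```python
import os
from PIL import Image  # Pillow, assumed available for reading image dimensions

MAX_IMAGE_BYTES = 20 * 1024 * 1024   # file size must be less than 20 MB
MIN_DIM, MAX_DIM = 10, 16_000        # dimensions must be greater than 10 and less than 16,000 pixels
MAX_TEXT_WORDS = 70                  # text must be between 1 and 70 words, inclusive

def image_meets_requirements(path: str) -> bool:
    """Check the documented file-size and dimension limits for image input."""
    if os.path.getsize(path) >= MAX_IMAGE_BYTES:
        return False
    width, height = Image.open(path).size
    return MIN_DIM < width < MAX_DIM and MIN_DIM < height < MAX_DIM

def text_meets_requirements(text: str) -> bool:
    """Check the documented word-count limits for text input."""
    return 1 <= len(text.split()) <= MAX_TEXT_WORDS

print(image_meets_requirements("photo.jpg"))                       # assumes photo.jpg exists locally
print(text_meets_requirements("a red bicycle leaning on a wall"))
```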
+
## Next steps
Enable Multi-modal embeddings for your search service and follow the steps to generate vector embeddings for text and images.
ai-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-image-analysis.md
Image Analysis works on images that meet the following requirements:
- The file size of the image must be less than 20 megabytes (MB)
- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
+> [!TIP]
+> Input requirements for multi-modal embeddings are different and are listed in [Multi-modal embeddings](/azure/ai-services/computer-vision/concept-image-retrieval#input-requirements)
#### [Version 3.2](#tab/3-2)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/whats-new.md
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* [Native document support](native-document-support/use-native-documents.md) is now available in `2023-11-15-preview` public preview.
+## December 2023
+
+* [Text Analytics for health](./text-analytics-for-health/overview.md) new model 2023-12-01 is now available.
+* New Relation Type: `BodySiteOfExamination`
+ * Quality enhancements to support radiology documents
+ * Significant latency improvements
+ * Various bug fixes: Improvements across NER, Entity Linking, Relations and Assertion Detection
+
## November 2023
* [Named Entity Recognition Container](./named-entity-recognition/how-to/use-containers.md) is now Generally Available (GA).
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
The default content filtering configuration is set to filter at the medium sever
| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
|-|--|--|--|
| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
-| Medium, high | Yes | Yes | Default setting. Content detected at severity level low is not filtered, content at medium and high is filtered.|
+| Medium, high | Yes | Yes | Default setting. Content detected at severity level low isn't filtered, content at medium and high is filtered.|
| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.|
| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
-<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Content filtering control does not apply to content filters for DALL-E (preview) or GPT-4 Turbo with Vision (preview). Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu).
+<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Content filtering control doesn't apply to content filters for DALL-E (preview) or GPT-4 Turbo with Vision (preview). Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu).
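Read as data, the table maps each configuration to the set of severity levels it filters. The following sketch simply restates that mapping in code for clarity; the configuration labels are informal and are not part of any API or SDK.

```python
# Informal restatement of the severity table above; not an API schema or SDK call.
FILTERED_SEVERITIES = {
    "low, medium, high": {"low", "medium", "high"},  # strictest configuration
    "medium, high":      {"medium", "high"},         # default setting
    "high":              {"high"},
    "no filters":        set(),                      # requires approval for modified content filtering
}

def is_filtered(configuration: str, detected_severity: str) -> bool:
    """Return True if content detected at the given severity is filtered under the configuration."""
    return detected_severity in FILTERED_SEVERITIES[configuration]

print(is_filtered("medium, high", "low"))   # False: low severity passes under the default setting
print(is_filtered("medium, high", "high"))  # True
```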
Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
For details on the inference REST API endpoints for Azure OpenAI and how to crea
} ```
-## Streaming
+## Content streaming
-Azure OpenAI Service includes a content filtering system that works alongside core models. The following section describes the AOAI streaming experience and options in the context of content filters.
+This section describes the Azure OpenAI content streaming experience and options. With approval, you have the option to receive content from the API as it's generated, instead of waiting for chunks of content that have been verified to pass your content filters.
### Default
-The content filtering system is integrated and enabled by default for all customers. In the default streaming scenario, completion content is buffered, the content filtering system runs on the buffered content, and – depending on content filtering configuration – content is either returned to the user if it does not violate the content filtering policy (Microsoft default or custom user configuration), or it's immediately blocked which returns a content filtering error, without returning harmful completion content. This process is repeated until the end of the stream. Content was fully vetted according to the content filtering policy before returned to the user. Content is not returned token-by-token in this case, but in "content chunks" of the respective buffer size.
+The content filtering system is integrated and enabled by default for all customers. In the default streaming scenario, completion content is buffered, the content filtering system runs on the buffered content, and – depending on the content filtering configuration – content is either returned to the user if it doesn't violate the content filtering policy (Microsoft's default or a custom user configuration), or it's immediately blocked and returns a content filtering error, without returning the harmful completion content. This process is repeated until the end of the stream. Content is fully vetted according to the content filtering policy before it's returned to the user. Content isn't returned token-by-token in this case, but in "content chunks" of the respective buffer size.
### Asynchronous modified filter
-Customers who have been approved for modified content filters can choose Asynchronous Modified Filter as an additional option, providing a new streaming experience. In this case, content filters are run asynchronously, completion content is returned immediately with a smooth token-by-token streaming experience. No content is buffered, the content filters run asynchronously, which allows for zero latency in this context.
+Customers who have been approved for modified content filters can choose the asynchronous modified filter as an additional option, providing a new streaming experience. In this case, content filters are run asynchronously, and completion content is returned immediately with a smooth token-by-token streaming experience. No content is buffered, which allows for zero latency.
-> [!NOTE]
-> Customers must be aware that while the feature improves latency, it can bring a trade-off in terms of the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and the content filtering signal in case of a policy violation are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user.
+Customers must be aware that while the feature improves latency, it's a trade-off against the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and policy violation signals are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user.
-**Annotations**: Annotations and content moderation messages are continuously returned during the stream. We strongly recommend to consume annotations and implement additional AI content safety mechanisms, such as redacting content or returning additional safety information to the user.
+**Annotations**: Annotations and content moderation messages are continuously returned during the stream. We strongly recommend you consume annotations in your app and implement additional AI content safety mechanisms, such as redacting content or returning additional safety information to the user.
-**Content filtering signal**: The content filtering error signal is delayed; in case of a policy violation, it's returned as soon as it's available, and the stream is stopped. The content filtering signal is guaranteed within ~1,000-character windows in case of a policy violation.
+**Content filtering signal**: The content filtering error signal is delayed. In case of a policy violation, it's returned as soon as it's available, and the stream is stopped. The content filtering signal is guaranteed within a ~1,000-character window of the policy-violating content.
-Approval for Modified Content Filtering is required for access to Streaming – Asynchronous Modified Filter. The application can be found [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it via Azure OpenAI Studio please follow the instructions [here](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select "Asynchronous Modified Filter" in the Streaming section, as shown in the below screenshot.
+Approval for modified content filtering is required for access to the asynchronous modified filter. The application can be found [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it in Azure OpenAI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Modified Filter** in the Streaming section.
-### Overview
+### Comparison of content filtering modes
-| Category | Streaming - Default | Streaming - Asynchronous Modified Filter |
+| | Streaming - Default | Streaming - Asynchronous Modified Filter |
|---|---|---|
|Status |GA |Public Preview |
-| Access | Enabled by default, no action needed |Customers approved for Modified Content Filtering can configure directly via Azure OpenAI Studio (as part of a content filtering configuration; applied on deployment-level) |
-| Eligibility |All customers |Customers approved for Modified Content Filtering |
-|Modality and Availability |Text; all GPT-models |Text; all GPT-models except gpt-4-vision |
+| Eligibility |All customers |Customers approved for modified content filtering |
+| How to enable | Enabled by default, no action needed |Customers approved for modified content filtering can configure it directly in Azure OpenAI Studio (as part of a content filtering configuration, applied at the deployment level) |
+|Modality and availability |Text; all GPT models |Text; all GPT models except gpt-4-vision |
|Streaming experience |Content is buffered and returned in chunks |Zero latency (no buffering, filters run asynchronously) |
-|Content filtering signal |Immediate filtering signal |Delayed filtering signal (in up to ~1,000 char increments) |
-|Content filtering configurations |Supports default and any customer-defined filter setting (including optional models) |Supports default and any customer-defined filter setting (including optional models) |
+|Content filtering signal |Immediate filtering signal |Delayed filtering signal (in up to ~1,000-character increments) |
+|Content filtering configurations |Supports default and any customer-defined filter setting (including optional models) |Supports default and any customer-defined filter setting (including optional models) |
-### Annotations and sample response stream
+### Annotations and sample responses
#### Prompt annotation message
data: {
#### Annotation message
-The text field will always be an empty string, indicating no new tokens. Annotations will only be relevant to already-sent tokens. There may be multiple Annotation Messages referring to the same tokens.
+The text field will always be an empty string, indicating no new tokens. Annotations will only be relevant to already-sent tokens. There may be multiple annotation messages referring to the same tokens.
-"start_offset" and "end_offset" are low-granularity offsets in text (with 0 at beginning of prompt) which the annotation is relevant to.
+`"start_offset"` and `"end_offset"` are low-granularity offsets in text (with 0 at beginning of prompt) to mark which text the annotation is relevant to.
-"check_offset" represents how much text has been fully moderated. It is an exclusive lower bound on the end_offsets of future annotations. It is nondecreasing.
+`"check_offset"` represents how much text has been fully moderated. It's an exclusive lower bound on the `"end_offset"` values of future annotations. It's non-decreasing.
```json
data: {
data: {
```
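To make the offset bookkeeping concrete, here is a rough sketch of a client that reads the SSE stream and tracks how much of the completion has been fully moderated using `check_offset`. The endpoint, headers, and exact field names (for example, `content_filter_offsets` and `content_filter_results`) are assumptions based on the messages shown in this section and should be checked against a real response; this is not an official SDK sample.

```python
import json
import requests  # assumed available; any HTTP client with streaming support works

# Placeholders: replace with your Azure OpenAI endpoint, deployment, API version, and key.
URL = "https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<api-version>"
HEADERS = {"api-key": "<your-key>", "Content-Type": "application/json"}
PAYLOAD = {"messages": [{"role": "user", "content": "What is color?"}], "stream": True}

generated = ""        # completion text received so far (token-by-token with the asynchronous filter)
moderated_up_to = 0   # exclusive bound: text before this offset has been fully moderated

with requests.post(URL, headers=HEADERS, json=PAYLOAD, stream=True) as resp:
    for raw in resp.iter_lines():
        if not raw or not raw.startswith(b"data: "):
            continue
        data = raw[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        for choice in chunk.get("choices", []):
            # Token messages carry text in the delta; annotation messages carry an empty text field.
            generated += (choice.get("delta", {}).get("content") or "")
            # Assumed shape of annotation messages: character offsets plus per-category filter results.
            offsets = choice.get("content_filter_offsets") or {}
            if "check_offset" in offsets:
                moderated_up_to = max(moderated_up_to, offsets["check_offset"])
            results = choice.get("content_filter_results") or {}
            if any(isinstance(v, dict) and v.get("filtered") for v in results.values()):
                print("Content filtering signal received; stop displaying further content.")

print(f"Received {len(generated)} characters; fully moderated up to offset {moderated_up_to}.")
```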
-### Sample response stream
+#### Sample response stream (passes filters)
-Below is a real chat completion response using Asynchronous Modified Filter. Note how prompt annotations are not changed; completion tokens are sent without annotations; and new annotation messages are sent without tokens, instead associated with certain content filter offsets.
+Below is a real chat completion response using the asynchronous modified filter. Note how the prompt annotations aren't changed, completion tokens are sent without annotations, and new annotation messages are sent without tokens&mdash;they are instead associated with certain content filter offsets.
`{"temperature": 0, "frequency_penalty": 0, "presence_penalty": 1.0, "top_p": 1.0, "max_tokens": 800, "messages": [{"role": "user", "content": "What is color?"}], "stream": true}`
data: {"id":"","object":"","created":0,"model":"","choices":[{"index":0,"finish_
data: [DONE] ```
-### Sample response stream (blocking)
+#### Sample response stream (blocked by filters)
`{"temperature": 0, "frequency_penalty": 0, "presence_penalty": 1.0, "top_p": 1.0, "max_tokens": 800, "messages": [{"role": "user", "content": "Tell me the lyrics to \"Hey Jude\"."}], "stream": true}`
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Last updated 1/26/2024 zone_pivot_groups: speech-cli-rest
-#customer intent: As a user who implements audio transcription, I want create transcriptions in bulk so that I don't have to submit audio content repeatedly.
+# Customer intent: As a user who implements audio transcription, I want to create transcriptions in bulk so that I don't have to submit audio content repeatedly.
# Create a batch transcription
aks Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/artifact-streaming.md
Now that you enabled Artifact Streaming on a premium ACR and connected that to a
* Check if your node pool has Artifact Streaming enabled using the [`az aks nodepool show`][az-aks-nodepool-show] command.

```azurecli-interactive
- az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool grep ArtifactStreamingConfig
+ az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool --query artifactStreamingProfile
```

In the output, check that the `Enabled` field is set to `true`.
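If you prefer to verify this from a script, here is a rough sketch that shells out to the same command and inspects the returned profile. It assumes the Azure CLI is installed and signed in, that the resource group, cluster, and node pool names are placeholders to replace, and that the returned JSON exposes an enabled flag (checked case-insensitively).

```python
import json
import subprocess

# Placeholders: replace with your resource group, cluster, and node pool names.
cmd = [
    "az", "aks", "nodepool", "show",
    "--resource-group", "myResourceGroup",
    "--cluster-name", "myAKSCluster",
    "--name", "myNodePool",
    "--query", "artifactStreamingProfile",
    "--output", "json",
]

result = subprocess.run(cmd, check=True, capture_output=True, text=True)
profile = json.loads(result.stdout or "null")

# The profile is assumed to expose an enabled flag; key casing may vary, so check case-insensitively.
enabled = bool(profile) and any(str(k).lower() == "enabled" and v for k, v in profile.items())
print(f"Artifact Streaming enabled: {enabled}")
```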
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Service (AKS)
-description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks.
+ Title: Use a customer-managed key to encrypt Azure managed disks in Azure Kubernetes Service (AKS)
+description: Bring your own keys (BYOK) to encrypt managed OS and data disks in AKS.
Previously updated : 11/24/2023 Last updated : 02/01/2024
-# Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service (AKS)
+# Bring your own keys (BYOK) with Azure managed disks in Azure Kubernetes Service (AKS)
-Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For more control over encryption keys, you can supply customer-managed keys to use for encryption at rest for both the OS and data disks for your AKS clusters.
+Azure encrypts all data in a managed disk at rest. By default, data is encrypted with Microsoft-managed keys. For more control over encryption keys, you can supply customer-managed keys to use for encryption at rest for both the OS and data disks for your AKS clusters.
Learn more about customer-managed keys on [Linux][customer-managed-keys-linux] and [Windows][customer-managed-keys-windows].
Learn more about customer-managed keys on [Linux][customer-managed-keys-linux] a
## Limitations
-* Encryption of OS disk with customer-managed keys can only be enabled when creating an AKS cluster.
+* Encryption of an OS disk with customer-managed keys can only be enabled when creating an AKS cluster.
* Virtual nodes are not supported.
-* When encrypting ephemeral OS disk-enabled node pool with customer-managed keys, if you want to rotate the key in Azure Key Vault, you need to:
+* When encrypting an ephemeral OS disk-enabled node pool with customer-managed keys, if you want to rotate the key in Azure Key Vault, you need to:
  * Scale down the node pool count to 0
  * Rotate the key
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Last updated 01/16/2024
Application development continues to move toward a container-based approach, increasing our need to orchestrate and manage resources. As the leading platform, Kubernetes provides reliable scheduling of fault-tolerant application workloads. Azure Kubernetes Service (AKS), a managed Kubernetes offering, further simplifies container-based application deployment and management. This article introduces core concepts:
+
* Kubernetes infrastructure components:
- * *control plane*
- * *nodes*
- * *node pools*
-* Workload resources:
- * *pods*
- * *deployments*
- * *sets*
+
+ * *control plane*
+ * *nodes*
+ * *node pools*
+
+* Workload resources:
+
+ * *pods*
+ * *deployments*
+ * *sets*
+
* Group resources using *namespaces*.

## What is Kubernetes?
When you create an AKS cluster, the following namespaces are available:
| *kube-system* | Where core resources exist, such as network features like DNS and proxy, or the Kubernetes dashboard. You typically don't deploy your own applications into this namespace. |
| *kube-public* | Typically not used, but can be used for resources to be visible across the whole cluster, and can be viewed by any user. |
-
For more information, see [Kubernetes namespaces][kubernetes-namespaces].

## Next steps

This article covers some of the core Kubernetes components and how they apply to AKS clusters. For more information on core Kubernetes and AKS concepts, see the following articles:

-- [Kubernetes / AKS access and identity][aks-concepts-identity]
-- [Kubernetes / AKS security][aks-concepts-security]
-- [Kubernetes / AKS virtual networks][aks-concepts-network]
-- [Kubernetes / AKS storage][aks-concepts-storage]
-- [Kubernetes / AKS scale][aks-concepts-scale]
+- [AKS access and identity][aks-concepts-identity]
+- [AKS security][aks-concepts-security]
+- [AKS virtual networks][aks-concepts-network]
+- [AKS storage][aks-concepts-storage]
+- [AKS scale][aks-concepts-scale]
<!-- EXTERNAL LINKS --> [cluster-api-provider-azure]: https://github.com/kubernetes-sigs/cluster-api-provider-azure
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
This article is intended to help you quickly get to deployment. Before going to
* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed. > [!NOTE]
-> This guidance can also be executed from a local developer command line with Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+> You can also execute this guidance from the [Azure Cloud Shell](/azure/cloud-shell/quickstart). This approach has all the prerequisite tools pre-installed, with the exception of Docker.
+>
+> [![Image of button to launch Cloud Shell in a new window.](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
* If running the commands in this guide locally (instead of Azure Cloud Shell): * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, Windows Subsystem for Linux).
The following steps guide you to create a Liberty runtime on AKS. After completi
1. Create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. 1. Select *East US* as **Region**.
+
+ Create environment variables in your shell for the resource group names for the cluster and the database.
-1. Select **Next**, enter the **AKS** pane. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults.
+ ### [Bash](#tab/in-bash)
+
+ ```bash
+ export RESOURCE_GROUP_NAME=<your-resource-group-name>
+ export DB_RESOURCE_GROUP_NAME=<your-resource-group-name>
+ ```
+
+ ### [PowerShell](#tab/in-powershell)
+
+ ```powershell
+ $Env:RESOURCE_GROUP_NAME="<your-resource-group-name>"
+ $Env:DB_RESOURCE_GROUP_NAME="<your-resource-group-name>"
+ ```
+
+
+
+1. Select **Next**, enter the **AKS** pane. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. The remaining values do not need to be changed from their default values.
1. Select **Next**, enter the **Load Balancing** pane. Next to **Connect to Azure Application Gateway?** select **Yes**. This section lets you customize the following deployment options.
- 1. You can customize the **virtual network** and **subnet** into which the deployment will place the resources. Leave these values at their defaults.
+ 1. You can customize the **virtual network** and **subnet** into which the deployment will place the resources. The remaining values do not need to be changed from their default values.
1. You can provide the **TLS/SSL certificate** presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Don't go to production using a self-signed certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](../active-directory/develop/howto-create-self-signed-certificate.md). 1. You can select **Enable cookie based affinity**, also known as sticky sessions. We want sticky sessions enabled for this article, so ensure this option is selected.
To avoid Azure charges, you should clean up unnecessary resources. When the clus
```bash az group delete --name $RESOURCE_GROUP_NAME --yes --no-wait
-az group delete --name <db-resource-group> --yes --no-wait
+az group delete --name $DB_RESOURCE_GROUP_NAME --yes --no-wait
``` ### [PowerShell](#tab/in-powershell) ```powershell az group delete --name $Env:RESOURCE_GROUP_NAME --yes --no-wait
-az group delete --name <db-resource-group> --yes --no-wait
+az group delete --name $Env:DB_RESOURCE_GROUP_NAME --yes --no-wait
```
aks Long Term Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/long-term-support.md
Title: Long term support for Azure Kubernetes Service (AKS)
-description: Learn about Azure Kubernetes Service (AKS) Long term support for Kubernetes
+ Title: Long-term support for Azure Kubernetes Service (AKS)
+description: Learn about Azure Kubernetes Service (AKS) long-term support for Kubernetes
Previously updated : 08/16/2023 Last updated : 01/24/2024
-#Customer intent: As a cluster operator or developer, I want to understand how Long Term Support for Kubernetes on AKS works.
+#Customer intent: As a cluster operator or developer, I want to understand how long-term support for Kubernetes on AKS works.
-# Long term support
-The Kubernetes community releases a new minor version approximately every four months, with a support window for each version for one year. This support in terms of Azure Kubernetes Service (AKS) is called "Community Support."
+# Long-term support
-AKS supports versions of Kubernetes that are within this Community Support window, to push bug fixes and security updates from community releases.
+The Kubernetes community releases a new minor version approximately every four months, with a support window for each version for one year. In Azure Kubernetes Service (AKS), this support window is called "Community support."
-While innovation delivered with this release cadence provides huge benefits to you, it challenges you to keep up to date with Kubernetes releases, which can be made more difficult based on the number of AKS clusters you have to maintain.
+AKS supports versions of Kubernetes that are within this Community support window, to push bug fixes and security updates from community releases.
+While innovation delivered with this release cadence provides huge benefits to you, it challenges you to keep up to date with Kubernetes releases, which can be made more difficult based on the number of AKS clusters you have to maintain.
## AKS support types
-After approximately one year, the Kubernetes version exits Community Support and your AKS clusters are now at-risk as bug fixes and security updates become unavailable.
-AKS provides one year Community Support and one year of Long Term Support (LTS) to back port security fixes from the community upstream in our public repository. Our upstream LTS working group contributes efforts back to the community to provide our customers with a longer support window.
+After approximately one year, the Kubernetes version exits Community support and your AKS clusters are now at risk as bug fixes and security updates become unavailable.
+
+AKS provides one year of Community support and one year of long-term support (LTS) to backport security fixes from the community upstream in our public repository. Our upstream LTS working group contributes efforts back to the community to provide our customers with a longer support window.
LTS intends to give you an extended period of time to plan and test for upgrades over a two-year period from the General Availability of the designated Kubernetes version.
-| | Community Support |Long Term Support |
+| | Community support |Long-term support |
|||| | **When to use** | When you can keep up with upstream Kubernetes releases | When you need control over when to migrate from one version to another | | **Support versions** | Three GA minor versions | One Kubernetes version (currently *1.27*) for two years |
+## Enable long-term support
-## Enable Long Term Support
-
-Enabling and disabling Long Term Support is a combination of moving your cluster to the Premium tier and explicitly selecting the LTS support plan.
+Enabling and disabling long-term support is a combination of moving your cluster to the Premium tier and explicitly selecting the LTS support plan.
> [!NOTE]
-> While it's possible to enable LTS when the cluster is in Community Support, you'll be charged once you enable the Premium tier.
+> While it's possible to enable LTS when the cluster is in Community support, you'll be charged once you enable the Premium tier.
### Create a cluster with LTS enabled
-```
+
+```azurecli
az aks create --resource-group myResourceGroup --name myAKSCluster --tier premium --k8s-support-plan AKSLongTermSupport --kubernetes-version 1.27 ``` > [!NOTE]
-> Enabling and disabling LTS is a combination of moving your cluster to the Premium tier, as well as enabling Long Term Support. Both must either be turned on or off.
+> Enabling and disabling LTS is a combination of moving your cluster to the Premium tier, as well as enabling long-term support. Both must be turned on or off together.
### Enable LTS on an existing cluster
-```
+
+```azurecli
az aks update --resource-group myResourceGroup --name myAKSCluster --tier premium --k8s-support-plan AKSLongTermSupport ``` ### Disable LTS on an existing cluster
-```
+
+```azurecli
az aks update --resource-group myResourceGroup --name myAKSCluster --tier [free|standard] --k8s-support-plan KubernetesOfficial ``` ## Long term support, add-ons and features
-The AKS team currently tracks add-on versions where Kubernetes community support exists. Once a version leaves Community Support, we rely on Open Source projects for managed add-ons to continue that support. Due to various external factors, some add-ons and features may not support Kubernetes versions outside these upstream Community Support windows.
+
+The AKS team currently tracks add-on versions where Kubernetes Community support exists. Once a version leaves Community support, we rely on open source projects for managed add-ons to continue that support. Due to various external factors, some add-ons and features may not support Kubernetes versions outside these upstream Community support windows.
See the following table for a list of add-ons and features that aren't supported and the reason why.
See the following table for a list of add-ons and features that aren't supported
|| | Istio | The Istio support cycle is short (six months), and there will not be maintenance releases for Kubernetes 1.27 | | Keda | Unable to guarantee future version compatibility with Kubernetes 1.27 |
-| Calico | Requires Calico Enterprise agreement past Community Support |
-| Cillium | Requires Cillium Enterprise agreement past Community Support |
+| Calico | Requires Calico Enterprise agreement past Community support |
+| Cilium | Requires Cilium Enterprise agreement past Community support |
| Azure Linux | Support timeframe for Azure Linux 2 ends during this LTS cycle | | Key Management Service (KMS) | KMSv2 replaces KMS during this LTS cycle | | Dapr | AKS extensions are not supported |
See the following table for a list of add-ons and features that aren't supported
| Open Service Mesh | OSM will be deprecated| | AAD Pod Identity | Deprecated in place of Workload Identity | - > [!NOTE]
->You can't move your cluster to Long Term support if any of these add-ons or features are enabled.
->Whilst these AKS managed add-ons aren't supported by Microsoft, you're able to install the Open Source versions of these on your cluster if you wish to use it past Community Support.
+>You can't move your cluster to long-term support if any of these add-ons or features are enabled.
+>Whilst these AKS managed add-ons aren't supported by Microsoft, you can install the open-source versions of these add-ons on your cluster if you wish to use them past Community support.
## How we decide the next LTS version+ Versions of Kubernetes LTS are available for two years from General Availability. We mark a later version of Kubernetes as LTS based on the following criteria:+ * Sufficient time for customers to migrate from the prior LTS version to the current one has passed * The previous version has had a two-year support window Read the AKS release notes to stay informed of when you're able to plan your migration. ### Migrate from LTS to Community support+ Using LTS is a way to extend your window to plan a Kubernetes version upgrade. You may want to migrate to a version of Kubernetes that is within the [standard support window](supported-kubernetes-versions.md#kubernetes-version-support-policy). To move from an LTS-enabled cluster to a version of Kubernetes that is within the standard support window, you need to disable LTS on the cluster:
-```
+```azurecli
az aks update --resource-group myResourceGroup --name myAKSCluster --tier [free|standard] --k8s-support-plan KubernetesOfficial ``` And then upgrade the cluster to a later supported version:
-```
+```azurecli
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.28.3 ```+ > [!NOTE] > Kubernetes 1.28.3 is used as an example here, please check the [AKS release tracker](release-tracker.md) for available Kubernetes releases. There are approximately two years between one LTS version and the next. In lieu of upstream support for migrating more than two minor versions, there's a high likelihood your application depends on Kubernetes APIs that have been deprecated. We recommend you thoroughly test your application on the target LTS Kubernetes version and carry out a blue/green deployment from one version to another. ### Migrate from LTS to the next LTS release
-The upstream Kubernetes community supports a two minor version upgrade path. The process migrates the objects in your Kubernetes cluster as part of the upgrade process, and provides a tested, and accredited migration path.
+
+The upstream Kubernetes community supports a two-minor-version upgrade path. The upgrade migrates the objects in your Kubernetes cluster and provides a tested and accredited migration path.
For customers that wish to carry out an in-place migration, the AKS service will migrate your control plane from the previous LTS version to the latest, and then migrate your data plane. To carry out an in-place upgrade to the latest LTS version, you need to specify an LTS enabled Kubernetes version as the upgrade target.
-```
+```azurecli
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.30.2 ``` > [!NOTE]
-> Kubernetes 1.30.2 is used as an example here, please check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.
+> Kubernetes 1.30.2 is used as an example version in this article. Check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.
aks Monitor Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md
This article helps you understand this new feature, how to implement it, and how
- [Private link](../azure-monitor/logs/private-link-security.md) isn't supported. - Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported. - The cluster must use [managed identity authentication](use-managed-identity.md).-- This feature is currently available in the following regions: West US 2, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, and Germany West Central.
+- This feature is currently available in the following regions: West US 2, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, Germany West Central, Israel Central, Italy North, Japan East, JioIndia West, Korea Central, Malaysia South, Mexico Central, and North Central.
### Install or update the `aks-preview` Azure CLI extension
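As a quick reference (a hedged sketch, not quoted from the article), installing or updating that extension typically looks like this:

```azurecli
# Install the aks-preview Azure CLI extension, or update it if it's already installed.
az extension add --name aks-preview
az extension update --name aks-preview
```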
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
spec:
paths: - path: /blog backend:
- service
+ service:
name: blogservice port: 80 - path: /store backend:
- service
+ service:
name: storeservice port: 80 ```
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
Title: Develop on Azure Kubernetes Service (AKS) with Helm
description: Use Helm with AKS and Azure Container Registry to package and run application containers in a cluster. Previously updated : 01/18/2024 Last updated : 01/25/2024 # Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm
You need to store your container images in an Azure Container Registry (ACR) to
az group create --name myResourceGroup --location eastus ```
-2. Create an Azure Container Registry using the [az acr create][az-acr-create] command. The following example creates an ACR named *myhelmacr* with the *Basic* SKU.
+2. Create an Azure Container Registry with a unique name by calling the [az acr create][az-acr-create] command. The following example creates an ACR named *myhelmacr* with the *Basic* SKU.
```azurecli-interactive az acr create --resource-group myResourceGroup --name myhelmacr --sku Basic
You need to store your container images in an Azure Container Registry (ACR) to
New-AzResourceGroup -Name myResourceGroup -Location eastus ```
-2. Create an Azure Container Registry using the [New-AzContainerRegistry][new-azcontainerregistry] cmdlet. The following example creates an ACR named *myhelmacr* with the *Basic* SKU.
+2. Create an Azure Container Registry with a unique name by calling the [New-AzContainerRegistry][new-azcontainerregistry] cmdlet. The following example creates an ACR named *myhelmacr* with the *Basic* SKU.
```azurepowershell-interactive
- New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myhelmacr -Sku Basic
+ New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myhelmacr -Sku Basic -Location eastus
``` Your output should look similar to the following condensed example output. Take note of your *loginServer* value for your ACR to use in a later step.
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
Previously updated : 11/30/2023 Last updated : 01/31/2023
Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $api
-TargetBlobName $blobName -AccessType "UserAssignedManagedIdentity" ` -identityClientId $identityid ```
-Backup is a long-running operation that may take several minutes to complete.
+Backup is a long-running operation that may take several minutes to complete. During this time, the API gateway continues to handle requests, but the state of the service is Updating.
### [REST](#tab/rest)
In the body of the request, specify the target storage account name, blob contai
Set the value of the `Content-Type` request header to `application/json`.
-Backup is a long-running operation that may take several minutes to complete. If the request succeeded and the backup process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the backup is in progress, you continue to receive a `202 Accepted` status code. A Response code of `200 OK` indicates successful completion of the backup operation.
+Backup is a long-running operation that may take several minutes to complete. If the request succeeded and the backup process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the backup is in progress, you continue to receive a `202 Accepted` status code. During this time, the API gateway continues to handle requests, but the state of the service is Updating. A response code of `200 OK` indicates successful completion of the backup operation.
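As a hedged illustration of that polling pattern (not part of the referenced article), the sketch below assumes `LOCATION_URL` holds the value of the `Location` header from the initial `202 Accepted` response and `TOKEN` holds a valid Azure AD access token:

```bash
# Poll a long-running API Management backup operation until it completes.
# LOCATION_URL and TOKEN are assumed to come from the initial 202 response and from your
# own authentication flow (for example, `az account get-access-token --query accessToken -o tsv`).
while true; do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer $TOKEN" "$LOCATION_URL")
  if [ "$STATUS" -eq 200 ]; then
    echo "Backup completed successfully."
    break
  fi
  echo "Backup still in progress (HTTP $STATUS). Checking again in 60 seconds..."
  sleep 60
done
```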
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
Previously updated : 12/21/2021 Last updated : 01/29/2024
API Management uses a public IP address for a connection outside the VNet or a p
* When a request is sent from API Management to a public (internet-facing) backend, a public IP address will always be visible as the origin of the request.
-## IP addresses of Consumption tier API Management service
+## IP addresses of Consumption, Basic v2, and Standard v2 tier API Management service
-If your API Management service is a Consumption tier service, it doesn't have a dedicated IP address. Consumption tier service runs on a shared infrastructure and without a deterministic IP address.
+If your API Management instance is created in a service tier that runs on shared infrastructure, it doesn't have a dedicated IP address. Currently, instances in the following service tiers run on shared infrastructure without a deterministic IP address: Consumption, Basic v2 (preview), Standard v2 (preview).
-If you need to add the outbound IP addresses used by your Consumption tier instance to an allowlist, you can add the instance's data center (Azure region) to an allowlist. You can [download a JSON file that lists IP addresses for all Azure data centers](https://www.microsoft.com/download/details.aspx?id=56519). Then find the JSON fragment that applies to the region that your instance runs in.
+If you need to add the outbound IP addresses used by your Consumption, Basic v2, or Standard v2 tier instance to an allowlist, you can add the instance's data center (Azure region) to an allowlist. You can [download a JSON file that lists IP addresses for all Azure data centers](https://www.microsoft.com/download/details.aspx?id=56519). Then find the JSON fragment that applies to the region that your instance runs in.
For example, the following JSON fragment is what the allowlist for Western Europe might look like:
api-management Api Management Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-migrate.md
Last updated 08/20/2021
-#customerintent: As an Azure service administrator, I want to move my service resources to another Azure region.
+# Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
# How to move Azure API Management across regions
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
A subscriber can use an API Management subscription key in one of two ways:
> **Ocp-Apim-Subscription-Key** is the default name of the subscription key header, and **subscription-key** is the default name of the query parameter. If desired, you may modify these names in the settings for each API. For example, in the portal, update these names on the **Settings** tab of an API. > [!NOTE]
-> When included in a request header or query parameter, the subscription key by default is passed to the backend and may be exposed in backend monitoring logs or other systems. If this is considered sensitive data, you can configure a policy in the `outbound` section to remove the subscription key header ([`set-header`](set-header-policy.md)) or query parameter ([`set-query-parameter`](set-query-parameter-policy.md)).
+> When included in a request header or query parameter, the subscription key by default is passed to the backend and may be exposed in backend monitoring logs or other systems. If this is considered sensitive data, you can configure a policy at the end of the `inbound` section to remove the subscription key header ([`set-header`](set-header-policy.md)) or query parameter ([`set-query-parameter`](set-query-parameter-policy.md)).
## Enable or disable subscription requirement for API or product access
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
Most new instances created in service tiers other than the Consumption tier are
## What are the compute platforms for API Management?
-The following table summarizes the compute platforms currently used in the **Consumption**, **Developer**, **Basic**, **Standard**, and **Premium** tiers of API Management.
+The following table summarizes the compute platforms currently used in the **Consumption**, **Developer**, **Basic**, **Standard**, and **Premium** tiers of API Management. This table doesn't apply to the [v2 pricing tiers (preview)](#what-about-the-v2-pricing-tiers).
| Version | Description | Architecture | Tiers | | -| -| -- | - |
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
--+ Last updated 02/18/2021
For a conceptual overview of API authorization, see [Authentication and authoriz
## Aims
-We're going to see how API Management can be used in a simplified scenario with Azure Functions and Azure AD B2C. You'll create a JavaScript (JS) app calling an API, that signs in users with Azure AD B2C. Then you'll use API Management's validate-jwt, CORS, and Rate Limit By Key policy features to protect the Backend API.
+We're going to see how API Management can be used in a simplified scenario with Azure Functions and Azure AD B2C. You'll create a JavaScript (JS) app calling an API that signs in users with Azure AD B2C. Then you'll use API Management's validate-jwt, CORS, and Rate Limit By Key policy features to protect the Backend API.
For defense in depth, we then use EasyAuth to validate the token again inside the back-end API and ensure that API management is the only service that can call the Azure Functions backend.
Here's a quick overview of the steps:
1. Create the sign-up and sign-in policies to allow users to sign in with Azure AD B2C 1. Configure API Management with the new Azure AD B2C Client IDs and keys to Enable OAuth2 user authorization in the Developer Console 1. Build the Function API
-1. Configure the Function API to enable EasyAuth with the new Azure AD B2C Client IDΓÇÖs and Keys and lock down to APIM VIP
+1. Configure the Function API to enable EasyAuth with the new Azure AD B2C Client IDs and Keys and lock down to APIM VIP
1. Build the API Definition in API Management 1. Set up Oauth2 for the API Management API configuration 1. Set up the **CORS** policy and add the **validate-jwt** policy to validate the OAuth token for every incoming request 1. Build the calling application to consume the API 1. Upload the JS SPA Sample
-1. Configure the Sample JS Client App with the new Azure AD B2C Client IDΓÇÖs and keys
+1. Configure the Sample JS Client App with the new Azure AD B2C Client IDs and keys
1. Test the Client Application > [!TIP]
Open the Azure AD B2C blade in the portal and do the following steps.
> > We still have no IP security applied, if you have a valid key and OAuth2 token, anyone can call this from anywhere - ideally we want to force all requests to come via API Management. >
- > If you're using APIM Consumption tier then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management Standard SKU and above [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the Azure API Management Consumption tier, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for the Consumption tier - steps 12-17 below do not apply.
+ > If you're using the API Management Consumption, Basic v2, or Standard v2 tier, then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-basic-v2-and-standard-v2-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management dedicated tiers, [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the tiers that run on shared infrastructure, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for these tiers, steps 12-17 below do not apply.
1. Close the 'Authentication' blade from the App Service / Functions portal. 1. Open the *API Management blade of the portal*, then open *your instance*.
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
In this article, you learn how to:
With this selection, the API is exposed as SOAP, and API consumers have to use SOAP rules. If you want to "restify" the API, follow the steps in [Import a SOAP API and convert it to REST](restify-soap-api.md). ![Create SOAP API from WSDL specification](./media/import-soap-api/pass-through.png)
-1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**.
+1. The following API settings are filled automatically based on information from the SOAP API: **Display name**, **Name**, **Description**. Operations are filled automatically with **Display name**, **URL**, and **Description**, and receive a system-generated **Name**.
1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. For more information about API settings, see [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
api-management Restify Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/restify-soap-api.md
Complete the following quickstart: [Create an Azure API Management instance](get
![SOAP to REST](./media/restify-soap-api/soap-to-rest.png)
-1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**.
+1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**. Operations are filled automatically with **Display name**, **URL**, and **Description**, and receive a system-generated **Name**.
1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. For more information about API settings, see [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
Operations can be called directly from the Azure portal, which provides a conven
## Next steps > [!div class="nextstepaction"]
-> [Transform and protect a published API](transform-api.md)
+> [Transform and protect a published API](transform-api.md)
app-service Manage Create Arc Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md
The [custom location](../azure-arc/kubernetes/custom-locations.md) in Azure is u
```bash CUSTOM_LOCATION_NAME="my-custom-location" # Name of the custom location
- CONNECTED_CLUSTER_ID=$(az connectedk8s show --resource-group $GROUP_NAME --name $CLUSTER_NAME-query id --output tsv)
+ CONNECTED_CLUSTER_ID=$(az connectedk8s show --resource-group $GROUP_NAME --name $CLUSTER_NAME --query id --output tsv)
``` # [PowerShell](#tab/powershell)
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
Select **On** for either **Application Logging (Filesystem)** or **Application L
The **Filesystem** option is for temporary debugging purposes, and turns itself off in 12 hours. The **Blob** option is for long-term logging, and needs a blob storage container to write logs to. The **Blob** option also includes additional information in the log messages, such as the ID of the origin VM instance of the log message (`InstanceId`), thread ID (`Tid`), and a more granular timestamp ([`EventTickCount`](/dotnet/api/system.datetime.ticks)).
-> [!NOTE]
-> If your Azure Storage account is secured by firewall rules, see [Networking considerations](#networking-considerations).
- > [!NOTE] > Currently only .NET application logs can be written to the blob storage. Java, PHP, Node.js, Python application logs can only be stored on the App Service file system (without code modifications to write logs to external storage). >
To enable web server logging for Windows apps in the [Azure portal](https://port
For **Web server logging**, select **Storage** to store logs on blob storage, or **File System** to store logs on the App Service file system.
-> [!NOTE]
-> If your Azure Storage account is secured by firewall rules, see [Networking considerations](#networking-considerations).
- In **Retention Period (Days)**, set the number of days the logs should be retained. > [!NOTE]
The following table shows the supported log types and descriptions:
## Networking considerations -- App Service logs aren't supported using Regional VNet integration, our recommendation is to use the Diagnostic settings feature.-
-If you secure your Azure Storage account by [only allowing selected networks](../storage/common/storage-network-security.md#change-the-default-network-access-rule), it can receive logs from App Service only if both of the following are true:
--- The Azure Storage account is in a different Azure region from the App Service app.-- All outbound addresses of the App Service app are [added to the Storage account's firewall rules](../storage/common/storage-network-security.md#managing-ip-network-rules). To find the outbound addresses for your app, see [Find outbound IPs](overview-inbound-outbound-ips.md#find-outbound-ips).
+For Diagnostic Settings restrictions, refer to the [official Diagnostic Settings documentation regarding destination limits](../azure-monitor/essentials/diagnostic-settings.md#destination-limitations).
## <a name="nextsteps"></a> Next steps * [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md) * [How to Monitor Azure App Service](web-sites-monitor.md) * [Troubleshooting Azure App Service in Visual Studio](troubleshoot-dotnet-visual-studio.md)
-* [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Previously updated : 03/04/2022 Last updated : 02/01/2024
Application Gateway integration with Key Vault offers many benefits, including:
Application Gateway currently supports software-validated certificates only. Hardware security module (HSM)-validated certificates aren't supported.
-After Application Gateway is configured to use Key Vault certificates, its instances retrieve the certificate from Key Vault and install them locally for TLS termination. The instances poll Key Vault at four-hour intervals to retrieve a renewed version of the certificate, if it exists. If an updated certificate is found, the TLS/SSL certificate that's currently associated with the HTTPS listener is automatically rotated.
+After Application Gateway is configured to use Key Vault certificates, its instances retrieve the certificate from Key Vault and install them locally for TLS termination. The instances poll Key Vault at four-hour intervals to retrieve a renewed version of the certificate, if it exists. If an updated certificate is found, the TLS/SSL certificate that's associated with the HTTPS listener is automatically rotated.
> [!TIP]
-> Any change to Application Gateway will force a check against Key Vault to see if any new versions of certificates are available. This includes, but not limited to, changes to Frontend IP Configurations, Listeners, Rules, Backend Pools, Resource Tags, and more. If an updated certificate is found, the new certificate will immediately be presented.
+> Any change to Application Gateway forces a check against Key Vault to see if any new versions of certificates are available. This includes, but not limited to, changes to Frontend IP Configurations, Listeners, Rules, Backend Pools, Resource Tags, and more. If an updated certificate is found, the new certificate is immediately presented.
-Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway will automatically rotate the certificate if a newer version is available in your Key Vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`. You may refer to the PowerShell steps provided in the [section below](#key-vault-azure-role-based-access-control-permission-model).
+Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway automatically rotates the certificate if a newer version is available in your Key Vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`. You may refer to the PowerShell steps provided in the [following section](#key-vault-azure-role-based-access-control-permission-model).
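As a hedged Azure CLI aside (the article points to PowerShell steps; this alternative uses the placeholder names `myvault` and `mysecret`), one way to derive an unversioned secret URI is:

```azurecli
# Build an unversioned Key Vault secret URI for Application Gateway to reference.
# myvault and mysecret are placeholders for your own Key Vault and secret names.
VERSIONED_ID=$(az keyvault secret show --vault-name myvault --name mysecret --query id -o tsv)
UNVERSIONED_ID="${VERSIONED_ID%/*}/"   # drop the version segment, keep the trailing slash
echo "$UNVERSIONED_ID"                 # for example: https://myvault.vault.azure.net/secrets/mysecret/
```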
The Azure portal supports only Key Vault certificates, not secrets. Application Gateway still supports referencing secrets from Key Vault, but only through non-portal resources like PowerShell, the Azure CLI, APIs, and Azure Resource Manager templates (ARM templates).
You can either create a new user-assigned managed identity or reuse an existing
Define access policies to use the user-assigned managed identity with your Key Vault: 1. In the Azure portal, go to **Key Vault**.
-1. Select the Key Vault that contains your certificate.
-1. If you're using the permission model **Vault access policy**: Select **Access Policies**, select **+ Add Access Policy**, select **Get** for **Secret permissions**, and choose your user-assigned managed identity for **Select principal**. Then select **Save**.
+2. Select the Key Vault that contains your certificate.
+3. If you're using the permission model **Vault access policy**: Select **Access Policies**, select **+ Add Access Policy**, select **Get** for **Secret permissions**, and choose your user-assigned managed identity for **Select principal**. Then select **Save**.
If you're using **Azure role-based access control** follow the article [Assign a managed identity access to a resource](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) and assign the user-assigned managed identity the **Key Vault Secrets User** role to the Azure Key Vault.
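A hedged Azure CLI equivalent of those portal steps follows; `MyResourceGroup`, `MyManagedIdentity`, and `MyKeyVault` are placeholder names, and only one of the two permission models applies to your vault:

```azurecli
# Grant the user-assigned managed identity read access to secrets in the Key Vault.
PRINCIPAL_ID=$(az identity show --resource-group MyResourceGroup --name MyManagedIdentity \
    --query principalId -o tsv)

# Option 1: Vault access policy permission model.
az keyvault set-policy --name MyKeyVault --object-id "$PRINCIPAL_ID" --secret-permissions get

# Option 2: Azure role-based access control permission model.
KEYVAULT_ID=$(az keyvault show --name MyKeyVault --resource-group MyResourceGroup --query id -o tsv)
az role assignment create --assignee-object-id "$PRINCIPAL_ID" \
    --assignee-principal-type ServicePrincipal \
    --role "Key Vault Secrets User" --scope "$KEYVAULT_ID"
```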
+> [!NOTE]
+> If you have Key Vaults for your HTTPS listener that use different identities, creating or updating the listener requires checking the certificates associated with each identity. In order for the operation to be successful, you must [grant permission](../key-vault/general/rbac-guide.md) to all identities.
+ ### Verify Firewall Permissions to Key Vault As of March 15, 2021, Key Vault recognizes Application Gateway as a trusted service by leveraging User Managed Identities for authentication to Azure Key Vault. With the use of service endpoints and enabling the trusted services option for Key Vault's firewall, you can build a secure network boundary in Azure. You can deny access to traffic from all networks (including internet traffic) to Key Vault but still make Key Vault accessible for an Application Gateway resource under your subscription.
When you're using a restricted Key Vault, use the following steps to configure A
> Steps 1-3 are not required if your Key Vault has a Private Endpoint enabled. The application gateway can access the Key Vault using the private IP address. > [!IMPORTANT]
-> If using Private Endpoints to access Key Vault, you must link the privatelink.vaultcore.azure.net private DNS zone, containing the corresponding record to the referenced Key Vault, to the virtual network containing Application Gateway. Custom DNS servers may continue to be used on the virtual network instead of the Azure DNS provided resolvers, however the private dns zone will need to remain linked to the virtual network as well.
+> If using Private Endpoints to access Key Vault, you must link the privatelink.vaultcore.azure.net private DNS zone, containing the corresponding record to the referenced Key Vault, to the virtual network containing Application Gateway. Custom DNS servers may continue to be used on the virtual network instead of the Azure DNS provided resolvers, however the private DNS zone needs to remain linked to the virtual network as well.
1. In the Azure portal, in your Key Vault, select **Networking**.
-1. On the **Firewalls and virtual networks** tab, select **Selected networks**.
-1. For **Virtual networks**, select **+ Add existing virtual networks**, and then add the virtual network and subnet for your Application Gateway instance. If prompted, ensure the _Do not configure 'Microsoft.KeyVault' service endpoint(s) at this time_ checkbox is unchecked to ensure the `Microsoft.KeyVault` service endpoint is enabled on the subnet.
-1. Select **Yes** to allow trusted services to bypass the Key Vault's firewall.
+2. On the **Firewalls and virtual networks** tab, select **Selected networks**.
+3. For **Virtual networks**, select **+ Add existing virtual networks**, and then add the virtual network and subnet for your Application Gateway instance. If prompted, ensure the _Do not configure 'Microsoft.KeyVault' service endpoint(s) at this time_ checkbox is unchecked to ensure the `Microsoft.KeyVault` service endpoint is enabled on the subnet.
+4. Select **Yes** to allow trusted services to bypass the Key Vault's firewall.
![Screenshot that shows selections for configuring Application Gateway to use firewalls and virtual networks.](media/key-vault-certs/key-vault-firewall.png)
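For reference, here's a hedged Azure CLI sketch of the same network configuration; `MyResourceGroup`, `MyKeyVault`, `MyVNet`, and `MyAppGwSubnet` are placeholder names:

```azurecli
# Restrict the Key Vault to selected networks while keeping it reachable from Application Gateway.

# Enable the Microsoft.KeyVault service endpoint on the Application Gateway subnet.
az network vnet subnet update --resource-group MyResourceGroup --vnet-name MyVNet \
    --name MyAppGwSubnet --service-endpoints Microsoft.KeyVault

# Add that subnet to the Key Vault firewall.
az keyvault network-rule add --resource-group MyResourceGroup --name MyKeyVault \
    --vnet-name MyVNet --subnet MyAppGwSubnet

# Deny all other networks, but allow trusted Azure services to bypass the firewall.
az keyvault update --resource-group MyResourceGroup --name MyKeyVault \
    --default-action Deny --bypass AzureServices
```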
$appgw = Get-AzApplicationGateway -Name MyApplicationGateway -ResourceGroupName
Set-AzApplicationGatewayIdentity -ApplicationGateway $appgw -UserAssignedIdentityId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyManagedIdentity" # Get the secret ID from Key Vault $secret = Get-AzKeyVaultSecret -VaultName "MyKeyVault" -Name "CertificateName"
-$secretId = $secret.Id.Replace($secret.Version, "") # Remove the secret version so AppGW will use the latest version in future syncs
+$secretId = $secret.Id.Replace($secret.Version, "") # Remove the secret version so Application Gateway uses the latest version in future syncs
# Specify the secret ID from Key Vault Add-AzApplicationGatewaySslCertificate -KeyVaultSecretId $secretId -ApplicationGateway $appgw -Name $secret.Name # Commit the changes to the Application Gateway
Under **Choose a certificate** select the certificate named in the previous step
## Investigating and resolving Key Vault errors > [!NOTE]
-> It is important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. In case your application gateway is unable to access the associated key vault or locate the certificate object in it, it will automatically put that listener in a disabled state.
+> It is important to consider any impact on your application gateway resource when making changes or revoking access to your Key Vault resource. If your application gateway is unable to access the associated key vault or locate the certificate object in it, the application gateway automatically sets the listener to a disabled state.
>
-> You can identify this user-driven event by viewing the Resource Health for your Application Gateway. [Learn more](../application-gateway/disabled-listeners.md).
+> You can identify this user-driven event by viewing the Resource Health for your application gateway. [Learn more](../application-gateway/disabled-listeners.md).
Azure Application Gateway doesn't just poll for the renewed certificate version on Key Vault at every four-hour interval. It also logs any error and is integrated with Azure Advisor to surface any misconfiguration with a recommendation for its fix. 1. Sign-in to your Azure portal
-1. Select Advisor
-1. Select Operational Excellence category from the left menu.
-1. You will find a recommendation titled **Resolve Azure Key Vault issue for your Application Gateway**, if your gateway is experiencing this issue. Ensure the correct Subscription is selected from the drop-down options above.
-1. Select it to view the error details, the associated key vault resource and the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to fix your exact issue.
+2. Select Advisor
+3. Select Operational Excellence category from the left menu.
+4. If your gateway is experiencing this issue, you find a recommendation titled **Resolve Azure Key Vault issue for your Application Gateway**. Ensure the correct subscription is selected from the drop-down options above.
+5. Select it to view the error details, the associated key vault resource and the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to fix your exact issue.
By identifying such an event through Azure Advisor or Resource Health, you can quickly resolve any configuration problems with your Key Vault. We strongly recommend you take advantage of [Azure Advisor](../advisor/advisor-alerts-portal.md) and [Resource Health](../service-health/resource-health-alert-monitor-guide.md) alerts to stay informed when a problem is detected.
-For Advisor alert, use "Resolve Azure Key Vault issue for your Application Gateway" in the recommendation type as shown below.</br>
+For an Advisor alert, use "Resolve Azure Key Vault issue for your Application Gateway" as the recommendation type, as shown:</br>
![Diagram that shows steps for Advisor alert.](media/key-vault-certs/advisor-alert.png)
-You can configure the Resource health alert as illustrated below.</br>
+You can configure the Resource health alert as illustrated:</br>
![Diagram that shows steps for Resource health alert.](media/key-vault-certs/resource-health-alert.png) ## Next steps
azure-app-configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview.md
The easiest way to add an App Configuration store to your application is through
| ASP.NET Core | App Configuration [provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration) for .NET Core | ASP.NET Core [quickstart](./quickstart-aspnet-core-app.md) | | .NET Framework and ASP.NET | App Configuration [builder](https://go.microsoft.com/fwlink/?linkid=2074663) for .NET | .NET Framework [quickstart](./quickstart-dotnet-app.md) | | Java Spring | App Configuration [provider](https://go.microsoft.com/fwlink/?linkid=2180917) for Spring Cloud | Java Spring [quickstart](./quickstart-java-spring-app.md) |
-| JavaScript/Node.js | App Configuration [client](https://go.microsoft.com/fwlink/?linkid=2103664) for JavaScript | Javascript/Node.js [quickstart](./quickstart-javascript.md)|
-| Python | App Configuration [client](https://go.microsoft.com/fwlink/?linkid=2103727) for Python | Python [quickstart](./quickstart-python.md) |
+| JavaScript/Node.js | App Configuration [provider](https://github.com/Azure/AppConfiguration-JavaScriptProvider) for JavaScript | JavaScript/Node.js [quickstart](./quickstart-javascript-provider.md)|
+| Python | App Configuration [provider](https://pypi.org/project/azure-appconfiguration-provider/) for Python | Python [quickstart](./quickstart-python-provider.md) |
| Other | App Configuration [REST API](/rest/api/appconfiguration/) | None | ## Next steps
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote Pow
Using `az arcappliance` commands from remote PowerShell isn't currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
-### Resource bridge cannot be updated
+### Resource bridge configurations cannot be updated
In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it again.
For example, if you specified the wrong location, or subscription during deploym
To resolve this issue, delete the appliance and update the appliance YAML file. Then redeploy and create the resource bridge.
+### Appliance Network Unavailable
+
+If Arc resource bridge is experiencing a network communication problem, you may see an "Appliance Network Unavailable" error when trying to perform an action that interacts with the resource bridge or an extension operating on top of the bridge. The error can also surface as "Error while dialing dial tcp xx.xx.xxx.xx:55000: connect: no route to host". In either case, communication from the host to the Arc resource bridge VM might need to be opened with the help of your network administrator, or a temporary network issue might be preventing the host from reaching the Arc resource bridge VM. Once the network issue is resolved, retry the operation.
+ ### Connection closed before server preface received When there are multiple attempts to deploy Arc resource bridge, expired credentials left on the management machine might cause future deployments to fail. The error will contain the message `Unavailable desc = connection closed before server preface received`. This error will surface in various `az arcappliance` commands including `validate`, `prepare` and `delete`.
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Before upgrading an Arc resource bridge, the following prerequisites must be met
- The appliance VM must be online, its status is "Running" and the [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be valid. -- There must be sufficient space on the management machine (~3.5 GB) and appliance VM (35 GB) to download required images. For VMware, a new template is created.
+- There must be sufficient space on the management machine (~3.5 GB) and appliance VM (35 GB) to download required images.
+
+- For Arc-enabled VMware, upgrading the resource bridge requires 200GB of free space on the datastore. A new template is also created.
- The outbound connection from the Appliance VM IPs (`k8snodeippoolstart/end`, VM IP 1/2) to `msk8s.sb.tlu.dl.delivery.mp.microsoft.com`, port 443 must be enabled. Be sure the full list of [required endpoints for Arc resource bridge](network-requirements.md) are also enabled.
azure-arc Deliver Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md
To enroll Azure Arc-enabled servers eligible for ESUs at no additional cost, fol
This linking will not trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores. The expectation is that the license only includes cores for production and billed servers. Any additional cores will be charged and result in over-billing.
+> [!IMPORTANT]
+> Adding these tags to your license will NOT make the license free or reduce the number of license cores that are chargeable. These tags allow you to link your Azure machines to existing licenses that are already configured with payable cores without needing to create any new licenses or add additional cores to your free machines.
+ **Example:** You have 8 Windows Server 2012 R2 Standard instances, each with 8 physical cores. 6 of these Windows Server 2012 R2 Standard machines are for production, and 2 of these Windows Server 2012 R2 Standard machines are eligible for free ESUs through the Visual Studio Dev Test subscription. You should first provision and activate a regular ESU License for Windows Server 2012/R2 that's Standard edition and has 48 physical cores. You should link this regular, production ESU license to your 6 production servers. Next, you should use this existing license, not add any more cores or provision a separate license, and link this license to your 2 non-production Windows Server 2012 R2 standard machines. You should tag the license and the 2 non-production Windows Server 2012 R2 Standard machines with Name: ΓÇ£ESU UsageΓÇ¥ and Value: ΓÇ£WS2012 VISUAL STUDIO DEV TESTΓÇ¥.
azure-functions Functions Add Output Binding Storage Queue Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs.md
Because you're using a Queue storage output binding, you need the Storage bindin
# [Isolated worker model](#tab/isolated-process) ```bash
- Install-Package /dotnet/api/microsoft.azure.webjobs.blobattribute.Queues -IncludePrerelease
+ Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues
``` # [In-process model](#tab/in-process) ```bash
azure-functions Functions Kubernetes Keda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-kubernetes-keda.md
The Azure Functions runtime provides flexibility in hosting where and how you want. [KEDA](https://keda.sh) (Kubernetes-based Event Driven Autoscaling) pairs seamlessly with the Azure Functions runtime and tooling to provide event driven scale in Kubernetes. > [!IMPORTANT]
-> Azure Functions on Kubernetes using KEDA is an open-source effort that you can use free of cost. Best-effort support is provided by contributors and from the community, so please use [GitHub issues in the Azure Functions repository](https://github.com/Azure/Azure-Functions/issues) to report bugs and raise feature requests. Azure Functions deployment to Azure Container Apps, which runs on managed Kubernetes clusters in Azure, is currently in preview. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
+> Running your containerized function apps on Kubernetes, either by using KEDA or by direct deployment, is an open-source effort that you can use free of cost. Best-effort support is provided by contributors and from the community by using [GitHub issues in the Azure Functions repository](https://github.com/Azure/Azure-Functions/issues). Please use these issues to report bugs and raise feature requests. Containerized function app deployments to Azure Container Apps, which runs on managed Kubernetes clusters in Azure, are currently in preview. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
## How Kubernetes-based functions work
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Last updated 7/19/2023
-#customer-intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines.
+# Customer intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines.
# Azure Monitor Agent overview
azure-monitor Azure Monitor Agent Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-performance.md
Last updated 4/07/2023
-#customer-intent: As a deployment engineer, I can scope the resources required to scale my gateway data colletors the use the Azure Monitor Agent.
+# Customer intent: As a deployment engineer, I want to scope the resources required to scale my gateway data collectors that use the Azure Monitor Agent.
# Azure Monitor Agent Performance Benchmark
azure-monitor Troubleshooter Ama Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-linux.md
Last updated 12/14/2023
-# customer-intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own.
+# Customer intent: When AMA is experiencing issues, I want to investigate them and determine whether I can resolve them on my own.
# How to use the Linux operating system (OS) Azure Monitor Agent Troubleshooter
azure-monitor Troubleshooter Ama Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-windows.md
Last updated 12/14/2023
-# customer-intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own.
+# Customer intent: When AMA is experiencing issues, I want to investigate them and determine whether I can resolve them on my own.
# How to use the Windows operating system (OS) Azure Monitor Agent Troubleshooter
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
The alert condition for stateful alerts is `fired`, until it is considered resol
For stateful alerts, while the alert itself is deleted after 30 days, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved.
-Stateful log alerts have these limitations:
-- they can trigger up to 300 alerts per evaluation.-- you can have a maximum of 6000 alerts with the `fired` alert condition.
+Stateful log alerts have limitations. For details, see [Azure Monitor service limits](https://learn.microsoft.com/azure/azure-monitor/service-limits#alerts).
This table describes when a stateful alert is considered resolved:
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
Four different configmaps can be configured to provide scrape configuration and
2. [`ama-metrics-prometheus-config`](https://aka.ms/azureprometheus-addon-rs-configmap) (**Recommended**) This config map can be used to provide Prometheus scrape config for addon replica. Addon runs a singleton replica, and any cluster level services can be discovered and scraped by providing scrape jobs in this configmap. You can take the sample configmap from the above git hub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster. 3. [`ama-metrics-prometheus-config-node`](https://aka.ms/azureprometheus-addon-ds-configmap) (**Advanced**)
- This config map can be used to provide Prometheus scrape config for addon DaemonSet that runs on every **Linux** node in the cluster, and any node level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use `$NODE_IP` variable in your scrape config, which gets substituted by corresponding node's ip address in DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics addon DaemonSet. **Please be careful when you use discoveries in scrape config in this node level config map, as every node in the cluster will setup & discover the target(s) and will collect redundant metrics**.
+ This config map can be used to provide Prometheus scrape config for the addon DaemonSet that runs on every **Linux** node in the cluster, and any node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which is substituted with the corresponding node's IP address in the DaemonSet pod running on each node. This way you can scrape anything that runs on that node from the metrics addon DaemonSet. **Be careful when you use discoveries in the scrape config in this node-level config map, because every node in the cluster will set up and discover the target(s) and will collect redundant metrics**.
You can take the sample configmap from the above git hub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster 4. [`ama-metrics-prometheus-config-node-windows`](https://aka.ms/azureprometheus-addon-ds-configmap-windows) (**Advanced**)
- This config map can be used to provide Prometheus scrape config for addon DaemonSet that runs on every **Windows** node in the cluster, and node level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use `$NODE_IP` variable in your scrape config, which will be substituted by corresponding node's ip address in DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics addon DaemonSet. **Please be careful when you use discoveries in scrape config in this node level config map, as every node in the cluster will setup & discover the target(s) and will collect redundant metrics**.
+ This config map can be used to provide Prometheus scrape config for the addon DaemonSet that runs on every **Windows** node in the cluster, and node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which is substituted with the corresponding node's IP address in the DaemonSet pod running on each node. This way you can scrape anything that runs on that node from the metrics addon DaemonSet. **Be careful when you use discoveries in the scrape config in this node-level config map, because every node in the cluster will set up and discover the target(s) and will collect redundant metrics**.
You can take the sample configmap from the above git hub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster ## Metrics add-on settings configmap
metric_relabel_configs:
regex: '.+' ```
+### TLS based scraping
+
+If you have a Prometheus endpoint served over TLS and you want to scrape metrics from it, set the scheme to `https` and configure the TLS settings in your configmap or in the respective CRD. You can use the `tls_config` configuration property inside a custom scrape job to configure these settings through either a CRD or a configmap. You need to provide a CA certificate to validate the scrape target's server certificate. The CA certificate is used to verify the authenticity of the server's certificate when Prometheus connects to the target over TLS, which helps ensure that the certificate is signed by a trusted authority.
+
+Create the secret in the `kube-system` namespace first, and then create the configmap or CRD, also in the `kube-system` namespace. The order of secret creation matters: when there's no secret but a valid CRD or configmap, you'll find errors in the collector log such as `no file found for cert....`
+
+The following sections describe how to provide the TLS config settings through a configmap or a CRD.
+
+- To provide the TLS config settings in a configmap, create the self-signed certificate and key inside the `/etc/prometheus/certs` directory of your mTLS-enabled app.
+ An example `tls_config` inside the configmap should look like this:
+
+```yaml
+tls_config:
+ ca_file: /etc/prometheus/certs/client-cert.pem
+ cert_file: /etc/prometheus/certs/client-cert.pem
+ key_file: /etc/prometheus/certs/client-key.pem
+ insecure_skip_verify: false
+```
+
+- To provide the TLS config settings in a CRD, create the self-signed certificate and key inside the `/etc/prometheus/certs` directory of your mTLS-enabled app.
+ An example `tlsConfig` inside a PodMonitor should look like this:
+
+```yaml
+tlsConfig:
+ ca:
+ secret:
+ key: "client-cert.pem" # since it is self-signed
+ name: "ama-metrics-mtls-secret"
+ cert:
+ secret:
+ key: "client-cert.pem"
+ name: "ama-metrics-mtls-secret"
+ keySecret:
+ key: "client-key.pem"
+ name: "ama-metrics-mtls-secret"
+ insecureSkipVerify: false
+```
+> [!NOTE]
+> For CRD-based scraping, make sure that the certificate file name and key name inside the mTLS app follow this format:
+> For example: secret_kube-system_ama-metrics-mtls-secret_cert-name.pem and secret_kube-system_ama-metrics-mtls-secret_key-name.pem.
+> The CRD needs to be created in the `kube-system` namespace.
+> The secret name must be exactly `ama-metrics-mtls-secret`, created in the `kube-system` namespace. An example command for creating the secret (a fuller sketch follows this note): `kubectl create secret generic ama-metrics-mtls-secret --from-file=secret_kube-system_ama-metrics-mtls-secret_client-cert.pem=secret_kube-system_ama-metrics-mtls-secret_client-cert.pem --from-file=secret_kube-system_ama-metrics-mtls-secret_client-key.pem=secret_kube-system_ama-metrics-mtls-secret_client-key.pem -n kube-system`
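A minimal end-to-end sketch of the secret setup, assuming a self-signed certificate and the placeholder file names used above (adjust the names to match your app):

```bash
# Generate a self-signed certificate and key (the subject name is a placeholder).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout client-key.pem -out client-cert.pem -subj "/CN=my-mtls-app"

# Create the secret that the metrics addon mounts. The secret name must be
# ama-metrics-mtls-secret in the kube-system namespace, and the keys follow
# the secret_kube-system_ama-metrics-mtls-secret_<file-name> format.
kubectl create secret generic ama-metrics-mtls-secret \
  --from-file=secret_kube-system_ama-metrics-mtls-secret_client-cert.pem=client-cert.pem \
  --from-file=secret_kube-system_ama-metrics-mtls-secret_client-key.pem=client-key.pem \
  -n kube-system
```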
+
+To read more on TLS authentication, the following documents might be helpful.
+
+- Generating TLS certificates: https://o11y.eu/blog/prometheus-server-tls/
+- Prometheus `tls_config` reference: https://prometheus.io/docs/alerting/latest/configuration/#tls_config
+ ## Next steps [Setup Alerts on Prometheus metrics](./container-insights-metric-alerts.md)<br>
azure-monitor Activity Log Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log-insights.md
Last updated 12/11/2023
-#customer-intent: As an IT manager, I want to understand how I can use activity log insights to monitor changes to resources and resource groups in an Azure subscription.
+# Customer intent: As an IT manager, I want to understand how I can use activity log insights to monitor changes to resources and resource groups in an Azure subscription.
# Monitor changes to resources and resource groups with Azure Monitor activity log insights
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
Azure Monitor collects metrics from the following sources. After these metrics a
- **Azure resources**: Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a [distinct set of metrics](./metrics-supported.md) without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition. - **Applications**: Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is being used. Values include _Server response time_ and _Browser exceptions_.-- **Virtual machine agents**: Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and for Linux virtual machines by using the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/).
+- **Virtual machine agents**: Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview). Azure Monitor Agent replaces the legacy agents: the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) for Windows and the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/) for Linux virtual machines. (A hedged CLI sketch for enabling the agent follows this list.)
- **Custom metrics**: You can define metrics in addition to the standard metrics that are automatically available. You can [define custom metrics in your application](../app/api-custom-events-metrics.md) that's monitored by Application Insights. You can also create custom metrics for an Azure service by using the [custom metrics API](./metrics-store-custom-rest-api.md). - **Kubernetes clusters**: Kubernetes clusters typically send metric data to a local Prometheus server that you must maintain. [Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md) provides a managed service that collects metrics from Kubernetes clusters and stores them in Azure Monitor Metrics.
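As a rough, hedged sketch (not taken from this article), Azure Monitor Agent can be enabled on an existing Azure VM with the Azure CLI; the VM and resource group names are placeholders, and data collection still has to be configured separately through a data collection rule.

```bash
# Install the Azure Monitor Agent extension on a Windows VM.
# "myVM" and "myResourceGroup" are placeholder names.
az vm extension set \
  --name AzureMonitorWindowsAgent \
  --publisher Microsoft.Azure.Monitor \
  --vm-name myVM \
  --resource-group myResourceGroup \
  --enable-auto-upgrade true

# For a Linux VM, use the AzureMonitorLinuxAgent extension instead.
```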
azure-monitor Migrate To Batch Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-batch-api.md
Last updated 05/07/2023
-#customer-intent: As a customer, I want to understand how to migrate from the metrics API to the getBatch API
+# Customer intent: As a customer, I want to understand how to migrate from the metrics API to the getBatch API
# How to migrate from the metrics API to the getBatch API
azure-monitor Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/insights-overview.md
Last updated 10/15/2022 + # Azure Monitor Insights overview Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as *curated visualizations* with the larger more complex of them being called *Insights*.
The following table lists the available curated visualizations and information a
|Name with docs link| State | [Azure portal link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description | |:--|:--|:--|:--| |**Compute**||||
- | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure VMs and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
-| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
+|[Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) |General Availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) |Monitors your Azure VMs and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
+|[Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) |GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) |Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
|**Networking**||||
- | [Azure Network Insights](../../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by searching for your website name. |
+|[Azure Network Insights](../../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by searching for your website name. |
|**Storage**||||
- | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
+| [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
| [Azure Backup](../../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. | |**Databases**|||| | [Azure Cosmos DB Insights](../../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | | [Azure Monitor for Azure Cache for Redis (preview)](../../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. | |**Analytics**|||| | [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
- | [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
+| [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
|**Security**||||
- | [Azure Key Vault Insights (preview)](../../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
+| [Azure Key Vault Insights](../../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
|**Monitor**||||
- | [Azure Monitor Application Insights](../app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
-| [Azure activity Log Insights](../essentials/activity-log-insights.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
+| [Azure Monitor Application Insights](../app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
+| [Azure Activity Log Insights](../essentials/activity-log-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides insights into changes to resources and resource groups in a subscription, based on activity log data. |
| [Azure Monitor for Resource Groups](resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. | |**Integration**|||| | [Azure Service Bus Insights](../../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- [Azure IoT Edge](../../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
+|[Azure IoT Edge](../../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
|**Workloads**|||| | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. | | [Azure Monitor for SAP solutions](../../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | Preview | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. | |**Other**|||| | [Azure Virtual Desktop Insights](../../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. |
-| [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
+| [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | GA| [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
| [Windows Update for Business](/windows/deployment/update/wufb-reports-overview) | GA | [Yes](https://ms.portal.azure.com/#view/AppInsightsExtension/WorkbookViewerBlade/Type/updatecompliance-insights/ComponentId/Azure%20Monitor/GalleryResourceType/Azure%20Monitor/ConfigurationId/community-Workbooks%2FUpdateCompliance%2FUpdateComplianceHub) | Detailed deployment monitoring, compliance assessment and failure troubleshooting for all Windows 10/11 devices.| |**Not in Azure portal Insight hub**||||
-| [Azure Monitor Workbooks for Microsoft Entra ID](../../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | General availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Microsoft Entra ID provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. |
+| [Azure Monitor Workbooks for Microsoft Entra ID](../../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) |GA| [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Microsoft Entra ID provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. |
| [Azure HDInsight](../../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status.| --- ## Next steps - Reference some of the insights listed above to review their functionality
azure-monitor Aiops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/aiops-machine-learning.md
Last updated 02/28/2023
-#customer-intent: As a DevOps manager or data scientist, I want to understand which AIOps features Azure Monitor offers and how to implement a machine learning pipeline on data in Azure Monitor Logs so that I can use artifical intelligence to improve service quality and reliability of my IT environment.
+# Customer intent: As a DevOps manager or data scientist, I want to understand which AIOps features Azure Monitor offers and how to implement a machine learning pipeline on data in Azure Monitor Logs so that I can use artificial intelligence to improve the service quality and reliability of my IT environment.
# Detect and mitigate potential issues using AIOps and machine learning in Azure Monitor
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
Last updated 06/05/2023
-#customer-intent: As an IT manager, I want to understand the data and service resilience benefits Azure Monitor availability zones provide to ensure my data and services are sufficiently protected in the event of datacenter failure.
+# Customer intent: As an IT manager, I want to understand the data and service resilience benefits Azure Monitor availability zones provide to ensure my data and services are sufficiently protected in the event of datacenter failure.
# Enhance data and service resilience in Azure Monitor Logs with availability zones
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Title: Set a table's log data plan to Basic Logs or Analytics Logs
description: Learn how to use Basic Logs and Analytics Logs to reduce costs and take advantage of advanced features and analytics capabilities in Azure Monitor Logs. -+ Last updated 12/17/2023
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Application Gateways | [AGWAccessLogs](/azure/azure-monitor/reference/tables/AGWAccessLogs)<br>[AGWPerformanceLogs](/azure/azure-monitor/reference/tables/AGWPerformanceLogs)<br>[AGWFirewallLogs](/azure/azure-monitor/reference/tables/AGWFirewallLogs) | | Application Gateway for Containers | [AGCAccessLogs](/azure/azure-monitor/reference/tables/AGCAccessLogs) | | Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) |
-| Bare Metal Machines | [NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) |
+| Bare Metal Machines | [NCBMSecurityDefenderLogs](/azure/azure-monitor/reference/tables/ncbmsecuritydefenderlogs)<br>[NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) |
| Chaos Experiments | [ChaosStudioExperimentEventLogs](/azure/azure-monitor/reference/tables/ChaosStudioExperimentEventLogs) | | Cloud HSM | [CHSMManagementAuditLogs](/azure/azure-monitor/reference/tables/CHSMManagementAuditLogs) | | Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) |
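As a hedged illustration of the configuration this article describes, a supported table's plan can be switched with the Azure CLI; the resource group and workspace names below are placeholders, and the table name is simply taken from the list above.

```bash
# Set a table's log data plan to Basic (use --plan Analytics to switch back).
# "myResourceGroup" and "myWorkspace" are placeholder names.
az monitor log-analytics workspace table update \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --name ContainerAppConsoleLogs \
  --plan Basic
```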
azure-monitor Ingest Logs Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingest-logs-event-hub.md
Last updated 12/28/2023
-# customer-intent: As a DevOps engineer, I want to ingest data from an event hub into a Log Analytics workspace so that I can monitor logs that I send to Azure Event Hubs.
+# Customer intent: As a DevOps engineer, I want to ingest data from an event hub into a Log Analytics workspace so that I can monitor logs that I send to Azure Event Hubs.
# Tutorial: Ingest events from Azure Event Hubs into Azure Monitor Logs (Public Preview)
azure-monitor Migrate Splunk To Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md
Last updated 01/27/2023
-#customer-intent: As an IT manager, I want to understand the steps required to migrate my Splunk deployment to Azure Monitor Logs so that I can decide whether to migrate and plan and execute my migration.
+# Customer intent: As an IT manager, I want to understand the steps required to migrate my Splunk deployment to Azure Monitor Logs so that I can decide whether to migrate and plan and execute my migration.
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
description: Search jobs are asynchronous log queries in Azure Monitor that make
Last updated 10/01/2022
-#customer-intent: As a data scientist or workspace administrator, I want an efficient way to search through large volumes of data in a table, including archived and basic logs.
+# Customer intent: As a data scientist or workspace administrator, I want an efficient way to search through large volumes of data in a table, including archived and basic logs.
# Run search jobs in Azure Monitor
azure-netapp-files Access Smb Volume From Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/access-smb-volume-from-windows-client.md
You can use Microsoft Entra ID with the Hybrid Authentication Management module
>[!NOTE] >Using Microsoft Entra ID for authenticating [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md) allows Microsoft Entra users to access Azure NetApp Files SMB shares. This means your end users can access Azure NetApp Files SMB shares without requiring a line-of-sight to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined VMs. Cloud-only identities aren't currently supported. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](understand-guidelines-active-directory-domain-service-site.md). ## Requirements and considerations
The configuration process takes you through five processes:
1. Under **Computers**, right-click on the computer account created as part of the Azure NetApp Files volume then select **Properties**. 1. Under **Attribute Editor,** locate `servicePrincipalName`. In the Multi-valued string editor, add the CIFS SPN value using the CIFS/FQDN format. <a name='register-a-new-azure-ad-application'></a>
The configuration process takes you through five processes:
1. Assign a **Name**. Under select the **Supported account type**, choose **Accounts in this organizational directory only (Single tenant)**. 1. Select **Register**. 1. Configure the permissions for the application. From your **App Registrations**, select **API Permissions** then **Add a permission**. 1. Select **Microsoft Graph** then **Delegated Permissions**. Under **Select Permissions**, select **openid** and **profile** under **OpenId permissions**.
- :::image type="content" source="../media/azure-netapp-files/api-permissions.png" alt-text="Screenshot to register API permissions." lightbox="../media/azure-netapp-files/api-permissions.png":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/api-permissions.png" alt-text="Screenshot to register API permissions." lightbox="./media/access-smb-volume-from-windows-client/api-permissions.png":::
1. Select **Add permission**. 1. From **API Permissions**, select **Grant admin consent for...**.
- :::image type="content" source="../media/azure-netapp-files/grant-admin-consent.png" alt-text="Screenshot to grant API permissions." lightbox="../media/azure-netapp-files/grant-admin-consent.png ":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/grant-admin-consent.png" alt-text="Screenshot to grant API permissions." lightbox="./media/access-smb-volume-from-windows-client/grant-admin-consent.png ":::
1. From **Authentication**, under **App instance property lock**, select **Configure** then deselect the checkbox labeled **Enable property lock**.
- :::image type="content" source="../media/azure-netapp-files/authentication-registration.png" alt-text="Screenshot of app registrations." lightbox="../media/azure-netapp-files/authentication-registration.png":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/authentication-registration.png" alt-text="Screenshot of app registrations." lightbox="./media/access-smb-volume-from-windows-client/authentication-registration.png":::
1. From **Overview**, make note of the **Application (client) ID**, which is required later.
The configuration process takes you through five processes:
* Value name: KERBEROS.MICROSOFTONLINE.COM * Value: .contoso.com
- :::image type="content" source="../media/azure-netapp-files/define-host-name-to-kerberos.png" alt-text="Screenshot to define how-name-to-Kerberos real mappings." lightbox="../media/azure-netapp-files/define-host-name-to-kerberos.png":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/define-host-name-to-kerberos.png" alt-text="Screenshot to define host-name-to-Kerberos realm mappings." lightbox="./media/access-smb-volume-from-windows-client/define-host-name-to-kerberos.png":::
### Mount the Azure NetApp Files SMB volumes
The configuration process takes you through five processes:
2. Mount the Azure NetApp Files SMB volume using the info provided in the Azure portal. For more information, see [Mount SMB volumes for Windows VMs](mount-volumes-vms-smb.md). 3. Confirm the mounted volume is using Kerberos authentication and not NTLM authentication. Open a command prompt, issue the `klist` command; observe the output in the cloud TGT (krbtgt) and CIFS server ticket information.
- :::image type="content" source="../media/azure-netapp-files/klist-output.png" alt-text="Screenshot of CLI output." lightbox="../media/azure-netapp-files/klist-output.png":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/klist-output.png" alt-text="Screenshot of CLI output." lightbox="./media/access-smb-volume-from-windows-client/klist-output.png":::
## Further information
azure-netapp-files Application Volume Group Add Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-add-hosts.md
Building a multiple-host SAP HANA database always starts with creating a volume
Click **Next: Volume Group**.
- [ ![Screenshot that shows the HANA section for adding hosts.](../media/azure-netapp-files/application-multiple-hosts-sap-hana.png) ](../media/azure-netapp-files/application-multiple-hosts-sap-hana.png#lightbox)
+ [ ![Screenshot that shows the HANA section for adding hosts.](./media/application-volume-group-add-hosts/application-multiple-hosts-sap-hana.png) ](./media/application-volume-group-add-hosts/application-multiple-hosts-sap-hana.png#lightbox)
3. In the **Volume group** tab, provide identical input as you did when you created the first HANA host.
Building a multiple-host SAP HANA database always starts with creating a volume
Click **Next: Review + Create**.
- [ ![Screenshot that shows the Volumes section for adding hosts.](../media/azure-netapp-files/application-multiple-hosts-volumes.png) ](../media/azure-netapp-files/application-multiple-hosts-volumes.png#lightbox)
+ [ ![Screenshot that shows the Volumes section for adding hosts.](./media/application-volume-group-add-hosts/application-multiple-hosts-volumes.png) ](./media/application-volume-group-add-hosts/application-multiple-hosts-volumes.png#lightbox)
4. In the **Review + Create** tab, the `{HostId}` placeholder is replaced with the individual numbers for each of the volume groups that will be created. You can click **Next Group** to navigate through all volume groups that are being created (one for each host). You can also click a particular volume to view its details.
- [ ![Screenshot that shows the Review and Create section for adding hosts.](../media/azure-netapp-files/application-multiple-review-create.png) ](../media/azure-netapp-files/application-multiple-review-create.png#lightbox)
+ [ ![Screenshot that shows the Review and Create section for adding hosts.](./media/application-volume-group-add-hosts/application-multiple-review-create.png) ](./media/application-volume-group-add-hosts/application-multiple-review-create.png#lightbox)
5. After you navigate through the volume groups, click **Create All Groups** to create all the volumes for the HANA hosts you are adding.
- [ ![Screenshot that shows the Create All Groups button.](../media/azure-netapp-files/application-multiple-create-groups.png) ](../media/azure-netapp-files/application-multiple-create-groups.png#lightbox)
+ [ ![Screenshot that shows the Create All Groups button.](./media/application-volume-group-add-hosts/application-multiple-create-groups.png) ](./media/application-volume-group-add-hosts/application-multiple-create-groups.png#lightbox)
The **Create Volume Group** page shows the added volume groups with the "Creating" status.
azure-netapp-files Application Volume Group Add Volume Secondary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-add-volume-secondary.md
The HANA System Replication (HSR) functionality enables SAP HANA databases to sy
The following diagram illustrates the concept of HSR:
- ![Diagram that explains HANA System Replication.](../media/azure-netapp-files/application-hana-system-replication.png)
+ ![Diagram that explains HANA System Replication.](./media/application-volume-group-add-volume-secondary/application-hana-system-replication.png)
To enable HSR, the configuration of the secondary SAP HANA system must be identical to the primary SAP HANA system. That is, if the primary system is a single-host HANA system, then the secondary SAP HANA system also needs to be a single-host system. The same applies for multiple-host systems.
This section shows an example of creating a single-host, secondary SAP HANA syst
Click **Next: Volume Group** to continue.
- [ ![Screenshot that shows the HANA section in HSR configuration.](../media/azure-netapp-files/application-secondary-sap-hana.png) ](../media/azure-netapp-files/application-secondary-sap-hana.png#lightbox)
+ [ ![Screenshot that shows the HANA section in HSR configuration.](./media/application-volume-group-add-volume-secondary/application-secondary-sap-hana.png) ](./media/application-volume-group-add-volume-secondary/application-secondary-sap-hana.png#lightbox)
3. In the **Volume group** tab, provide information for creating the volume group:
This section shows an example of creating a single-host, secondary SAP HANA syst
Click **Next: Volumes**.
- [ ![Screenshot that shows the Tags section of the Volume Group tab.](../media/azure-netapp-files/application-secondary-volume-group-tags.png) ](../media/azure-netapp-files/application-secondary-volume-group-tags.png#lightbox)
+ [ ![Screenshot that shows the Tags section of the Volume Group tab.](./media/application-volume-group-add-volume-secondary/application-secondary-volume-group-tags.png) ](./media/application-volume-group-add-volume-secondary/application-secondary-volume-group-tags.png#lightbox)
6. The **Volumes** tab displays information about the volumes that are being created. The volume naming convention includes an `"HA-"` prefix to indicate that the volume belongs to the secondary system of an HSR setup.
- [ ![Screenshot that shows the Volume Group tab.](../media/azure-netapp-files/application-secondary-volumes-tags.png) ](../media/azure-netapp-files/application-secondary-volumes-tags.png#lightbox)
+ [ ![Screenshot that shows the Volume Group tab.](./media/application-volume-group-add-volume-secondary/application-secondary-volumes-tags.png) ](./media/application-volume-group-add-volume-secondary/application-secondary-volumes-tags.png#lightbox)
7. In the **Volumes** tab, you can select each volume to view or change the volume details, including the protocol and tag for the volume. In the **Tags** section of a volume, you can populate the `HSRPartnerStorageResourceId` tag with the resource ID of the corresponding primary volume. This action only marks the primary volume; it does not validate the provided resource ID.
- [ ![Screenshot that shows the tag details.](../media/azure-netapp-files/application-secondary-volumes-tag-details.png) ](../media/azure-netapp-files/application-secondary-volumes-tag-details.png#lightbox)
+ [ ![Screenshot that shows the tag details.](./media/application-volume-group-add-volume-secondary/application-secondary-volumes-tag-details.png) ](./media/application-volume-group-add-volume-secondary/application-secondary-volumes-tag-details.png#lightbox)
Click **Volumes** to return to the Volumes overview page.
azure-netapp-files Application Volume Group Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-delete.md
This article describes how to delete an application volume group.
1. Click **Application volume groups**. Select the volume group you want to delete.
- [![Screenshot that shows Application Volume Groups list.](../media/azure-netapp-files/application-volume-group-list.png) ](../media/azure-netapp-files/application-volume-group-list.png#lightbox)
+ [![Screenshot that shows Application Volume Groups list.](./media/application-volume-group-delete/application-volume-group-list.png) ](./media/application-volume-group-delete/application-volume-group-list.png#lightbox)
2. To delete the volume group, click **Delete**. If you are prompted, type the volume group name to confirm the deletion.
- [![Screenshot that shows Application Volume Groups deletion.](../media/azure-netapp-files/application-volume-group-delete.png)](../media/azure-netapp-files/application-volume-group-delete.png#lightbox)
+ [![Screenshot that shows Application Volume Groups deletion.](./media/application-volume-group-delete/application-volume-group-delete.png)](./media/application-volume-group-delete/application-volume-group-delete.png#lightbox)
## Next steps
azure-netapp-files Application Volume Group Deploy First Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-deploy-first-host.md
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
1. From your NetApp account, select **Application volume groups**, then **+Add Group**.
- [ ![Screenshot that shows how to add a group.](../media/azure-netapp-files/application-volume-group-add-group.png) ](../media/azure-netapp-files/application-volume-group-add-group.png#lightbox)
+ [ ![Screenshot that shows how to add a group.](./media/application-volume-group-deploy-first-host/application-volume-group-add-group.png) ](./media/application-volume-group-deploy-first-host/application-volume-group-add-group.png#lightbox)
2. In Deployment Type, select **SAP HANA** then **Next**.
- [ ![Screenshot that shows the Create Volume Group window.](../media/azure-netapp-files/application-volume-group-create-group.png) ](../media/azure-netapp-files/application-volume-group-create-group.png#lightbox)
+ [ ![Screenshot that shows the Create Volume Group window.](./media/application-volume-group-deploy-first-host/application-volume-group-create-group.png) ](./media/application-volume-group-deploy-first-host/application-volume-group-create-group.png#lightbox)
3. In the **SAP HANA** tab, provide HANA-specific information:
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
Select **Next: Volume Group**.
- [ ![Screenshot that shows the SAP HANA tag.](../media/azure-netapp-files/application-sap-hana-tag.png) ](../media/azure-netapp-files/application-sap-hana-tag.png#lightbox)
+ [ ![Screenshot that shows the SAP HANA tag.](./media/application-volume-group-deploy-first-host/application-sap-hana-tag.png) ](./media/application-volume-group-deploy-first-host/application-sap-hana-tag.png#lightbox)
4. In the **Volume group** tab, provide information for creating the volume group:
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
Select **Next: Tags**.
- [ ![Screenshot that shows the Volume Group tag.](../media/azure-netapp-files/application-volume-group-tag.png) ](../media/azure-netapp-files/application-volume-group-tag.png#lightbox)
+ [ ![Screenshot that shows the Volume Group tag.](./media/application-volume-group-deploy-first-host/application-volume-group-tag.png) ](./media/application-volume-group-deploy-first-host/application-volume-group-tag.png#lightbox)
5. In the **Tags** section of the Volume Group tab, you can add tags as needed for the volumes. Select **Next: Protocol**.
- [ ![Screenshot that shows how to add tags.](../media/azure-netapp-files/application-add-tags.png) ](../media/azure-netapp-files/application-add-tags.png#lightbox)
+ [ ![Screenshot that shows how to add tags.](./media/application-volume-group-deploy-first-host/application-add-tags.png) ](./media/application-volume-group-deploy-first-host/application-add-tags.png#lightbox)
6. In the **Protocols** section of the Volume Group tab, you can modify the **Export Policy**, which should be common to all volumes. Select **Next: Volumes**.
- [ ![Screenshot that shows the protocols tags.](../media/azure-netapp-files/application-protocols-tag.png) ](../media/azure-netapp-files/application-protocols-tag.png#lightbox)
+ [ ![Screenshot that shows the protocols tags.](./media/application-volume-group-deploy-first-host/application-protocols-tag.png) ](./media/application-volume-group-deploy-first-host/application-protocols-tag.png#lightbox)
7. The **Volumes** tab summarizes the volumes that are being created with proposed volume name, quota, and throughput.
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
The creation for the data-backup and log-backup volumes is optional.
- [ ![Screenshot that shows a list of volumes being created.](../media/azure-netapp-files/application-volume-list.png) ](../media/azure-netapp-files/application-volume-list.png#lightbox)
+ [ ![Screenshot that shows a list of volumes being created.](./media/application-volume-group-deploy-first-host/application-volume-list.png) ](./media/application-volume-group-deploy-first-host/application-volume-list.png#lightbox)
8. In the **Volumes** tab, you can select each volume to view or change the volume details. For example, select "data-*volume-name*".
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
Select **Next: Protocols** to review the protocol settings.
- [ ![Screenshot that shows the Basics tab of Create a Volume Group page.](../media/azure-netapp-files/application-create-volume-basics-tab.png) ](../media/azure-netapp-files/application-create-volume-basics-tab.png#lightbox)
+ [ ![Screenshot that shows the Basics tab of Create a Volume Group page.](./media/application-volume-group-deploy-first-host/application-create-volume-basics-tab.png) ](./media/application-volume-group-deploy-first-host/application-create-volume-basics-tab.png#lightbox)
9. In the **Protocols** tab of a volume, you can modify **File path** (the export name where the volume can be mounted) and **Export policy** as needed.
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
Select the **Tags** tab if you want to specify tags for a volume. Or select **Volumes** to return to the Volumes overview page.
- [ ![Screenshot that shows the Protocol tab of Create a Volume Group page.](../media/azure-netapp-files/application-create-volume-protocol-tab.png) ](../media/azure-netapp-files/application-create-volume-protocol-tab.png#lightbox)
+ [ ![Screenshot that shows the Protocol tab of Create a Volume Group page.](./media/application-volume-group-deploy-first-host/application-create-volume-protocol-tab.png) ](./media/application-volume-group-deploy-first-host/application-create-volume-protocol-tab.png#lightbox)
10. The **Volumes** page displays volume details.
- [ ![Screenshot that shows Volumes page with volume details.](../media/azure-netapp-files/application-volume-details.png) ](../media/azure-netapp-files/application-volume-details.png#lightbox)
+ [ ![Screenshot that shows Volumes page with volume details.](./media/application-volume-group-deploy-first-host/application-volume-details.png) ](./media/application-volume-group-deploy-first-host/application-volume-details.png#lightbox)
If you want to remove the optional volumes (marked with a `*`), such as data-backup volume or log-backup volume from the volume group, select the volume then select **Remove volume**. Confirm the removal in the dialog box that appears. > [!IMPORTANT] > You cannot add a removed volume back to the volume group again. You need to stop and restart the application volume group configuration.
- [ ![Screenshot that shows how to remove a volume.](../media/azure-netapp-files/application-volume-remove.png) ](../media/azure-netapp-files/application-volume-remove.png#lightbox)
+ [ ![Screenshot that shows how to remove a volume.](./media/application-volume-group-deploy-first-host/application-volume-remove.png) ](./media/application-volume-group-deploy-first-host/application-volume-remove.png#lightbox)
Select **Volumes** to return to the Volume overview page. Select **Next: Review + create**. 11. The **Review + Create** tab lists all the volumes and how they will be created. Select **Create Volume Group** to start the volume group creation.
- [ ![Screenshot that shows the Review and Create tab.](../media/azure-netapp-files/application-review-create.png) ](../media/azure-netapp-files/application-review-create.png#lightbox)
+ [ ![Screenshot that shows the Review and Create tab.](./media/application-volume-group-deploy-first-host/application-review-create.png) ](./media/application-volume-group-deploy-first-host/application-review-create.png#lightbox)
12. The **Volume Groups** deployment workflow starts, and the progress is displayed. This process can take a few minutes to complete.
- [ ![Screenshot that shows the Deployment in Progress window.](../media/azure-netapp-files/application-deployment-in-progress.png) ](../media/azure-netapp-files/application-deployment-in-progress.png#lightbox)
+ [ ![Screenshot that shows the Deployment in Progress window.](./media/application-volume-group-deploy-first-host/application-deployment-in-progress.png) ](./media/application-volume-group-deploy-first-host/application-deployment-in-progress.png#lightbox)
You can display the list of volume groups to see the new volume group. You can select the new volume group to see the details and status of each of the volumes being created. Creating a volume group is an "all-or-none" operation. If one volume cannot be created, all remaining volumes will be removed as well.
- [ ![Screenshot that shows the new volume group.](../media/azure-netapp-files/application-new-volume-group.png) ](../media/azure-netapp-files/application-new-volume-group.png#lightbox)
+ [ ![Screenshot that shows the new volume group.](./media/application-volume-group-deploy-first-host/application-new-volume-group.png) ](./media/application-volume-group-deploy-first-host/application-new-volume-group.png#lightbox)
## Next steps
azure-netapp-files Application Volume Group Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-disaster-recovery.md
Instead of using HANA System Replication (HSR), you can use cross-region replica
The following diagram illustrates cross-region replication between the source and destination HANA servers. Cross-region replication is asynchronous. As such, not all volumes need to be replicated.
- ![Diagram that shows cross-region replication between the source and destination HANA servers.](../media/azure-netapp-files/application-cross-region-replication.png)
+ ![Diagram that shows cross-region replication between the source and destination HANA servers.](./media/application-volume-group-disaster-recovery/application-cross-region-replication.png)
> [!NOTE] > When you use an HA deployment with HSR at the primary side, you can choose to replicate not only the primary HANA system as described in this section, but also the HANA secondary system using cross-region replication. To automatically adapt the naming convention, you select both the **HSR secondary** and **Disaster recovery destination** options in the Create a Volume Group screen. The prefix will then be changed to `DR2-`.
The following example adds volumes to an SAP HANA system. The system serves as a
Click **Next: Volume Group**.
- [ ![Screenshot that shows the Create a Volume Group page in a cross-region replication configuration.](../media/azure-netapp-files/application-cross-region-create-volume.png) ](../media/azure-netapp-files/application-cross-region-create-volume.png#lightbox)
+ [ ![Screenshot that shows the Create a Volume Group page in a cross-region replication configuration.](./media/application-volume-group-disaster-recovery/application-cross-region-create-volume.png) ](./media/application-volume-group-disaster-recovery/application-cross-region-create-volume.png#lightbox)
3. In the **Volume group** tab, provide information for creating the volume group:
The following example adds volumes to an SAP HANA system. The system serves as a
5. In the **Replication** section of the Volume Group tab, the Replication Schedule field defaults to "Multiple" (disabled). The default replication schedules are different for the replicated volumes. As such, you can modify the replication schedules only for each volume individually from the Volumes tab, and not globally for the entire volume group.
- [ ![Screenshot that shows Multiple field is disabled in Create a Volume Group page.](../media/azure-netapp-files/application-cross-region-multiple-disabled.png) ](../media/azure-netapp-files/application-cross-region-multiple-disabled.png#lightbox)
+ [ ![Screenshot that shows Multiple field is disabled in Create a Volume Group page.](./media/application-volume-group-disaster-recovery/application-cross-region-multiple-disabled.png) ](./media/application-volume-group-disaster-recovery/application-cross-region-multiple-disabled.png#lightbox)
Click **Next: Tags**.
The following example adds volumes to an SAP HANA system. The system serves as a
The default type for the data-backup volume is DP, but this setting can be changed to RW.
- [ ![Screenshot that shows volume types in Create a Volume Group page.](../media/azure-netapp-files/application-cross-region-volume-types.png) ](../media/azure-netapp-files/application-cross-region-volume-types.png#lightbox)
+ [ ![Screenshot that shows volume types in Create a Volume Group page.](./media/application-volume-group-disaster-recovery/application-cross-region-volume-types.png) ](./media/application-volume-group-disaster-recovery/application-cross-region-volume-types.png#lightbox)
8. Click each volume with the DP type to specify the **Source volume ID**. For more information, see [Locate the source volume resource ID](cross-region-replication-create-peering.md#locate-the-source-volume-resource-id). You can optionally change the default replication schedule of a volume. See [Replication schedules, RTO, and RPO](#replication-schedules-rto-and-rpo) for the replication schedule options.
- [ ![Screenshot that shows the Replication tab in Create a Volume Group page.](../media/azure-netapp-files/application-cross-region-replication-tab.png) ](../media/azure-netapp-files/application-cross-region-replication-tab.png#lightbox)
+ [ ![Screenshot that shows the Replication tab in Create a Volume Group page.](./media/application-volume-group-disaster-recovery/application-cross-region-replication-tab.png) ](./media/application-volume-group-disaster-recovery/application-cross-region-replication-tab.png#lightbox)
9. After you create the volume group, set up replication by following instructions in [Authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume).
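To script the lookup of the source volume resource ID that step 8 asks for, a hedged Azure CLI sketch (all resource names are placeholders) is:

```bash
# Return the full Azure resource ID of the source (RW) volume;
# the DP volume's replication configuration needs this value.
az netappfiles volume show \
  --resource-group myRG1 \
  --account-name myaccount1 \
  --pool-name mypool1 \
  --volume-name mysourcevol1 \
  --query id \
  --output tsv
```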
In this scenario, you typically don't change roles for primary and secondary s
The following diagram describes this scenario:
-[ ![Diagram that shows replication for only the primary HANA database volumes.](../media/azure-netapp-files/replicate-only-primary-database-volumes.png) ](../media/azure-netapp-files/replicate-only-primary-database-volumes.png#lightbox)
+[ ![Diagram that shows replication for only the primary HANA database volumes.](./media/application-volume-group-disaster-recovery/replicate-only-primary-database-volumes.png) ](./media/application-volume-group-disaster-recovery/replicate-only-primary-database-volumes.png#lightbox)
In this scenario, a DR setup must include only the volumes of the primary HANA system. With the daily replication of the primary data volume and the log backups of both the primary and secondary systems, the system can be recovered at the DR site. In the diagram, a single volume is used for the log backups of the primary and secondary systems.
For reasons other than HA, you might want to periodically switch roles between t
The following diagram describes this scenario:
-[ ![Diagram that shows replication for both the primary and the secondary HANA database volumes.](../media/azure-netapp-files/replicate-both-primary-secondary-database-volumes.png) ](../media/azure-netapp-files/replicate-both-primary-secondary-database-volumes.png#lightbox)
+[ ![Diagram that shows replication for both the primary and the secondary HANA database volumes.](./media/application-volume-group-disaster-recovery/replicate-both-primary-secondary-database-volumes.png) ](./media/application-volume-group-disaster-recovery/replicate-both-primary-secondary-database-volumes.png#lightbox)
In this scenario, you might want to replicate both sets of volumes from the primary and secondary HANA systems as shown in the diagram.
azure-netapp-files Application Volume Group Manage Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-manage-volumes.md
You can manage a volume from its volume group. You can resize, delete, or change
1. From your NetApp account, select **Application volume groups**. Click a volume group to display the volumes in the group. Select the volume you want to resize, delete, or change throughput. The volume overview will be displayed.
- [![Screenshot that shows Application Volume Groups overview page.](../media/azure-netapp-files/application-volume-group-overview.png)](../media/azure-netapp-files/application-volume-group-overview.png#lightbox)
+ [![Screenshot that shows Application Volume Groups overview page.](./media/application-volume-group-manage-volumes/application-volume-group-overview.png)](./media/application-volume-group-manage-volumes/application-volume-group-overview.png#lightbox)
1. To resize the volume, click **Resize** and specify the quota in GiB.
- ![Screenshot that shows the Update Volume Quota window.](../media/azure-netapp-files/application-volume-resize.png)
+ ![Screenshot that shows the Update Volume Quota window.](./media/application-volume-group-manage-volumes/application-volume-resize.png)
2. To change the throughput for the volume, click **Change throughput** and specify the intended throughput in MiB/s.
- ![Screenshot that shows the Change Throughput window.](../media/azure-netapp-files/application-volume-change-throughput.png)
+ ![Screenshot that shows the Change Throughput window.](./media/application-volume-group-manage-volumes/application-volume-change-throughput.png)
3. To delete the volume in the volume group, click **Delete**. If you are prompted, type the volume name to confirm the deletion. > [!IMPORTANT] > The volume deletion operation cannot be undone.
- ![Screenshot that shows the Delete Volume window.](../media/azure-netapp-files/application-volume-delete.png)
+ ![Screenshot that shows the Delete Volume window.](./media/application-volume-group-manage-volumes/application-volume-delete.png)
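These portal actions can also be scripted. A minimal Azure CLI sketch (names are placeholders; the `--throughput-mibps` parameter name is an assumption and applies to volumes in manual QoS capacity pools):

```bash
# Change the throughput assigned to a volume (manual QoS pools only).
az netappfiles volume update \
  --resource-group myRG1 --account-name myaccount1 \
  --pool-name mypool1 --volume-name myvol1 \
  --throughput-mibps 128

# Delete a volume from the group. This operation cannot be undone.
az netappfiles volume delete \
  --resource-group myRG1 --account-name myaccount1 \
  --pool-name mypool1 --volume-name myvol1
```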
## Next steps
azure-netapp-files Auxiliary Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/auxiliary-groups.md
Accept requests from the kernel to map user id numbers into lists of group
When an access request is made, only 16 GIDs are passed in the RPC portion of the packet. Any GID beyond the limit of 16 is dropped by the protocol. Extended GIDs in Azure NetApp Files can only be used with external name services such as LDAP.
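To check how close a user is to the 16-GID limit from a Linux client, a quick sketch (the user name is a placeholder):

```bash
# Count the groups a user belongs to; any GID beyond 16 is dropped
# by the NFS RPC call unless extended groups with LDAP are used.
id -G testuser01 | wc -w
```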
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
Azure Government users can access Azure NetApp Files by pointing their browsers to **portal.azure.us**. The portal site name is **Microsoft Azure Government**. For more information, see [Connect to Azure Government using portal](../azure-government/documentation-government-get-started-connect-with-portal.md).
-![Screenshot that shows the Azure Government portal highlighting portal.azure.us as the URL.](../media/azure-netapp-files/azure-government.jpg)
+![Screenshot that shows the Azure Government portal highlighting portal.azure.us as the URL.](./media/azure-government/azure-government.jpg)
From the Azure Government portal, you can access Azure NetApp Files the same way you would in the Azure portal. For example, you can enter **Azure NetApp Files** in the portal's **Search resources** box, and then select **Azure NetApp Files** from the list that appears.
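If you work from the Azure CLI instead of the portal, you can point the CLI at the Azure Government cloud before signing in; a minimal sketch:

```bash
# Switch the CLI to the Azure Government cloud and sign in.
az cloud set --name AzureUSGovernment
az login

# Confirm which cloud is active.
az cloud list --output table
```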
azure-netapp-files Azure Netapp Files Configure Export Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-configure-export-policy.md
Before modifying policy rules with NFS Kerberos enabled, see [Export policy rule
* **Read-only** and **Read/Write**: If you use Kerberos encryption with NFSv4.1, follow the instructions in [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md). For performance impact of Kerberos, see [Performance impact of Kerberos on NFSv4.1 volumes](performance-impact-kerberos.md).
- ![Kerberos security options](../media/azure-netapp-files/kerberos-security-options.png)
+ ![Kerberos security options](./media/azure-netapp-files-configure-export-policy/kerberos-security-options.png)
* **Root Access**: Specify whether the `root` account can access the volume. By default, Root Access is set to **On**, and the `root` account has access to the volume. This option is not available for NFSv4.1 Kerberos volumes.
- ![Export policy](../media/azure-netapp-files/azure-netapp-files-export-policy.png)
+ ![Export policy](./media/azure-netapp-files-configure-export-policy/azure-netapp-files-export-policy.png)
* **Chown Mode**: Modify the change ownership mode as needed to set the ownership management capabilities of files and directories. Two options are available:
Before modifying policy rules with NFS Kerberos enabled, see [Export policy rule
Registration requirement and considerations apply for setting **`Chown Mode`**. Follow instructions in [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
- ![Screenshot that shows the change ownership mode option.](../media/azure-netapp-files/chown-mode-export-policy.png)
+ ![Screenshot that shows the change ownership mode option.](./media/azure-netapp-files-configure-export-policy/chown-mode-export-policy.png)
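Export policy rules can also be managed from the Azure CLI. A hedged sketch (resource names, the rule index, and the client range are placeholders, and the parameter names for the `az netappfiles volume export-policy` group are assumptions):

```bash
# Add a read/write NFSv4.1 rule for a specific client subnet.
az netappfiles volume export-policy add \
  --resource-group myRG1 --account-name myaccount1 \
  --pool-name mypool1 --volume-name myvol1 \
  --rule-index 2 \
  --allowed-clients 10.0.0.0/24 \
  --unix-read-write true --unix-read-only false \
  --nfsv41 true --nfsv3 false --cifs false
```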
## Next steps * [Understand NAS permissions in Azure NetApp Files](network-attached-storage-permissions.md)
azure-netapp-files Azure Netapp Files Configure Nfsv41 Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-configure-nfsv41-domain.md
The root user mapping can illustrate what happens if there is a mismatch between
In the following directory listing example, the user `root` mounts a volume on a Linux client that uses its default configuration `localdomain` for the ID authentication domain, which is different from Azure NetApp Files' default configuration of `defaultv4iddomain.com`. In the listing of the files in the directory, `file1` shows as being mapped to `nobody`, when it should be owned by the root user.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
1. Select **Configure**. 1. To use the default domain `defaultv4iddomain.com`, select the box next to **Use Default NFSv4 ID Domain**. To use another domain, clear the checkbox and provide the name of the NFSv4.1 ID domain in the text box.
- :::image type="content" source="../media/azure-netapp-files/nfsv4-id-domain.png" alt-text="Screenshot with field to set NFSv4 domain." lightbox="../media/azure-netapp-files/nfsv4-id-domain.png":::
+ :::image type="content" source="./media/azure-netapp-files-configure-nfsv41-domain/nfsv4-id-domain.png" alt-text="Screenshot with field to set NFSv4 domain." lightbox="./media/azure-netapp-files-configure-nfsv41-domain/nfsv4-id-domain.png":::
1. Select **Save**.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
The following example shows the resulting user/group change:
-![Screenshot that shows an example of the resulting user/group change.](../media/azure-netapp-files/azure-netapp-files-nfsv41-resulting-config.png)
+![Screenshot that shows an example of the resulting user/group change.](./media/azure-netapp-files-configure-nfsv41-domain/azure-netapp-files-nfsv41-resulting-config.png)
As the example shows, the user/group has now changed from `nobody` to `root`.
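For the mapping to work, the client must use the same NFSv4 ID domain as the volume. On most Linux distributions this is set in `/etc/idmapd.conf` (the file location and defaults can vary by distribution); a minimal sketch of the relevant setting:

```
[General]
Domain = defaultv4iddomain.com
```

After saving the change, clear the client's cached ID mappings (for example, with `sudo nfsidmap -c`) and remount the volume, then repeat the directory listing to confirm the owner shows as `root` instead of `nobody`.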
Azure NetApp Files supports local users and groups (created locally on the NFS c
In the following example, `Host1` has three user accounts (`testuser01`, `testuser02`, `testuser03`):
-![Screenshot that shows that Host1 has three existing test user accounts.](../media/azure-netapp-files/azure-netapp-files-nfsv41-host1-users.png)
+![Screenshot that shows that Host1 has three existing test user accounts.](./media/azure-netapp-files-configure-nfsv41-domain/azure-netapp-files-nfsv41-host1-users.png)
On `Host2`, no corresponding user accounts exist, but the same volume is mounted on both hosts:
-![Resulting configuration for NFSv4.1](../media/azure-netapp-files/azure-netapp-files-nfsv41-host2-users.png)
+![Resulting configuration for NFSv4.1](./media/azure-netapp-files-configure-nfsv41-domain/azure-netapp-files-nfsv41-host2-users.png)
To resolve this issue, either create the missing accounts on the NFS client or configure your NFS clients to use the LDAP server that Azure NetApp Files is using for centrally managed UNIX identities.
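If you choose to create the missing local accounts rather than use LDAP, make sure the numeric UIDs match across hosts. A sketch (the UIDs shown are placeholders; use the values that `id` reports on `Host1`):

```bash
# Create the same users on Host2 with the same UIDs as on Host1.
sudo useradd -u 1001 testuser01
sudo useradd -u 1002 testuser02
sudo useradd -u 1003 testuser03
```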
azure-netapp-files Azure Netapp Files Cost Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-cost-model.md
If your capacity pool size requirements fluctuate (for example, because of varia
For example, you are using the Premium capacity 24 hours (1 day) at 10 TiB, 96 hours (4 days) at 24 TiB, four times at 6 hours (1 day) at 5 TiB, 480 hours (20 days) at 6 TiB, and the month's remaining hours at 0 TiB. A dynamic cloud consumption deployment profile looks different from a traditional static on-premises consumption profile:
-[ ![Bar chart that shows dynamic versus static capacity pool provisioning.](../media/azure-netapp-files/cost-model-example-one-capacity.png) ](../media/azure-netapp-files/cost-model-example-one-capacity.png#lightbox)
+[ ![Bar chart that shows dynamic versus static capacity pool provisioning.](./media/azure-netapp-files-cost-model/cost-model-example-one-capacity.png) ](./media/azure-netapp-files-cost-model/cost-model-example-one-capacity.png#lightbox)
When costs are billed at $0.000403 per GiB/hour ([pricing depending on the region](https://azure.microsoft.com/pricing/details/netapp/)), the monthly cost breakdown looks like this:
When costs are billed at $0.000403 per GiB/hour ([pricing depending on the regio
* 6 TiB x 480 hours x $0.000403 per GiB/hour = $1,188.50 * Total = **$2,238.33**
-[ ![Bar chart that shows static versus dynamic service level cost model.](../media/azure-netapp-files/cost-model-example-one-pricing.png) ](../media/azure-netapp-files/cost-model-example-one-pricing.png#lightbox)
+[ ![Bar chart that shows static versus dynamic service level cost model.](./media/azure-netapp-files-cost-model/cost-model-example-one-pricing.png) ](./media/azure-netapp-files-cost-model/cost-model-example-one-pricing.png#lightbox)
This scenario constitutes a monthly savings of $4,892.64 compared to static provisioning.
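Each line item follows the same pattern: convert TiB to GiB (1 TiB = 1,024 GiB), then multiply by the number of hours and the hourly per-GiB rate. A quick check of the 6-TiB item:

```bash
# 6 TiB for 480 hours at $0.000403 per GiB/hour
echo "6 * 1024 * 480 * 0.000403" | bc
# ~= 1188.50, matching the $1,188.50 line item above
```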
If your capacity pool size requirements remain the same but performance requirem
Consider a scenario where the capacity requirement is a constant 24 TiB. But your performance needs fluctuate between 384 hours (16 days) of Standard service level, 120 hours (5 days) of Premium service level, 168 hours (7 days) of Ultra service level, and then back to 48 hours (2 days) of standard service level performance. In this scenario, a dynamic cloud consumption deployment profile looks different compared to a traditional static on-premises consumption profile:
-[ ![Bar chart that shows provisioning with and without dynamic service level change.](../media/azure-netapp-files/cost-model-example-two-capacity.png) ](../media/azure-netapp-files/cost-model-example-two-capacity.png#lightbox)
+[ ![Bar chart that shows provisioning with and without dynamic service level change.](./media/azure-netapp-files-cost-model/cost-model-example-two-capacity.png) ](./media/azure-netapp-files-cost-model/cost-model-example-two-capacity.png#lightbox)
In this case, when costs are billed at $0.000202 per GiB/hour (Standard), $0.000403 per GiB/hour (Premium) and $0.000538 per GiB/hour (Ultra) respectively ([pricing depending on the region](https://azure.microsoft.com/pricing/details/netapp/)), the monthly cost breakdown looks like this:
In this case, when costs are billed at $0.000202 per GiB/hour (Standard), $0.000
* 24 TiB x 48 hours x $0.000202 per GiB/hour = $238.29 * Total = **$5,554.37**
-[ ![Bar chart that shows static versus dynamic service level change cost model.](../media/azure-netapp-files/cost-model-example-two-pricing.png) ](../media/azure-netapp-files/cost-model-example-two-pricing.png#lightbox)
+[ ![Bar chart that shows static versus dynamic service level change cost model.](./media/azure-netapp-files-cost-model/cost-model-example-two-pricing.png) ](./media/azure-netapp-files-cost-model/cost-model-example-two-pricing.png#lightbox)
This scenario constitutes a monthly savings of $3,965.39 compared to static provisioning.
The following diagram illustrates the concepts.
* 7.9 TiB of capacity is used (3.5 TiB, 400 GiB, 4 TiB in Volumes 1, 2, and 3). * The capacity pool has 100 GiB of unprovisioned capacity remaining. ## Next steps
azure-netapp-files Azure Netapp Files Create Netapp Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-netapp-account.md
You must register your subscription for using the NetApp Resource Provider. For
* **Resource group**: Use an existing resource group or create a new one. * **Location**: Select the region where you want the account and its child resources to be located.
- ![Screenshot that shows New NetApp account.](../media/azure-netapp-files/azure-netapp-files-new-netapp-account.png)
+ ![Screenshot that shows New NetApp account.](./media/azure-netapp-files-create-netapp-account/azure-netapp-files-new-netapp-account.png)
1. Select **Create**. The NetApp account you created now appears in the Azure NetApp Files pane.
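The same account can be created with the Azure CLI; a minimal sketch (the resource group, account name, and region are placeholders):

```bash
az netappfiles account create \
  --resource-group myRG1 \
  --account-name myaccount1 \
  --location eastus
```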
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
Before creating an SMB volume, you need to create an Active Directory connection
1. Select the **Volumes** blade from the Capacity Pools blade.
- ![Navigate to Volumes](../media/azure-netapp-files/azure-netapp-files-navigate-to-volumes.png)
+ ![Navigate to Volumes](./media/shared/azure-netapp-files-navigate-to-volumes.png)
2. Select **+ Add volume** to create a volume. The Create a Volume window appears.
Before creating an SMB volume, you need to create an Active Directory connection
If you haven't delegated a subnet, you can select **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
- ![Create subnet](../media/azure-netapp-files/azure-netapp-files-create-subnet.png)
+ ![Create subnet](./media/shared/azure-netapp-files-create-subnet.png)
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
Before creating an SMB volume, you need to create an Active Directory connection
For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
- ![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
+ ![Show advanced selection](./media/shared/volume-create-advanced-selection.png)
4. Select **Protocol** and complete the following information: * Select **SMB** as the protocol type for the volume.
Before creating an SMB volume, you need to create an Active Directory connection
**Custom applications are not supported with SMB Continuous Availability.**
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-protocol-smb.png" alt-text="Screenshot showing the Protocol tab of creating an SMB volume." lightbox="../media/azure-netapp-files/azure-netapp-files-protocol-smb.png":::
+ :::image type="content" source="./media/azure-netapp-files-create-volumes-smb/azure-netapp-files-protocol-smb.png" alt-text="Screenshot showing the Protocol tab of creating an SMB volume." lightbox="./media/azure-netapp-files-create-volumes-smb/azure-netapp-files-protocol-smb.png":::
5. Select **Review + Create** to review the volume details. Then select **Create** to create the SMB volume.
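If you script volume creation, the SMB case differs from NFS mainly in the protocol type and in the prerequisite Active Directory connection on the NetApp account. A hedged Azure CLI sketch (all names, the region, and the 100-GiB quota are placeholders):

```bash
az netappfiles volume create \
  --resource-group myRG1 \
  --account-name myaccount1 \
  --pool-name mypool1 \
  --volume-name mysmbvol1 \
  --location eastus \
  --service-level Premium \
  --usage-threshold 100 \
  --file-path "mysmbvol1" \
  --vnet myvnet1 \
  --subnet default \
  --protocol-types CIFS
```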
Access to an SMB volume is managed through permissions.
You can set permissions for a file or folder by using the **Security** tab of the object's properties in the Windows SMB client.
-![Set file and folder permissions](../media/azure-netapp-files/set-file-folder-permissions.png)
+![Set file and folder permissions](./media/azure-netapp-files-create-volumes-smb/set-file-folder-permissions.png)
### Modify SMB share permissions
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
This article shows you how to create an NFS volume. For SMB volumes, see [Create
1. Select the **Volumes** blade from the Capacity Pools blade. Select **+ Add volume** to create a volume.
- ![Navigate to Volumes](../media/azure-netapp-files/azure-netapp-files-navigate-to-volumes.png)
+ ![Navigate to Volumes](./media/shared/azure-netapp-files-navigate-to-volumes.png)
2. In the Create a Volume window, select **Create**, and provide information for the following fields under the Basics tab: * **Volume name**
This article shows you how to create an NFS volume. For SMB volumes, see [Create
If you have not delegated a subnet, you can select **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each Virtual Network, only one subnet can be delegated to Azure NetApp Files.
- ![Create subnet](../media/azure-netapp-files/azure-netapp-files-create-subnet.png)
+ ![Create subnet](./media/shared/azure-netapp-files-create-subnet.png)
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
- ![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
+ ![Show advanced selection](./media/shared/volume-create-advanced-selection.png)
>[!NOTE] >By default, the `.snapshot` directory path is hidden from NFSv4.1 clients. Enabling the **Hide snapshot path** option will hide the .snapshot directory from NFSv3 clients; the directory will still be accessible.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* Optionally, [configure export policy for the NFS volume](azure-netapp-files-configure-export-policy.md).
- ![Specify NFS protocol](../media/azure-netapp-files/azure-netapp-files-protocol-nfs.png)
+ ![Specify NFS protocol](./media/azure-netapp-files-create-volumes/azure-netapp-files-protocol-nfs.png)
4. Select **Review + Create** to review the volume details. Select **Create** to create the volume.
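These portal steps map to a single Azure CLI call; a minimal sketch (names, the region, and the 100-GiB quota are placeholders):

```bash
az netappfiles volume create \
  --resource-group myRG1 \
  --account-name myaccount1 \
  --pool-name mypool1 \
  --volume-name myvol1 \
  --location eastus \
  --service-level Premium \
  --usage-threshold 100 \
  --file-path "myvol1" \
  --vnet myvnet1 \
  --subnet default \
  --protocol-types NFSv3
```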
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
You must delegate a subnet to Azure NetApp Files. When you create a volume, you
* **Address range**: Specify the IP address range. * **Subnet delegation**: Select **Microsoft.NetApp/volumes**.
- ![Subnet delegation](../media/azure-netapp-files/azure-netapp-files-subnet-delegation.png)
+ ![Subnet delegation](./media/azure-netapp-files-delegate-subnet/azure-netapp-files-subnet-delegation.png)
You can also create and delegate a subnet when you [create a volume for Azure NetApp Files](azure-netapp-files-create-volumes.md).
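Subnet delegation can also be done with the Azure CLI; a minimal sketch (the VNet name, subnet name, and address range are placeholders):

```bash
az network vnet subnet create \
  --resource-group myRG1 \
  --vnet-name myvnet1 \
  --name anf-subnet \
  --address-prefixes 10.0.2.0/28 \
  --delegations Microsoft.NetApp/volumes
```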
azure-netapp-files Azure Netapp Files Manage Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-manage-snapshots.md
Azure NetApp Files supports creating on-demand [snapshots](snapshots-introductio
1. Go to the volume that you want to create a snapshot for. Select **Snapshots**.
- ![Screenshot that shows how to navigate to the snapshots blade.](../media/azure-netapp-files/azure-netapp-files-navigate-to-snapshots.png)
+ ![Screenshot that shows how to navigate to the snapshots blade.](./media/azure-netapp-files-manage-snapshots/azure-netapp-files-navigate-to-snapshots.png)
2. Select **+ Add snapshot** to create an on-demand snapshot for a volume.
- ![Screenshot that shows how to add a snapshot.](../media/azure-netapp-files/azure-netapp-files-add-snapshot.png)
+ ![Screenshot that shows how to add a snapshot.](./media/azure-netapp-files-manage-snapshots/azure-netapp-files-add-snapshot.png)
3. In the New Snapshot window, provide a name for the new snapshot that you are creating.
- ![Screenshot that shows the New Snapshot window.](../media/azure-netapp-files/azure-netapp-files-new-snapshot.png)
+ ![Screenshot that shows the New Snapshot window.](./media/azure-netapp-files-manage-snapshots/azure-netapp-files-new-snapshot.png)
4. Select **OK**.
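An on-demand snapshot can also be created from the Azure CLI; a hedged sketch (names and the region are placeholders):

```bash
az netappfiles snapshot create \
  --resource-group myRG1 \
  --account-name myaccount1 \
  --pool-name mypool1 \
  --volume-name myvol1 \
  --name mysnapshot1 \
  --location eastus
```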
azure-netapp-files Azure Netapp Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-metrics.md
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
- From Azure monitor, select **Metrics**, select a capacity pool or volume. Then select **Metric** to view the available metrics:
- :::image type="content" source="../media/azure-netapp-files/metrics-select-pool-volume.png" alt-text="Screenshot that shows how to access Azure NetApp Files metrics for capacity pools or volumes." lightbox="../media/azure-netapp-files/metrics-select-pool-volume.png":::
+ :::image type="content" source="./media/azure-netapp-files-metrics/metrics-select-pool-volume.png" alt-text="Screenshot that shows how to access Azure NetApp Files metrics for capacity pools or volumes." lightbox="./media/azure-netapp-files-metrics/metrics-select-pool-volume.png":::
- From the Azure NetApp Files capacity pool or volume, select **Metrics**. Then select **Metric** to view the available metrics:
- :::image type="content" source="../media/azure-netapp-files/metrics-navigate-volume.png" alt-text="Snapshot that shows how to navigate to the Metric pull-down." lightbox="../media/azure-netapp-files/metrics-navigate-volume.png":::
+ :::image type="content" source="./media/azure-netapp-files-metrics/metrics-navigate-volume.png" alt-text="Snapshot that shows how to navigate to the Metric pull-down." lightbox="./media/azure-netapp-files-metrics/metrics-navigate-volume.png":::
## <a name="capacity_pools"></a>Usage metrics for capacity pools
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
Consider repurposing the volume and delegating a different volume with a larger size and/or in a higher service level to meet your application requirements. If it's an NFS volume, consider changing mount options to reduce data flow if your application supports those changes.
- :::image type="content" source="../media/azure-netapp-files/throughput-limit-reached.png" alt-text="Screenshot that shows Azure NetApp Files metrics a line graph demonstrating throughput limit reached." lightbox="../media/azure-netapp-files/throughput-limit-reached.png":::
+ :::image type="content" source="./media/azure-netapp-files-metrics/throughput-limit-reached.png" alt-text="Screenshot that shows Azure NetApp Files metrics a line graph demonstrating throughput limit reached." lightbox="./media/azure-netapp-files-metrics/throughput-limit-reached.png":::
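Metrics can also be pulled programmatically. A hedged Azure CLI sketch (the volume resource ID is a placeholder and the metric name is an assumption):

```bash
# Query a volume metric at one-hour granularity.
VOLUME_ID="/subscriptions/<sub-id>/resourceGroups/myRG1/providers/Microsoft.NetApp/netAppAccounts/myaccount1/capacityPools/mypool1/volumes/myvol1"
az monitor metrics list \
  --resource "$VOLUME_ID" \
  --metric "VolumeLogicalSize" \
  --interval PT1H \
  --output table
```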
## Performance metrics for volumes
azure-netapp-files Azure Netapp Files Mount Unmount Volumes For Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md
For more information about how NFS operates in Azure NetApp Files, see [Understa
1. Review the [Linux NFS mount options best practices](performance-linux-mount-options.md). 2. Select the **Volumes** pane and then the NFS volume that you want to mount. 3. To mount the NFS volume using a Linux client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume.
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png" alt-text="Screenshot of Mount instructions." lightbox="../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png":::
+ :::image type="content" source="./media/azure-netapp-files-mount-unmount-volumes-for-virtual-machines/azure-netapp-files-mount-instructions-nfs.png" alt-text="Screenshot of Mount instructions." lightbox="./media/azure-netapp-files-mount-unmount-volumes-for-virtual-machines/azure-netapp-files-mount-instructions-nfs.png":::
* Ensure that you use the `vers` option in the `mount` command to specify the NFS protocol version that corresponds to the volume you want to mount. For example, if the NFS version is NFSv4.1: `sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys $MOUNTTARGETIPADDRESS:/$VOLUMENAME $MOUNTPOINT`
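To make the mount persistent across reboots, you can add a matching `/etc/fstab` entry; a sketch (the IP address, volume path, and mount point are placeholders, and `nofail` is an optional addition so boot doesn't hang if the mount is unavailable):

```
# /etc/fstab
10.0.2.4:/myvol1  /mnt/myvol1  nfs  rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys,nofail  0  0
```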
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Configuring UDRs on the source VM subnets with the address prefix of delegated s
The following diagram illustrates an Azure-native environment: ### Local VNet
In the diagram above, although VM 3 can connect to Volume 1, VM 4 can't connect
The following diagram illustrates an Azure-native environment with cross-region VNet peering. With Standard network features, VMs are able to connect to volumes in another region via global or cross-region VNet peering. The above diagram adds a second region to the configuration in the [local VNet peering section](#vnet-peering). For VNet 4 in this diagram, an Azure NetApp Files volume is created in a delegated subnet and can be mounted on VM5 in the application subnet.
In the diagram, VM2 in Region 1 can connect to Volume 3 in Region 2. VM5 in Regi
The following diagram illustrates a hybrid environment: In the hybrid scenario, applications from on-premises datacenters need access to the resources in Azure. This is the case whether you want to extend your datacenter to Azure or you want to use Azure native services or for disaster recovery. See [VPN Gateway planning options](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json#planningtable) for information on how to connect multiple resources on-premises to resources in Azure through a site-to-site VPN or an ExpressRoute.
azure-netapp-files Azure Netapp Files Performance Metrics Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-metrics-volumes.md
Ensure that you choose the correct service level and volume quota size for the e
You should perform the benchmark testing in the same VNet as Azure NetApp Files. The example below demonstrates the recommendation:
-![VNet recommendations](../media/azure-netapp-files/azure-netapp-files-benchmark-testing-vnet.png)
+![VNet recommendations](./media/azure-netapp-files-performance-metrics-volumes/azure-netapp-files-benchmark-testing-vnet.png)
## Performance benchmarking tools
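As one example of such a tool, `fio` can generate a repeatable sequential-read load against a mounted volume (the mount point, block size, and job count below are placeholders, not recommended values):

```bash
fio --name=seq-read --directory=/mnt/myvol1 \
    --rw=read --bs=64k --size=4G --numjobs=4 \
    --direct=1 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
```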
You can view historical data for the following information:
You can access Azure NetApp Files counters on a per-volume basis from the Metrics page, as shown below:
-![Azure Monitor metrics](../media/azure-netapp-files/azure-netapp-files-benchmark-monitor-metrics.png)
+![Azure Monitor metrics](./media/azure-netapp-files-performance-metrics-volumes/azure-netapp-files-benchmark-monitor-metrics.png)
You can also create a dashboard in Azure Monitor for Azure NetApp Files by going to the Metrics page, filtering for NetApp, and specifying the volume counters of interest:
-![Azure Monitor dashboard](../media/azure-netapp-files/azure-netapp-files-benchmark-monitor-dashboard.png)
+![Azure Monitor dashboard](./media/azure-netapp-files-performance-metrics-volumes/azure-netapp-files-benchmark-monitor-dashboard.png)
### Azure Monitor API access
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Use the Azure portal, PowerShell, or the Azure CLI to [register for NetApp Resou
1. In the Azure portal's search box, enter **Azure NetApp Files** and then select **Azure NetApp Files** from the list that appears.
- ![Select Azure NetApp Files](../media/azure-netapp-files/azure-netapp-files-select-azure-netapp-files.png)
+ ![Select Azure NetApp Files](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-select-azure-netapp-files.png)
2. Select **+ Create** to create a new NetApp account.
Use the Azure portal, PowerShell, or the Azure CLI to [register for NetApp Resou
3. Select **Create new** to create a new resource group. Enter **myRG1** for the resource group name. Select **OK**. 4. Select your account location.
- ![New NetApp Account window](../media/azure-netapp-files/azure-netapp-files-new-account-window.png)
+ ![New NetApp Account window](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-new-account-window.png)
- ![Resource group window](../media/azure-netapp-files/azure-netapp-files-resource-group-window.png)
+ ![Resource group window](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-resource-group-window.png)
4. Select **Create** to create your new NetApp account.
The following code snippet shows how to create a NetApp account in an Azure Reso
1. From the Azure NetApp Files management blade, select your NetApp account (**myaccount1**).
- ![Screenshot of selecting NetApp account menu.](../media/azure-netapp-files/azure-netapp-files-select-netapp-account.png)
+ ![Screenshot of selecting NetApp account menu.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-select-netapp-account.png)
2. From the Azure NetApp Files management blade of your NetApp account, select **Capacity pools**.
- ![Screenshot of Capacity pool selection interface.](../media/azure-netapp-files/azure-netapp-files-click-capacity-pools.png)
+ ![Screenshot of Capacity pool selection interface.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-click-capacity-pools.png)
3. Select **+ Add pools**.
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot of new capacity pool options.":::
+ :::image type="content" source="./media/shared/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot of new capacity pool options.":::
4. Provide information for the capacity pool: * Enter **mypool1** as the pool name.
The following code snippet shows how to create a capacity pool in an Azure Resou
1. From the Azure NetApp Files management blade of your NetApp account, select **Volumes**.
- ![Screenshot of select volumes interface.](../media/azure-netapp-files/azure-netapp-files-click-volumes.png)
+ ![Screenshot of select volumes interface.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-click-volumes.png)
2. Select **+ Add volume**.
- ![Screenshot of add volumes interface.](../media/azure-netapp-files/azure-netapp-files-click-add-volumes.png)
+ ![Screenshot of add volumes interface.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-click-add-volumes.png)
3. In the Create a Volume window, provide information for the volume: 1. Enter **myvol1** as the volume name.
The following code snippet shows how to create a capacity pool in an Azure Resou
* Select **OK** to create the VNet. 5. In **Subnet**, select the newly created VNet (**myvnet1**) as the delegate subnet.
- ![Screenshot of create a volume window.](../media/azure-netapp-files/azure-netapp-files-create-volume-window.png)
+ ![Screenshot of create a volume window.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-create-volume-window.png)
- ![Screenshot of create a virtual network window.](../media/azure-netapp-files/azure-netapp-files-create-virtual-network-window.png)
+ ![Screenshot of create a virtual network window.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-create-virtual-network-window.png)
4. Select **Protocol**, and then complete the following actions: * Select **NFS** as the protocol type for the volume.
The following code snippet shows how to create a capacity pool in an Azure Resou
* Select the NFS version (**NFSv3** or **NFSv4.1**) for the volume. See [considerations](azure-netapp-files-create-volumes.md#considerations) and [best practice](azure-netapp-files-create-volumes.md#best-practice) about NFS versions.
- ![Screenshot of NFS protocol for selection.](../media/azure-netapp-files/azure-netapp-files-quickstart-protocol-nfs.png)
+ ![Screenshot of NFS protocol for selection.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-quickstart-protocol-nfs.png)
5. Select **Review + create** to display information for the volume you are creating. 6. Select **Create** to create the volume. The created volume appears in the Volumes blade.
- ![Screenshot of volume creation confirmation.](../media/azure-netapp-files/azure-netapp-files-create-volume-created.png)
+ ![Screenshot of volume creation confirmation.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-create-volume-created.png)
# [PowerShell](#tab/azure-powershell)
When you are done and if you want to, you can delete the resource group. The act
2. In the list of subscriptions, select the resource group (myRG1) you want to delete.
- ![Screenshot of the resource groups menu.](../media/azure-netapp-files/azure-netapp-files-azure-navigate-to-resource-groups.png)
+ ![Screenshot of the resource groups menu.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-azure-navigate-to-resource-groups.png)
3. In the resource group page, select **Delete resource group**.
- ![Screenshot that highlights the Delete resource group button.](../media/azure-netapp-files/azure-netapp-files-azure-delete-resource-group.png)
+ ![Screenshot that highlights the Delete resource group button.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-azure-delete-resource-group.png)
A window opens and displays a warning about the resources that will be deleted with the resource group. 4. Enter the name of the resource group (myRG1) to confirm that you want to permanently delete the resource group and all resources in it, and then select **Delete**.
- ![Screenshot showing confirmation of deleting resource group.](../media/azure-netapp-files/azure-netapp-files-azure-confirm-resource-group-deletion.png)
+ ![Screenshot showing confirmation of deleting resource group.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-azure-confirm-resource-group-deletion.png)
# [PowerShell](#tab/azure-powershell)
azure-netapp-files Azure Netapp Files Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-register.md
To use the Azure NetApp Files service, you need to register the NetApp Resource
1. From the Azure portal, click the Azure Cloud Shell icon on the upper right-hand corner:
- ![Azure Cloud Shell icon](../media/azure-netapp-files/azure-netapp-files-azure-cloud-shell.png)
+ ![Azure Cloud Shell icon](./media/azure-netapp-files-register/azure-netapp-files-azure-cloud-shell.png)
2. If you have multiple subscriptions on your Azure account, select the one that you want to configure for Azure NetApp Files:
To use the Azure NetApp Files service, you need to register the NetApp Resource
6. In the Subscriptions blade, click your subscription ID. 7. In the settings of the subscription, click **Resource providers** to verify that Microsoft.NetApp Provider indicates the Registered status:
- ![Registered Microsoft.NetApp](../media/azure-netapp-files/azure-netapp-files-registered-resource-providers.png)
+ ![Registered Microsoft.NetApp](./media/azure-netapp-files-register/azure-netapp-files-registered-resource-providers.png)
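The registration can also be performed and checked entirely from the CLI; a minimal sketch:

```bash
# Register the NetApp Resource Provider and confirm its state.
az provider register --namespace Microsoft.NetApp
az provider show --namespace Microsoft.NetApp \
  --query registrationState --output tsv
```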
## Next steps
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
1. From the NetApp Account view, go to **Capacity pools**, and select the capacity pool that you want to resize. 2. Right-click the capacity pool name or select the "…" icon at the end of the capacity pool row to display the context menu. Select **Resize**.
- ![Screenshot that shows pool context menu.](../media/azure-netapp-files/resize-pool-context-menu.png)
+ ![Screenshot that shows pool context menu.](./media/azure-netapp-files-resize-capacity-pools-or-volumes/resize-pool-context-menu.png)
3. In the Resize pool window, specify the pool size. Select **OK**.
- ![Screenshot that shows Resize pool window.](../media/azure-netapp-files/resize-pool-window.png)
+ ![Screenshot that shows Resize pool window.](./media/azure-netapp-files-resize-capacity-pools-or-volumes/resize-pool-window.png)
## Resize a volume using the Azure portal
You can change the size of a volume as necessary. A volume's capacity consumptio
1. From the NetApp Account view, go to **Volumes**, and select the volume that you want to resize. 2. Right-click the volume name or select the "…" icon at the end of the volume's row to display the context menu. Select **Resize**.
- ![Screenshot that shows volume context menu.](../media/azure-netapp-files/resize-volume-context-menu.png)
+ ![Screenshot that shows volume context menu.](./media/azure-netapp-files-resize-capacity-pools-or-volumes/resize-volume-context-menu.png)
3. In the Update volume quota window, specify the quota for the volume. Select **OK**.
- ![Screenshot that shows Update Volume Quota window.](../media/azure-netapp-files/resize-volume-quota-window.png)
+ ![Screenshot that shows Update Volume Quota window.](./media/azure-netapp-files-resize-capacity-pools-or-volumes/resize-volume-quota-window.png)
## Resizing the capacity pool or a volume using Azure CLI
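A hedged sketch of what the CLI calls for this section look like (resource names are placeholders; the pool size is assumed to be expressed in TiB and the volume quota in GiB):

```bash
# Resize a capacity pool to 8 TiB.
az netappfiles pool update \
  --resource-group myRG1 --account-name myaccount1 \
  --pool-name mypool1 --size 8

# Resize a volume quota to 200 GiB.
az netappfiles volume update \
  --resource-group myRG1 --account-name myaccount1 \
  --pool-name mypool1 --volume-name myvol1 \
  --usage-threshold 200
```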
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
You can create an Azure support request to increase the adjustable limits from t
2. For **Subscription**, select your subscription. 3. For **Quota Type**, select **Storage: Azure NetApp Files limits**.
- ![Screenshot that shows the Problem Description tab.](../media/azure-netapp-files/support-problem-descriptions.png)
+ ![Screenshot that shows the Problem Description tab.](./media/shared/support-problem-descriptions.png)
3. Under the **Additional details** tab, select **Enter details** in the Request Details field.
- ![Screenshot that shows the Details tab and the Enter Details field.](../media/azure-netapp-files/quota-additional-details.png)
+ ![Screenshot that shows the Details tab and the Enter Details field.](./media/shared/quota-additional-details.png)
4. To request limit increase, provide the following information in the Quota Details window that appears: 1. In **Quota Type**, select the type of resource you want to increase.
You can create an Azure support request to increase the adjustable limits from t
3. Enter a value to request an increase for the quota type you specified.
- ![Screenshot that shows how to display and request increase for regional quota.](../media/azure-netapp-files/quota-details-regional-request.png)
+ ![Screenshot that shows how to display and request increase for regional quota.](./media/azure-netapp-files-resource-limits/quota-details-regional-request.png)
5. Select **Save and continue**. Select **Review + create** to create the request.
azure-netapp-files Azure Netapp Files Service Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md
The throughput limit for a volume is determined by the combination of the follow
The following diagram shows throughput limit examples of volumes in an auto QoS capacity pool:
-![Service level illustration](../media/azure-netapp-files/azure-netapp-files-service-levels.png)
+![Service level illustration](./media/azure-netapp-files-service-levels/azure-netapp-files-service-levels.png)
* In Example 1, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 2 TiB of quota will be assigned a throughput limit of 128 MiB/s (2 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
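The auto QoS math is simply the volume quota multiplied by the service level's per-TiB throughput (16 MiB/s per TiB for Standard, 64 for Premium, 128 for Ultra); for instance:

```bash
echo "2 * 64" | bc    # 2 TiB quota on Premium => 128 MiB/s throughput limit
```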
For example, for an SAP HANA system, this capacity pool can be used to create th
The following diagram illustrates the scenarios for the SAP HANA volumes:
-![QoS SAP HANA volume scenarios](../media/azure-netapp-files/qos-sap-hana-volume-scenarios.png)
+![QoS SAP HANA volume scenarios](./media/azure-netapp-files-service-levels/qos-sap-hana-volume-scenarios.png)
## Next steps
azure-netapp-files Azure Netapp Files Set Up Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md
Creating a capacity pool enables you to create volumes within it.
1. Go to the management blade for your NetApp account, and then, from the navigation pane, click **Capacity pools**.
- ![Navigate to capacity pool](../media/azure-netapp-files/azure-netapp-files-navigate-to-capacity-pool.png)
+ ![Navigate to capacity pool](./media/azure-netapp-files-set-up-capacity-pool/azure-netapp-files-navigate-to-capacity-pool.png)
2. Select **+ Add pools** to create a new capacity pool. The New Capacity Pool window appears.
Creating a capacity pool enables you to create volumes within it.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot showing the New Capacity Pool window.":::
+ :::image type="content" source="./media/shared/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot showing the New Capacity Pool window.":::
4. Select **Create**.
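The same capacity pool can be created from the Azure CLI; a minimal sketch (names, the region, and the 4-TiB size are placeholders):

```bash
az netappfiles pool create \
  --resource-group myRG1 \
  --account-name myaccount1 \
  --pool-name mypool1 \
  --location eastus \
  --size 4 \
  --service-level Premium
```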
azure-netapp-files Azure Netapp Files Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-smb-performance.md
With SMB Multichannel disabled on the client, pure 4 KiB read and write tests we
The command `netstat -na | findstr 445` proved that additional connections were established with increments from `1` to `4` to `8` and to `16`. Four CPU cores were fully utilized for SMB during each test, as confirmed by the perfmon `Per Processor Network Activity Cycles` statistic (not included in this article).
-![Chart that shows random I/O comparison of SMB Multichannel.](../media/azure-netapp-files/azure-netapp-files-random-io-tests.png)
+![Chart that shows random I/O comparison of SMB Multichannel.](./media/azure-netapp-files-smb-performance/azure-netapp-files-random-io-tests.png)
The Azure virtual machine does not affect SMB (nor NFS) storage I/O limits. As shown in the following chart, the D32ds instance type has a limited rate of 308,000 for cached storage IOPS and 51,200 for uncached storage IOPS. However, the graph above shows significantly more I/O over SMB.
-![Chart that shows random I/O comparison test.](../media/azure-netapp-files/azure-netapp-files-random-io-tests-list.png)
+![Chart that shows random I/O comparison test.](./media/azure-netapp-files-smb-performance/azure-netapp-files-random-io-tests-list.png)
#### Sequential I/O

Tests similar to the random I/O tests described previously were performed with 64-KiB sequential I/O. Although increases in the client connection count per RSS network interface beyond 4 had no noticeable effect on random I/O, the same does not apply to sequential I/O. As the following graph shows, each increase is associated with a corresponding increase in read throughput. Write throughput remained flat because of the network bandwidth restrictions that Azure places on each instance type and size.
-![Chart that shows throughput test comparison.](../media/azure-netapp-files/azure-netapp-files-sequential-io-tests.png)
+![Chart that shows throughput test comparison.](./media/azure-netapp-files-smb-performance/azure-netapp-files-sequential-io-tests.png)
Azure places network rate limits on each virtual machine type/size. The rate limit is imposed on outbound traffic only. The number of NICs present on a virtual machine has no bearing on the total amount of bandwidth available to the machine. For example, the D32ds instance type has an imposed network limit of 16,000 Mbps (2,000 MiB/s). As the sequential graph above shows, the limit affects the outbound traffic (writes) but not multichannel reads.
-![Chart that shows sequential I/O comparison test.](../media/azure-netapp-files/azure-netapp-files-sequential-io-tests-list.png)
+![Chart that shows sequential I/O comparison test.](./media/azure-netapp-files-smb-performance/azure-netapp-files-sequential-io-tests-list.png)
## SMB Signing
SMB Signing is supported for all SMB protocol versions that are supported by Azu
SMB Signing has a deleterious effect upon SMB performance. Among other potential causes of the performance degradation, the digital signing of each packet consumes additional client-side CPU as the perfmon output below shows. In this case, Core 0 appears responsible for SMB, including SMB Signing. A comparison with the non-multichannel sequential read throughput numbers in the previous section shows that SMB Signing reduces overall throughput from 875 MiB/s to approximately 250 MiB/s.
-![Chart that shows SMB Signing performance impact.](../media/azure-netapp-files/azure-netapp-files-smb-signing-performance.png)
+![Chart that shows SMB Signing performance impact.](./media/azure-netapp-files-smb-performance/azure-netapp-files-smb-signing-performance.png)
## Performance for a single instance with a 1-TB dataset
To provide more detailed insight into workloads with read/write mixes, the follo
The following chart shows the results for 4k random I/O, with a single VM instance and a read/write mix at 10% intervals:
-![Chart that shows Windows 2019 standard _D32ds_v4 4K random IO test.](../media/azure-netapp-files/smb-performance-standard-4k-random-io.png)
+![Chart that shows Windows 2019 standard _D32ds_v4 4K random IO test.](./media/azure-netapp-files-smb-performance/smb-performance-standard-4k-random-io.png)
The following chart shows the results for sequential I/O:
-![Chart that shows Windows 2019 standard _D32ds_v4 64K sequential throughput.](../media/azure-netapp-files/smb-performance-standard-64k-throughput.png)
+![Chart that shows Windows 2019 standard _D32ds_v4 64K sequential throughput.](./media/azure-netapp-files-smb-performance/smb-performance-standard-64k-throughput.png)
## Performance when scaling out using 5 VMs with a 1-TB dataset
These tests with 5 VMs use the same testing environment as the single VM, with e
The following chart shows the results for random I/O:
-![Chart that shows Windows 2019 standard _D32ds_v4 4K 5-instance randio IO test.](../media/azure-netapp-files/smb-performance-standard-4k-random-io-5-instances.png)
+![Chart that shows Windows 2019 standard _D32ds_v4 4K 5-instance randio IO test.](./media/azure-netapp-files-smb-performance/smb-performance-standard-4k-random-io-5-instances.png)
The following chart shows the results for sequential I/O:
-![Chart that shows Windows 2019 standard _D32ds_v4 64K 5-instance sequential throughput.](../media/azure-netapp-files/smb-performance-standard-64k-throughput-5-instances.png)
+![Chart that shows Windows 2019 standard _D32ds_v4 64K 5-instance sequential throughput.](./media/azure-netapp-files-smb-performance/smb-performance-standard-64k-throughput-5-instances.png)
## How to monitor Hyper-V ethernet adapters
One strategy used in testing with FIO is to set `numjobs=16`. Doing so forks eac
You can check for activity on each of the adapters in Windows Performance Monitor by selecting **Performance Monitor > Add Counters > Network Interface > Microsoft Hyper-V Network Adapter**.
-![Screenshot that shows Performance Monitor Add Counter interface.](../media/azure-netapp-files/smb-performance-performance-monitor-add-counter.png)
+![Screenshot that shows Performance Monitor Add Counter interface.](./media/azure-netapp-files-smb-performance/smb-performance-performance-monitor-add-counter.png)
After you have data traffic running in your volumes, you can monitor your adapters in Windows Performance Monitor. If you do not use all of these 16 virtual adapters, you might not be maximizing your network bandwidth capacity.
-![Screenshot that shows Performance Monitor output.](../media/azure-netapp-files/smb-performance-performance-monitor-output.png)
+![Screenshot that shows Performance Monitor output.](./media/azure-netapp-files-smb-performance/smb-performance-performance-monitor-output.png)
## SMB encryption
With SMB Multichannel enabled, an SMB3 client establishes multiple TCP connectio
To see if your Azure virtual machine NICs support RSS, run the command `Get-SmbClientNetworkInterface` as follows and check the field `RSS Capable`:
-![Screenshot that shows RSS output for Azure virtual machine.](../media/azure-netapp-files/azure-netapp-files-formance-rss-support.png)
+![Screenshot that shows RSS output for Azure virtual machine.](./media/azure-netapp-files-smb-performance/azure-netapp-files-formance-rss-support.png)
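The same check can be scripted with the cmdlet named above; a brief sketch:

```powershell
# List SMB client network interfaces; RssCapable indicates RSS support.
Get-SmbClientNetworkInterface |
    Select-Object FriendlyName, InterfaceIndex, RssCapable, RdmaCapable |
    Format-Table -AutoSize
```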
## Multiple NICs on SMB clients
You should not configure multiple NICs on your client for SMB. The SMB client wi
As the output of `Get-SmbClientNetworkInterface` below shows, the virtual machine has two network interfaces, 15 and 12. As shown under the following command `Get-SmbMultichannelConnection`, even though there are two RSS-capable NICs, only interface 12 is used in connection with the SMB share; interface 15 is not in use.
-![Screeshot that shows output for RSS-capable NICS.](../media/azure-netapp-files/azure-netapp-files-rss-capable-nics.png)
+![Screenshot that shows output for RSS-capable NICs.](./media/azure-netapp-files-smb-performance/azure-netapp-files-rss-capable-nics.png)
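To see which interface SMB multichannel actually selected for an established connection, you can query the companion cmdlets directly; a small sketch:

```powershell
# Show the client/server interface pairs chosen for current SMB connections.
Get-SmbMultichannelConnection | Format-Table -AutoSize

# Per-connection detail, including the server and share being reached.
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, NumOpens
```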
## Next steps
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
Azure NetApp Files' integration with Azure native services like Azure Kubernet
The following diagram depicts the categorization of reference architectures, blueprints and solutions on this page as laid out in the above introduction: **Azure NetApp Files key use cases** In summary, Azure NetApp Files is a versatile and scalable storage service that provides an ideal platform for migrating various workload categories, running specialized workloads, and integrating with Azure native services. Azure NetApp Files' high-performance, security, and scalability features make it a reliable choice for businesses looking to run their applications and workloads in Azure.
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
Before creating a volume in Azure NetApp Files, you must purchase and set up a p
## <a name="conceptual_diagram_of_storage_hierarchy"></a>Conceptual diagram of storage hierarchy The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes. ## <a name="azure_netapp_files_account"></a>NetApp accounts
azure-netapp-files Backup Configure Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-manual.md
If you haven't done so, enable the backup functionality for the volume before
3. In the Configure Backup page, toggle the **Enabled** setting to **On**. 4. Select **OK**.
-![Screenshot that shows the Enabled setting of Configure Backups window.](../media/azure-netapp-files/backup-configure-enabled.png)
+![Screenshot that shows the Enabled setting of Configure Backups window.](./media/shared/backup-configure-enabled.png)
## Create a manual backup for a volume
If you haven't done so, enable the backup functionality for the volume before
When you create a manual backup, a snapshot is also created on the volume using the same name you specified for the backup. This snapshot represents the current state of the active file system. It is transferred to Azure storage. Once the backup completes, the manual backup entry appears in the list of backups for the volume.
-![Screenshot that shows the New Backup window.](../media/azure-netapp-files/backup-new.png)
+![Screenshot that shows the New Backup window.](./media/backup-configure-manual/backup-new.png)
## Next steps
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-policy-based.md
To enable a policy-based (scheduled) backup:
2. Select your Azure NetApp Files account. 3. Select **Backups**.
- <!-- :::image type="content" source="../media/azure-netapp-files/backup-navigate.png" alt-text="Screenshot that shows how to navigate to Backups option." lightbox="../media/azure-netapp-files/backup-navigate.png"::: -->
+ <!-- :::image type="content" source="./media/backup-configure-policy-based/backup-navigate.png" alt-text="Screenshot that shows how to navigate to Backups option." lightbox="./media/backup-configure-policy-based/backup-navigate.png"::: -->
4. Select **Backup Policies**. 5. Select **Add**.
To enable a policy-based (scheduled) backup:
The minimum value for **Daily Backups to Keep** is 2.
- :::image type="content" source="../media/azure-netapp-files/backup-policy-window-daily.png" alt-text="Screenshot that shows the Backup Policy window." lightbox="../media/azure-netapp-files/backup-policy-window-daily.png":::
+ :::image type="content" source="./media/backup-configure-policy-based/backup-policy-window-daily.png" alt-text="Screenshot that shows the Backup Policy window." lightbox="./media/backup-configure-policy-based/backup-policy-window-daily.png":::
### Example of a valid configuration
To enable the backup functionality for a volume:
The Vault information is prepopulated.
- :::image type="content" source="../media/azure-netapp-files/backup-configure-enabled.png" alt-text="Screenshot showing Configure Backups window." lightbox="../media/azure-netapp-files/backup-configure-enabled.png":::
+ :::image type="content" source="./media/shared/backup-configure-enabled.png" alt-text="Screenshot showing Configure Backups window." lightbox="./media/shared/backup-configure-enabled.png":::
## Next steps
azure-netapp-files Backup Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-delete.md
If you need to delete backups to free up space, select an older backup from the
2. Navigate to **Backups**. 3. From the backup list, select the backup to delete. Click the three dots (`…`) to the right of the backup, then click **Delete** from the Action menu.
- ![Screenshot that shows the Delete menu for backups.](../media/azure-netapp-files/backup-action-menu-delete.png)
+ ![Screenshot that shows the Delete menu for backups.](./media/backup-delete/backup-action-menu-delete.png)
## Next steps
azure-netapp-files Backup Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-disable.md
If a volume is deleted but the backup policy wasn't disabled before the volume
3. Select **Configure**. 4. In the Configure Backups page, toggle the **Enabled** setting to **Off**. Enter the volume name to confirm, and click **OK**.
- ![Screenshot that shows the Restore to with Configure Backups window with backup disabled.](../media/azure-netapp-files/backup-configure-backups-disable.png)
+![Screenshot that shows the Configure Backups window with backup disabled.](./media/backup-disable/backup-configure-backups-disable.png)
## Next steps
azure-netapp-files Backup Manage Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-manage-policies.md
To modify the backup policy settings:
2. Select **Backup Policies** then select the three dots (`…`) to the right of a backup policy. Select **Edit**.
- :::image type="content" source="../media/azure-netapp-files/backup-policies-edit.png" alt-text="Screenshot that shows context sensitive menu of Backup Policies." lightbox="../media/azure-netapp-files/backup-policies-edit.png":::
+ :::image type="content" source="./media/backup-manage-policies/backup-policies-edit.png" alt-text="Screenshot that shows context sensitive menu of Backup Policies." lightbox="./media/backup-manage-policies/backup-policies-edit.png":::
3. In the Modify Backup Policy window, update the number of backups you want to keep for daily, weekly, and monthly backups. Enter the backup policy name to confirm the action. Click **Save**.
- :::image type="content" source="../media/azure-netapp-files/backup-modify-policy.png" alt-text="Screenshot showing the Modify Backup Policy window." lightbox="../media/azure-netapp-files/backup-modify-policy.png":::
+ :::image type="content" source="./media/backup-manage-policies/backup-modify-policy.png" alt-text="Screenshot showing the Modify Backup Policy window." lightbox="./media/backup-manage-policies/backup-modify-policy.png":::
> [!NOTE] > After backups are enabled and have taken effect for the scheduled frequency, you cannot change the backup retention count to `0`. A minimum number of `1` retention is required for the backup policy. See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) for details.
A backup policy can be suspended so that it does not perform any new backup oper
1. Toggle **Policy State** to **Disabled**, enter the policy name to confirm, and click **Save**.
- ![Screenshot that shows the Modify Backup Policy window with Policy State disabled.](../media/azure-netapp-files/backup-modify-policy-disabled.png)
+ ![Screenshot that shows the Modify Backup Policy window with Policy State disabled.](./media/backup-manage-policies/backup-modify-policy-disabled.png)
### Suspend a backup policy for a specific volume
A backup policy can be suspended so that it does not perform any new backup oper
3. Select **Configure**. 4. In the Configure Backups page, toggle **Policy State** to **Suspend**, enter the volume name to confirm, and click **OK**.
- ![Screenshot that shows the Configure Backups window with the Suspend Policy State.](../media/azure-netapp-files/backup-modify-policy-suspend.png)
+ ![Screenshot that shows the Configure Backups window with the Suspend Policy State.](./media/backup-manage-policies/backup-modify-policy-suspend.png)
## Next steps
azure-netapp-files Backup Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md
See [Requirements and considerations for Azure NetApp Files backup](backup-requi
2. From the backup list, select the backup to restore. Select the three dots (`…`) to the right of the backup, then select **Restore to new volume** from the Action menu.
- :::image type="content" source="../media/azure-netapp-files/backup-restore-new-volume.png" alt-text="Screenshot of selecting restore backup to new volume." lightbox="../media/azure-netapp-files/backup-restore-new-volume.png":::
+ :::image type="content" source="./media/backup-restore-new-volume/backup-restore-new-volume.png" alt-text="Screenshot of selecting restore backup to new volume." lightbox="./media/backup-restore-new-volume/backup-restore-new-volume.png":::
3. In the Create a Volume page that appears, provide information for the fields in the page as applicable, and select **Review + Create** to begin restoring the backup to a new volume.
See [Requirements and considerations for Azure NetApp Files backup](backup-requi
* The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation fails.
- ![Screenshot that shows the Create a Volume page.](../media/azure-netapp-files/backup-restore-create-volume.png)
+ ![Screenshot that shows the Create a Volume page.](./media/backup-restore-new-volume/backup-restore-create-volume.png)
4. The Volumes page displays the new volume. In the Volumes page, the **Originated from** field identifies the name of the snapshot used to create the volume.
azure-netapp-files Backup Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-search.md
If a volume is deleted, its backups are still retained. The backups are listed i
A partial search is supported; you don't have to specify the entire backup name. The search filters the backups based on the search string.
- :::image type="content" source="../media/azure-netapp-files/backup-search-vault.png" alt-text="Screenshot that shows a list of backups in a vault." lightbox="../media/azure-netapp-files/backup-search-vault.png":::
+ :::image type="content" source="./media/backup-search/backup-search-vault.png" alt-text="Screenshot that shows a list of backups in a vault." lightbox="./media/backup-search/backup-search-vault.png":::
## Search backups at volume level
You can display and search backups at the volume level:
A partial search is supported; you don't have to specify the entire backup name. The search filters the backups based on the search string.
- :::image type="content" source="../media/azure-netapp-files/backup-search-volume-level.png" alt-text="Screenshot that shows a list of backup for a volume." lightbox="../media/azure-netapp-files/backup-search-volume-level.png":::
+ :::image type="content" source="./media/backup-search/backup-search-volume-level.png" alt-text="Screenshot that shows a list of backup for a volume." lightbox="./media/backup-search/backup-search-volume-level.png":::
## Next steps
azure-netapp-files Backup Vault Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-vault-manage.md
Although it's possible to create multiple backup vaults in your Azure NetApp Fil
1. Select **+ Add Backup Vault**. Assign a name to your backup vault then select **Create**.
- :::image type="content" source="../media/azure-netapp-files/backup-vault-create.png" alt-text="Screenshot of backup vault creation." lightbox="../media/azure-netapp-files/backup-vault-create.png":::
+ :::image type="content" source="./media/backup-vault-manage/backup-vault-create.png" alt-text="Screenshot of backup vault creation." lightbox="./media/backup-vault-manage/backup-vault-create.png":::
## Migrate backups to a backup vault
If you have existing backups, you must migrate them to a backup vault before you
If there are backups from volumes that have been deleted that you want to migrate, select **Include backups from Deleted Volumes**. This option will only be enabled if backups from deleted volumes are present.
- :::image type="content" source="../media/azure-netapp-files/backup-vault-assign.png" alt-text="Screenshot of backup vault assignment." lightbox="../media/azure-netapp-files/backup-vault-assign.png":::
+ :::image type="content" source="./media/backup-vault-manage/backup-vault-assign.png" alt-text="Screenshot of backup vault assignment." lightbox="./media/backup-vault-manage/backup-vault-assign.png":::
1. Navigate to the **Backup Vault** menu to view and manage your backups.
If you have existing backups, you must migrate them to a backup vault before you
1. Navigate to the **Backup Vault** menu. 1. Identify the backup vault you want to delete and select the three dots `...` next to the backup vault's name. Select **Delete**.
- :::image type="content" source="../media/azure-netapp-files/backup-vault-delete.png" alt-text="Screenshot of deleting a backup vault." lightbox="../media/azure-netapp-files/backup-vault-delete.png":::
+ :::image type="content" source="./media/backup-vault-manage/backup-vault-delete.png" alt-text="Screenshot of deleting a backup vault." lightbox="./media/backup-vault-manage/backup-vault-delete.png":::
## Next steps
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Customer-managed keys for Azure NetApp Files volume encryption enable you to use
The following diagram demonstrates how customer-managed keys work with Azure NetApp Files: 1. Azure NetApp Files grants permissions to encryption keys to a managed identity. The managed identity is either a user-assigned managed identity that you create and manage or a system-assigned managed identity associated with the NetApp account. 2. You configure encryption with a customer-managed key for the NetApp account.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
The **Encryption** page enables you to manage encryption settings for your NetApp account. It includes an option to let you set your NetApp account to use your own encryption key, which is stored in [Azure Key Vault](../key-vault/general/basic-concepts.md). This setting provides a system-assigned identity to the NetApp account, and it adds an access policy for the identity with the required key permissions.
- :::image type="content" source="../media/azure-netapp-files/encryption-menu.png" alt-text="Screenshot of the encryption menu." lightbox="../media/azure-netapp-files/encryption-menu.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/encryption-menu.png" alt-text="Screenshot of the encryption menu." lightbox="./media/configure-customer-managed-keys/encryption-menu.png":::
1. When you set your NetApp account to use customer-managed key, you have two ways to specify the Key URI: * The **Select from key vault** option allows you to select a key vault and a key.
- :::image type="content" source="../media/azure-netapp-files/select-key.png" alt-text="Screenshot of the select a key interface." lightbox="../media/azure-netapp-files/select-key.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/select-key.png" alt-text="Screenshot of the select a key interface." lightbox="./media/configure-customer-managed-keys/select-key.png":::
* The **Enter key URI** option allows you to manually enter the key URI.
- :::image type="content" source="../media/azure-netapp-files/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="../media/azure-netapp-files/key-enter-uri.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="./media/configure-customer-managed-keys/key-enter-uri.png":::
1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, both options are available. Otherwise, only the user-assigned option is available. * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically with the following process: A system-assigned identity is added to your NetApp account. An access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, and Decrypt.
- :::image type="content" source="../media/azure-netapp-files/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="../media/azure-netapp-files/encryption-system-assigned.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="./media/configure-customer-managed-keys/encryption-system-assigned.png":::
* If you choose **User-assigned**, you must select an identity. Choose **Select an identity** to open a context pane where you select a user-assigned managed identity.
- :::image type="content" source="../media/azure-netapp-files/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="../media/azure-netapp-files/encryption-user-assigned.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="./media/configure-customer-managed-keys/encryption-user-assigned.png":::
If you've configured your Azure Key Vault to use Vault access policy, the Azure portal configures the NetApp account automatically with the following process: The user-assigned identity you select is added to your NetApp account. An access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, Decrypt.
You can use an Azure Key Vault that is configured to use Azure role-based access
1. In your Azure account, navigate to **Key vaults** then **Access policies**. 1. To create an access policy, under **Permission model**, select **Azure role-based access-control**.
- :::image type="content" source="../media/azure-netapp-files/rbac-permission.png" alt-text="Screenshot of access configuration menu." lightbox="../media/azure-netapp-files/rbac-permission.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/rbac-permission.png" alt-text="Screenshot of access configuration menu." lightbox="./media/configure-customer-managed-keys/rbac-permission.png":::
1. When creating the user-assigned role, there are three permissions required for customer-managed keys: 1. `Microsoft.KeyVault/vaults/keys/read` 1. `Microsoft.KeyVault/vaults/keys/encrypt/action`
You can use an Azure Key Vault that is configured to use Azure role-based access
1. Once the custom role is created and available to use with the key vault, you apply it to the user-assigned identity.
- :::image type="content" source="../media/azure-netapp-files/rbac-review-assign.png" alt-text="Screenshot of RBAC review and assign menu." lightbox="../media/azure-netapp-files/rbac-review-assign.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/rbac-review-assign.png" alt-text="Screenshot of RBAC review and assign menu." lightbox="./media/configure-customer-managed-keys/rbac-review-assign.png":::
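Once the custom role exists, the assignment itself can also be scripted. A hedged Az PowerShell sketch; the resource group, identity, role, and key vault names below are placeholders:

```powershell
# Get the user-assigned managed identity that the NetApp account will use.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName 'anf-rg' -Name 'anf-cmk-identity'
$vault    = Get-AzKeyVault -VaultName 'anf-cmk-kv'

# Assign the custom role (with the key read/encrypt/decrypt actions) scoped to the key vault.
New-AzRoleAssignment -ObjectId $identity.PrincipalId `
    -RoleDefinitionName 'ANF CMK Key Access' `
    -Scope $vault.ResourceId
```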
## Create an Azure NetApp Files volume using customer-managed keys
You can use an Azure Key Vault that is configured to use Azure role-based access
You must also select a key vault private endpoint. The dropdown menu displays private endpoints in the selected virtual network. If there's no private endpoint for your key vault in the selected virtual network, the dropdown is empty and you can't proceed. In that case, see [Azure Private Endpoint](../private-link/private-endpoint-overview.md).
- :::image type="content" source="../media/azure-netapp-files/keys-create-volume.png" alt-text="Screenshot of create volume menu." lightbox="../media/azure-netapp-files/keys-create-volume.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/keys-create-volume.png" alt-text="Screenshot of create volume menu." lightbox="./media/configure-customer-managed-keys/keys-create-volume.png":::
1. Continue to complete the volume creation process. Refer to: * [Create an NFS volume](azure-netapp-files-create-volumes.md)
You can use an Azure Key Vault that is configured to use Azure role-based access
If you have already configured your NetApp account for customer-managed keys and have one or more volumes encrypted with customer-managed keys, you can change the key that is used to encrypt all volumes under the NetApp account. You can select any key that is in the same key vault. Changing key vaults isn't supported. 1. Under your NetApp account, navigate to the **Encryption** menu. Under the **Current key** input field, select the **Rekey** link. 1. In the **Rekey** menu, select one of the available keys from the dropdown menu. The chosen key must be different from the current key. 1. Select **OK** to save. The rekey operation may take several minutes.
azure-netapp-files Configure Kerberos Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-kerberos-encryption.md
The following requirements apply to NFSv4.1 client encryption:
> [!IMPORTANT] > You cannot modify the Kerberos enablement selection after the volume is created.
- ![Create NFSv4.1 Kerberos volume](../media/azure-netapp-files/create-kerberos-volume.png)
+ ![Create NFSv4.1 Kerberos volume](./media/configure-kerberos-encryption/create-kerberos-volume.png)
2. Select **Export Policy** to match the desired level of access and security option (Kerberos 5, Kerberos 5i, or Kerberos 5p) for the volume.
The following requirements apply to NFSv4.1 client encryption:
AD Server and KDC IP can be the same server. This information is used to create the SPN computer account used by Azure NetApp Files. After the computer account is created, Azure NetApp Files will use DNS Server records to locate additional KDC servers as needed.
- ![Kerberos Realm](../media/azure-netapp-files/kerberos-realm.png)
+ ![Kerberos Realm](./media/configure-kerberos-encryption/kerberos-realm.png)
3. Click **Join** to save the configuration.
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
For example:
- ![Mount instructions for Kerberos volumes](../media/azure-netapp-files/mount-instructions-kerberos-volume.png)
+ ![Mount instructions for Kerberos volumes](./media/configure-kerberos-encryption/mount-instructions-kerberos-volume.png)
3. Create the directory (mount point) for the new volume.
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
The following information is passed to the server in the query:
>[!NOTE] >If the POSIX attributes are not set up correctly, user and group lookup operations may fail, and users may be squashed to `nobody` when accessing NFS volumes.
- ![Screenshot of Multi-valued String Editor that shows multiple values specified for Object Class.](../media/azure-netapp-files/multi-valued-string-editor.png)
+ ![Screenshot of Multi-valued String Editor that shows multiple values specified for Object Class.](./media/shared/multi-valued-string-editor.png)
You can manage POSIX attributes by using the Active Directory Users and Computers MMC snap-in. The following example shows the Active Directory Attribute Editor. See [Access Active Directory Attribute Editor](create-volumes-dual-protocol.md#access-active-directory-attribute-editor) for details.
- ![Active Directory Attribute Editor](../media/azure-netapp-files/active-directory-attribute-editor.png)
+ ![Active Directory Attribute Editor](./media/shared/active-directory-attribute-editor.png)
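If you'd rather script the POSIX attributes than set them one by one in Attribute Editor, the Active Directory PowerShell module can write them directly. A hedged sketch: the user, group, and values are hypothetical, and the attribute names assume the standard RFC 2307 schema in AD DS.

```powershell
Import-Module ActiveDirectory

# Set UNIX identity attributes on an AD user used for NFS/LDAP lookups.
Set-ADUser -Identity 'nfsuser1' -Replace @{
    uidNumber         = 1001
    gidNumber         = 1001
    unixHomeDirectory = '/home/nfsuser1'
    loginShell        = '/bin/bash'
}

# Give the matching AD group a POSIX group ID.
Set-ADGroup -Identity 'nfsgroup1' -Replace @{ gidNumber = 1001 }
```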
4. If you want to configure an LDAP-integrated NFSv4.1 Linux client, see [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md).
The following information is passed to the server in the query:
6. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
- ![Screenshot that shows Create a Volume page with LDAP option.](../media/azure-netapp-files/create-nfs-ldap.png)
+ ![Screenshot that shows Create a Volume page with LDAP option.](./media/configure-ldap-extended-groups/create-nfs-ldap.png)
7. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows: 1. Select **Active Directory connections**. On an existing Active Directory connection, select the context menu (the three dots `…`), and select **Edit**. 2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option.
- ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
+ ![Screenshot that shows the Allow local NFS users with LDAP option](./media/shared/allow-local-nfs-users-with-ldap.png)
8. <a name="ldap-search-scope"></a>Optional - If you have large topologies, and you use the Unix security style with a dual-protocol volume or LDAP with extended groups, you can use the **LDAP Search Scope** option to avoid "access denied" errors on Linux clients for Azure NetApp Files.
The following information is passed to the server in the query:
* If a user is a member of more than 256 groups, only 256 groups will be listed. * Refer to [errors for LDAP volumes](troubleshoot-volumes.md#errors-for-ldap-volumes) if you run into errors.
- ![Screenshot that shows options related to LDAP Search Scope](../media/azure-netapp-files/ldap-search-scope.png)
+ ![Screenshot that shows options related to LDAP Search Scope](./media/configure-ldap-extended-groups/ldap-search-scope.png)
## Next steps
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md
If you do not have a root CA certificate, you need to generate one and export it
3. Export the root CA certificate. Root CA certificates can be exported from the Personal or Trusted Root Certification Authorities directory, as shown in the following examples:
- ![screenshot that shows personal certificates](../media/azure-netapp-files/personal-certificates.png)
- ![screenshot that shows trusted root certification authorities](../media/azure-netapp-files/trusted-root-certification-authorities.png)
+ ![screenshot that shows personal certificates](./media/configure-ldap-over-tls/personal-certificates.png)
+ ![screenshot that shows trusted root certification authorities](./media/configure-ldap-over-tls/trusted-root-certification-authorities.png)
Ensure that the certificate is exported in the Base-64 encoded X.509 (.CER) format:
- ![Certificate Export Wizard](../media/azure-netapp-files/certificate-export-wizard.png)
+ ![Certificate Export Wizard](./media/configure-ldap-over-tls/certificate-export-wizard.png)
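If you prefer to script the export, here is a hedged PowerShell sketch. The certificate subject and output paths are placeholders; `certutil -encode` is used to produce the Base-64 (.cer) encoding because `Export-Certificate` writes DER by default.

```powershell
# Find the root CA certificate in the local machine's trusted root store.
$rootCa = Get-ChildItem Cert:\LocalMachine\Root |
    Where-Object Subject -Like '*Contoso Root CA*' |
    Select-Object -First 1

# Export as DER, then re-encode as Base-64 X.509 for upload to the AD connection.
Export-Certificate -Cert $rootCa -FilePath 'C:\temp\root-ca.der' | Out-Null
certutil -encode 'C:\temp\root-ca.der' 'C:\temp\root-ca-base64.cer' | Out-Null
```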
## Enable LDAP over TLS and upload root CA certificate
If you do not have a root CA certificate, you need to generate one and export it
2. In the **Join Active Directory** or **Edit Active Directory** window that appears, select the **LDAP over TLS** checkbox to enable LDAP over TLS for the volume. Then select **Server root CA Certificate** and upload the [generated root CA certificate](#generate-and-export-root-ca-certificate) to use for LDAP over TLS.
- ![Screenshot that shows the LDAP over TLS option](../media/azure-netapp-files/ldap-over-tls-option.png)
+ ![Screenshot that shows the LDAP over TLS option](./media/configure-ldap-over-tls/ldap-over-tls-option.png)
Ensure that the certificate authority name can be resolved by DNS. This name is the "Issued By" or "Issuer" field on the certificate:
- ![Screenshot that shows certificate information](../media/azure-netapp-files/certificate-information.png)
+ ![Screenshot that shows certificate information](./media/configure-ldap-over-tls/certificate-information.png)
If you uploaded an invalid certificate, and you have existing AD configurations, SMB volumes, or Kerberos volumes, an error similar to the following occurs:
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
This section shows you how to set the network features option when you create a
The following screenshot shows a volume creation example for a region that supports the Standard network features capabilities:
- ![Screenshot that shows volume creation for Standard network features.](../media/azure-netapp-files/network-features-create-standard.png)
+ ![Screenshot that shows volume creation for Standard network features.](./media/configure-network-features/network-features-create-standard.png)
The following screenshot shows a volume creation example for a region that does *not* support the Standard network features capabilities:
- ![Screenshot that shows volume creation for Basic network features.](../media/azure-netapp-files/network-features-create-basic.png)
+ ![Screenshot that shows volume creation for Basic network features.](./media/configure-network-features/network-features-create-basic.png)
2. Before completing the volume creation process, you can display the specified network features setting in the **Review + Create** tab of the Create a Volume screen. Select **Create** to complete the volume creation.
- ![Screenshot that shows the Review and Create tab of volume creation.](../media/azure-netapp-files/network-features-review-create-tab.png)
+ ![Screenshot that shows the Review and Create tab of volume creation.](./media/configure-network-features/network-features-review-create-tab.png)
3. You can select **Volumes** to display the network features setting for each volume:
- [ ![Screenshot that shows the Volumes page displaying the network features setting.](../media/azure-netapp-files/network-features-volume-list.png)](../media/azure-netapp-files/network-features-volume-list.png#lightbox)
+ [ ![Screenshot that shows the Volumes page displaying the network features setting.](./media/configure-network-features/network-features-volume-list.png)](./media/configure-network-features/network-features-volume-list.png#lightbox)
## <a name="edit-network-features-option-for-existing-volumes"></a> Edit network features option for existing volumes (preview)
This feature doesn't currently have SDK support.
1. Select **Change network features**. 1. The **Edit network features** window displays the volumes that are in the same network sibling set. Confirm whether you want to modify the network features option.
- :::image type="content" source="../media/azure-netapp-files/edit-network-features.png" alt-text="Screenshot showing the Edit Network Features window." lightbox="../media/azure-netapp-files/edit-network-features.png":::
+ :::image type="content" source="./media/configure-network-features/edit-network-features.png" alt-text="Screenshot showing the Edit Network Features window." lightbox="./media/configure-network-features/edit-network-features.png":::
### Update Terraform-managed Azure NetApp Files volume from Basic to Standard
Updating the network features of your volume alters the underlying network sibli
The name of the state file in your Terraform module is `terraform.tfstate`. It contains the arguments and their values of all deployed resources in the module. The following `terraform.tfstate` example highlights the `network_features` argument with the value "Basic" for an Azure NetApp Files volume: Do _not_ manually update the `terraform.tfstate` file. Likewise, don't update the `network_features` argument in the `*.tf` and `*.tf.json` configuration files until you follow the steps outlined here; doing so would cause a mismatch between the arguments of the remote volume and the local configuration file representing that remote volume. When Terraform detects a mismatch between the arguments of remote resources and the local configuration files representing those remote resources, Terraform can destroy the remote resources and reprovision them with the arguments in the local configuration files. This can cause data loss in a volume.
Changing the network features for an Azure NetApp Files Volume can impact the ne
1. Select **Change network features**. Do **not** select **Save**. 1. Record the paths of the affected volumes, then select **Cancel**. All Terraform configuration files that define these volumes need to be updated, which means you need to find the Terraform configuration files that define them. The configuration files representing the affected volumes might not be in the same Terraform module.
You must modify the configuration files for each affected volume managed by Terr
1. Locate the affected Terraform-managed volumes' configuration files. 1. Add `ignore_changes = [network_features]` to the volume's `lifecycle` configuration block. If the `lifecycle` block doesn't exist in that volume's configuration, add it.
- :::image type="content" source="../media/azure-netapp-files/terraform-lifecycle.png" alt-text="Screenshot of the lifecycle configuration." lightbox="../media/azure-netapp-files/terraform-lifecycle.png":::
+ :::image type="content" source="./media/configure-network-features/terraform-lifecycle.png" alt-text="Screenshot of the lifecycle configuration." lightbox="./media/configure-network-features/terraform-lifecycle.png":::
1. Repeat for each affected Terraform-managed volume.
The `ignore_changes` feature is intended to be used when a resource's referenc
1. Select **Change network features**. 1. In the **Action** field, confirm that it reads **Change to Standard**.
- :::image type="content" source="../media/azure-netapp-files/change-network-features-standard.png" alt-text="Screenshot of confirm change of network features." lightbox="../media/azure-netapp-files/change-network-features-standard.png":::
+ :::image type="content" source="./media/configure-network-features/change-network-features-standard.png" alt-text="Screenshot of confirm change of network features." lightbox="./media/configure-network-features/change-network-features-standard.png":::
1. Select **Save**. 1. Wait until you receive a notification that the network features update has completed. In your **Notifications**, the message reads "Successfully updated network features. Network features for network sibling set have successfully updated to 'Standard'." 1. In the terminal, run `terraform plan` to view any potential changes. The output should indicate that the infrastructure matches the configuration with a message reading "No changes. Your infrastructure matches the configuration."
- :::image type="content" source="../media/azure-netapp-files/terraform-plan-output.png" alt-text="Screenshot of terraform plan command output." lightbox="../media/azure-netapp-files/terraform-plan-output.png":::
+ :::image type="content" source="./media/configure-network-features/terraform-plan-output.png" alt-text="Screenshot of terraform plan command output." lightbox="./media/configure-network-features/terraform-plan-output.png":::
>[!IMPORTANT] > As a safety precaution, execute `terraform plan` before executing `terraform apply`. The command `terraform plan` allows you to create a "plan" file, which contains the changes to your remote resources. This plan allows you to know if any of your affected volumes will be destroyed by running `terraform apply`.
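A minimal sketch of the plan-file workflow described in the note above, run from the module directory; only standard Terraform CLI commands are used, no resource-specific flags are assumed.

```powershell
# Write the plan to a file so it can be reviewed before anything is applied.
terraform plan -out=tfplan

# Inspect the saved plan; confirm no affected volume is marked for destroy/replace.
terraform show tfplan

# Apply exactly the reviewed plan.
terraform apply tfplan
```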
The `ignore_changes` feature is intended to be used when a resource's referenc
Observe the change in the value of the `network_features` argument in the `terraform.tfstate` files, which changed from "Basic" to "Standard":
- :::image type="content" source="../media/azure-netapp-files/updated-terraform-module.png" alt-text="Screenshot of updated Terraform module." lightbox="../media/azure-netapp-files/updated-terraform-module.png":::
+ :::image type="content" source="./media/configure-network-features/updated-terraform-module.png" alt-text="Screenshot of updated Terraform module." lightbox="./media/configure-network-features/updated-terraform-module.png":::
#### Update Terraform-managed Azure NetApp Files volumes' configuration file for configuration parity
Once you've updated the volumes' network features, you must also modify the `netw
1. In the configuration file, set `network_features` to "Standard" and remove the `ignore_changes = [network_features]` line from the `lifecycle` block.
- :::image type="content" source="../media/azure-netapp-files/terraform-network-features-standard.png" alt-text="Screenshot of Terraform module with Standard network features." lightbox="../media/azure-netapp-files/terraform-network-features-standard.png":::
+ :::image type="content" source="./media/configure-network-features/terraform-network-features-standard.png" alt-text="Screenshot of Terraform module with Standard network features." lightbox="./media/configure-network-features/terraform-network-features-standard.png":::
1. Repeat for each affected Terraform-managed volume. 1. Verify that the updated configuration files accurately represent the configuration of the remote resources by running `terraform plan`. Confirm the output reads "No changes."
azure-netapp-files Configure Unix Permissions Change Ownership Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-unix-permissions-change-ownership-mode.md
The change ownership mode (**`Chown Mode`**) functionality enables you to set th
The following example shows the Create a Volume screen for an NFS volume.
- ![Screenshots that shows the Create a Volume screen for NFS.](../media/azure-netapp-files/unix-permissions-create-nfs-volume.png)
+ ![Screenshots that shows the Create a Volume screen for NFS.](./media/configure-unix-permissions-change-ownership-mode/unix-permissions-create-nfs-volume.png)
2. For existing NFS or dual-protocol volumes, you can set or modify **Unix permissions** and **change ownership mode** as follows: 1. To modify Unix permissions, right-click the **volume**, and select **Edit**. In the Edit window that appears, specify a value for **Unix Permissions**.
- ![Screenshots that shows the Edit screen for Unix permissions.](../media/azure-netapp-files/unix-permissions-edit.png)
+ ![Screenshots that shows the Edit screen for Unix permissions.](./media/configure-unix-permissions-change-ownership-mode/unix-permissions-edit.png)
2. To modify the change ownership mode, click the **volume**, click **Export policy**, then modify the **`Chown Mode`** setting.
- ![Screenshots that shows the Export Policy screen.](../media/azure-netapp-files/chown-mode-edit.png)
+ ![Screenshots that shows the Export Policy screen.](./media/configure-unix-permissions-change-ownership-mode/chown-mode-edit.png)
## Next steps
azure-netapp-files Configure Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-virtual-wan.md
Refer to [What is Azure Virtual WAN?](../virtual-wan/virtual-wan-about.md) to le
The following diagram shows the concept of deploying Azure NetApp Files volume in one or more spokes of a Virtual WAN and accessing the volumes globally. This article will explain how to deploy and access an Azure NetApp Files volume over Virtual WAN.
Deploying Azure NetApp Files volume with Standard network features in a Virtual
This diagram shows routing traffic from on-premises to an Azure NetApp Files volume in a Virtual WAN spoke VNet via a Virtual WAN hub with a VPN gateway and an Azure Firewall deployed inside the virtual hub. To learn how to deploy Azure Firewall in a Virtual WAN hub, see [Configure Azure Firewall in a Virtual WAN hub](../virtual-wan/howto-firewall.md).
To force the Azure NetApp Files-bound traffic through Azure Firewall in the Virt
The following image of the Azure portal shows an example of a virtual hub's effective routes. In the first item, the IP address is listed as 10.2.0.5/32. The static routing entry's destination prefix is `<IP-Azure NetApp Files-Volume>/32`, and the next hop is `Azure-Firewall-in-hub`. > [!IMPORTANT] > Azure NetApp Files mounts use private IP addresses within a delegated [subnet](azure-netapp-files-network-topologies.md#subnets). The specific IP address entry is required, even if a CIDR range that contains the Azure NetApp Files volume IP address already points to the Azure Firewall as its next hop. For example, 10.2.0.5/32 should be listed even though 10.0.0.0/8 is listed with the Azure Firewall as the next hop.
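The same static route can also be added with Az PowerShell. This is only a sketch, assuming the Az.Network virtual hub cmdlets `New-AzVHubRoute`, `Get-AzVHubRouteTable`, and `Update-AzVHubRouteTable` are available in your environment; the resource group, hub name, and firewall resource ID are placeholders.

```powershell
# /32 route for the Azure NetApp Files volume IP, pointing at the Azure Firewall in the hub.
$anfRoute = New-AzVHubRoute -Name 'to-anf-volume' `
    -Destination @('10.2.0.5/32') -DestinationType 'CIDR' `
    -NextHop '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<fw-name>' `
    -NextHopType 'ResourceId'

# Append the route to the hub's default route table.
$routeTable = Get-AzVHubRouteTable -ResourceGroupName '<rg>' -VirtualHubName '<hub-name>' -Name 'defaultRouteTable'
Update-AzVHubRouteTable -ResourceGroupName '<rg>' -VirtualHubName '<hub-name>' -Name 'defaultRouteTable' `
    -Route ($routeTable.Route + $anfRoute)
```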
To identify the private IP address associated with your Azure NetApp Files volum
1. Navigate to the **Volumes** in your Azure NetApp Files subscription. 1. Identify the volume you're looking for. The private IP address associated with an Azure NetApp Files volume is listed as part of the mount path of the volume. ### Edit virtual hub effective routes
You can effect changes to a virtual hub's effective routes by adding routes expl
1. In the virtual hub, navigate to **Route Tables**. 1. Select the route table you want to edit.
- :::image type="content" source="../media/azure-netapp-files/virtual-hub-route-table.png" alt-text="Screenshot of virtual hub route table.":::
+ :::image type="content" source="./media/configure-virtual-wan/virtual-hub-route-table.png" alt-text="Screenshot of virtual hub route table.":::
1. Choose a **Route name** then add the **Destination prefix** and **Next hop**.
- :::image type="content" source="../media/azure-netapp-files/route-table-edit.png" alt-text="Screenshot of route table edits.":::
+ :::image type="content" source="./media/configure-virtual-wan/route-table-edit.png" alt-text="Screenshot of route table edits.":::
1. Save your changes. ## Next steps
azure-netapp-files Convert Nfsv3 Nfsv41 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/convert-nfsv3-nfsv41.md
This section shows you how to convert the NFSv3 volume to NFSv4.1.
2. Select **Edit**. 3. In the Edit window that appears, select **NFSv4.1** in the **Protocol type** pulldown.
- ![screenshot that shows the Edit menu with the Protocol Type field](../media/azure-netapp-files/edit-protocol-type.png)
+ ![screenshot that shows the Edit menu with the Protocol Type field](./media/convert-nfsv3-nfsv41/edit-protocol-type.png)
3. Wait for the conversion operation to complete.
This section shows you how to convert the NFSv4.1 volume to NFSv3.
2. Select **Edit**. 3. In the Edit window that appears, select **NFSv3** in the **Protocol type** pulldown.
- ![screenshot that shows the Edit menu with the Protocol Type field](../media/azure-netapp-files/edit-protocol-type.png)
+ ![screenshot that shows the Edit menu with the Protocol Type field](./media/convert-nfsv3-nfsv41/edit-protocol-type.png)
3. Wait for the conversion operation to complete.
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Azure NetApp Files supports three [service levels](azure-netapp-files-service-le
The following diagram illustrates an application with a volume enabled for cool access. In the initial write, data blocks are assigned a "warm" temperature value (in the diagram, red data blocks) and exist on the "hot" tier. As the data resides on the volume, a temperature scan monitors the activity of each block. When a data block is inactive, the temperature scan decreases the value of the block until it has been inactive for the number of days specified in the cooling period. The cooling period can be between 7 and 183 days; it has a default value of 31 days. Once marked "cold," the tiering scan collects blocks and packages them into 4-MB objects, which are moved to Azure storage fully transparently. To the application and users, those cool blocks still appear online. Tiered data appears to be online and continues to be available to users and applications by transparent and automated retrieval from the cool tier.
This test had a large dataset and ran several days starting the worst-case most-
The following chart shows a test that ran over 2.5 days on the 10-TB working dataset that has been 100% cooled and the buffers cleared (absolute worst-case aged data). ### 64k sequential-read test
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
1. From your NetApp account, select **Active Directory connections**, then select **Join**.
- ![Screenshot showing the Active Directory connections menu. The join button is highlighted.](../media/azure-netapp-files/azure-netapp-files-active-directory-connections.png)
+ ![Screenshot showing the Active Directory connections menu. The join button is highlighted.](./media/create-active-directory-connections/azure-netapp-files-active-directory-connections.png)
>[!NOTE] >Azure NetApp Files supports only one Active Directory connection within the same region and the same subscription.
Several features of Azure NetApp Files require that you have an Active Directory
If you're using Azure NetApp Files with Microsoft Entra Domain Services, the organizational unit path is `OU=AADDC Computers`
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-join-active-directory.png" alt-text="Screenshot of the Join Active Directory input fields.":::
+ :::image type="content" source="./media/create-active-directory-connections/azure-netapp-files-join-active-directory.png" alt-text="Screenshot of the Join Active Directory input fields.":::
* <a name="aes-encryption"></a>**AES Encryption** This option enables AES encryption authentication support for the admin account of the AD connection.
- ![Screenshot of the AES description field. The field is a checkbox.](../media/azure-netapp-files/active-directory-aes-encryption.png)
+ ![Screenshot of the AES description field. The field is a checkbox.](./media/create-active-directory-connections/active-directory-aes-encryption.png)
See [Requirements for Active Directory connections](#requirements-for-active-directory-connections) for requirements.
Several features of Azure NetApp Files require that you have an Active Directory
>[!NOTE] >DNS PTR records for the AD DS computer account(s) must be created in the AD DS **Organizational Unit** specified in the Azure NetApp Files AD connection for LDAP Signing to work.
- ![Screenshot of the LDAP signing checkbox.](../media/azure-netapp-files/active-directory-ldap-signing.png)
+ ![Screenshot of the LDAP signing checkbox.](./media/create-active-directory-connections/active-directory-ldap-signing.png)
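A PTR record for the Azure NetApp Files computer account can be added from PowerShell on the DNS server. A hedged sketch: the zone, host octet, and computer account name below are hypothetical, and the DnsServer module (DNS Server tools) is assumed to be installed.

```powershell
# Reverse-lookup (PTR) record for the Azure NetApp Files SMB computer account.
# Example: AD DS computer account ANF-1234 at 10.0.2.5, reverse zone 2.0.10.in-addr.arpa.
Add-DnsServerResourceRecordPtr -ZoneName '2.0.10.in-addr.arpa' `
    -Name '5' `
    -PtrDomainName 'anf-1234.contoso.com'
```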
* **Allow local NFS users with LDAP** This option enables local NFS client users to access NFS volumes. Setting this option disables extended groups for NFS volumes. It also limits the number of groups to 16. For more information, see [Allow local NFS users with LDAP to access a dual-protocol volume](create-volumes-dual-protocol.md#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume).
Several features of Azure NetApp Files require that you have an Active Directory
The **Group Membership Filter** option allows you to create a custom search filter for users who are members of specific AD DS groups.
- ![Screenshot of the LDAP search scope field, showing a checked box.](../media/azure-netapp-files/ldap-search-scope-checked.png)
+ ![Screenshot of the LDAP search scope field, showing a checked box.](./media/create-active-directory-connections/ldap-search-scope-checked.png)
See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
Several features of Azure NetApp Files require that you have an Active Directory
* <a name="backup-policy-users"></a> **Backup policy users** This option grants additional security privileges to AD DS domain users or groups that require elevated backup privileges to support backup, restore, and migration workflows in Azure NetApp Files. The specified AD DS user accounts or groups will have elevated NTFS permissions at the file or folder level.
- ![Screenshot of the Backup policy users field showing an empty text input field.](../media/azure-netapp-files/active-directory-backup-policy-users.png)
+ ![Screenshot of the Backup policy users field showing an empty text input field.](./media/create-active-directory-connections/active-directory-backup-policy-users.png)
The following privileges apply when you use the **Backup policy users** setting:
Several features of Azure NetApp Files require that you have an Active Directory
* **Security privilege users** <!-- SMB CA share feature --> This option grants security privilege (`SeSecurityPrivilege`) to AD DS domain users or groups that require elevated privileges to access Azure NetApp Files volumes. The specified AD DS users or groups will be allowed to perform certain actions on SMB shares that require security privilege not assigned by default to domain users.
- ![Screenshot showing the Security privilege users box of Active Directory connections window.](../media/azure-netapp-files/security-privilege-users.png)
+ ![Screenshot showing the Security privilege users box of Active Directory connections window.](./media/create-active-directory-connections/security-privilege-users.png)
The following privilege applies when you use the **Security privilege users** setting:
Several features of Azure NetApp Files require that you have an Active Directory
>[!NOTE] >The domain admins are automatically added to the Administrators privilege users group.
- ![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
+ ![Screenshot that shows the Administrators box of Active Directory connections window.](./media/create-active-directory-connections/active-directory-administrators.png)
The following privileges apply when you use the **Administrators privilege users** setting:
Several features of Azure NetApp Files require that you have an Active Directory
* Credentials, including your **username** and **password**
- ![Screenshot that shows Active Directory credentials fields showing username, password and confirm password fields.](../media/azure-netapp-files/active-directory-credentials.png)
+ ![Screenshot that shows Active Directory credentials fields showing username, password and confirm password fields.](./media/create-active-directory-connections/active-directory-credentials.png)
>[!IMPORTANT] >Although Active Directory supports 256-character passwords, Active Directory passwords with Azure NetApp Files **cannot** exceed 64 characters.
Several features of Azure NetApp Files require that you have an Active Directory
The Active Directory connection you created appears.
- ![Screenshot of the Active Directory connections menu showing a successfully created connection.](../media/azure-netapp-files/azure-netapp-files-active-directory-connections-created.png)
+ ![Screenshot of the Active Directory connections menu showing a successfully created connection.](./media/create-active-directory-connections/azure-netapp-files-active-directory-connections-created.png)
## <a name="shared_ad"></a>Map multiple NetApp accounts in the same subscription and region to an AD connection
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
### Steps 1. Navigate to the volume **Overview** menu. Select **Reset Active Directory Account**. Alternatively, navigate to the **Volumes** menu. Identify the volume for which you want to reset the Active Directory account and select the three dots (`...`) at the end of the row. Select **Reset Active Directory Account**. 2. A warning message that explains the implications of this action appears. Type **yes** in the text box to proceed. ## Next steps
azure-netapp-files Create Cross Zone Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-cross-zone-replication.md
This process requires that your account is subscribed to the [availability zone
> [!IMPORTANT] > Logical availability zones for the subscription without Azure NetApp Files presence are marked `(Unavailable)` and are greyed out.
- :::image type="content" source="../media/azure-netapp-files/create-volume-availability-zone.png" alt-text="Screenshot of the 'Create a Zone' menu requires you to select an availability zone." lightbox="../media/azure-netapp-files/create-volume-availability-zone.png":::
+ :::image type="content" source="./media/create-cross-zone-replication/create-volume-availability-zone.png" alt-text="Screenshot of the 'Create a Zone' menu requires you to select an availability zone." lightbox="./media/create-cross-zone-replication/create-volume-availability-zone.png":::
1. Follow the steps indicated in the interface to create the volume. The **Review + Create** page shows the selected availability zone you specified.
- :::image type="content" source="../media/azure-netapp-files/zone-replication-review-create.png" alt-text="Screenshot showing the need to confirm selection of correct availability zone in the Review and Create page." lightbox="../media/azure-netapp-files/zone-replication-review-create.png":::
+ :::image type="content" source="./media/create-cross-zone-replication/zone-replication-review-create.png" alt-text="Screenshot showing the need to confirm selection of correct availability zone in the Review and Create page." lightbox="./media/create-cross-zone-replication/zone-replication-review-create.png":::
1. After you create the volume, the **Volume Overview** page includes availability zone information for the volume.
- :::image type="content" source="../media/azure-netapp-files/zone-replication-volume-overview.png" alt-text="The selected availability zone will display when you create the volume." lightbox="../media/azure-netapp-files/zone-replication-volume-overview.png":::
+ :::image type="content" source="./media/create-cross-zone-replication/zone-replication-volume-overview.png" alt-text="The selected availability zone will display when you create the volume." lightbox="./media/create-cross-zone-replication/zone-replication-volume-overview.png":::
## Create the data replication volume in another availability zone of the same region
This process requires that your account is subscribed to the [availability zone
1. Create the data replication volume (the destination volume) _in another availability zone, but in the same region as the source volume_. In the **Basics** tab of the **Create a new protection volume** page, select an available availability zone. > [!IMPORTANT] > Logical availability zones for the subscription without Azure NetApp Files presence are marked `(Unavailable)` and are greyed out.
- :::image type="content" source="../media/azure-netapp-files/zone-replication-create-new-volume.png" alt-text="Select an availability zone for the cross-zone replication volume." lightbox="../media/azure-netapp-files/zone-replication-create-new-volume.png":::
+ :::image type="content" source="./media/create-cross-zone-replication/zone-replication-create-new-volume.png" alt-text="Select an availability zone for the cross-zone replication volume." lightbox="./media/create-cross-zone-replication/zone-replication-create-new-volume.png":::
## Complete cross-zone replication configuration
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
1. Click the **Volumes** blade from the Capacity Pools blade. Click **+ Add volume** to create a volume.
- ![Navigate to Volumes](../media/azure-netapp-files/azure-netapp-files-navigate-to-volumes.png)
+ ![Navigate to Volumes](./media/shared/azure-netapp-files-navigate-to-volumes.png)
2. In the Create a Volume window, click **Create**, and provide information for the following fields under the Basics tab: * **Volume name**
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
If you have not delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
- ![Create subnet](../media/azure-netapp-files/azure-netapp-files-create-subnet.png)
+ ![Create subnet](./media/shared/azure-netapp-files-create-subnet.png)
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
- ![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
+ ![Show advanced selection](./media/shared/volume-create-advanced-selection.png)
3. Click the **Protocol** tab, and then complete the following actions: * Select **Dual-protocol** as the protocol type for the volume.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
* Optionally, [configure export policy for the volume](azure-netapp-files-configure-export-policy.md).
- ![Specify dual-protocol](../media/azure-netapp-files/create-volume-protocol-dual.png)
+ ![Specify dual-protocol](./media/create-volumes-dual-protocol/create-volume-protocol-dual.png)
4. Click **Review + Create** to review the volume details. Then click **Create** to create the volume.
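The portal flow above can also be approximated with the Azure CLI. Treat the following as a sketch under assumptions: the resource names are hypothetical placeholders, an Active Directory connection must already exist on the NetApp account, and the parameters should be confirmed against `az netappfiles volume create --help` for your CLI version.

```bash
# Create a dual-protocol (NFSv3 + SMB) volume in an existing capacity pool.
# All names below are hypothetical placeholders.
az netappfiles volume create \
  --resource-group myRG \
  --account-name myNetAppAccount \
  --pool-name myPool \
  --name dualVol01 \
  --location eastus \
  --service-level Premium \
  --usage-threshold 100 \
  --file-path dualvol01 \
  --vnet myVNet \
  --subnet myDelegatedSubnet \
  --protocol-types NFSv3 CIFS
```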
The **Allow local NFS users with LDAP** option in Active Directory connections e
2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option.
- ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
+ ![Screenshot that shows the Allow local NFS users with LDAP option](./media/shared/allow-local-nfs-users-with-ldap.png)
## Manage LDAP POSIX Attributes
You can manage POSIX attributes such as UID, Home Directory, and other values by using the Active Directory Users and Computers MMC snap-in. The following example shows the Active Directory Attribute Editor:
-![Active Directory Attribute Editor](../media/azure-netapp-files/active-directory-attribute-editor.png)
+![Active Directory Attribute Editor](./media/shared/active-directory-attribute-editor.png)
You need to set the following attributes for LDAP users and LDAP groups: * Required attributes for LDAP users:
You need to set the following attributes for LDAP users and LDAP groups:
The values specified for `objectClass` are separate entries. For example, in Multi-valued String Editor, `objectClass` would have separate values (`user` and `posixAccount`) specified as follows for LDAP users:
-![Screenshot of Multi-valued String Editor that shows multiple values specified for Object Class.](../media/azure-netapp-files/multi-valued-string-editor.png)
+![Screenshot of Multi-valued String Editor that shows multiple values specified for Object Class.](./media/shared/multi-valued-string-editor.png)
Microsoft Entra Domain Services doesn't allow you to modify the objectClass POSIX attribute on users and groups created in the organizational AADDC Users OU. As a workaround, you can create a custom OU and create users and groups in the custom OU.
On a Windows system, you can access the Active Directory Attribute Editor as fol
1. Click **Start**, navigate to **Windows Administrative Tools**, and then click **Active Directory Users and Computers** to open the Active Directory Users and Computers window.
2. Click the domain name that you want to view, and then expand the contents.
3. To display the advanced Attribute Editor, enable the **Advanced Features** option in the Active Directory Users and Computers **View** menu.
- ![Screenshot that shows how to access the Attribute Editor Advanced Features menu.](../media/azure-netapp-files/attribute-editor-advanced-features.png)
+ ![Screenshot that shows how to access the Attribute Editor Advanced Features menu.](./media/create-volumes-dual-protocol/attribute-editor-advanced-features.png)
4. Double-click **Users** on the left pane to see the list of users.
5. Double-click a particular user to see its **Attribute Editor** tab.
azure-netapp-files Cross Region Replication Create Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-create-peering.md
Before you begin, ensure that you have reviewed the [requirements and considerat
You need to obtain the resource ID of the source volume that you want to replicate. 1. Go to the source volume, and select **Properties** under Settings to display the source volume resource ID.
- ![Locate source volume resource ID](../media/azure-netapp-files/cross-region-replication-source-volume-resource-id.png)
+ ![Locate source volume resource ID](./media/cross-region-replication-create-peering/cross-region-replication-source-volume-resource-id.png)
2. Copy the resource ID to the clipboard. You will need it later.
You can also select an existing NetApp account in a different region.
4. Create the data replication volume by selecting **Volumes** under Storage Service in the destination NetApp account. Then select the **+ Add data replication** button.
- ![Add data replication](../media/azure-netapp-files/cross-region-replication-add-data-replication.png)
+ ![Add data replication](./media/cross-region-replication-create-peering/cross-region-replication-add-data-replication.png)
5. In the Create a Volume page that appears, complete the following fields under the **Basics** tab: * Volume name
For the NFS protocol, ensure that the export policy rules satisfy the requiremen
8. Under the **Replication** tab, paste in the source volume resource ID that you obtained in [Locate the source volume resource ID](#locate-the-source-volume-resource-id), and then select the desired replication schedule. There are three options for the replication schedule: every 10 minutes, hourly, and daily.
- ![Create volume replication](../media/azure-netapp-files/cross-region-replication-create-volume-replication.png)
+ ![Create volume replication](./media/cross-region-replication-create-peering/cross-region-replication-create-volume-replication.png)
9. Select **Review + Create**, then select **Create** to create the data replication volume.
- ![Review and create replication](../media/azure-netapp-files/cross-region-replication-review-create-replication.png)
+ ![Review and create replication](./media/cross-region-replication-create-peering/cross-region-replication-review-create-replication.png)
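The destination-volume creation described above can also be sketched with the Azure CLI. This is an assumption-laden outline, not the documented procedure: all names and the resource ID are placeholders, and the replication flags should be checked against `az netappfiles volume create --help`.

```bash
# Create the destination (data protection) volume in the remote region and point
# it at the source volume's resource ID. Names and the ID are placeholders.
az netappfiles volume create \
  --resource-group myDestRG \
  --account-name myDestAccount \
  --pool-name myDestPool \
  --name destVol01 \
  --location westus2 \
  --service-level Standard \
  --usage-threshold 100 \
  --file-path destvol01 \
  --vnet myDestVNet \
  --subnet myDestDelegatedSubnet \
  --protocol-types NFSv3 \
  --volume-type DataProtection \
  --endpoint-type dst \
  --remote-volume-resource-id "/subscriptions/<subId>/resourceGroups/mySourceRG/providers/Microsoft.NetApp/netAppAccounts/mySourceAccount/capacityPools/mySourcePool/volumes/sourceVol01" \
  --replication-schedule _10minutely   # other schedule values: hourly, daily
```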
## Authorize replication from the source volume
To authorize the replication, you need to obtain the resource ID of the replicat
3. Select the replication destination volume, go to **Properties** under Settings, and locate the **Resource ID** of the destination volume. Copy the destination volume resource ID to the clipboard.
- ![Properties resource ID](../media/azure-netapp-files/cross-region-replication-properties-resource-id.png)
+ ![Properties resource ID](./media/cross-region-replication-create-peering/cross-region-replication-properties-resource-id.png)
4. In Azure NetApp Files, go to the replication source account and source capacity pool. 5. Locate the replication source volume and select it. Navigate to **Replication** under Storage Service then select **Authorize**.
- ![Authorize replication](../media/azure-netapp-files/cross-region-replication-authorize.png)
+ ![Authorize replication](./media/cross-region-replication-create-peering/cross-region-replication-authorize.png)
6. In the Authorize field, paste the destination replication volume resource ID that you obtained in Step 3, then select **OK**.
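If you prefer scripting the authorization step, the CLI equivalent is roughly the sketch below (placeholder names; the `replication approve` subcommand corresponds to the portal's **Authorize** action, and should be verified against your CLI version).

```bash
# Run against the SOURCE volume, passing the DESTINATION volume's resource ID.
az netappfiles volume replication approve \
  --resource-group mySourceRG \
  --account-name mySourceAccount \
  --pool-name mySourcePool \
  --name sourceVol01 \
  --remote-volume-resource-id "/subscriptions/<subId>/resourceGroups/myDestRG/providers/Microsoft.NetApp/netAppAccounts/myDestAccount/capacityPools/myDestPool/volumes/destVol01"
```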
azure-netapp-files Cross Region Replication Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-delete.md
You can terminate the replication connection between the source and the destinat
4. Type **Yes** when prompted and click **Break**.
- ![Break replication peering](../media/azure-netapp-files/cross-region-replication-break-replication-peering.png)
+ ![Break replication peering](./media/shared/cross-region-replication-break-replication-peering.png)
1. To delete volume replication, select **Replication** from the source or the destination volume.
You can terminate the replication connection between the source and the destinat
3. Confirm deletion by typing **Yes** and clicking **Delete**.
- ![Delete replication](../media/azure-netapp-files/cross-region-replication-delete-replication.png)
+ ![Delete replication](./media/cross-region-replication-delete/cross-region-replication-delete-replication.png)
## Delete source or destination volumes
If you want to delete the source or destination volume, you must perform the fol
2. Delete the destination or source volume as needed by right-clicking the volume name and selecting **Delete**.
- ![Screenshot that shows right-click menu of a volume.](../media/azure-netapp-files/cross-region-replication-delete-volume.png)
+ ![Screenshot that shows right-click menu of a volume.](./media/cross-region-replication-delete/cross-region-replication-delete-volume.png)
## Next steps
azure-netapp-files Cross Region Replication Display Health Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-display-health-status.md
You can view replication status on the source volume or the destination volume.
* **Total progress** – Shows the total number of cumulative bytes transferred over the lifetime of the relationship. This amount is the actual bytes transferred, and it might differ from the logical space that the source and destination volumes report.
- ![Replication health status](../media/azure-netapp-files/cross-region-replication-health-status.png)
+ ![Replication health status](./media/cross-region-replication-display-health-status/cross-region-replication-health-status.png)
> [!NOTE] > Replication relationship shows health status as *unhealthy* if previous replication jobs are not complete. This status is a result of large volumes being transferred with a lower transfer window (for example, a ten-minute transfer time for a large volume). In this case, the relationship status shows *transferring* and health status shows *unhealthy*.
Create [alert rules in Azure Monitor](../azure-monitor/alerts/alerts-overview.md
* If your replication schedule is daily, enter 103,680 (24 hours * 60 minutes * 60 seconds * 1.2).
9. Select **Review + create**. The alert rule is ready for use.
## Next steps
azure-netapp-files Cross Region Replication Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-manage-disaster-recovery.md
When you need to activate the destination volume (for example, when you want to
4. Type **Yes** when prompted and click the **Break** button.
- ![Break replication peering](../media/azure-netapp-files/cross-region-replication-break-replication-peering.png)
+ ![Break replication peering](./media/shared/cross-region-replication-break-replication-peering.png)
5. Mount the destination volume by following the steps in [Mount or unmount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md). This step enables a client to access the destination volume.
After disaster recovery, you can reactivate the source volume by performing a re
2. Type **Yes** when prompted and click **OK**.
- ![Resync replication](../media/azure-netapp-files/cross-region-replication-resync-replication.png)
+ ![Resync replication](./media/cross-region-replication-manage-disaster-recovery/cross-region-replication-resync-replication.png)
3. Monitor the source volume health status by following steps in [Display health status of replication relationship](cross-region-replication-display-health-status.md). When the source volume health status shows the following values, the reverse resync operation is complete, and changes made at the destination volume are now captured on the source volume:
azure-netapp-files Default Individual User Group Quotas Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/default-individual-user-group-quotas-introduction.md
The following subsections describe and depict the behavior of the various quota
### Default user quota
A default user quota automatically applies a quota limit to *all users* accessing the volume without creating separate quotas for each target user. Each user can only consume the amount of storage as defined by the default user quota setting. No single user can exhaust the volume's capacity, as long as the default user quota is less than the volume quota. The following diagram depicts this behavior.
### Individual user quota
An individual user quota applies a quota to an *individual target user* accessing the volume. You can specify the target user by a UNIX user ID (UID) or a Windows security identifier (SID), depending on volume protocol (NFS or SMB). You can define multiple individual user quota settings on a volume. Each user can only consume the amount of storage as defined by their individual user quota setting. No single user can exhaust the volume's capacity, as long as the individual user quota is less than the volume quota. Individual user quotas override a default user quota, where applicable. The following diagram depicts this behavior.
### Combining default and individual user quotas
You can create quota exceptions for specific users by allowing those users less or more capacity than a default user quota setting by combining default and individual user quota settings. In the following example, individual user quotas are set for `user1`, `user2`, and `user3`. Any other user is subjected to the default user quota setting. The individual quota settings can be smaller or larger than the default user quota setting. The following diagram depicts this behavior.
### Default group quota
A default group quota automatically applies a quota limit to *all users within all groups* accessing the volume without creating separate quotas for each target group. The total consumption for all users in any group can't exceed the group quota limit. Group quotas aren't applicable to SMB and dual-protocol volumes. A single user can potentially consume the entire group quota. The following diagram depicts this behavior.
### Individual group quota
An individual group quota applies a quota to *all users within an individual target group* accessing the volume. The total consumption for all users *in that group* can't exceed the group quota limit. Group quotas aren't applicable to SMB and dual-protocol volumes. You specify the group by a UNIX group ID (GID). Individual group quotas override default group quotas where applicable. The following diagram depicts this behavior.
### Combining individual and default group quota
You can create quota exceptions for specific groups by allowing those groups less or more capacity than a default group quota setting by combining default and individual group quota settings. Group quotas aren't applicable to SMB and dual-protocol volumes. In the following example, individual group quotas are set for `group1` and `group2`. Any other group is subjected to the default group quota setting. The individual group quota settings can be smaller or larger than the default group quota setting. The following diagram depicts this scenario.
### Combining default and individual user and group quotas
You can combine the various previously described quota options to achieve very specific quota definitions. You can create very specific quota definitions by (optionally) starting with defining a default group quota, followed by individual group quotas matching your requirements. Then you can further tighten individual user consumption by first (optionally) defining a default user quota, followed by individual user quotas matching individual user requirements. Group quotas aren't applicable to SMB and dual-protocol volumes. In the following example, a default group quota has been set as well as individual group quotas for `group1` and `group2`. Furthermore, a default user quota has been set as well as individual quotas for `user1`, `user2`, `user3`, `user5`, and `userZ`. The following diagram depicts this scenario.
## Observing user quota settings and consumption
Windows users can observe their user quota and consumption in Windows Explorer a
* Administrator view:
- :::image type="content" source="../media/azure-netapp-files/user-quota-administrator-view.png" alt-text="Screenshot showing administrator view of user quota and consumption.":::
+ :::image type="content" source="./media/default-individual-user-group-quotas-introduction/user-quota-administrator-view.png" alt-text="Screenshot showing administrator view of user quota and consumption.":::
* User view:
- :::image type="content" source="../media/azure-netapp-files/user-quota-user-view.png" alt-text="Screenshot showing user view of user quota and consumption.":::
+ :::image type="content" source="./media/default-individual-user-group-quotas-introduction/user-quota-user-view.png" alt-text="Screenshot showing user view of user quota and consumption.":::
### Linux client
Linux users can observe their *user* quota and consumption by using the [`quota(1)`](https://man7.org/linux/man-pages/man1/quota.1.html) command. Assume a scenario where a 2-TiB volume with a 100-MiB default or individual user quota has been configured. On the client, this scenario is represented as follows:
Azure NetApp Files currently doesn't support group quota reporting. However, you know you've reached your group's quota limit when you receive a `Disk quota exceeded` error when writing to the volume while you haven't reached your user quota yet. In the following scenario, users `user4` and `user5` are members of `group2`. The group `group2` has a 200-MiB default or individual group quota assigned. The volume is already populated with 150 MiB of data owned by user `user4`. User `user5` appears to have a 100-MiB quota available as reported by the `quota(1)` command, but `user5` can't consume more than 50 MiB due to the remaining group quota for `group2`. User `user5` receives a `Disk quota exceeded` error message after writing 50 MiB, despite not reaching the user quota.
> [!IMPORTANT]
> For quota reporting to work, the client needs access to port 4049/UDP on the Azure NetApp Files volumes' storage endpoint. When using NSGs with standard network features on the Azure NetApp Files delegated subnet, make sure that access is enabled.
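As a small client-side illustration of the check described above (the mount point is a hypothetical placeholder):

```bash
# Report the calling user's quota and current usage on NFS-mounted filesystems,
# including an Azure NetApp Files volume mounted at /mnt/anfvol.
quota -u -s "$(id -un)"

# A write that would exceed the remaining user or group quota fails with
# "Disk quota exceeded", even if `quota` still shows headroom for the user.
dd if=/dev/zero of=/mnt/anfvol/testfile bs=1M count=200
```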
azure-netapp-files Disable Showmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/disable-showmount.md
The disable showmount capability is currently in preview. If you're using this f
3. Confirm that you've disabled showmount in the **Overview** menu of your Azure subscription. The attribute **Disable Showmount** displays as true if the operation succeeded.
- :::image type="content" source="../media/azure-netapp-files/disable-showmount.png" alt-text="Screenshot of the Azure interface depicting the disable showmount option." lightbox="../media/azure-netapp-files/disable-showmount.png":::
+ :::image type="content" source="./media/disable-showmount/disable-showmount.png" alt-text="Screenshot of the Azure interface depicting the disable showmount option." lightbox="./media/disable-showmount/disable-showmount.png":::
4. If you need to enable showmount, unregister the feature.
azure-netapp-files Double Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md
If you are using this feature for the first time, you need to [register for the
When you create a volume in a double-encryption capacity pool, the default key management (the **Encryption key source** field) is `Microsoft Managed Key`, and the other choice is `Customer Managed Key`. Using customer-managed keys requires additional preparation of an Azure Key Vault and other details. For more information about using volume encryption with customer managed keys, see [Configure customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md).
## Supported regions
azure-netapp-files Dual Protocol Permission Behaviors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dual-protocol-permission-behaviors.md
One such example is if your storage resides in Azure NetApp Files, while your NA
The following figure shows an example of that kind of configuration.
## Next steps
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
The capacity pool that you want to move the volume to must already exist. The ca
1. On the Volumes page, right-click the volume whose service level you want to change. Select **Change Pool**.
- ![Right-click volume](../media/azure-netapp-files/right-click-volume.png)
+ ![Right-click volume](./media/dynamic-change-volume-service-level/right-click-volume.png)
2. In the Change pool window, select the capacity pool you want to move the volume to.
- ![Change pool](../media/azure-netapp-files/change-pool.png)
+ ![Change pool](./media/dynamic-change-volume-service-level/change-pool.png)
3. Select **OK**.
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
You can enable the SMB Continuous Availability (CA) feature when you [create a n
1. Select the SMB volume that you want to have SMB CA enabled. Then select **Edit**. 1. On the Edit window that appears, select the **Enable Continuous Availability** checkbox.
- ![Snapshot that shows the Enable Continuous Availability option.](../media/azure-netapp-files/enable-continuous-availability.png)
+ ![Snapshot that shows the Enable Continuous Availability option.](./media/enable-continuous-availability-existing-smb/enable-continuous-availability.png)
1. Reboot the Windows systems connecting to the existing SMB share.
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
The Azure NetApp Files service has a policy that automatically updates the passw
To see when the password was last updated on the Azure NetApp Files SMB computer account, check the `pwdLastSet` property on the computer account using the [Attribute Editor](create-volumes-dual-protocol.md#access-active-directory-attribute-editor) in the **Active Directory Users and Computers** utility:
-![Screenshot that shows the Active Directory Users and Computers utility](../media/azure-netapp-files/active-directory-users-computers-utility.png)
+![Screenshot that shows the Active Directory Users and Computers utility](./media/faq-smb/active-directory-users-computers-utility.png)
>[!NOTE] > Due to an interoperability issue with the [April 2022 Monthly Windows Update](
azure-netapp-files Lightweight Directory Access Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/lightweight-directory-access-protocol.md
LDAP can be a name mapping resource, if the LDAP schema attributes on the LDAP s
In the following example, a user has a Windows name of `asymmetric` and needs to map to a UNIX identity of `UNIXuser`. To achieve that in Azure NetApp Files, open an instance of the [Active Directory Users and Computers MMC](/troubleshoot/windows-server/system-management-components/remote-server-administration-tools). Then, find the desired user and open the properties box. (Doing so requires [enabling the Attribute Editor](http://directoryadmin.blogspot.com/2019/02/attribute-editor-tab-missing-in-active.html)). Navigate to the Attribute Editor tab and find the UID field, then populate the UID field with the desired UNIX user name `UNIXuser` and click **Add** and **OK** to confirm. After this action is done, files written from Windows SMB shares by the Windows user `asymmetric` will be owned by `UNIXuser` from the NFS side. The following example shows Windows SMB owner `asymmetric`: The following example shows NFS owner `UNIXuser` (mapped from Windows user `asymmetric` using LDAP):
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
You can deploy new volumes in the logical availability zone of your choice. You
> [!IMPORTANT] > Logical availability zones for the subscription without Azure NetApp Files presence are marked `(Unavailable)` and are greyed out.
- [ ![Screenshot that shows the Availability Zone menu.](../media/azure-netapp-files/availability-zone-menu-drop-down.png) ](../media/azure-netapp-files/availability-zone-menu-drop-down.png#lightbox)
+ [ ![Screenshot that shows the Availability Zone menu.](./media/manage-availability-zone-volume-placement/availability-zone-menu-drop-down.png) ](./media/manage-availability-zone-volume-placement/availability-zone-menu-drop-down.png#lightbox)
3. Follow the UI to create the volume. The **Review + Create** page shows the selected availability zone you specified.
- [ ![Screenshot that shows the Availability Zone review.](../media/azure-netapp-files/availability-zone-display-down.png) ](../media/azure-netapp-files/availability-zone-display-down.png#lightbox)
+ [ ![Screenshot that shows the Availability Zone review.](./media/manage-availability-zone-volume-placement/availability-zone-display-down.png) ](./media/manage-availability-zone-volume-placement/availability-zone-display-down.png#lightbox)
4. Navigate to **Properties** to confirm your availability zone configuration.
- :::image type="content" source="../media/azure-netapp-files/availability-zone-volume-overview.png" alt-text="Screenshot of volume properties interface." lightbox="../media/azure-netapp-files/availability-zone-volume-overview.png":::
+ :::image type="content" source="./media/manage-availability-zone-volume-placement/availability-zone-volume-overview.png" alt-text="Screenshot of volume properties interface." lightbox="./media/manage-availability-zone-volume-placement/availability-zone-volume-overview.png":::
## Populate an existing volume with availability zone information
You can deploy new volumes in the logical availability zone of your choice. You
> [!IMPORTANT] > Availability zone information can only be populated as provided. You can't select an availability zone or move the volume to another availability zone by using this feature. If you want to move this volume to another availability zone, consider using [cross-zone replication](create-cross-zone-replication.md) (after populating the volume with the availability zone information). >
- > :::image type="content" source="../media/azure-netapp-files/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone window." lightbox="../media/azure-netapp-files/populate-availability-zone.png":::
+ > :::image type="content" source="./media/manage-availability-zone-volume-placement/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone window." lightbox="./media/manage-availability-zone-volume-placement/populate-availability-zone.png":::
## Populate availability zone for Terraform-managed volumes
The populate availability zone feature requires a `zone` property on the volume
1. In the Azure portal, locate the Terraform module. In the volume **Overview**, select **Populate availability zone** and make note of the availability zone. Do _not_ select save.
- :::image type="content" source="../media/azure-netapp-files/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone menu." lightbox="../media/azure-netapp-files/populate-availability-zone.png":::
+ :::image type="content" source="./media/manage-availability-zone-volume-placement/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone menu." lightbox="./media/manage-availability-zone-volume-placement/populate-availability-zone.png":::
1. In the volume's configuration file (`main.tf`), add a value for `zone`, entering the numerical value you retrieved in the previous step. For example, if the volume's availability zone is 2, enter `zone = 2`. Save the file.
1. Return to the Azure portal. Select **Save** to populate the availability zone.
azure-netapp-files Manage Billing Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-billing-tags.md
Billing tags are assigned at the capacity pool level, not volume level.
> [!IMPORTANT] > Tag data is replicated globally. As such, do not use tag names or values that could compromise the security of your resources. For example, do not use tag names that contain personal or sensitive information.
- ![Snapshot that shows the Tags window of a capacity pool.](../media/azure-netapp-files/billing-tags-capacity-pool.png)
+ ![Snapshot that shows the Tags window of a capacity pool.](./media/manage-billing-tags/billing-tags-capacity-pool.png)
3. You can display and download information about tagged resources by using the [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) portal: 1. Click **Cost Analysis** and select the **Cost by resource** view.
- [ ![Screenshot that shows Cost Analysis of Azure Cost Management](../media/azure-netapp-files/cost-analysis.png) ](../media/azure-netapp-files/cost-analysis.png#lightbox)
+ [ ![Screenshot that shows Cost Analysis of Azure Cost Management](./media/manage-billing-tags/cost-analysis.png) ](./media/manage-billing-tags/cost-analysis.png#lightbox)
2. To download an invoice, select **Invoices** and then the **Download** button.
- [ ![Screenshot that shows Invoices of Azure Cost Management](../media/azure-netapp-files/azure-cost-invoices.png) ](../media/azure-netapp-files/azure-cost-invoices.png#lightbox)
+ [ ![Screenshot that shows Invoices of Azure Cost Management](./media/manage-billing-tags/azure-cost-invoices.png) ](./media/manage-billing-tags/azure-cost-invoices.png#lightbox)
1. In the Download window that appears, download usage details. The downloaded `csv` file will include capacity pool billing tags for the corresponding resources.
- ![Snapshot that shows the Download window of Azure Cost Management.](../media/azure-netapp-files/invoice-download.png)
+ ![Snapshot that shows the Download window of Azure Cost Management.](./media/manage-billing-tags/invoice-download.png)
- [ ![Screenshot that shows the downloaded spreadsheet.](../media/azure-netapp-files/spreadsheet-download.png) ](../media/azure-netapp-files/spreadsheet-download.png#lightbox)
+ [ ![Screenshot that shows the downloaded spreadsheet.](./media/manage-billing-tags/spreadsheet-download.png) ](./media/manage-billing-tags/spreadsheet-download.png#lightbox)
## Next steps
azure-netapp-files Manage Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md
Before creating or enabling a cool-access volume, you need to configure a Standa
1. Check the **Enable Cool Access** checkbox, then select **Create**. When you select **Enable Cool Access**, the UI automatically selects the auto QoS type. The manual QoS type isn't supported for Standard service with cool access.
- :::image type="content" source="../media/azure-netapp-files/cool-access-new-capacity-pool.png" alt-text="Screenshot that shows the New Capacity Pool window with the Enable Cool Access option selected." lightbox="../media/azure-netapp-files/cool-access-new-capacity-pool.png":::
+ :::image type="content" source="./media/manage-cool-access/cool-access-new-capacity-pool.png" alt-text="Screenshot that shows the New Capacity Pool window with the Enable Cool Access option selected." lightbox="./media/manage-cool-access/cool-access-new-capacity-pool.png":::
#### <a name="enable-cool-access-existing-pool"></a> Enable cool access on an existing capacity pool
You can enable cool access support on an existing Standard service-level capacit
2. Select **Enable Cool Access**:
- :::image type="content" source="../media/azure-netapp-files/cool-access-existing-pool.png" alt-text="Screenshot that shows the right-click menu on an existing capacity pool. The menu enables you to select the Enable Cool Access option." lightbox="../media/azure-netapp-files/cool-access-existing-pool.png":::
+ :::image type="content" source="./media/manage-cool-access/cool-access-existing-pool.png" alt-text="Screenshot that shows the right-click menu on an existing capacity pool. The menu enables you to select the Enable Cool Access option." lightbox="./media/manage-cool-access/cool-access-existing-pool.png":::
### Configure a volume for cool access
Standard storage with cool access can be enabled during the creation of a volume
* When the cool access setting is disabled on the volume, you can't modify the cool access retrieval policy setting on the volume. * Once you disable the cool access setting on the volume, the cool access retrieval policy setting automatically reverts to `Default`.
- :::image type="content" source="../media/azure-netapp-files/cool-access-new-volume.png" alt-text="Screenshot that shows the Create a Volume page. Under the basics tab, the Enable Cool Access checkbox is selected. The options for the cool access retrieval policy are displayed. " lightbox="../media/azure-netapp-files/cool-access-new-volume.png":::
+ :::image type="content" source="./media/manage-cool-access/cool-access-new-volume.png" alt-text="Screenshot that shows the Create a Volume page. Under the basics tab, the Enable Cool Access checkbox is selected. The options for the cool access retrieval policy are displayed. " lightbox="./media/manage-cool-access/cool-access-new-volume.png":::
1. Follow one of the following articles to complete the volume creation: * [Create an NFS volume](azure-netapp-files-create-volumes.md)
In a Standard service-level, cool-access enabled capacity pool, you can enable a
* Once you disable the cool access setting on the volume, the cool access retrieval policy setting automatically reverts to `Default`.
- :::image type="content" source="../media/azure-netapp-files/cool-access-existing-volume.png" alt-text="Screenshot that shows the Enable Cool Access window with the Enable Cool Access field selected. " lightbox="../media/azure-netapp-files/cool-access-existing-volume.png":::
+ :::image type="content" source="./media/manage-cool-access/cool-access-existing-volume.png" alt-text="Screenshot that shows the Enable Cool Access window with the Enable Cool Access field selected. " lightbox="./media/manage-cool-access/cool-access-existing-volume.png":::
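A scripted alternative to the portal steps above is offered here only as a sketch: the cool access flag names are assumptions based on recent CLI versions and should be verified with `az netappfiles pool create --help` and `az netappfiles volume update --help` before use.

```bash
# Assumed flags (verify before use): create a Standard capacity pool with cool
# access enabled. Names are placeholders; --size is in TiB.
az netappfiles pool create \
  --resource-group myRG \
  --account-name myNetAppAccount \
  --name myCoolPool \
  --location eastus \
  --service-level Standard \
  --size 4 \
  --cool-access true

# Assumed flags (verify before use): enable cool access on an existing volume
# in that pool with a 31-day coolness period.
az netappfiles volume update \
  --resource-group myRG \
  --account-name myNetAppAccount \
  --pool-name myCoolPool \
  --name myVol \
  --cool-access true \
  --coolness-period 31
```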
### <a name="modify_cool"></a>Modify cool access configuration for a volume
azure-netapp-files Manage Default Individual User Group Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md
Quota rules only come into effect on the CRR/CZR destination volume after the re
1. From the Azure portal, navigate to the volume for which you want to create a quota rule. Select **User and group quotas** in the navigation pane, then click **Add** to create a quota rule for a volume.
- ![Screenshot that shows the New Quota window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-new-quota.png)
+ ![Screenshot that shows the New Quota window of Users and Group Quotas.](./media/manage-default-individual-user-group-quotas/user-group-quotas-new-quota.png)
2. In the **New quota** window that appears, provide information for the following fields, then click **Create**.
Quota rules only come into effect on the CRR/CZR destination volume after the re
1. On the Azure portal, navigate to the volume whose quota rule you want to edit or delete. Select `…` at the end of the quota rule row, then select **Edit** or **Delete** as appropriate.
- ![Screenshot that shows the Edit and Delete options of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-delete-edit.png)
+ ![Screenshot that shows the Edit and Delete options of Users and Group Quotas.](./media/manage-default-individual-user-group-quotas/user-group-quotas-delete-edit.png)
1. If you're editing a quota rule, update **Quota Limit** in the Edit User Quota Rule window that appears.
- ![Screenshot that shows the Edit User Quota Rule window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-edit-rule.png)
+ ![Screenshot that shows the Edit User Quota Rule window of Users and Group Quotas.](./media/manage-default-individual-user-group-quotas/user-group-quotas-edit-rule.png)
1. If you're deleting a quota rule, confirm the deletion by selecting **Yes**.
- ![Screenshot that shows the Confirm Delete window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-confirm-delete.png)
+ ![Screenshot that shows the Confirm Delete window of Users and Group Quotas.](./media/manage-default-individual-user-group-quotas/user-group-quotas-confirm-delete.png)
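The quota rules managed in the portal above can likely be scripted as well. The following is only an assumption-based sketch: the command group and flag names should be confirmed with `az netappfiles volume quota-rule create --help`, and all resource names are placeholders.

```bash
# Assumed command and flags (verify before use): create a default user quota of
# 100 MiB (102400 KiB) on a volume, then an individual user quota for UID 1001.
az netappfiles volume quota-rule create \
  --resource-group myRG \
  --account-name myNetAppAccount \
  --pool-name myPool \
  --volume-name myVol \
  --quota-rule-name default-user-quota \
  --quota-type DefaultUserQuota \
  --quota-size 102400

az netappfiles volume quota-rule create \
  --resource-group myRG \
  --account-name myNetAppAccount \
  --pool-name myPool \
  --volume-name myVol \
  --quota-rule-name user-1001-quota \
  --quota-type IndividualUserQuota \
  --quota-target 1001 \
  --quota-size 204800
```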
## Next steps * [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md)
azure-netapp-files Manage Manual Qos Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-manual-qos-capacity-pool.md
You can change a capacity pool that currently uses the auto QoS type to use the
3. Select **Change QoS type**. Then set **New QoS Type** to **Manual**. Select **OK**.
-![Change QoS type](../media/azure-netapp-files/change-qos-type.png)
+![Change QoS type](./media/manage-manual-qos-capacity-pool/change-qos-type.png)
## Monitor the throughput of a manual QoS capacity pool
If a volume is contained in a manual QoS capacity pool, you can modify the allot
2. Select **Change throughput**. Specify the **Throughput (MiB/S)** that you want. Select **OK**.
- ![Change QoS throughput](../media/azure-netapp-files/change-qos-throughput.png)
+ ![Change QoS throughput](./media/manage-manual-qos-capacity-pool/change-qos-throughput.png)
## Next steps
azure-netapp-files Manage Smb Share Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-smb-share-access-control-lists.md
There are two ways to view share settings:
You must have the mount path. You can retrieve this in the Azure portal by navigating to the **Overview** menu of the volume for which you want to configure share ACLs. Identify the **Mount path**.
## View SMB share ACLs with advanced permissions
Advanced permissions for files, folders, and shares on an Azure NetApp File volu
1. In Windows Explorer, use the mount path to open the volume. Right-click on the volume, select **Properties**. Switch to the **Security** tab then select **Advanced**.
- :::image type="content" source="../media/azure-netapp-files/security-advanced-tab.png" alt-text="Screenshot of security tab." lightbox="../media/azure-netapp-files/security-advanced-tab.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/security-advanced-tab.png" alt-text="Screenshot of security tab." lightbox="./media/manage-smb-share-access-control-lists/security-advanced-tab.png":::
1. In the new window that pops up, switch to the **Share** tab to view the share-level ACLs. You cannot modify share-level ACLs.
   >[!NOTE]
   >Azure NetApp Files doesn't support Windows audit ACLs. Azure NetApp Files ignores any audit ACL applied to files or directories hosted on Azure NetApp Files volumes.
- :::image type="content" source="../media/azure-netapp-files/view-permissions.png" alt-text="Screenshot of the permissions tab." lightbox="../media/azure-netapp-files/view-permissions.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/view-permissions.png" alt-text="Screenshot of the permissions tab." lightbox="./media/manage-smb-share-access-control-lists/view-permissions.png":::
- :::image type="content" source="../media/azure-netapp-files/view-shares.png" alt-text="Screenshot of the share tab." lightbox="../media/azure-netapp-files/view-shares.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/view-shares.png" alt-text="Screenshot of the share tab." lightbox="./media/manage-smb-share-access-control-lists/view-shares.png":::
## Modify share-level ACLs with the Microsoft Management Console
You can only modify the share ACLs in Azure NetApp Files with the Microsoft Mana
1. In the Computer Management window, right-click **Computer management (local)** then select **Connect to another computer**.
- :::image type="content" source="../media/azure-netapp-files/computer-management-local.png" alt-text="Screenshot of the computer management window." lightbox="../media/azure-netapp-files/computer-management-local.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/computer-management-local.png" alt-text="Screenshot of the computer management window." lightbox="./media/manage-smb-share-access-control-lists/computer-management-local.png":::
1. In the **Another computer** field, enter the fully qualified domain name (FQDN).
You can only modify the share ACLs in Azure NetApp Files with the Microsoft Mana
1. Once connected, expand **System Tools** then select **Shared Folders > Shares**.
1. To manage share permissions, right-click on the name of the share you want to modify from the list and select **Properties**.
- :::image type="content" source="../media/azure-netapp-files/share-folder.png" alt-text="Screenshot of the share folder." lightbox="../media/azure-netapp-files/share-folder.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/share-folder.png" alt-text="Screenshot of the share folder." lightbox="./media/manage-smb-share-access-control-lists/share-folder.png":::
1. Add, remove, or modify the share ACLs as appropriate.
- :::image type="content" source="../media/azure-netapp-files/add-share.png" alt-text="Screenshot showing how to add a share." lightbox="../media/azure-netapp-files/add-share.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/add-share.png" alt-text="Screenshot showing how to add a share." lightbox="./media/manage-smb-share-access-control-lists/add-share.png":::
## Next step
azure-netapp-files Monitor Volume Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/monitor-volume-capacity.md
You can use Windows clients to check the used and available capacity of a volume
* Go to File Explorer, right-click the mapped drive, and select **Properties** to display capacity.
- [ ![Screenshot that shows Explorer drive properties and volume properties.](../media/azure-netapp-files/monitor-explorer-drive-properties.png) ](../media/azure-netapp-files/monitor-explorer-drive-properties.png#lightbox)
+ [ ![Screenshot that shows Explorer drive properties and volume properties.](./media/monitor-volume-capacity/monitor-explorer-drive-properties.png) ](./media/monitor-volume-capacity/monitor-explorer-drive-properties.png#lightbox)
* Use the `dir` command at the command prompt:
- ![Screenshot that shows using the dir command to display capacity.](../media/azure-netapp-files/monitor-volume-properties-dir-command.png)
+ ![Screenshot that shows using the dir command to display capacity.](./media/monitor-volume-capacity/monitor-volume-properties-dir-command.png)
The *available space* is accurate using File Explorer or the `dir` command. However, the *consumed/used space* will be an estimate when snapshots are generated on the volume. The [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-consumption-of-snapshots) counts towards the total consumed space on the volume. To get the absolute volume consumption, including the capacity used by snapshots, use the [Azure NetApp Metrics](azure-netapp-files-metrics.md#volumes) in the Azure portal.
The `-h` option shows the size, including used and available space in human read
The following snapshot shows volume capacity reporting in Linux:
-![Screenshot that shows volume capacity reporting in Linux.](../media/azure-netapp-files/monitor-volume-properties-linux-command.png)
+![Screenshot that shows volume capacity reporting in Linux.](./media/monitor-volume-capacity/monitor-volume-properties-linux-command.png)
The *available space* is accurate using the `df` command. However, the *consumed/used space* will be an estimate when snapshots are generated on the volume. The [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-consumption-of-snapshots) counts towards the total consumed space on the volume. To get the absolute volume consumption, including the capacity used by snapshots, use the [Azure NetApp Metrics](azure-netapp-files-metrics.md#volumes) in the Azure portal.
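For instance, a quick check against a hypothetical mount point looks like this:

```bash
# Show size, used, and available space in human-readable units for the volume
# mounted at the placeholder path. "Used" is an estimate when snapshots exist.
df -h /mnt/anfvol
```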
azure-netapp-files Mount Volumes Vms Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/mount-volumes-vms-smb.md
You can mount an SMB volume for Windows virtual machines (VMs).
1. Select the **Volumes** menu and then the SMB volume that you want to mount. 1. To mount the SMB volume using a Windows client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume.
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png" alt-text="Screenshot of Mount instructions." lightbox="../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png":::
+ :::image type="content" source="./media/mount-volumes-vms-smb/azure-netapp-files-mount-instructions-smb.png" alt-text="Screenshot of Mount instructions." lightbox="./media/mount-volumes-vms-smb/azure-netapp-files-mount-instructions-smb.png":::
## Next steps
azure-netapp-files Network Attached File Permissions Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-nfs.md
Mode bit permissions in NFS provide basic permissions for files and folders, usi
Numeric values are applied to different segments of an access control: owner, group and everyone else, meaning that there are no granular user access controls in place for basic NFSv3. The following image shows an example of how a mode bit access control might be constructed for use with an NFSv3 object. Azure NetApp Files doesn't support POSIX ACLs. Thus granular ACLs are only possible with NFSv3 when using an NTFS security style volume with valid UNIX to Windows name mappings via a name service such as Active Directory LDAP. Alternately, you can use NFSv4.1 with Azure NetApp Files and NFSv4.1 ACLs.
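As a brief illustration of numeric mode bits on an NFSv3 mount (the path is hypothetical):

```bash
# Octal mode bits: one digit each for owner, group, and everyone else.
# 7 = rwx, 5 = r-x, 0 = no access.
chmod 750 /mnt/anfvol/projects
ls -ld /mnt/anfvol/projects   # expected: drwxr-x--- owner group ... projects
```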
azure-netapp-files Network Attached File Permissions Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-smb.md
SMB volumes in Azure NetApp Files can leverage NTFS security styles to make use
NTFS ACLs provide granular permissions and ownership for files and folders by way of access control entries (ACEs). Directory permissions can also be set to enable or disable inheritance of permissions. For a complete overview of NTFS-style ACLs, see [Microsoft Access Control overview](/windows/security/identity-protection/access-control/access-control).
azure-netapp-files Network Attached File Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions.md
Folders can be assigned inheritance flags, which means that parent folder permis
* In Windows SMB shares, inheritance is controlled in the advanced permission view.
* For NFSv3, permission inheritance doesn't work via ACL, but instead can be mimicked using umask and setgid flags.
* With NFSv4.1, permission inheritance can be handled using inheritance flags on ACLs.
azure-netapp-files Network Attached Storage Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-concept.md
Network Attached Storage (NAS) is a way for a centralized storage system to present data to multiple networked clients across a WAN or LAN. Datasets in a NAS environment can be structured (data in a well-defined format, such as databases) or unstructured (data not stored in a structured database format, such as images, media files, logs, home directories, etc.). Regardless of the structure, the data is served through a standard conversation between a NAS client and the Azure NetApp Files NAS services. The conversation happens following these basic steps:
azure-netapp-files Network Attached Storage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-permissions.md
The initial entry point to be secured in a NAS environment is access to the shar
Since the most restrictive permissions override other permissions, and a share is the main entry point to the volume (with the fewest access controls), share permissions should abide by a funnel logic, where the share allows more access than the underlying files and folders. The funnel logic enacts more granular, restrictive controls.
## NFS export policies
Volumes in Azure NetApp Files are shared out to NFS clients by exporting a path
An export policy is a container for a set of access rules that are listed in order of desired access. These rules control access to NFS shares by using client IP addresses or subnets. If a client isn't listed in an export policy rule (either allowing or explicitly denying access), then that client is unable to mount the NFS export. Since the rules are read in sequential order, if a more restrictive policy rule is applied to a client (for example, by way of a subnet), then it's read and applied first. Subsequent policy rules that allow more access are ignored. This diagram shows a client that has an IP of 10.10.10.10 getting read-only access to a volume because the subnet 0.0.0.0/0 (every client in every subnet) is set to read-only and is listed first in the policy.
### Export policy rule options available in Azure NetApp Files
The order of export policy rules determines how they are applied. The first rule
Consider the following example (see the CLI sketch after this list):
- The first rule in the index includes *all clients* in *all subnets* by way of the default policy rule using 0.0.0.0/0 as the **Allowed clients** entry. That rule allows "Read & Write" access to all clients for that Azure NetApp Files NFSv3 volume.
- The second rule in the index explicitly lists NFS client 10.10.10.10 and is configured to limit access to "Read only," with no root access (root is squashed).
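The two rules above could be expressed with the Azure CLI roughly as follows. This is a sketch with placeholder resource names, and the flag set (particularly the root-access flag) should be confirmed against `az netappfiles volume export-policy add --help`.

```bash
# Rule 1: all clients (0.0.0.0/0), NFSv3, read and write.
az netappfiles volume export-policy add \
  --resource-group myRG --account-name myNetAppAccount \
  --pool-name myPool --name myVol \
  --rule-index 1 --allowed-clients 0.0.0.0/0 \
  --nfsv3 true --nfsv41 false --cifs false \
  --unix-read-only false --unix-read-write true

# Rule 2: client 10.10.10.10 only, read-only, root squashed (assumed flag name).
az netappfiles volume export-policy add \
  --resource-group myRG --account-name myNetAppAccount \
  --pool-name myPool --name myVol \
  --rule-index 2 --allowed-clients 10.10.10.10 \
  --nfsv3 true --nfsv41 false --cifs false \
  --unix-read-only true --unix-read-write false \
  --has-root-access false
```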
To fix this and set access to the desired level, the rules can be re-ordered to
SMB shares enable end users to access SMB or dual-protocol volumes in Azure NetApp Files. Access controls for SMB shares are limited in the Azure NetApp Files control plane to only SMB security options such as access-based enumeration and non-browsable share functionality. These security options are configured during volume creation with the **Edit volume** functionality. Share-level permission ACLs are managed through a Windows MMC console rather than through Azure NetApp Files.
Azure NetApp Files offers multiple share properties to enhance security for admi
[Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) is an Azure NetApp Files SMB volume feature that limits enumeration of files and folders (that is, listing the contents) in SMB only to users with allowed access on the share. For instance, if a user doesn't have access to read a file or folder in a share with access-based enumeration enabled, then the file or folder doesn't show up in directory listings. In the following example, a user (`smbuser`) doesn't have access to read a folder named "ABE" in an Azure NetApp Files SMB volume. Only `contosoadmin` has access. In the below example, access-based enumeration is disabled, so the user has access to the `ABE` directory of `SMBVolume`. In the next example, access-based enumeration is enabled, so the `ABE` directory of `SMBVolume` doesn't display for the user. The permissions also extend to individual files. In the below example, access-based enumeration is disabled and `ABE-file` displays to the user. With access-based enumeration enabled, `ABE-file` doesn't display to the user.
#### Non-browsable shares
The non-browsable shares feature in Azure NetApp Files limits clients from brows
In the following image, the non-browsable share property isn't enabled for `SMBVolume`, so the volume displays in the listing of the file server (using `\\servername`). With non-browsable shares enabled on `SMBVolume` in Azure NetApp Files, the same view of the file server excludes `SMBVolume`. In the next image, the share `SMBVolume` has non-browsable shares enabled in Azure NetApp Files. When that is enabled, this is the view of the top level of the file server. Even though the volume in the listing cannot be seen, it remains accessible if the user knows the file path.
#### SMB3 encryption
SMB3 encryption is an Azure NetApp Files SMB volume feature that enforces encryption over the wire for SMB clients for greater security in NAS environments. The following image shows a screen capture of network traffic when SMB encryption is disabled. Sensitive information, such as file names and file handles, is visible. When SMB Encryption is enabled, the packets are marked as encrypted, and no sensitive information can be seen. Instead, it's shown as "Encrypted SMB3 data."
#### SMB share ACLs
azure-netapp-files Network Attached Storage Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-protocols.md
In the following illustration, `user1` authenticates to Azure NetApp Files to ac
In this instance, `user1` gets full control on their own folder (`user1-dir`) and no access to the `HR` folder. This setting is based on the security ACLs specified in the file system, and `user1` will get the expected access regardless of which protocol they're accessing the volumes from. ### Considerations for Azure NetApp Files dual-protocol volumes
azure-netapp-files Network File System Group Memberships https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-file-system-group-memberships.md
The following example shows the output from Active Directory with a user's DN
The following example shows the Windows group member field:

The following example shows `LDAPsearch` of all groups where `User1` is a member:

You can also query group memberships for a user in Azure NetApp Files by selecting the **LDAP Group ID List** link under **Support + troubleshooting** on the volume menu.

## Group limits in NFS
The options to extend the group limitation work the same way that the `manage-gi
The following example shows an RPC packet with 16 GIDs. Any GID past the limit of 16 is dropped by the protocol. With extended groups in Azure NetApp Files, when a new NFS request comes in, information about the user's group membership is requested.
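That group-membership lookup is served by the LDAP server (Active Directory). For reference, a comparable query run directly against AD with `ldapsearch`, using hypothetical bind and base DNs, looks like this:

```bash
# List all groups that have User1 as a member (DNs are placeholders)
ldapsearch -LLL \
  -H ldap://dc1.contoso.com \
  -D "CN=ldap-bind,CN=Users,DC=contoso,DC=com" -W \
  -b "DC=contoso,DC=com" \
  "(&(objectClass=group)(member=CN=User1,CN=Users,DC=contoso,DC=com))" cn
```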
azure-netapp-files Nfs Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/nfs-access-control-lists.md
The NFSv4.x protocol can provide access control in the form of [access control lists (ACLs)](/windows/win32/secauthz/access-control-lists), which are conceptually similar to ACLs used in [SMB via Windows NTFS permissions](network-attached-file-permissions-smb.md). An NFSv4.x ACL consists of individual [Access Control Entries (ACEs)](/windows/win32/secauthz/access-control-entries), each of which provides an access control directive to the server. Each NFSv4.x ACL is created with the format of `type:flags:principal:permissions`.
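On a Linux NFSv4.1 client with the `nfs4-acl-tools` package installed, ACEs in that format can be viewed and set directly; the mount path and principal below are placeholders:

```bash
# Show the current ACEs (type:flags:principal:permissions) on a directory
nfs4_getfacl /mnt/vol/project

# Add an Allow ACE granting user1 read/write-style access
nfs4_setfacl -a "A::user1@contoso.com:rwaxtTnNcy" /mnt/vol/project
```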
When a local user or group ACL is set, any user or group that corresponds to the
The credentials passed from client to server can be seen via a packet capture, as shown below.

**Caveats:**
chown: changing ownership of 'testdir': Operation not permitted
The export policy rule on the volume can be modified to change this behavior. In the **Export policy** menu for the volume, modify **Chown mode** to "unrestricted." Once modified, ownership can be changed by users other than root if they have appropriate access rights. This requires the "Take Ownership" NFSv4.x ACL permission (designated by the letter "o"). Ownership can also be changed if the user changing ownership currently owns the file or folder.
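With the export policy's Chown mode set to unrestricted, the earlier error no longer occurs for users that own the object or hold the "o" permission (names below are placeholders):

```bash
# Now succeeds for a non-root user with the appropriate rights
chown user2:project-group /mnt/vol/testdir
```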
Root access with NFSv4.x ACLs can't be limited unless [root is squashed](network
To configure root squashing, navigate to the **Export policy** menu on the volume, then change **Root access** to "off" for the policy rule. Disabling root access squashes root to the anonymous user `nfsnobody:65534`, which is then unable to change ownership.
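The effect can be seen from any NFS client of that volume (the mount path below is a placeholder):

```bash
# With root squashed, files created as root are owned by the anonymous user instead
sudo touch /mnt/vol/root-test
ls -l /mnt/vol/root-test             # owner shows as nobody/nfsnobody (UID 65534), not root

# Attempts by root to change ownership now fail
sudo chown user1 /mnt/vol/somefile   # expected: Operation not permitted
```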
azure-netapp-files Performance Azure Vmware Solution Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-azure-vmware-solution-datastore.md
If you stripe volumes across multiple disks, ensure the backup software or disas
To understand how well a single AVS VM scales as more virtual disks are added, tests were performed with one, two, four, and eight datastores (each containing a single VMDK). The following diagram shows a single disk averaged around 73,040 IOPS (scaling from 100% write / 0% read, to 0% write / 100% read). When this test was increased to two drives, performance increased by 75.8% to 128,420 IOPS. Increasing to four drives began to show diminishing returns of what a single VM, sized as tested, could push. The peak IOPS observed were 147,000 IOPS with 100% random reads.

### Single-host scaling – Single datastore
It scales poorly to increase the number of VMs driving IO to a single datastore
Increasing the block size (to 64 KB) for large block workloads had comparable results, reaching a peak of 2148 MiB/s (single VM, single VMDK) and 2138 MiB/s (4 VMs, 16 VMDKs).

### Single-host scaling – Multiple datastores

From the context of a single AVS host, while a single datastore allowed the VMs to drive about 76,000 IOPS, spreading the workloads across two datastores increased total throughput by 76% on average. Moving beyond two datastores to four resulted in a 163% increase (over one datastore; a 49% increase from two to four), as shown in the following diagram. Even though there were still performance gains, increasing beyond eight datastores showed diminishing returns.

### Multi-host scaling – Single datastore

A single datastore from a single host produced over 2000 MiB/s of sequential 64-KB throughput. Distributing the same workload across all four hosts produced a peak gain of 135%, driving over 5000 MiB/s. This outcome likely represents the upper ceiling of a single Azure NetApp Files volume's throughput performance. Decreasing the block size from 64 KB to 8 KB and rerunning the same iterations resulted in four VMs producing 195,000 IOPS, as shown in the following diagram. Performance scales as both the number of hosts and the number of datastores increase, because the count of network flows is a factor of hosts times datastores.

### Multi-host scaling – Multiple datastores

A single datastore with four VMs spread across four hosts produced over 5000 MiB/s of sequential 64-KB IO. For more demanding workloads, each VM is moved to a dedicated datastore, producing over 10,500 MiB/s in total, as shown in the following diagram. For small-block, random workloads, a single datastore produced 195,000 random 8-KB IOPS. Scaling to four datastores produced over 530,000 random 8-KB IOPS.

## Implications and recommendations
azure-netapp-files Performance Benchmarks Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-azure-vmware-solution.md
Traffic latency from AVS to Azure NetApp Files datastores varies from sub-millis
In a single AVS host scenario, the AVS to Azure NetApp Files datastore I/O occurs over a single network flow. The following graphs compare the throughput and IOPs of a single virtual machine with the aggregated throughput and IOPs of four virtual machines. In the subsequent scenarios, the number of network flows increases as more hosts and datastores are added.

## One-to-multiple Azure NetApp Files datastores with a single AVS host

The following graphs compare the throughput of a single virtual machine on a single Azure NetApp Files datastore with the aggregated throughput of four Azure NetApp Files datastores. In both scenarios, each virtual machine has a VMDK on each Azure NetApp Files datastore.

The following graphs compare the IOPs of a single virtual machine on a single Azure NetApp Files datastore with the aggregated IOPs of eight Azure NetApp Files datastores. In both scenarios, each virtual machine has a VMDK on each Azure NetApp Files datastore.

## Scale-out Azure NetApp Files datastores with multiple AVS hosts
The following graph shows the aggregated throughput and IOPs of 16 virtual machi
Nearly identical results were achieved with a single virtual machine on each host with four VMDKs per virtual machine and each of those VMDKs on a separate datastore.

## Next steps
azure-netapp-files Performance Benchmarks Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-linux.md
The graph below represents a 64-kibibyte (KiB) sequential workload and a 1 TiB w
The graph illustrates decreases of 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
-![Linux workload throughput](../media/azure-netapp-files/performance-benchmarks-linux-workload-throughput.png)
+![Linux workload throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-workload-throughput.png)
### Linux workload IOPS
The following graph represents a 4-kibibyte (KiB) random workload and a 1 TiB wo
This graph illustrates decreases of 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
-![Linux workload IOPS](../media/azure-netapp-files/performance-benchmarks-linux-workload-iops.png)
+![Linux workload IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-workload-iops.png)
## Linux scale-up
The graphs compare the advantages of `nconnect` to a non-`connected` mounted vol
The following graphs show 64-KiB sequential reads of ~3,500 MiB/s with `nconnect`, roughly 2.3X non-`nconnect`.
-![Linux read throughput](../media/azure-netapp-files/performance-benchmarks-linux-read-throughput.png)
+![Linux read throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-read-throughput.png)
### Linux write throughput

The following graphs show sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. 1,500 MiB/s is roughly both the sequential write volume upper limit and the D32s_v4 instance egress limit.
-![Linux write throughput](../media/azure-netapp-files/performance-benchmarks-linux-write-throughput.png)
+![Linux write throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-write-throughput.png)
### Linux read IOPS

The following graphs show 4-KiB random reads of ~200,000 read IOPS with `nconnect`, roughly 3X non-`nconnect`.
-![Linux read IOPS](../media/azure-netapp-files/performance-benchmarks-linux-read-iops.png)
+![Linux read IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-read-iops.png)
### Linux write IOPS

The following graphs show 4-KiB random writes of ~135,000 write IOPS with `nconnect`, roughly 3X non-`nconnect`.
-![Linux write IOPS](../media/azure-netapp-files/performance-benchmarks-linux-write-iops.png)
+![Linux write IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-write-iops.png)
## Next steps
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
A concurrency level as low as 155 is sufficient to achieve 155,000 Oracle DB NFS
* Considering a latency of 0.5 ms, a concurrency of 55 is needed to achieve 110,000 IOPS.
* Considering a latency of 1 ms, a concurrency of 155 is needed to achieve 155,000 IOPS.
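Both figures are consistent with Little's Law, where the required concurrency (outstanding I/Os) is the product of the I/O rate and the per-I/O latency:

$$
\text{concurrency} = \text{IOPS} \times \text{latency} \quad\Rightarrow\quad 110{,}000 \times 0.0005\,\text{s} = 55, \qquad 155{,}000 \times 0.001\,\text{s} = 155
$$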
-![Oracle DNFS latency curve](../media/azure-netapp-files/performance-oracle-dnfs-latency-curve.png)
+![Oracle DNFS latency curve](./media/shared/performance-oracle-dnfs-latency-curve.png)
See [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md) for details.
Use the following `tcpdump` command to capture the mount command:
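The capture command itself isn't preserved in this excerpt; a representative form, with the interface name, NFS server address, and output path as assumptions, is:

```bash
# Terminal 1: capture NFS traffic to a file (interface and path are placeholders)
sudo tcpdump -i eth0 -s 0 -w /tmp/nfs-mount.pcap port 2049

# Terminal 2: run the mount while the capture is running, then stop tcpdump with Ctrl+C
sudo mount -t nfs -o vers=4.1 10.0.0.5:/vol1 /mnt/vol1
```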
Using Wireshark, the packets of interest are as follows:
-![Screenshot that shows packets of interest.](../media/azure-netapp-files/performance-packets-interest.png)
+![Screenshot that shows packets of interest.](./media/performance-linux-concurrency-session-slots/performance-packets-interest.png)
Within these two packets, look at the `max_reqs` field within the middle section of the trace file.
Within these two packets, look at the `max_reqs` field within the middle section
Packet 12 (client maximum requests) shows that the client had a `max_session_slots` value of 64. In the next section, notice that the server supports a concurrency of 180 for the session. The session ends up negotiating the lower of the two provided values.
-![Screenshot that shows max session slots for Packet 12.](../media/azure-netapp-files/performance-max-session-packet-12.png)
+![Screenshot that shows max session slots for Packet 12.](./media/performance-linux-concurrency-session-slots/performance-max-session-packet-12.png)
The following example shows Packet 14 (server maximum requests):
-![Screenshot that shows max session slots for Packet 14.](../media/azure-netapp-files/performance-max-session-packet-14.png)
+![Screenshot that shows max session slots for Packet 14.](./media/performance-linux-concurrency-session-slots/performance-max-session-packet-14.png)
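The client-side value of 64 is the Linux NFS client's default slot-table size. It can typically be raised with the `nfs` kernel module parameter shown below (the value and file name are illustrative; a remount or reboot is generally required for the new value to take effect):

```bash
# Persist a larger NFSv4.1 session slot table on the client
echo "options nfs max_session_slots=180" | sudo tee /etc/modprobe.d/nfsclient.conf

# Check the value currently in effect
cat /sys/module/nfs/parameters/max_session_slots
```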
## Next steps
azure-netapp-files Performance Oracle Multiple Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-multiple-volumes.md
The following charts capture the performance profile of a single E104ids_v5 Azur
The following diagram depicts the architecture that testing was completed against; note the Oracle database spread across multiple Azure NetApp Files volumes and endpoints.

#### Single-host storage IO

The following diagram shows a 100% randomly selected workload with a database buffer hit ratio of about 8%. SLOB2 was able to drive approximately 850,000 I/O requests per second while maintaining a submillisecond DB file sequential read event latency. With a database block size of 8K, that amounts to approximately 6,800 MiB/s of storage throughput.

#### Single-host throughput

The following diagram demonstrates that, for bandwidth-intensive sequential IO workloads such as full table scans or RMAN activities, Azure NetApp Files can deliver the full bandwidth capabilities of the E104ids_v5 VM itself.

>[!NOTE]
>As the compute instance is at the theoretical maximum of its bandwidth, adding more application concurrency results only in increased client-side latency. This causes SLOB2 workloads to exceed the targeted completion timeframe, so the thread count was capped at six.
The following charts capture the performance profile of three E104ids_v5 Azure V
The following diagram depicts the architecture that testing was completed against; note the three Oracle databases spread across multiple Azure NetApp Files volumes and endpoints. Endpoints can be dedicated to a single host, as shown with Oracle VM 1, or shared among hosts, as shown with Oracle VM 2 and Oracle VM 3.

#### Multi-host storage IO

The following diagram shows a 100% randomly selected workload with a database buffer hit ratio of about 8%. SLOB2 was able to drive approximately 850,000 I/O requests per second on each of the three hosts individually. SLOB2 was able to accomplish this while executing in parallel to a collective total of about 2,500,000 I/O requests per second, with each host still maintaining a submillisecond db file sequential read event latency. With a database block size of 8K, this amounts to approximately 20,000 MiB/s between the three hosts.

#### Multi-host throughput

The following diagram demonstrates that, for sequential workloads, Azure NetApp Files can still deliver the full bandwidth capabilities of the E104ids_v5 VM itself even as it scales outward. SLOB2 was able to drive I/O totaling over 30,000 MiB/s across the three hosts while running in parallel.

#### Real-world performance
In the scenario where multiple NICs are configured, you need to determine which
Use the following process to identify the mapping between a configured network interface and its associated virtual interface. This process validates that accelerated networking is enabled for a specific NIC on your Linux machine and displays the physical ingress speed the NIC can potentially achieve (a condensed sketch of these steps follows the list).

1. Execute the `ip a` command:
- :::image type="content" alt-text="Screenshot of output of ip a command." source="../media/azure-netapp-files/ip-a-command-output.png":::
+ :::image type="content" alt-text="Screenshot of output of ip a command." source="./media/performance-oracle-multiple-volumes/ip-a-command-output.png":::
1. List the `/sys/class/net/` directory of the NIC ID you are verifying (`eth0` in the example) and `grep` for the word lower:

   ```bash
   ls /sys/class/net/eth0 | grep lower
   lower_eth1
   ```

1. Execute the `ethtool` command against the ethernet device identified as the lower device in the previous step.
- :::image type="content" alt-text="Screenshot of output of settings for eth1." source="../media/azure-netapp-files/ethtool-output.png":::
+ :::image type="content" alt-text="Screenshot of output of settings for eth1." source="./media/performance-oracle-multiple-volumes/ethtool-output.png":::
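The steps above can be condensed into a short sketch; the NIC names follow the example and should be adjusted for your environment:

```bash
ip a                                  # identify the configured NIC, for example eth0
ls /sys/class/net/eth0 | grep lower   # find the paired accelerated-networking device, for example lower_eth1
ethtool eth1 | grep -i speed          # report the physical link speed of that lower device
```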
#### Azure VM: Network vs. disk bandwidth limits
A level of expertise is required when reading Azure VM performance limits docume
A sample chart is shown for reference:

### Azure NetApp Files
Automatic Storage Management (ASM) is supported for NFS volumes. Though typicall
An ASM over dNFS configuration was used to produce all test results discussed in this article. The following diagram illustrates the ASM file layout within the Azure NetApp Files volumes and the file allocation to the ASM disk groups. There are some limitations with the use of ASM over Azure NetApp Files NFS mounted volumes when it comes to storage snapshots that can be overcome with certain architectural considerations. Contact your Azure NetApp Files specialist or cloud solutions architect for an in-depth review of these considerations.
In Oracle version 12.2 and above, an Exadata specific addition will be included
* The top cells by percentage CPU are displayed in descending order of percentage CPU.
* Average: 39.34% CPU, 28.57% user, 10.77% sys
- :::image type="content" alt-text="Screenshot of a table showing top cells by percentage CPU." source="../media/azure-netapp-files/exadata-top-cells.png":::
+ :::image type="content" alt-text="Screenshot of a table showing top cells by percentage CPU." source="./media/performance-oracle-multiple-volumes/exadata-top-cells.png":::
* Single cell physical block reads
* Flash cache usage
azure-netapp-files Performance Oracle Single Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-single-volumes.md
This article addresses the following topics about Oracle in the cloud. These top
The following diagram illustrates the environment used for testing. For consistency and simplicity, Ansible playbooks were used to deploy all elements of the test bed.
-![Oracle testing environment](../media/azure-netapp-files/performance-oracle-test-environment.png)
+![Oracle testing environment](./media/performance-oracle-single-volumes/performance-oracle-test-environment.png)
### Virtual machine configuration
A PDB was created for the SLOB database.
The following diagram shows the tablespace named PERFIO, 600 GB in size (20 data files, 30 GB each), created to host four SLOB user schemas. Each user schema was 125 GB in size.
-![Oracle database](../media/azure-netapp-files/performance-oracle-tablespace.png)
+![Oracle database](./media/performance-oracle-single-volumes/performance-oracle-tablespace.png)
## Performance metrics
This scenario was running on an Azure VM Standard_D32s_v3 (Intel E5-2673 v4 @ 2.
As shown in the following diagram, the Oracle DNFS client delivered up to 2.8x more throughput than the regular Linux kNFS Client:
-![Linux kNFS Client compared with Oracle Direct NFS](../media/azure-netapp-files/performance-oracle-kfns-compared-dnfs.png)
+![Linux kNFS Client compared with Oracle Direct NFS](./media/performance-oracle-single-volumes/performance-oracle-kfns-compared-dnfs.png)
The following diagram shows the latency curve for the read operations. In this context, the bottleneck for the kNFS client is the single NFS TCP socket connection established between the client and the NFS server (the Azure NetApp Files volume).
-![Linux kNFS Client compared with Oracle Direct NFS latency curve](../media/azure-netapp-files/performance-oracle-latency-curve.png)
+![Linux kNFS Client compared with Oracle Direct NFS latency curve](./media/performance-oracle-single-volumes/performance-oracle-latency-curve.png)
The DNFS client was able to push more IO requests/sec due to its ability to create hundreds of TCP socket connections, therefore taking advantage of the parallelism. As described in [Azure NetApp Files configuration](#anf_config), each additional TiB of capacity allocated allows for an additional 128MiB/s of bandwidth. DNFS topped out at 1 GiB/s of throughput, which is the limit imposed by the 8-TiB capacity selection. Given more capacity, more throughput would have been driven. Throughput is only one of the considerations. Another consideration is latency, which has the primary impact on user experience. As the following diagram shows, latency increases can be expected far more rapidly with kNFS than with DNFS.
-![Linux kNFS Client compared with Oracle Direct NFS read latency](../media/azure-netapp-files/performance-oracle-read-latency.png)
+![Linux kNFS Client compared with Oracle Direct NFS read latency](./media/performance-oracle-single-volumes/performance-oracle-read-latency.png)
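The 1 GiB/s ceiling cited above follows directly from the capacity-based throughput allowance of 128 MiB/s per allocated TiB:

$$
8\,\text{TiB} \times 128\,\tfrac{\text{MiB/s}}{\text{TiB}} = 1024\,\text{MiB/s} = 1\,\text{GiB/s}
$$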
Histograms provide excellent insight into database latencies. The following diagram provides a complete view from the perspective of the recorded "db file sequential read", while using DNFS at the highest concurrency data point (32 threads/schema). As shown in the following diagram, 47% of all read operations were honored between 512 microseconds and 1000 microseconds, while 90% of all read operations were served at a latency below 2 ms.
-![Linux kNFS Client compared with Oracle Direct NFS histograms](../media/azure-netapp-files/performance-oracle-histogram-read-latency.png)
+![Linux kNFS Client compared with Oracle Direct NFS histograms](./media/performance-oracle-single-volumes/performance-oracle-histogram-read-latency.png)
In conclusion, it's clear that DNFS is a must-have when it comes to improving the performance of an Oracle database instance on NFS.
DNFS is capable of consuming far more bandwidth than what is provided by an 8-TB
The following diagram shows a configuration for an 80% select and 20% update workload, and with a database buffer hit ratio of 8%. SLOB was able to drive a single volume to 200,000 NFS I/O requests per second. Considering that each operation is 8-KiB size, the system under test was able to deliver ~200,000 IO requests/sec or 1600 MiB/s.
-![Oracle DNFS throughput](../media/azure-netapp-files/performance-oracle-dnfs-throughput.png)
+![Oracle DNFS throughput](./media/performance-oracle-single-volumes/performance-oracle-dnfs-throughput.png)
The following read latency curve diagram shows that, as the read throughput increases, the latency increases smoothly below the 1-ms line, and it hits the knee of the curve at ~165,000 average read IO requests/sec at the average read latency of ~1.3 ms. This value is an incredible latency value for an I/O rate unachievable with almost any other technology in the Azure Cloud.
-![Oracle DNFS latency curve](../media/azure-netapp-files/performance-oracle-dnfs-latency-curve.png)
+![Oracle DNFS latency curve](./media/shared/performance-oracle-dnfs-latency-curve.png)
#### Sequential I/O

As shown in the following diagram, not all I/O is random in nature; an RMAN backup or a full table scan, for example, are workloads that require as much bandwidth as they can get. Using the same configuration as described previously, but with the volume resized to 32 TiB, the following diagram shows that a single Oracle DB instance can drive upwards of 3,900 MB/s of throughput, very close to the Azure NetApp Files volume's performance quota of 32 TB (128 MB/s * 32 = 4096 MB/s).
-![Oracle DNFS I/O](../media/azure-netapp-files/performance-oracle-dnfs-io.png)
+![Oracle DNFS I/O](./media/performance-oracle-single-volumes/performance-oracle-dnfs-io.png)
In summary, Azure NetApp Files helps you take your Oracle databases to the cloud. It delivers on performance when the database demands it. You can dynamically and non-disruptively resize your volume quota at any time.
azure-netapp-files Regional Capacity Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/regional-capacity-quota.md
You can click **Quota** under Settings of Azure NetApp Files to display the curr
For example:
-![Screenshot that shows how to display quota information.](../media/azure-netapp-files/quota-display.png)
+![Screenshot that shows how to display quota information.](./media/regional-capacity-quota/quota-display.png)
## Request regional capacity quota increase
azure-netapp-files Request Region Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/request-region-access.md
In some special situations, you might need to explicitly request access to a reg
2. For **Subscription**, select your subscription.
3. For **Quota Type**, select **Storage: Azure NetApp Files limits**.
- ![Screenshot that shows the Problem Description tab.](../media/azure-netapp-files/support-problem-descriptions.png)
+ ![Screenshot that shows the Problem Description tab.](./media/shared/support-problem-descriptions.png)
3. Under the **Additional details** tab, click **Enter details** in the Request Details field.
- ![Screenshot that shows the Details tab and the Enter Details field.](../media/azure-netapp-files/quota-additional-details.png)
+ ![Screenshot that shows the Details tab and the Enter Details field.](./media/shared/quota-additional-details.png)
4. To request region access, provide the following information in the Quota Details window that appears:
    1. In **Quota Type**, select **Region Access**.
    2. In **Region Requested**, select your region.
- ![Screenshot that shows the Quota Details window for requesting region access.](../media/azure-netapp-files/quota-details-region-access.png)
+ ![Screenshot that shows the Quota Details window for requesting region access.](./media/request-region-access/quota-details-region-access.png)
5. Click **Save and continue**. Click **Review + create** to create the request.
azure-netapp-files Snapshots Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-delete.md
You can delete snapshots that you no longer need to keep.
1. Go to the **Snapshots** menu of a volume. Right-click the snapshot you want to delete. Select **Delete**.
- ![Screenshot that describes the right-click menu of a snapshot](../media/azure-netapp-files/snapshot-right-click-menu.png)
+ ![Screenshot that describes the right-click menu of a snapshot](./media/shared/snapshot-right-click-menu.png)
2. In the Delete Snapshot window, confirm that you want to delete the snapshot by clicking **Yes**.
- ![Screenshot that confirms snapshot deletion](../media/azure-netapp-files/snapshot-confirm-delete.png)
+ ![Screenshot that confirms snapshot deletion](./media/snapshots-delete/snapshot-confirm-delete.png)
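If you prefer scripting over the portal, the same snapshot can likely be removed with the Azure CLI; the resource names below are placeholders, and the parameters should be confirmed with `az netappfiles snapshot delete --help`:

```bash
az netappfiles snapshot delete \
  --resource-group myRG \
  --account-name myaccount \
  --pool-name mypool \
  --volume-name myvolume \
  --name mysnapshot
```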
## Next steps
azure-netapp-files Snapshots Edit Hide Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-edit-hide-path.md
The Hide Snapshot Path option controls whether the snapshot path of a volume is
## Steps

1. To view the Hide Snapshot Path option setting of a volume, select the volume. The **Hide snapshot path** field shows whether the option is enabled.
- ![Screenshot that describes the Hide Snapshot Path field.](../media/azure-netapp-files/hide-snapshot-path-field.png)
+ ![Screenshot that describes the Hide Snapshot Path field.](./media/snapshots-edit-hide-path/hide-snapshot-path-field.png)
2. To edit the Hide Snapshot Path option, click **Edit** on the volume page and modify the **Hide snapshot path** option as needed.
- ![Screenshot that describes the Edit volume snapshot option.](../media/azure-netapp-files/volume-edit-snapshot-options.png)
+ ![Screenshot that describes the Edit volume snapshot option.](./media/snapshots-edit-hide-path/volume-edit-snapshot-options.png)
## Next steps
azure-netapp-files Snapshots Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-introduction.md
The following diagrams illustrate the concepts:
1. Files consist of metadata and data blocks written to a volume. In this illustration, there are three files, each consisting of three blocks: file 1, file 2, and file 3.
- [![Volume contains three files, file1, file2 and file3, each consisting of three data blocks.](../media/azure-netapp-files/single-file-snapshot-restore-one.png)](../media/azure-netapp-files/single-file-snapshot-restore-one.png#lightbox)
+ [![Volume contains three files, file1, file2 and file3, each consisting of three data blocks.](./media/snapshots-introduction/single-file-snapshot-restore-one.png)](./media/snapshots-introduction/single-file-snapshot-restore-one.png#lightbox)
2. A snapshot `Snapshot1` is taken, which copies the metadata and only the pointers to the blocks that represent the files:
- [![Snapshot1 is created, which is a copy of the volume metadata and only the pointers to the data blocks (in file1, file2 and file3).](../media/azure-netapp-files/single-file-snapshot-restore-two.png)](../media/azure-netapp-files/single-file-snapshot-restore-two.png#lightbox)
+ [![Snapshot1 is created, which is a copy of the volume metadata and only the pointers to the data blocks (in file1, file2 and file3).](./media/snapshots-introduction/single-file-snapshot-restore-two.png)](./media/snapshots-introduction/single-file-snapshot-restore-two.png#lightbox)
3. Files on the volume continue to change, and new files are added. Modified data blocks are written as new data blocks on the volume. The blocks that were previously captured in `Snapshot1` remain unchanged:
- [![Changes to file2 and file3 are written to new data blocks, and a new file file4 is created. Blocks that were previously captured in Snapshot1 remain unchanged.](../media/azure-netapp-files/single-file-snapshot-restore-three.png)](../media/azure-netapp-files/single-file-snapshot-restore-three.png#lightbox)
+ [![Changes to file2 and file3 are written to new data blocks, and a new file file4 is created. Blocks that were previously captured in Snapshot1 remain unchanged.](./media/snapshots-introduction/single-file-snapshot-restore-three.png)](./media/snapshots-introduction/single-file-snapshot-restore-three.png#lightbox)
4. A new snapshot `Snapshot2` is taken to capture the changes and additions:
- [ ![The latest changes are captured in Snapshot2 for a second point in time view of the volume (and the files within).](../media/azure-netapp-files/single-file-snapshot-restore-four.png) ](../media/azure-netapp-files/single-file-snapshot-restore-four.png#lightbox)
+ [ ![The latest changes are captured in Snapshot2 for a second point in time view of the volume (and the files within).](./media/snapshots-introduction/single-file-snapshot-restore-four.png) ](./media/snapshots-introduction/single-file-snapshot-restore-four.png#lightbox)
When a snapshot is taken, the pointers to the data blocks are copied, and modifications are written to new data locations. The snapshot pointers continue to point to the original data blocks that the file occupied when the snapshot was taken, giving you a live and a historical view of the data. If you were to create a new snapshot, the current pointers (that is, the ones created after the most recent additions and modifications) are copied to a new snapshot `Snapshot2`. This creates access to three generations of data (the live data, `Snapshot2`, and `Snapshot1`, in order of age) without taking up the volume space that three full copies would require.
Meanwhile, the data blocks that are pointed to from snapshots remain stable and
The following diagram shows a volume's snapshots and used space over time:
-[ ![Diagram that shows a volume's snapshots and used space over time](../media/azure-netapp-files/snapshots-used-space-over-time.png)](../media/azure-netapp-files/snapshots-used-space-over-time.png#lightbox)
+[ ![Diagram that shows a volume's snapshots and used space over time](./media/snapshots-introduction/snapshots-used-space-over-time.png)](./media/snapshots-introduction/snapshots-used-space-over-time.png#lightbox)
Because a volume snapshot records only the block changes since the latest snapshot, it provides the following key benefits:
Azure NetApp Files supports [cross-region replication](cross-region-replication-
The following diagram shows snapshot traffic in cross-region replication scenarios:
-[ ![Diagram that shows snapshot traffic in cross-region replication scenarios](../media/azure-netapp-files/snapshot-traffic-cross-region-replication.png)](../media/azure-netapp-files/snapshot-traffic-cross-region-replication.png#lightbox)
+[ ![Diagram that shows snapshot traffic in cross-region replication scenarios](./media/snapshots-introduction/snapshot-traffic-cross-region-replication.png)](./media/snapshots-introduction/snapshot-traffic-cross-region-replication.png#lightbox)
## How snapshots can be vaulted for long-term retention and cost savings
To enable snapshot vaulting on your Azure NetApp Files volume, [configure a back
The following diagram shows how snapshot data is transferred from the Azure NetApp Files volume to Azure NetApp Files backup storage, hosted on Azure storage.
-[ ![Diagram that shows snapshot data transferred from the Azure NetApp Files volume to Azure NetApp Files backup storage](../media/azure-netapp-files/snapshot-data-transfer-backup-storage.png) ](../media/azure-netapp-files/snapshot-data-transfer-backup-storage.png#lightbox)
+[ ![Diagram that shows snapshot data transferred from the Azure NetApp Files volume to Azure NetApp Files backup storage](./media/snapshots-introduction/snapshot-data-transfer-backup-storage.png) ](./media/snapshots-introduction/snapshot-data-transfer-backup-storage.png#lightbox)
The Azure NetApp Files backup functionality is designed to keep a longer history of backups as indicated in this simplified example. Notice how the backup repository on the right contains more and older snapshots than the protected volume and snapshots on the left.
You can restore Azure NetApp Files snapshots to separate, independent volumes (c
The following diagram shows a new volume created by restoring (cloning) a snapshot:
-[![Diagram that shows a new volume created by restoring a snapshot](../media/azure-netapp-files/snapshot-restore-clone-new-volume.png)
-](../media/azure-netapp-files/snapshot-restore-clone-new-volume.png#lightbox)
+[![Diagram that shows a new volume created by restoring a snapshot](./media/snapshots-introduction/snapshot-restore-clone-new-volume.png)
+](./media/snapshots-introduction/snapshot-restore-clone-new-volume.png#lightbox)
The same operation can be performed on replicated snapshots to a disaster-recovery (DR) volume. Any snapshot can be restored to a new volume, even when cross-region replication remains active or in progress. This capability enables non-disruptive creation of test and development environments in a DR region, putting the data to use, whereas the replicated volumes would otherwise be used only for DR purposes. This use case enables test and development to be isolated from production, eliminating potential impact on production environments. The following diagram shows volume restoration (cloning) by using DR target volume snapshot while cross-region replication is taking place:
-[![Diagram that shows volume restoration using DR target volume snapshot](../media/azure-netapp-files/snapshot-restore-clone-target-volume.png)](../media/azure-netapp-files/snapshot-restore-clone-target-volume.png#lightbox)
+[![Diagram that shows volume restoration using DR target volume snapshot](./media/snapshots-introduction/snapshot-restore-clone-target-volume.png)](./media/snapshots-introduction/snapshot-restore-clone-target-volume.png#lightbox)
When you restore a snapshot to a new volume, the Volume overview page displays the name of the snapshot used to create the new volume in the **Originated from** field. See [Restore a snapshot to a new volume](snapshots-restore-new-volume.md) about volume restore operations.
Reverting a volume snapshot is near-instantaneous and takes only a few seconds t
The following diagram shows a volume reverting to an earlier snapshot:
-[![Diagram that shows a volume reverting to an earlier snapshot](../media/azure-netapp-files/snapshot-volume-revert.png)
-](../media/azure-netapp-files/snapshot-volume-revert.png#lightbox)
+[![Diagram that shows a volume reverting to an earlier snapshot](./media/snapshots-introduction/snapshot-volume-revert.png)
+](./media/snapshots-introduction/snapshot-volume-revert.png#lightbox)
> [!IMPORTANT]
If the [Snapshot Path visibility](snapshots-edit-hide-path.md) is not set to `hi
The following diagram shows file or directory access to a snapshot using a client:
-[![Diagram that shows file or directory access to a snapshot](../media/azure-netapp-files/snapshot-file-directory-access.png)](../media/azure-netapp-files/snapshot-file-directory-access.png#lightbox)
+[![Diagram that shows file or directory access to a snapshot](./media/snapshots-introduction/snapshot-file-directory-access.png)](./media/snapshots-introduction/snapshot-file-directory-access.png#lightbox)
In the diagram, Snapshot 1 consumes only the delta blocks between the active volume and the moment of snapshot creation. But when you access the snapshot via the volume snapshot path, the data will *appear* as if it's the full volume capacity at the time of the snapshot creation. By accessing the snapshot folders, you can restore data by copying files and directories out of a snapshot of choice.
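From an NFS client, for example, that copy-out restore is a plain file copy; the mount path and snapshot name below are placeholders:

```bash
# List available snapshots (visible when the snapshot path isn't hidden)
ls /mnt/volume/.snapshot/

# Copy a file from a chosen snapshot back into the active file system
cp /mnt/volume/.snapshot/daily-2024-01-26/projects/report.docx /mnt/volume/projects/
```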
Similarly, snapshots in target cross-region replication volumes can be accessed
The following diagram shows snapshot access in cross-region replication scenarios:
-[![Diagram that shows snapshot access in cross-region replication](../media/azure-netapp-files/snapshot-access-cross-region-replication.png)](../media/azure-netapp-files/snapshot-access-cross-region-replication.png#lightbox)
+[![Diagram that shows snapshot access in cross-region replication](./media/snapshots-introduction/snapshot-access-cross-region-replication.png)](./media/snapshots-introduction/snapshot-access-cross-region-replication.png#lightbox)
See [Restore a file from a snapshot using a client](snapshots-restore-file-client.md) about restoring individual files or directories from snapshots.
The following diagram describes how single-file snapshot restore works:
When a single file is restored in-place (`file2`) or to a new file in the volume (`file2'`), only the *pointers* to existing blocks previously captured in a snapshot are reverted. This operation eliminates the copying of any data blocks and is near-instantaneous, irrespective of the size of the file (the number of blocks in the file).
- [![Individual files can be restored from any snapshot by reverting block pointers to an existing file (file2) or to a new file (file2') by creating new file metadata and pointers to blocks in the snapshot.](../media/azure-netapp-files/single-file-snapshot-restore-five.png)](../media/azure-netapp-files/single-file-snapshot-restore-five.png#lightbox)
+ [![Individual files can be restored from any snapshot by reverting block pointers to an existing file (file2) or to a new file (file2') by creating new file metadata and pointers to blocks in the snapshot.](./media/snapshots-introduction/single-file-snapshot-restore-five.png)](./media/snapshots-introduction/single-file-snapshot-restore-five.png#lightbox)
### Restoring volume backups from vaulted snapshots
You can [search for backups](backup-search.md) at the volume level or the NetApp
The following diagram illustrates the operation of restoring a selected vaulted snapshot to a new volume:
-[![Diagram that shows restoring a selected vaulted snapshot to a new volume](../media/azure-netapp-files/snapshot-restore-vaulted-new-volume.png)](../media/azure-netapp-files/snapshot-restore-vaulted-new-volume.png#lightbox)
+[![Diagram that shows restoring a selected vaulted snapshot to a new volume](./media/snapshots-introduction/snapshot-restore-vaulted-new-volume.png)](./media/snapshots-introduction/snapshot-restore-vaulted-new-volume.png#lightbox)
### Restoring individual files or directories from vaulted snapshots
When a snapshot is deleted, all pointers from that snapshot to existing data blo
The following diagram shows the effect on storage consumption of Snapshot 3 deletion from a volume:
-[![Diagram that shows storage consumption effect of snapshot deletion](../media/azure-netapp-files/snapshot-delete-storage-consumption.png)](../media/azure-netapp-files/snapshot-delete-storage-consumption.png#lightbox)
+[![Diagram that shows storage consumption effect of snapshot deletion](./media/snapshots-introduction/snapshot-delete-storage-consumption.png)](./media/snapshots-introduction/snapshot-delete-storage-consumption.png#lightbox)
Be sure to [monitor volume and snapshot consumption](azure-netapp-files-metrics.md#volumes) and understand how the application, active volume, and snapshot consumption interact.
azure-netapp-files Snapshots Manage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-manage-policy.md
A snapshot policy enables you to specify the snapshot creation frequency in hour
1. From the NetApp Account view, select **Snapshot policy**.
- ![Screenshot that shows how to navigate to Snapshot Policy.](../media/azure-netapp-files/snapshot-policy-navigation.png)
+ ![Screenshot that shows how to navigate to Snapshot Policy.](./media/snapshots-manage-policy/snapshot-policy-navigation.png)
2. In the Snapshot Policy window, set Policy State to **Enabled**.
A snapshot policy enables you to specify the snapshot creation frequency in hour
The following example shows hourly snapshot policy configuration.
- ![Screenshot that shows the hourly snapshot policy.](../media/azure-netapp-files/snapshot-policy-hourly.png)
+ ![Screenshot that shows the hourly snapshot policy.](./media/snapshots-manage-policy/snapshot-policy-hourly.png)
The following example shows daily snapshot policy configuration.
- ![Screenshot that shows the daily snapshot policy.](../media/azure-netapp-files/snapshot-policy-daily.png)
+ ![Screenshot that shows the daily snapshot policy.](./media/snapshots-manage-policy/snapshot-policy-daily.png)
The following example shows weekly snapshot policy configuration.
- ![Screenshot that shows the weekly snapshot policy.](../media/azure-netapp-files/snapshot-policy-weekly.png)
+ ![Screenshot that shows the weekly snapshot policy.](./media/snapshots-manage-policy/snapshot-policy-weekly.png)
The following example shows monthly snapshot policy configuration.
- ![Screenshot that shows the monthly snapshot policy.](../media/azure-netapp-files/snapshot-policy-monthly.png)
+ ![Screenshot that shows the monthly snapshot policy.](./media/snapshots-manage-policy/snapshot-policy-monthly.png)
4. Select **Save**.
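A comparable policy can also be created outside the portal. The following Azure CLI sketch uses placeholder names, and the parameter names should be verified with `az netappfiles snapshot policy create --help`:

```bash
az netappfiles snapshot policy create \
  --resource-group myRG \
  --account-name myaccount \
  --snapshot-policy-name dailypolicy \
  --location eastus \
  --daily-snapshots 7 \
  --daily-hour 1 \
  --daily-minute 30 \
  --enabled true
```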
You cannot apply a snapshot policy to a destination volume in cross-region repli
1. Go to the **Volumes** page, right-click the volume that you want to apply a snapshot policy to, and select **Edit**.
- ![Screenshot that shows the Volumes right-click menu.](../media/azure-netapp-files/volume-right-cick-menu.png)
+ ![Screenshot that shows the Volumes right-click menu.](./media/snapshots-manage-policy/volume-right-cick-menu.png)
2. In the Edit window, under **Snapshot policy**, select a policy to use for the volume. Select **OK** to apply the policy.
- ![Screenshot that shows the Snapshot policy menu.](../media/azure-netapp-files/snapshot-policy-edit.png)
+ ![Screenshot that shows the Snapshot policy menu.](./media/snapshots-manage-policy/snapshot-policy-edit.png)
## Modify a snapshot policy
You can modify an existing snapshot policy to change the policy state, snapshot
2. Right-click the snapshot policy you want to modify, then select **Edit**.
- ![Screenshot that shows the Snapshot policy right-click menu.](../media/azure-netapp-files/snapshot-policy-right-click-menu.png)
+ ![Screenshot that shows the Snapshot policy right-click menu.](./media/snapshots-manage-policy/snapshot-policy-right-click-menu.png)
3. Make the changes in the Snapshot Policy window that appears, then select **Save**.
You can delete a snapshot policy that you no longer want to keep.
2. Right-click the snapshot policy you want to delete, then select **Delete**.
- ![Screenshot that shows the Delete menu item.](../media/azure-netapp-files/snapshot-policy-right-click-menu.png)
+ ![Screenshot that shows the Delete menu item.](./media/snapshots-manage-policy/snapshot-policy-right-click-menu.png)
3. Select **Yes** to confirm that you want to delete the snapshot policy.
- ![Screenshot that shows snapshot policy delete confirmation.](../media/azure-netapp-files/snapshot-policy-delete-confirm.png)
+ ![Screenshot that shows snapshot policy delete confirmation.](./media/snapshots-manage-policy/snapshot-policy-delete-confirm.png)
## Next steps
azure-netapp-files Snapshots Restore File Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-file-client.md
NFSv4.1 does not show the `.snapshot` directory (`ls -la`). However, when the Hi
1. If the `~snapshot` directory of the volume is hidden, [show hidden items](https://support.microsoft.com/help/4028316/windows-view-hidden-files-and-folders-in-windows-10) in the parent directory to display `~snapshot`.
- ![Screenshot that shows hidden items of a directory.](../media/azure-netapp-files/snapshot-show-hidden.png)
+ ![Screenshot that shows hidden items of a directory.](./media/snapshots-restore-file-client/snapshot-show-hidden.png)
2. Navigate to the subdirectory within `~snapshot` to find the file you want to restore. Right-click the file. Select **Copy**.
- ![Screenshot that shows how to copy a file to restore.](../media/azure-netapp-files/snapshot-copy-file-restore.png)
+ ![Screenshot that shows how to copy a file to restore.](./media/snapshots-restore-file-client/snapshot-copy-file-restore.png)
3. Return to the parent directory. Right-click in the parent directory and select `Paste` to paste the file to the directory.
- ![Screenshot that shows how to paste a file to restore.](../media/azure-netapp-files/snapshot-paste-file-restore.png)
+ ![Screenshot that shows how to paste a file to restore.](./media/snapshots-restore-file-client/snapshot-paste-file-restore.png)
4. You can also right-click the parent directory, select **Properties**, click the **Previous Versions** tab to see the list of snapshots, and select **Restore** to restore a file.
- ![Screenshot that shows the properties previous versions.](../media/azure-netapp-files/snapshot-properties-previous-version.png)
+ ![Screenshot that shows the properties previous versions.](./media/snapshots-restore-file-client/snapshot-properties-previous-version.png)
## Next steps
azure-netapp-files Snapshots Restore File Single https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-file-single.md
The restore operation does not create directories in the process. If the specifi
3. Right-click the snapshot that you want to use for restoring files, and then select **Restore Files** from the menu.
- [ ![Snapshot that shows how to access the Restore Files menu item.](../media/azure-netapp-files/snapshot-restore-files-menu.png) ](../media/azure-netapp-files/snapshot-restore-files-menu.png#lightbox)
+ [ ![Snapshot that shows how to access the Restore Files menu item.](./media/snapshots-restore-file-single/snapshot-restore-files-menu.png) ](./media/snapshots-restore-file-single/snapshot-restore-files-menu.png#lightbox)
5. In the Restore Files window that appears, provide the following information:
    1. In the **File Paths** field, specify the file or files to restore by using their full paths.
The restore operation does not create directories in the process. If the specifi
3. Click **Restore** to begin the restore operation.
- ![Snapshot the Restore Files window.](../media/azure-netapp-files/snapshot-restore-files-window.png)
+ ![Snapshot the Restore Files window.](./media/snapshots-restore-file-single/snapshot-restore-files-window.png)
## Examples

The following examples show you how to specify files from a volume snapshot for restore.
azure-netapp-files Snapshots Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-new-volume.md
1. Select **Snapshots** from the Volume page to display the snapshot list.
2. Right-click the snapshot to restore and select **Restore to new volume** from the menu option.
- ![Screenshot that shows the Restore New Volume menu.](../media/azure-netapp-files/azure-netapp-files-snapshot-restore-to-new-volume.png)
+ ![Screenshot that shows the Restore New Volume menu.](./media/snapshots-restore-new-volume/azure-netapp-files-snapshot-restore-to-new-volume.png)
3. In the **Create a Volume** page, provide information for the new volume.
By default, the new volume includes a reference to the snapshot that was used for the restore operation from the original volume from Step 2, referred to as the *base snapshot*. This base snapshot does *not* consume any additional space because of [how snapshots work](snapshots-introduction.md). If you don't want the new volume to contain this base snapshot, select **Delete base snapshot** during the new volume creation.
- :::image type="content" source="../media/azure-netapp-files/snapshot-restore-new-volume.png" alt-text="Screenshot showing the Create a Volume window for restoring a volume from a snapshot.":::
+ :::image type="content" source="./media/snapshots-restore-new-volume/snapshot-restore-new-volume.png" alt-text="Screenshot showing the Create a Volume window for restoring a volume from a snapshot.":::
4. Select **Review+create**. Select **Create**. The Volumes page displays the new volume to which the snapshot restores. Refer to the **Originated from** field to see the name of the snapshot used to create the volume.
azure-netapp-files Snapshots Revert Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-revert-volume.md
The revert functionality is also available in configurations with volume replica
1. Go to the **Snapshots** menu of a volume. Right-click the snapshot you want to use for the revert operation. Select **Revert volume**.
- ![Screenshot that describes the right-click menu of a snapshot.](../media/azure-netapp-files/snapshot-right-click-menu.png)
+ ![Screenshot that describes the right-click menu of a snapshot.](./media/shared/snapshot-right-click-menu.png)
2. In the Revert Volume to Snapshot window, type the name of the volume, and click **Revert**. The volume is now restored to the point in time of the selected snapshot.
-![Screenshot that shows the Revert Volume to Snapshot window.](../media/azure-netapp-files/snapshot-revert-volume.png)
+![Screenshot that shows the Revert Volume to Snapshot window.](./media/snapshots-revert-volume/snapshot-revert-volume.png)
## Next steps
azure-netapp-files Solutions Benefits Azure Netapp Files Electronic Design Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-electronic-design-automation.md
The 12-volume scenario demonstrates a general decrease in latency over the six-v
The following graph illustrates the latency and operations rate for the EDA workload on Azure NetApp Files.
-![Latency and operations rate for the EDA workload on Azure NetApp Files](../media/azure-netapp-files/solutions-electronic-design-automation-workload-latency-operation-rate.png)
+![Latency and operations rate for the EDA workload on Azure NetApp Files](./media/solutions-benefits-azure-netapp-files-electronic-design-automation/solutions-electronic-design-automation-workload-latency-operation-rate.png)
The following graph illustrates the latency and throughput for the EDA workload on Azure NetApp Files.
-![Latency and throughput for the EDA workload on Azure NetApp Files](../media/azure-netapp-files/solutions-electronic-design-automation-workload-latency-throughput.png)
+![Latency and throughput for the EDA workload on Azure NetApp Files](./media/solutions-benefits-azure-netapp-files-electronic-design-automation/solutions-electronic-design-automation-workload-latency-throughput.png)
## Layout of test scenarios
azure-netapp-files Solutions Benefits Azure Netapp Files Oracle Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-oracle-database.md
The following summary explains how Oracle Direct NFS works at a high level:
* The traditional NFS client uses a single network flow as shown below:
- ![Traditional NFS client using a single network flow](../media/azure-netapp-files/solutions-traditional-nfs-client-using-single-network-flow.png)
+ ![Traditional NFS client using a single network flow](./media/solutions-benefits-azure-netapp-files-oracle-database/solutions-traditional-nfs-client-using-single-network-flow.png)
Oracle Direct NFS further improves performance by load-balancing network traffic across multiple network flows. As tested and shown below, 650 distinct network connections were established dynamically by the Oracle Database:
- ![Oracle Direct NFS improving performance](../media/azure-netapp-files/solutions-oracle-direct-nfs-performance-load-balancing.png)
+ ![Oracle Direct NFS improving performance](./media/solutions-benefits-azure-netapp-files-oracle-database/solutions-oracle-direct-nfs-performance-load-balancing.png)
The [Oracle FAQ for Direct NFS](http://www.orafaq.com/wiki/Direct_NFS) shows that Oracle dNFS is an optimized NFS client. It provides fast and scalable access to NFS storage that is located on NAS storage devices (accessible over TCP/IP). dNFS is built into the database kernel just like ASM, which is used primarily with DAS or SAN storage. As such, *the guideline is to use dNFS when implementing NAS storage and use ASM when implementing SAN storage.*
dNFS is the default option in Oracle 18c.
dNFS is available starting with Oracle Database 11g. The diagram below compares dNFS with native NFS. When you use dNFS, an Oracle database that runs on an Azure virtual machine can drive more I/O than the native NFS client.
-![Oracle and Azure NetApp Files comparison of dNFS with native NFS](../media/azure-netapp-files/solutions-oracle-azure-netapp-files-comparing-dnfs-native-nfs.png)
+![Oracle and Azure NetApp Files comparison of dNFS with native NFS](./media/solutions-benefits-azure-netapp-files-oracle-database/solutions-oracle-azure-netapp-files-comparing-dnfs-native-nfs.png)
You can enable or disable dNFS by running two commands and restarting the database.
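The two commands referred to are commonly the `dnfs_on`/`dnfs_off` make targets that ship with the database home; the path below assumes a standard `$ORACLE_HOME`, and the database instance should be shut down before relinking:

```bash
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on   # relink with the Direct NFS ODM library; use dnfs_off to revert
```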
azure-netapp-files Solutions Benefits Azure Netapp Files Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-sql-server.md
The two sets of graphics in this section show the TCO example. The number and t
The first set of graphics shows the overall cost of the solution using a 1-TiB database size, comparing the D16s_v4 to the D64, the D8 to the D32, and the D4 to the D16. The projected IOPs for each configuration are indicated by a green or yellow line and correspond to the right-hand Y axis.
-[ ![Graphic that shows overall cost of the solution using a 1-TiB database size.](../media/azure-netapp-files/solution-sql-server-cost-1-tib.png) ](../media/azure-netapp-files/solution-sql-server-cost-1-tib.png#lightbox)
+[ ![Graphic that shows overall cost of the solution using a 1-TiB database size.](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-cost-1-tib.png) ](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-cost-1-tib.png#lightbox)
The second set of graphics shows the overall cost using a 50-TiB database size. The comparisons are otherwise the same: the D16 with Azure NetApp Files compared with the D64 with block storage, for example.
-[ ![Graphic that shows overall cost using a 50-TiB database size.](../media/azure-netapp-files/solution-sql-server-cost-50-tib.png) ](../media/azure-netapp-files/solution-sql-server-cost-50-tib.png#lightbox)
+[ ![Graphic that shows overall cost using a 50-TiB database size.](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-cost-50-tib.png) ](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-cost-50-tib.png#lightbox)
## Performance, and lots of it
With Azure NetApp Files, each of the instances in the D class can meet or exceed
The following diagram summarizes the S3B CPU limits test:
-![Diagram that shows average CPU percentage for single-instance SQL Server over Azure NetApp Files.](../media/azure-netapp-files/solution-sql-server-single-instance-average-cpu.png)
+![Diagram that shows average CPU percentage for single-instance SQL Server over Azure NetApp Files.](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-single-instance-average-cpu.png)
Scalability is only part of the story. The other part is latency. It's one thing for smaller virtual machines to have the ability to drive much higher I/O rates; it's another thing to do so with low single-digit latencies, as shown below.
Scalability is only part of the story. The other part is latency. It's one th
The following diagram shows the latency for single-instance SQL Server over Azure NetApp Files:
-![Diagram that shows latency for single-instance SQL Server over Azure NetApp Files.](../media/azure-netapp-files/solution-sql-server-single-instance-latency.png)
+![Diagram that shows latency for single-instance SQL Server over Azure NetApp Files.](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-single-instance-latency.png)
## SSB testing tool
azure-netapp-files Solutions Windows Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-windows-virtual-desktop.md
This recommendation is confirmed by a 500-user LoginVSI test, logging approximat
As an example, at 62 users per D16as_V4 virtual machine, Azure NetApp Files can easily support 60,000 users per environment. Testing to evaluate the upper limit of the D32as_v4 virtual machine is ongoing. If the Azure Virtual Desktop user per vCPU recommendation holds true for the D32as_v4, more than 120,000 users would fit within 1,000 virtual machines before broaching [the 1,000 IP VNet limit](./azure-netapp-files-network-topologies.md), as shown in the following figure.
-![Azure Virtual Desktop pooled desktop scenario](../media/azure-netapp-files/solutions-pooled-desktop-scenario.png)
+![Azure Virtual Desktop pooled desktop scenario](./media/solutions-windows-virtual-desktop/solutions-pooled-desktop-scenario.png)
### Personal desktop scenario
In a personal desktop scenario, the following figure shows the general-purpose architectural recommendation. Users are mapped to specific desktop pods and each pod has just under 1,000 virtual machines, leaving room for IP addresses propagating from the management VNet. Azure NetApp Files can easily handle 900+ personal desktops per single-session host pool VNet, with the actual number of virtual machines being equal to 1,000 minus the number of management hosts found in the Hub VNet. If more personal desktops are needed, it's easy to add more pods (host pools and virtual networks), as shown in the following figure.
-![Azure Virtual Desktop personal desktop scenario](../media/azure-netapp-files/solutions-personal-desktop-scenario.png)
+![Azure Virtual Desktop personal desktop scenario](./media/solutions-windows-virtual-desktop/solutions-personal-desktop-scenario.png)
When you build a pod-based architecture like this, it's important to assign users to the correct pod at sign-in so that they always find their user profiles.
azure-netapp-files Storage Service Add Ons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/storage-service-add-ons.md
The **Storage service add-ons** portal menu of Azure NetApp Files provides a “
Clicking a category (for example, **NetApp add-ons**) under **Storage service add-ons** displays tiles for available add-ons in that category. Clicking an add-on tile in the category takes you to a landing page for quick access of that add-on and directs you to the add-on installation page.
-![Snapshot that shows how to access to the storage service add-ons menu.](../media/azure-netapp-files/storage-service-add-ons.png)
+![Snapshot that shows how to access to the storage service add-ons menu.](./media/storage-service-add-ons/storage-service-add-ons.png)
## Next steps
azure-netapp-files Troubleshoot Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-diagnose-solve-problems.md
You can use Azure **diagnose and solve problems** tool to troubleshoot issues of
The following screenshot shows an example of issue types that you can troubleshoot for Azure NetApp Files:
- :::image type="content" source="../media/azure-netapp-files/troubleshoot-issue-types.png" alt-text="Screenshot that shows an example of issue types in diagnose and solve problems page." lightbox="../media/azure-netapp-files/troubleshoot-issue-types.png":::
+ :::image type="content" source="./media/troubleshoot-diagnose-solve-problems/troubleshoot-issue-types.png" alt-text="Screenshot that shows an example of issue types in diagnose and solve problems page." lightbox="./media/troubleshoot-diagnose-solve-problems/troubleshoot-issue-types.png":::
3. After specifying the problem type, select an option (problem subtype) from the pull-down menu to describe the specific problem you are experiencing. Then follow the on-screen directions to troubleshoot the problem.
- :::image type="content" source="../media/azure-netapp-files/troubleshoot-diagnose-pull-down.png" alt-text="Screenshot that shows the pull-down menu for problem subtype selection." lightbox="../media/azure-netapp-files/troubleshoot-diagnose-pull-down.png":::
+ :::image type="content" source="./media/troubleshoot-diagnose-solve-problems/troubleshoot-diagnose-pull-down.png" alt-text="Screenshot that shows the pull-down menu for problem subtype selection." lightbox="./media/troubleshoot-diagnose-solve-problems/troubleshoot-diagnose-pull-down.png":::
This page presents general guidelines and relevant resources for the problem subtype you select. In some situations, you might be prompted to fill out a questionnaire to trigger diagnostics. If issues are identified, the tool presents a diagnosis and possible solutions.
- :::image type="content" source="../media/azure-netapp-files/troubleshoot-problem-subtype.png" alt-text="Screenshot that shows the capacity pool troubleshoot page." lightbox="../media/azure-netapp-files/troubleshoot-problem-subtype.png":::
+ :::image type="content" source="./media/troubleshoot-diagnose-solve-problems/troubleshoot-problem-subtype.png" alt-text="Screenshot that shows the capacity pool troubleshoot page." lightbox="./media/troubleshoot-diagnose-solve-problems/troubleshoot-problem-subtype.png":::
For more information about using this tool, see [Diagnostics and solve tool - Azure App Service](../app-service/overview-diagnostics.md).
azure-netapp-files Troubleshoot File Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-file-locks.md
You can break file locks for all files in a volume or break all file locks initi
1. Select **Break File Locks**.
- :::image type="content" source="../media/azure-netapp-files/break-file-locks.png" alt-text="Screenshot of break file locks portal." lightbox="../media/azure-netapp-files/break-file-locks.png":::
+ :::image type="content" source="./media/troubleshoot-file-locks/break-file-locks.png" alt-text="Screenshot of break file locks portal." lightbox="./media/troubleshoot-file-locks/break-file-locks.png":::
1. Confirm you understand that breaking file locks may be disruptive.
azure-netapp-files Troubleshoot User Access Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-user-access-ldap.md
Validating user access is helpful for scenarios such as ensuring POSIX attribute
1. In the volume page for the LDAP-enabled volume, select **LDAP Group ID List** under **Support & Troubleshooting**. 1. Enter the user ID and select **Get group IDs**.
- :::image type="content" source="../media/azure-netapp-files/troubleshoot-ldap-user-id.png" alt-text="Screenshot of the LDAP group ID list portal." lightbox="../media/azure-netapp-files/troubleshoot-ldap-user-id.png":::
+ :::image type="content" source="./media/troubleshoot-user-access-ldap/troubleshoot-ldap-user-id.png" alt-text="Screenshot of the LDAP group ID list portal." lightbox="./media/troubleshoot-user-access-ldap/troubleshoot-ldap-user-id.png":::
1. The portal will display up to 256 results even if the user is in more than 256 groups. You can search for a specific group ID in the results.
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
An AD DS site topology is a logical representation of the network where Azure Ne
The following diagram (`sample-network-topology.png`) shows a sample network topology. In the sample network topology, an on-premises AD DS domain (`anf.local`) is extended into an Azure virtual network. The on-premises network is connected to the Azure virtual network using an Azure ExpressRoute circuit.
Azure NetApp Files can only use one AD DS site to determine which domain control
In the Active Directory Sites and Services tool, verify that the AD DS domain controllers deployed into the AD DS subnet are assigned to the `ANF` site: To create the subnet object that maps to the AD DS subnet in the Azure virtual network, right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**. In the **New Object - Subnet** dialog, the 10.0.0.0/24 IP address range for the AD DS Subnet is entered in the **Prefix** field. Select `ANF` as the site object for the subnet. Select **OK** to create the subnet object and assign it to the `ANF` site. To verify that the new subnet object is assigned to the correct site, right-click the 10.0.0.0/24 subnet object and select **Properties**. The **Site** field should show the `ANF` site object: To create the subnet object that maps to the Azure NetApp Files delegated subnet in the Azure virtual network, right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**.
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
Azure availability zones are highly available, fault tolerant, and more scalable
The use of high availability (HA) architectures with availability zones is now a default and best practice recommendation in [Azure's Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of HA or failure domain (zone) isolation. Azure NetApp Files' [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy volumes in availability zones of your choice, in alignment with Azure compute and other services in the same zone.
azure-netapp-files Use Dfs N And Dfs Root Consolidation With Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files.md
If you already have a DFS Namespace in place, no special steps are required to u
| File share type | SMB | NFS | dual-protocol* |
|-|:-:|:-:|:-:|
-| Azure NetApp Files | ![Yes](../media/azure-netapp-files/icons/yes-icon.png) | ![No](../media/azure-netapp-files/icons/no-icon.png) | ![Yes](../media/azure-netapp-files/icons/yes-icon.png) |
+| Azure NetApp Files | ![Yes](./media/shared/yes-icon.png) | ![No](./media/shared/no-icon.png) | ![Yes](./media/shared/yes-icon.png) |
> [!IMPORTANT]
> This functionality applies to the SMB side of Azure NetApp Files dual-protocol volumes.
For all DFS Namespace types, the **DFS Namespaces** server role must be installe
8. In the **Server Roles** section, select the **DFS Namespaces** role check box in the role list under **File and Storage Services** > **File and iSCSI Services**.
-![A screenshot of the **Add Roles and Features** wizard with the **DFS Namespaces** role selected.](../media/azure-netapp-files/azure-netapp-files-dfs-namespaces-install.png)
+![A screenshot of the **Add Roles and Features** wizard with the **DFS Namespaces** role selected.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-namespaces-install.png)
9. Click **Next** until the **Install** button is available.
Install-WindowsFeature -Name "FS-DFS-Namespace", "RSAT-DFS-Mgmt-Con"
If you don't need to take over an existing legacy file server, a domain-based namespace is recommended. Domain-based namespaces are hosted as part of AD and have a UNC path containing the name of your domain, for example, `\\contoso.com\corporate\finance`, if your domain is `contoso.com`. The following graphic shows an example of this architecture.
-![A screenshot of the architecture for DFS-N with Azure NetApp Files volumes.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-architecture-example.png)
+![A screenshot of the architecture for DFS-N with Azure NetApp Files volumes.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-domain-architecture-example.png)
>[!IMPORTANT]
The basic unit of management for DFS Namespaces is the namespace. The namespace
5. The **Namespace Type** section allows you to choose between a **Domain-based namespace** and a **Stand-alone namespace**. Select a domain-based namespace. Refer to [namespace types](#namespace-types) above for more information on choosing between namespace types.
-![A screenshot of selecting domain-based namespace **New Namespace Wizard**.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-namespace-type.png)
+![A screenshot of selecting domain-based namespace **New Namespace Wizard**.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-domain-namespace-type.png)
6. Select **Create** to create the namespace and **Close** when the dialog completes.
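The same namespace can also be created with the `DFSN` PowerShell module instead of the DFS Management console. The sketch below is illustrative only: the domain name, namespace share, and DFS-N server name are placeholders, and the target share must already exist on the DFS-N server.

```powershell
# Create a domain-based (Windows Server 2008 mode) namespace.
# "\\contoso.com\corporate" and "\\dfs-server\corporate" are placeholder paths.
New-DfsnRoot -Path "\\contoso.com\corporate" `
    -TargetPath "\\dfs-server\corporate" `
    -Type DomainV2
```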
You can think of DFS Namespaces folders as analogous to file shares.
1. In the DFS Management console, select the namespace you just created and select **New Folder**. The resulting **New Folder** dialog allows you to create both the folder and its targets.
-![A screenshot of the **New Folder** domain-based dialog.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-folder-targets.png)
+![A screenshot of the **New Folder** domain-based dialog.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-domain-folder-targets.png)
2. In the textbox labeled **Name** provide the name of the share.
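Folders and their targets can also be created with PowerShell. In this hedged sketch, the namespace path and the SMB path of the Azure NetApp Files volume are placeholders; use the mount path shown on your volume's overview page.

```powershell
# Create a namespace folder that points at an Azure NetApp Files SMB share (paths are illustrative).
New-DfsnFolder -Path "\\contoso.com\corporate\finance" `
    -TargetPath "\\anf-1234.contoso.com\finance-vol"
```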
Root consolidation may only be used with standalone namespaces. If you already h
This section outlines the steps to configure DFS Namespace root consolidation on a standalone server. For a highly available architecture please work with your Microsoft technical team to configure Windows Server failover clustering and an Azure Load Balancer as required. The following graphic shows an example of a highly available architecture.
-![A screenshot of the architecture for root consolidation with Azure NetApp Files.](../media/azure-netapp-files/azure-netapp-files-root-consolidation-architecture-example.png)
+![A screenshot of the architecture for root consolidation with Azure NetApp Files.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-root-consolidation-architecture-example.png)
### Enabling root consolidation
In order for DFS Namespaces to respond to existing file server names, **you must
5. In the textbox labeled **Fully qualified domain name (FQDN) for the target host**, enter the name of the DFS-N server you have set up. You can use the **Browse** button to help you select the server if desired.
-![A screenshot depicting the **New Resource Record** for a CNAME DNS entry.](../media/azure-netapp-files/azure-netapp-files-root-consolidation-cname.png)
+![A screenshot depicting the **New Resource Record** for a CNAME DNS entry.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-root-consolidation-cname.png)
6. Select **OK** to create the CNAME record for your server.
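If the zone is hosted on a Windows Server DNS server, the same CNAME record can be created with the `DnsServer` PowerShell module. The zone name, legacy server name, and DFS-N server FQDN below are placeholders.

```powershell
# Alias the legacy file server name to the DFS-N server (names are illustrative).
Add-DnsServerResourceRecordCName -ZoneName "contoso.com" `
    -Name "legacy-fileserver" `
    -HostNameAlias "dfs-server.contoso.com"
```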
To take over an existing server name with root consolidation, the name of the na
6. Select the desired namespace type for your environment and select **Next**. The wizard then summarizes the namespace to be created.
-![A screenshot of selecting standalone namespace in the **New Namespace Wizard**.](../media/azure-netapp-files/azure-netapp-files-dfs-namespace-type.png)
+![A screenshot of selecting standalone namespace in the **New Namespace Wizard**.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-namespace-type.png)
7. Select **Create** to create the namespace and **Close** when the dialog completes.
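As a PowerShell alternative, a standalone namespace used for root consolidation takes the legacy server name prefixed with `#`. The server and share names in this sketch are placeholders, and the share must already exist on the DFS-N server.

```powershell
# Standalone namespace named after the legacy server, prefixed with '#' (names are illustrative).
New-DfsnRoot -Path "\\dfs-server\#legacy-fileserver" `
    -TargetPath "\\dfs-server\#legacy-fileserver" `
    -Type Standalone
```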
You can think of DFS Namespaces folders as analogous to file shares.
1. In the DFS Management console, select the namespace you just created and select **New Folder**. The resulting **New Folder** dialog allows you to create both the folder and its targets.
-![A screenshot of the **New Folder** dialog.](../media/azure-netapp-files/azure-netapp-files-dfs-folder-targets.png)
+![A screenshot of the **New Folder** dialog.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-folder-targets.png)
2. In the textbox labeled **Name** provide the name of the share.
azure-netapp-files Volume Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-delete.md
This article describes how to delete an Azure NetApp Files volume.
1. In the Azure portal, under the storage service, select **Volumes**. Locate the volume you want to delete. 2. Right-click the volume name and select **Delete**.
- ![Screenshot that shows right-click menu for deleting a volume.](../media/azure-netapp-files/volume-delete.png)
+ ![Screenshot that shows right-click menu for deleting a volume.](./media/volume-delete/volume-delete.png)
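You can also script the deletion with the Az.NetAppFiles module. A minimal sketch, with placeholder resource group, account, pool, and volume names:

```azurepowershell
# Delete an Azure NetApp Files volume (resource names are illustrative).
Remove-AzNetAppFilesVolume -ResourceGroupName "myRG" -AccountName "myNetAppAccount" `
    -PoolName "myPool" -Name "myVolume"
```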
## Next steps
azure-netapp-files Volume Hard Quota Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-hard-quota-guidelines.md
Windows clients can check the used and available capacity of a volume by using t
The following examples show the volume capacity reporting in Windows *before* the changed behavior:
-![Screenshots that show example storage capacity of a volume before behavior change.](../media/azure-netapp-files/hard-quota-windows-capacity-before.png)
+![Screenshots that show example storage capacity of a volume before behavior change.](./media/volume-hard-quota-guidelines/hard-quota-windows-capacity-before.png)
You can also use the `dir` command at the command prompt as shown below:
-![Screenshot that shows using a command to display storage capacity for a volume before behavior change.](../media/azure-netapp-files/hard-quota-command-capacity-before.png)
+![Screenshot that shows using a command to display storage capacity for a volume before behavior change.](./media/volume-hard-quota-guidelines/hard-quota-command-capacity-before.png)
The following examples show the volume capacity reporting in Windows *after* the changed behavior:
-![Screenshots that show example storage capacity of a volume after behavior change.](../media/azure-netapp-files/hard-quota-windows-capacity-after.png)
+![Screenshots that show example storage capacity of a volume after behavior change.](./media/volume-hard-quota-guidelines/hard-quota-windows-capacity-after.png)
The following example shows the `dir` command output:
-![Screenshot that shows using a command to display storage capacity for a volume after behavior change.](../media/azure-netapp-files/hard-quota-command-capacity-after.png)
+![Screenshot that shows using a command to display storage capacity for a volume after behavior change.](./media/volume-hard-quota-guidelines/hard-quota-command-capacity-after.png)
##### Linux
Linux clients can check the used and available capacity of a volume by using the
The following example shows volume capacity reporting in Linux *before* the changed behavior:
-![Screenshot that shows using Linux to display storage capacity for a volume before behavior change.](../media/azure-netapp-files/hard-quota-linux-capacity-before.png)
+![Screenshot that shows using Linux to display storage capacity for a volume before behavior change.](./media/volume-hard-quota-guidelines/hard-quota-linux-capacity-before.png)
The following example shows volume capacity reporting in Linux *after* the changed behavior:
-![Screenshot that shows using Linux to display storage capacity for a volume after behavior change.](../media/azure-netapp-files/hard-quota-linux-capacity-after.png)
+![Screenshot that shows using Linux to display storage capacity for a volume after behavior change.](./media/volume-hard-quota-guidelines/hard-quota-linux-capacity-after.png)
### Configure alerts using ANFCapacityManager
You can configure the following key alerting settings:
The following illustration shows the alert configuration:
-![Illustration that shows alert configuration by using ANFCapacityManager.](../media/azure-netapp-files/hard-quota-anfcapacitymanager-configuration.png)
+![Illustration that shows alert configuration by using ANFCapacityManager.](./media/volume-hard-quota-guidelines/hard-quota-anfcapacitymanager-configuration.png)
After installing ANFCapacityManager, you can expect the following behavior: When an Azure NetApp Files capacity pool or volume is created, modified, or deleted, the Logic App will automatically create, modify, or delete a capacity-based Metric Alert rule with the name `ANF_Pool_poolname` or `ANF_Volume_poolname_volname`.
You can [change the size of a volume](azure-netapp-files-resize-capacity-pools-o
2. Right-click the name of the volume that you want to resize or select the `…` icon at the end of the volume's row to display the context menu. 3. Use the context menu options to resize or delete the volume.
- ![Screenshot that shows context menu options for a volume.](../media/azure-netapp-files/hard-quota-volume-options.png)
+ ![Screenshot that shows context menu options for a volume.](./media/volume-hard-quota-guidelines/hard-quota-volume-options.png)
- ![Screenshot that shows the Update Volume Quota window.](../media/azure-netapp-files/hard-quota-update-volume-quota.png)
+ ![Screenshot that shows the Update Volume Quota window.](./media/volume-hard-quota-guidelines/hard-quota-update-volume-quota.png)
In some cases, the hosting capacity pool does not have sufficient capacity to resize the volumes. However, you can [change the capacity pool size](azure-netapp-files-resize-capacity-pools-or-volumes.md#resizing-the-capacity-pool-or-a-volume-using-azure-cli) in 1-TiB increments or decrements. The capacity pool size cannot be smaller than 4 TiB. *Resizing the capacity pool changes the purchased Azure NetApp Files capacity.*
In some cases, the hosting capacity pool does not have sufficient capacity to re
2. Right-click the capacity pool name or select the `…` icon at the end of the capacity pool’s row to display the context menu. 3. Use the context menu options to resize or delete the capacity pool.
- ![Screenshot that shows context menu options for a capacity pool.](../media/azure-netapp-files/hard-quota-pool-options.png)
+ ![Screenshot that shows context menu options for a capacity pool.](./media/volume-hard-quota-guidelines/hard-quota-pool-options.png)
- ![Screenshot that shows the Resize Pool window.](../media/azure-netapp-files/hard-quota-update-resize-pool.png)
+ ![Screenshot that shows the Resize Pool window.](./media/volume-hard-quota-guidelines/hard-quota-update-resize-pool.png)
##### CLI or PowerShell
You can use the [Azure NetApp Files CLI tools](azure-netapp-files-sdk-cli.md#cli
To manage Azure NetApp Files resources using Azure CLI, you can open the Azure portal and select the Azure **Cloud Shell** link at the top of the menu bar:
-[ ![Screenshot that shows how to access Cloud Shell link.](../media/azure-netapp-files/hard-quota-update-cloud-shell-link.png) ](../media/azure-netapp-files/hard-quota-update-cloud-shell-link.png#lightbox)
+[ ![Screenshot that shows how to access Cloud Shell link.](./media/volume-hard-quota-guidelines/hard-quota-update-cloud-shell-link.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-cloud-shell-link.png#lightbox)
This action will open the Azure Cloud Shell:
-[ ![Screenshot that shows Cloud Shell window.](../media/azure-netapp-files/hard-quota-update-cloud-shell-window.png) ](../media/azure-netapp-files/hard-quota-update-cloud-shell-window.png#lightbox)
+[ ![Screenshot that shows Cloud Shell window.](./media/volume-hard-quota-guidelines/hard-quota-update-cloud-shell-window.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-cloud-shell-window.png#lightbox)
The following examples use the commands to [show](/cli/azure/netappfiles/volume#az-netappfiles-volume-show) and [update](/cli/azure/netappfiles/volume#az-netappfiles-volume-update) the size of a volume:
-[ ![Screenshot that shows using PowerShell to show volume size.](../media/azure-netapp-files/hard-quota-update-powershell-volume-show.png) ](../media/azure-netapp-files/hard-quota-update-powershell-volume-show.png#lightbox)
+[ ![Screenshot that shows using PowerShell to show volume size.](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-volume-show.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-volume-show.png#lightbox)
-[ ![Screenshot that shows using PowerShell to update volume size.](../media/azure-netapp-files/hard-quota-update-powershell-volume-update.png) ](../media/azure-netapp-files/hard-quota-update-powershell-volume-update.png#lightbox)
+[ ![Screenshot that shows using PowerShell to update volume size.](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-volume-update.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-volume-update.png#lightbox)
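As a text equivalent of the screenshots, here's a hedged sketch that uses the Az.NetAppFiles PowerShell module rather than the Azure CLI commands linked above. Resource names and the 4-TiB target size are placeholders, and `-UsageThreshold` takes the volume quota in bytes; verify the parameter set against your installed module version.

```azurepowershell
# Show the current volume quota, then grow it to 4 TiB (names are illustrative).
Get-AzNetAppFilesVolume -ResourceGroupName "myRG" -AccountName "myNetAppAccount" `
    -PoolName "myPool" -Name "myVolume" | Select-Object Name, UsageThreshold

Update-AzNetAppFilesVolume -ResourceGroupName "myRG" -AccountName "myNetAppAccount" `
    -PoolName "myPool" -Name "myVolume" -UsageThreshold 4398046511104   # 4 TiB in bytes
```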
The following examples use the commands to [show](/cli/azure/netappfiles/pool#az-netappfiles-pool-show) and [update](/cli/azure/netappfiles/pool#az-netappfiles-pool-update) the size of a capacity pool:
-[ ![Screenshot that shows using PowerShell to show capacity pool size.](../media/azure-netapp-files/hard-quota-update-powershell-pool-show.png) ](../media/azure-netapp-files/hard-quota-update-powershell-pool-show.png#lightbox)
+[ ![Screenshot that shows using PowerShell to show capacity pool size.](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-pool-show.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-pool-show.png#lightbox)
-[ ![Screenshot that shows using PowerShell to update capacity pool size.](../media/azure-netapp-files/hard-quota-update-powershell-pool-update.png) ](../media/azure-netapp-files/hard-quota-update-powershell-pool-update.png#lightbox)
+[ ![Screenshot that shows using PowerShell to update capacity pool size.](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-pool-update.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-pool-update.png#lightbox)
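Likewise for the capacity pool, a hedged Az.NetAppFiles sketch; the names and the 8-TiB target size are placeholders, and `-PoolSize` is specified in bytes.

```azurepowershell
# Show the pool, then resize it to 8 TiB (names are illustrative).
Get-AzNetAppFilesPool -ResourceGroupName "myRG" -AccountName "myNetAppAccount" -Name "myPool"

Update-AzNetAppFilesPool -ResourceGroupName "myRG" -AccountName "myNetAppAccount" `
    -Name "myPool" -PoolSize 8796093022208   # 8 TiB in bytes
```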
#### Automated
You can configure the following key capacity management setting:
* **AutoGrow Percent Increase** - Percent of the existing volume size to automatically grow a volume if it reaches the specified **% Full Threshold**. A value of 0 (zero) will disable the AutoGrow feature. A value between 10 and 100 is recommended.
- ![Screenshot that shows Set Volume Auto Growth Percent window.](../media/azure-netapp-files/hard-quota-volume-anfcapacitymanager-auto-grow-percent.png)
+ ![Screenshot that shows Set Volume Auto Growth Percent window.](./media/volume-hard-quota-guidelines/hard-quota-volume-anfcapacitymanager-auto-grow-percent.png)
## FAQ
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Next, we collect more details about the problem. Providing thorough and detailed
In some cases, you may see additional options. For example, for certain types of Virtual Machine problem types, you can choose whether to [allow access to a virtual machine's memory](#memory-dump-collection).
-1. In the **Support method** section, select the **Severity** level, depending on the business impact. The [maximum available severity level and time to respond](https://azure.microsoft.com/support/plans/response/) depends on your [support plan](https://azure.microsoft.com/support/plans) and the country/region in which you're located, including the timing of business hours in that country/region.
+1. In the **Support method** section, select the **Support plan** and the **Severity** level, depending on the business impact. The [maximum available severity level and time to respond](https://azure.microsoft.com/support/plans/response/) depends on your [support plan](https://azure.microsoft.com/support/plans) and the country/region in which you're located, including the timing of business hours in that country/region.
+
+ > [!TIP]
+ > To add a support plan that requires an **Access ID** and **Contract ID**, select **Help + Support** > **Support plans** > **Link support benefits**. When a limited support plan expires or has no support incidents remaining, it won't be available to select.
+ 1. Provide your preferred contact method, your availability, and your preferred support language. Confirm that your country/region setting is accurate, as this setting affects the business hours in which a support engineer can work on your request.
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
You need the latest versions of the following software:
- [Visual Studio](/visualstudio/install/install-visual-studio), or [Visual Studio Code](./install.md#visual-studio-code-and-bicep-extension). The Visual Studio community version, available for free, installs .NET 6.0, .NET Core 3.1, .NET SDK, MSBuild, .NET Framework 4.8, NuGet package manager, and C# compiler. From the installer, select **Workloads** > **.NET desktop development**. With Visual Studio Code, you also need the extensions for [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) and [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools) - [PowerShell](/powershell/scripting/install/installing-powershell) or a command-line shell for your operating system.
+If your environment doesn't have nuget.org configured as a package feed, you might need to run the following command, depending on how `nuget.config` is set up:
+
+```powershell
+dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org
+```
+
+In certain environments, using a single package feed helps prevent problems arising from packages with the same ID and version containing different contents in different feeds. For Azure Artifacts users, this can be done using the [upstream sources feature](/azure/devops/artifacts/concepts/upstream-sources).
+ ## MSBuild tasks and Bicep packages
+ From your continuous integration (CI) pipeline, you can use MSBuild tasks and CLI packages to convert Bicep files and Bicep parameter files into JSON. The functionality relies on the following NuGet packages:
You can find the latest version from these pages. For example:
:::image type="content" source="./media/msbuild-bicep-file/bicep-nuget-package-version.png" alt-text="Screenshot showing how to find the latest Bicep NuGet package version." border="true":::
-The latest NuGet package versions match the latest [Bicep CLI](./bicep-cli.md) version.
+The latest NuGet package versions match the latest [Bicep CLI](./bicep-cli.md) version.
- **Azure.Bicep.MSBuild**
- When included in project file's `PackageReference` property, the `Azure.Bicep.MSBuild` package imports the Bicep task used for invoking the Bicep CLI.
+ When included in project file's `PackageReference` property, the `Azure.Bicep.MSBuild` package imports the Bicep task used for invoking the Bicep CLI.
```xml <ItemGroup>
The latest NuGet package versions match the latest [Bicep CLI](./bicep-cli.md) v
- **Azure.Bicep.CommandLine**
- The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. The following example references the package for Windows.
+ The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. The following example references the package for Windows.
```xml <ItemGroup>
Build a project in .NET with the dotnet CLI.
New-Item -Name .\msBuildDemo -ItemType Directory Set-Location -Path .\msBuildDemo ```+ 1. Run the `dotnet` command to create a new console with the .NET 6 framework. ```powershell
Build a project in .NET Core 3.1 using the dotnet CLI.
New-Item -Name .\msBuildDemo -ItemType Directory Set-Location -Path .\msBuildDemo ```+ 1. Run the `dotnet` command to create a new console with the .NET 6 framework. ```powershell
You need a Bicep file and a BicepParam file to be converted to JSON.
Replace `{prefix}` with a string value used as a prefix for the storage account name. - ### Run MSBuild Run MSBuild to convert the Bicep file and the Bicep parameter file to JSON.
Run MSBuild to convert the Bicep file and the Bicep parameter file to JSON.
dotnet build .\msBuildDemo.csproj ```
- or
+ or
```powershell dotnet restore .\msBuildDemo.csproj
backup About Restore Microsoft Azure Recovery Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-restore-microsoft-azure-recovery-services.md
Using the MARS agent you can:
- **[Restore all backed up files in a volume](restore-all-files-volume-mars.md):** This option recovers all backed up data in a specified volume from the recovery point in Azure Backup. It allows a faster transfer speed (up to 40 MBPS).<br>We recommend using this option for recovering large amounts of data, or entire volumes. - **[Restore a specific set of backed up files and folders in a volume using PowerShell](backup-client-automation.md#restore-data-from-azure-backup):** If the paths to the files and folders relative to the volume root are known, this option allows you to restore the specified set of files and folders from a recovery point, using the faster transfer speed of the full volume restore. However, this option doesn't provide the convenience of browsing files and folders in the recovery point using the Instant Restore option. - **[Restore individual files and folders using Instant Restore](backup-azure-restore-windows-server.md):** This option allows quick access to the backup data by mounting the volume in the recovery point as a drive. You can then browse and copy files and folders. This option offers a copy speed of up to 6 MBPS, which is suitable for recovering individual files and folders of total size less than 80 GB. Once the required files are copied, you can unmount the recovery point.-- **Cross Region Restore for MARS (preview)**: If your Recovery Services vault uses GRS resiliency and has the [Cross Region Restore setting turned on](backup-create-recovery-services-vault.md#set-cross-region-restore), you can restore the backup data from the secondary region.
+- **Cross Region Restore for MARS**: If your Recovery Services vault uses GRS resiliency and has the [Cross Region Restore setting turned on](backup-create-recovery-services-vault.md#set-cross-region-restore), you can restore the backup data from the secondary region.
-## Cross Region Restore (preview)
+## Cross Region Restore
Cross Region Restore (CRR) allows you to restore MARS backup data from a secondary region, which is an Azure paired region. This enables you to conduct drills for audit and compliance, and recover data during the unavailability of the primary region in Azure in the case of a disaster.
backup Backup Create Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-create-recovery-services-vault.md
For more information about backup and restore with Cross Region Restore, see the
- [Cross Region Restore for Azure VMs](backup-azure-arm-restore-vms.md#cross-region-restore) - [Cross Region Restore for SQL Server databases](restore-sql-database-azure-vm.md#cross-region-restore) - [Cross Region Restore for SAP HANA databases](sap-hana-db-restore.md#cross-region-restore)-- [Cross Region Restore for MARS (Preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore-preview)
+- [Cross Region Restore for MARS (Preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore)
## Set encryption settings
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
Title: Overview of the Backup vaults description: An overview of Backup vaults. Previously updated : 07/05/2023 Last updated : 02/01/2024
This section discusses the options available for encrypting your backup data sto
By default, all your data is encrypted using platform-managed keys. You don't need to take any explicit action from your end to enable this encryption. It applies to all workloads being backed up to your Backup vault.
-## Cross Region Restore support for PostgreSQL using Azure Backup (preview)
+## Cross Region Restore support for PostgreSQL using Azure Backup
Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region. However, Cross Region Restore enables you to access and perform restores from the secondary region recovery points even when no outage occurs in the primary region; thus, enables you to perform drills to assess regional resiliency.
-Learn [how to perform Cross Region Restore](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal-preview).
+Learn [how to perform Cross Region Restore](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal).
>[!Note]
>- Cross Region Restore is now available for PostgreSQL backups protected in Backup vaults.
->- Backup vaults enabled with Cross Region Restore will be automatically charged at [RA-GRS rates](https://azure.microsoft.com/pricing/details/backup/) for the PostgreSQL backups stored in the vault once the feature is generally available.
+>- Backup vaults enabled with Cross Region Restore are automatically charged at [RA-GRS rates](https://azure.microsoft.com/pricing/details/backup/) for the PostgreSQL backups stored in the vault.
## Next steps -- [Create and manage Backup vault](create-manage-backup-vault.md)
+- [Create and manage Backup vault](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal).
backup Create Manage Backup Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/create-manage-backup-vault.md
Title: Create and manage Backup vaults description: Learn how to create and manage the Backup vaults. Previously updated : 08/10/2023 Last updated : 02/01/2024
Troubleshoot the following common issues you might encounter during Backup vault
**Cause**: You may face this error if you try to move multiple Backup vaults in a single attempt.
-**Recommentation**: Ensure that only one Backup vault is selected for every move operation.
+**Recommendation**: Ensure that only one Backup vault is selected for every move operation.
#### UserErrorBackupVaultResourceMoveNotAllowedUntilResourceProvisioned
Troubleshoot the following common issues you might encounter during Backup vault
**Recommendation**: Remove the Managed Identity from the existing Tenant; move the resource and add it again to the new one.
-## Perform Cross Region Restore using Azure portal (preview)
+## Perform Cross Region Restore using Azure portal
-Follow these steps:
+The Cross Region Restore option allows you to restore data in a secondary Azure paired region. To configure Cross Region Restore for the backup vault: 
1. Sign in to [Azure portal](https://portal.azure.com/).
-1. [Create a new Backup vault](create-manage-backup-vault.md#create-backup-vault) or choose an existing Backup vault, and then enable Cross Region Restore by going to **Properties** > **Cross Region Restore (Preview)**, and choose **Enable**.
+1. [Create a new Backup vault](create-manage-backup-vault.md#create-backup-vault) or choose an existing Backup vault, and then enable Cross Region Restore by going to **Properties** > **Cross Region Restore**, and choose **Enable**.
:::image type="content" source="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png" alt-text="Screenshot shows how to enable Cross Region Restore for PostgreSQL database." lightbox="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png":::
Follow these steps:
:::image type="content" source="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png" alt-text="Screenshot shows how to check availability for the recovery points in the secondary region." lightbox="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png":::
-1. The recovery points available in the secondary region are now listed.
+ The recovery points available in the secondary region are now listed.
- Choose **Restore to secondary region**.
+1. Select **Restore to secondary region**.
:::image type="content" source="./media/backup-vault-overview/initiate-restore-to-secondary-region.png" alt-text="Screenshot shows how to initiate restores to the secondary region." lightbox="./media/backup-vault-overview/initiate-restore-to-secondary-region.png":::
Follow these steps:
:::image type="content" source="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png" alt-text="Screenshot shows how to monitor the postgresql restore to the secondary region." lightbox="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png":::
+> [!NOTE]
+> Cross Region Restore is currently only available for PostGreSQL servers.
+ ## Cross Subscription Restore using Azure portal
+ Some Backup vault datasources support restore to a subscription different from that of the source machine. Cross Subscription Restore (CSR) is enabled for existing vaults by default, and you can use it if supported for the intended datasource.
You can also select the state of CSR during the creation of a Backup vault.
>- Once CSR is permanently disabled on a vault, it can't be re-enabled because this is an irreversible operation.
>- If CSR is disabled but not permanently disabled, you can reverse the operation by selecting **Vault** > **Properties** > **Cross Subscription Restore** > **Enable**.
>- If a Backup vault is moved to a different subscription when CSR is disabled or permanently disabled, restore to the original subscription fails.
-
++ ## Next steps -- [Configure backup on Azure PostgreSQL databases](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases)
+- [Configure backup on Azure PostgreSQL databases](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases)
backup Quick Cross Region Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-cross-region-restore.md
+
+ Title: Quickstart - Restore a PostgreSQL database across regions using Azure Backup
+description: Learn how to restore a PostgreSQL database across regions by using Azure Backup.
+ Last updated : 02/01/2024++++
+# Quickstart: Restore a PostgreSQL database across regions by using Azure Backup
+
+This quickstart describes how to enable Cross Region Restore on your Backup vault to restore the data to an alternate region when the primary region is down.
+
+The Cross Region Restore option allows you to restore data in a secondary [Azure paired region](/azure/availability-zones/cross-region-replication-azure) even when no outage occurs in the primary region, enabling you to perform drills when there's an audit or compliance requirement.
+
+> [!NOTE]
+>- Currently, Geo-redundant Storage (GRS) vaults with Cross Region Restore enabled can't be changed to Zone-redundant Storage (ZRS) or Locally redundant Storage (LRS) after the protection starts for the first time.
+>- Cross Regional Restore (CRR) with Cross Subscription Restore (CSR) is currently not supported.
+
+## Prerequisites
+
+To begin with the Cross Region Restore, ensure that:
+
+- A Backup vault with Cross Region Restore configured. [Create one](./create-manage-backup-vault.md#create-a-backup-vault) if you don't already have a Backup vault.
+- A PostgreSQL database is protected by using Azure Backup, and at least one full backup has been run. To protect and back up a database, see [Back up Azure Database for PostgreSQL server](backup-azure-database-postgresql.md).
+
+## Restore the database using Azure portal
+
+To restore the database to the secondary region using the Azure portal, follow these steps:
+
+1. Sign in to [Azure portal](https://portal.azure.com/).
+1. To check the available recovery point in the secondary region, go to the **Backup center** > **Backup Instances**.
+1. Filter to **Azure Database for PostgreSQL servers**, then filter **Instance Region** as *Secondary Region*.
+1. Select the required Backup instance.
+
+ The recovery points available in the secondary region are now listed.
+
+1. Select **Restore to secondary region** to review the target region selected, and then select the appropriate recovery point and restore parameters.
+ You can also trigger restores from the respective backup instance.
+
+ :::image type="content" source="./media/create-manage-backup-vault/restore-to-secondary-region.png" alt-text="Screenshot showing how to restore to secondary region." lightbox="./media/create-manage-backup-vault/restore-to-secondary-region.png":::
++
+
+1. Once the restore starts, you can monitor the completion of the restore operation under **Backup Jobs** of the Backup vault by filtering **Datasource type** to *Azure Database for PostgreSQL servers*  and **Instance Region** to *Secondary Region*.
+
+## Next steps
+
+- [Learn about the Cross Region Restore](./tutorial-cross-region-restore.md) feature.
backup Quick Secondary Region Restore Postgresql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-secondary-region-restore-postgresql-powershell.md
+
+ Title: Quickstart - Cross region restore for PostgreSQL database with PowerShell by using Azure Backup
+description: In this Quickstart, learn how to restore PostgreSQL database across region with the Azure PowerShell module.
+ms.devlang: azurecli
+ Last updated : 02/01/2024+++++
+# Quickstart: Restore Azure Database for PostgreSQL server across regions with PowerShell by using Azure Backup
+
+This quickstart describes how to configure and perform cross-region restore for Azure Database for PostgreSQL server with Azure PowerShell.
+
+[Azure Backup](backup-overview.md) allows you to back up and restore the Azure Database for PostgreSQL server. The [Azure PowerShell AZ](/powershell/azure/new-azureps-module-az) module allows you to create and manage Azure resources from the command line or in scripts. If you want to restore the PostgreSQL database across regions by using the Azure portal, see [this Quickstart](quick-cross-region-restore.md).
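+
+Before running the cmdlets in this article, make sure the `Az.DataProtection` module is installed and that you're signed in; a minimal setup sketch:
+
+```azurepowershell
+# One-time setup: install the data protection module and sign in.
+Install-Module -Name Az.DataProtection -Scope CurrentUser
+Connect-AzAccount
+```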
+
+## Enable Cross Region Restore for Backup vault
+
+To enable the Cross Region Restore feature on the Backup vault that has Geo-redundant Storage enabled, run the following cmdlet:
+
+```azurepowershell
+Update-AzDataProtectionBackupVault -SubscriptionId $subscriptionId -ResourceGroupName $resourceGroupName -VaultName $vaultName -CrossRegionRestoreState $CrossRegionRestoreState
+```
+
+>[!Note]
+>You can't disable Cross Region Restore once protection has started with this feature enabled.
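+
+For context, here's a runnable sketch of the call above with example variable values. The subscription ID, resource group, and vault names are placeholders, and `Enabled` is assumed to be the accepted value for `-CrossRegionRestoreState`.
+
+```azurepowershell
+# Example values; replace with your own subscription, resource group, and vault.
+$subscriptionId          = "00000000-0000-0000-0000-000000000000"
+$resourceGroupName       = "myBackupRG"
+$vaultName               = "myBackupVault"
+$CrossRegionRestoreState = "Enabled"   # assumed value; CRR can't be disabled once protection starts
+
+Update-AzDataProtectionBackupVault -SubscriptionId $subscriptionId `
+    -ResourceGroupName $resourceGroupName -VaultName $vaultName `
+    -CrossRegionRestoreState $CrossRegionRestoreState
+```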
++
+## Configure restore for the PostgreSQL database to a secondary region
+
+To restore the database to a secondary region after enabling Cross Region Restore, run the following cmdlets:
+
+1. Fetch the backup instances from the secondary region.
+
+ ```azurepowershell
+    $instance = Search-AzDataProtectionBackupInstanceInAzGraph -Subscription $subscriptionId -ResourceGroup $resourceGroupName -Vault $vaultName -DatasourceType AzureDatabaseForPostgreSQL
+ ```
+
+2. Once you identify the backed-up instance, fetch the relevant recovery point by using the `Get-AzDataProtectionRecoveryPoint` cmdlet.
+
+ ```azurepowershell
+    $recoveryPointsCrr = Get-AzDataProtectionRecoveryPoint -BackupInstanceName $instance.Name -ResourceGroupName $resourceGroupName -VaultName $vaultName -SubscriptionId $subscriptionId -UseSecondaryRegion
+ ```
+
+3. Prepare the restore request.
+
+ To restore the database, follow one of the following methods:
+
+ **Restore as database**
+
+ Follow these steps:
+
+ 1. Create the Azure Resource Manager ID for the new PostgreSQL database. You need to create this with the [target PostgreSQL server to which permissions are assigned](/azure/backup/restore-postgresql-database-ps#set-up-permissions). Additionally, create the required *PostgreSQL database name*.
+
+ For example, you can name a PostgreSQL database as `emprestored21` under a target PostgreSQL server `targetossserver` in a resource group `targetrg` with a different subscription.
+
+ ```azurepowershell
+        $targetResourceId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.DBforPostgreSQL/servers/targetossserver/databases/emprestored21"
+ ```
+    2. Use the `Initialize-AzDataProtectionRestoreRequest` cmdlet to prepare the restore request with relevant details.
+
+ ```azurepowershell
+ $OssRestoreReq = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDatabaseForPostgreSQL -SourceDataStore VaultStore -RestoreLocation $vault.ReplicatedRegion[0] -RestoreType AlternateLocation -RecoveryPoint $recoveryPointsCrr[0].Property.RecoveryPointId -TargetResourceId $targetResourceId -SecretStoreURI $secretURI -SecretStoreType AzureKeyVault
+ ```
+
+ **Restore as files**
+
+ Follow these steps:
+
+ 1. Fetch the *Uniform Resource Identifier (URI)* of the container, in the [storage account to which permissions are assigned](/azure/backup/restore-postgresql-database-ps#set-up-permissions).
+
+ For example, a container named `testcontainerrestore` under a storage account `testossstorageaccount` with a different subscription.
+
+ ```azurepowershell
+    $targetContainerURI = "https://testossstorageaccount.blob.core.windows.net/testcontainerrestore"
+ ```
+
+    2. Use the `Initialize-AzDataProtectionRestoreRequest` cmdlet to prepare the restore request with relevant details.
+
+ ```azurepowershell
+ $OssRestoreReq = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDatabaseForPostgreSQL -SourceDataStore VaultStore -RestoreLocation $vault.ReplicatedRegion[0] -RestoreType RestoreAsFiles -RecoveryPoint $recoveryPointsCrr[0].Property.RecoveryPointId -TargetContainerURI $targetContainerURI -FileNamePrefix $fileNamePrefix
+ ```
+
+## Validate the PostgreSQL database restore configuration
+
+To validate whether the restore operation is likely to succeed, run the following cmdlet:
+
+```azurepowershell
+$validate = Test-AzDataProtectionBackupInstanceRestore -ResourceGroupName $ResourceGroupName -Name $instance[0].Name -VaultName $VaultName -RestoreRequest $OssRestoreReq -SubscriptionId $SubscriptionId -RestoreToSecondaryRegion #-Debug
+```
+
+## Trigger the restore operation
+
+To trigger the restore operation, run the following cmdlet:
+
+```azurepowershell
+$restoreJob = Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $instance.Name -ResourceGroupName $ResourceGroupName -VaultName $vaultName -SubscriptionId $SubscriptionId -Parameter $OssRestoreReq -RestoreToSecondaryRegion # -Debug
+```
+
+## Track the restore job
+
+To monitor the restore job progress, choose one of the methods:
+
+- To get the complete list of Cross Region Restore jobs from the secondary region, run the following cmdlet:
+
+ ```azurepowershell
+ $job = Get-AzDataProtectionJob -ResourceGroupName $resourceGroupName -SubscriptionId $subscriptionId -VaultName $vaultName -UseSecondaryRegion
+ ```
+
+- To get a single job detail, run the following cmdlet:
+
+ ```azurepowershell
+    # Assumes -Id accepts the ARM ID of a job returned by the previous command.
+    $jobDetail = Get-AzDataProtectionJob -Id $job[0].Id
+ ```
+
+## Next steps
+
+- Learn how to [configure and run Cross Region Restore for Azure database for PostgreSQL](tutorial-cross-region-restore.md).
backup Restore Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql.md
Title: Restore Azure Database for PostgreSQL description: Learn about how to restore Azure Database for PostgreSQL backups. Previously updated : 01/21/2022 Last updated : 02/01/2024
az role assignment create --assignee $VaultMSI_AppId --role "Storage Blob Data
``` Replace the assignee parameter with the _Application ID_ of the vault's MSI and the scope parameter to refer to your specific container. To get the **Application ID** of the vault MSI, select **All applications** under **Application type**. Search for the vault name and copy the Application ID.
- :::image type="content" source="./media/restore-azure-database-postgresql/select-application-type-for-id-inline.png" alt-text="Screenshot showing the process to get the Application I D of the vault MSI." lightbox="./media/restore-azure-database-postgresql/select-application-type-for-id-expanded.png":::
+ :::image type="content" source="./media/restore-azure-database-postgresql/select-application-type-for-id-inline.png" alt-text="Screenshot showing the process to get the Application ID of the vault MSI." lightbox="./media/restore-azure-database-postgresql/select-application-type-for-id-expanded.png":::
- :::image type="content" source="./media/restore-azure-database-postgresql/copy-vault-id-inline.png" alt-text="Screenshot showing the process to copy the Application I D of the vault." lightbox="./media/restore-azure-database-postgresql/copy-vault-id-expanded.png":::
+ :::image type="content" source="./media/restore-azure-database-postgresql/copy-vault-id-inline.png" alt-text="Screenshot showing the process to copy the Application ID of the vault." lightbox="./media/restore-azure-database-postgresql/copy-vault-id-expanded.png":::
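If you prefer the Az PowerShell module to the Azure CLI command shown above, an equivalent sketch follows. The Application ID and the container scope are placeholders that you copy from the portal.

```azurepowershell
# Grant the vault's managed identity access to the target container (values are illustrative).
New-AzRoleAssignment -ApplicationId "11111111-2222-3333-4444-555555555555" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>"
```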
+## Restore databases across regions
+
+As one of the restore options, Cross Region Restore (CRR) allows you to restore Azure Database for PostgreSQL servers in a secondary region, which is an Azure-paired region.
+
+### Considerations
+
+- To begin using the feature, read the [Before you start](create-manage-backup-vault.md#before-you-start) section.
+- To check if Cross Region Restore is enabled, see the [Configure Cross Region Restore](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal) section.
++
+### View backup instances in secondary region
+
+If CRR is enabled, you can view the backup instances in the secondary region.
+
+1. From the [Azure portal](https://portal.azure.com/), go to **Backup Vault** > **Backup Instances**.
+1. Select the filter as **Instance Region == Secondary Region**.
++
+ :::image type="content" source="./media/create-manage-backup-vault/select-secondary-region-as-instance-region.png" alt-text="Screenshot showing the selection of the secondary region as the instance region." lightbox="./media/create-manage-backup-vault/select-secondary-region-as-instance-region.png":::
+
+ >[!Note]
+    > Only Backup Management Types supporting the CRR feature are listed. Currently, restoring primary region data to a secondary region is supported only for PostgreSQL servers.
++
+### Restore in secondary region
+
+The secondary region restore experience is similar to the primary region restore.
+
+When you configure details in the **Restore Configuration** pane, you're prompted to provide only secondary region parameters. So, a vault should already exist in the secondary region and the PostgreSQL server should be registered to the vault in the secondary region.
+
+Follow these steps:
++
+1. Select **Backup Instance name** to view details.
+2. Select **Restore to secondary region**.
+
+ :::image type="content" source="./media/create-manage-backup-vault/restore-to-secondary-region.png" alt-text="Screenshot showing how to restore to secondary region." lightbox="./media/create-manage-backup-vault/restore-to-secondary-region.png":::
+
+1. Select the restore point, the region, and the resource group.
+1. Select **Restore**.
+ >[!Note]
+ > - After the restore is triggered in the data transfer phase, the restore job can't be canceled.
+    > - The roles required to perform a restore operation across regions are the *Backup Operator* role in the subscription and *Contributor (write)* access on the source and target virtual machines. To view backup jobs, *Backup Reader* is the minimum permission required in the subscription.
+ > - The RPO for the backup data to be available in secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (that can be set to a minimum of 15 minutes).
++
+### Monitoring secondary region restore jobs
+
+1. In the Azure portal, go to **Monitoring + Reporting** > **Backup Jobs**.
+2. Filter **Instance Region** to **Secondary Region** to view the jobs in the secondary region.
+
+ :::image type="content" source="./media/create-manage-backup-vault/view-jobs-in-secondary-region.png" alt-text="Screenshot showing how to view jobs in secondary region." lightbox="./media/create-manage-backup-vault/view-jobs-in-secondary-region.png":::
++ ## Next steps [Troubleshoot PostgreSQL database backup by using Azure Backup](backup-azure-database-postgresql-troubleshoot.md)
backup Tutorial Cross Region Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-cross-region-restore.md
+
+ Title: Tutorial - Configure and run Cross Region Restore for Azure Database for PostgreSQL
+description: Learn how to configure and run Cross Region Restore for Azure Database for PostgreSQL using Azure Backup.
+ Last updated : 02/01/2024++++
+# Tutorial: Configure and run Cross Region Restore for Azure Database for PostgreSQL by using Azure Backup
+
+This tutorial describes how you can enable and run Cross Region Restore to restore Azure Database for PostgreSQL servers in a secondary region.
+
+The Cross Region Restore option allows you to restore data in a secondary [Azure paired region](/azure/availability-zones/cross-region-replication-azure) even when no outage occurs in the primary region; thus, enabling you to perform drills to assess regional resiliency.
+
+> [!NOTE]
+>- Currently, a Geo-redundant Storage (GRS) vault with Cross Region Restore enabled can't be changed to Zone-redundant Storage (ZRS) or Locally-redundant Storage (LRS) after protection starts for the first time.
+>- The secondary region Recovery Point Objective (RPO) is currently *36 hours*. This is because the RPO in the primary region is 24 hours, and it can take up to 12 hours to replicate the backup data from the primary to the secondary region.
+
+## Considerations
+
+Before you begin Cross Region Restore for PostgreSQL server, see the following information:
+
+- Cross Region Restore is supported only for a Backup vault that uses Storage Redundancy = Geo-redundant.
+- Azure Database for PostgreSQL servers are supported. You can restore databases or restore them as files.
+- Review the [support matrix](./backup-support-matrix.md) for a list of supported managed types and regions.
+- The Cross Region Restore option incurs additional charges. [Learn more about pricing](https://azure.microsoft.com/pricing/details/backup/).
+- Once you enable Cross Region Restore, it might take up to 48 hours for the backup items to be available in secondary regions.
+- Review the [permissions required to use Cross Region Restore](backup-rbac-rs-vault.md#minimum-role-requirements-for-azure-vm-backup).
+
+A vault created with GRS redundancy includes the option to configure the Cross Region Restore feature. Every GRS vault has a banner that links to the documentation.
+
+## Enable Cross Region Restore on a Backup vault
+
+The Cross Region Restore option allows you to restore data in a secondary Azure paired region.
+
+To configure Cross Region Restore for the backup vault, follow these steps:
+
+1. Sign in to [Azure portal](https://portal.azure.com/).
+1. [Create a new Backup vault](create-manage-backup-vault.md#create-backup-vault) or choose an existing Backup vault.
+1. Enable Cross Region Restore:
+ 1. Select **Properties** (under **Manage**).
+ 1. Under **Vault Settings**, select **Update** for Cross Region Restore.
+ 1. Under **Cross Region Restore**, select **Enable**.
+
+ :::image type="content" source="./media/tutorial-cross-region-restore/update-for-cross-region-restore.png" alt-text="Screenshot showing the selection of update for cross region restore.":::
+
+ :::image type="content" source="./media/tutorial-cross-region-restore/enable-cross-region-restore.png" alt-text="Screenshot shows the Enable cross region restore option.":::
+
+## View backup instances in secondary region
+
+If CRR is enabled, you can view the backup instances in the secondary region.
+
+Follow these steps:
+
+1. From the [Azure portal](https://portal.azure.com/), go to your Backup vault.
+1. Select **Backup instances** under **Manage**.
+1. Select **Instance Region** == *Secondary Region* on the filters.
+
+ :::image type="content" source="./media/tutorial-cross-region-restore/backup-instances-secondary-region.png" alt-text="Screenshot showing the secondary region filter." lightbox="./media/tutorial-cross-region-restore/backup-instances-secondary-region.png":::
++
+## Restore the database to the secondary region
+
+To restore the database to the secondary region, follow these steps:
+
+1. Go to the Backup vault's **Overview** pane, and then configure a backup for the PostgreSQL database.
+ > [!Note]
+ > Once the backup is complete in the primary region, it can take up to 12 hours for the recovery point in the primary region to get replicated to the secondary region.
+1. To check the availability of the recovery point in the secondary region, go to **Backup center** > **Backup Instances**.
+1. Filter to **Azure Database for PostgreSQL servers**, then filter Instance region as **Secondary Region**, and then select the required Backup Instance.
+ :::image type="content" source="./media/create-manage-backup-vault/view-jobs-in-secondary-region.png" alt-text="Screenshot showing how to view jobs in secondary region." lightbox="./media/create-manage-backup-vault/view-jobs-in-secondary-region.png":::
+
+ The recovery points available in the secondary region are now listed.
+
+1. Select **Restore to secondary region**.
+
+ You can also trigger restores from the respective backup instance.
+1. Select **Restore to secondary region** to review the selected target region, and then select the appropriate recovery point and restore parameters.
+1. After the restore starts, you can monitor the completion of the restore operation under **Backup Jobs** of the Backup vault by filtering the jobs workload type to **Azure Database for PostgreSQL servers** and the instance region to **Secondary Region**.
++
+## Next steps
+
+For more information about backup and restore with Cross Region Restore, see:
+
+- [Cross Region Restore for PostgreSQL servers](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal).
+- [Restore Azure Database for PostgreSQL backups](./restore-azure-database-postgresql.md).
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about the new features in the Azure Backup service. Previously updated : 12/25/2023 Last updated : 02/01/2024 - ignite-2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- January 2024
+ - [Cross Region Restore support for PostgreSQL by using Azure Backup is now generally available](#cross-region-restore-support-for-postgresql-by-using-azure-backup-is-now-generally-available)
- December 2023 - [Vaulted backup and Cross Region Restore for support for AKS (preview)](#vaulted-backup-and-cross-region-restore-for-support-for-aks-preview) - November 2023
You can learn more about the new releases by bookmarking this page or by [subscr
- [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Cross Region Restore support for PostgreSQL by using Azure Backup is now generally available
+
+Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region. However, Cross Region Restore enables you to access and perform restores from the secondary region recovery points even when no outage occurs in the primary region, which enables you to perform drills to assess regional resiliency.
+
+For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup).
+ ## Vaulted backup and Cross Region Restore for support for AKS (preview) Azure Backup supports storing AKS backups offsite, which is protected against tenant compromise, malicious attacks and ransomware threats. Along with backup stored in a vault, you can also use the backups in a regional disaster scenario and recover backups.
For more information, see [Save and manage MARS agent passphrase securely in Azu
You can now restore data from the secondary region for MARS Agent backups using Cross Region Restore on Recovery Services vaults with Geo-redundant storage (GRS) replication. You can use this capability to do recovery drills from the secondary region for audit or compliance. If disasters cause partial or complete unavailability of the primary region, you can directly access the backup data from the secondary region.
-For more information, see [Cross Region Restore for MARS (preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore-preview).
+For more information, see [Cross Region Restore for MARS (preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore).
## SAP HANA System Replication database backup support is now generally available
For more information, see [Back up a HANA system with replication enabled](sap-h
Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region.
-For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup-preview).
+For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup).
## Microsoft Azure Backup Server v4 is now generally available
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
### Phone number leasing charges |Number type |Monthly fee | |--|--|
+|Geographic |USD 3.00/mo |
|Toll-Free |USD 16.00/mo | + ### Usage charges |Number type |To make calls* |To receive calls| |-||-|
communication-services Migrating To Azure Communication Services Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/migrating-to-azure-communication-services-calling.md
Azure Communication Services offers various call types. The type of call you cho
## Installation
-### Install the Azure Communication Services calling SDK
+### Install the Azure Communication Services Calling SDK
Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript. ```console
Call creation and start are synchronous. The `call` instance enables you to subs
call.on('stateChanged', async () => { console.log(`Call state changed: ${call.state}`) }); ```
-### Azure Communication Services 1:1 Call
+#### 1:1 Call
To call another Azure Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's `CommunicationUserIdentifier` that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md). ```javascript
const userCallee = { communicationUserId: '\<Azure_Communication_Services_USER_I
const oneToOneCall = callAgent.startCall([userCallee]); ```
-### Azure Communication Services Room Call
+#### Rooms Call
To join a `Room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the `join` method and pass the context instance. ```javascript
const call = callAgent.join(context);
``` A **Room** offers application developers better control over who can join a call, when they meet and how they collaborate. To learn more about **Rooms**, see the [Rooms overview](../concepts/rooms/room-concept.md), or see [Quickstart: Join a room call](../quickstarts/rooms/join-rooms-call.md).
-### Azure Communication Services Group Call
+#### Group Call
To start a new group call or join an ongoing group call, use the `join` method and pass an object with a `groupId` property. The `groupId` value must be a GUID. ```javascript
const context = { groupId: '<GUID>'};
const call = callAgent.join(context); ```
-### Azure Communication Services Teams call
+#### Teams call
Start a synchronous one-to-one or group call using the `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events. ```javascript
callAgent.on('callsUpdated', (event) => {
For Azure Communication Services Teams implementation, see how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call).
-## Adding participants to call
+## Adding and removing participants in a call
### Twilio
remoteParticipant.on('stateChanged', () => {
}); ```
-## Video
+## Video calling
-### Starting and stopping video
+## Starting and stopping video
-#### Twilio
+### Twilio
```javascript const videoTrack = await twilioVideo.createLocalVideoTrack({ constraints });
localParticipant.unpublishTrack(videoTrack);
Then create a new Video Track with the correct constraints.
-#### Azure Communication Services
+### Azure Communication Services
To start a video while on a call, you need to enumerate cameras using the `getCameras` method on the `deviceManager` object. Then create a new instance of `LocalVideoStream` with the desired camera and pass the `LocalVideoStream` object into the `startVideo` method of an existing call object: ```javascript
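// A minimal sketch of the sequence described above (not the article's verbatim sample):
// enumerate cameras, wrap one in a LocalVideoStream, and pass it to startVideo.
// Assumes `call` and `deviceManager` were obtained earlier from the Calling SDK.
const cameras = await deviceManager.getCameras();
const localVideoStream = new LocalVideoStream(cameras[0]);
await call.startVideo(localVideoStream);
// To stop sending video later, pass the same stream to stopVideo:
// await call.stopVideo(localVideoStream);
```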
await blurProcessor.loadModel();
``` As soon as the model is loaded, you can add the background to the video track using the `addProcessor` method:-
-| videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' }); |
-||
+```javascript
+videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' });
+```
#### Azure Communication Services
-Use the npm install command to install the Azure Communication Services Effects SDK for JavaScript.
+Use the npm install command to install the [Azure Communication Services Effects SDK](../quickstarts/voice-video-calling/get-started-video-effects.md?pivots=platform-web) for JavaScript.
```console npm install @azure/communication-calling-effects --save ```
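After the package is installed, the effect is applied through the Calling SDK's video effects feature. The following is a minimal sketch that assumes an existing `localVideoStream` created from a selected camera; see the linked quickstart for the exact API surface.

```javascript
import { Features } from '@azure/communication-calling';
import { BackgroundBlurEffect } from '@azure/communication-calling-effects';

// Minimal sketch: `localVideoStream` is assumed to exist (created from a selected camera),
// and browser support for video effects is assumed.
const videoEffectsFeature = localVideoStream.feature(Features.VideoEffects);
await videoEffectsFeature.startEffects(new BackgroundBlurEffect());

// To remove the effect later:
// await videoEffectsFeature.stopEffects();
```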
You can learn more about ensuring precall readiness in [Pre-Call diagnostics](..
## Event listeners
-Twilio
+### Twilio
```javascript twilioRoom.on('participantConneted', (participant) => {
connectors Compare Built In Azure Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/compare-built-in-azure-connectors.md
ms.suite: integration
Last updated 01/04/2024-
-# As a developer, I want to understand the differences between built-in and Azure connectors in Azure Logic Apps (Standard).
+# Customer intent: As a developer, I want to understand the differences between built-in and Azure connectors in Azure Logic Apps (Standard).
# Differences between built-in operations and Azure connectors in Azure Logic Apps (Standard)
connectors Connectors Azure Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-application-insights.md
Last updated 01/10/2024 tags: connectors
-# As a developer, I want to get telemetry from an Application Insights resource to use with my workflow in Azure Logic Apps.
+# Customer intent: As a developer, I want to get telemetry from an Application Insights resource to use with my workflow in Azure Logic Apps.
# Connect to Azure Application Insights from workflows in Azure Logic Apps
connectors Connectors Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-monitor-logs.md
Last updated 01/10/2024 tags: connectors
-# As a developer, I want to get log data from my Log Analytics workspace or telemetry from my Application Insights resource to use with my workflow in Azure Logic Apps.
+# Customer intent: As a developer, I want to get log data from my Log Analytics workspace or telemetry from my Application Insights resource to use with my workflow in Azure Logic Apps.
# Connect to Log Analytics or Application Insights from workflows in Azure Logic Apps
connectors Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/introduction.md
Last updated 01/10/2024
-# As a developer, I want to learn how connectors help me access data, events, and resources in other apps, services, systems, and platforms from my workflow in Azure Logic Apps.
+# Customer intent: As a developer, I want to learn how connectors help me access data, events, and resources in other apps, services, systems, and platforms from my workflow in Azure Logic Apps.
# What are connectors in Azure Logic Apps
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
Subnet address ranges can't overlap with the following ranges reserved by Azure
- 172.31.0.0/16 - 192.0.2.0/24
+If you created your container apps environment with a custom service CIDR, make sure your container app's subnet (or any peered subnet) doesn't conflict with your custom service CIDR range.
+ ### Subnet configuration with CLI
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
Your virtual networks in Azure have default route tables in place when you creat
| Setting | Action | |--|--|
- | **Address prefix** | Select the virtual network for your container app. |
- | **Next hop type** | Select the subnet your for container app. |
+ | **Virtual network** | Select the virtual network for your container app. |
+ | **Subnet** | Select the subnet for your container app. |
1. Select **OK**.
container-registry Container Registry Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-streaming.md
Last updated 12/14/2023
-#customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
+# Customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
# Artifact streaming in Azure Container Registry (Preview)
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
az rest `
--url $endpoint ` --body "{}"
+```
+ #### [API for MongoDB](#tab/mongodb/azure-powershell) + For **provisioned throughput** containers, use `Invoke-AzCosmosDBMongoDBCollectionMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation. ```azurepowershell-interactive+ $parameters = @{ ResourceGroupName = "<resource-group-name>" AccountName = "<cosmos-account-name>"
$parameters = @{
Name = "<cosmos-container-name>" WhatIf = $true }+ Invoke-AzCosmosDBMongoDBCollectionMerge @parameters ``` Start the merge by running the same command without the `-WhatIf` parameter. - ```azurepowershell-interactive $parameters = @{ ResourceGroupName = "<resource-group-name>"
az cosmosdb mongodb database merge \
```
-```http-interactive
-POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/mongodbDatabases/{databaseName}/partitionMerge?api-version=2023-11-15-preview
-```
--
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
The **Access control (IAM)** pane in the Azure portal is used to configure Azure
:::image type="content" source="./media/role-based-access-control/database-security-identity-access-management-rbac.png" alt-text="Access control (IAM) in the Azure portal - demonstrating database security."::: + ## Custom roles In addition to the built-in roles, users may also create [custom roles](../role-based-access-control/custom-roles.md) in Azure and apply these roles to service principals across all subscriptions within their Active Directory tenant. Custom roles provide users a way to create Azure role definitions with a custom set of resource provider operations. To learn which operations are available for building custom roles for Azure Cosmos DB see, [Azure Cosmos DB resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb)
In addition to the built-in roles, users may also create [custom roles](../role-
> [!NOTE] > Custom role assignments may not always be visible in the Azure portal.
+> [!WARNING]
+> Account keys are not automatically rotated or revoked after management RBAC changes. These keys give access to data plane operations. When you remove a user's access to the keys, it's recommended to rotate the keys as well. For data plane RBAC, the Azure Cosmos DB backend rejects requests once the roles/claims no longer match. If a user requires temporary access to data plane operations, it's recommended to use [Azure Cosmos DB RBAC](how-to-setup-rbac.md) for the data plane.
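If you automate key rotation, one option is the Azure Cosmos DB management SDK. The following is a minimal sketch, assuming the `@azure/arm-cosmosdb` and `@azure/identity` packages; exact method names can differ between SDK versions, and the subscription, resource group, and account values are placeholders.

```javascript
import { CosmosDBManagementClient } from '@azure/arm-cosmosdb';
import { DefaultAzureCredential } from '@azure/identity';

// Sketch only: subscription, resource group, and account names are placeholders.
const client = new CosmosDBManagementClient(new DefaultAzureCredential(), '<subscription-id>');

// Regenerate (rotate) the primary key after removing a user's access to the keys.
await client.databaseAccounts.beginRegenerateKeyAndWait(
    '<resource-group-name>',
    '<cosmos-account-name>',
    { keyKind: 'primary' }
);
```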
+ ## <a id="prevent-sdk-changes"></a>Preventing changes from the Azure Cosmos DB SDKs The Azure Cosmos DB resource provider can be locked down to prevent any changes to resources from a client connecting using the account keys (that is applications connecting via the Azure Cosmos DB SDK). This feature may be desirable for users who want higher degrees of control and governance for production environments. Preventing changes from the SDK also enables features such as resource locks and diagnostic logs for control plane operations. The clients connecting from Azure Cosmos DB SDK will be prevented from changing any property for the Azure Cosmos DB accounts, databases, containers, and throughput. The operations involving reading and writing data to Azure Cosmos DB containers themselves are not impacted.
cost-management-billing Migrate Ea Marketplace Store Charge Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-marketplace-store-charge-api.md
+
+ Title: Migrate from EA Marketplace Store Charge API
+
+description: This article has information to help you migrate from the EA Marketplace Store Charge API.
++ Last updated : 01/31/2024++++++
+# Migrate from EA Marketplace Store Charge API
+
+EA customers who were previously using the Enterprise Reporting consumption.azure.com API to [get their marketplace store charges](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge) need to migrate to a replacement Azure Resource Manager API. This article helps you migrate by using the following instructions. It also explains the contract differences between the old API and the new API.
+
+Endpoints to migrate off:
+
+|Endpoint|API Comments|
+|||
+| `/v3/enrollments/{enrollmentNumber}/marketplacecharges` | • API method: GET <br><br> • Synchronous (non polling) <br><br> • Data format: JSON |
+| `/v3/enrollments/{enrollmentNumber}/billingPeriods/{billingPeriod}/marketplacecharges` | • API method: GET <br><br> • Synchronous (non polling) <br><br> • Data format: JSON |
+| `/v3/enrollments/{enrollmentNumber}/marketplacechargesbycustomdate?startTime=2017-01-01&endTime=2017-01-10` | • API method: GET <br><br> • Synchronous (non polling) <br><br> • Data format: JSON |
+
+## Assign permissions to an SPN to call the API
+
+Before calling the API, you need to configure a service principal with the correct permission. You use the service principal to call the API. For more information, see [Assign permissions to ACM APIs](cost-management-api-permissions.md).
+
+### Call the Marketplaces API
+
+Use the following request URIs when calling the new Marketplaces API. All Azure and Marketplace charges are merged into a single file that is available through the new solutions. You can identify which charges are *Azure* versus *Marketplace* charges by using the `PublisherType` field that is available in the new dataset.
+
+Your enrollment number should be used as the `billingAccountId`.
+
+#### Supported requests
+
+You can call the API using the following scopes:
+
+- Department: `/providers/Microsoft.Billing/departments/{departmentId}`
+- Enrollment: `/providers/Microsoft.Billing/billingAccounts/{billingAccountId}`
+- EnrollmentAccount: `/providers/Microsoft.Billing/enrollmentAccounts/{enrollmentAccountId}`
+- Management Group: `/providers/Microsoft.Management/managementGroups/{managementGroupId}`
+- Subscription: `/subscriptions/{subscriptionId}/`
+
+For subscription, billing account, department, enrollment account, and management group scopes you can also add a billing period to the scope using `/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}`. For example, to specify a billing period at the department scope, use `/providers/Microsoft.Billing/departments/{departmentId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}`.
+
+[List Marketplaces](/rest/api/consumption/marketplaces/list#marketplaceslistresult)
+
+```http
+GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/marketplaces
+```
+
+With optional parameters:
+
+```http
+https://management.azure.com/{scope}/providers/Microsoft.Consumption/marketplaces?$filter={$filter}&$top={$top}&$skiptoken={$skiptoken}
+```
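As a hedged sketch of what a client call might look like, the following JavaScript uses `fetch` with a placeholder scope and a bearer token acquired for the service principal configured earlier. The `api-version` value shown is an assumption; use a current Microsoft.Consumption API version.

```javascript
// Sketch only: <enrollment-number>, the api-version, and token acquisition are placeholders/assumptions.
const scope = 'providers/Microsoft.Billing/billingAccounts/<enrollment-number>';
const url = `https://management.azure.com/${scope}/providers/Microsoft.Consumption/marketplaces?api-version=2021-10-01`;

const response = await fetch(url, {
  headers: { Authorization: `Bearer ${accessToken}` } // token acquired for the service principal (not shown)
});
const { value = [] } = await response.json();
for (const item of value) {
  console.log(item.name, item.properties?.offerName, item.properties?.pretaxCost);
}
```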
+
+#### Response body changes
+
+Old response:
++
+```json
+[
+ {
+ "id": "id",
+ "subscriptionGuid": "00000000-0000-0000-0000-000000000000",
+ "subscriptionName": "subName",
+ "meterId": "2core",
+ "usageStartDate": "2015-09-17T00:00:00Z",
+ "usageEndDate": "2015-09-17T23:59:59Z",
+ "offerName": "Virtual LoadMaster&trade; (VLM) for Azure",
+ "resourceGroup": "Res group",
+ "instanceId": "id",
+ "additionalInfo": "{\"ImageType\":null,\"ServiceType\":\"Medium\"}",
+ "tags": "",
+ "orderNumber": "order",
+ "unitOfMeasure": "",
+ "costCenter": "100",
+ "accountId": 100,
+ "accountName": "Account Name",
+ "accountOwnerId": "account@live.com",
+ "departmentId": 101,
+ "departmentName": "Department 1",
+ "publisherName": "Publisher 1",
+ "planName": "Plan name",
+ "consumedQuantity": 1.15,
+ "resourceRate": 0.1,
+ "extendedCost": 1.11,
+ "isRecurringCharge": "False"
+ },
+ ...
+ ]
+```
+
+New response:
+
+```json
+ {
+ "id": "/subscriptions/subid/providers/Microsoft.Billing/billingPeriods/201702/providers/Microsoft.Consumption/marketPlaces/marketplacesId1",
+ "name": "marketplacesId1",
+ "type": "Microsoft.Consumption/marketPlaces",
+ "tags": {
+ "env": "newcrp",
+ "dev": "tools"
+ },
+ "properties": {
+ "accountName": "Account1",
+ "additionalProperties": "additionalProperties",
+ "costCenter": "Center1",
+ "departmentName": "Department1",
+ "billingPeriodId": "/subscriptions/subid/providers/Microsoft.Billing/billingPeriods/201702",
+ "usageStart": "2017-02-13T00:00:00Z",
+ "usageEnd": "2017-02-13T23:59:59Z",
+ "instanceName": "shared1",
+ "instanceId": "/subscriptions/subid/resourceGroups/Default-Web-eastasia/providers/Microsoft.Web/sites/shared1",
+ "currency": "USD",
+ "consumedQuantity": 0.00328,
+ "pretaxCost": 0.67,
+ "isEstimated": false,
+ "meterId": "00000000-0000-0000-0000-000000000000",
+ "offerName": "offer1",
+ "resourceGroup": "TEST",
+ "orderNumber": "00000000-0000-0000-0000-000000000000",
+ "publisherName": "xyz",
+ "planName": "plan2",
+ "resourceRate": 0.24,
+ "subscriptionGuid": "00000000-0000-0000-0000-000000000000",
+ "subscriptionName": "azure subscription",
+ "unitOfMeasure": "10 Hours",
+ "isRecurringCharge": false
+ }
+ }
+
+```
+
+## Next steps
+
+- Read the [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](migrate-ea-reporting-arm-apis-overview.md) article.
cost-management-billing Migrate Ea Usage Details Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-usage-details-api.md
description: This article has information to help you migrate from the EA Usage Details APIs. Previously updated : 11/17/2023 Last updated : 01/30/2024
The table below summarizes the different APIs that you may be using today to ing
| `/v3/enrollments/{enrollmentNumber}/usagedetails/submit?billingPeriod={billingPeriod}` | - API method: POST<br> - Asynchronous (polling based)<br> - Data format: CSV | | `/v3/enrollments/{enrollmentNumber}/usagedetails/submit?startTime=2017-04-01&endTime=2017-04-10` | - API method: POST<br> - Asynchronous (polling based)<br> - Data format: CSV |
-## Enterprise Marketplace Store Charge APIs to migrate off
-
-In addition to the usage details APIs outlined above, you'll need to migrate off the [Enterprise Marketplace Store Charge APIs](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge). All Azure and Marketplace charges have been merged into a single file that is available through the new solutions. You can identify which charges are *Azure* versus *Marketplace* charges by using the `PublisherType` field that is available in the new dataset. The table below outlines the applicable APIs. All of the following APIs are behind the *https://consumption.azure.com* endpoint.
-
-| Endpoint | API Comments |
-| | |
-| `/v3/enrollments/{enrollmentNumber}/marketplacecharges` | - API method: GET<br> - Synchronous (non polling)<br> - Data format: JSON |
-| `/v3/enrollments/{enrollmentNumber}/billingPeriods/{billingPeriod}/marketplacecharges` | - API method: GET<br> - Synchronous (non polling)<br> - Data format: JSON |
-| `/v3/enrollments/{enrollmentNumber}/marketplacechargesbycustomdate?startTime=2017-01-01&endTime=2017-01-10` | - API method: GET<br> - Synchronous (non polling)<br> - Data format: JSON |
- ## Data field mapping The table below provides a summary of the old fields available in the solutions you're currently using along with the field to use in the new solutions.
cost-management-billing Tutorial Improved Exports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-improved-exports.md
+
+ Title: Tutorial - Improved exports experience - Preview
+description: This tutorial helps you create automatic exports for your actual and amortized costs in the Cost and Usage Specification standard (FOCUS) format.
++ Last updated : 01/31/2023++++++
+# Tutorial: Improved exports experience - Preview
+
+This tutorial helps you create automatic exports using the improved exports experience, which you can enable from [Cost Management labs](enable-preview-features-cost-management-labs.md#exports-preview) by selecting the **Exports (preview)** option. The improved Exports experience is designed to streamline your FinOps practice by automating the export of other cost-impacting datasets. The updated exports are optimized to handle large datasets while enhancing the user experience.
+
+Review [Azure updates](https://azure.microsoft.com/updates/) to see when the feature becomes generally available.
+
+## Improved functionality
+
+The improved Exports feature supports new datasets including price sheets, reservation recommendations, reservation details, and reservation transactions. Also, you can download cost and usage details using the open-source FinOps Open Cost and Usage Specification [FOCUS](https://focus.finops.org/) format. It combines actual and amortized costs and reduces data processing times and storage and compute costs.
+FinOps datasets are often large and challenging to manage. Exports improve file manageability, reduce download latency, and help save on storage and network charges with the following functionality:
+
+- File partitioning, which breaks the file into manageable smaller chunks.
+- File overwrite, which replaces the previous day's file with an updated file each day in daily export.
+
+The Exports feature has an updated user interface, which helps you to easily create multiple exports for various cost management datasets to Azure storage using a single, simplified create experience. Exports let you choose the latest or any of the earlier dataset schema versions when you create a new export. Supporting multiple versions ensures that the data processing layers that you built on for existing datasets are reused while you adopt the latest API functionality. You can selectively export historical data by rerunning an existing Export job for a historical period. So you don't have to create a new one-time export for a specific date range. You can enhance security and compliance by configuring exports to storage accounts behind a firewall. The Azure Storage firewall provides access control for the public endpoint of the storage account.
+
+## Prerequisites
+
+Data export is available for various Azure account types, including [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) and [Microsoft Customer Agreement (MCA)](get-started-partners.md) customers. To view the full list of supported account types, see [Understand Cost Management data](understand-cost-mgt-data.md). The following Azure permissions, or scopes, are supported per subscription for data export by user and group. For more information about scopes, see [Understand and work with scopes](understand-work-scopes.md).
+
+- Owner - Can create, modify, or delete scheduled exports for a subscription.
+- Contributor - Can create, modify, or delete their own scheduled exports. Can modify the name of scheduled exports created by others.
+- Reader - Can schedule exports that they have permission to.
+ - **For more information about scopes, including access needed to configure exports for Enterprise Agreement and Microsoft Customer agreement scopes, see [Understand and work with scopes](understand-work-scopes.md)**.
+
+For Azure Storage accounts:
+- Write permissions are required to change the configured storage account, independent of permissions on the export.
+- Your Azure storage account must be configured for blob or file storage.
+- Don't configure exports to a storage container that is configured as a destination in an [object replication rule](../../storage/blobs/object-replication-overview.md#object-replication-policies-and-rules).
+- To export to storage accounts with configured firewalls, you need other privileges on the storage account. The other privileges are only required during export creation or modification. They are:
+ - Owner role on the storage account.
+ Or
+ - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions.
+ Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.
+- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
+ :::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
+
+If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
+
+Enable the new Exports experience from Cost Management labs by selecting **Exports (preview)**. For more information about how to enable Exports (preview), see [Explore preview features](enable-preview-features-cost-management-labs.md#explore-preview-features). The preview feature is being deployed progressively.
+
+## Create exports
+
+You can create multiple exports of various data types using the following steps.
+
+### Choose a scope and navigate to Exports
+
+1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
+2. Search for **Cost Management**.
+3. Select a billing scope.
+4. In the left navigation menu, select **Exports**.
+ - **For Partners**: Sign in as a partner at the billing account scope or on a customer's tenant. Then you can export data to an Azure Storage account that is linked to your partner storage account. However, you must have an active subscription in your CSP tenant.
+5. Set the schedule frequency.
+
+### Create new exports
+
+On the Exports page, at the top of the page, select **+ Create**.
+
+### Fill in export details
+
+1. On the Add export page, select the **Type of data**, the **Dataset version**, and enter an **Export name**. Optionally, enter an **Export description**.
+2. For **Type of data**, when you select **Reservation recommendations**, select values for the other fields that appear:
+ - Reservation scope
+ - Resource type
+ - Look back period
+3. Depending on the **Type of data** and **Frequency** that you select, you might need to specify more fields to define the date range in UTC format.
+4. Select **Add** to see the export listed on the Basic tab.
++
+### Optionally add more exports
+
+You can create up to 10 exports when you select **+ Add new exports**.
+
+Select **Next** when you're ready to define the destination.
+
+### Define the export destination
+
+1. On the Destination tab, select the **Storage type**. The default is Azure blob storage.
+2. Specify your Azure storage account subscription. Choose an existing resource group or create a new one.
+3. Select the Storage account name or create a new one.
+4. If you create a new storage account, choose an Azure region.
+5. Specify the storage container and directory path for the export file.
+6. File partitioning is enabled by default. It splits large files into smaller ones.
+7. **Overwrite data** is enabled by default. For daily exports, it replaces the previous day's file with an updated file.
+8. Select **Next** to move to the **Review + create** tab.
++
+### Review and create
+
+Review your export configuration and make any necessary changes. When done, select **Review + create** to complete the process.
+
+## Manage exports
+
+You can view and manage your exports by navigating to the Exports page where a summary of details for each export appears, including:
+
+- Type of data
+- Schedule status
+- Data version
+- Last run time
+- Frequency
+- Storage account
+- Estimated next run date and time
+
+You can perform the following actions by selecting the ellipsis (**…**) on the right side of the page or by selecting the individual export.
+
+- Run now - Queues an unplanned export to run at the next available moment, regardless of the scheduled run time.
+- Export selected dates - Reruns an export for a historical date range instead of creating a new one-time export. You can extract up to 13 months of historical data in three-month chunks. This option isn't available for price sheets.
+- Disable - Temporarily suspends the export job.
+- Delete - Permanently removes the export.
+- Refresh - Updates the Run history.
++
+### Schedule frequency
+
+All types of data support various schedule frequency options, as described in the following table.
+
+| **Type of data** | **Frequency options** |
+| | |
+| Price sheet | • One-time export <br> • Current month <br> • Daily export of the current month |
+| Reservation details | • One-time export <br> • Daily export of month-to-date costs <br> • Monthly export of last month's costs |
+| Reservation recommendations | • One-time export <br> • Daily export |
+| Reservation transactions | • One-time export <br> • Daily export <br> • Monthly export of last month's data|
+| Cost and usage details (actual)<br> Cost and usage details (amortized) <br> Cost and usage details (FOCUS)<br> Cost and usage details (usage only) | • One-time export <br> • Daily export of month-to-date costs<br> • Monthly export of last month's costs <br> • Monthly export of last billing month's costs |
+
+## Understand data types
+
+- Cost and usage details (actual) - Select this option to export standard usage and purchase charges.
+- Cost and usage details (amortized) - Select this option to export amortized costs for purchases like Azure reservations and Azure savings plan for compute.
+- Cost and usage details (FOCUS) - Select this option to export cost and usage details using the open-source FinOps Open Cost and Usage Specification ([FOCUS](https://focus.finops.org/)) format. It combines actual and amortized costs. This format reduces data processing time and storage and compute charges for exports. The management group scope isn't supported for Cost and usage details (FOCUS) exports.
+- Cost and usage details (usage only) - Select this option to export standard usage charges without purchase information. Although you can't use this option when creating new exports, existing exports using this option are still supported.
+- Price sheet – Select this option to download your organization's Azure pricing.
+- Reservation details – Select this option to export the current list of all available reservations.
+- Reservation recommendations – Select this option to export the list of reservation recommendations, which help with rate optimization.
+- Reservation transactions – Select this option to export the list of all reservation purchases, exchanges, and refunds.
+
+Agreement types, scopes, and required roles are explained at [Understand and work with scopes](understand-work-scopes.md).
+
+| **Data types** | **Supported agreement** | **Supported scopes** |
+| | | |
+| Cost and usage (actual) | • EA<br> • MCA that you bought through the Azure website <br> • MCA enterprise<br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go (PAYG) <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • Microsoft Partner Agreement (MPA) - Customer, subscription, and resource group |
+| Cost and usage (amortized) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go (PAYG) <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • MPA - Customer, subscription, and resource group |
+| Cost and usage (FOCUS) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner| • EA - Enrollment, department, account, subscription, and resource group <br> • MCA - Billing account, billing profile, invoice section, subscription, and resource group <br> • MPA - Customer, subscription, resource group. **NOTE**: The management group scope isn't supported for Cost and usage details (FOCUS) exports. |
+| All available prices | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
+| Reservation recommendations | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
+| Reservation transactions | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
+| Reservation details | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
+
+## Limitations
+
+The improved exports experience currently has the following limitations.
+
+- The new Exports experience doesn't fully support the management group scope and it has feature limitations.
+- Azure internal and MOSP billing scopes and subscriptions don't support FOCUS datasets.
+- Shared access signature (SAS) key-based cross-tenant export is only supported for Microsoft partners at the billing account scope. It isn't supported for other partner scenarios, such as any other scope, an EA indirect contract, or Azure Lighthouse.
+
+## Next steps
+
+- Learn more about exports at [Tutorial: Create and manage exported data](tutorial-export-acm-data.md).
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
Microsoft Fabric Lakehouse connector supports the following file formats. Refer
- [JSON format](format-json.md) - [ORC format](format-orc.md) - [Parquet format](format-parquet.md)
+
+To use the Fabric Lakehouse file-based connector as an inline dataset, choose the inline dataset type that matches your data format: DelimitedText, Avro, JSON, ORC, or Parquet.
### Microsoft Fabric Lakehouse Table in mapping data flow
sink(allowSchemaDrift: true,
skipDuplicateMapOutputs: true) ~> CustomerTable ```
+For the Fabric Lakehouse table-based connector as an inline dataset, use Delta as the dataset type. This allows you to read data from and write data to Fabric Lakehouse tables.
## Related content
defender-for-cloud Agentless Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-azure.md
Vulnerability assessment for Azure, powered by Microsoft Defender Vulnerability
> [!NOTE] > This feature supports scanning of images in the Azure Container Registry (ACR) only. Images that are stored in other container registries should be imported into ACR for coverage. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images).
-In every subscription where this capability is enabled, all images stored in ACR that meet the following criteria for scan triggers are scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ACR as well as images that are currently running in AKS that were pulled from an ACR registry. Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours.
+In every subscription where this capability is enabled, all images stored in ACR that meet the criteria for scan triggers are scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ACR as well as images that are currently running in AKS that were pulled from an ACR registry or any other Defender for Cloud supported registry (ECR, GCR, or GAR). Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours.
Container vulnerability assessment powered by Microsoft Defender Vulnerability Management has the following capabilities:
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
Previously updated : 01/14/2024 Last updated : 01/28/2024
Sensitive data discovery is available in the Defender CSPM, Defender for Storage
- Existing plan status shows as "Partial" rather than "Full" if one or more extensions aren't turned on. - The feature is turned on at the subscription level. - If sensitive data discovery is turned on, but Defender CSPM isn't enabled, only storage resources will be scanned.
+- If a subscription is enabled with Defender CSPM and you also scanned the same resources with Purview, Purview's scan results are ignored, and Microsoft Defender for Cloud's scanning results are displayed by default for the supported resource type.
## What's supported
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
Previously updated : 10/26/2023 Last updated : 01/28/2024 # About data-aware security posture
defender-for-cloud Configure Servers Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-servers-coverage.md
Title: Configure monitoring coverage
+ Title: Configure Defender for Servers features
description: Learn how to configure the different monitoring components that are available in Defender for Servers in Microsoft Defender for Cloud. Previously updated : 01/25/2024 Last updated : 02/01/2024
-# Configure monitoring coverage
+# Configure Defender for Servers features
Microsoft Defender for Cloud's Defender for Servers plans contain components that monitor your environments to provide extended coverage on your servers. Each of these components can be enabled, disabled, or configured to meet your specific requirements.
Microsoft Defender for Cloud's Defender for Servers plans contains components th
When you enable Defender for Servers plan 2, all of these components are toggled to **On** by default.
+> [!NOTE]
+> The Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). All Defender for Servers features that depend on it, including those described on the [Enable Defender for Endpoint (Log Analytics)](endpoint-protection-recommendations-technical.md) page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md) before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+ ## Configure Log Analytics agent After enabling the Log Analytics agent, you'll be presented with the option to select which workspace should be utilized.
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
Title: Continuous export of alerts and recommendations to Log Analytics or Azure Event Hubs
-description: Learn how to configure continuous export of security alerts and recommendations to Log Analytics or Azure Event Hubs
+ Title: Set up continuous export of alerts and recommendations
+description: Learn how to set up continuous export of Microsoft Defender for Cloud security alerts and recommendations to Log Analytics in Azure Monitor or to Azure Event Hubs.
Last updated 06/19/2023
# Continuously export Microsoft Defender for Cloud data
-Microsoft Defender for Cloud generates detailed security alerts and recommendations. To analyze the information in these alerts and recommendations, you can export them to Azure Log Analytics, Event Hubs, or to another [SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md). You can stream the alerts and recommendations as they're generated or define a schedule to send periodic snapshots of all of the new data.
+Microsoft Defender for Cloud generates detailed security alerts and recommendations. To analyze the information that's in these alerts and recommendations, you can export them to Log Analytics in Azure Monitor, to Azure Event Hubs, or to another Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), or IT classic [deployment model solution](export-to-siem.md). You can stream the alerts and recommendations as they're generated or define a schedule to send periodic snapshots of all new data.
-With **continuous export**, you can fully customize what information to export and where it goes. For example, you can configure it so that:
+When you set up continuous export, you can fully customize what information to export and where the information goes. For example, you can configure it so that:
-- All high severity alerts are sent to an Azure event hub-- All medium or higher severity findings from vulnerability assessment scans of your SQL servers are sent to a specific Log Analytics workspace-- Specific recommendations are delivered to an event hub or Log Analytics workspace whenever they're generated-- The secure score for a subscription is sent to a Log Analytics workspace whenever the score for a control changes by 0.01 or more
+- All high-severity alerts are sent to an Azure event hub.
+- All medium or higher-severity findings from vulnerability assessment scans of your computers running SQL Server are sent to a specific Log Analytics workspace.
+- Specific recommendations are delivered to an event hub or Log Analytics workspace whenever they're generated.
+- The secure score for a subscription is sent to a Log Analytics workspace whenever the score for a control changes by 0.01 or more.
-This article describes how to configure continuous export to Log Analytics workspaces or Azure event hubs.
+This article describes how to set up continuous export to a Log Analytics workspace or to an event hub in Azure.
> [!TIP]
-> Defender for Cloud also offers the option to perform a one-time, manual export to CSV. Learn more in [Manual one-time export of alerts and recommendations](#manual-one-time-export-of-alerts-and-recommendations).
+> Defender for Cloud also offers the option to do a onetime, manual export to a comma-separated values (CSV) file. Learn more in [Manually export alerts and recommendations](#manually-export-alerts-and-recommendations).
## Availability |Aspect|Details| |-|:-|
-|Release state:|General availability (GA)|
+|Release status:|General availability (GA)|
|Pricing:|Free|
-|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the [Azure Policy 'DeployIfNotExist' policies](#configure-continuous-export-at-scale-using-the-supplied-policies), you need the permissions that allow you to assign policies</li><li>To export data to Event Hubs, you need Write permission on the Event Hubs Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions)</li></ul></li></ul>|
+|Required roles and permissions:|<ul><li>Security Admin or Owner for the resource group.</li><li>Write permissions for the target resource.</li><li>If you use the [Azure Policy DeployIfNotExist policies](#set-up-continuous-export-at-scale-by-using-provided-policies), you must have permissions that let you assign policies.</li><li>To export data to Event Hubs, you must have Write permissions on the Event Hubs policy.</li><li>To export to a Log Analytics workspace:<ul><li>If it *has the SecurityCenterFree solution*, you must have a minimum of Read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`.</li><li>If it *doesn't have the SecurityCenterFree solution*, you must have write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`.</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions).</li></ul></li></ul>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)|

## What data types can be exported?
-Continuous export can export the following data types whenever they change:
+You can use continuous export to export the following data types whenever they change:
- Security alerts.
- Security recommendations.
-- Security findings. Findings can be thought of as 'sub' recommendations and belong to a 'parent' recommendation. For example:
- - The recommendations [System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f) and [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27) each has one 'sub' recommendation per outstanding system update.
- - The recommendation [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f) has a 'sub' recommendation for every vulnerability identified by the vulnerability scanner.
+- Security findings.
+
+ Findings can be thought of as "sub" recommendations and belong to a "parent" recommendation. For example:
+
+ - The recommendations [System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f) and [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27) each have one sub recommendation per outstanding system update.
+ - The recommendation [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f) has a sub recommendation for every vulnerability that the vulnerability scanner identifies.
+ > [!NOTE]
- > If you're configuring a continuous export with the REST API, always include the parent with the findings.
+ > If you're configuring continuous export by using the REST API, always include the parent with the findings.
+
- Secure score per subscription or per control.
- Regulatory compliance data.
-## Set up a continuous export
+<a name="set-up-a-continuous-export"></a>
+
+## Set up continuous export
+
+You can set up continuous export on the Microsoft Defender for Cloud pages in the Azure portal, by using the REST API, or at scale by using provided Azure Policy templates.
-You can configure continuous export from the Microsoft Defender for Cloud pages in Azure portal, via the REST API, or at scale using the supplied Azure Policy templates.
+### [Azure portal](#tab/azure-portal)
-### [**Use the Azure portal**](#tab/azure-portal)
+<a name="configure-continuous-export-from-the-defender-for-cloud-pages-in-azure-portal"></a>
-### Configure continuous export from the Defender for Cloud pages in Azure portal
+### Set up continuous export on the Defender for Cloud pages in the Azure portal
-If you're setting up a continuous export to Log Analytics or Azure Event Hubs:
+To set up a continuous export to Log Analytics or Azure Event Hubs by using the Azure portal:
-1. From Defender for Cloud's menu, open **Environment settings**.
+1. On the Defender for Cloud resource menu, select **Environment settings**.
-1. Select the specific subscription for which you want to configure the data export.
+1. Select the subscription that you want to configure data export for.
-1. From the sidebar of the settings page for that subscription, select **Continuous export**.
+1. In the resource menu under **Settings**, select **Continuous export**.
- :::image type="content" source="./media/continuous-export/continuous-export-options-page.png" alt-text="Export options in Microsoft Defender for Cloud." lightbox="./media/continuous-export/continuous-export-options-page.png":::
+ :::image type="content" source="./media/continuous-export/continuous-export-options-page.png" alt-text="Screenshot that shows the export options in Microsoft Defender for Cloud." lightbox="./media/continuous-export/continuous-export-options-page.png":::
- Here you see the export options. There's a tab for each available export target, either event hub or Log Analytics workspace.
+ The export options appear. There's a tab for each available export target, either event hub or Log Analytics workspace.
-1. Select the data type you'd like to export and choose from the filters on each type (for example, export only high severity alerts).
+1. Select the data type you'd like to export, and choose from the filters on each type (for example, export only high-severity alerts).
1. Select the export frequency:
- - **Streaming** – assessments are sent when a resource's health state is updated (if no updates occur, no data is sent).
- - **Snapshots** – a snapshot of the current state of the selected data types that are sent once a week per subscription. To identify snapshot data, look for the field ``IsSnapshot``.
- If your selection includes one of these recommendations, you can include the vulnerability assessment findings together with them:
+ - **Streaming**. Assessments are sent when a resource's health state is updated (if no updates occur, no data is sent).
+ - **Snapshots**. A snapshot of the current state of the selected data types that are sent once a week per subscription. To identify snapshot data, look for the field **IsSnapshot**.
+
+ If your selection includes one of these recommendations, you can include the vulnerability assessment findings with them:
+
- [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37)
- [SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f97aa83c-9b63-4f9a-99f6-b22c4398f936)
- [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)
- [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f)
- [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27)
- To include the findings with these recommendations, enable the **include security findings** option.
+ To include the findings with these recommendations, set **Include security findings** to **Yes**.
- :::image type="content" source="./media/continuous-export/include-security-findings-toggle.png" alt-text="Include security findings toggle in continuous export configuration." :::
+ :::image type="content" source="./media/continuous-export/include-security-findings-toggle.png" alt-text="Screenshot that shows the Include security findings toggle in a continuous export configuration." :::
-1. From the "Export target" area, choose where you'd like the data saved. Data can be saved in a target of a different subscription (for example, on a Central Event Hubs instance or a central Log Analytics workspace).
+1. Under **Export target**, choose where you'd like the data saved. Data can be saved in a target of a different subscription (for example, in a central Event Hubs instance or in a central Log Analytics workspace).
- You can also send the data to an [Event hubs or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hubs-or-log-analytics-workspace-in-another-tenant).
+ You can also send the data to an [event hub or Log Analytics workspace in a different tenant](#export-data-to-an-event-hub-or-log-analytics-workspace-in-another-tenant).
1. Select **Save**.

> [!NOTE]
-> Log analytics supports records that are only up to 32KB in size. When the data limit is reached, you will see an alert telling you that the `Data limit has been exceeded`.
+> Log Analytics supports only records that are up to 32 KB in size. When the data limit is reached, an alert displays the message **Data limit has been exceeded**.
-### [**Use the REST API**](#tab/rest-api)
+### [REST API](#tab/rest-api)
-### Configure continuous export using the REST API
+### Set up continuous export by using the REST API
-Continuous export can be configured and managed via the Microsoft Defender for Cloud [automations API](/rest/api/defenderforcloud/automations). Use this API to create or update rules for exporting to any of the following possible destinations:
+You can set up and manage continuous export by using the Microsoft Defender for Cloud [automations API](/rest/api/defenderforcloud/automations). Use this API to create or update rules for exporting to any of the following destinations:
- Azure Event Hubs
- Log Analytics workspace
- Azure Logic Apps
-You can also send the data to an [Event Hubs or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hubs-or-log-analytics-workspace-in-another-tenant).
+You also can send the data to an [event hub or Log Analytics workspace in a different tenant](#export-data-to-an-event-hub-or-log-analytics-workspace-in-another-tenant).
-Here are some examples of options that you can only use in the API:
+Here are some examples of options that you can use only in the API:
-- **Greater volume** - You can create multiple export configurations on a single subscription with the API. The **Continuous Export** page in the Azure portal supports only one export configuration per subscription.
+- **Greater volume**: You can create multiple export configurations on a single subscription by using the API. The **Continuous Export** page in the Azure portal supports only one export configuration per subscription.
-- **Additional features** - The API offers parameters that aren't shown in the Azure portal. For example, you can add tags to your automation resource and define your export based on a wider set of alert and recommendation properties than the ones offered in the **Continuous Export** page in the Azure portal.
+- **Additional features**: The API offers parameters that aren't shown in the Azure portal. For example, you can add tags to your automation resource and define your export based on a wider set of alert and recommendation properties than the ones that are offered on the **Continuous export** page in the Azure portal.
-- **More focused scope** - The API provides a more granular level for the scope of your export configurations. When defining an export with the API, you can do so at the resource group level. If you're using the **Continuous Export** page in the Azure portal, you have to define it at the subscription level.
+- **Focused scope**: The API offers you a more granular level for the scope of your export configurations. When you define an export by using the API, you can define it at the resource group level. If you're using the **Continuous export** page in the Azure portal, you must define it at the subscription level.
> [!TIP]
- > These API-only options are not shown in the Azure portal. If you use them, there'll be a banner informing you that other configurations exist.
+ > These API-only options are not shown in the Azure portal. If you use them, a banner informs you that other configurations exist.
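For illustration, here's a minimal Python sketch of what creating an export rule through the automations API could look like. It targets the `Microsoft.Security/automations` resource path that the API reference documents, but the API version, payload shape, resource names, and filter values shown here are assumptions for illustration only; confirm them against the automations API reference before you rely on them.

```python
# Hedged sketch: create a Defender for Cloud continuous export rule ("automation")
# through the Azure REST API. Resource names, the API version, and the payload
# shape are illustrative assumptions; verify them against the automations API reference.
import requests

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
resource_group = "my-security-automations"                  # placeholder
automation_name = "ExportHighSeverityAlerts"                 # placeholder
bearer_token = "<access token for https://management.azure.com/>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Security/automations/{automation_name}"
    "?api-version=2019-01-01-preview"   # assumed API version
)

payload = {
    "location": "westeurope",
    "properties": {
        "isEnabled": True,
        # Scope can be a subscription or (API only) a resource group.
        "scopes": [{"scopePath": f"/subscriptions/{subscription_id}"}],
        # Export high-severity alerts only.
        "sources": [{
            "eventSource": "Alerts",
            "ruleSets": [{
                "rules": [{
                    "propertyJPath": "Severity",
                    "propertyType": "String",
                    "expectedValue": "High",
                    "operator": "Equals",
                }]
            }],
        }],
        # Send matching events to a Log Analytics workspace.
        "actions": [{
            "actionType": "Workspace",
            "workspaceResourceId": (
                f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
                "/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
            ),
        }],
    },
}

response = requests.put(url, json=payload, headers={"Authorization": f"Bearer {bearer_token}"})
response.raise_for_status()
print(response.json())
```

A configuration like this mirrors what the portal's **Continuous export** page creates, but it can also be scoped to a resource group, which is one of the API-only options described earlier.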
+
+### [Azure Policy](#tab/azure-policy)
-### [**Deploy at scale with Azure Policy**](#tab/azure-policy)
+<a name="configure-continuous-export-at-scale-using-the-supplied-policies"></a>
-### Configure continuous export at scale using the supplied policies
+### Set up continuous export at scale by using provided policies
-Automating your organization's monitoring and incident response processes can greatly improve the time it takes to investigate and mitigate security incidents.
+Automating your organization's monitoring and incident response processes can help you reduce the time it takes to investigate and mitigate security incidents.
-To deploy your continuous export configurations across your organization, use the supplied Azure Policy 'DeployIfNotExist' policies to create and configure continuous export procedures.
+To deploy your continuous export configurations across your organization, use the provided Azure Policy `DeployIfNotExist` policies to create and configure continuous export procedures.
-**To implement these policies**:
+To implement these policies:
-1. Select the policy you want to apply from this table:
+1. In the following table, choose a policy to apply:
|Goal |Policy |Policy ID |
|--|--|--|
|Continuous export to Log Analytics workspace|[Deploy export to Log Analytics workspace for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fffb6f416-7bd2-4488-8828-56585fef2be9)|ffb6f416-7bd2-4488-8828-56585fef2be9|

> [!TIP]
- > You can also find these by searching Azure Policy:
+ > You can also find the policies by searching Azure Policy:
> > 1. Open Azure Policy.
- > :::image type="content" source="./media/continuous-export/opening-azure-policy.png" alt-text="Accessing Azure Policy.":::
- > 1. From the Azure Policy menu, select **Definitions** and search for them by name.
+ >
+ > :::image type="content" source="./media/continuous-export/opening-azure-policy.png" alt-text="Screenshot that shows accessing Azure Policy.":::
+ >
+ > 1. On the Azure Policy menu, select **Definitions** and search for the policies by name.
-1. From the relevant Azure Policy page, select **Assign**.
- :::image type="content" source="./media/continuous-export/export-policy-assign.png" alt-text="Assigning the Azure Policy.":::
+1. On the relevant page in Azure Policy, select **Assign**.
+
+ :::image type="content" source="./media/continuous-export/export-policy-assign.png" alt-text="Screenshot that shows assigning the Azure Policy.":::
+
+1. Select each tab and set the parameters to meet your requirements:
+
+ 1. On the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the management group that contains the subscriptions that use the continuous export configuration.
+
+ 1. On the **Parameters** tab, set the resource group and data type details.
-1. Open each tab and set the parameters as desired:
- 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that use continuous export configuration.
- 1. In the **Parameters** tab, set the resource group and data type details.
> [!TIP]
- > Each parameter has a tooltip explaining the options available to you.
+ > Each parameter has a tooltip that explains the options that are available.
+ >
+ > The Azure Policy **Parameters** tab (1) provides access to configuration options that are similar to options that you can access on the Defender for Cloud **Continuous export** page (2).
>
- > Azure Policy's parameters tab (1) provides access to similar configuration options as Defender for Cloud's continuous export page (2).
- > :::image type="content" source="./media/continuous-export/azure-policy-next-to-continuous-export.png" alt-text="Comparing the parameters in continuous export with Azure Policy." lightbox="./media/continuous-export/azure-policy-next-to-continuous-export.png":::
- 1. Optionally, to apply this assignment to existing subscriptions, open the **Remediation** tab and select the option to create a remediation task.
-1. Review the summary page and select **Create**.
+ > :::image type="content" source="./media/continuous-export/azure-policy-next-to-continuous-export.png" alt-text="Screenshot that shows comparing the parameters in continuous export with Azure Policy." lightbox="./media/continuous-export/azure-policy-next-to-continuous-export.png":::
+ >
+
+ 1. Optionally, to apply this assignment to existing subscriptions, select the **Remediation** tab, and then select the option to create a remediation task.
+
+1. Review the summary page, and then select **Create**.
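If you prefer to script the assignment instead of stepping through the portal, the following hedged Python sketch shows one way to assign the Log Analytics export policy definition from the preceding table at management group scope through the Azure Policy REST API. The API version, parameter names, and resource names are illustrative assumptions; copy the real parameter names from the policy definition's **Parameters** tab before you use this.

```python
# Hedged sketch: assign the provided "Deploy export to Log Analytics workspace ..."
# policy definition (ID ffb6f416-7bd2-4488-8828-56585fef2be9) at management group
# scope by using the Azure Policy REST API. The API version and the parameter
# names are assumptions; take the exact parameter names from the policy definition.
import requests

management_group_id = "contoso-mg"                       # placeholder
assignment_name = "continuous-export-to-workspace"       # placeholder
bearer_token = "<access token for https://management.azure.com/>"

scope = f"/providers/Microsoft.Management/managementGroups/{management_group_id}"
url = (
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Authorization/policyAssignments/{assignment_name}"
    "?api-version=2022-06-01"   # assumed API version
)

payload = {
    # DeployIfNotExist policies need a managed identity and a location for it
    # so that the remediation deployments can run.
    "location": "westeurope",
    "identity": {"type": "SystemAssigned"},
    "properties": {
        "displayName": "Continuous export to Log Analytics workspace",
        "policyDefinitionId": (
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "ffb6f416-7bd2-4488-8828-56585fef2be9"
        ),
        # Parameter names below are placeholders; use the ones that the
        # policy definition actually declares.
        "parameters": {
            "resourceGroupName": {"value": "defender-export-rg"},
            "resourceGroupLocation": {"value": "westeurope"},
        },
    },
}

response = requests.put(url, json=payload, headers={"Authorization": f"Bearer {bearer_token}"})
response.raise_for_status()
print(response.json()["id"])
```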
-## Exporting to a Log Analytics workspace
+## Export to a Log Analytics workspace
If you want to analyze Microsoft Defender for Cloud data inside a Log Analytics workspace or use Azure alerts together with Defender for Cloud alerts, set up continuous export to your Log Analytics workspace.

### Log Analytics tables and schemas
-Security alerts and recommendations are stored in the *SecurityAlert* and *SecurityRecommendation* tables respectively.
+Security alerts and recommendations are stored in the **SecurityAlert** and **SecurityRecommendation** tables, respectively.
-The name of the Log Analytics solution containing these tables depends on whether you've enabled the enhanced security features: Security ('Security and Audit') or SecurityCenterFree.
+The name of the Log Analytics solution that contains these tables depends on whether you enabled the enhanced security features: Security (the Security and Audit solution) or SecurityCenterFree.
> [!TIP]
-> To see the data on the destination workspace, you must enable one of these solutions **Security and Audit** or **SecurityCenterFree**.
+> To see the data on the destination workspace, you must enable one of these solutions: Security and Audit or SecurityCenterFree.
-![The *SecurityAlert* table in Log Analytics.](./media/continuous-export/log-analytics-securityalert-solution.png)
+![Screenshot that shows the SecurityAlert table in Log Analytics.](./media/continuous-export/log-analytics-securityalert-solution.png)
-To view the event schemas of the exported data types, visit the [Log Analytics table schemas](https://aka.ms/ASCAutomationSchemas).
+To view the event schemas of the exported data types, see [Log Analytics table schemas](https://aka.ms/ASCAutomationSchemas).
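As a quick way to confirm that exported data is arriving and to explore the table schemas, the following Python sketch queries the **SecurityAlert** and **SecurityRecommendation** tables by using the `azure-monitor-query` library. The workspace ID is a placeholder, and the column list that prints is simply whatever the workspace returns; treat this as a sketch rather than a definitive sample.

```python
# Hedged sketch: inspect the SecurityAlert and SecurityRecommendation tables that
# continuous export writes to a Log Analytics workspace. The workspace ID is a
# placeholder; install the azure-identity and azure-monitor-query packages first.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-guid>"  # placeholder
client = LogsQueryClient(DefaultAzureCredential())

for table_name in ("SecurityAlert", "SecurityRecommendation"):
    # Take a small sample from the last day so you can look at the exported columns.
    response = client.query_workspace(
        workspace_id, f"{table_name} | take 5", timespan=timedelta(days=1)
    )
    for table in response.tables:
        print(table_name, "columns:", table.columns)
        print(table_name, "sample rows returned:", len(table.rows))
```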
-## Export data to an Azure Event Hubs or Log Analytics workspace in another tenant
+## Export data to an event hub or Log Analytics workspace in another tenant
-You ***cannot*** configure data to be exported to a log analytics workspace in another tenant when using Azure Policy to assign the configuration. This process only works with the REST API, and the configuration is unsupported in the Azure portal (due to requiring multitenant context). Azure Lighthouse ***does not*** resolve this issue with Policy, although you can use Lighthouse as the authentication method.
+You *can't* configure data to be exported to a Log Analytics workspace in another tenant if you use Azure Policy to assign the configuration. This process works only when you use the REST API to assign the configuration, and the configuration is unsupported in the Azure portal (because it requires a multitenant context). Azure Lighthouse *doesn't* resolve this issue with Azure Policy, although you can use Azure Lighthouse as the authentication method.
-When collecting data into a tenant, you can analyze the data from one central location.
+When you collect data in a tenant, you can analyze the data from one central location.
-To export data to an Azure Event Hubs or Log Analytics workspace in a different tenant:
+To export data to an event hub or Log Analytics workspace in a different tenant:
-1. In the tenant that has the Azure Event Hubs or Log Analytics workspace, [invite a user](../active-directory/external-identities/what-is-b2b.md#easily-invite-guest-users-from-the-azure-portal) from the tenant that hosts the continuous export configuration, or alternatively configure Azure Lighthouse for the source and destination tenant.
-1. If using Microsoft Entra B2B Guest access, ensure that the user accepts the invitation to access the tenant as a guest.
-1. If you're using a Log Analytics Workspace, assign the user in the workspace tenant one of these roles: Owner, Contributor, Log Analytics Contributor, Sentinel Contributor, or Monitoring Contributor.
-1. Create and submit the request to the Azure REST API to configure the required resources. You'll need to manage the bearer tokens in both the context of the local (workspace) and the remote (continuous export) tenant.
+1. In the tenant that has the event hub or Log Analytics workspace, [invite a user](../active-directory/external-identities/what-is-b2b.md#easily-invite-guest-users-from-the-azure-portal) from the tenant that hosts the continuous export configuration, or you can configure Azure Lighthouse for the source and destination tenant.
+1. If you use business-to-business (B2B) guest user access in Microsoft Entra ID, ensure that the user accepts the invitation to access the tenant as a guest.
+1. If you use a Log Analytics workspace, assign the user in the workspace tenant one of these roles: Owner, Contributor, Log Analytics Contributor, Sentinel Contributor, or Monitoring Contributor.
+1. Create and submit the request to the Azure REST API to configure the required resources. You must manage the bearer tokens in both the context of the local (workspace) tenant and the remote (continuous export) tenant, as shown in the sketch after these steps.
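The following Python sketch illustrates only the token-handling part of that last step: acquiring one bearer token in the workspace (destination) tenant and another in the tenant that hosts the continuous export configuration, using a service principal that exists in both tenants. The tenant IDs, the app registration, and the use of the `x-ms-authorization-auxiliary` header are assumptions for illustration; check the automations API reference for the exact cross-tenant request format.

```python
# Hedged sketch: acquire bearer tokens in both tenants before calling the
# automations REST API for a cross-tenant continuous export configuration.
# All IDs are placeholders, and the auxiliary-header pattern is an assumption.
from azure.identity import ClientSecretCredential

ARM_SCOPE = "https://management.azure.com/.default"

# Service principal that has been granted access in both tenants.
client_id = "<app-registration-client-id>"
client_secret = "<app-registration-secret>"

export_tenant_id = "<tenant that hosts the continuous export configuration>"
workspace_tenant_id = "<tenant that hosts the Log Analytics workspace or event hub>"

export_token = ClientSecretCredential(
    export_tenant_id, client_id, client_secret
).get_token(ARM_SCOPE).token

workspace_token = ClientSecretCredential(
    workspace_tenant_id, client_id, client_secret
).get_token(ARM_SCOPE).token

# The automation is created in the export tenant's context; the workspace
# tenant's token is passed along so that the remote target can be validated
# (assumed here to use the x-ms-authorization-auxiliary header).
headers = {
    "Authorization": f"Bearer {export_token}",
    "x-ms-authorization-auxiliary": f"Bearer {workspace_token}",
}
print(list(headers))  # pass these headers with the PUT request to the automations API
```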
## Continuously export to an event hub behind a firewall
-You can enable continuous export as a trusted service, so that you can send data to an event hub that has an Azure Firewall enabled.
+You can enable continuous export as a trusted service so that you can send data to an event hub that has Azure Firewall enabled.
-**To grant access to continuous export as a trusted service**:
+To grant access to continuous export as a trusted service:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Microsoft Defender for Cloud** > **Environmental settings**.
+1. Go to **Microsoft Defender for Cloud** > **Environmental settings**.
1. Select the relevant resource.
:::image type="content" source="media/continuous-export/export-as-trusted.png" alt-text="Screenshot that shows where the checkbox is located to select export as trusted service.":::
-You need to add the relevant role assignment on the destination Event Hubs.
+You must add the relevant role assignment to the destination event hub.
-**To add the relevant role assignment on the destination Event Hub**:
+To add the relevant role assignment to the destination event hub:
-1. Navigate to the selected Event Hubs.
+1. Go to the selected event hub.
-1. Select **Access Control** > **Add role assignment**
+1. In the resource menu, select **Access control (IAM)** > **Add role assignment**.
- :::image type="content" source="media/continuous-export/add-role-assignment.png" alt-text="Screenshot that shows where the add role assignment button is found." lightbox="media/continuous-export/add-role-assignment.png":::
+ :::image type="content" source="media/continuous-export/add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment button." lightbox="media/continuous-export/add-role-assignment.png":::
1. Select **Azure Event Hubs Data Sender**.

1. Select the **Members** tab.
-1. Select **+ Select members**.
+1. Choose **+ Select members**.
-1. Search for and select **Windows Azure Security Resource Provider**.
+1. Search for and then select **Windows Azure Security Resource Provider**.
:::image type="content" source="media/continuous-export/windows-security-resource.png" alt-text="Screenshot that shows you where to enter and search for Microsoft Azure Security Resource Provider." lightbox="media/continuous-export/windows-security-resource.png":::
## View exported alerts and recommendations in Azure Monitor
-You might also choose to view exported Security Alerts and/or recommendations in [Azure Monitor](../azure-monitor/alerts/alerts-overview.md).
+You might also choose to view exported security alerts or recommendations in [Azure Monitor](../azure-monitor/alerts/alerts-overview.md).
-Azure Monitor provides a unified alerting experience for various Azure alerts including Diagnostic Log, Metric alerts, and custom alerts based on Log Analytics workspace queries.
+Azure Monitor provides a unified alerting experience for various Azure alerts, including a diagnostic log, metric alerts, and custom alerts that are based on Log Analytics workspace queries.
-To view alerts and recommendations from Defender for Cloud in Azure Monitor, configure an Alert rule based on Log Analytics queries (Log Alert):
+To view alerts and recommendations from Defender for Cloud in Azure Monitor, configure an alert rule that's based on Log Analytics queries (a log alert rule):
-1. From Azure Monitor's **Alerts** page, select **New alert rule**.
+1. On the Azure Monitor **Alerts** page, select **New alert rule**.
- ![Azure Monitor's alerts page.](./media/continuous-export/azure-monitor-alerts.png)
+ ![Screenshot that shows the Azure Monitor alerts page.](./media/continuous-export/azure-monitor-alerts.png)
-1. In the create rule page, configure your new rule (in the same way you'd configure a [log alert rule in Azure Monitor](../azure-monitor/alerts/alerts-unified-log.md)):
+1. On the **Create rule** pane, set up your new rule the same way you'd configure a [log alert rule in Azure Monitor](../azure-monitor/alerts/alerts-unified-log.md) (a scripted equivalent appears after these steps):
- For **Resource**, select the Log Analytics workspace to which you exported security alerts and recommendations.
- - For **Condition**, select **Custom log search**. In the page that appears, configure the query, lookback period, and frequency period. In the search query, you can type *SecurityAlert* or *SecurityRecommendation* to query the data types that Defender for Cloud continuously exports to as you enable the Continuous export to Log Analytics feature.
+ - For **Condition**, select **Custom log search**. On the page that appears, configure the query, lookback period, and frequency period. In the search query, you can enter **SecurityAlert** or **SecurityRecommendation** to query the data types that Defender for Cloud continuously exports to the workspace when continuous export to Log Analytics is enabled.
+
+ - Optionally, create an [action group](../azure-monitor/alerts/action-groups.md) to trigger. Action groups can automate sending an email, creating an ITSM ticket, running a webhook, and more, based on an event in your environment.
+
+ ![Screenshot that shows the Azure Monitor create alert rule pane.](./media/continuous-export/azure-monitor-alert-rule.png)
- - Optionally, configure the [Action Group](../azure-monitor/alerts/action-groups.md) that you'd like to trigger. Action groups can trigger email sending, ITSM tickets, WebHooks, and more.
- ![Azure Monitor alert rule.](./media/continuous-export/azure-monitor-alert-rule.png)
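If you'd rather script this than use the portal, the following hedged Python sketch creates an equivalent log alert rule through the Azure Monitor scheduled query rules REST API, firing whenever a new record lands in the **SecurityAlert** table. The API version, property names, and all resource IDs are assumptions for illustration; verify them against the scheduled query rules API reference.

```python
# Hedged sketch: create a log alert rule on the workspace that receives the
# continuous export data. Resource IDs, the API version, and the payload shape
# are illustrative assumptions; check the scheduled query rules API reference.
import requests

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
resource_group = "monitoring-rg"                            # placeholder
rule_name = "new-defender-alerts"                           # placeholder
bearer_token = "<access token for https://management.azure.com/>"

workspace_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
)
action_group_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/microsoft.insights/actionGroups/security-team"
)

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Insights/scheduledQueryRules/{rule_name}"
    "?api-version=2021-08-01"   # assumed API version
)

payload = {
    "location": "westeurope",
    "properties": {
        "displayName": "New Defender for Cloud alerts",
        "severity": 2,
        "enabled": True,
        "evaluationFrequency": "PT5M",
        "windowSize": "PT5M",
        "scopes": [workspace_id],
        "criteria": {
            "allOf": [{
                # The custom log search condition: any exported SecurityAlert row.
                "query": "SecurityAlert",
                "timeAggregation": "Count",
                "operator": "GreaterThan",
                "threshold": 0,
            }]
        },
        "actions": {"actionGroups": [action_group_id]},
    },
}

response = requests.put(url, json=payload, headers={"Authorization": f"Bearer {bearer_token}"})
response.raise_for_status()
print(response.json()["id"])
```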
+The Defender for Cloud alerts or recommendations appear (depending on your configured continuous export rules and the condition that you defined in your Azure Monitor alert rule) in Azure Monitor alerts, with automatic triggering of an action group (if provided).
-The Microsoft Defender for Cloud alerts or recommendations appears (depending on your configured continuous export rules and the condition you defined in your Azure Monitor alert rule) in Azure Monitor alerts, with automatic triggering of an action group (if provided).
+<a name="manual-one-time-export-of-alerts-and-recommendations"></a>
-## Manual one-time export of alerts and recommendations
+## Manually export alerts and recommendations
-To download a CSV report for alerts or recommendations, open the **Security alerts** or **Recommendations** page and select the **Download CSV report** button.
+To download a CSV file that lists alerts or recommendations, go to the **Security alerts** page or the **Recommendations** page, and then select the **Download CSV report** button.
> [!TIP]
-> Due to Azure Resource Graph limitations, the reports are limited to a file size of 13K rows. If you're seeing errors related to too much data being exported, try limiting the output by selecting a smaller set of subscriptions to be exported.
+> Due to Azure Resource Graph limitations, the reports are limited to a file size of 13,000 rows. If you see errors related to too much data being exported, try limiting the output by selecting a smaller set of subscriptions to be exported.
> [!NOTE]
> These reports contain alerts and recommendations for resources from the currently selected subscriptions.
-## Next steps
+## Related content
In this article, you learned how to configure continuous exports of your recommendations and alerts. You also learned how to download your alerts data as a CSV file.
-For related material, see the following documentation:
+To see related content:
- Learn more about [workflow automation templates](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation).
-- [Azure Event Hubs documentation](../event-hubs/index.yml)
-- [Microsoft Sentinel documentation](../sentinel/index.yml)
-- [Azure Monitor documentation](../azure-monitor/index.yml)
-- [Export data types schemas](https://aka.ms/ASCAutomationSchemas)
+- See the [Azure Event Hubs documentation](../event-hubs/index.yml).
+- Learn more about [Microsoft Sentinel](../sentinel/index.yml).
+- Review the [Azure Monitor documentation](../azure-monitor/index.yml).
+- Learn how to [export data types schemas](https://aka.ms/ASCAutomationSchemas).
- Check out [common questions](faq-general.yml) about continuous export.
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Title: Workbooks gallery
-description: Learn how to create rich, interactive reports of your Microsoft Defender for Cloud data with the integrated Azure Monitor Workbooks gallery
+ Title: Use Azure Monitor gallery workbooks with Defender for Cloud data
+description: Learn how to create rich, interactive reports for your Microsoft Defender for Cloud data by using workbooks from the integrated Azure Monitor workbooks gallery.
Last updated 12/06/2023
-# Create rich, interactive reports of Defender for Cloud data
+# Create rich, interactive reports of Defender for Cloud data by using workbooks
-[Azure Workbooks](../azure-monitor/visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.
+[Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) are a flexible canvas that you can use to analyze data and create rich, visual reports in the Azure portal. In workbooks, you can access multiple data sources from across Azure and combine them into unified, interactive experiences.
-Workbooks provide a rich set of capabilities for visualizing your Azure data. For detailed examples of each visualization type, see the [visualizations examples and documentation](../azure-monitor/visualize/workbooks-text-visualizations.md).
+Workbooks provide a rich set of capabilities for visualizing your Azure data. For detailed information about each visualization type, see the [visualizations examples and documentation](../azure-monitor/visualize/workbooks-text-visualizations.md).
-Within Microsoft Defender for Cloud, you can access the built-in workbooks to track your organization's security posture. You can also build custom workbooks to view a wide range of data from Defender for Cloud or other supported data sources.
+In Microsoft Defender for Cloud, you can access built-in workbooks to track your organization's security posture. You can also build custom workbooks to view a wide range of data from Defender for Cloud or other supported data sources.
-For pricing, check out the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+For pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Prerequisites
-**Required roles and permissions**: To save workbooks, you must have at least [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) permissions on the target resource group
+**Required roles and permissions**: To save a workbook, you must have at least [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) permissions for the relevant resource group.
**Cloud availability**: :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds :::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)
-## Workbooks gallery in Microsoft Defender for Cloud
+<a name="workbooks-gallery-in-microsoft-defender-for-cloud"></a>
-With the integrated Azure Workbooks functionality, Microsoft Defender for Cloud makes it straightforward to build your own custom, interactive workbooks. Defender for Cloud also includes a gallery with the following workbooks ready for your customization:
+## Use Defender for Cloud gallery workbooks
-- ['Coverage' workbook](#use-the-coverage-workbook) - Track the coverage of Defender for Cloud plans and extensions across your environments and subscriptions.
-- ['Secure Score Over Time' workbook](#use-the-secure-score-over-time-workbook) - Track your subscriptions' scores and changes to recommendations for your resources
-- ['System Updates' workbook](#use-the-system-updates-workbook) - View missing system updates by resources, OS, severity, and more
-- ['Vulnerability Assessment Findings' workbook](#use-the-vulnerability-assessment-findings-workbook) - View the findings of vulnerability scans of your Azure resources
-- ['Compliance Over Time' workbook](#use-the-compliance-over-time-workbook) - View the status of a subscription's compliance with the regulatory or industry standards you've selected
-- ['Active Alerts' workbook](#use-the-active-alerts-workbook) - View active alerts by severity, type, tag, MITRE ATT&CK tactics, and location.
-- Price Estimation workbook - View monthly consolidated price estimations for Microsoft Defender for Cloud plans based on the resource telemetry in your own environment. These numbers are estimates based on retail prices and don't provide actual billing data.
-- Governance workbook - The governance report in the governance rules settings lets you track progress of the rules effective in the organization.
-- ['DevOps Security (Preview)' workbook](#use-the-devops-security-workbook) - View a customizable foundation that helps you visualize the state of your DevOps posture for the connectors you've configured.
+In Defender for Cloud, you can use integrated Azure workbooks functionality to build custom, interactive workbooks that display your security data. Defender for Cloud includes a workbooks gallery that has the following workbooks ready for you to customize:
-In addition to the built-in workbooks, you can also find other useful workbooks found under the "Community" category, which is provided as is with no SLA or support. Choose one of the supplied workbooks or create your own.
+- [Coverage workbook](#coverage-workbook): Track the coverage of Defender for Cloud plans and extensions across your environments and subscriptions.
+- [Secure Score Over Time workbook](#secure-score-over-time-workbook): Track your subscription scores and changes to recommendations for your resources.
+- [System Updates workbook](#system-updates-workbook): View missing system updates by resource, OS, severity, and more.
+- [Vulnerability Assessment Findings workbook](#vulnerability-assessment-findings-workbook): View the findings of vulnerability scans of your Azure resources.
+- [Compliance Over Time workbook](#compliance-over-time-workbook): View the status of a subscription's compliance with regulatory standards or industry standards that you select.
+- [Active Alerts workbook](#active-alerts-workbook): View active alerts by severity, type, tag, MITRE ATT&CK tactics, and location.
+- Price Estimation workbook: View monthly, consolidated price estimations for Defender for Cloud plans based on the resource telemetry in your environment. The numbers are estimates that are based on retail prices and don't represent actual billing or invoice data.
+- Governance workbook: Use the governance report in the governance rules settings to track progress of the rules that affect your organization.
+- [DevOps Security (preview) workbook](#devops-security-workbook): View a customizable foundation that helps you visualize the state of your DevOps posture for the connectors that you set up.
+Along with built-in workbooks, you can find useful workbooks in the **Community** category. These workbooks are provided as-is and have no SLA or support. You can choose one of the provided workbooks or create your own workbook.
+ > [!TIP]
-> Use the **Edit** button to customize any of the supplied workbooks to your satisfaction. When you're done editing, select **Save** and your changes will be saved to a new workbook.
+> To customize any of the workbooks, select the **Edit** button. When you're done editing, select **Save**. The changes are saved in a new workbook.
+>
+> :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-supplied-workbooks.png" alt-text="Screenshot that shows how to edit a supplied workbook to customize it for your needs.":::
>
-> :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-supplied-workbooks.png" alt-text="Editing the supplied workbooks to customize them for your particular needs.":::
-### Use the 'Coverage' workbook
+<a name="use-the-coverage-workbook"></a>
-Enabling Defender for Cloud across multiple subscriptions and environments (Azure, AWS, and GCP) can make it hard to keep track of which plans are active. This is especially true if you have multiple subscriptions and environments.
+### Coverage workbook
-The Coverage workbook allows you to keep track of which Defender for Cloud plans are active on which parts of your environments. This workbook can help you to ensure that your environments and subscriptions are fully protected. By having access to detailed coverage information, you can also identify any areas that might need other protection and take action to address those areas.
+If you enable Defender for Cloud across multiple subscriptions and environments (Azure, Amazon Web Services, and Google Cloud Platform), it can be hard to keep track of which plans are active. This is especially true if you manage many subscriptions and environments.
+The Coverage workbook helps you keep track of which Defender for Cloud plans are active in which parts of your environments. This workbook can help you ensure that your environments and subscriptions are fully protected. By having access to detailed coverage information, you can identify areas that might need more protection so that you can take action to address those areas.
-This workbook allows you to select a subscription (or all subscriptions) from the dropdown menu and view:
+
+In this workbook, you can select a subscription (or all subscriptions), and then view the following tabs:
- **Additional information**: Shows release notes and an explanation of each toggle.
-- **Relative coverage**: Shows the percentage of subscriptions/connectors that have a particular Defender for Cloud plan enabled.
+- **Relative coverage**: Shows the percentage of subscriptions or connectors that have a specific Defender for Cloud plan enabled.
- **Absolute coverage**: Shows each plan's status per subscription.
-- **Detailed coverage** - Shows additional settings that can/need to be enabled on relevant plans in order to get each plan's full value.
+- **Detailed coverage**: Shows additional settings that can be enabled or that must be enabled on relevant plans to get each plan's full value.
+
+You also can select the Azure, Amazon Web Services, or Google Cloud Platform environment in each or all subscriptions to see which plans and extensions are enabled for the environments.
-You can also select which environment (Azure, AWS or GCP) under each or all subscriptions to see which plans and extensions are enabled under that environment.
+<a name="use-the-secure-score-over-time-workbook"></a>
-### Use the 'Secure Score Over Time' workbook
+### Secure Score Over Time workbook
-This workbook uses secure score data from your Log Analytics workspace. That data needs to be exported from the continuous export tool as described in [Configure continuous export from the Defender for Cloud pages in Azure portal](continuous-export.md?tabs=azure-portal).
+The Secure Score Over Time workbook uses secure score data from your Log Analytics workspace. The data must be exported by using the continuous export tool as described in [Set up continuous export for Defender for Cloud in the Azure portal](continuous-export.md?tabs=azure-portal).
-When you set up the continuous export, set the export frequency to both **streaming updates** and **snapshots**.
+When you set up continuous export, under **Export frequency**, select both **Streaming updates** and **Snapshots (Preview)**.
> [!NOTE]
-> Snapshots get exported weekly, so you'll need to wait at least one week for the first snapshot to be exported before you can view data in this workbook.
+> Snapshots are exported weekly. There's a delay of at least one week after the first snapshot is exported before you can view data in the workbook.
> [!TIP]
-> To configure continuous export across your organization, use the supplied Azure Policy 'DeployIfNotExist' policies described in [Configure continuous export at scale](continuous-export.md?tabs=azure-policy).
+> To configure continuous export across your organization, use the provided `DeployIfNotExist` policies in Azure Policy that are described in [Set up continuous export at scale](continuous-export.md?tabs=azure-policy).
-The secure score over time workbook has five graphs for the subscriptions reporting to the selected workspaces:
+The Secure Score Over Time workbook has five graphs for the subscriptions that report to the selected workspaces:
|Graph |Example |
|--|--|
-|**Score trends for the last week and month**<br>Use this section to monitor the current score and general trends of the scores for your subscriptions.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-1.png" alt-text="Trends for secure score on the built-in workbook.":::|
-|**Aggregated score for all selected subscriptions**<br>Hover your mouse over any point in the trend line to see the aggregated score at any date in the selected time range.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-2.png" alt-text="Aggregated score for all selected subscriptions.":::|
-|**Recommendations with the most unhealthy resources**<br>This table helps you triage the recommendations that have had the most resources changed to unhealthy over the selected period.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-3.png" alt-text="Recommendations with the most unhealthy resources.":::|
-|**Scores for specific security controls**<br>Defender for Cloud's security controls is logical groupings of recommendations. This chart shows you, at a glance, the weekly scores for all of your controls.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-4.png" alt-text="Scores for your security controls over the selected time period.":::|
-|**Resources changes**<br>Recommendations with the most resources that have changed state (healthy, unhealthy, or not applicable) during the selected period are listed here. Select any recommendation from the list to open a new table listing the specific resources.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-5.png" alt-text="Recommendations with the most resources that have changed health state.":::|
+|**Score trends for the last week and month**<br>Use this section to monitor the current score and general trends of the scores for your subscriptions.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-1.png" alt-text="Screenshot that shows trends for secure score on the built-in workbook.":::|
+|**Aggregated score for all selected subscriptions**<br>Hover your mouse over any point in the trend line to see the aggregated score at any date in the selected time range.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-2.png" alt-text="Screenshot that shows an aggregated score for all selected subscriptions.":::|
+|**Recommendations with the most unhealthy resources**<br>This table helps you triage the recommendations that had the most resources that changed to an unhealthy status in the selected period.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-3.png" alt-text="Screenshot that shows recommendations that have the most unhealthy resources.":::|
+|**Scores for specific security controls**<br>The security controls in Defender for Cloud are logical groupings of recommendations. This chart shows you at a glance the weekly scores for all your controls.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-4.png" alt-text="Screenshot that shows scores for your security controls over the selected time period.":::|
+|**Resources changes**<br>Recommendations that have the most resources that changed state (healthy, unhealthy, or not applicable) during the selected period are listed here. Select any recommendation in the list to open a new table that lists the specific resources.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-5.png" alt-text="Screenshot that shows recommendations that have the most resources that changed health state during the selected period.":::|
-### Use the 'System Updates' workbook
+### System Updates workbook
-This workbook is based on the security recommendation "System updates should be installed on your machines".
+The System Updates workbook is based on the security recommendation that system updates should be installed on your machines. The workbook helps you identify machines that have updates to apply.
-The workbook helps you identify machines with outstanding updates.
+You can view the update status for selected subscriptions by:
-You can view the situation for the selected subscriptions according to:
+- A list of resources that have outstanding updates to apply.
+- A list of updates that are missing from your resources.
-- The list of resources with outstanding updates
-- The list of updates missing from your resources
+### Vulnerability Assessment Findings workbook
-### Use the 'Vulnerability Assessment Findings' workbook
-
-Defender for Cloud includes vulnerability scanners for your machines, containers in container registries, and SQL servers.
+Defender for Cloud includes vulnerability scanners for your machines, containers in container registries, and computers running SQL Server.
Learn more about using these scanners:
Findings for each resource type are reported in separate recommendations:
- [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37)
- [SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f97aa83c-9b63-4f9a-99f6-b22c4398f936)
-This workbook gathers these findings and organizes them by severity, resource type, and category.
+The Vulnerability Assessment Findings workbook gathers these findings and organizes them by severity, resource type, and category.
-### Use the 'Compliance Over Time' workbook
+### Compliance Over Time workbook
-Microsoft Defender for Cloud continually compares the configuration of your resources with requirements in industry standards, regulations, and benchmarks. Built-in standards include NIST SP 800-53, SWIFT CSP CSCF v2020, Canada Federal PBMM, HIPAA HITRUST, and more. You can select the specific standards relevant to your organization using the regulatory compliance dashboard. Learn more in [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
+Microsoft Defender for Cloud continually compares the configuration of your resources with requirements in industry standards, regulations, and benchmarks. Built-in standards include NIST SP 800-53, SWIFT CSP CSCF v2020, Canada Federal PBMM, HIPAA HITRUST, and more. You can select standards that are relevant to your organization by using the regulatory compliance dashboard. Learn more in [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
-This workbook tracks your compliance status over time with the various standards you've added to your dashboard.
+The Compliance Over Time workbook tracks your compliance status over time by using the various standards that you add to your dashboard.
-When you select a standard from the overview area of the report, the lower pane reveals a more detailed breakdown:
+When you select a standard from the overview area of the report, the lower pane displays a more detailed breakdown:
-You can keep drilling down - right down to the recommendation level - to view the resources that have passed or failed each control.
+To view the resources that passed or failed each control, you can keep drilling down, all the way to the recommendation level.
> [!TIP]
-> For each panel of the report, you can export the data to Excel with the "Export to Excel" option.
+> For each panel of the report, you can export the data to Excel by using the **Export to Excel** option.
>
-> :::image type="content" source="media/custom-dashboards-azure-workbooks/export-workbook-data.png" alt-text="Exporting compliance workbook data to Excel.":::
+> :::image type="content" source="media/custom-dashboards-azure-workbooks/export-workbook-data.png" alt-text="Screenshot that shows how to export compliance workbook data to Excel.":::
+
+<a name="use-the-active-alerts-workbook"></a>
-### Use the 'Active Alerts' workbook
+### Active Alerts workbook
-This workbook displays the active security alerts for your subscriptions on one dashboard. Security alerts are the notifications that Defender for Cloud generates when it detects threats on your resources. Defender for Cloud prioritizes, and lists the alerts, along with information needed for quick investigation and remediation.
+The Active Alerts workbook displays the active security alerts for your subscriptions on one dashboard. Security alerts are the notifications that Defender for Cloud generates when it detects threats against your resources. Defender for Cloud prioritizes and lists the alerts with the information that you need to quickly investigate and remediate.
-This workbook benefits you by letting you understand the active threats on your environment, and allows you to prioritize between the active alerts.
+This workbook benefits you by helping you be aware of and prioritize the active threats in your environment.
> [!NOTE]
-> Most workbooks use Azure Resource Graph (ARG) to query their data. For example, to display the Map View, Log Analytics workspace is used to query the data. [Continuous export](continuous-export.md) should be enabled, and export the security alerts to the Log Analytics workspace.
+> Most workbooks use Azure Resource Graph to query data. The map view, however, queries data from a Log Analytics workspace, so [continuous export](continuous-export.md) should be enabled and the security alerts exported to the Log Analytics workspace.
-You can view the active alerts by severity, resource group, or tag.
+You can view active alerts by severity, resource group, and tag.
You can also view your subscription's top alerts by attacked resources, alert types, and new alerts.
+
+To see more details about an alert, select the alert.
-You can get more details on any of these alerts by selecting it.
+The **MITRE ATT&CK tactics** tab lists alerts in the order of the kill chain and the number of alerts that the subscription has at each stage.
-The MITRE ATT&CK tactics display by the order of the kill-chain, and the number of alerts the subscription has at each stage.
+You can see all the active alerts in a table and filter by columns.
-You can see all of the active alerts in a table with the ability to filter by columns. Select an alert to view button appears.
+To see details for a specific alert, select the alert in the table, and then select the **Open Alert View** button.
-By selecting the Open Alert View button, you can see all the details of that specific alert.
+To see all alerts by location in a map view, select the **Map View** tab.
-By selecting Map View, you can also see all alerts based on their location.
+Select a location on the map to view all the alerts for that location.
-Select a location on the map to view all of the alerts for that location.
+To view the details for an alert, select an alert, and then select the **Open Alert View** button.
-You can see the details for that alert with the Open Alert View button.
+<a name="use-the-devops-security-workbook"></a>
-### Use the 'DevOps Security' workbook
+### DevOps Security workbook
-This workbook provides a customizable visual report of your DevOps security posture. You can use this workbook to view insights into your repositories with the highest number of CVEs and weaknesses, active repositories that have Advanced Security disabled, security posture assessments of your DevOps environment configurations, and much more. Customize and add your own visual reports using the rich set of data in Azure Resource Graph to fit the business needs of your security team.
+The DevOps Security workbook provides a customizable visual report of your DevOps security posture. You can use this workbook to view insights about your repositories that have the highest number of common vulnerabilities and exposures (CVEs) and weaknesses, active repositories that have Advanced Security turned off, security posture assessments of your DevOps environment configurations, and much more. Customize and add your own visual reports by using the rich set of data in Azure Resource Graph to fit the business needs of your security team.
> [!NOTE]
-> You must have a [GitHub connector](quickstart-onboard-github.md), [GitLab connector](quickstart-onboard-gitlab.md), or an [Azure DevOps connector](quickstart-onboard-devops.md), connected to your environment in order to utilize this workbook
+> To use this workbook, your environment must have a [GitHub connector](quickstart-onboard-github.md), [GitLab connector](quickstart-onboard-gitlab.md), or [Azure DevOps connector](quickstart-onboard-devops.md).
-**To deploy the workbook**:
+To deploy the workbook:
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to **Microsoft Defender for Cloud** > **Workbooks**.
+1. Go to **Microsoft Defender for Cloud** > **Workbooks**.
1. Select the **DevOps Security (Preview)** workbook.
-The workbook will load and show you the Overview tab where you can see the number of exposed secrets, code security and DevOps security. All of these findings are broken down by total for each repository and the severity.
+The workbook loads and displays the **Overview** tab. On this tab, you can see the number of exposed secrets, the code security, and DevOps security. The findings are shown by total for each repository and by severity.
-Select the Secrets tab to view the count by secret type.
+To view the count by secret type, select the **Secrets** tab.
-The Code tab displays your count findings by tool and repository and your code scanning by severity.
+The **Code** tab displays the findings count by tool and repository. It shows the results of your code scanning by severity.
-The Open Source Security (OSS) Vulnerabilities tab displays your OSS vulnerabilities by severity and the count of findings by repository.
+The **OSS Vulnerabilities** tab displays Open Source Security (OSS) vulnerabilities by severity and the count of findings by repository.
-The Infrastructure as Code tab displays your findings by tool and repository.
+The **Infrastructure as Code** tab displays your findings by tool and repository.
-The Posture tab displays your security posture by severity and repository.
+The **Posture** tab displays security posture by severity and repository.
-The Threats and Tactics tab displays the total count of threats and tactics and by repository.
+The **Threats & Tactics** tab displays the count of threats and tactics by repository and the total count.
## Import workbooks from other workbook galleries
-To move workbooks that you've built in other Azure services into your Microsoft Defender for Cloud workbooks gallery:
+To move workbooks that you build in other Azure services into your Microsoft Defender for Cloud workbook gallery:
-1. Open the target workbook.
+1. Open the workbook that you want to import.
-1. From the toolbar, select **Edit**.
+1. On the toolbar, select **Edit**.
- :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks.png" alt-text="Editing a workbook.":::
+ :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks.png" alt-text="Screenshot that shows how to edit a workbook.":::
-1. From the toolbar, select **</>** to enter the Advanced Editor.
+1. On the toolbar, select **</>** to open the advanced editor.
- :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks-advanced-editor.png" alt-text="Launching the advanced editor to get the Gallery Template JSON code.":::
+ :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks-advanced-editor.png" alt-text="Screenshot that shows how to open the advanced editor to copy the gallery template JSON code.":::
-1. Copy the workbook's Gallery Template JSON.
+1. In the workbook gallery template, select all the JSON in the file and copy it.
+
+1. Open the workbook gallery in Defender for Cloud, and then select **New** on the menu bar.
+
+1. Select **</>** to open the Advanced Editor.
+
+1. Paste the entire gallery template JSON code.
-1. Open the workbooks gallery in Defender for Cloud and from the menu bar select **New**.
-1. Select the **</>** to enter the Advanced Editor.
-1. Paste in the entire Gallery Template JSON.
1. Select **Apply**.
-1. From the toolbar, select **Save As**.
- :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks-save-as.png" alt-text="Saving the workbook to the gallery in Defender for Cloud.":::
+1. On the toolbar, select **Save As**.
+
+ :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks-save-as.png" alt-text="Screenshot that shows saving the workbook to the gallery in Defender for Cloud.":::
+
+1. To save changes to the workbook, enter or select the following information:
+
+ - A name for the workbook.
+ - The Azure region to use.
+ - Any relevant information about the subscription, resource group, and sharing.
-1. Enter the required details for saving the workbook:
- 1. A name for the workbook
- 1. The desired region
- 1. Subscription, resource group, and sharing as appropriate.
+To find the saved workbook, go to the **Recently modified workbooks** category.
-You'll find your saved workbook in the **Recently modified workbooks** category.
+## Related content
-## Next steps
+This article describes the Defender for Cloud integrated Azure workbooks page that has built-in reports and the option to build your own custom, interactive reports.
-This article described Defender for Cloud's integrated Azure Workbooks page with built-in reports and the option to build your own custom, interactive reports.
+- Learn more about [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md).
-- Learn more about [Azure Workbooks](../azure-monitor/visualize/workbooks-overview.md)
+Built-in workbooks get their data from Defender for Cloud recommendations.
-- The built-in workbooks pull their data from Defender for Cloud's recommendations. Learn about the many security recommendations in [Security recommendations - a reference guide](recommendations-reference.md)
+- Learn about the many security recommendations in [Security recommendations: A reference guide](recommendations-reference.md).
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
To learn more about implementation details such as supported operating systems,
When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and collected automatically through Azure infrastructure with no additional cost or configuration considerations. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers: - **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an AKS Security profile.-- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an AKS add-on. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an AKS add-on. It's only installed on one node in the cluster. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
:::image type="content" source="./media/defender-for-containers/architecture-aks-cluster.png" alt-text="Diagram of high-level architecture of the interaction between Microsoft Defender for Containers, Azure Kubernetes Service, and Azure Policy." lightbox="./media/defender-for-containers/architecture-aks-cluster.png":::
When you enable the agentless discovery for Kubernetes extension, the following
These components are required in order to receive the full protection offered by Microsoft Defender for Containers: -- **[Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview)** - An agent based solution that connects your clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](/azure/azure-arc/kubernetes/extensions). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
+- **[Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview)** - An agent-based solution, installed on one node in the cluster, that connects your clusters to Defender for Cloud. Defender for Cloud can then deploy the following two agents as [Arc extensions](/azure/azure-arc/kubernetes/extensions):
- **Defender agent**: The DaemonSet that is deployed on each node, collects host signals using [eBPF technology](https://ebpf.io/) and Kubernetes audit logs, to provide runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension. -- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](/azure/defender-for-cloud/kubernetes-workload-protections) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. It's only installed on one node in the cluster. For more information, see [Protect your Kubernetes workloads](/azure/defender-for-cloud/kubernetes-workload-protections) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for Arc-enabled Kubernetes clusters is a preview feature.
These components are required in order to receive the full protection offered by
When Defender for Cloud protects a cluster hosted in Elastic Kubernetes Service, the collection of audit log data is agentless. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers: - **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [AWS account's CloudWatch](https://aws.amazon.com/cloudwatch/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.-- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent-based solution, installed on one node in the cluster, that connects your clusters to Defender for Cloud. Defender for Cloud can then deploy the following two agents as [Arc extensions](/azure/azure-arc/kubernetes/extensions):
- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension.-- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. It's only installed on one node in the cluster. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for AWS EKS clusters is a preview feature.
When Defender for Cloud protects a cluster hosted in Google Kubernetes Engine, t
- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [GCP Cloud Logging](https://cloud.google.com/logging/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis. -- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your GKE clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent-based solution, installed on one node in the cluster, that connects your clusters to Defender for Cloud. Defender for Cloud can then deploy the following two agents as [Arc extensions](/azure/azure-arc/kubernetes/extensions):
- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension.-- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. It only needs to be installed on one node in the cluster. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for GCP GKE clusters is a preview feature.
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Title: Endpoint protection recommendations
+ Title: Assessment checks for endpoint detection and response solutions
description: How the endpoint protection solutions are discovered and identified as healthy. Previously updated : 06/15/2023 Last updated : 02/01/2024
-# Endpoint protection assessment and recommendations in Microsoft Defender for Cloud
-> [!NOTE]
-> As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md), before the retirement date. For more information about the roadmap for each of the features that are currently rely on Log Analytics Agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+# Assessment checks for endpoint detection and response solutions
Microsoft Defender for Cloud provides health assessments of [supported](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) versions of Endpoint protection solutions. This article explains the scenarios that lead Defender for Cloud to generate the following two recommendations: - [Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) - [Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000)
+> [!NOTE]
+> As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md), before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+ > [!TIP] > At the end of 2021, we revised the recommendation that installs endpoint protection. One of the changes affects how the recommendation displays machines that are powered off. In the previous version, machines that were turned off appeared in the 'Not applicable' list. In the newer recommendation, they don't appear in any of the resources lists (healthy, unhealthy, or not applicable). ## Windows Defender -- Defender for Cloud recommends **Endpoint protection should be installed on your machines** when [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and the result is **AMServiceEnabled: False**--- Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and any of the following occurs:-
- - Any of the following properties are false:
-
- - **AMServiceEnabled**
- - **AntispywareEnabled**
- - **RealTimeProtectionEnabled**
- - **BehaviorMonitorEnabled**
- - **IoavProtectionEnabled**
- - **OnAccessProtectionEnabled**
- - If one or both of the following properties are 7 or more:
-
- - **AntispywareSignatureAge**
- - **AntivirusSignatureAge**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and the result is **AMServiceEnabled: False** |
+| **Endpoint protection health issues should be resolved on your machines** | [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and any of the following occurs: <br><br> Any of the following properties are false: <br><br> - **AMServiceEnabled** <br> - **AntispywareEnabled** <br> - **RealTimeProtectionEnabled** <br> - **BehaviorMonitorEnabled** <br> - **IoavProtectionEnabled** <br> - **OnAccessProtectionEnabled** <br> <br> If one or both of the following properties are 7 or more: <br><br> - **AntispywareSignatureAge** <br> - **AntivirusSignatureAge** |
## Microsoft System Center endpoint protection -- Defender for Cloud recommends **Endpoint protection should be installed on your machines** when importing **SCEPMpModule ("$env:ProgramFiles\Microsoft Security Client\MpProvider\MpProvider.psd1")** and running **Get-MProtComputerStatus** results in **AMServiceEnabled = false**.--- Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when **Get-MprotComputerStatus** runs and any of the following occurs:-
- - At least one of the following properties is false:
-
- - **AMServiceEnabled**
- - **AntispywareEnabled**
- - **RealTimeProtectionEnabled**
- - **BehaviorMonitorEnabled**
- - **IoavProtectionEnabled**
- - **OnAccessProtectionEnabled**
-
- - If one or both of the following Signature Updates are greater or equal to 7:
-
- - **AntispywareSignatureAge**
- - **AntivirusSignatureAge**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | importing **SCEPMpModule ("$env:ProgramFiles\Microsoft Security Client\MpProvider\MpProvider.psd1")** and running **Get-MProtComputerStatus** results in **AMServiceEnabled = false** |
+| **Endpoint protection health issues should be resolved on your machines** | **Get-MprotComputerStatus** runs and any of the following occurs: <br><br> At least one of the following properties is false: <br><br> - **AMServiceEnabled** <br> - **AntispywareEnabled** <br> - **RealTimeProtectionEnabled** <br> - **BehaviorMonitorEnabled** <br> - **IoavProtectionEnabled** <br> - **OnAccessProtectionEnabled** <br><br> If one or both of the following Signature Updates are greater or equal to 7: <br><br> - **AntispywareSignatureAge** <br> - **AntivirusSignatureAge** |
## Trend Micro -- Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
- - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent** exists
- - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent\InstallationFolder** exists
- - The **dsa_query.cmd** file is found in the Installation Folder
- - Running **dsa_query.cmd** results with **Component.AM.mode: on - Trend Micro Deep Security Agent detected**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br><br> - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent** exists <br> - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent\InstallationFolder** exists <br> - The **dsa_query.cmd** file is found in the Installation Folder <br> - Running **dsa_query.cmd** results with **Component.AM.mode: on - Trend Micro Deep Security Agent detected** |
## Symantec endpoint protection
-Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
--- **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"**-- **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\ASRunningStatus = 1**-
-Or
--- **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"**-- **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\ASRunningStatus = 1**-
-Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
--- Check Symantec Version >= 12: Registry location: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion" -Value "PRODUCTVERSION"**-- Check Real-Time Protection status: **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\AV\Storages\Filesystem\RealTimeScan\OnOff == 1**-- Check Signature Update status: **HKLM\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LatestVirusDefsDate <= 7 days**-- Check Full Scan status: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LastSuccessfulScanDateTime <= 7 days**-- Find signature version number Path to signature version for Symantec 12: **Registry Paths+ "CurrentVersion\SharedDefs" -Value "SRTSP"**-- Path to signature version for Symantec 14: **Registry Paths+ "CurrentVersion\SharedDefs\SDSDefs" -Value "SRTSP"**-
-Registry Paths:
--- **"HKLM:\Software\Symantec\Symantec Endpoint Protection" + $Path;**-- **"HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection" + $Path**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br> <br> - **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"** <br> - **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\ASRunningStatus = 1** <br> Or <br> - **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"** <br> - **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\ASRunningStatus = 1**|
+| **Endpoint protection health issues should be resolved on your machines** | any of the following checks aren't met: <br> <br> - Check Symantec Version >= 12: Registry location: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion" -Value "PRODUCTVERSION"** <br> - Check Real-Time Protection status: **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\AV\Storages\Filesystem\RealTimeScan\OnOff == 1** <br> - Check Signature Update status: **HKLM\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LatestVirusDefsDate <= 7 days** <br> - Check Full Scan status: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LastSuccessfulScanDateTime <= 7 days** <br> - Find signature version number Path to signature version for Symantec 12: **Registry Paths+ "CurrentVersion\SharedDefs" -Value "SRTSP"** <br> - Path to signature version for Symantec 14: **Registry Paths+ "CurrentVersion\SharedDefs\SDSDefs" -Value "SRTSP"** <br><br> Registry Paths: <br> <br> - **"HKLM:\Software\Symantec\Symantec Endpoint Protection" + $Path;** <br> - **"HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection" + $Path** |
## McAfee endpoint protection for Windows
-Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
--- **HKLM:\SOFTWARE\McAfee\Endpoint\AV\ProductVersion** exists-- **HKLM:\SOFTWARE\McAfee\AVSolution\MCSHIELDGLOBAL\GLOBAL\enableoas = 1**-
-Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
--- McAfee Version: **HKLM:\SOFTWARE\McAfee\Endpoint\AV\ProductVersion >= 10**-- Find Signature Version: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "dwContentMajorVersion"**-- Find Signature date: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "szContentCreationDate" >= 7 days**-- Find Scan date: **HKLM:\Software\McAfee\Endpoint\AV\ODS -Value "LastFullScanOdsRunTime" >= 7 days**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br><br> - **HKLM:\SOFTWARE\McAfee\Endpoint\AV\ProductVersion** exists <br> - **HKLM:\SOFTWARE\McAfee\AVSolution\MCSHIELDGLOBAL\GLOBAL\enableoas = 1**|
+| **Endpoint protection health issues should be resolved on your machines** | any of the following checks aren't met: <br> <br> - McAfee Version: **HKLM:\SOFTWARE\McAfee\Endpoint\AV\ProductVersion >= 10** <br> - Find Signature Version: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "dwContentMajorVersion"** <br> - Find Signature date: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "szContentCreationDate" >= 7 days** <br> - Find Scan date: **HKLM:\Software\McAfee\Endpoint\AV\ODS -Value "LastFullScanOdsRunTime" >= 7 days** |
## McAfee Endpoint Security for Linux Threat Prevention
-Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
--- File **/opt/McAfee/ens/tp/bin/mfetpcli** exists-- **"/opt/McAfee/ens/tp/bin/mfetpcli --version"** output is: **McAfee name = McAfee Endpoint Security for Linux Threat Prevention and McAfee version >= 10**-
-Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
--- **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **Quick scan, Full scan** and both of the scans <= 7 days-- **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **DAT and engine Update time** and both of them <= 7 days-- **"/opt/McAfee/ens/tp/bin/mfetpcli --getoasconfig --summary"** returns **On Access Scan** status
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br> <br> - File **/opt/McAfee/ens/tp/bin/mfetpcli** exists <br> - **"/opt/McAfee/ens/tp/bin/mfetpcli --version"** output is: **McAfee name = McAfee Endpoint Security for Linux Threat Prevention and McAfee version >= 10** |
+| **Endpoint protection health issues should be resolved on your machines** | any of the following checks aren't met: <br> <br> - **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **Quick scan, Full scan** and both of the scans <= 7 days <br> - **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **DAT and engine Update time** and both of them <= 7 days <br> - **"/opt/McAfee/ens/tp/bin/mfetpcli --getoasconfig --summary"** returns **On Access Scan** status |
## Sophos Antivirus for Linux
-Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
--- File **/opt/sophos-av/bin/savdstatus** exits or search for customized location **"readlink $(which savscan)"**-- **"/opt/sophos-av/bin/savdstatus --version"** returns Sophos name = **Sophos Anti-Virus and Sophos version >= 9**-
-Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
--- **"/opt/sophos-av/bin/savlog --maxage=7 | grep -i "Scheduled scan .\* completed" | tail -1"**, returns a value-- **"/opt/sophos-av/bin/savlog --maxage=7 | grep "scan finished"** | tail -1", returns a value-- **"/opt/sophos-av/bin/savdstatus --lastupdate"** returns lastUpdate, which should be <= 7 days-- **"/opt/sophos-av/bin/savdstatus -v"** is equal to **"On-access scanning is running"**-- **"/opt/sophos-av/bin/savconfig get LiveProtection"** returns enabled
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br> <br> - File **/opt/sophos-av/bin/savdstatus** exists or search for customized location **"readlink $(which savscan)"** <br> - **"/opt/sophos-av/bin/savdstatus --version"** returns Sophos name = **Sophos Anti-Virus and Sophos version >= 9** |
+| **Endpoint protection health issues should be resolved on your machines** | any of the following checks aren't met: <br> <br> - **"/opt/sophos-av/bin/savlog --maxage=7 \| grep -i "Scheduled scan .\* completed" \| tail -1"**, returns a value <br> - **"/opt/sophos-av/bin/savlog --maxage=7 \| grep "scan finished"** \| tail -1", returns a value <br> - **"/opt/sophos-av/bin/savdstatus --lastupdate"** returns lastUpdate, which should be <= 7 days <br> - **"/opt/sophos-av/bin/savdstatus -v"** is equal to **"On-access scanning is running"** <br> - **"/opt/sophos-av/bin/savconfig get LiveProtection"** returns enabled |
## Troubleshoot and support
defender-for-cloud Iac Template Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-template-mapping.md
Title: Map IaC templates from code to cloud
-description: Learn how to map your Infrastructure as Code templates to your cloud resources.
+ Title: Map Infrastructure as Code templates from code to cloud
+description: Learn how to map your Infrastructure as Code (IaC) templates to your cloud resources.
Last updated 11/03/2023
# Map Infrastructure as Code templates to cloud resources
-Mapping Infrastructure as Code (IaC) templates to cloud resources ensures consistent, secure, and auditable infrastructure provisioning. It enables rapid response to security threats and a security-by-design approach. If there are misconfigurations in runtime resources, this mapping allows remediation at the template level, ensuring no drift and facilitating deployment via CI/CD methodology.
+Mapping Infrastructure as Code (IaC) templates to cloud resources helps you ensure consistent, secure, and auditable infrastructure provisioning. It supports rapid response to security threats and a security-by-design approach. You can use mapping to discover misconfigurations in runtime resources. Then, remediate at the template level to help ensure no drift and to facilitate deployment via CI/CD methodology.
## Prerequisites
-To allow Microsoft Defender for Cloud to map Infrastructure as Code template to cloud resources, you need:
+To set Microsoft Defender for Cloud to map IaC templates to cloud resources, you need:
-- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Azure DevOps](quickstart-onboard-devops.md) environment onboarded into Microsoft Defender for Cloud.
+- An Azure account with Defender for Cloud configured. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An [Azure DevOps](quickstart-onboard-devops.md) environment set up in Defender for Cloud.
- [Defender Cloud Security Posture Management (CSPM)](tutorial-enable-cspm-plan.md) enabled.-- Configure your Azure Pipelines to run [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).-- Tag your supported Infrastructure as Code templates and your cloud resources. (Open-source tools like [Yor_trace](https://github.com/bridgecrewio/yor) can be used to automatically tag Infrastructure as Code templates)
- - Supported cloud platforms: AWS, Azure, GCP.
- - Supported source code management systems: Azure DevOps.
- - Supported template languages: Azure Resource Manager, Bicep, CloudFormation, Terraform.
+- Azure Pipelines set up to run the [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- IaC templates and cloud resources set up with tag support. You can use open-source tools like [Yor_trace](https://github.com/bridgecrewio/yor) to automatically tag IaC templates.
+ - Supported cloud platforms: Microsoft Azure, Amazon Web Services, Google Cloud Platform
+ - Supported source code management systems: Azure DevOps
+ - Supported template languages: Azure Resource Manager, Bicep, CloudFormation, Terraform
> [!NOTE]
-> Microsoft Defender for Cloud will only use the following tags from Infrastructure as Code templates for mapping:
-
-> - yor_trace
-> - mapping_tag
+> Microsoft Defender for Cloud uses only the following tags from IaC templates for mapping:
+>
+> - `yor_trace`
+> - `mapping_tag`
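
For illustration, here's a minimal sketch of how a mapping tag might appear in a CloudFormation template, one of the supported template languages. The resource and the tag value are placeholders; a tool like Yor generates the `yor_trace` value automatically when it tags your templates.

```yaml
# Minimal sketch (placeholder values): a CloudFormation resource carrying a yor_trace mapping tag.
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      Tags:
        - Key: yor_trace
          # Placeholder trace ID; Yor writes a unique value per resource when it tags the template.
          Value: "00000000-0000-0000-0000-000000000000"
```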
## See the mapping between your IaC template and your cloud resources
-To see the mapping between your IaC template and your cloud resources in the [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md):
+To see the mapping between your IaC template and your cloud resources in [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md):
1. Sign in to the [Azure portal](https://portal.azure.com/).+ 1. Go to **Microsoft Defender for Cloud** > **Cloud Security Explorer**.
-1. Search for and select all your cloud resources from the drop-down menu.
-1. Select + to add other filters to your query.
-1. Add the subfilter **Provisioned by** from the category **Identity & Access**.
-1. Select **Code repositories** from the category **DevOps**.
-1. After building your query, select **Search** to run the query.
-Alternatively, you can use the built-in template named "Cloud resources provisioned by IaC templates with high severity misconfigurations".
+1. In the dropdown menu, search for and select all your cloud resources.
+
+1. To add more filters to your query, select **+**.
+
+1. In the **Identity & Access** category, add the subfilter **Provisioned by**.
+
+1. In the **DevOps** category, select **Code repositories**.
+
+1. After you build your query, select **Search** to run the query.
-![Screenshot of IaC Mapping Cloud Security Explorer template.](media/iac-template-mapping/iac-mapping.png)
+Alternatively, select the built-in template **Cloud resources provisioned by IaC templates with high severity misconfigurations**.
+ > [!NOTE]
-> Please note that mapping between your Infrastructure as Code templates to your cloud resources can take up to 12 hours to appear in the Cloud Security Explorer.
+> Mapping between your IaC templates and your cloud resources might take up to 12 hours to appear in Cloud Security Explorer.
## (Optional) Create sample IaC mapping tags
-To create sample IaC mapping tags within your code repositories, follow these steps:
+To create sample IaC mapping tags in your code repositories:
+
+1. In your repository, add an IaC template that includes tags.
+
+ You can start with a [sample template](https://github.com/microsoft/security-devops-azdevops/tree/main/samples/IaCMapping).
+
+1. To commit directly to the main branch or create a new branch for this commit, select **Save**.
+
+1. Confirm that you included the **Microsoft Security DevOps** task in your Azure pipeline.
-1. Add an **IaC template with tags** to your repository. To use an example template, see [here](https://github.com/microsoft/security-devops-azdevops/tree/main/samples/IaCMapping).
-1. Select **save** to commit directly to the main branch or create a new branch for this commit.
-1. Include the **Microsoft Security DevOps** task in your Azure pipeline.
-1. Verify that the **pipeline logs** show a finding saying **"An IaC tag(s) was found on this resource"**. This means that Defender for Cloud successfully discovered tags.
+1. Verify that pipeline logs show a finding that says **An IaC tag(s) was found on this resource**. The finding indicates that Defender for Cloud successfully discovered tags.
-## Next steps
+## Related content
- Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md).
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
Title: Discover misconfigurations in Infrastructure as Code
-description: Learn how to use DevOps security in Defender for Cloud to discover misconfigurations in Infrastructure as Code (IaC)
+ Title: Scan for misconfigurations in Infrastructure as Code
+description: Learn how to use Microsoft Security DevOps scanning with Microsoft Defender for Cloud to find misconfigurations in Infrastructure as Code (IaC) in a connected GitHub repository or Azure DevOps project.
Last updated 01/24/2023
-# Discover misconfigurations in Infrastructure as Code (IaC)
+# Scan your connected GitHub repository or Azure DevOps project
-Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps extension, you can configure the YAML configuration file to run a single tool or multiple tools. For example, you can set up the action or extension to run Infrastructure as Code (IaC) scanning tools only. This can help reduce pipeline run time.
+You can set up Microsoft Security DevOps to scan your connected GitHub repository or Azure DevOps project. Use a GitHub action or an Azure DevOps extension to run Microsoft Security DevOps only on your Infrastructure as Code (IaC) source code, and help reduce your pipeline runtime.
+
+This article shows you how to apply a template YAML configuration file to scan your connected repository or project specifically for IaC security issues by using Microsoft Security DevOps rules.
## Prerequisites -- Configure Microsoft Security DevOps for GitHub and/or Azure DevOps based on your source code management system:
- - [Microsoft Security DevOps GitHub action](github-action.md)
- - [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
-- Ensure you have an IaC template in your repository.
+- For Microsoft Security DevOps, set up the GitHub action or the Azure DevOps extension based on your source code management system:
+ - If your repository is in GitHub, set up the [Microsoft Security DevOps GitHub action](github-action.md).
+ - If you manage your source code in Azure DevOps, set up the [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- Ensure that you have an IaC template in your repository.
+
+<a name="configure-iac-scanning-and-view-the-results-in-github"></a>
-## Configure IaC scanning and view the results in GitHub
+## Set up and run a GitHub action to scan your connected IaC source code
+
+To set up an action and view scan results in GitHub:
1. Sign in to [GitHub](https://www.github.com).
-1. Navigate to **`your repository's home page`** > **.github/workflows** > **msdevopssec.yml** that was created in the [prerequisites](github-action.md#configure-the-microsoft-security-devops-github-action-1).
+1. Go to the main page of your repository.
+
+1. In the file directory, select **.github** > **workflows** > **msdevopssec.yml**.
+
+ For more information about working with an action in GitHub, see [Prerequisites](github-action.md#configure-the-microsoft-security-devops-github-action-1).
+
+1. Select the **Edit this file** (pencil) icon.
+
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/workflow-yaml.png" alt-text="Screenshot that highlights the Edit this file icon for the msdevopssec.yml file." lightbox="media/tutorial-iac-vulnerabilities/workflow-yaml.png":::
-1. Select **Edit file**.
+1. In the **Run analyzers** section of the YAML file, add this code:
- :::image type="content" source="media/tutorial-iac-vulnerabilities/workflow-yaml.png" alt-text="Screenshot that shows where to find the edit button for the msdevopssec.yml file." lightbox="media/tutorial-iac-vulnerabilities/workflow-yaml.png":::
+ ```yaml
+ with:
+ categories: 'IaC'
+ ```
-1. Under the Run Analyzers section, add:
+ > [!NOTE]
+ > Values are case sensitive.
- ```yml
- with:
- categories: 'IaC'
- ```
+ Here's an example:
- > [!NOTE]
- > Categories are case sensitive.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/add-to-yaml.png" alt-text="Screenshot that shows the information that needs to be added to the yaml file.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/add-to-yaml.png" alt-text="Screenshot that shows the information to add to the YAML file.":::
-1. Select **Start Commit**.
+1. Select **Commit changes . . .** .
1. Select **Commit changes**.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/commit-change.png" alt-text="Screenshot that shows where to select commit change on the GitHub page.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/commit-change.png" alt-text="Screenshot that shows where to select Commit changes on the GitHub page.":::
-1. (Optional) Add an IaC template to your repository. Skip if you already have an IaC template in your repository.
+1. (Optional) Add an IaC template to your repository. If you already have an IaC template in your repository, skip this step.
- For example, [commit an IaC template to deploy a basic Linux web application](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux) to your repository.
+ For example, commit an IaC template that you can use to [deploy a basic Linux web application](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux).
- 1. Select `azuredeploy.json`.
+ 1. Select the **azuredeploy.json** file.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the azuredeploy.json file is located.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the azuredeploy.json file is located.":::
- 1. Select **Raw**.
+ 1. Select **Raw**.
- 1. Copy all the information in the file.
+ 1. Copy all the information in the file, like in the following example:
```json {
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
"type": "string", "defaultValue": "AzureLinuxApp", "metadata": {
- "description": "Base name of the resource such as web app name and app service plan "
+ "description": "The base name of the resource, such as the web app name or the App Service plan."
}, "minLength": 2 },
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
"type": "string", "defaultValue": "S1", "metadata": {
- "description": "The SKU of App Service Plan "
+ "description": "The SKU of the App Service plan."
} }, "linuxFxVersion": { "type": "string", "defaultValue": "php|7.4", "metadata": {
- "description": "The Runtime stack of current web app"
+ "description": "The runtime stack of the current web app."
} }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": {
- "description": "Location for all resources."
+ "description": "The location for all resources."
} } },
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
} ```
- 1. On GitHub, navigate to your repository.
+ 1. In your GitHub repository, go to the **.github/workflows** folder.
- 1. **Select Add file** > **Create new file**.
+ 1. Select **Add file** > **Create new file**.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/create-file.png" alt-text="Screenshot that shows you where to navigate to, to create a new file." lightbox="media/tutorial-iac-vulnerabilities/create-file.png":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/create-file.png" alt-text="Screenshot that shows you how to create a new file." lightbox="media/tutorial-iac-vulnerabilities/create-file.png":::
- 1. Enter a name for the file.
+ 1. Enter a name for the file.
- 1. Paste the copied information into the file.
+ 1. Paste the copied information in the file.
- 1. Select **Commit new file**.
+ 1. Select **Commit new file**.
- The file is now added to your repository.
+ The template file is added to your repository.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/file-added.png" alt-text="Screenshot that shows that the new file you created has been added to your repository.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/file-added.png" alt-text="Screenshot that shows that the new file you created is added to your repository.":::
-1. Confirm the Microsoft Security DevOps scan completed:
- 1. Select **Actions**.
- 1. Select the workflow to see the results.
+1. Verify that the Microsoft Security DevOps scan is finished:
-1. Navigate to **Security** > **Code scanning alerts** to view the results of the scan (filter by tool as needed to see just the IaC findings).
+ 1. For the repository, select **Actions**.
-## Configure IaC scanning and view the results in Azure DevOps
+ 1. Select the workflow to see the action status.
-**To view the results of the IaC scan in Azure DevOps**:
+1. To view the results of the scan, go to **Security** > **Code scanning alerts**.
+
+ You can filter by tool to see only the IaC findings.
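
For reference, after you complete the preceding steps, the analyzer step in msdevopssec.yml might look similar to the following sketch. The step name and action reference are illustrative; keep the values your existing workflow already uses, and only add the `categories` input.

```yaml
# Illustrative sketch of the analyzer step in .github/workflows/msdevopssec.yml.
# Only the "with: categories" input is the addition; other values come from your existing workflow.
- name: Run Microsoft Security DevOps analyzers
  uses: microsoft/security-devops-action@latest   # action reference as used in your existing workflow
  id: msdo
  with:
    categories: 'IaC'   # limits the run to IaC scanning tools; values are case sensitive
```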
+
+<a name="configure-iac-scanning-and-view-the-results-in-azure-devops"></a>
+
+## Set up and run an Azure DevOps extension to scan your connected IaC source code
+
+To set up an extension and view scan results in Azure DevOps:
1. Sign in to [Azure DevOps](https://dev.azure.com/).
-1. Select the desired project
+1. Select your project.
-1. Select **Pipeline**.
+1. Select **Pipelines**.
-1. Select the pipeline where the Microsoft Security DevOps Azure DevOps Extension is configured.
+1. Select the pipeline where your Azure DevOps extension for Microsoft Security DevOps is configured.
-1. **Edit** the pipeline configuration YAML file adding the following lines:
+1. Select **Edit pipeline**.
-1. Add the following lines to the YAML file
+1. In the pipeline YAML configuration file, below the `displayName` line for the **MicrosoftSecurityDevOps@1** task, add this code:
- ```yml
- inputs:
- categories: 'IaC'
- ```
+ ```yaml
+ inputs:
+ categories: 'IaC'
+ ```
- :::image type="content" source="media/tutorial-iac-vulnerabilities/addition-to-yaml.png" alt-text="Screenshot showing you where to add this line to the YAML file.":::
+ Here's an example:
-1. Select **Save**.
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/addition-to-yaml.png" alt-text="Screenshot that shows where to add the IaC categories line in the pipeline configuration YAML file.":::
-1. (Optional) Add an IaC template to your repository. Skip if you already have an IaC template in your repository.
+1. Select **Save**.
-1. Select **Save** to commit directly to the main branch or Create a new branch for this commit.
+1. (Optional) Add an IaC template to your Azure DevOps project. If you already have an IaC template in your project, skip this step.
-1. Select **Pipeline** > **`Your created pipeline`** to view the results of the IaC scan.
+1. Choose whether to commit directly to the main branch or to create a new branch for the commit, and then select **Save**.
-1. Select any result to see the details.
+1. To view the results of the IaC scan, select **Pipelines**, and then select the pipeline you modified.
-## View details and remediation information on IaC rules included with Microsoft Security DevOps
+1. To see more details, select a specific pipeline run.
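
Putting the pieces together, the Microsoft Security DevOps task in your pipeline YAML might look similar to the following sketch. The `displayName` shown here is a placeholder for whatever your pipeline already uses; only the `inputs` section is the addition.

```yaml
# Illustrative sketch of the MicrosoftSecurityDevOps@1 task in the pipeline YAML.
# Only the inputs section is the addition; keep the rest of your existing task definition.
steps:
  - task: MicrosoftSecurityDevOps@1
    displayName: 'Microsoft Security DevOps'
    inputs:
      categories: 'IaC'   # limits the scan to IaC tools; values are case sensitive
```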
-The IaC scanning tools that are included with Microsoft Security DevOps, are [Template Analyzer](https://github.com/Azure/template-analyzer) (which contains [PSRule](https://aka.ms/ps-rule-azure)) and [Terrascan](https://github.com/tenable/terrascan).
+## View details and remediation information for applied IaC rules
-Template Analyzer runs rules on ARM and Bicep templates. You can learn more about [Template Analyzer's rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-rules.md#built-in-rules).
+The IaC scanning tools that are included with Microsoft Security DevOps are [Template Analyzer](https://github.com/Azure/template-analyzer) ([PSRule](https://aka.ms/ps-rule-azure) is included in Template Analyzer) and [Terrascan](https://github.com/tenable/terrascan).
-Terrascan runs rules on ARM, CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform templates. You can learn more about the [Terrascan rules](https://runterrascan.io/docs/policies/).
+Template Analyzer runs rules on Azure Resource Manager templates (ARM templates) and Bicep templates. For more information, see the [Template Analyzer rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-rules.md#built-in-rules).
-## Learn more
+Terrascan runs rules on ARM templates and templates for CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform. For more information, see the [Terrascan rules](https://runterrascan.io/docs/policies/).
-- Learn more about [Template Analyzer](https://github.com/Azure/template-analyzer).-- Learn more about [PSRule](https://aka.ms/ps-rule-azure).-- Learn more about [Terrascan](https://runterrascan.io/).
+To learn more about the IaC scanning tools that are included with Microsoft Security DevOps, see:
-In this tutorial you learned how to configure the Microsoft Security DevOps GitHub Action and Azure DevOps Extension to scan for Infrastructure as Code (IaC) security misconfigurations and how to view the results.
+- [Template Analyzer](https://github.com/Azure/template-analyzer)
+- [PSRule](https://aka.ms/ps-rule-azure)
+- [Terrascan](https://runterrascan.io/)
-## Next steps
+## Related content
-Learn more about [DevOps security](defender-for-devops-introduction.md).
+In this article, you learned how to set up a GitHub action and an Azure DevOps extension for Microsoft Security DevOps to scan for IaC security misconfigurations and how to view the results.
-Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
+To get more information:
-Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
+- Learn more about [DevOps security](defender-for-devops-introduction.md).
+- Learn how to [connect your GitHub repository](quickstart-onboard-github.md) to Defender for Cloud.
+- Learn how to [connect your Azure DevOps project](quickstart-onboard-devops.md) to Defender for Cloud.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 01/21/2024 Last updated : 02/01/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
+| [Changes in endpoint protection recommendations](#changes-in-endpoint-protection-recommendations) | February 1, 2024 | February 28, 2024 |
| [Change in pricing for multicloud container threat detection](#change-in-pricing-for-multicloud-container-threat-detection) | January 30, 2024 | April 2024 | | [Enforcement of Defender CSPM for Premium DevOps Security Capabilities](#enforcement-of-defender-cspm-for-premium-devops-security-value) | January 29, 2024 | March 2024 | | [Update to agentless VM scanning built-in Azure role](#update-to-agentless-vm-scanning-built-in-azure-role) |January 14, 2024 | February 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Changes in endpoint protection recommendations
+
+**Announcement date: February 1, 2024**
+
+**Estimated date of change: February 2024**
+
+As use of the Azure Monitor Agent (AMA) and the Log Analytics agent (also known as the Microsoft Monitoring Agent (MMA)) is [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269), existing endpoint recommendations that rely on those agents will be replaced with new recommendations. The new recommendations rely on [agentless machine scanning](concept-agentless-data-collection.md), which allows the recommendations to discover and assess the configuration of supported endpoint detection and response solutions and offer remediation steps if issues are found.
+
+These public preview recommendations will be deprecated.
+
+| Recommendation | Agent | Deprecation date | Replacement recommendation |
+|--|--|--|--|
+| [Endpoint protection should be installed on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) (public) | MMA/AMA | February 2024 | New agentless recommendations. |
+| [Endpoint protection health issues should be resolved on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) (public)| MMA/AMA | February 2024 | New agentless recommendations. |
+
+The current generally available recommendations will remain supported until August 2024.
+
+As part of that deprecation, we'll be introducing new agentless endpoint protection recommendations. These recommendations will be available in Defender for Servers Plan 2 and the Defender CSPM plan. They will support Azure and multicloud machines. On-premises machines are not supported.
+
+| Preliminary recommendation name | Estimated release date |
+|--|--|
+| Endpoint Detection and Response (EDR) solution should be installed on Virtual Machines | February 2024 |
+| Endpoint Detection and Response (EDR) solution should be installed on EC2s | February 2024 |
+| Endpoint Detection and Response (EDR) solution should be installed on Virtual Machines (GCP) | February 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on virtual machines | February 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on EC2s | February 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on GCP virtual machines | February 2024 |
+ ## Change in pricing for multicloud container threat detection **Announcement date: January 30, 2024**
deployment-environments Best Practice Catalog Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/best-practice-catalog-structure.md
Last updated 11/27/2023
-#customer intent: As a platform engineer, I want to structure my catalog so that Azure Deployment Environments can find and cache environment definitions efficiently.
+# Customer intent: As a platform engineer, I want to structure my catalog so that Azure Deployment Environments can find and cache environment definitions efficiently.
deployment-environments Concept Environment Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environment-yaml.md
Last updated 11/17/2023
-#customer intent: As a developer, I want to know which parameters I can assign for parameters in environment.yaml.
+# Customer intent: As a developer, I want to know which parameters I can assign in environment.yaml.
deployment-environments How To Create Environment With Azure Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-environment-with-azure-developer.md
Last updated 01/26/2023
-#customer intent: As a developer, I want to be able to create an enviroment by using AZD so that I can create my coding environment.
+# Customer intent: As a developer, I want to be able to create an environment by using AZD so that I can create my coding environment.
deployment-environments How To Schedule Environment Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-schedule-environment-deletion.md
Last updated 11/10/2023
-#customer intent: As a developer, I want automatically delete my environment on a specific date so that I can keep resources current.
+# Customer intent: As a developer, I want to automatically delete my environment on a specific date so that I can keep resources current.
deployment-environments Overview What Is Azure Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md
Azure Deployment Environments enables usage [scenarios](./concept-environments-s
Developers have the following self-service experience when working with [environments](./concept-environments-key-concepts.md#environments).
->[!NOTE]
-> Developers have a CLI-based experience to create and manage environments for Azure Deployment Environments.
- - Deploy a preconfigured environment for any stage of the development cycle. - Spin up a sandbox environment to explore Azure. - Create platform as a service (PaaS) and infrastructure as a service (IaaS) environments quickly and easily by following a few simple steps. - Deploy environments right from where they work.
+Developers create and manage environments for Azure Deployment Environments through the [developer portal](./quickstart-create-access-environments.md), with the [Azure CLI](./how-to-create-access-environments.md) or with the [Azure Developer CLI](./how-to-create-environment-with-azure-developer.md).
+ ### Platform engineering scenarios Azure Deployment Environments helps your platform engineer apply the right set of policies and settings on various types of environments, control the resource configuration that developers can create, and track environments across projects. They perform the following tasks:
Azure Deployment Environments provides the following benefits to creating, confi
Capture and share IaC templates in source control within your team or organization, to easily create on-demand environments. Promote collaboration through inner-sourcing of templates from source control repositories. - **Compliance and governance**:
-Platform engineering teams can curate environment templates to enforce enterprise security policies and map projects to Azure subscriptions, identities, and permissions by environment types.
+Platform engineering teams can curate environment definitions to enforce enterprise security policies and map projects to Azure subscriptions, identities, and permissions by environment types.
- **Project-based configurations**:
-Create and organize environment templates by the types of applications that development teams are working on, rather than using an unorganized list of templates or a traditional IaC setup.
+Create and organize environment definitions by the types of applications that development teams are working on, rather than using an unorganized list of templates or a traditional IaC setup.
- **Worry-free self-service**: Enable your development teams to quickly and easily create app infrastructure (PaaS, serverless, and more) resources by using a set of preconfigured templates. You can also track costs on these resources to stay within your budget.
dev-box How To Configure Intune Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-intune-conditional-access-policies.md
Last updated 12/20/2023
-#customer intent: As a platform engineer, I want to configure conditional access policies in Microsoft Intune so that I can control access to dev boxes.
+# Customer intent: As a platform engineer, I want to configure conditional access policies in Microsoft Intune so that I can control access to dev boxes.
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
In this tutorial, you download and use a remote desktop client application to connect to a dev box.
-Remote Desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a Remote Desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android.
+Remote desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a remote desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android.
+
+> [!TIP]
+> Many remote desktop apps allow you to [use multiple monitors](tutorial-configure-multiple-monitors.md) when you connect to your dev box.
Alternately, you can connect to your dev box through the browser from the Microsoft Dev Box developer portal.
event-grid Event Hubs Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-hubs-integration.md
Title: 'Tutorial: Send Event Hubs data to data warehouse - Event Grid' description: Shows how to migrate Event Hubs captured data from Azure Blob Storage to Azure Synapse Analytics, specifically a dedicated SQL pool, using Azure Event Grid and Azure Functions. Previously updated : 11/14/2022 Last updated : 01/31/2024 ms.devlang: csharp
To complete this tutorial, you must have:
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Visual studio](https://www.visualstudio.com/vs/) with workloads for: .NET desktop development, Azure development, ASP.NET and web development, Node.js development, and Python development. - Download the [EventHubsCaptureEventGridDemo sample project](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) to your computer.
- - WindTurbineDataGenerator – A simple publisher that sends sample wind turbine data to a capture-enabled event hub
- - FunctionDWDumper – An Azure Function that receives a notification from Azure Event Grid when an Avro file is captured to the Azure Storage blob. It receives the blob's URI path, reads its contents, and pushes this data to Azure Synapse Analytics (dedicated SQL pool).
+ - WindTurbineDataGenerator – A simple publisher that sends sample wind turbine data to an event hub with the Capture feature enabled.
+ - FunctionDWDumper – An Azure function that receives a notification from Azure Event Grid when an Avro file is captured to the Azure Storage blob. It receives the blob's URI path, reads its contents, and pushes this data to Azure Synapse Analytics (dedicated SQL pool).
## Deploy the infrastructure In this step, you deploy the required infrastructure with a [Resource Manager template](https://github.com/Azure/azure-docs-json-samples/blob/master/event-grid/EventHubsDataMigration.json). When you deploy the template, the following resources are created:
In this step, you deploy the required infrastructure with a [Resource Manager te
} ``` 2. Deploy all the resources mentioned in the previous section (event hub, storage account, functions app, Azure Synapse Analytics) by running the following CLI command:
- 1. Copy and paste the command into the Cloud Shell window. Alternatively, you may want to copy/paste into an editor of your choice, set values, and then copy the command to the Cloud Shell.
+ 1. Copy and paste the command into the Cloud Shell window. Alternatively, you can copy/paste into an editor of your choice, set values, and then copy the command to the Cloud Shell. If you see an error due to an Azure resource name, delete the resource group, fix the name, and retry the command.
> [!IMPORTANT] > Specify values for the following entities before running the command:
In this step, you deploy the required infrastructure with a [Resource Manager te
--template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/event-grid/EventHubsDataMigration.json \ --parameters eventHubNamespaceName=<event-hub-namespace> eventHubName=hubdatamigration sqlServerName=<sql-server-name> sqlServerUserName=<user-name> sqlServerPassword=<password> sqlServerDatabaseName=<database-name> storageName=<unique-storage-name> functionAppName=<app-name> ```
- 3. Press **ENTER** in the Cloud Shell window to run the command. This process may take a while since you're creating a bunch of resources. In the result of the command, ensure that there have been no failures.
+ 3. Press **ENTER** in the Cloud Shell window to run the command. This process might take a while since you're creating a bunch of resources. In the result of the command, ensure that there have been no failures.
1. Close the Cloud Shell by selecting the **Cloud Shell** button in the portal (or) **X** button in the top-right corner of the Cloud Shell window. ### Verify that the resources are created
First, get the publish profile for the Functions app from the Azure portal. Then
1. On the **Resource Group** page, select the **Azure Functions app** in the list of resources.
- :::image type="content" source="media/event-hubs-functions-synapse-analytics/select-function-app.png" alt-text="Screenshot showing the selection of the function app in the list of resources for a resource group.":::
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/select-function-app.png" lightbox="media/event-hubs-functions-synapse-analytics/select-function-app.png" alt-text="Screenshot showing the selection of the function app in the list of resources for a resource group.":::
1. On the **Function App** page for your app, select **Get publish profile** on the command bar.
- :::image type="content" source="media/event-hubs-functions-synapse-analytics/get-publish-profile.png" alt-text="Screenshot showing the selection of the **Get Publish Profile** button on the command bar of the function app page.":::
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/get-publish-profile.png" lightbox="media/event-hubs-functions-synapse-analytics/get-publish-profile.png" alt-text="Screenshot showing the selection of the **Get Publish Profile** button on the command bar of the function app page.":::
1. Download and save the file into the **FunctionEGDDumper** subfolder of the **EventHubsCaptureEventGridDemo** folder. ### Use the publish profile to publish the Functions app
First, get the publish profile for the Functions app from the Azure portal. Then
:::image type="content" source="media/event-hubs-functions-synapse-analytics/import-profile.png" alt-text="Screenshot showing the selection **Import Profile** on the **Publish** dialog box."::: 1. On the **Import profile** tab, select the publish settings file that you saved earlier in the **FunctionEGDWDumper** folder, and then select **Finish**. 1. When Visual Studio has configured the profile, select **Publish**. Confirm that the publishing succeeded.
-2. In the web browser that has the **Azure Function** page open, select **Functions** on the left menu. Confirm that the **EventGridTriggerMigrateData** function shows up in the list. If you don't see it, try publishing from Visual Studio again, and then refresh the page in the portal.
+2. In the web browser that has the **Azure Function** page open, select **Functions** in the middle pane. Confirm that the **EventGridTriggerMigrateData** function shows up in the list. If you don't see it, try publishing from Visual Studio again, and then refresh the page in the portal.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-function-creation.png" alt-text="Screenshot showing the confirmation of function creation.":::
After publishing the function, you're ready to subscribe to the event.
1. Verify that the event subscription is created. Switch to the **Event Subscriptions** tab on the **Events** page for the Event Hubs namespace. :::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-event-subscription.png" alt-text="Screenshot showing the Event Subscriptions tab on the Events page." lightbox="media/event-hubs-functions-synapse-analytics/confirm-event-subscription.png":::
-1. Select the App Service plan (not the App Service) in the list of resources in the resource group.
## Run the app to generate data You've finished setting up your event hub, dedicated SQL pool (formerly SQL Data Warehouse), Azure function app, and event subscription. Before running an application that generates data for the event hub, you need to configure a few values.
This section helps you with monitoring or troubleshooting the solution.
### View captured data in the storage account 1. Navigate to the resource group and select the storage account used for capturing event data.
-1. On the **Storage account** page, select **Storage Explorer (preview**) on the left menu.
+1. On the **Storage account** page, select **Storage browser** on the left menu.
1. Expand **BLOB CONTAINERS**, and select **windturbinecapture**. 1. Open the folder named same as your **Event Hubs namespace** in the right pane. 1. Open the folder named same as your event hub (**hubdatamigration**).
This section helps you with monitoring or troubleshooting the solution.
### Verify that the Event Grid trigger invoked the function 1. Navigate to the resource group and select the function app.
-1. Select **Functions** on the left menu.
+1. Select the **Functions** tab in the middle pane.
1. Select the **EventGridTriggerMigrateData** function from the list. 1. On the **Function** page, select **Monitor** on the left menu. 1. Select **Configure** to configure application insights to capture invocation logs.
expressroute Configure Expressroute Private Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/configure-expressroute-private-peering.md
Last updated 01/02/2024
-#customer intent: As a network engineer, I want to establish a private connection from my on-premises network to my Azure virtual network using ExpressRoute.
+# Customer intent: As a network engineer, I want to establish a private connection from my on-premises network to my Azure virtual network using ExpressRoute.
# Tutorial: Establish a private connection from on-premises to an Azure virtual network using ExpressRoute
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
See the recommendation for [High availability and failover with Azure ExpressRou
Yes. Office 365 GCC service endpoints are reachable through the Azure US Government ExpressRoute. However, you first need to open a support ticket on the Azure portal to provide the prefixes you intend to advertise to Microsoft. Your connectivity to Office 365 GCC services will be established after the support ticket is resolved.
+### Can I have ExpressRoute Private Peering in an Azure Government environment with Virtual Network Gateways in Azure commercial cloud?
+
+No, it's not possible to establish ExpressRoute private peering in an Azure Government environment with a virtual network gateway in Azure commercial cloud environments. Furthermore, the scope of ExpressRoute Government Microsoft peering is limited to public IPs within Azure government regions and doesn't extend to the broader ranges of commercial public IPs.
+ ## Route filters for Microsoft peering ### Are Azure service routes advertised when I first configure Microsoft peering?
expressroute Expressroute Howto Macsec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-macsec.md
Follow these steps to begin the configuration:
``` > [!NOTE]
- > CKN must be an even-length string up to 64 hexadecimal digits (0-9, A-F).
- >
- > CAK length depends on cipher suite specified:
- > * For GcmAes128 and GcmAesXpn128, the CAK must be an even-length string with 32 hexadecimal digits (0-9, A-F).
- > * For GcmAes256 and GcmAesXpn256, the CAK must be an even-length string with 64 hexadecimal digits (0-9, A-F).
+ > * CKN must be an even-length string up to 64 hexadecimal digits (0-9, A-F).
+ > * CAK length depends on cipher suite specified:
+ > * For GcmAes128 and GcmAesXpn128, the CAK must be an even-length string with 32 hexadecimal digits (0-9, A-F).
+ > * For GcmAes256 and GcmAesXpn256, the CAK must be an even-length string with 64 hexadecimal digits (0-9, A-F).
+ > * For CAK, the full length of the key must be used. If the key is shorter than the required length, zeros (`0`s) are appended to the end of the key to meet the length requirement. For example, a CAK of 1234 becomes 12340000... for both the 128-bit and 256-bit ciphers. (A short illustrative sketch follows this note.)
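For illustration only, here's a minimal Python sketch of the padding rule in the note above. The helper name and the cipher-to-length mapping are assumptions made for the example; they aren't part of the MACsec configuration commands.

```python
# Hedged sketch: right-pad a hexadecimal CAK with zeros to the length the
# selected cipher suite requires, mirroring the rule described in the note.
CAK_HEX_DIGITS = {
    "GcmAes128": 32,
    "GcmAesXpn128": 32,
    "GcmAes256": 64,
    "GcmAesXpn256": 64,
}

def pad_cak(cak: str, cipher: str) -> str:
    """Return the CAK padded with trailing zeros to the required hex length."""
    required = CAK_HEX_DIGITS[cipher]
    if len(cak) > required:
        raise ValueError(f"CAK longer than {required} hex digits for {cipher}")
    if any(c not in "0123456789abcdefABCDEF" for c in cak):
        raise ValueError("CAK must contain only hexadecimal digits")
    return cak.ljust(required, "0")

print(pad_cak("1234", "GcmAes128"))  # "1234" followed by 28 zeros
```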
1. Grant the user identity the authorization to perform the `GET` operation.
frontdoor Classic Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/classic-overview.md
Last updated 08/09/2023
-# customer intent: As an IT admin, I want to learn about Front Door and what I can use it for.
+# Customer intent: As an IT admin, I want to learn about Front Door and what I can use it for.
# What is Azure Front Door (classic)?
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Title: How to disable events and delete events enabled workspaces - Azure Health Data Services
-description: Learn how to disable events and delete events enabled workspaces.
+ Title: Disable events for the FHIR or DICOM service in Azure Health Data Services
+description: Disable events for the FHIR or DICOM service in Azure Health Data Services by deleting an event subscription. Learn why and how to stop sending notifications from your data and resources.
-+ Previously updated : 09/26/2023 Last updated : 01/31/2024
-# How to disable events and delete event enabled workspaces
+# Disable events
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)]
-In this article, learn how to disable events and delete events enabled workspaces.
+Events in Azure Health Data Services allow you to monitor and respond to changes in your data and resources. By creating an event subscription, you can specify the conditions and actions for sending notifications to various endpoints.
-## Disable events
+However, there may be situations where you want to temporarily or permanently stop receiving notifications from an event subscription. For example, you might want to pause notifications during maintenance or testing, or delete the event subscription if you no longer need it.
-To disable events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
+To disable events from sending notifications for an **Event Subscription**, you need to delete the subscription.
-1. Select the **Event Subscription** to be deleted. In this example, we're selecting an Event Subscription named **fhir-events**.
+1. In the Azure portal on the left pane, select **Events**.
- :::image type="content" source="media/disable-delete-workspaces/select-event-subscription.png" alt-text="Screenshot of Events Subscriptions and select event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-event-subscription.png":::
+1. Select **Event Subscriptions**.
-2. Select **Delete** and confirm the **Event Subscription** deletion.
+1. Select the **Event Subscription** you want to disable notifications for. In the example, the event subscription is named **azuredocsdemo-fhir-events-subscription**.
- :::image type="content" source="media/disable-delete-workspaces/select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-subscription-delete.png":::
+ :::image type="content" source="media/disable-delete-workspaces/select-event-subscription.png" alt-text="Screenshot showing selection of event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-event-subscription.png":::
-3. If you have multiple **Event Subscriptions**, follow the steps to delete the **Event Subscriptions** so that no **Event Subscriptions** remain.
+1. Choose **Delete**.
- :::image type="content" source="media/disable-delete-workspaces/no-event-subscriptions-found.png" alt-text="Screenshot of Event Subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/no-event-subscriptions-found.png":::
+ :::image type="content" source="media/disable-delete-workspaces/select-subscription-delete-sml.png" alt-text="Screenshot showing confirmation of the event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-subscription-delete-lrg.png":::
-> [!NOTE]
-> The FHIR service will automatically go into an **Updating** status to disable events when a full delete of **Event Subscriptions** is executed. The FHIR service will remain online while the operation is completing, however, you won't be able to make any further configuration changes to the FHIR service until the updating has completed.
+1. If there are multiple event subscriptions, repeat these steps to delete each one until the message **No Event Subscriptions Found** is displayed in the **Name** field.
-## Delete events enabled workspaces
+ :::image type="content" source="media/disable-delete-workspaces/no-event-subscriptions-found-sml.png" alt-text="Screenshot showing deletion of all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/no-event-subscriptions-found-lrg.png":::
-To avoid errors and successfully delete events enabled workspaces, follow these steps and in this specific order:
+> [!NOTE]
+> When you delete all event subscriptions, the FHIR or DICOM service disables events and goes into **Updating** status. The FHIR or DICOM service stays online during the update, but you can't change the configuration until it completes.
-1. Delete all workspace associated child resources (for example: DICOM services, FHIR services, and MedTech services).
-2. Delete all workspace associated **Event Subscriptions**.
-3. Delete workspace.
+## Delete events-enabled workspaces
-## Next steps
+To delete events-enabled workspaces without errors, do these steps in this exact order:
+
+1. Delete all child resources associated with the workspace (for example, FHIR&reg; services, DICOM&reg; services, and MedTech services).
-In this article, you learned how to disable events and delete events enabled workspaces.
+1. [Delete all event subscriptions](#disable-events) associated with the workspace.
-To learn about how to troubleshoot events, see
+1. Delete the workspace.
+
+## Next steps
-> [!div class="nextstepaction"]
-> [Troubleshoot events](events-troubleshooting-guide.md)
+[Troubleshoot events](events-troubleshooting-guide.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
Title: Frequently asked questions about events - Azure Health Data Services
-description: Learn about the frequently asked questions about events.
+ Title: Events FAQ for Azure Health Data Services
+description: Get answers to common questions about the events capability in the FHIR and DICOM services in Azure Health Data Services. Find out how events work, what types of events are supported, and how to subscribe to events by using Azure Event Grid.
-+ Previously updated : 07/11/2023 Last updated : 01/31/2024
-# Frequently asked questions about events
+# Events FAQ
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)]
-## Events: The basics
+Events let you subscribe to data changes in the FHIR&reg; or DICOM&reg; service and get notified through Azure Event Grid. You can use events to trigger workflows, automate tasks, send alerts, and more. In this FAQ, you'll find answers to some common questions about events.
-## Can I use events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
+**Can I use events with a non-Microsoft FHIR or DICOM service?**
-No. The Azure Health Data Services events feature only currently supports the Azure Health Data Services FHIR and DICOM services.
+No. The Events capability only supports the Azure Health Data Services FHIR and DICOM services.
-## What FHIR resource changes does events support?
+**What FHIR resource changes are supported by events?**
-Events are generated from the following FHIR service types:
+Events are generated from these FHIR service types:
-* **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
+- **FhirResourceCreated**. The event emitted after a FHIR resource is created.
-* **FhirResourceUpdated** - The event emitted after a FHIR resource gets updated successfully.
+- **FhirResourceUpdated**. The event emitted after a FHIR resource is updated.
-* **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully.
+- **FhirResourceDeleted**. The event emitted after a FHIR resource is soft deleted.
-For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
+For more information about delete types in the FHIR service, see [FHIR REST API capabilities for Azure Health Data Services](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
-## Does events support FHIR bundles?
+**Does events support FHIR bundles?**
-Yes. The events feature is designed to emit notifications of data changes at the FHIR resource level.
+Yes. The events capability emits notifications of data changes at the FHIR resource level.
-Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-type.html) in the following ways:
+Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-type.html):
-* **Batch**: An event is emitted for each successful data change operation in a bundle. If one of the operations generates an error, no event is emitted for that operation. For example: the batch bundle contains five operations, however, there's an error with one of the operations. Events are emitted for the four successful operations with no event emitted for the operation that generated an error.
+- **Batch**. An event is emitted for each successful data change operation in a bundle. If one of the operations generates an error, no event is emitted for that operation. For example, if a batch bundle contains five operations and one of them generates an error, events are emitted for the four successful operations and no event is emitted for the failed operation.
-* **Transaction**: An event is emitted for each successful bundle operation as long as there are no errors. If there are any errors within a transaction bundle, then no events are emitted. For example: the transaction bundle contains five operations, however, there's an error with one of the operations. No events are emitted for that bundle.
+- **Transaction**. An event is emitted for each successful bundle operation as long as there are no errors. If there are any errors within a transaction bundle, no events are emitted. For example, if a transaction bundle contains five operations and one of them generates an error, no events are emitted for that bundle (a short counting sketch follows the note below).
> [!NOTE]
-> Events are not sent in the sequence of the data operations in the FHIR bundle.
+> Events aren't sent in the sequence of the data operations in the FHIR bundle.
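The following minimal Python sketch is illustrative only; the function and its inputs are assumptions for the example, not an API of the FHIR service. It simply restates the batch and transaction counting rules above:

```python
def expected_event_count(bundle_type: str, entry_statuses: list[str]) -> int:
    """Estimate how many events a processed bundle emits, per the rules above."""
    successes = sum(1 for status in entry_statuses if status.startswith("2"))
    if bundle_type == "batch":
        # One event per successful operation; failed operations emit nothing.
        return successes
    if bundle_type == "transaction":
        # All or nothing: any failure means no events for the whole bundle.
        return successes if successes == len(entry_statuses) else 0
    raise ValueError(f"Unsupported bundle type: {bundle_type}")

print(expected_event_count("batch", ["201", "200", "500", "200", "200"]))        # 4
print(expected_event_count("transaction", ["201", "200", "500", "200", "200"]))  # 0
```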
-## What DICOM image changes does events support?
+**What DICOM image changes does events support?**
Events are generated from the following DICOM service types:
-* **DicomImageCreated** - The event emitted after a DICOM image gets created successfully.
+- **DicomImageCreated**. The event emitted after a DICOM image is created.
-* **DicomImageDeleted** - The event emitted after a DICOM image gets deleted successfully.
+- **DicomImageDeleted**. The event emitted after a DICOM image is deleted.
-* **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully.
+- **DicomImageUpdated**. The event emitted after a DICOM image is updated. For more information, see [Update DICOM files](../dicom/update-files.md).
-## What is the payload of an events message?
+**What is the payload of an events message?**
-For a detailed description of the events message structure and both required and nonrequired elements, see [Events message structures](events-message-structure.md).
+For a description of the events message structure and required and nonrequired elements, see [Events message structures](events-message-structure.md).
-## What is the throughput for the events messages?
+**What is the throughput for events messages?**
-The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace.
+The throughput of the FHIR or DICOM service and the Event Grid governs the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace.
-## How am I charged for using events?
+**How am I charged for using events?**
There are no extra charges for using [Azure Health Data Services events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
-## How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately?
+**How do I subscribe separately to multiple FHIR or DICOM services in the same workspace?**
-You can use the Event Grid filtering feature. There are unique identifiers in the event message payload to differentiate different accounts and workspaces. You can find a global unique identifier for workspace in the `source` field, which is the Azure Resource ID. You can locate the unique FHIR account name in that workspace in the `data.resourceFhirAccount` field. You can locate the unique DICOM account name in that workspace in the `data.serviceHostName` field. When you create a subscription, you can use the filtering operators to select the events you want to get in that subscription.
+Use the Event Grid filtering feature. There are unique identifiers in the event message payload to differentiate accounts and workspaces. You can find a globally unique identifier for the workspace in the `source` field, which is the Azure Resource ID. You can locate the unique FHIR account name in that workspace in the `data.resourceFhirAccount` field. You can locate the unique DICOM account name in the workspace in the `data.serviceHostName` field. When you create a subscription, use the filtering operators to select the events you want to include in the subscription. (A small client-side routing sketch follows the screenshot.)
:::image type="content" source="media\event-grid\event-grid-filters.png" alt-text="Screenshot of the Event Grid filters tab." lightbox="media\event-grid\event-grid-filters.png":::
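If you also want to separate events in your own subscriber code, a minimal sketch like the following shows which payload fields to key on. The function name and routing scheme are assumptions for illustration; the field names are the same ones you reference in the subscription filters shown in the screenshot.

```python
def route_event(event: dict) -> str:
    """Build a routing key from the workspace and account identifiers in an event."""
    data = event.get("data", {})
    # FHIR events carry the account name in data.resourceFhirAccount;
    # DICOM events carry it in data.serviceHostName.
    account = data.get("resourceFhirAccount") or data.get("serviceHostName") or "unknown"
    # The source field is the Azure Resource ID of the workspace.
    return f"{event.get('source', 'unknown')}/{account}"
```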
-## Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts?
+**Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts?**
-Yes. We recommend that you use different subscribers for each individual FHIR or DICOM service to process in isolated scopes.
+Yes. We recommend that you use different subscribers for each FHIR or DICOM service to enable processing in isolated scopes.
-## Is Event Grid compatible with HIPAA and HITRUST compliance obligations?
+**Is the Event Grid compatible with HIPAA and HITRUST compliance requirements?**
-Yes. Event Grid supports customer's Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
+Yes. Event Grid supports Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
-## What is the expected time to receive an events message?
+**How long does it take to receive an events message?**
-On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) has been met.
+On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) is reached.
-## Is it possible to receive duplicate events messages?
+**Is it possible to receive duplicate events messages?**
-Yes. The Event Grid guarantees at least one events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. In this situation, the Event Grid considers that as a delivery failure and resends the events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
+Yes. The Event Grid guarantees at-least-once delivery of events messages in push mode. There might be cases where the event delivery request returns a transient failure status code. In this situation, the Event Grid considers it a delivery failure and resends the events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
-Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID or the combination of all fields in the `data` property of the message content are unique per each event. The developer can rely on them to deduplicate.
+Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID, or the combination of all fields in the `data` property of the message content, is unique for each event. You can rely on these values to deduplicate.
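As a minimal illustration of that recommendation (the handler and in-memory set are assumptions for the example; a production subscriber would persist processed IDs durably):

```python
processed_ids: set[str] = set()

def handle_event(event: dict) -> None:
    """Process an Event Grid event idempotently by deduplicating on its id."""
    event_id = event["id"]
    if event_id in processed_ids:
        return  # Duplicate delivery: already handled, so skip it.
    processed_ids.add(event_id)
    # ... process the FHIR or DICOM change described in event["data"] ...
```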
-## More frequently asked questions
-
-[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
-
-[FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml)
-
-[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
-
-[FAQs about Azure Health Data Services MedTech service](../iot/iot-connector-faqs.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Your transparent gateway is now configured and ready to start forwarding telemet
## Provision a downstream device
-IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
+IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS to provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python installed and internet connectivity. Check the [Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python/blob/main/README.md) for current Python version requirements (a rough, illustrative sketch of the provisioning code follows this paragraph). The [Azure Cloud Shell](https://shell.azure.com/) has Python pre-installed:
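Purely for orientation, here's a hedged Python sketch of the kind of provisioning code these steps build up to. It assumes the `azure-iot-device` package, placeholder values for the ID scope and enrollment group key, and the standard HMAC-SHA256 derivation of a per-device key; it isn't the exact script used in the steps below.

```python
import base64
import hashlib
import hmac

from azure.iot.device import ProvisioningDeviceClient

# Placeholder values: substitute your IoT Central application's ID scope
# and the enrollment group's primary key.
ID_SCOPE = "<your ID scope>"
GROUP_KEY = "<enrollment group primary key>"
DEVICE_ID = "thermostat1"

def derive_device_key(device_id: str, group_key: str) -> str:
    """Derive a per-device symmetric key from the group key (HMAC-SHA256)."""
    signing_key = base64.b64decode(group_key)
    signature = hmac.new(signing_key, device_id.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(signature.digest()).decode("utf-8")

client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id=DEVICE_ID,
    id_scope=ID_SCOPE,
    symmetric_key=derive_device_key(DEVICE_ID, GROUP_KEY),
)
print(client.register().status)  # "assigned" on success
```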
1. Run the following command to install the `azure.iot.device` module:
iot-operations Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/reference/glossary.md
Last updated 01/10/2024
-#customer intent: As a user of Azure IoT Operations, I want learn about the terminology associated with Azure IoT Operations so that I can use the terminology correctly.
+# Customer intent: As a user of Azure IoT Operations, I want to learn about the terminology associated with Azure IoT Operations so that I can use the terminology correctly.
iot-operations Tutorial Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/view-analyze-data/tutorial-anomaly-detection.md
description: Learn how to detect anomalies in real time in your manufacturing pr
Previously updated : 12/18/2023 Last updated : 02/01/2024 #CustomerIntent: As an OT, I want to configure my Azure IoT Operations deployment to detect anomalies in real time in my manufacturing process.
By default, the anomaly detection service uses preset estimated control limits.
## Transform and enrich the measurement data
-<!-- TODO: Clarify here where the anomaly detection takes place -->
- To transform the measurement data from your production lines into a structure that the anomaly detector can use, you use Data Processor pipelines. In this tutorial, you create three pipelines:
To create the ERP reference data pipeline that ingests the data from the HTTP en
| Field | Value | |-|--|
- | Name | `erp-input` |
+ | Name | `HTTP Endpoint - ERP data` |
| Method | `GET` | | URL | `http://callout-svc-http:3333/ref_data` | | Authentication | `None` |
To create the ERP reference data pipeline that ingests the data from the HTTP en
1. Select **Add destination** and then select **Reference datasets**.
+1. Name the stage _Reference dataset - erp-data_.
+ 1. Select **erp-data** in the **Dataset** field, and select **Apply**. 1. Select **Save** to save the pipeline.
To create the _opcua-anomaly-pipeline_ pipeline:
| Field | Value | |--||
+ | Name | `MQ - ContosoLLC/#` |
| Broker | `tls://aio-mq-dmqtt-frontend:8883` | | Topic | `ContosoLLC/#` | | Data format | `JSON` | Select **Apply**. The simulated production line assets send measurements to the MQ broker in the cluster. This input stage configuration subscribes to all the topics under the `ContosoLLC` topic in the MQ broker. This topic receives measurement data from the Redmond, Seattle, and Tacoma sites.
-1. Add a **Transform** stage after the source stage with the following JQ expressions. This transform reorganizes the data and makes it easier to read:
+1. Add a **Transform** stage after the source stage. Name the stage _Transform - Reorganize message_ and add the following JQ expressions. This transform reorganizes the data and makes it easier to read:
```jq .payload[0].Payload |= with_entries(.value |= .Value) |
To create the _opcua-anomaly-pipeline_ pipeline:
Select **Apply**.
-1. Use the **Stages** list on the left to add an **Enrich** stage after the transform stage, and select it from the pipeline diagram. This stage enriches the measurements from the simulated production line assets with reference data from the `erp-data` dataset. This stage uses a condition to determine when to add the ERP data. Open the **Add condition** options and add the following information:
+1. Use the **Stages** list on the left to add an **Enrich** stage after the transform stage, and select it from the pipeline diagram. Name the stage _Enrich - Add ERP data_. This stage enriches the measurements from the simulated production line assets with reference data from the `erp-data` dataset. This stage uses a condition to determine when to add the ERP data. Open the **Add condition** options and add the following information:
| Field | Value | |-|-|
To create the _opcua-anomaly-pipeline_ pipeline:
Select **Apply**.
-1. Add a **Transform** stage after the enrich stage with the following JQ expressions. This transform stage reorganizes the data and makes it easier to read. These JQ expressions move the enrichment data to the same flat path as the real-time data to make it easier to export to Azure Data Explorer:
+1. Add a **Transform** stage after the enrich stage with the following JQ expressions. Name the stage _Transform - Flatten ERP data_. This transform stage reorganizes the data and makes it easier to read. These JQ expressions move the enrichment data to the same flat path as the real-time data to make it easier to export to Azure Data Explorer:
```jq .payload.Payload |= . + .enrich |
To create the _opcua-anomaly-pipeline_ pipeline:
| Field | Value | |--||
- | Name | `Call out HTTP - Anomaly` |
+ | Name | `Call out HTTP - Anomaly detection` |
| Method | `POST` | | URL | `http://anomaly-svc:3333/anomaly` | | Authentication | None |
To create the _opcua-anomaly-pipeline_ pipeline:
| Field | Value | |-||
+ | Name | `MQ - processed-output` |
| Broker | `tls://aio-mq-dmqtt-frontend:8883` | | Topic | `processed-output` | | Data format | `JSON` |
The next step is to create a Data Processor pipeline that sends the transformed
1. In the pipeline diagram, select **Configure source** and then select **MQ**. Enter the information from the following table:
- | Field | Value |
- |-|-|
- | Name | processed-mq-data |
- | Broker | tls://aio-mq-dmqtt-frontend:8883 |
- | Topic | processed-output |
- | Data Format | JSON |
+ | Field | Value |
+ |-||
+ | Name | `MQ - processed-output` |
+ | Broker | `tls://aio-mq-dmqtt-frontend:8883` |
+ | Topic | `processed-output` |
+ | Data Format | `JSON` |
Select **Apply**.
The next step is to create a Data Processor pipeline that sends the transformed
1. To connect the source and destination stages, select the red dot at the bottom of the source stage and drag it to the red dot at the top of the destination stage.
-1. Select **Add destination** and then select Azure Data Explorer.
-
-1. Use the information in the following table to configure the destination stage:
-
- | Field | Value |
- |-|-|
- | Cluster URL | To find this value, navigate to your cluster at [Azure Data Explorer](https://dataexplorer.azure.com) and select the **Edit connection** icon next to your cluster name in the left pane. |
- | Database | `bakery_ops` |
- | Table | `edge_data` |
- | Authentication | Service principal |
- | Tenant ID | The tenant ID you made a note of when you created the service principal. |
- | Client ID | The app ID you made a note of when you created the service principal. |
- | Secret | `AIOFabricSecret` |
- | Batching > Batch time | `5s` |
- | Batching > Batch path | `.payload.payload` |
- | Column > Name | `AssetID` |
- | Column > Path | `.assetId` |
- | Column > Name | `Timestamp` |
- | Column > Path | `.sourceTimestamp` |
- | Column > Name | `Name` |
- | Column > Path | `.assetName` |
- | Column > Name | `SerialNumber` |
- | Column > Path | `.serialNumber` |
- | Column > Name | `Status` |
- | Column > Path | `.machineStatus` |
- | Column > Name | `Maintenance` |
- | Column > Path | `.maintenanceStatus` |
- | Column > Name | `Location` |
- | Column > Path | `.site` |
- | Column > Name | `OperatingTime` |
- | Column > Path | `.operatingTime` |
- | Column > Name | `Humidity` |
- | Column > Path | `.humidity` |
- | Column > Name | `HumidityAnomalyFactor` |
- | Column > Path | `.humidityAnomalyFactor` |
- | Column > Name | `HumidityAnomaly` |
- | Column > Path | `.humidityAnomaly` |
- | Column > Name | `Temperature` |
- | Column > Path | `.temperature` |
- | Column > Name | `TemperatureAnomalyFactor` |
- | Column > Path | `.temperatureAnomalyFactor` |
- | Column > Name | `TemperatureAnomaly` |
- | Column > Path | `.temperatureAnomaly` |
- | Column > Name | `Vibration` |
- | Column > Path | `.vibration` |
- | Column > Name | `VibrationAnomalyFactor` |
- | Column > Path | `.vibrationAnomalyFactor` |
- | Column > Name | `VibrationAnomaly` |
- | Column > Path | `.vibrationAnomaly` |
+1. Select **Add destination** and then select **Azure Data Explorer**. Select the **Advanced** tab and then paste in the following configuration:
+
+ ```json
+ {
+ "displayName": "Azure Data Explorer - bakery_ops",
+ "type": "output/dataexplorer@v1",
+ "viewOptions": {
+ "position": {
+ "x": 0,
+ "y": 432
+ }
+ },
+ "clusterUrl": "https://your-cluster.northeurope.kusto.windows.net/",
+ "database": "bakery_ops",
+ "table": "edge_data",
+ "authentication": {
+ "type": "servicePrincipal",
+ "tenantId": "your tenant ID",
+ "clientId": "your client ID",
+ "clientSecret": "AIOFabricSecret"
+ },
+ "batch": {
+ "time": "5s",
+ "path": ".payload.payload"
+ },
+ "columns": [
+ {
+ "name": "AssetID",
+ "path": ".assetId"
+ },
+ {
+ "name": "Timestamp",
+ "path": ".sourceTimestamp"
+ },
+ {
+ "name": "Name",
+ "path": ".assetName"
+ },
+ {
+ "name": "SerialNumber",
+ "path": ".serialNumber"
+ },
+ {
+ "name": "Status",
+ "path": ".machineStatus"
+ },
+ {
+ "name": "Maintenance",
+ "path": ".maintenanceStatus"
+ },
+ {
+ "name": "Location",
+ "path": ".site"
+ },
+ {
+ "name": "OperatingTime",
+ "path": ".operatingTime"
+ },
+ {
+ "name": "Humidity",
+ "path": ".humidity"
+ },
+ {
+ "name": "HumidityAnomalyFactor",
+ "path": ".humidityAnomalyFactor"
+ },
+ {
+ "name": "HumidityAnomaly",
+ "path": ".humidityAnomaly"
+ },
+ {
+ "name": "Temperature",
+ "path": ".temperature"
+ },
+ {
+ "name": "TemperatureAnomalyFactor",
+ "path": ".temperatureAnomalyFactor"
+ },
+ {
+ "name": "TemperatureAnomaly",
+ "path": ".temperatureAnomaly"
+ },
+ {
+ "name": "Vibration",
+ "path": ".vibration"
+ },
+ {
+ "name": "VibrationAnomalyFactor",
+ "path": ".vibrationAnomalyFactor"
+ },
+ {
+ "name": "VibrationAnomaly",
+ "path": ".vibrationAnomaly"
+ }
+ ]
+ }
+ ```
- Select **Apply**.
+1. Then navigate to the **Basic** tab and fill in the following fields by using the information you made a note of previously:
+
+ | Field | Value |
+ |--|-|
+ | Cluster URL | To find this value, navigate to your cluster at [Azure Data Explorer](https://dataexplorer.azure.com) and select the **Edit connection** icon next to your cluster name in the left pane. |
+ | Tenant ID | The tenant ID you made a note of when you created the service principal. |
+ | Client ID | The app ID you made a note of when you created the service principal. |
+ | Secret | `AIOFabricSecret` |
+
+ Select **Apply**.
1. Select **Save** to save the pipeline.
To visualize anomalies and process data, you can use Azure Managed Grafana. Use
1. On the **Add data source** page, search for and select **Azure Data Explorer Datasource**. 1. In the **Connection Details** section, add your Azure Data Explorer cluster URI.
-
+ 1. In the **Authentication** section, select **App Registration** and enter your service principal details. You made a note of these values when you created your service principal.
-
+ 1. To test the connection to the Azure Data Explorer database, select **Save & test**. You should see a **Success** indicator. Now that your Grafana instance is connected to your Azure Data Explorer database, you can build a dashboard:
iot-operations Tutorial Overall Equipment Effectiveness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/view-analyze-data/tutorial-overall-equipment-effectiveness.md
description: Learn how to calculate overall equipment and effectiveness and powe
Previously updated : 12/18/2023 Last updated : 02/01/2024 #CustomerIntent: As an OT, I want to configure my Azure IoT Operations deployment to calculate overall equipment effectiveness and power consumption for my manufacturing process.
To create the _production-data-reference_ pipeline that ingests the data from th
| Field | Value | |-|--|
- | Name | `HTTP Endpoint - prod` |
+ | Name | `HTTP Endpoint - production data` |
| Method | `GET` | | URL | `http://callout-svc-http:3333/productionData` | | Authentication | `None` |
To create the _production-data-reference_ pipeline that ingests the data from th
| API Request – Request Body | `{}` | | Request Interval | `1m` |
- Select **Apply**.
+ Select **Apply**.
1. Select **Add stages** and then select **Delete** to delete the middle stage.
To create the _production-data-reference_ pipeline that ingests the data from th
1. Select **Add destination** and then select **Reference datasets**.
+1. Name the stage _Reference dataset - production-data_.
+ 1. Select **production-data** in the **Dataset** field, and select **Apply**. 1. Select **Save** to save the pipeline.
To create the _operations-data-reference_ pipeline that ingests the data from th
| Field | Value | |-|--|
- | Name | `HTTP Endpoint - operator` |
+ | Name | `HTTP Endpoint - operations data` |
| Method | `GET` | | URL | `http://callout-svc-http:3333/operatorData` | | Authentication | `None` |
To create the _operations-data-reference_ pipeline that ingests the data from th
| API Request – Request Body | `{}` | | Request Interval | `1m` |
- Select **Apply**.
+ Select **Apply**.
1. Select **Add stages** and then select **Delete** to delete the middle stage.
To create the _operations-data-reference_ pipeline that ingests the data from th
1. Select **Add destination** and then select **Reference datasets**.
+1. Name the stage _Reference dataset - operations-data_.
+ 1. Select **operations-data** in the **Dataset** field, select **Apply**. 1. Select **Save** to save the pipeline.
To create the _oee-process-pipeline_ pipeline:
| Field | Value | |--||
+ | Name | `MQ - Contoso/#` |
| Broker | `tls://aio-mq-dmqtt-frontend:8883` | | Topic | `Contoso/#` | | Data format | `JSON` | Select **Apply**. The simulated production line assets send measurements to the MQ broker in the cluster. This input stage configuration subscribes to all the topics under the `Contoso` topic in the MQ broker.
-1. Use the **Stages** list on the left to add a **Transform** stage after the source stage with the following JQ expressions. This transform creates a flat, readable view of the message and extracts the `Line` and `Site` information from the topic:
+1. Use the **Stages** list on the left to add a **Transform** stage after the source stage. Name the stage _Transform - flatten message_ and add the following JQ expressions. This transform creates a flat, readable view of the message and extracts the `Line` and `Site` information from the topic:
```jq .payload[0].Payload |= with_entries(.value |= .Value) |
To create the _oee-process-pipeline_ pipeline:
Select **Apply**.
-1. Use the **Stages** list on the left to add an **Aggregate** stage after the transform stage and select it. In this pipeline, you use the aggregate stage to down sample the measurements from the production line assets. You configure the stage to aggregate data for 10 seconds. Then for the relevant data, calculate the average or pick the latest value. Select the **Advanced** tab in the aggregate stage and paste in the following configuration: <!-- TODO: Need to double check this - can we avoid error associated with "next"? -->
+1. Use the **Stages** list on the left to add an **Aggregate** stage after the transform stage and select it. Name the stage _Aggregate - down sample measurements_. In this pipeline, you use the aggregate stage to down sample the measurements from the production line assets. You configure the stage to aggregate data for 10 seconds. Then for the relevant data, calculate the average or pick the latest value. Select the **Advanced** tab in the aggregate stage and paste in the following configuration:
```json {
To create the _oee-process-pipeline_ pipeline:
1. Use the **Stages** list on the left to add a **Call out HTTP** stage after the aggregate stage and select it. This HTTP call out stage calls a custom module running in the Kubernetes cluster that exposes an HTTP API. The module calculates the shift based on the current time. To configure the stage, select **Add condition** and enter the information from the following table:
- | Field | Value |
- |--|-|
- | Name | Call out HTTP - Shift |
- | Method | POST |
- | URL | http://shift-svc-http:3333 |
- | Authentication | None |
- | API Request - Data format | JSON |
- | API Request - Path | .payload |
- | API Response - Data format | JSON |
- | API Response - Path | .payload |
+ | Field | Value |
+ |--||
+ | Name | `Call out HTTP - Fetch shift data` |
+ | Method | `POST` |
+ | URL | `http://shift-svc-http:3333` |
+ | Authentication | `None` |
+ | API Request - Data format | `JSON` |
+ | API Request - Path | `.payload` |
+ | API Response - Data format | `JSON` |
+ | API Response - Path | `.payload` |
Select **Apply**.
1. Use the **Stages** list on the left to add an **Enrich** stage after the HTTP call out stage and select it. This stage enriches the measurements from the simulated production line assets with reference data from the _operations-data_ dataset. This stage uses a condition to determine when to add the operations data. Open the **Add condition** options and add the following information:
- | Field | Value |
- |--|-|
- | Dataset | operations-data |
- | Output path | .payload.operatorData |
- | Input path | .payload.shift |
- | Property | Shift |
- | Operator | Key match |
+ | Field | Value |
+ |--||
+ | Name | `Enrich - Operations data` |
+ | Dataset | `operations-data` |
+ | Output path | `.payload.operatorData` |
+ | Input path | `.payload.shift` |
+ | Property | `Shift` |
+ | Operator | `Key match` |
Select **Apply**.
1. Use the **Stages** list on the left to add another **Enrich** stage after the first enrich stage and select it. This stage enriches the measurements from the simulated production line assets with reference data from the _production-data_ dataset. Open the **Add condition** options and add the following information:
- | Field | Value |
- |--|-|
- | Dataset | production-data |
- | Output path | .payload.productionData |
- | Input path | .payload.Line |
- | Property | Line |
- | Operator | Key match |
+ | Field | Value |
+ |--||
+ | Name | `Enrich - Production data` |
+ | Dataset | `production-data` |
+ | Output path | `.payload.productionData` |
+ | Input path | `.payload.Line` |
+ | Property | `Line` |
+ | Operator | `Key match` |
Select **Apply**.
-1. Use the **Stages** list on the left to add another **Transform** stage after the enrich stage and select it. Add the following JQ expressions:
+1. Use the **Stages** list on the left to add another **Transform** stage after the enrich stage and select it. Name the stage _Transform - flatten enrichment data_. Add the following JQ expressions:
```json .payload |= . + .operatorData |
To create the _oee-process-pipeline_ pipeline:
1. Use the **Destinations** tab on the left to select **MQ** for the output stage, and select the stage. Add the following configuration:
- | Field | Value |
- |-|-|
- | Broker | tls://aio-mq-dmqtt-frontend:8883 |
- | Topic | Oee-processed-output |
- | Data format | JSON |
- | Path | .payload |
+ | Field | Value |
+ |-||
+ | Name | `MQ - Oee-processed-output` |
+ | Broker | `tls://aio-mq-dmqtt-frontend:8883` |
+ | Topic | `Oee-processed-output` |
+ | Data format | `JSON` |
+ | Path | `.payload` |
Select **Apply**.
The next step is to create a Data Processor pipeline that sends the transformed
1. Back in the [Azure IoT Operations](https://iotoperations.azure.com) portal, navigate to **Data pipelines** and select **Create pipeline**.
+1. Select the title of the pipeline on the top left corner, rename it to _oee-fabric_, and **Apply** the change.
+ 1. In the pipeline diagram, select **Configure source** and then select **MQ**. Use the information from the following table to configure it:
- | Field | Value |
- |-|-|
- | Name | processed-oee-data |
- | Broker | tls://aio-mq-dmqtt-frontend:8883 |
- | Topic | Oee-processed-output |
- | Data Format | JSON |
+ | Field | Value |
+ |-||
+ | Name | `MQ - Oee-processed-output` |
+ | Broker | `tls://aio-mq-dmqtt-frontend:8883` |
+ | Topic | `Oee-processed-output` |
+ | Data Format | `JSON` |
Select **Apply**.
The next step is to create a Data Processor pipeline that sends the transformed
```json {
- "displayName": "Node - 26cdc2",
+ "displayName": "Fabric Lakehouse - OEE table",
"type": "output/fabric@v1", "viewOptions": { "position": {
The next step is to create a Data Processor pipeline that sends the transformed
Select **Apply**.
-1. Save the pipeline as **oee-fabric**.
+1. To save your pipeline, select **Save**. It may take a few minutes for the pipeline to deploy to your cluster, so make sure it's finished before you proceed.
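As a rough check while the pipeline deploys, if you have kubectl access to the cluster you can watch the Azure IoT Operations workloads. This sketch assumes the default `azure-iot-operations` namespace; adjust it if your deployment uses a different one:

```console
# All pods should eventually report a Running status and be ready.
kubectl get pods -n azure-iot-operations
```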
## View your measurement data in Microsoft Fabric
iot Iot Overview Analyze Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-analyze-visualize.md
Last updated 04/11/2023 -
-# As a solution builder, I want a high-level overview of the options for analyzing and visualizing device data in an IoT solution.
+# Customer intent: As a solution builder, I want a high-level overview of the options for analyzing and visualizing device data in an IoT solution.
# Analyze and visualize your IoT data
iot Iot Overview Device Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-connectivity.md
Last updated 03/20/2023
- template-overview - ignite-2023
-# As a solution builder or device developer I want a high-level overview of the issues around device infrastructure and connectivity so that I can easily find relevant content.
+# Customer intent: As a solution builder or device developer I want a high-level overview of the issues around device infrastructure and connectivity so that I can easily find relevant content.
# Device infrastructure and connectivity
iot Iot Overview Device Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-development.md
Last updated 03/20/2023 -
-# As a solution builder or device developer I want a high-level overview of the issues around device development so that I can easily find relevant content.
+# Customer intent: As a solution builder or device developer I want a high-level overview of the issues around device development so that I can easily find relevant content.
# IoT device development
iot Iot Overview Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-management.md
Last updated 03/20/2023 -
-# As a solution builder or device developer I want a high-level overview of the issues around device management and control so that I can easily find relevant content.
+# Customer intent: As a solution builder or device developer I want a high-level overview of the issues around device management and control so that I can easily find relevant content.
# Device management and control
iot Iot Overview Message Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-message-processing.md
Last updated 04/03/2023 -
-# As a solution builder or device developer I want a high-level overview of the message processing in IoT solutions so that I can easily find relevant content for my scenario.
+# Customer intent: As a solution builder or device developer I want a high-level overview of the message processing in IoT solutions so that I can easily find relevant content for my scenario.
# Message processing in an IoT solution
iot Iot Overview Scalability High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-scalability-high-availability.md
Last updated 05/18/2023 -
-# As a solution builder, I want a high-level overview of the options for scalability, high availability, and disaster recovery in an IoT solution so that I can easily find relevant content for my scenario.
+# Customer intent: As a solution builder, I want a high-level overview of the options for scalability, high availability, and disaster recovery in an IoT solution so that I can easily find relevant content for my scenario.
# IoT solution scalability, high availability, and disaster recovery
iot Iot Overview Solution Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-extensibility.md
Last updated 04/03/2023 -
-# As a solution builder, I want a high-level overview of the options for extending an IoT solution so that I can easily find relevant content for my scenario.
+# Customer intent: As a solution builder, I want a high-level overview of the options for extending an IoT solution so that I can easily find relevant content for my scenario.
# Extend your IoT solution
iot Iot Overview Solution Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-management.md
Last updated 05/04/2023
-# As a solution builder, I want a high-level overview of the options for managing an IoT solution so that I can easily find relevant content for my scenario.
+# Customer intent: As a solution builder, I want a high-level overview of the options for managing an IoT solution so that I can easily find relevant content for my scenario.
# Manage your IoT solution
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md
To manage control plane permissions for the Managed HSM resource, you must use [
|Managed HSM Policy Administrator| Grants permissions to create and delete role assignments.|4bd23610-cdcf-4971-bdee-bdc562cc28e4|
|Managed HSM Crypto Auditor|Grants read permissions to read (but not use) key attributes.|2c18b078-7c48-4d3a-af88-5a3a1b3f82b3|
|Managed HSM Crypto Service Encryption User| Grants permissions to use a key for service encryption. |33413926-3206-4cdd-b39a-83574fe37a17|
-|Managed HSM Backup| Grants permissions to perform single-key or whole-HSM backup.|7b127d3c-77bd-4e3e-bbe0-dbb8971fa7f8|
|Managed HSM Crypto Service Release User| Grants permissions to release a key to a trusted execution environment. |21dbd100-6940-42c2-9190-5d6cb909625c|
+|Managed HSM Backup| Grants permissions to perform single-key or whole-HSM backup.|7b127d3c-77bd-4e3e-bbe0-dbb8971fa7f8|
+|Managed HSM Restore| Grants permissions to perform single-key or whole-HSM restore. |6efe6056-5259-49d2-8b3d-d3d73544b20b|
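For example, to allow a principal to perform restores, you can assign the new restore role on the Managed HSM data plane with the Azure CLI. This is a minimal sketch; the HSM name and assignee are placeholders, and `--scope /` grants the role for the whole HSM:

```azurecli
az keyvault role assignment create \
  --hsm-name <your-managed-hsm-name> \
  --role "Managed HSM Restore" \
  --assignee "user@contoso.com" \
  --scope /
```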
## Permitted operations
To manage control plane permissions for the Managed HSM resource, you must use [
> - All the data action names have the prefix **Microsoft.KeyVault/managedHsm**, which is omitted in the table for brevity.
> - All role names have the prefix **Managed HSM**, which is omitted in the following table for brevity.
-|Data action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption User | Backup | Crypto Auditor| Crypto Service Released User|
-||::|::|::|::|::|::|::|::|
-|**Security domain management**|||||||||
-|/securitydomain/download/action|X||||||||
-|/securitydomain/upload/action|X||||||||
-|/securitydomain/upload/read|X||||||||
-|/securitydomain/transferkey/read|X||||||||
-|**Key management**|||||||||
-|/keys/read/action|||X||X||X||
-|/keys/write/action|||X||||||
-|/keys/rotate/action|||X||||||
-|/keys/create|||X||||||
-|/keys/delete|||X||||||
-|/keys/deletedKeys/read/action||X|||||||
-|/keys/deletedKeys/recover/action||X|||||||
-|/keys/deletedKeys/delete||X|||||X||
-|/keys/backup/action|||X|||X|||
-|/keys/restore/action|||X||||||
-|/keys/release/action|||X|||||X |
-|/keys/import/action|||X||||||
-|**Key cryptographic operations**|||||||||
-|/keys/encrypt/action|||X||||||
-|/keys/decrypt/action|||X||||||
-|/keys/wrap/action|||X||X||||
-|/keys/unwrap/action|||X||X||||
-|/keys/sign/action|||X||||||
-|/keys/verify/action|||X||||||
-|**Role management**|||||||||
-|/roleAssignments/read/action|X|X|X|X|||X||
-|/roleAssignments/write/action|X|X||X|||||
-|/roleAssignments/delete/action|X|X||X|||||
-|/roleDefinitions/read/action|X|X|X|X|||X||
-|/roleDefinitions/write/action|X|X||X|||||
-|/roleDefinitions/delete/action|X|X||X|||||
-|**Backup and restore management**|||||||||
-|/backup/start/action|X|||||X|||
-|/backup/status/action|X|||||X|||
-|/restore/start/action|X||||||||
-|/restore/status/action|X||||||||
+|Data action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption User | Backup | Crypto Auditor | Crypto Service Release User | Restore|
+||::|::|::|::|::|::|::|::|::|
+|**Security domain management**||||||||||
+|/securitydomain/download/action|X|||||||||
+|/securitydomain/upload/action|X|||||||||
+|/securitydomain/upload/read|X|||||||||
+|/securitydomain/transferkey/read|X|||||||||
+|**Key management**||||||||||
+|/keys/read/action|||X||X||X|||
+|/keys/write/action|||X|||||||
+|/keys/rotate/action|||X|||||||
+|/keys/create|||X|||||||
+|/keys/delete|||X|||||||
+|/keys/deletedKeys/read/action||X||||||||
+|/keys/deletedKeys/recover/action||X||||||||
+|/keys/deletedKeys/delete||X|||||X|||
+|/keys/backup/action|||X|||X||||
+|/keys/restore/action|||X||||||X|
+|/keys/release/action|||X|||||X||
+|/keys/import/action|||X|||||||
+|**Key cryptographic operations**||||||||||
+|/keys/encrypt/action|||X|||||||
+|/keys/decrypt/action|||X|||||||
+|/keys/wrap/action|||X||X|||||
+|/keys/unwrap/action|||X||X|||||
+|/keys/sign/action|||X|||||||
+|/keys/verify/action|||X|||||||
+|**Role management**||||||||||
+|/roleAssignments/read/action|X|X|X|X|||X|||
+|/roleAssignments/write/action|X|X||X||||||
+|/roleAssignments/delete/action|X|X||X||||||
+|/roleDefinitions/read/action|X|X|X|X|||X|||
+|/roleDefinitions/write/action|X|X||X||||||
+|/roleDefinitions/delete/action|X|X||X||||||
+|**Backup and restore management**||||||||||
+|/backup/start/action|X|||||X||||
+|/backup/status/action|X|||||X||||
+|/restore/start/action|X||||||||X|
+|/restore/status/action|X||||||||X|
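To inspect these role definitions, including their permitted data actions, directly from your own HSM, you can list them with the Azure CLI (a sketch; the HSM name is a placeholder):

```azurecli
az keyvault role definition list --hsm-name <your-managed-hsm-name> -o table
```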
## Next steps
load-balancer Load Balancer Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md
Last updated 09/27/2023
-#customer-intent: As an cloud engineer with basic Load Balancer services, I need guidance and direction on migrating my workloads off basic to standard SKUs
+# Customer intent: As a cloud engineer with basic Load Balancer services, I need guidance and direction on migrating my workloads from basic to standard SKUs
# Upgrading from basic Load Balancer - Guidance
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
Last updated 10/10/2023
-#customer intent: As a network engineer, I want to understand how to configure health probes for Azure Load Balancer so that I can detect application failures, manage load, and plan for downtime.
+# Customer intent: As a network engineer, I want to understand how to configure health probes for Azure Load Balancer so that I can detect application failures, manage load, and plan for downtime.
# Azure Load Balancer health probes
load-balancer Quickstart Load Balancer Standard Internal Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-terraform.md
+
+ Title: "Quickstart: Create an internal load balancer - Terraform"
+
+description: This quickstart shows how to create an internal load balancer by using Terraform.
++++++ Last updated : 01/02/2024++
+#Customer intent: I want to create an internal load balancer by using Terraform so that I can load balance internal traffic to VMs.
++
+# Quickstart: Create an internal load balancer to load balance internal traffic to VMs using Terraform
+
+This quickstart shows you how to deploy a standard internal load balancer, two backend virtual machines, and a test virtual machine by using Terraform. Additional resources include Azure Bastion, a NAT gateway, a virtual network, and the required subnets.
++
+> [!div class="checklist"]
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create an Azure Virtual Network using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network)
+> * Create an Azure subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet)
+> * Create an Azure public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip)
+> * Create an Azure Load Balancer using [azurerm_lb](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/lb)
+> * Create an Azure network interface using [azurerm_network_interface](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface)
+> * Create an Azure network interface load balancer backend address pool association using [azurerm_network_interface_backend_address_pool_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_backend_address_pool_association)
+> * Create an Azure Linux Virtual Machine using [azurerm_linux_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine)
+> * Create an Azure Virtual Machine Extension using [azurerm_virtual_machine_extension](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_extension)
+> * Create an Azure NAT Gateway using [azurerm_nat_gateway](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/nat_gateway)
+> * Create an Azure Bastion using [azurerm_bastion_host](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/bastion_host)
+
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform).
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ ```
+ terraform {
+   required_version = ">=0.12"
+
+   required_providers {
+     azapi = {
+       source = "azure/azapi"
+       version = "~>1.5"
+     }
+     azurerm = {
+       source = "hashicorp/azurerm"
+       version = "~>2.0"
+     }
+     random = {
+       source = "hashicorp/random"
+       version = "~>3.0"
+     }
+   }
+ }
+
+ provider "azurerm" {
+   features {}
+ }
+ ```
+
+1. Create a file named `main.tf` and insert the following code:
+
+ ```
+ resource "random_string" "my_resource_group" {
+   length  = 8
+   upper   = false
+  special = false
+ }
+
+ # Create Resource Group
+ resource "azurerm_resource_group" "my_resource_group" {
+  name     = "${var.public_ip_name}-${random_string.my_resource_group.result}"
+  location = var.resource_group_location
+ }
+
+ # Create Virtual Network
+ resource "azurerm_virtual_network" "my_virtual_network" {
+   name = var.virtual_network_name
+   address_space = ["10.0.0.0/16"]
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+ }
+
+ # Create a subnet in the Virtual Network
+ resource "azurerm_subnet" "my_subnet" {
+   name = var.subnet_name
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   virtual_network_name = azurerm_virtual_network.my_virtual_network.name
+   address_prefixes = ["10.0.1.0/24"]
+ }
+
+ # Create a subnet named as "AzureBastionSubnet" in the Virtual Network for creating Azure Bastion
+ resource "azurerm_subnet" "my_bastion_subnet" {
+   name = "AzureBastionSubnet"
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   virtual_network_name = azurerm_virtual_network.my_virtual_network.name
+   address_prefixes = ["10.0.2.0/24"]
+ }
+
+ # Create Network Security Group and rules
+ resource "azurerm_network_security_group" "my_nsg" {
+   name = var.network_security_group_name
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+
+   security_rule {
+     name = "ssh"
+     priority = 1022
+     direction = "Inbound"
+     access = "Allow"
+     protocol = "Tcp"
+     source_port_range = "*"
+     destination_port_range = "22"
+     source_address_prefix = "*"
+     destination_address_prefix = "10.0.1.0/24"
+   }
+
+   security_rule {
+     name = "web"
+     priority = 1080
+     direction = "Inbound"
+     access = "Allow"
+     protocol = "Tcp"
+     source_port_range = "*"
+     destination_port_range = "80"
+     source_address_prefix = "*"
+     destination_address_prefix = "10.0.1.0/24"
+   }
+ }
+
+ # Associate the Network Security Group to the subnet
+ resource "azurerm_subnet_network_security_group_association" "my_nsg_association" {
+   subnet_id = azurerm_subnet.my_subnet.id
+   network_security_group_id = azurerm_network_security_group.my_nsg.id
+ }
+
+ # Create Public IPs
+ resource "azurerm_public_ip" "my_public_ip" {
+   count = 2
+   name = "${var.public_ip_name}-${count.index}"
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   allocation_method = "Static"
+   sku = "Standard"
+ }
+
+ # Create a NAT Gateway for outbound internet access of the Virtual Machines in the Backend Pool of the Load Balancer
+ resource "azurerm_nat_gateway" "my_nat_gateway" {
+   name = var.nat_gateway
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   sku_name = "Standard"
+ }
+
+ # Associate one of the Public IPs to the NAT Gateway
+ resource "azurerm_nat_gateway_public_ip_association" "my_nat_gateway_ip_association" {
+   nat_gateway_id = azurerm_nat_gateway.my_nat_gateway.id
+   public_ip_address_id = azurerm_public_ip.my_public_ip[0].id
+ }
+
+ # Associate the NAT Gateway to subnet
+ resource "azurerm_subnet_nat_gateway_association" "my_nat_gateway_subnet_association" {
+   subnet_id = azurerm_subnet.my_subnet.id
+   nat_gateway_id = azurerm_nat_gateway.my_nat_gateway.id
+ }
+
+ # Create Network Interfaces
+ resource "azurerm_network_interface" "my_nic" {
+   count = 3
+   name = "${var.network_interface_name}-${count.index}"
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+
+   ip_configuration {
+     name = "ipconfig-${count.index}"
+     subnet_id = azurerm_subnet.my_subnet.id
+     private_ip_address_allocation = "Dynamic"
+     primary = true
+   }
+ }
+
+ # Create Azure Bastion for accessing the Virtual Machines
+ resource "azurerm_bastion_host" "my_bastion" {
+   name = var.bastion_name
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   sku = "Standard"
+
+   ip_configuration {
+     name = "ipconfig"
+     subnet_id = azurerm_subnet.my_bastion_subnet.id
+     public_ip_address_id = azurerm_public_ip.my_public_ip[1].id
+   }
+ }
+
+ # Associate Network Interface to the Backend Pool of the Load Balancer
+ resource "azurerm_network_interface_backend_address_pool_association" "my_nic_lb_pool" {
+   count = 2
+   network_interface_id = azurerm_network_interface.my_nic[count.index].id
+   ip_configuration_name = "ipconfig-${count.index}"
+   backend_address_pool_id = azurerm_lb_backend_address_pool.my_lb_pool.id
+ }
+
+ # Create Virtual Machine
+ resource "azurerm_linux_virtual_machine" "my_vm" {
+   count = 3
+   name = "${var.virtual_machine_name}-${count.index}"
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   network_interface_ids = [azurerm_network_interface.my_nic[count.index].id]
+   size = var.virtual_machine_size
+
+   os_disk {
+     name = "${var.disk_name}-${count.index}"
+     caching = "ReadWrite"
+     storage_account_type = var.redundancy_type
+   }
+
+   source_image_reference {
+     publisher = "Canonical"
+     offer = "0001-com-ubuntu-server-jammy"
+     sku = "22_04-lts-gen2"
+     version = "latest"
+   }
+
+   admin_username = var.username
+   admin_password = var.password
+   disable_password_authentication = false
+
+ }
+
+ # Enable virtual machine extension and install Nginx
+ resource "azurerm_virtual_machine_extension" "my_vm_extension" {
+   count = 2
+   name = "Nginx"
+   virtual_machine_id = azurerm_linux_virtual_machine.my_vm[count.index].id
+   publisher = "Microsoft.Azure.Extensions"
+   type = "CustomScript"
+   type_handler_version = "2.0"
+
+   settings = <<SETTINGS
+  {
+   "commandToExecute": "sudo apt-get update && sudo apt-get install nginx -y && echo \"Hello World from $(hostname)\" > /var/www/html/index.html && sudo systemctl restart nginx"
+  }
+ SETTINGS
+
+ }
+
+ # Create an Internal Load Balancer
+ resource "azurerm_lb" "my_lb" {
+   name = var.load_balancer_name
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   sku = "Standard"
+
+   frontend_ip_configuration {
+     name = "frontend-ip"
+     subnet_id = azurerm_subnet.my_subnet.id
+     private_ip_address_allocation = "Dynamic"
+   }
+ }
+
+ resource "azurerm_lb_backend_address_pool" "my_lb_pool" {
+   loadbalancer_id = azurerm_lb.my_lb.id
+   name = "test-pool"
+ }
+
+ resource "azurerm_lb_probe" "my_lb_probe" {
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   loadbalancer_id = azurerm_lb.my_lb.id
+   name = "test-probe"
+   port = 80
+ }
+
+ resource "azurerm_lb_rule" "my_lb_rule" {
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   loadbalancer_id = azurerm_lb.my_lb.id
+   name = "test-rule"
+   protocol = "Tcp"
+   frontend_port = 80
+   backend_port = 80
+   disable_outbound_snat = true
+   frontend_ip_configuration_name = "frontend-ip"
+   probe_id = azurerm_lb_probe.my_lb_probe.id
+   backend_address_pool_ids = [azurerm_lb_backend_address_pool.my_lb_pool.id]
+ }
+ ```
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ ```
+ variable "resource_group_location" {
+   type = string
+   default = "eastus"
+   description = "Location of the resource group."
+ }
+
+ variable "resource_group_name" {
+   type = string
+   default = "test-group"
+   description = "Name of the resource group."
+ }
+
+ variable "username" {
+   type = string
+   default = "microsoft"
+   description = "The username for the local account that will be created on the new VM."
+ }
+
+ variable "password" {
+   type = string
+   default = "Microsoft@123"
+   description = "The password for the local account that will be created on the new VM."
+ }
+
+ variable "virtual_network_name" {
+   type = string
+   default = "test-vnet"
+   description = "Name of the Virtual Network."
+ }
+
+ variable "subnet_name" {
+   type = string
+   default = "test-subnet"
+   description = "Name of the subnet."
+ }
+
+ variable "public_ip_name" {
+   type = string
+   default = "test-public-ip"
+   description = "Name of the Public IP for the NAT Gateway."
+ }
+
+ variable "nat_gateway" {
+   type = string
+   default = "test-nat"
+   description = "Name of the NAT gateway."
+ }
+
+ variable "bastion_name" {
+   type = string
+   default = "test-bastion"
+   description = "Name of the Bastion."
+ }
+
+ variable "network_security_group_name" {
+   type = string
+   default = "test-nsg"
+   description = "Name of the Network Security Group."
+ }
+
+ variable "network_interface_name" {
+   type = string
+   default = "test-nic"
+   description = "Name of the Network Interface."  
+ }
+
+ variable "virtual_machine_name" {
+   type = string
+   default = "test-vm"
+   description = "Name of the Virtual Machine."
+ }
+
+ variable "virtual_machine_size" {
+   type = string
+   default = "Standard_B2s"
+   description = "Size or SKU of the Virtual Machine."
+ }
+
+ variable "disk_name" {
+   type = string
+   default = "test-disk"
+   description = "Name of the OS disk of the Virtual Machine."
+ }
+
+ variable "redundancy_type" {
+   type = string
+   default = "Standard_LRS"
+   description = "Storage redundancy type of the OS disk."
+ }
+
+ variable "load_balancer_name" {
+   type = string
+   default = "test-lb"
+   description = "Name of the Load Balancer."
+ }
+ ```
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ ```
+ output "private_ip_address" {
+ value = "http://${azurerm_lb.my_lb.private_ip_address}"
+ }
+ ```
+
+## Initialize Terraform
++
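+To initialize Terraform, run the standard initialization command from the directory that contains your `.tf` files. A typical invocation follows; the `-upgrade` flag also upgrades provider plugins to the newest versions allowed by your configuration:
+
+ ```console
+ terraform init -upgrade
+ ```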
+## Create a Terraform execution plan
++
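+To preview the changes, create an execution plan and save it to a file so that the apply step uses exactly the plan you reviewed. A typical invocation follows; the plan file name is arbitrary:
+
+ ```console
+ terraform plan -out main.tfplan
+ ```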
+## Apply a Terraform execution plan
++
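+To deploy the resources, apply the saved plan. A typical invocation, assuming the plan file name used in the previous step:
+
+ ```console
+ terraform apply main.tfplan
+ ```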
+## Verify the results
+
+1. When you apply the execution plan, Terraform displays the frontend private IP address. If you've cleared the screen, you can retrieve that value with the following Terraform command:
+
+ ```console
+ echo $(terraform output -raw private_ip_address)
+ ```
+
+1. Use Bastion to sign in to the VM that isn't associated with the backend pool of the load balancer.
+
+1. Run the `curl` command to access the custom web page of the Nginx web server by using the frontend private IP address of the load balancer.
+
+ ```
+ curl http://<Frontend IP address>
+ ```
+
+## Clean up resources
++
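+When you no longer need the resources, a typical cleanup is to create and apply a destroy plan; the plan file name is arbitrary:
+
+ ```console
+ terraform plan -destroy -out main.destroy.tfplan
+ terraform apply main.destroy.tfplan
+ ```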
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+In this quickstart, you:
+
+- Created an internal Azure Load Balancer
+
+- Attached two VMs to the load balancer
+
+- Configured the load balancer traffic rule, health probe, and then tested the load balancer
+
+To learn more about Azure Load Balancer, continue to:
+> [!div class="nextstepaction"]
+> [What is Azure Load Balancer?](load-balancer-overview.md)
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
The subnet you use for deploying the load test can't be delegated to another Azu
Learn more about [adding or removing a subnet delegation](/azure/virtual-network/manage-subnet-delegation#remove-subnet-delegation-from-an-azure-service).
-### Starting the load test fails with `User doesn't have subnet/join/action permission on the virtual network (ALTVNET004)`
+### Updating or starting the load test fails with `User doesn't have subnet/join/action permission on the virtual network (ALTVNET004)`
-To start a load test, you must have sufficient permissions to deploy Azure Load Testing to the virtual network. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network.
+To update or start a load test, you must have sufficient permissions to deploy Azure Load Testing to the virtual network. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network.
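For example, an administrator of the virtual network can grant the role scoped to that virtual network with the Azure CLI. This is a sketch; the IDs and names are placeholders:

```azurecli
az role assignment create \
  --assignee "<user-or-service-principal-object-id>" \
  --role "Network Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
```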
1. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
logic-apps Biztalk Server Azure Integration Services Migration Approaches https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-azure-integration-services-migration-approaches.md
Last updated 01/04/2024
-# As a BizTalk Server customer, I want to learn about migration options, planning considerations, and best practices for moving from BizTalk Server to Azure Integration Services.
+# Customer intent: As a BizTalk Server customer, I want to learn about migration options, planning considerations, and best practices for moving from BizTalk Server to Azure Integration Services.
# Migration approaches for BizTalk Server to Azure Integration Services
logic-apps Biztalk Server To Azure Integration Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md
Last updated 01/04/2024
-# As a BizTalk Server customer, I want to better understand why I should migrate to Azure Integration Services in the cloud from on-premises BizTalk Server.
+# Customer intent: As a BizTalk Server customer, I want to better understand why I should migrate to Azure Integration Services in the cloud from on-premises BizTalk Server.
# Why migrate from BizTalk Server to Azure Integration Services?
logic-apps Create Maps Data Transformation Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-maps-data-transformation-visual-studio-code.md
ms.suite: integration
Last updated 11/15/2023
-# As a developer, I want to transform data in Azure Logic Apps by creating a map between schemas with Visual Studio Code.
+# Customer intent: As a developer, I want to transform data in Azure Logic Apps by creating a map between schemas with Visual Studio Code.
# Create maps to transform data in Azure Logic Apps with Visual Studio Code
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
ms.suite: integration
Last updated 01/04/2024
-# As a developer, I want learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows.
+# Customer intent: As a developer, I want to learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows.
# Custom connectors in Azure Logic Apps
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
Last updated 10/09/2023
-# As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
+# Customer intent: As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
# Deploy single-tenant Standard logic apps to private storage accounts using private endpoints
logic-apps Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md
ms.suite: integration
Last updated 01/04/2024-
-# As a developer, I want to learn about DevOps deployment support for single-tenant Azure Logic Apps.
+# Customer intent: As a developer, I want to learn about DevOps deployment support for single-tenant Azure Logic Apps.
# DevOps deployment for single-tenant Azure Logic Apps
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
For Azure Logic Apps to receive incoming communication through your firewall, yo
| Norway East | 51.120.88.93, 51.13.66.86, 51.120.89.182, 51.120.88.77, 20.100.27.17, 20.100.36.102 |
| Norway West | 51.120.220.160, 51.120.220.161, 51.120.220.162, 51.120.220.163, 51.13.155.184, 51.13.151.90 |
| Poland Central | 20.215.144.231, 20.215.145.0 |
+| Qatar Central | 20.21.211.241, 20.21.211.242 |
| South Africa North | 102.133.228.4, 102.133.224.125, 102.133.226.199, 102.133.228.9, 20.87.92.64, 20.87.91.171 |
| South Africa West | 102.133.72.190, 102.133.72.145, 102.133.72.184, 102.133.72.173, 40.117.9.225, 102.133.98.91 |
| South Central US | 13.65.98.39, 13.84.41.46, 13.84.43.45, 40.84.138.132, 20.94.151.41, 20.88.209.113 |
| South India | 52.172.9.47, 52.172.49.43, 52.172.51.140, 104.211.225.152, 104.211.221.215, 104.211.205.148 |
| Southeast Asia | 52.163.93.214, 52.187.65.81, 52.187.65.155, 104.215.181.6, 20.195.49.246, 20.198.130.155, 23.98.121.180 |
+| Sweden Central | 20.91.178.13, 20.240.10.125 |
| Switzerland North | 51.103.128.52, 51.103.132.236, 51.103.134.138, 51.103.136.209, 20.203.230.170, 20.203.227.226 |
| Switzerland West | 51.107.225.180, 51.107.225.167, 51.107.225.163, 51.107.239.66, 51.107.235.139, 51.107.227.18 |
| UAE Central | 20.45.75.193, 20.45.64.29, 20.45.64.87, 20.45.71.213, 40.126.212.77, 40.126.209.97 |
This section lists the outbound IP addresses that Azure Logic Apps requires in y
| Norway East | 51.120.88.52, 51.120.88.51, 51.13.65.206, 51.13.66.248, 51.13.65.90, 51.13.65.63, 51.13.68.140, 51.120.91.248, 20.100.26.148, 20.100.26.52, 20.100.36.49, 20.100.36.10 |
| Norway West | 51.120.220.128, 51.120.220.129, 51.120.220.130, 51.120.220.131, 51.120.220.132, 51.120.220.133, 51.120.220.134, 51.120.220.135, 51.13.153.172, 51.13.148.178, 51.13.148.11, 51.13.149.162 |
| Poland Central | 20.215.144.229, 20.215.128.160, 20.215.144.235, 20.215.144.246 |
+| Qatar Central | 20.21.211.240, 20.21.209.216, 20.21.211.245, 20.21.210.251 |
| South Africa North | 102.133.231.188, 102.133.231.117, 102.133.230.4, 102.133.227.103, 102.133.228.6, 102.133.230.82, 102.133.231.9, 102.133.231.51, 20.87.92.40, 20.87.91.122, 20.87.91.169, 20.87.88.47 |
| South Africa West | 102.133.72.98, 102.133.72.113, 102.133.75.169, 102.133.72.179, 102.133.72.37, 102.133.72.183, 102.133.72.132, 102.133.75.191, 102.133.101.220, 40.117.9.125, 40.117.10.230, 40.117.9.229 |
| South Central US | 104.210.144.48, 13.65.82.17, 13.66.52.232, 23.100.124.84, 70.37.54.122, 70.37.50.6, 23.100.127.172, 23.101.183.225, 20.94.150.220, 20.94.149.199, 20.88.209.97, 20.88.209.88 |
| South India | 52.172.50.24, 52.172.55.231, 52.172.52.0, 104.211.229.115, 104.211.230.129, 104.211.230.126, 104.211.231.39, 104.211.227.229, 104.211.211.221, 104.211.210.192, 104.211.213.78, 104.211.218.202 |
| Southeast Asia | 13.76.133.155, 52.163.228.93, 52.163.230.166, 13.76.4.194, 13.67.110.109, 13.67.91.135, 13.76.5.96, 13.67.107.128, 20.195.49.240, 20.195.49.29, 20.198.130.152, 20.198.128.124, 23.98.121.179, 23.98.121.115 |
+| Sweden Central | 20.91.178.11, 20.91.177.115, 20.240.10.91, 20.240.10.89 |
| Switzerland North | 51.103.137.79, 51.103.135.51, 51.103.139.122, 51.103.134.69, 51.103.138.96, 51.103.138.28, 51.103.136.37, 51.103.136.210, 20.203.230.58, 20.203.229.127, 20.203.224.37, 20.203.225.242 |
| Switzerland West | 51.107.239.66, 51.107.231.86, 51.107.239.112, 51.107.239.123, 51.107.225.190, 51.107.225.179, 51.107.225.186, 51.107.225.151, 51.107.239.83, 51.107.232.61, 51.107.234.254, 51.107.226.253, 20.199.193.249 |
| UAE Central | 20.45.75.200, 20.45.72.72, 20.45.75.236, 20.45.79.239, 20.45.67.170, 20.45.72.54, 20.45.67.134, 20.45.67.135, 40.126.210.93, 40.126.209.151, 40.126.208.156, 40.126.214.92 |
logic-apps Logic Apps Perform Data Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-perform-data-operations.md
ms.suite: integration
Last updated 12/13/2023
-# As a developer using Azure Logic Apps, I want to perform various data operations on various data types for my workflow in Azure Logic Apps.
+# Customer intent: As a developer using Azure Logic Apps, I want to perform various data operations on various data types for my workflow in Azure Logic Apps.
# Perform data operations in Azure Logic Apps
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md
ms.suite: integration
Last updated 01/10/2024
-# As a logic apps developer, I want to learn and understand how usage metering, billing, and pricing work in Azure Logic Apps.
+# Customer intent: As a logic apps developer, I want to learn and understand how usage metering, billing, and pricing work in Azure Logic Apps.
# Usage metering, billing, and pricing for Azure Logic Apps
logic-apps Monitor Workflows Collect Diagnostic Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-workflows-collect-diagnostic-data.md
Last updated 01/10/2024
-# As a developer, I want to collect and send diagnostics data for my logic app workflows to specific destinations, such as a Log Analytics workspace, storage account, or event hub, for further review.
+# Customer intent: As a developer, I want to collect and send diagnostics data for my logic app workflows to specific destinations, such as a Log Analytics workspace, storage account, or event hub, for further review.
# Monitor and collect diagnostic data for workflows in Azure Logic Apps
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
Last updated 01/10/2024-
-# As a developer, I want to connect to my Standard logic app workflows with virtual networks using private endpoints and virtual network integration.
+# Customer intent: As a developer, I want to connect to my Standard logic app workflows with virtual networks using private endpoints and virtual network integration.
# Secure traffic between Standard logic apps and Azure virtual networks using private endpoints
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
Last updated 01/04/2024
-# As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
+# Customer intent: As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
# Set up DevOps deployment for Standard logic app workflows in single-tenant Azure Logic Apps
logic-apps View Workflow Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/view-workflow-metrics.md
Last updated 01/10/2024
-# As a developer, I want to review the health and performance metrics for workflows in Azure Logic Apps.
+# Customer intent: As a developer, I want to review the health and performance metrics for workflows in Azure Logic Apps.
# View metrics for workflow health and performance in Azure Logic Apps
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
Previously updated : 05/10/2022 Last updated : 01/31/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
Besides being the tool to put MLOps into practice, the machine learning pipeline
## Getting started best practices
-Depending on what a machine learning project already has, the starting point of building a machine learning pipeline may vary. There are a few typical approaches to building a pipeline.
+Depending on what a machine learning project already has, the starting point of building a machine learning pipeline might vary. There are a few typical approaches to building a pipeline.
The first approach usually applies to a team that hasn't used pipelines before and wants to take advantage of pipeline benefits like MLOps. In this situation, data scientists typically have developed some machine learning models in their local environment using their favorite tools. Machine learning engineers need to take the data scientists' output into production. The work involves cleaning up unnecessary code from the original notebook or Python code, changing the training input from local data to parameterized values, splitting the training code into multiple steps as needed, unit testing each step, and finally wrapping all steps into a pipeline.
-Once the teams get familiar with pipelines and want to do more machine learning projects using pipelines, they'll find the first approach is hard to scale. The second approach is set up a few pipeline templates, each try to solve one specific machine learning problem. The template predefines the pipeline structure including how many steps, each step's inputs and outputs, and their connectivity. To start a new machine learning project, the team first forks one template repo. The team leader then assigns members which step they need to work on. The data scientists and data engineers do their regular work. When they're happy with their result, they structure their code to fit in the pre-defined steps. Once the structured codes are checked-in, the pipeline can be executed or automated. If there's any change, each member only needs to work on their piece of code without touching the rest of the pipeline code.
+Once the teams get familiar with pipelines and want to do more machine learning projects using pipelines, they'll find the first approach is hard to scale. The second approach is to set up a few pipeline templates, each trying to solve one specific machine learning problem. The template predefines the pipeline structure, including how many steps there are, each step's inputs and outputs, and their connectivity. To start a new machine learning project, the team first forks one template repo. The team leader then assigns members the steps they need to work on. The data scientists and data engineers do their regular work. When they're happy with their result, they structure their code to fit in the predefined steps. Once the structured code is checked in, the pipeline can be executed or automated. If there's any change, each member only needs to work on their piece of code without touching the rest of the pipeline code.
Once a team has built a collection of machine learning pipelines and reusable components, they can start to build new machine learning pipelines by cloning a previous pipeline or tying existing reusable components together. At this stage, the team's overall productivity improves significantly.
:::moniker range="azureml-api-2"
-Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with python, we recommend writing pipeline using the [Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md). For users who prefer to use UI, they could use the [designer to build pipeline by using registered components](how-to-create-component-pipelines-ui.md).
+Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with python, we recommend writing pipelines using the [Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md). For users who prefer to use the UI, they could use the [designer to build pipelines by using registered components](how-to-create-component-pipelines-ui.md).
:::moniker-end
machine-learning Concept Responsible Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai.md
Previously updated : 11/09/2022 Last updated : 01/31/2024 #Customer intent: As a data scientist, I want to learn what Responsible AI is and how I can use it in Azure Machine Learning.
Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, Responsible AI can help proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
-Microsoft has developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf). It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
+Microsoft developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf). It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
This article demonstrates how Azure Machine Learning supports tools for enabling developers and data scientists to implement and operationalize the six principles.
When AI systems help inform decisions that have tremendous impacts on people's l
A crucial part of transparency is *interpretability*: the useful explanation of the behavior of AI systems and their components. Improving interpretability requires stakeholders to comprehend how and why AI systems function the way they do. The stakeholders can then identify potential performance issues, fairness issues, exclusionary practices, or unintended outcomes.
-**Transparency in Azure Machine Learning**: The [model interpretability](how-to-machine-learning-interpretability.md) and [counterfactual what-if](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.
+**Transparency in Azure Machine Learning**: The [model interpretability](how-to-machine-learning-interpretability.md) and [counterfactual what-if](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.
-The model interpretability component provides multiple views into a model's behavior:
+The model interpretability component provides multiple views into a model's behavior:
- *Global explanations*. For example, what features affect the overall behavior of a loan allocation model?
- *Local explanations*. For example, why was a customer's loan application approved or rejected?
As AI becomes more prevalent, protecting privacy and securing personal and busin
- Scan for vulnerabilities.
- Apply and audit configuration policies.
-Microsoft has also created two open-source packages that can enable further implementation of privacy and security principles:
+Microsoft also created two open-source packages that can enable further implementation of privacy and security principles:
- [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core): Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy might be required for regulatory compliance. SmartNoise is an open-source project (co-developed by Microsoft) that contains components for building differentially private systems that are global.
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
To successfully invoke a batch endpoint you need the following explicit actions
"Microsoft.MachineLearningServices/workspaces/datastores/listsecrets/action", "Microsoft.MachineLearningServices/workspaces/listStorageAccountKeys/action", "Microsoft.MachineLearningServices/workspaces/batchEndpoints/read",
+ "Microsoft.MachineLearningServices/workspaces/batchEndpoints/write",
"Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/read",
+ "Microsoft.MachineLearningServices/workspaces/batchEndpoints/write",
+ "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/write",
+ "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/jobs/write",
+ "Microsoft.MachineLearningServices/workspaces/batchEndpoints/jobs/write",
"Microsoft.MachineLearningServices/workspaces/computes/read", "Microsoft.MachineLearningServices/workspaces/computes/listKeys/action", "Microsoft.MachineLearningServices/workspaces/metadata/secrets/read",
To successfully invoke a batch endpoint you need the following explicit actions
"Microsoft.MachineLearningServices/workspaces/endpoints/pipelines/write", "Microsoft.MachineLearningServices/workspaces/environments/read", "Microsoft.MachineLearningServices/workspaces/environments/write",
- "Microsoft.MachineLearningServices/workspaces/environments/build/action"
+ "Microsoft.MachineLearningServices/workspaces/environments/build/action",
"Microsoft.MachineLearningServices/workspaces/environments/readSecrets/action" ] ```
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
Previously updated : 05/26/2022 Last updated : 01/31/2024 - devplatv2
ms.devlang: azurecli
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] -
-In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure CLI and components (for more, see [What is an Azure Machine Learning component?](concept-component.md)). You can create pipelines without using components, but components offer the greatest amount of flexibility and reuse. Azure Machine Learning Pipelines may be defined in YAML and run from the CLI, authored in Python, or composed in Azure Machine Learning studio Designer with a drag-and-drop UI. This document focuses on the CLI.
+In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using Azure CLI and [components](concept-component.md). You can create pipelines without using components, but components offer the greatest amount of flexibility and reuse. Azure Machine Learning Pipelines can be defined in YAML and run from the CLI, authored in Python, or composed in Azure Machine Learning studio Designer with a drag-and-drop UI. This document focuses on the CLI.
## Prerequisites
In this article, you learn how to create and run [machine learning pipelines](co
## Create your first pipeline with component
-Let's create your first pipeline with component using an example. This section aims to give you an initial impression of what pipeline and component look like in Azure Machine Learning with a concrete example.
+Let's create your first pipeline with components using an example. This section aims to give you an initial impression of what a pipeline and component look like in Azure Machine Learning with a concrete example.
From the `cli/jobs/pipelines-with-components/basics` directory of the [`azureml-examples` repository](https://github.com/Azure/azureml-examples), navigate to the `3b_pipeline_with_data` subdirectory. There are three types of files in this directory. Those are the files you need to create when building your own pipeline.
-- **pipeline.yml**: This YAML file defines the machine learning pipeline. This YAML file describes how to break a full machine learning task into a multistep workflow. For example, considering a simple machine learning task of using historical data to train a sales forecasting model, you may want to build a sequential workflow with data processing, model training, and model evaluation steps. Each step is a component that has well defined interface and can be developed, tested, and optimized independently. The pipeline YAML also defines how the child steps connect to other steps in the pipeline, for example the model training step generate a model file and the model file will pass to a model evaluation step.
+- **pipeline.yml**: This YAML file defines the machine learning pipeline. This YAML file describes how to break a full machine learning task into a multistep workflow. For example, considering a simple machine learning task of using historical data to train a sales forecasting model, you might want to build a sequential workflow with data processing, model training, and model evaluation steps. Each step is a component that has a well-defined interface and can be developed, tested, and optimized independently. The pipeline YAML also defines how the child steps connect to other steps in the pipeline; for example, the model training step generates a model file that is passed to a model evaluation step.
-- **component.yml**: This YAML file defines the component. It packages following information:
+- **component.yml**: This YAML file defines the component. It packages the following information:
- Metadata: name, display name, version, description, type, etc. The metadata helps to describe and manage the component.
- Interface: inputs and outputs. For example, a model training component takes training data and the number of epochs as input, and generates a trained model file as output. Once the interface is defined, different teams can develop and test the component independently.
- - Command, code & environment: the command, code and environment to run the component. Command is the shell command to execute the component. Code usually refers to a source code directory. Environment could be an Azure Machine Learning environment(curated or customer created), docker image or conda environment.
+ - Command, code & environment: the command, code, and environment to run the component. Command is the shell command to execute the component. Code usually refers to a source code directory. Environment could be an Azure Machine Learning environment (curated or customer created), a Docker image, or a conda environment. A minimal component YAML sketch follows this list.
+
+- **component_src**: This is the source code directory for a specific component. It contains the source code that is executed in the component. You can use your preferred language (Python, R, and so on). The code must be executed by a shell command. The source code can take a few inputs from the shell command line to control how this step is executed. For example, a training step might take training data, a learning rate, and the number of epochs to control the training process. The arguments of the shell command are used to pass inputs and outputs to the code.
-- **component_src**: This is the source code directory for a specific component. It contains the source code that is executed in the component. You can use your preferred language(Python, R...). The code must be executed by a shell command. The source code can take a few inputs from shell command line to control how this step is going to be executed. For example, a training step may take training data, learning rate, number of epochs to control the training process. The argument of a shell command is used to pass inputs and outputs to the code.
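As a point of reference, here's a minimal sketch of what a command component YAML might look like. The names, version, and environment reference are illustrative only and aren't taken from the example repo.

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
type: command
name: my_data_prep                       # metadata
display_name: Data prep
version: 1
inputs:                                  # interface: inputs
  raw_data:
    type: uri_folder
outputs:                                 # interface: outputs
  prepped_data:
    type: uri_folder
code: ./data_prep_src                    # source code directory
environment: azureml:my-training-env:1   # registered environment (illustrative name)
# shell command that runs the code; bindings are resolved to paths at run time
command: >-
  python prep.py
  --raw_data ${{inputs.raw_data}}
  --prepped_data ${{outputs.prepped_data}}
```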
+ Now let's create a pipeline using the `3b_pipeline_with_data` example. We explain the detailed meaning of each file in the following sections.
- Now let's create a pipeline using the `3b_pipeline_with_data` example. We explain the detailed meaning of each file in following sections.
-
- First list your available compute resources with the following command:
+ First, list your available compute resources with the following command:
```azurecli
az ml compute list
```

If you don't have a compute cluster named `cpu-cluster`, create one by running:

```azurecli
az ml compute create -n cpu-cluster --type amlcompute --min-instances 0 --max-instances 10
```
-Now, create a pipeline job defined in the pipeline.yml file with the following command. The compute target is referenced in the pipeline.yml file as `azureml:cpu-cluster`. If your compute target uses a different name, remember to update it in the pipeline.yml file.
+Now, create a pipeline job defined in the pipeline.yml file with the following command. The compute target is referenced in the pipeline.yml file as `azureml:cpu-cluster`. If your compute target uses a different name, remember to update it in the pipeline.yml file.
```azurecli
az ml job create --file pipeline.yml
```
-You should receive a JSON dictionary with information about the pipeline job, including:
+You should receive a JSON dictionary with information about the pipeline job, including:
| Key | Description |
|-|--|
| `name` | The GUID-based name of the job. |
-| `experiment_name` | The name under which jobs will be organized in Studio. |
+| `experiment_name` | The name under which jobs will be organized in studio. |
| `services.Studio.endpoint` | A URL for monitoring and reviewing the pipeline job. |
| `status` | The status of the job. This will likely be `Preparing` at this point. |
-Open the `services.Studio.endpoint` URL you see a graph visualization of the pipeline looks like below.
+Open the `services.Studio.endpoint` URL to see a graph visualization of the pipeline.
:::image type="content" source="./media/how-to-create-component-pipelines-cli/pipeline-graph-dependencies.png" alt-text="Screenshot of a graph visualization of the pipeline.":::
Open the `services.Studio.endpoint` URL you see a graph visualization of the pip
Let's take a look at the pipeline definition in the *3b_pipeline_with_data/pipeline.yml* file.

> [!NOTE]
> To use [serverless compute](how-to-use-serverless-compute.md), replace `default_compute: azureml:cpu-cluster` with `default_compute: azureml:serverless` in this file.

:::code language="yaml" source="~/azureml-examples-main/cli/jobs/pipelines-with-components/basics/3b_pipeline_with_data/pipeline.yml":::
-Below table describes the most common used fields of pipeline YAML schema. See [full pipeline YAML schema here](reference-yaml-job-pipeline.md).
+The following table describes the most commonly used fields of the pipeline YAML schema. To learn more, see the [full pipeline YAML schema](reference-yaml-job-pipeline.md).
|key|description|
|||
-|type|**Required**. Job type, must be `pipeline` for pipeline jobs.|
-|display_name|Display name of the pipeline job in Studio UI. Editable in Studio UI. Doesn't have to be unique across all jobs in the workspace.|
+|type|**Required**. Job type must be `pipeline` for pipeline jobs.|
+|display_name|Display name of the pipeline job in studio UI. Editable in studio UI. Doesn't have to be unique across all jobs in the workspace.|
|jobs|**Required**. Dictionary of the set of individual jobs to run as steps within the pipeline. These jobs are considered child jobs of the parent pipeline job. In this release, supported job types in pipeline are `command` and `sweep`.|
|inputs|Dictionary of inputs to the pipeline job. The key is a name for the input within the context of the job and the value is the input value. These pipeline inputs can be referenced by the inputs of an individual step job in the pipeline using the `${{ parent.inputs.<input_name> }}` expression.|
|outputs|Dictionary of output configurations of the pipeline job. The key is a name for the output within the context of the job and the value is the output configuration. These pipeline outputs can be referenced by the outputs of an individual step job in the pipeline using the `${{ parent.outputs.<output_name> }}` expression.|
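To make the binding expressions concrete, here's an illustrative pipeline YAML fragment; the step, component, and input/output names are made up and aren't taken from the example repo.

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
inputs:
  pipeline_input_data:
    type: uri_folder
    path: ./data
outputs:
  pipeline_trained_model:
    type: uri_folder
jobs:
  train_step:
    type: command
    component: ./train.yml
    inputs:
      training_data: ${{parent.inputs.pipeline_input_data}}      # bind to a pipeline-level input
    outputs:
      model_output: ${{parent.outputs.pipeline_trained_model}}   # bind to a pipeline-level output
```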
In the *3b_pipeline_with_data* example, we've created a three-step pipeline.
- This pipeline has a data dependency, which is common in most real-world pipelines. Component_a takes data input from a local folder under `./data` (lines 17-20) and passes its output to componentB (line 29). Component_a's output can be referenced as `${{parent.jobs.component_a.outputs.component_a_output}}`.
- The `compute` key defines the default compute for this pipeline. If a component under `jobs` defines a different compute, the system respects the component-specific setting.

### Read and write data in pipeline
-One common scenario is to read and write data in your pipeline. In Azure Machine Learning, we use the same schema to [read and write data](how-to-read-write-data-v2.md) for all type of jobs (pipeline job, command job, and sweep job). Below are pipeline job examples of using data for common scenarios.
+One common scenario is to read and write data in your pipeline. In Azure Machine Learning, we use the same schema to [read and write data](how-to-read-write-data-v2.md) for all types of jobs (pipeline job, command job, and sweep job). The following pipeline job examples show how to use data in common scenarios; a minimal sketch of such inputs follows the list.
- [local data](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4a_local_data_input)
- [web file with public URL](https://github.com/Azure/azureml-examples/blob/sdk-preview/cli/jobs/pipelines-with-components/basics/4c_web_url_input/pipeline.yml)
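As a quick orientation, a pipeline job's data inputs might look like the following sketch; the input names, paths, and URL are placeholders rather than values from the linked examples.

```yaml
inputs:
  local_training_data:
    type: uri_folder
    path: ./data                                   # local folder, uploaded when the job is submitted
  public_input_file:
    type: uri_file
    path: https://example.com/data/sample.csv      # web file with a public URL (placeholder URL)
```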
Now let's look at the *componentA.yml* as an example to understand component def
:::code language="yaml" source="~/azureml-examples-main/cli/jobs/pipelines-with-components/basics/3b_pipeline_with_data/componentA.yml":::
-The most common used schema of the component YAML is described in below table. See [full component YAML schema here](reference-yaml-component-command.md).
+The most commonly used fields of the component YAML schema are described in the following table. To learn more, see the [full component YAML schema](reference-yaml-component-command.md).
|key|description|
|||
The most common used schema of the component YAML is described in below table. S
|outputs|Dictionary of component outputs. The key is a name for the output within the context of the component and the value is the component output definition. Outputs can be referenced in the command using the `${{ outputs.<output_name> }}` expression.|
|is_deterministic|Whether to reuse the previous job's result if the component inputs didn't change. Default value is `true`, also known as reuse by default. The common scenario when set as `false` is to force reload data from a cloud storage or URL.|
-For the example in *3b_pipeline_with_data/componentA.yml*, componentA has one data input and one data output, which can be connected to other steps in the parent pipeline. All the files under `code` section in component YAML will be uploaded to Azure Machine Learning when submitting the pipeline job. In this example, files under `./componentA_src` will be uploaded (line 16 in *componentA.yml*). You can see the uploaded source code in Studio UI: double select the ComponentA step and navigate to Snapshot tab, as shown in below screenshot. We can see it's a hello-world script just doing some simple printing, and write current datetime to the `componentA_output` path. The component takes input and output through command line argument, and it's handled in the *hello.py* using `argparse`.
+For the example in *3b_pipeline_with_data/componentA.yml*, componentA has one data input and one data output, which can be connected to other steps in the parent pipeline. All the files under the `code` section in the component YAML are uploaded to Azure Machine Learning when you submit the pipeline job. In this example, files under `./componentA_src` are uploaded (line 16 in *componentA.yml*). You can see the uploaded source code in the studio UI: double-select the ComponentA step and navigate to the **Snapshot** tab, as shown in the following screenshot. We can see it's a hello-world script that does some simple printing and writes the current datetime to the `componentA_output` path. The component takes inputs and outputs through command-line arguments, which are handled in *hello.py* using `argparse`.
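For illustration, the wiring between a component's interface and its script typically happens in the `command` field, along the lines of the following sketch; the argument and file names here are illustrative rather than copied verbatim from *componentA.yml*.

```yaml
# ${{inputs...}} and ${{outputs...}} are resolved to concrete paths at run time
command: >-
  python hello.py
  --componentA_input ${{inputs.componentA_input}}
  --componentA_output ${{outputs.componentA_output}}
```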
### Input and output

Input and output define the interface of a component. Input and output could be either a literal value (of type `string`, `number`, `integer`, or `boolean`) or an object containing an input schema. **Object inputs** (of type `uri_file`, `uri_folder`, `mltable`, `mlflow_model`, `custom_model`) can connect to other steps in the parent pipeline job and hence pass data/models to other steps. In the pipeline graph, an object type input renders as a connection dot.
-**Literal value inputs** (`string`,`number`,`integer`,`boolean`) are the parameters you can pass to the component at run time. You can add default value of literal inputs under `default` field. For `number` and `integer` type, you can also add minimum and maximum value of the accepted value using `min` and `max` fields. If the input value exceeds the min and max, pipeline fails at validation. Validation happens before you submit a pipeline job to save your time. Validation works for CLI, Python SDK and designer UI. Below screenshot shows a validation example in designer UI. Similarly, you can define allowed values in `enum` field.
+**Literal value inputs** (`string`, `number`, `integer`, `boolean`) are the parameters you can pass to the component at run time. You can add a default value for literal inputs under the `default` field. For `number` and `integer` types, you can also add minimum and maximum accepted values using the `min` and `max` fields. If the input value exceeds the min or max, the pipeline fails at validation. Validation happens before you submit a pipeline job, to save your time. Validation works for the CLI, the Python SDK, and the designer UI. The following screenshot shows a validation example in the designer UI. Similarly, you can define allowed values in the `enum` field.
:::image type="content" source="./media/how-to-create-component-pipelines-cli/component-input-output.png" alt-text="Screenshot of the input and output of the train linear regression model component." lightbox= "./media/how-to-create-component-pipelines-cli/component-input-output.png":::
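For example, literal inputs with a default value, validation bounds, and an allowed-values list might be declared like the following sketch in a component YAML; the input names and values are illustrative only.

```yaml
inputs:
  learning_rate:
    type: number
    default: 0.01
    min: 0.0001        # values outside [min, max] fail validation before submission
    max: 1.0
  optimizer:
    type: string
    default: adam
    enum: [adam, sgd]  # only these values are accepted
```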
-If you want to add an input to a component, remember to edit three places: 1)`inputs` field in component YAML 2) `command` field in component YAML. 3) component source code to handle the command line input. It's marked in green box in above screenshot.
+If you want to add an input to a component, remember to edit three places:
+
+- The `inputs` field in the component YAML
+- The `command` field in the component YAML
+- The component source code, to handle the new command-line input. It's marked with a green box in the previous screenshot.
To learn more about inputs and outputs, see [Manage inputs and outputs of component and pipeline](./how-to-manage-inputs-outputs-pipeline.md).

### Environment
-Environment defines the environment to execute the component. It could be an Azure Machine Learning environment(curated or custom registered), docker image or conda environment. See examples below.
+Environment defines the environment in which the component executes. It could be an Azure Machine Learning environment (curated or custom registered), a Docker image, or a conda environment. See the following examples; a minimal sketch of each form follows them.
- [Azure Machine Learning registered environment asset](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5b_env_registered). It's referenced in the component using the `azureml:<environment-name>:<environment-version>` syntax.
- [public docker image](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5a_env_public_docker_image)
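The forms map to the `environment` field of the component YAML roughly as follows. These are alternative settings shown as three separate YAML documents (pick one), and the environment name, image, and file path are illustrative rather than taken from the linked examples.

```yaml
# 1. Reference a registered (curated or custom) Azure Machine Learning environment
environment: azureml:my-training-env:1
---
# 2. Use a public Docker image directly
environment:
  image: python:3.10-slim
---
# 3. Combine a Docker base image with a conda specification file
environment:
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
  conda_file: ./env/conda.yaml
```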
Under **Details** tab, you see basic information of the component like name, cre
Under the **Jobs** tab, you see the history of all jobs that use this component.

### Use registered components in a pipeline job YAML file

Let's use `1b_e2e_registered_components` to demo how to use a registered component in pipeline YAML. Navigate to the `1b_e2e_registered_components` directory and open the `pipeline.yml` file. The keys and values in the `inputs` and `outputs` fields are similar to those already discussed. The only significant difference is the value of the `component` field in the `jobs.<JOB_NAME>.component` entries. The `component` value is of the form `azureml:<COMPONENT_NAME>:<COMPONENT_VERSION>`. The `train-job` definition, for instance, specifies that the latest version of the registered component `my_train` should be used:
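A job entry that references a registered component might look like the following sketch; the exact entry in the example repo may differ, and the version pin in the comment is illustrative.

```yaml
jobs:
  train-job:
    type: command
    component: azureml:my_train@latest   # or pin a specific version, for example azureml:my_train:1
```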
Let's use `1b_e2e_registered_components` to demo how to use registered component
### Manage components
-You can check component details and manage the component using CLI (v2). Use `az ml component -h` to get detailed instructions on component command. Below table lists all available commands. See more examples in [Azure CLI reference](/cli/azure/ml/component?view=azure-cli-latest&preserve-view=true)
+You can check component details and manage components by using the CLI (v2). Use `az ml component -h` to get detailed instructions on component commands. The following table lists all available commands. See more examples in the [Azure CLI reference](/cli/azure/ml/component?view=azure-cli-latest&preserve-view=true).
|commands|description|
|||
machine-learning How To Create Component Pipelines Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md
Previously updated : 03/27/2022 Last updated : 01/31/2024
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-In this article, you'll learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure Machine Learning studio and [Components](concept-component.md). You can create pipelines without using components, but components offer better amount of flexibility and reuse. Azure Machine Learning Pipelines may be defined in YAML and [run from the CLI](how-to-create-component-pipelines-cli.md), [authored in Python](how-to-create-component-pipeline-python.md), or composed in Azure Machine Learning studio Designer with a drag-and-drop UI. This document focuses on the Azure Machine Learning studio designer UI.
+In this article, you'll learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure Machine Learning studio and [components](concept-component.md). You can create pipelines without using components, but components offer greater flexibility and reuse. Azure Machine Learning pipelines can be defined in YAML and [run from the CLI](how-to-create-component-pipelines-cli.md), [authored in Python](how-to-create-component-pipeline-python.md), or composed in Azure Machine Learning studio Designer with a drag-and-drop UI. This document focuses on the Azure Machine Learning studio designer UI.
## Prerequisites
-* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* An Azure Machine Learning workspace[Create workspace resources](quickstart-create-resources.md).
+- An Azure Machine Learning workspace. To create one, see [Create workspace resources](quickstart-create-resources.md).
-* [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md).
+- [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md).
-* Clone the examples repository:
+- Clone the examples repository:
```azurecli-interactive
git clone https://github.com/Azure/azureml-examples --depth 1
In this article, you'll learn how to create and run [machine learning pipelines]
```

>[!Note]
-> Designer supports two types of components, classic prebuilt components(v1) and custom components(v2). These two types of components are NOT compatible.
+> Designer supports two types of components: classic prebuilt components (v1) and custom components (v2). These two types of components are NOT compatible.
> >Classic prebuilt components are provided mainly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but won't have any new components added. >
->Custom components allow you to wrap your own code as a component. It supports sharing components across workspaces and seamless authoring across Studio, CLI v2, and SDK v2 interfaces.
+>Custom components allow you to wrap your own code as a component. They support sharing components across workspaces and seamless authoring across studio, CLI v2, and SDK v2 interfaces.
>
->For new projects, we highly suggest you use custom component, which is compatible with AzureML V2 and will keep receiving new updates.
+>For new projects, we highly suggest you use custom components, which are compatible with Azure Machine Learning v2 and will keep receiving new updates.
> >This article applies to custom components.
In this article, you'll learn how to create and run [machine learning pipelines]
To build a pipeline using components in the UI, you need to register the components to your workspace first. You can use the UI, CLI, or SDK to register components to your workspace, so that you can share and reuse them within the workspace. Registered components support automatic versioning, so you can update a component while assuring that pipelines requiring an older version continue to work.
-The example below uses UI to register components, and the [component source files](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components) are in the `cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components` directory of the [`azureml-examples` repository](https://github.com/Azure/azureml-examples). You need to clone the repo to local first.
+The following example uses the UI to register components, and the [component source files](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components) are in the `cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components` directory of the [`azureml-examples` repository](https://github.com/Azure/azureml-examples). You need to clone the repo locally first.
-1. In your Azure Machine Learning workspace, navigate to **Components** page and select **New Component**.
+1. In your Azure Machine Learning workspace, navigate to the **Components** page and select **New Component** (one of the two style pages will appear).
+ This example uses `train.yml` [in the directory](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components). The YAML file defines the name, type, interface (including inputs and outputs), code, environment, and command of this component. The code of this component, `train.py`, is under the `./train_src` folder and describes the execution logic of this component. To learn more about the component schema, see the [command component YAML schema reference](reference-yaml-component-command.md).
This example uses `train.yml` [in the directory](https://github.com/Azure/azurem
> When registering components in the UI, `code` defined in the component YAML file can only point to the current folder where the YAML file is located, or to its subfolders. This means you can't specify `../` for `code`, because the UI can't recognize the parent directory. > `additional_includes` can only point to the current folder or a subfolder.
-2. Select Upload from **Folder**, and select the `1b_e2e_registered_components` folder to upload. Select `train.yml` from the drop down list below.
+2. Select Upload from **Folder**, and select the `1b_e2e_registered_components` folder to upload. Select `train.yml` from the drop-down list.
:::image type="content" source="./media/how-to-create-component-pipelines-ui/upload-from-local-folder.png" alt-text="Screenshot showing upload from local folder." lightbox ="./media/how-to-create-component-pipelines-ui/upload-from-local-folder.png"::: 3. Select **Next** in the bottom, and you can confirm the details of this component. Once you've confirmed, select **Create** to finish the registration process.
-4. Repeat the steps above to register Score and Eval component using `score.yml` and `eval.yml` as well.
+4. Repeat the previous steps to register the Score and Eval components using `score.yml` and `eval.yml` as well.
5. After registering the three components successfully, you can see your components in the studio UI.
This example uses `train.yml` [in the directory](https://github.com/Azure/azurem
:::image type="content" source="./media/how-to-create-component-pipelines-ui/new-pipeline.png" alt-text="Screenshot showing creating new pipeline in designer homepage." lightbox ="./media/how-to-create-component-pipelines-ui/new-pipeline.png":::
-2. Give the pipeline a meaningful name by selecting the pencil icon besides the autogenerated name.
+2. Give the pipeline a meaningful name by selecting the pencil icon beside the autogenerated name.
:::image type="content" source="./media/how-to-create-component-pipelines-ui/rename-pipeline.png" alt-text="Screenshot showing rename the pipeline." lightbox ="./media/how-to-create-component-pipelines-ui/rename-pipeline.png":::
This example uses `train.yml` [in the directory](https://github.com/Azure/azurem
:::image type="content" source="./media/how-to-create-component-pipelines-ui/asset-library.png" alt-text="Screenshot showing registered component in asset library." lightbox ="./media/how-to-create-component-pipelines-ui/asset-library.png"::: Find the *train*, *score* and *eval* components registered in previous section then drag-and-drop them on the canvas. By default it uses the default version of the component, and you can change to a specific version in the right pane of component. The component right pane is invoked by double click on the component.
-
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/change-component-version.png" alt-text="Screenshot showing changing version of component." lightbox ="./media/how-to-create-component-pipelines-ui/change-component-version.png":::
-
- In this example, we'll use the sample data under [this path](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components/data). Register the data into your workspace by clicking the add icon in designer asset library -> data tab, set Type = Folder(uri_folder) then follow the wizard to register the data. The data type need to be uri_folder to align with the [train component definition](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components/train.yml).
+
+ In this example, we'll use the sample data under [this path](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components/data). Register the data into your workspace by selecting the add icon in the designer asset library -> **Data** tab, set **Type** to **Folder (uri_folder)**, and then follow the wizard to register the data. The data type needs to be `uri_folder` to align with the [train component definition](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components/train.yml).
:::image type="content" source="./media/how-to-create-component-pipelines-ui/add-data.png" alt-text="Screenshot showing add data." lightbox ="./media/how-to-create-component-pipelines-ui/add-data.png"::: Then drag and drop the data into the canvas. Your pipeline look should look like the following screenshot now.
-
- :::image type="content" source="./media/how-to-create-component-pipelines-ui/pipeline-with-all-boxes.png" alt-text="Screenshot showing the pipeline draft." lightbox ="./media/how-to-create-component-pipelines-ui/pipeline-with-all-boxes.png":::
+ :::image type="content" source="./media/how-to-create-component-pipelines-ui/pipeline-with-all-boxes.png" alt-text="Screenshot showing the pipeline draft." lightbox ="./media/how-to-create-component-pipelines-ui/pipeline-with-all-boxes.png":::
-
4. Connect the data and components by dragging connections in the canvas.

   :::image type="content" source="./media/how-to-create-component-pipelines-ui/connect.gif" alt-text="Gif showing connecting the pipeline." lightbox ="./media/how-to-create-component-pipelines-ui/connect.gif":::
This example uses `train.yml` [in the directory](https://github.com/Azure/azurem
:::image type="content" source="./media/how-to-create-component-pipelines-ui/promote-pipeline-input.png" alt-text="Screenshot showing how to promote component input to pipeline input." lightbox ="./media/how-to-create-component-pipelines-ui/promote-pipeline-input.png"::: -- > [!NOTE] > Custom components and the designer classic prebuilt components cannot be used together.
This example uses `train.yml` [in the directory](https://github.com/Azure/azurem
:::image type="content" source="./media/how-to-create-component-pipelines-ui/configure-submit.png" alt-text="Screenshot showing configure and submit button." border="false"::: - 1. Then you'll see a step-by-step wizard, follow the wizard to submit the pipeline job. :::image type="content" source="./media/how-to-create-component-pipelines-ui/submission-wizard.png" alt-text="Screenshot showing submission wizard." lightbox ="./media/how-to-create-component-pipelines-ui/submission-wizard.png":::
In **Runtime settings**, you can configure the default datastore and default com
The **Review + Submit** step is the last step, where you review all configurations before submission. If you've submitted the pipeline before, the wizard remembers your previous configuration.
-After submitting the pipeline job, there will be a message on the top with a link to the job detail. You can click this link to review the job details.
+After submitting the pipeline job, a message appears at the top with a link to the job details. You can select this link to review the job details.
:::image type="content" source="./media/how-to-create-component-pipelines-ui/submit-message.png" alt-text="Screenshot showing submission message." lightbox ="./media/how-to-create-component-pipelines-ui/submit-message.png"::: -- ## Next steps - Use [these Jupyter notebooks on GitHub](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components) to explore machine learning pipelines further
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
Previously updated : 08/16/2023 Last updated : 02/01/2024 monikerRange: 'azureml-api-1 || azureml-api-2'
-#customer intent: As a project manager, I want to set up a project to label images in the project. I want to enable machine learning-assisted labeling to help with the task.
+# Customer intent: As a project manager, I want to set up a project to label images in the project. I want to enable machine learning-assisted labeling to help with the task.
-# Set up an image labeling project and export labels
+# Set up an image labeling project
Learn how to create and run data labeling projects to label images in Azure Machine Learning. Use machine learning (ML)-assisted data labeling or human-in-the-loop labeling to help with the task.
-Set up labels for classification, object detection (bounding box), instance segmentation (polygon), or semantic segmentation (Preview).
+Set up labels for classification, object detection (bounding box), instance segmentation (polygon), or semantic segmentation (preview).
You can also use the data labeling tool in Azure Machine Learning to [create a text labeling project](how-to-create-text-labeling-projects.md).
Image data can be any file that has one of these file extensions:
Each file is an item to be labeled.
+You can also use an MLTable data asset as input to an image labeling project, as long as the images in the table are in one of the above formats. For more information, see [How to use MLTable data assets](./how-to-mltable.md).
+
## Prerequisites

You use these items to set up image labeling in Azure Machine Learning:
You use these items to set up image labeling in Azure Machine Learning:
## Specify the data to label
-If you already created a dataset that contains your data, select the dataset in the **Select an existing dataset** dropdown. You can also select **Create a dataset** to use an existing Azure datastore or to upload local files.
+If you already created a dataset that contains your data, select the dataset in the **Select an existing dataset** dropdown.
+
+You can also select **Create a dataset** to use an existing Azure datastore or to upload local files.
> [!NOTE] > A project can't contain more than 500,000 files. If your dataset exceeds this file count, only the first 500,000 files are loaded.
+### Data column mapping (preview)
+
+If you select an MLTable data asset, an additional **Data Column Mapping** step appears for you to specify the column that contains the image URLs.
++
+### Import options (preview)
+
+ When you include a **Category** column in the **Data Column Mapping** step, use **Import Options** to specify how to treat the labeled data.
++
### Create a dataset from an Azure datastore

In many cases, you can upload local files. However, [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) provides a faster and more robust way to transfer a large amount of data. We recommend Storage Explorer as the default way to move files.
After a machine learning model is trained on your manually labeled data, the mod
[!INCLUDE [initialize](includes/machine-learning-data-labeling-initialize.md)]
-## Run and monitor the project
--
-### Dashboard
-
-The **Dashboard** tab shows the progress of the labeling task.
--
-The progress charts show how many items have been labeled, skipped, need review, or aren't yet complete. Hover over the chart to see the number of items in each section.
-
-A distribution of the labels for completed tasks is shown below the chart. In some project types, an item can have multiple labels. The total number of labels can exceed the total number of items.
-
-A distribution of labelers and how many items they've labeled also are shown.
-
-The middle section shows a table that has a queue of unassigned tasks. When ML-assisted labeling is off, this section shows the number of manual tasks that are awaiting assignment.
-
-When ML-assisted labeling is on, this section also shows:
-
-* Tasks that contain clustered items in the queue.
-* Tasks that contain pre-labeled items in the queue.
-
-Additionally, when ML-assisted labeling is enabled, you can scroll down to see the ML-assisted labeling status. The **Jobs** sections give links for each of the machine learning runs.
-
-* **Training**: Trains a model to predict the labels.
-* **Validation**: Determines whether item pre-labeling uses the prediction of this model.
-* **Inference**: Prediction run for new items.
-* **Featurization**: Clusters items (only for image classification projects).
-
-### Data tab
-
-On the **Data** tab, you can see your dataset and review labeled data. Scroll through the labeled data to see the labels. If you see data that's incorrectly labeled, select it and choose **Reject** to remove the labels and return the data to the unlabeled queue.
-
-If your project uses consensus labeling, review images that have no consensus:
-
-1. Select the **Data** tab.
-1. On the left menu, select **Review labels**.
-1. On the command bar above **Review labels**, select **All filters**.
-
- :::image type="content" source="media/how-to-create-labeling-projects/select-filters.png" alt-text="Screenshot that shows how to select filters to review consensus label problems." lightbox="media/how-to-create-labeling-projects/select-filters.png":::
-
-1. Under **Labeled datapoints**, select **Consensus labels in need of review** to show only images for which the labelers didn't come to a consensus.
-
- :::image type="content" source="media/how-to-create-labeling-projects/select-need-review.png" alt-text="Screenshot that shows how to select labels in need of review.":::
-
-1. For each image to review, select the **Consensus label** dropdown to view the conflicting labels.
-
- :::image type="content" source="media/how-to-create-labeling-projects/consensus-dropdown.png" alt-text="Screenshot that shows the Select Consensus label dropdown to review conflicting labels." lightbox="media/how-to-create-labeling-projects/consensus-dropdown.png":::
-
-1. Although you can select an individual labeler to see their labels, to update or reject the labels, you must use the top choice, **Consensus label (preview)**.
-
-### Details tab
-
-View and change details of your project. On this tab, you can:
-
-* View project details and input datasets.
-* Set or clear the **Enable incremental refresh at regular intervals** option, or request an immediate refresh.
-* View details of the storage container that's used to store labeled outputs in your project.
-* Add labels to your project.
-* Edit instructions you give to your labels.
-* Change settings for ML-assisted labeling and kick off a labeling task.
-
-### Vision Studio tab
-
-If your project was created from [Vision Studio](../ai-services/computer-vision/how-to/model-customization.md), you'll also see a **Vision Studio** tab. Select **Go to Vision Studio** to return to Vision Studio. Once you return to Vision Studio, you will be able to import your labeled data.
-
-### Access for labelers
--
-## Add new labels to a project
--
-## Start an ML-assisted labeling task
--
-## Export the labels
-
-To export the labels, on the **Project details** page of your labeling project, select the **Export** button. You can export the label data for Machine Learning experimentation at any time.
-
-If your project type is Semantic segmentation (Preview), an [Azure MLTable data asset](./how-to-mltable.md) is created.
-
-For all other project types, you can export an image label as:
-
-* A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*.
-* A [COCO format](http://cocodataset.org/#format-data) file. Azure Machine Learning creates the COCO file in a folder inside *Labeling/export/coco*.
-* An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md).
-* A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*.
-* A [COCO format](http://cocodataset.org/#format-data) file. Azure Machine Learning creates the COCO file in a folder inside *Labeling/export/coco*.
-* An [Azure MLTable data asset](./how-to-mltable.md).
-
-When you export a CSV or COCO file, a notification appears briefly when the file is ready to download. Select the **Download file** link to download your results. You'll also find the notification in the **Notification** section on the top bar:
--
-Access exported Azure Machine Learning datasets and data assets in the **Data** section of Machine Learning. The data details page also provides sample code you can use to access your labels by using Python.
--
-After you export your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models that are trained on your labeled data. Learn more at [Set up AutoML to train computer vision models by using Python](how-to-auto-train-image-models.md).
-
-## Troubleshoot issues
--
-### Troubleshoot object detection
-|Issue |Resolution |
-|||
-|If you select the Esc key when you label for object detection, a zero-size label is created and label submission fails.|To delete the label, select the **X** delete icon next to the label.|
## Next steps

<!-- * [Tutorial: Create your first image classification labeling project](tutorial-labeling.md). -->
+* [Manage labeling projects](how-to-manage-labeling-projects.md)
* [How to tag images](how-to-label-data.md)
machine-learning How To Create Text Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md
Previously updated : 02/08/2023 Last updated : 02/01/2024 monikerRange: 'azureml-api-1 || azureml-api-2'
After you train the machine learning model on your manually labeled data, the mo
[!INCLUDE [initialize](includes/machine-learning-data-labeling-initialize.md)]
-## Run and monitor the project
--
-### Dashboard
-
-The **Dashboard** tab shows the labeling task progress.
--
-The progress charts show how many items have been labeled, skipped, need review, or aren't yet complete. Hover over the chart to see the number of items in each section.
-
-A distribution of the labels for completed tasks is shown below the chart. In some project types, an item can have multiple labels. The total number of labels can exceed the total number of items.
-
-A distribution of labelers and how many items they've labeled also are shown.
-
-The middle section shows a table that has a queue of unassigned tasks. When ML-assisted labeling is off, this section shows the number of manual tasks that are awaiting assignment.
-
-When ML-assisted labeling is on, this section also shows:
-
-* Tasks that contain clustered items in the queue.
-* Tasks that contain pre-labeled items in the queue.
-
-Additionally, when ML-assisted labeling is enabled, you can scroll down to see the ML-assisted labeling status. The **Jobs** sections give links for each of the machine learning runs.
-
-### Data
-
-On the **Data** tab, you can see your dataset and review labeled data. Scroll through the labeled data to see the labels. If you see data that's incorrectly labeled, select it and choose **Reject** to remove the labels and return the data to the unlabeled queue.
-
-If your project uses consensus labeling, review items that have no consensus:
-
-1. Select the **Data** tab.
-1. On the left menu, select **Review labels**.
-1. On the command bar above **Review labels**, select **All filters**.
-
- :::image type="content" source="media/how-to-create-text-labeling-projects/text-labeling-select-filter.png" alt-text="Screenshot that shows how to select filters to review consensus label problems." lightbox="media/how-to-create-text-labeling-projects/text-labeling-select-filter.png":::
-
-1. Under **Labeled datapoints**, select **Consensus labels in need of review** to show only items for which the labelers didn't come to a consensus.
-
- :::image type="content" source="media/how-to-create-labeling-projects/select-need-review.png" alt-text="Screenshot that shows how to select labels in need of review.":::
-
-1. For each item to review, select the **Consensus label** dropdown to view the conflicting labels.
-
- :::image type="content" source="media/how-to-create-text-labeling-projects/text-labeling-consensus-dropdown.png" alt-text="Screenshot that shows the Select Consensus label dropdown to review conflicting labels." lightbox="media/how-to-create-text-labeling-projects/text-labeling-consensus-dropdown.png":::
-
-1. Although you can select an individual labeler to see their labels, to update or reject the labels, you must use the top choice, **Consensus label (preview)**.
-
-### Details tab
-
-View and change details of your project. On this tab, you can:
-
-* View project details and input datasets.
-* Set or clear the **Enable incremental refresh at regular intervals** option, or request an immediate refresh.
-* View details of the storage container that's used to store labeled outputs in your project.
-* Add labels to your project.
-* Edit instructions you give to your labels.
-* Change settings for ML-assisted labeling and kick off a labeling task.
-
-### Language Studio tab
-
-If your project was created from [Language Studio](../ai-services/language-service/custom/azure-machine-learning-labeling.md), you'll also see a **Language Studio** tab.
-
-* If labeling is active in Language Studio, you can't also label in Azure Machine Learning. In that case, Language Studio is the only tab available. Select **View in Language Studio** to go to the active labeling project in Language Studio. From there, you can switch to labeling in Azure Machine Learning if you wish.
-
-If labeling is active in Azure Machine Learning, you have two choices:
-
-* Select **Switch to Language Studio** to switch your labeling activity back to Language Studio. When you switch, all your currently labeled data is imported into Language Studio. Your ability to label data in Azure Machine Learning is disabled, and you can label data in Language Studio. You can switch back to labeling in Azure Machine Learning at any time through Language Studio.
-
- > [!NOTE]
- > Only users with the [correct roles](how-to-add-users.md) in Azure Machine Learning have the ability to switch labeling.
-
-* Select **Disconnect from Language Studio** to sever the relationship with Language Studio. Once you disconnect, the project will lose its association with Language Studio, and will no longer have the Language Studio tab. Disconnecting your project from Language Studio is a permanent, irreversible process and can't be undone. You will no longer be able to access your labels for this project in Language Studio. The labels are available only in Azure Machine Learning from this point onward.
-
-### Access for labelers
--
-## Add new labels to a project
--
-## Start an ML-assisted labeling task
--
-## Export the labels
-
-To export the labels, on the **Project details** page of your labeling project, select the **Export** button. You can export the label data for Machine Learning experimentation at any time.
-
-For all project types except **Text Named Entity Recognition**, you can export label data as:
-
-* A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*.
-* An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md).
-* A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*.
-* An [Azure MLTable data asset](./how-to-mltable.md).
-
-For **Text Named Entity Recognition** projects, you can export label data as:
-
-* An [Azure Machine Learning dataset (v1) with labels](v1/how-to-use-labeled-dataset.md).
-* A CoNLL file. For this export, you'll also have to assign a compute resource. The export process runs offline and generates the file as part of an experiment run. Azure Machine Learning creates the CoNLL file in a folder inside*Labeling/export/conll*.
-* An [Azure MLTable data asset](./how-to-mltable.md).
-* A CoNLL file. For this export, you'll also have to assign a compute resource. The export process runs offline and generates the file as part of an experiment run. Azure Machine Learning creates the CoNLL file in a folder inside*Labeling/export/conll*.
-
-When you export a CSV or CoNLL file, a notification appears briefly when the file is ready to download. Select the **Download file** link to download your results. You'll also find the notification in the **Notification** section on the top bar:
--
-Access exported Azure Machine Learning datasets and data assets in the **Data** section of Machine Learning. The data details page also provides sample code you can use to access your labels by using Python.
--
-## Troubleshoot issues
- ## Next steps
-* [How to tag text](how-to-label-data.md#label-text)
+* [Manage labeling projects](how-to-manage-labeling-projects.md)
+* [How to tag text](how-to-label-data.md#label-text)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Title: Deploy MLflow models to online endpoint
+ Title: Deploy MLflow models to real-time endpoints
-description: Learn to deploy your MLflow model as a web service that's automatically managed by Azure.
+description: Learn to deploy your MLflow model as a web service that's managed by Azure.
-+ Previously updated : 03/31/2022 Last updated : 01/31/2024
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) for real-time inference. When you deploy your MLflow model to an online endpoint, you don't need to indicate a scoring script or an environment. This characteristic is referred as __no-code deployment__.
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) for real-time inference. When you deploy your MLflow model to an online endpoint, you don't need to specify a scoring script or an environmentΓÇöthis functionality is known as _no-code deployment_.
-For no-code-deployment, Azure Machine Learning
+For no-code-deployment, Azure Machine Learning:
-* Dynamically installs Python packages provided in the `conda.yaml` file. Hence, dependencies are installed during container runtime.
-* Provides a MLflow base image/curated environment that contains the following items:
+* Dynamically installs Python packages provided in the `conda.yaml` file, so dependencies get installed during container runtime (a minimal `conda.yaml` sketch follows this list).
+* Provides an MLflow base image/curated environment that contains the following items:
* [`azureml-inference-server-http`](how-to-inference-server-http.md) * [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
- * A scoring script to perform inference.
+ * A scoring script for inferencing.
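The `conda.yaml` file that ships with an MLflow model typically looks something like the following sketch; the package names and versions here are illustrative, because MLflow generates this file from your training environment.

```yaml
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pip
  - pip:
      - mlflow
      - scikit-learn==1.2.2
```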
[!INCLUDE [mlflow-model-package-for-workspace-without-egress](includes/mlflow-model-package-for-workspace-without-egress.md)]
-## About this example
+## About the example
-This example shows how you can deploy an MLflow model to an online endpoint to perform predictions. This example uses an MLflow model based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from n = 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after baseline (regression).
+The example shows how you can deploy an MLflow model to an online endpoint to perform predictions. The example uses an MLflow model that's based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains 10 baseline variables: age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from 442 diabetes patients. It also contains the response of interest, a quantitative measure of disease progression one year after baseline.
-The model was trained using an `scikit-learn` regressor and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+The model was trained using a `scikit-learn` regressor, and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to the `cli/endpoints/online` if you are using the Azure CLI or `sdk/endpoints/online` if you are using our SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to `cli`, if you're using the Azure CLI. If you're using the Azure Machine Learning SDK for Python, change directories to `sdk/python/endpoints/online/mlflow`.
```azurecli
git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples/cli/endpoints/online
+cd azureml-examples/cli
```
-### Follow along in Jupyter Notebooks
+### Follow along in Jupyter Notebook
-You can follow along this sample in the following notebooks. In the cloned repository, open the notebook: [mlflow_sdk_online_endpoints_progresive.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb).
+You can follow the steps for using the Azure Machine Learning Python SDK by opening the [Deploy MLflow model to online endpoints](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/mlflow/online-endpoints-deploy-mlflow-model.ipynb) notebook in the cloned repository.
## Prerequisites

Before following the steps in this article, make sure you have the following prerequisites:

- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role allowing Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
-- You must have a MLflow model registered in your workspace. Particularly, this example registers a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information on roles, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+- You must have an MLflow model registered in your workspace. This article registers a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html) in the workspace.
-Additionally, you need to:
+- Also, you need to:
-# [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
-- Install the Azure CLI and the ml extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+ - Install the Azure CLI and the `ml` extension to the Azure CLI. For more information on installing the CLI, see [Install and set up the CLI (v2)](how-to-configure-cli.md).
-# [Python (Azure Machine Learning SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
-- Install the Azure Machine Learning SDK for Python
-
- ```bash
- pip install azure-ai-ml azure-identity
- ```
-
-# [Python (MLflow SDK)](#tab/mlflow)
+ - Install the Azure Machine Learning SDK for Python.
-- Install the MLflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow `azureml-mlflow`.
+ ```bash
+ pip install azure-ai-ml azure-identity
+ ```
- ```bash
- pip install mlflow azureml-mlflow
- ```
+ # [Python (MLflow SDK)](#tab/mlflow)
-- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md) for more details.
+ - Install the MLflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow `azureml-mlflow`.
-# [Studio](#tab/studio)
+ ```bash
+ pip install mlflow azureml-mlflow
+ ```
+
+ - If you're not running code in the Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the Azure Machine Learning workspace you're working on. For more information on how to connect MLflow to the workspace, see [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
+
+ # [Studio](#tab/studio)
-There are no more prerequisites when working in Azure Machine Learning studio.
+ There are no additional prerequisites when working in Azure Machine Learning studio.
### Connect to your workspace
-First, let's connect to Azure Machine Learning workspace where we are going to work on.
+First, connect to the Azure Machine Learning workspace where you'll work.
# [Azure CLI](#tab/cli)
az configure --defaults workspace=<workspace> group=<resource-group> location=<l
# [Python (Azure Machine Learning SDK)](#tab/sdk)
-The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we connect to the workspace in which you perform deployment tasks.
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, connect to the workspace in which you'll perform deployment tasks.
1. Import the required libraries:

   ```python
   from azure.ai.ml import MLClient, Input
- from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model
- from azure.ai.ml.constants import AssetTypes
+ from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ Environment,
+ CodeConfiguration,
+ )
from azure.identity import DefaultAzureCredential
+ from azure.ai.ml.constants import AssetTypes
   ```

2. Configure workspace details and get a handle to the workspace:
Navigate to [Azure Machine Learning studio](https://ml.azure.com).
-### Registering the model
+### Register the model
+
+You can deploy only registered models to online endpoints. In this case, you already have a local copy of the model in the repository, so you only need to publish the model to the registry in the workspace. You can skip this step if the model you're trying to deploy is already registered.
-Online Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
-
# [Azure CLI](#tab/cli)

```azurecli
MODEL_NAME='sklearn-diabetes'
-az ml model create --name $MODEL_NAME --type "mlflow_model" --path "sklearn-diabetes/model"
+az ml model create --name $MODEL_NAME --type "mlflow_model" --path "endpoints/online/ncd/sklearn-diabetes/model"
```

# [Python (Azure Machine Learning SDK)](#tab/sdk)
version = registered_model.version
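For reference, registering the local MLflow model with the Python SDK might look like the following sketch. The model path mirrors the CLI example above; the subscription, resource group, and workspace values are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# Placeholder workspace details; reuse an existing MLClient handle if you have one
ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Register the local MLflow model folder from this example
registered_model = ml_client.models.create_or_update(
    Model(
        name="sklearn-diabetes",
        path="endpoints/online/ncd/sklearn-diabetes/model",
        type=AssetTypes.MLFLOW_MODEL,
    )
)
version = registered_model.version
```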
# [Studio](#tab/studio)
-To create a model in Azure Machine Learning, open the Models page in Azure Machine Learning. Select **Register model** and select where your model is located. Fill out the required fields, and then select __Register__.
+To create a model in Azure Machine Learning studio:
+
+- Open the __Models__ page in the studio.
+- Select __Register__ and select where your model is located. For this example, select __From local files__.
+- On the __Upload model__ page, select __MLflow__ for the model type.
+- Select __Browse__ to select the model folder, then select __Next__.
+- Provide a __Name__ for the model on the __Model settings__ page and select __Next__.
+- Review the uploaded model files and model settings on the __Review__ page, then select __Register__.
-Alternatively, if your model was logged inside of a run, you can register it directly.
+#### What if your model was logged inside of a run?
+
+If your model was logged inside of a run, you can register it directly.
-> [!TIP]
-> To register the model, you will need to know the location where the model has been stored. If you are using `autolog` feature of MLflow, the path will depend on the type and framework of the model being used. We recommend to check the jobs output to identify which is the name of this folder. You can look for the folder that contains a file named `MLModel`. If you are logging your models manually using `log_model`, then the path is the argument you pass to such method. As an example, if you log the model using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is `classifier`.
+To register the model, you need to know the location where it's stored. If you're using MLflow's `autolog` feature, the path to the model depends on the model type and framework. Check the job's output to identify the name of the model's folder. This folder contains a file named `MLModel`.
+
+If you're using the `log_model` method to manually log your models, then the path where the model is stored is the argument you pass to the method. For example, if you log the model using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is called `classifier`.
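As an illustration, registering such a model with the MLflow client might look like the following sketch. The run ID is a placeholder, the artifact path `classifier` matches the `log_model` example above, and the sketch assumes the MLflow tracking URI already points to your workspace.

```python
import mlflow

run_id = "<RUN_ID>"  # placeholder: the run that logged the model

# Register the model directly from the run's artifacts
registered_model = mlflow.register_model(
    model_uri=f"runs:/{run_id}/classifier",
    name="sklearn-diabetes",
)
```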
# [Azure CLI](#tab/cli)
version = registered_model.version
## Deploy an MLflow model to an online endpoint
-1. First. we need to configure the endpoint where the model will be deployed. The following example configures the name and authentication mode of the endpoint:
-
+1. Configure the endpoint where the model will be deployed. The following example configures the name and authentication mode of the endpoint:
+ # [Azure CLI](#tab/cli)
-
- __endpoint.yaml__
+
+ Set an endpoint name by running the following command (replace `YOUR_ENDPOINT_NAME` with a unique name):
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="set_endpoint_name":::
+
+ Configure the endpoint:
+
+ __create-endpoint.yaml__
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/create-endpoint.yaml"::: # [Python (Azure Machine Learning SDK)](#tab/sdk) ```python
+ # Creating a unique endpoint name with current datetime to avoid conflicts
+ import datetime
+ endpoint_name = "sklearn-diabetes-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+ endpoint = ManagedOnlineEndpoint(
version = registered_model.version
# [Python (MLflow SDK)](#tab/mlflow)
- We can configure the properties of this endpoint using a configuration file. In this case, we are configuring the authentication mode of the endpoint to be "key".
+ You can configure the properties of this endpoint using a configuration file. In this case, you're configuring the authentication mode of the endpoint to be "key".
```python
endpoint_config = {
version = registered_model.version
# [Studio](#tab/studio)
- *You will perform this step in the deployment stage.*
+ *You'll perform this step in the deployment stage.*
-1. Let's create the endpoint:
+1. Create the endpoint:
# [Azure CLI](#tab/cli)
version = registered_model.version
# [Studio](#tab/studio)
- *You will perform this step in the deployment stage.*
+ *You'll perform this step in the deployment stage.*
-1. Now, it is time to configure the deployment. A deployment is a set of resources required for hosting the model that does the actual inferencing.
+1. Configure the deployment. A deployment is a set of resources required for hosting the model that does the actual inferencing.
# [Azure CLI](#tab/cli)
version = registered_model.version
) ```
- If your endpoint doesn't have egress connectivity, use [model packaging (preview)](how-to-package-models.md) by including the argument `with_package=True`:
+ Alternatively, if your endpoint doesn't have egress connectivity, use [model packaging (preview)](how-to-package-models.md) by including the argument `with_package=True`:
```python
blue_deployment = ManagedOnlineDeployment(
version = registered_model.version
blue_deployment_name = "blue"
```
- To configure the hardware requirements of your deployment, you need to create a JSON file with the desired configuration:
+ To configure the hardware requirements of your deployment, create a JSON file with the desired configuration:
```python
deploy_config = {
version = registered_model.version
```

> [!NOTE]
- > The full specification of this configuration can be found at [Managed online deployment schema (v2)](reference-yaml-deployment-managed-online.md).
+ > For details about the full specification of this configuration, see [Managed online deployment schema (v2)](reference-yaml-deployment-managed-online.md).
Write the configuration to a file:
version = registered_model.version
# [Studio](#tab/studio)
- *You will perform this step in the deployment stage.*
+ *You'll perform this step in the deployment stage.*
> [!NOTE]
- > `scoring_script` and `environment` auto generation are only supported for `pyfunc` model's flavor. To use a different flavor, see [Customizing MLflow model deployments](#customizing-mlflow-model-deployments).
+ > Autogeneration of the `scoring_script` and `environment` is only supported for the `pyfunc` model flavor. To use a different model flavor, see [Customizing MLflow model deployments](#customize-mlflow-model-deployments).
-1. Let's create the deployment:
+1. Create the deployment:
# [Azure CLI](#tab/cli)
version = registered_model.version
# [Studio](#tab/studio)
- 1. From the __Endpoints__ page, Select **+Create**.
+ 1. From the __Endpoints__ page, select **Create** from the **Real-time endpoints** tab.
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
- 1. Provide a name and authentication type for the endpoint, and then select __Next__.
- 1. When selecting a model, select the MLflow model registered previously. Select __Next__ to continue.
- 1. When you select a model registered in MLflow format, in the Environment step of the wizard, you don't need a scoring script or an environment.
+ 1. Choose the MLflow model that you registered previously, then select the **Select** button.
+
+ > [!NOTE]
+ > The configuration page includes a note to inform you that the scoring script and environment are autogenerated for your selected MLflow model.
- :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" alt-text="Screenshot showing no code and environment needed for MLflow models":::
+ 1. Select **New** to deploy to a new endpoint.
+ 1. Provide a name for the endpoint and deployment or keep the default names.
+ 1. Select __Deploy__ to deploy the model to the endpoint.
- 1. Complete the wizard to deploy the model to the endpoint.
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/deployment-wizard.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/deployment-wizard.png" alt-text="Screenshot showing no code and environment needed for MLflow models.":::
- :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" alt-text="Screenshot showing NCD review screen":::
-1. Assign all the traffic to the deployment: So far, the endpoint has one deployment, but none of its traffic is assigned to it. Let's assign it.
+1. Assign all the traffic to the deployment. So far, the endpoint has one deployment, but none of its traffic is assigned to it.
# [Azure CLI](#tab/cli)
- *This step in not required in the Azure CLI since we used the `--all-traffic` during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic` as explained at [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
+ *This step isn't required in the Azure CLI, since you used the `--all-traffic` flag during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic`. For more information on how to update traffic, see [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
# [Python (Azure Machine Learning SDK)](#tab/sdk)

```python
- endpoint.traffic = { blue_deployment_name: 100 }
+ endpoint.traffic = {"blue": 100}
```

# [Python (MLflow SDK)](#tab/mlflow)
version = registered_model.version
# [Studio](#tab/studio)
- *This step in not required in studio since we assigned the traffic during creation.*
+ *This step isn't required in the studio.*
1. Update the endpoint configuration:

   # [Azure CLI](#tab/cli)
-
- *This step in not required in the Azure CLI since we used the `--all-traffic` during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic` as explained at [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
+
+ *This step isn't required in the Azure CLI, since you used the `--all-traffic` flag during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic`. For more information on how to update traffic, see [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
# [Python (Azure Machine Learning SDK)](#tab/sdk)
version = registered_model.version
# [Studio](#tab/studio)
- *This step in not required in studio since we assigned the traffic during creation.*
+ *This step isn't required in the studio.*
-### Invoke the endpoint
+## Invoke the endpoint
-Once your deployment completes, your deployment is ready to serve request. One of the easier ways to test the deployment is by using the built-in invocation capability in the deployment client you are using.
+Once your deployment is ready, you can use it to serve requests. One way to test the deployment is by using the built-in invocation capability in the deployment client you're using. The following JSON is a sample request for the deployment.
**sample-request-sklearn.json**

:::code language="json" source="~/azureml-examples-main/cli/endpoints/online/ncd/sample-request-sklearn.json":::

> [!NOTE]
-> Notice how the key `input_data` has been used in this example instead of `inputs` as used in MLflow serving. This is because Azure Machine Learning requires a different input format to be able to automatically generate the swagger contracts for the endpoints. See [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server) for details about expected input format.
+> This example uses the key `input_data` instead of `inputs`, which MLflow serving uses. Azure Machine Learning requires a different input format so that it can automatically generate the swagger contracts for the endpoints. For more information about expected input formats, see [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server).
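For illustration only, building a payload with this shape in Python might look like the following sketch. The column names, values, and file name are placeholders, not the contents of the actual sample file.

```python
import json

# Hypothetical payload in the Azure Machine Learning "input_data" format
payload = {
    "input_data": {
        "columns": ["feature_1", "feature_2", "feature_3"],
        "index": [0, 1],
        "data": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
    }
}

with open("my-sample-request.json", "w") as f:  # placeholder file name
    json.dump(payload, f)
```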
-To submit a request to the endpoint, you can do as follows:
+Submit a request to the endpoint as follows:
# [Azure CLI](#tab/cli)
ml_client.online_endpoints.invoke(
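Completed, the SDK call might look like the following sketch; the deployment name argument is optional and assumed here.

```python
response = ml_client.online_endpoints.invoke(
    endpoint_name=endpoint_name,
    deployment_name="blue",                      # assumed deployment name
    request_file="sample-request-sklearn.json",  # the sample request shown above
)
print(response)
```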
# [Python (MLflow SDK)](#tab/mlflow)

```python
-# Read the sample request we have in the json file to construct a pandas data frame
+# Read the sample request that's in the json file to construct a pandas data frame
with open("sample-request-sklearn.json", "r") as f: sample_request = json.loads(f.read()) samples = pd.DataFrame(**sample_request["input_data"])
deployment_client.predict(endpoint=endpoint_name, df=samples)
MLflow models can use the __Test__ tab to create invocations to the created endpoints. To do that:
-1. Go to the __Endpoints__ tab and select the new endpoint created.
+1. Go to the __Endpoints__ tab and select the endpoint you created.
1. Go to the __Test__ tab.
1. Paste the content of the file `sample-request-sklearn.json`.
-1. Click on __Test__.
+1. Select __Test__.
1. The predictions will show up in the box on the right.
The response will be similar to the following text:
> For MLflow no-code-deployment, **[testing via local endpoints](how-to-deploy-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints)** is currently not supported.
-## Customizing MLflow model deployments
+## Customize MLflow model deployments
-MLflow models can be deployed to online endpoints without indicating a scoring script in the deployment definition. However, you can opt to customize how inference is executed.
+You don't have to specify a scoring script when you deploy an MLflow model to an online endpoint. However, you can opt to provide one to customize how inference is executed.
-You will typically select this workflow when:
+You'll typically want to customize your MLflow model deployment when:
> [!div class="checklist"]
> - The model doesn't have a `PyFunc` flavor on it.
-> - You need to customize the way the model is run, for instance, use an specific flavor to load it with `mlflow.<flavor>.load_model()`.
-> - You need to do pre/post processing in your scoring routine when it is not done by the model itself.
-> - The output of the model can't be nicely represented in tabular data. For instance, it is a tensor representing an image.
+> - You need to customize the way the model is run, for instance, to use a specific flavor to load the model, using `mlflow.<flavor>.load_model()`.
+> - You need to do pre/post processing in your scoring routine when it's not done by the model itself.
+> - The output of the model can't be nicely represented in tabular data. For instance, it's a tensor representing an image.
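For instance, loading the model with an explicit flavor instead of the generic `pyfunc` wrapper might look like the following sketch; the folder name `model` is the one used throughout this example.

```python
import mlflow.sklearn

# Load with the sklearn flavor to get the underlying estimator,
# rather than the generic pyfunc wrapper
model = mlflow.sklearn.load_model("model")
print(type(model))  # the native scikit-learn estimator object
```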
> [!IMPORTANT]
-> If you choose to indicate an scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
+> If you choose to specify a scoring script for an MLflow model deployment, you'll also have to specify the environment where the deployment will run.
### Steps
-Use the following steps to deploy an MLflow model with a custom scoring script.
+To deploy an MLflow model with a custom scoring script:
-1. Identify the folder where your MLflow model is placed.
+1. Identify the folder where your MLflow model is located.
- a. Go to [Azure Machine Learning portal](https://ml.azure.com).
+ a. Go to the [Azure Machine Learning studio](https://ml.azure.com).
- b. Go to the section __Models__.
+ b. Go to the __Models__ section.
- c. Select the model you are trying to deploy and click on the tab __Artifacts__.
+ c. Select the model you're trying to deploy and go to its __Artifacts__ tab.
- d. Take note of the folder that is displayed. This folder was indicated when the model was registered.
+ d. Take note of the folder that is displayed. This folder was specified when the model was registered.
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" alt-text="Screenshot showing the folder where the model artifacts are placed.":::
-1. Create a scoring script. Notice how the folder name `model` you identified before has been included in the `init()` function.
+1. Create a scoring script. Notice how the folder name `model` that you previously identified is included in the `init()` function.
+
+ > [!TIP]
+ > The following scoring script is provided as an example of how to perform inference with an MLflow model. You can adapt this script to your needs or change any of its parts to reflect your scenario.
__score.py__

:::code language="python" source="~/azureml-examples-main/cli/endpoints/online/ncd/sklearn-diabetes/src/score.py" highlight="14":::
- > [!TIP]
- > The previous scoring script is provided as an example about how to perform inference of an MLflow model. You can adapt this example to your needs or change any of its parts to reflect your scenario.
- > [!WARNING]
- > __MLflow 2.0 advisory__: The provided scoring script will work with both MLflow 1.X and MLflow 2.X. However, be advised that the expected input/output formats on those versions may vary. Check the environment definition used to ensure you are using the expected MLflow version. Notice that MLflow 2.0 is only supported in Python 3.8+.
+ > __MLflow 2.0 advisory__: The provided scoring script will work with both MLflow 1.X and MLflow 2.X. However, be advised that the expected input/output formats on those versions might vary. Check the environment definition used to ensure you're using the expected MLflow version. Notice that MLflow 2.0 is only supported in Python 3.8+.
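Condensed, the pattern that the referenced script follows might look like the following sketch. The payload handling and folder name are assumptions; rely on the referenced `score.py` for the working version.

```python
import json
import os

import mlflow
import pandas as pd


def init():
    global model
    # AZUREML_MODEL_DIR points to the root of the registered model;
    # "model" is the artifact folder name identified earlier
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)


def run(raw_data):
    # Assumes the "input_data" payload format described earlier in this article
    data = json.loads(raw_data)["input_data"]
    frame = pd.DataFrame(data["data"], columns=data["columns"], index=data["index"])
    return model.predict(frame).tolist()
```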
-1. Let's create an environment where the scoring script can be executed. Since our model is MLflow, the conda requirements are also specified in the model package (for more details about MLflow models and the files included on it see The MLmodel format). We are going then to build the environment using the conda dependencies from the file. However, we need also to include the package `azureml-inference-server-http` which is required for Online Deployments in Azure Machine Learning.
+1. Create an environment where the scoring script can be executed. Since the model is an MLflow model, the conda requirements are also specified in the model package. For more details about the files included in an MLflow model, see [The MLmodel format](concept-mlflow-models.md#the-mlmodel-format). You'll build the environment using the conda dependencies from the file, but you also need to include the package `azureml-inference-server-http`, which is required for online deployments in Azure Machine Learning.
- The conda definition file looks as follows:
+ The conda definition file is as follows:
__conda.yml__

:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/sklearn-diabetes/environment/conda.yaml":::

> [!NOTE]
- > Note how the package `azureml-inference-server-http` has been added to the original conda dependencies file.
+ > The `azureml-inference-server-http` package has been added to the original conda dependencies file.
- We will use this conda dependencies file to create the environment:
+ You'll use this conda dependencies file to create the environment:
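For orientation, creating that environment with the Python SDK might look like the following sketch. It assumes the `ml_client` handle from earlier; the environment name and conda file path are assumptions that mirror the studio steps below.

```python
from azure.ai.ml.entities import Environment

environment = Environment(
    name="sklearn-mlflow-online",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04",
    conda_file="sklearn-diabetes/environment/conda.yml",
)
environment = ml_client.environments.create_or_update(environment)
```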
# [Azure CLI](#tab/cli)
Use the following steps to deploy an MLflow model with a custom scoring script.
# [Python (MLflow SDK)](#tab/mlflow)
- *This operation is not supported in MLflow SDK*
+ *This operation isn't supported in the MLflow SDK.*
# [Studio](#tab/studio)
-
- On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
-
- 1. Navigate to the __Environments__ tab on the side menu.
+
+ 1. Go to the __Environments__ tab on the side menu.
1. Select the tab __Custom environments__ > __Create__. 1. Enter the name of the environment, in this case `sklearn-mlflow-online-py37`.
- 1. On __Select environment type__ select __Use existing docker image with conda__.
- 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04`.
- 1. On __Customize__ section copy the content of the file `sklearn-diabetes/environment/conda.yml` we introduced before.
- 1. Click on __Next__ and then on __Create__.
- 1. The environment is ready to be used.
-
+ 1. For __Select environment source__, choose __Use existing docker image with optional conda file__.
+ 1. For __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04`.
+ 1. Select __Next__ to go to the __Customize__ section.
+ 1. Copy the content of the `sklearn-diabetes/environment/conda.yml` file and paste it in the text box.
+ 1. Select __Next__ to go to the __Tags__ page, and then __Next__ again.
+ 1. On the __Review__ page, select __Create__. The environment is ready for use.
-1. Let's create the deployment now:
+1. Create the deployment:
# [Azure CLI](#tab/cli)
-
- Create a deployment configuration file:
-
+
+ Create a deployment configuration file __deployment.yml__:
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
+ name: sklearn-diabetes-custom
Use the following steps to deploy an MLflow model with a custom scoring script.
instance_type: Standard_F2s_v2
instance_count: 1
```
-
+ Create the deployment:
-
+ ```azurecli
+ az ml online-deployment create -f deployment.yml
+ ```
Use the following steps to deploy an MLflow model with a custom scoring script.
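With the Python SDK, the equivalent deployment definition might look like the following sketch. The names, code path, and SKU are assumptions based on the CLI file above; `ml_client`, `endpoint_name`, `registered_model`, and `environment` come from the earlier steps.

```python
from azure.ai.ml.entities import ManagedOnlineDeployment, CodeConfiguration

deployment = ManagedOnlineDeployment(
    name="sklearn-diabetes-custom",
    endpoint_name=endpoint_name,
    model=registered_model,
    environment=environment,
    code_configuration=CodeConfiguration(
        code="sklearn-diabetes/src",  # folder that contains score.py (assumed path)
        scoring_script="score.py",
    ),
    instance_type="Standard_F2s_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```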
# [Python (MLflow SDK)](#tab/mlflow)
- *This operation is not supported in MLflow SDK*
+ *This operation isn't supported in the MLflow SDK.*
# [Studio](#tab/studio)
-
- On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
-
+ 1. From the __Endpoints__ page, select **+Create**.
- 1. Provide a name and authentication type for the endpoint, and then select __Next__.
- 1. When selecting a model, select the MLflow model registered previously. Select __Next__ to continue.
- 1. When you select a model registered in MLflow format, in the Environment step of the wizard, you don't need a scoring script or an environment. However, you can indicate one by selecting the checkbox __Customize environment and scoring script__.
+ 1. Select the MLflow model you registered previously.
+ 1. Select __More options__ in the endpoint creation wizard to open up advanced options.
- :::image type="content" source="media/how-to-batch-scoring-script/configure-scoring-script-mlflow.png" lightbox="media/how-to-batch-scoring-script/configure-scoring-script-mlflow.png" alt-text="Screenshot showing how to indicate an environment and scoring script for MLflow models":::
-
- 1. Select the environment and scoring script you created before, then select __Next__.
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/select-advanced-deployment-options.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/select-advanced-deployment-options.png" alt-text="Screenshot showing how to select advanced deployment options when creating an endpoint.":::
+
+ 1. Provide a name and authentication type for the endpoint, and then select __Next__ to see that the model you selected is being used for your deployment.
+ 1. Select __Next__ to continue to the __Deployment__ page.
+ 1. Select __Next__ to go to the __Code + environment__ page. When you select a model registered in MLflow format, you don't need to specify a scoring script or an environment on this page. However, to customize inference for this example, you'll specify both.
+ 1. Select the slider next to __Customize environment and scoring script__.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/configure-scoring-script-mlflow.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/configure-scoring-script-mlflow.png" alt-text="Screenshot showing how to indicate an environment and scoring script for MLflow models.":::
+
+ 1. Browse to select the scoring script you created previously.
+ 1. Select __Custom environments__ for the environment type.
+ 1. Select the custom environment you created previously, and select __Next__.
1. Complete the wizard to deploy the model to the endpoint.
-1. Once your deployment completes, your deployment is ready to serve request. One of the easier ways to test the deployment is by using a sample request file along with the `invoke` method.
+1. Once your deployment completes, it is ready to serve requests. One way to test the deployment is by using a sample request file along with the `invoke` method.
**sample-request-sklearn.json**

:::code language="json" source="~/azureml-examples-main/cli/endpoints/online/ncd/sample-request-sklearn.json":::
- To submit a request to the endpoint, you can do as follows:
+ Submit a request to the endpoint as follows:
# [Azure CLI](#tab/cli)
-
- ```azurecli
- az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file endpoints/online/mlflow/sample-request-sklearn-custom.json
- ```
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="test_sklearn_deployment":::
# [Python (Azure Machine Learning SDK)](#tab/sdk)
Use the following steps to deploy an MLflow model with a custom scoring script.
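A sketch of the SDK call, assuming the endpoint handle from earlier; the deployment name is an assumption.

```python
response = ml_client.online_endpoints.invoke(
    endpoint_name=endpoint_name,
    deployment_name="sklearn-diabetes-custom",   # assumed deployment name
    request_file="sample-request-sklearn.json",
)
print(response)
```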
# [Python (MLflow SDK)](#tab/mlflow)
- *This operation is not supported in MLflow SDK*
+ *This operation isn't supported in the MLflow SDK.*
# [Studio](#tab/studio)
-
- MLflow models can use the __Test__ tab to create invocations to the created endpoints. To do that:
-
+ 1. Go to the __Endpoints__ tab and select the endpoint you created.
+ 1. Go to the __Test__ tab.
- 1. Paste the content of the file `sample-request-sklearn.json`.
- 1. Click on __Test__.
- 1. The predictions will show up in the box on the right.
-
+ 1. Paste the content of the `sample-request-sklearn.json` file into the __Input data to test endpoint__ box.
+ 1. Select __Test__.
+ 1. The predictions will show up under __Test results__ on the right-hand side of the box.
+ The response will be similar to the following text:
Use the following steps to deploy an MLflow model with a custom scoring script.
```

> [!WARNING]
- > __MLflow 2.0 advisory__: In MLflow 1.X, the key `predictions` will be missing.
+ > __MLflow 2.0 advisory__: In MLflow 1.X, the `predictions` key will be missing.
## Clean up resources
-Once you're done with the endpoint, you can delete the associated resources:
+Once you're done using the endpoint, delete its associated resources:
# [Azure CLI](#tab/cli)

:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="delete_endpoint":::

# [Python (Azure Machine Learning SDK)](#tab/sdk)
-
+ ```python
+ ml_client.online_endpoints.begin_delete(endpoint_name)
+ ```
deployment_client.delete_endpoint(endpoint_name)
# [Studio](#tab/studio)
-1. Navigate to the __Endpoints__ tab on the side menu.
-1. Select the tab __Online endpoints__.
+1. Go to the __Endpoints__ tab in the studio.
+1. Select the __Real-time endpoints__ tab.
1. Select the endpoint you want to delete.
-1. Click on __Delete__.
-1. The endpoint all along with its deployments will be deleted.
+1. Select __Delete__.
+1. The endpoint and all its deployments will be deleted.
-## Next steps
-
-To learn more, review these articles:
+## Related content
- [Deploy models with REST](how-to-deploy-with-rest.md)-- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md) - [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)-- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)-- [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md)-- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)-- [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)-- [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
+- [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
+- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
In some cases, however, you might want to do some preprocessing or post-processi
#### Customize inference with a scoring script
-Although MLflow models don't require a scoring script, you can still provide one, if needed. You can use the scoring script to customize how inference is executed for MLflow models. For more information on how to customize inference, see [Customizing MLflow model deployments (online endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (batch endpoints)](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
+Although MLflow models don't require a scoring script, you can still provide one, if needed. You can use the scoring script to customize how inference is executed for MLflow models. For more information on how to customize inference, see [Customizing MLflow model deployments (online endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customize-mlflow-model-deployments) and [Customizing MLflow model deployments (batch endpoints)](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
> [!IMPORTANT] > If you choose to specify a scoring script for an MLflow model deployment, you also need to provide an environment for the deployment.
Each workflow has different capabilities, particularly around which type of comp
| Scenario | MLflow SDK | Azure Machine Learning CLI/SDK | Azure Machine Learning studio |
| :- | :-: | :-: | :-: |
| Deploy to managed online endpoints | [See example](how-to-deploy-mlflow-models-online-progressive.md)<sup>1</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md)<sup>1</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md?tabs=studio)<sup>1</sup> |
-| Deploy to managed online endpoints (with a scoring script) | Not supported<sup>3</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) | [See example](how-to-deploy-mlflow-models-online-endpoints.md?tab=studio#customizing-mlflow-model-deployments) |
+| Deploy to managed online endpoints (with a scoring script) | Not supported<sup>3</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md#customize-mlflow-model-deployments) | [See example](how-to-deploy-mlflow-models-online-endpoints.md?tab=studio#customize-mlflow-model-deployments) |
| Deploy to batch endpoints | Not supported<sup>3</sup> | [See example](how-to-mlflow-batch.md) | [See example](how-to-mlflow-batch.md?tab=studio) |
| Deploy to batch endpoints (with a scoring script) | Not supported<sup>3</sup> | [See example](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script) | [See example](how-to-mlflow-batch.md?tab=studio#customizing-mlflow-models-deployments-with-a-scoring-script) |
| Deploy to web services (ACI/AKS) | Legacy support<sup>2</sup> | Not supported<sup>2</sup> | Not supported<sup>2</sup> |
machine-learning How To Manage Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-labeling-projects.md
+
+ Title: Manage labeling projects
+
+description: Tasks for the project manager to administer a labeling project in Azure Machine Learning, including how to export the labels.
++++++ Last updated : 02/01/2024+
+monikerRange: 'azureml-api-1 || azureml-api-2'
+# Customer intent: As a project manager, I want to monitor and administer a labeling project in Azure Machine Learning.
++
+# Manage labeling projects
+
+Learn how to manage a labeling project in Azure Machine Learning. This article is for project managers who are responsible for managing text or image labeling projects. For information about how to create the project, see [Set up a text labeling project](how-to-create-text-labeling-projects.md) or [Set up an image labeling project](how-to-create-image-labeling-projects.md).
++
+## Run and monitor the project
+
+After you initialize the project, Azure begins to run it. To manage a project, select the project on the main **Data Labeling** page.
+
+To pause or restart the project, on the project command bar, toggle the **Running** status. You can label data only when the project is running.
+
+### Monitor progress
+
+The **Dashboard** tab shows the progress of the labeling task.
+
+#### [Image projects](#tab/image)
++
+The progress charts show how many items were labeled, skipped, need review, or aren't yet complete. Hover over the chart to see the number of items in each section.
+
+A distribution of the labels for completed tasks is shown below the chart. In some project types, an item can have multiple labels. The total number of labels can exceed the total number of items.
+
+The dashboard also shows a distribution of labelers and how many items they labeled.
+
+The middle section shows a table that has a queue of unassigned tasks. When ML-assisted labeling is off, this section shows the number of manual tasks that are awaiting assignment.
+
+When ML-assisted labeling is on, this section also shows:
+
+* Tasks that contain clustered items in the queue.
+* Tasks that contain prelabeled items in the queue.
+
+Additionally, when ML-assisted labeling is enabled, you can scroll down to see the ML-assisted labeling status. The **Jobs** sections give links for each of the machine learning runs.
+
+* **Training**: Trains a model to predict the labels.
+* **Validation**: Determines whether item prelabeling uses the prediction of this model.
+* **Inference**: Prediction run for new items.
+* **Featurization**: Clusters items (only for image classification projects).
+
+#### [Text projects](#tab/text)
++
+The progress charts show how many items were labeled, skipped, need review, or aren't yet complete. Hover over the chart to see the number of items in each section.
+
+A distribution of the labels for completed tasks is shown below the chart. In some project types, an item can have multiple labels. The total number of labels can exceed the total number of items.
+
+The dashboard also shows a distribution of labelers and how many items they labeled.
+
+The middle section shows a table that has a queue of unassigned tasks. When ML-assisted labeling is off, this section shows the number of manual tasks that are awaiting assignment.
+
+When ML-assisted labeling is on, this section also shows:
+
+* Tasks that contain clustered items in the queue.
+* Tasks that contain prelabeled items in the queue.
+
+Additionally, when ML-assisted labeling is enabled, you can scroll down to see the ML-assisted labeling status. The **Jobs** sections give links for each of the machine learning runs.
+
+
+
+### Review data and labels
+
+On the **Data** tab, preview the dataset and review labeled data.
+
+Scroll through the labeled data to see the labels. If you see data that's incorrectly labeled, select it and choose **Reject** to remove the labels and return the data to the unlabeled queue.
+
+#### Skipped items
+
+A set of filters applies to the items you're reviewing. By default, you review labeled data. Select the **Asset type** filter to switch the type to **Skipped** to review items that were skipped.
++
+If you think the skipped data should be labeled, select **Reject** to put it back into the unlabeled queue. If you think the skipped data isn't relevant to your project, select **Accept** to remove it from the project.
+
+#### Consensus labeling
+
+If your project uses consensus labeling, review images that have no consensus:
+
+#### [Image projects](#tab/image)
+
+1. Select the **Data** tab.
+1. On the left menu, select **Review labels**.
+1. On the command bar above **Review labels**, select **All filters**.
+
+ :::image type="content" source="media/how-to-create-labeling-projects/select-filters.png" alt-text="Screenshot that shows how to select filters to review consensus label problems." lightbox="media/how-to-create-labeling-projects/select-filters.png":::
+
+1. Under **Labeled datapoints**, select **Consensus labels in need of review** to show only images for which the labelers didn't come to a consensus.
+
+ :::image type="content" source="media/how-to-create-labeling-projects/select-need-review.png" alt-text="Screenshot that shows how to select labels in need of review.":::
+
+1. For each image to review, select the **Consensus label** dropdown to view the conflicting labels.
+
+ :::image type="content" source="media/how-to-create-labeling-projects/consensus-dropdown.png" alt-text="Screenshot that shows the Select Consensus label dropdown to review conflicting labels." lightbox="media/how-to-create-labeling-projects/consensus-dropdown.png":::
+
+1. Although you can select an individual labeler to see their labels, to update or reject the labels, you must use the top choice, **Consensus label (preview)**.
+
+#### [Text projects](#tab/text)
+
+1. Select the **Data** tab.
+1. On the left menu, select **Review labels**.
+1. On the command bar above **Review labels**, select **All filters**.
+
+ :::image type="content" source="media/how-to-create-text-labeling-projects/text-labeling-select-filter.png" alt-text="Screenshot that shows how to select filters to review consensus label problems." lightbox="media/how-to-create-text-labeling-projects/text-labeling-select-filter.png":::
+
+1. Under **Labeled datapoints**, select **Consensus labels in need of review** to show only items for which the labelers didn't come to a consensus.
+
+ :::image type="content" source="media/how-to-create-labeling-projects/select-need-review.png" alt-text="Screenshot that shows how to select labels in need of review.":::
+
+1. For each item to review, select the **Consensus label** dropdown to view the conflicting labels.
+
+ :::image type="content" source="media/how-to-create-text-labeling-projects/text-labeling-consensus-dropdown.png" alt-text="Screenshot that shows the Select Consensus label dropdown to review conflicting labels." lightbox="media/how-to-create-text-labeling-projects/text-labeling-consensus-dropdown.png":::
+
+1. Although you can select an individual labeler to see their labels, to update or reject the labels, you must use the top choice, **Consensus label (preview)**.
+++
+### Change project details
+
+View and change details of your project on the **Details** tab. On this tab, you can:
+
+* View project details and input datasets.
+* Set or clear the **Enable incremental refresh at regular intervals** option, or request an immediate refresh.
+* View details of the storage container that's used to store labeled outputs in your project.
+* Add labels to your project.
+* Edit the instructions you give to your labelers.
+* Change settings for ML-assisted labeling and kick off a labeling task.
+
+### Projects created in Azure AI services
+
+If your labeling project was created from [Vision Studio](../ai-services/computer-vision/how-to/model-customization.md) or [Language Studio](../ai-services/language-service/custom/azure-machine-learning-labeling.md), you'll see an extra tab on the **Details** page. The tab allows you to switch between labeling in Azure Machine Learning and labeling in Vision Studio or Language Studio.
+
+#### [Image projects](#tab/image)
+
+If your project was created from [Vision Studio](../ai-services/computer-vision/how-to/model-customization.md), you'll also see a **Vision Studio** tab. Select **Go to Vision Studio** to return to Vision Studio. Once you return to Vision Studio, you'll be able to import your labeled data.
+
+#### [Text projects](#tab/text)
+
+If your project was created from [Language Studio](../ai-services/language-service/custom/azure-machine-learning-labeling.md), you'll see a **Language Studio** tab.
+
+* If labeling is active in Language Studio, you can't label in Azure Machine Learning. In that case, Language Studio is the only tab available. Select **View in Language Studio** to go to the active labeling project in Language Studio. From there, you can switch to labeling in Azure Machine Learning if you wish.
+
+If labeling is active in Azure Machine Learning, you have two choices:
+
+* Select **Switch to Language Studio** to switch your labeling activity back to Language Studio. When you switch, all your currently labeled data is imported into Language Studio. Your ability to label data in Azure Machine Learning is disabled, and you can label data in Language Studio. You can switch back to labeling in Azure Machine Learning at any time through Language Studio.
+
+ > [!NOTE]
+ > Only users with the [correct roles](how-to-add-users.md) in Azure Machine Learning have the ability to switch labeling.
+
+* Select **Disconnect from Language Studio** to sever the relationship with Language Studio. Once you disconnect, the project loses its association with Language Studio, and no longer shows the Language Studio tab. Disconnecting your project from Language Studio is a permanent, irreversible process and can't be undone. You'll no longer be able to access your labels for this project in Language Studio. The labels are available only in Azure Machine Learning from this point onward.
+++
+## Add new labels to a project
+
+During the data labeling process, you might want to add more labels to classify your items. For example, you might want to add an *Unknown* or *Other* label to indicate confusion.
+
+To add one or more labels to a project:
+
+1. On the main **Data Labeling** page, select the project.
+1. On the project command bar, toggle the status from **Running** to **Paused** to stop labeling activity.
+1. Select the **Details** tab.
+1. In the list on the left, select **Label categories**.
+1. Modify your labels.
+
+ :::image type="content" source="./media/how-to-create-labeling-projects/add-label.png" alt-text="Screenshot that shows how to add a label in Machine Learning Studio.":::
+
+1. In the form, add your new label. Then choose how to continue the project. Because you changed the available labels, choose how to treat data that's already labeled:
+
+ * Start over, and remove all existing labels. Choose this option if you want to start labeling from the beginning by using the new full set of labels.
+ * Start over, and keep all existing labels. Choose this option to mark all data as unlabeled, but keep the existing labels as a default tag for images that were previously labeled.
+ * Continue, and keep all existing labels. Choose this option to keep all data already labeled as it is, and start using the new label for data that's not yet labeled.
+
+1. Modify your instructions page as necessary for new labels.
+1. After you've added all new labels, toggle **Paused** to **Running** to restart the project.
+
+## Start an ML-assisted labeling task
+
+ML-assisted labeling starts automatically after some items have been labeled. This automatic threshold varies by project. You can manually start an ML-assisted training run if your project contains at least some labeled data.
+
+> [!NOTE]
+> On-demand training is not available for projects created before December 2022. To use this feature, create a new project.
+
+To start a new ML-assisted training run:
+
+1. At the top of your project, select **Details**.
+1. On the left menu, select **ML assisted labeling**.
+1. Near the bottom of the page, for **On-demand training**, select **Start**.
+
+## Export the labels
+
+To export the labels, on the project command bar, select the **Export** button. You can export the label data for Machine Learning experimentation at any time.
+
+#### [Image projects](#tab/image)
+
+If your project type is Semantic segmentation (Preview), an [Azure MLTable data asset](./how-to-mltable.md) is created.
+
+For all other project types, you can export an image label as:
+
+* A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*.
+* A [COCO format](http://cocodataset.org/#format-data) file. Azure Machine Learning creates the COCO file in a folder inside *Labeling/export/coco*.
+* An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md).
++
+* A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*.
+* A [COCO format](https://cocodataset.org/#format-data) file. Azure Machine Learning creates the COCO file in a folder inside *Labeling/export/coco*.
+* An [Azure MLTable data asset](./how-to-mltable.md).
+
+When you export a CSV or COCO file, a notification appears briefly when the file is ready to download. Select the **Download file** link to download your results. You can also find the notification in the **Notification** section on the top bar:
++
+Access exported Azure Machine Learning datasets and data assets in the **Data** section of Machine Learning. The data details page also provides sample code you can use to access your labels by using Python.
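For example, loading an exported MLTable data asset into a pandas DataFrame might look like the following sketch; the asset name, version, and workspace values are placeholders.

```python
import mltable
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Placeholder name/version of the exported labeled data asset
data_asset = ml_client.data.get(name="<exported-labels-asset>", version="1")
tbl = mltable.load(f"azureml:/{data_asset.id}")
labels_df = tbl.to_pandas_dataframe()
print(labels_df.head())
```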
++
+After you export your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models that are trained on your labeled data. Learn more at [Set up AutoML to train computer vision models by using Python](how-to-auto-train-image-models.md).
+
+#### [Text projects](#tab/text)
+
+To export the labels, on the **Project details** page of your labeling project, select the **Export** button. You can export the label data for Machine Learning experimentation at any time.
+
+For all project types except **Text Named Entity Recognition**, you can export label data as:
+
+* A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*.
+* An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md).
+* A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*.
+* An [Azure MLTable data asset](./how-to-mltable.md).
+
+For **Text Named Entity Recognition** projects, you can export label data as:
+
+* An [Azure Machine Learning dataset (v1) with labels](v1/how-to-use-labeled-dataset.md).
+* A `CoNLL` file. For this export, you'll also have to assign a compute resource. The export process runs offline and generates the file as part of an experiment run. Azure Machine Learning creates the `CoNLL` file in a folder inside *Labeling/export/conll*.
+* An [Azure MLTable data asset](./how-to-mltable.md).
+* A `CoNLL` file. For this export, you also have to assign a compute resource. The export process runs offline and generates the file as part of an experiment run. Azure Machine Learning creates the `CoNLL` file in a folder inside *Labeling/export/conll*.
+
+When you export a `CSV` or `CoNLL` file, a notification appears briefly when the file is ready to download. Select the **Download file** link to download your results. You'll also find the notification in the **Notification** section on the top bar:
++
+Access exported Azure Machine Learning datasets and data assets in the **Data** section of Machine Learning. The data details page also provides sample code you can use to access your labels by using Python.
++++
+## Import labels (preview)
+
+If you have an Azure MLTable data asset or COCO file that contains labels for your current data, you can import these labels into your project. For example, you might have labels that were exported from a previous labeling project using the same data. The import labels feature is available for image projects only.
+
+#### [Image projects](#tab/image)
+
+To import labels, on the project command bar, select the **Import** button. You can import labeled data for Machine Learning experimentation at any time.
+
+Import from either a COCO file or an Azure MLTable data asset.
+
+### Data mapping
++
+### Import options
++
+#### [Text projects](#tab/text)
+
+The import labels feature is not available for text projects.
+++
+## Access for labelers
+
+Anyone who has Contributor or Owner access to your workspace can label data in your project.
+
+You can also add users and customize the permissions so that they can access labeling but not other parts of the workspace or your labeling project. For more information, see [Add users to your data labeling project](./how-to-add-users.md).
+
+## Troubleshoot issues
+
+Use these tips if you see any of the following issues:
+
+|Issue |Resolution |
+|||
+|Only datasets created on blob datastores can be used.|This issue is a known limitation of the current release.|
+|Removing data from the dataset your project uses causes an error in the project.|Don't remove data from the version of the dataset you used in a labeling project. Create a new version of the dataset to use to remove data.|
+|After a project is created, the project status is *Initializing* for an extended time.|Manually refresh the page. Initialization should complete at roughly 20 data points per second. No automatic refresh is a known issue.|
+|Newly labeled items aren't visible in data review.|To load all labeled items, select the **First** button. The **First** button takes you back to the front of the list, and it loads all labeled data.|
+|You can't assign a set of tasks to a specific labeler.|This issue is a known limitation of the current release.|
+
+### Troubleshoot object detection
+
+|Issue |Resolution |
+|||
+|If you select the Esc key when you label for object detection, a zero-size label is created and label submission fails.|To delete the label, select the **X** delete icon next to the label.|
+
+## Next step
+
+[Labeling images and text documents](how-to-label-data.md)
machine-learning How To Outsource Data Labeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-outsource-data-labeling.md
Last updated 10/21/2021
-# As a project manager, I want to hire a company to label the data in my data labeling project
+# Customer intent: As a project manager, I want to hire a company to label the data in my data labeling project
# Keywords: data labeling companies, volume 170. No other keywords found.
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
It helps to create, manage, and monitor data labeling tasks for
+ Object detection (bounding box) + Instance segmentation (polygon)
-If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure Machine Learning Dataset](how-to-create-image-labeling-projects.md#export-the-labels) and then access the dataset under 'Datasets' tab in Azure Machine Learning studio. This exported dataset can then be passed as an input using `azureml:<tabulardataset_name>:<version>` format. Here's an example of how to pass existing dataset as input for training computer vision models.
+If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure Machine Learning Dataset](how-to-manage-labeling-projects.md#export-the-labels) and then access the dataset under 'Datasets' tab in Azure Machine Learning studio. This exported dataset can then be passed as an input using `azureml:<tabulardataset_name>:<version>` format. Here's an example of how to pass existing dataset as input for training computer vision models.
# [Azure CLI](#tab/cli)
my_training_data_input = Input(
# [Studio](#tab/Studio)
-Refer to Cli/Sdk tabs for reference.
+Refer to CLI/SDK tabs for reference.
If you have previously labeled data that you would like to use to train your mod
The following script uploads the image data on your local machine at path "./data/odFridgeObjects" to datastore in Azure Blob Storage. It then creates a new data asset with the name "fridge-items-images-object-detection" in your Azure Machine Learning Workspace.
-If there already exists a data asset with the name "fridge-items-images-object-detection" in your Azure Machine Learning Workspace, it updates the version number of the data asset and point it to the new location where the image data uploaded.
+
+If a data asset with the name "fridge-items-images-object-detection" already exists in your Azure Machine Learning workspace, the script updates the version number of the data asset and points it to the new location where the image data was uploaded.
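A Python SDK version of such a script might look like the following sketch; it assumes an `MLClient` handle named `ml_client`, and the name and path mirror the description above.

```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# Upload the local image folder and register it as a uri_folder data asset
my_data = Data(
    path="./data/odFridgeObjects",
    type=AssetTypes.URI_FOLDER,
    name="fridge-items-images-object-detection",
    description="Fridge objects images for object detection",
)
ml_client.data.create_or_update(my_data)
```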
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE]
-If you already have your data present in an existing datastore and want to create a data asset out of it, you can do so by providing the path to the data in the datastore, instead of providing the path of your local machine. Update the code [above](how-to-prepare-datasets-for-automl-images.md#using-prelabeled-training-data-from-local-machine) with the following snippet.
+If you already have your data present in an existing datastore and want to create a data asset out of it, you can do so by providing the path to the data in the datastore, instead of providing the path of your local machine. Update the code [above](#using-prelabeled-training-data-from-local-machine) with the following snippet.
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
Once you have created jsonl file following the above steps, you can register it
![Animation showing how to register a data asset from the jsonl files](media\how-to-prepare-datasets-for-automl-images\ui-dataset-jsnol.gif)

### Using prelabeled training data from Azure Blob storage
-If you have your labeled training data present in a container in Azure Blob storage, then you can access it directly from there by [creating a datastore referring to that container](how-to-datastore.md#create-an-azure-blob-datastore).
+If you have your labeled training data present in a container in Azure Blob storage, then you can access it directly from there by [creating a datastore referring to that container](how-to-datastore.md#create-an-azure-blob-datastore).
## Create MLTable
machine-learning How To Secure Kubernetes Inferencing Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-inferencing-environment.md
Last updated 08/31/2022
-#customer intent: I would like to have machine learning with all private IP only
+# Customer intent: I would like to have machine learning with all private IP only
# Secure Azure Kubernetes Service inferencing environment
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
channels:
Choose which Python version you want to use, and remove all other versions ```python
-myenv.python.conda_dependencies.remove_conda_package("python=3.6")
+myenv.python.conda_dependencies.remove_conda_package("python=3.8")
``` :::moniker-end
Ensure that you have a working MPI installation (preference for MPI-3 support an
* If needed, follow these [steps on building MPI](https://mpi4py.readthedocs.io/en/stable/appendix.html#building-mpi-from-sources) Ensure that you're using a compatible python version
-* Azure Machine Learning requires Python 2.5 or 3.5+, but Python 3.7+ is recommended
+* Python 3.8+ is recommended due to older versions reaching end-of-life
* See [mpi4py installation](https://aka.ms/azureml/environment/install-mpi4py) **Resources**
machine-learning Tutorial Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-deploy-model.md
Use these steps to delete your Azure Machine Learning workspace and all compute
- [Test the deployment with mirrored traffic](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic) - [Monitor online endpoints](how-to-monitor-online-endpoints.md) - [Autoscale an online endpoint](how-to-autoscale-endpoints.md)-- [Customize MLflow model deployments with scoring script](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments)
+- [Customize MLflow model deployments with scoring script](how-to-deploy-mlflow-models-online-endpoints.md#customize-mlflow-model-deployments)
- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models.md
automl_image_config = AutoMLImageConfig(task=ImageTask.IMAGE_OBJECT_DETECTION)
## Training and validation data
-In order to generate computer vision models, you need to bring labeled image data as input for model training in the form of an Azure Machine Learning [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset). You can either use a `TabularDataset` that you have [exported from a data labeling project](../how-to-create-image-labeling-projects.md#export-the-labels), or create a new `TabularDataset` with your labeled training data.
+In order to generate computer vision models, you need to bring labeled image data as input for model training in the form of an Azure Machine Learning [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset). You can either use a `TabularDataset` that you have [exported from a data labeling project](../how-to-manage-labeling-projects.md#export-the-labels), or create a new `TabularDataset` with your labeled training data.
If your training data is in a different format (like, pascal VOC or COCO), you can apply the helper scripts included with the sample notebooks to convert the data to JSONL. Learn more about how to [prepare data for computer vision tasks with automated ML](../how-to-prepare-datasets-for-automl-images.md).
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images.md
It helps to create, manage, and monitor data labeling tasks for
+ Object detection (bounding box) + Instance segmentation (polygon)
-If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure Machine Learning TabularDataset](../how-to-create-image-labeling-projects.md#export-the-labels), which can then be used directly with automated ML for training computer vision models.
+If you already have a data labeling project and you want to use that data, you can [export your labeled data as an Azure Machine Learning TabularDataset](../how-to-manage-labeling-projects.md#export-the-labels), which can then be used directly with automated ML for training computer vision models.
## Use conversion scripts
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-labeled-dataset.md
Azure Machine Learning datasets with labels are referred to as labeled datasets.
## Export data labels
-When you complete a data labeling project, you can [export the label data from a labeling project](../how-to-create-image-labeling-projects.md#export-the-labels). Doing so, allows you to capture both the reference to the data and its labels, and export them in [COCO format](http://cocodataset.org/#format-data) or as an Azure Machine Learning dataset.
+When you complete a data labeling project, you can [export the label data from a labeling project](../how-to-manage-labeling-projects.md#export-the-labels). Doing so, allows you to capture both the reference to the data and its labels, and export them in [COCO format](http://cocodataset.org/#format-data) or as an Azure Machine Learning dataset.
Use the **Export** button on the **Project details** page of your labeling project.
machine-learning Tutorial Power Bi Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-power-bi-custom-model.md
- Title: "Tutorial: Create the predictive model with a notebook (part 1 of 2)"-
-description: Learn how to build and deploy a machine learning model by using code in a Jupyter Notebook. Also create a scoring script that defines input and output for easy integration into Microsoft Power BI.
------- Previously updated : 12/22/2021---
-# Tutorial: Power BI integration - Create the predictive model with a Jupyter Notebook (part 1 of 2)
--
-In part 1 of this tutorial, you train and deploy a predictive machine learning model by using code in a Jupyter Notebook. You also create a scoring script to define the input and output schema of the model for integration into Power BI. In part 2, you use the model to predict outcomes in Microsoft Power BI.
-
-In this tutorial, you:
-
-> [!div class="checklist"]
-> * Create a Jupyter Notebook.
-> * Create an Azure Machine Learning compute instance.
-> * Train a regression model by using scikit-learn.
-> * Write a scoring script that defines the input and output for easy integration into Microsoft Power BI.
-> * Deploy the model to a real-time scoring endpoint.
--
-## Prerequisites
--- An Azure subscription. If you don't already have a subscription, you can use a [free trial](https://azure.microsoft.com/free/). -- An Azure Machine Learning workspace. If you don't already have a workspace, see [Create workspace resources](../quickstart-create-resources.md).-- Introductory knowledge of the Python language and machine learning workflows.-
-## Create a notebook and compute
-
-On the [**Azure Machine Learning Studio**](https://ml.azure.com) home page, select **Create new** > **Notebook**:
-
-
-On the **Create a new file** page:
-
-1. Name your notebook (for example, *my_model_notebook*).
-1. Change the **File Type** to **Notebook**.
-1. Select **Create**.
-
-Next, to run code cells, create a compute instance and attach it to your notebook. Start by selecting the plus icon at the top of the notebook:
--
-On the **Create compute instance** page:
-
-1. Choose a CPU virtual machine size. For this tutorial, you can choose a **Standard_D11_v2**, with 2 cores and 14 GB of RAM.
-1. Select **Next**.
-1. On the **Configure Settings** page, provide a valid **Compute name**. Valid characters are uppercase and lowercase letters, digits, and hyphens (-).
-1. Select **Create**.
-
-In the notebook, you might notice the circle next to **Compute** turned cyan. This color change indicates that the compute instance is being created:
--
-> [!NOTE]
-> The compute instance can take 2 to 4 minutes to be provisioned.
-
-After the compute is provisioned, you can use the notebook to run code cells. For example, in the cell you can type the following code:
-
-```python
-import numpy as np
-
-np.sin(3)
-```
-
-Then select Shift + Enter (or select Control + Enter or select the **Play** button next to the cell). You should see the following output:
--
-Now you're ready to build a machine learning model.
-
-## Build a model by using scikit-learn
-
-In this tutorial, you use the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset is available in [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
--
-### Import data
-
-To import your data, copy the following code and paste it into a new *code cell* in your notebook.
-
-```python
-from azureml.opendatasets import Diabetes
-
-diabetes = Diabetes.get_tabular_dataset()
-X = diabetes.drop_columns("Y")
-y = diabetes.keep_columns("Y")
-X_df = X.to_pandas_dataframe()
-y_df = y.to_pandas_dataframe()
-X_df.info()
-```
-
-The `X_df` pandas data frame contains 10 baseline input variables. These variables include age, sex, body mass index, average blood pressure, and six blood serum measurements. The `y_df` pandas data frame is the target variable. It contains a quantitative measure of disease progression one year after the baseline. The data frame contains 442 records.
-
-### Train the model
-
-Create a new *code cell* in your notebook. Then copy the following code and paste it into the cell. This code snippet constructs a ridge regression model and serializes the model by using the Python pickle format.
-
-```python
-import joblib
-from sklearn.linear_model import Ridge
-
-model = Ridge().fit(X_df,y_df)
-joblib.dump(model, 'sklearn_regression_model.pkl')
-```
-
-### Register the model
-
-In addition to the content of the model file itself, your registered model will store metadata. The metadata includes the model description, tags, and framework information.
-
-Metadata is useful when you're managing and deploying models in your workspace. By using tags, for instance, you can categorize your models and apply filters when you list models in your workspace. Also, if you mark this model with the scikit-learn framework, you'll simplify deploying it as a web service.
-
-Copy the following code and then paste it into a new *code cell* in your notebook.
-
-```python
-import sklearn
-
-from azureml.core import Workspace
-from azureml.core import Model
-from azureml.core.resource_configuration import ResourceConfiguration
-
-ws = Workspace.from_config()
-
-model = Model.register(workspace=ws,
- model_name='my-sklearn-model', # Name of the registered model in your workspace.
- model_path='./sklearn_regression_model.pkl', # Local file to upload and register as a model.
- model_framework=Model.Framework.SCIKITLEARN, # Framework used to create the model.
- model_framework_version=sklearn.__version__, # Version of scikit-learn used to create the model.
- sample_input_dataset=X,
- sample_output_dataset=y,
- resource_configuration=ResourceConfiguration(cpu=2, memory_in_gb=4),
- description='Ridge regression model to predict diabetes progression.',
- tags={'area': 'diabetes', 'type': 'regression'})
-
-print('Name:', model.name)
-print('Version:', model.version)
-```
-
-You can also view the model in Azure Machine Learning Studio. In the menu on the left, select **Models**:
--
-## Define the scoring script
-
-When you deploy a model that will be integrated into Power BI, you need to define a Python *scoring script* and custom environment. The scoring script contains two functions:
--- The `init()` function runs when the service starts. It loads the model (which is automatically downloaded from the model registry) and deserializes it.-- The `run(data)` function runs when a call to the service includes input data that needs to be scored. -
->[!NOTE]
-> The Python decorators in the code below define the schema of the input and output data, which is important for integration into Power BI.
-
-Copy the following code and paste it into a new *code cell* in your notebook. The following code snippet has cell magic that writes the code to a file named *score.py*.
-
-```python
-%%writefile score.py
-
-import json
-import pickle
-import numpy as np
-import pandas as pd
-import os
-import joblib
-from azureml.core.model import Model
-
-from inference_schema.schema_decorators import input_schema, output_schema
-from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
-from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType
--
-def init():
- global model
- # Replace filename if needed.
- path = os.getenv('AZUREML_MODEL_DIR')
- model_path = os.path.join(path, 'sklearn_regression_model.pkl')
- # Deserialize the model file back into a sklearn model.
- model = joblib.load(model_path)
--
-input_sample = pd.DataFrame(data=[{
- "AGE": 5,
- "SEX": 2,
- "BMI": 3.1,
- "BP": 3.1,
- "S1": 3.1,
- "S2": 3.1,
- "S3": 3.1,
- "S4": 3.1,
- "S5": 3.1,
- "S6": 3.1
-}])
-
-# This is an integer type sample. Use the data type that reflects the expected result.
-output_sample = np.array([0])
-
-# To indicate that we support a variable length of data input,
-# set enforce_shape=False
-@input_schema('data', PandasParameterType(input_sample))
-@output_schema(NumpyParameterType(output_sample))
-def run(data):
- try:
- print("input_data....")
- print(data.columns)
- print(type(data))
- result = model.predict(data)
- print("result.....")
- print(result)
- # You can return any data type, as long as it can be serialized by JSON.
- return result.tolist()
- except Exception as e:
- error = str(e)
- return error
-```
-
-## Define the custom environment
-
-Next, define the environment to score the model. In the environment, define the Python packages, such as pandas and scikit-learn, that the scoring script (*score.py*) requires.
-
-To define the environment, copy the following code and paste it into a new *code cell* in your notebook.
-
-```python
-from azureml.core.model import InferenceConfig
-from azureml.core import Environment
-from azureml.core.conda_dependencies import CondaDependencies
-
-environment = Environment('my-sklearn-environment')
-environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[
- 'azureml-defaults',
- 'inference-schema[numpy-support]',
- 'joblib',
- 'numpy',
- 'pandas',
- 'scikit-learn=={}'.format(sklearn.__version__)
-])
-
-inference_config = InferenceConfig(entry_script='./score.py',environment=environment)
-```
-
-## Deploy the model
-
-To deploy the model, copy the following code and paste it into a new *code cell* in your notebook:
-
-```python
-service_name = 'my-diabetes-model'
-
-service = Model.deploy(ws, service_name, [model], inference_config, overwrite=True)
-service.wait_for_deployment(show_output=True)
-```
-
->[!NOTE]
-> The service can take 2 to 4 minutes to deploy.
-
-If the service deploys successfully, you should see the following output:
-
-```txt
-Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
-Running......................................................................................
-Succeeded
-ACI service creation operation finished, operation "Succeeded"
-```
-
-You can also view the service in Azure Machine Learning Studio. In the menu on the left, select **Endpoints**:
--
-We recommend that you test the web service to ensure it works as expected. To return your notebook, in Azure Machine Learning Studio, in the menu on the left, select **Notebooks**. Then copy the following code and paste it into a new *code cell* in your notebook to test the service.
-
-```python
-import json
-
-input_payload = json.dumps({
- 'data': X_df[0:2].values.tolist()
-})
-
-output = service.run(input_payload)
-
-print(output)
-```
-
-The output should look like this JSON structure: `{'predict': [[205.59], [68.84]]}`.
-
-## Next steps
-
-In this tutorial, you saw how to build and deploy a model so that it can be consumed by Power BI. In the next part, you'll learn how to consume this model in a Power BI report.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Consume a model in Power BI](/power-bi/connect-data/service-aml-integrate?context=azure/machine-learning/context/ml-context)
managed-grafana How To Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-set-up-private-access.md
Title: How to set up private access (preview) in Azure Managed Grafana+ description: How to disable public access to your Azure Managed Grafana workspace and configure private endpoints.
+#CustomerIntent: As a data professional or developer I want to configure private access to an Azure Managed Grafana workspace.
Previously updated : 10/27/2023 Last updated : 01/31/2024
In this guide, you'll learn how to disable public access to your Azure Managed G
Public access is enabled by default when you create an Azure Grafana workspace. Disabling public access prevents all traffic from accessing the resource unless you go through a private endpoint. > [!NOTE]
-> When private access (preview) is enabled, pinging charts using the [*Pin to Grafana*](../azure-monitor/visualize/grafana-plugin.md#pin-charts-from-the-azure-portal-to-azure-managed-grafana) feature will no longer work as the Azure portal canΓÇÖt access a Managed Grafana workspace on a private IP address.
+> When private access (preview) is enabled, pinging charts using the [*Pin to Grafana*](../azure-monitor/visualize/grafana-plugin.md#pin-charts-from-the-azure-portal-to-azure-managed-grafana) feature will no longer work as the Azure portal canΓÇÖt access an Azure Managed Grafana workspace on a private IP address.
### [Portal](#tab/azure-portal)
Once you have disabled public access, set up a [private endpoint](../private-lin
1. A subscription and resource group for your private DNS zone are preselected. You can change them optionally.
- To learn more about DNS configuration, go to [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) and [DNS configuration for Private Endpoints](../private-link/private-endpoint-overview.md#dns-configuration).
+ To learn more about DNS configuration, go to [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) and [DNS configuration for Private Endpoints](../private-link/private-endpoint-overview.md#dns-configuration). Azure Private Endpoint private DNS zone values for Azure Managed Grafana are listed at [Azure services DNS zone](../private-link/private-endpoint-dns.md#management-and-governance).
+ :::image type="content" source="media/private-endpoints/create-endpoint-dns.png" alt-text="Screenshot of the Azure portal filling out DNS tab.":::
If you have issues with a private endpoint, check the following guide: [Troubles
## Next steps
-In this how-to guide, you learned how to set up private access from your users to a Managed Grafana workspace. To learn how to configure private access between a Managed Grafana workspace and a data source, see [Connect to a data source privately](how-to-connect-to-data-source-privately.md).
+In this how-to guide, you learned how to set up private access from your users to an Azure Managed Grafana workspace. To learn how to configure private access between a Managed Grafana workspace and a data source, see [Connect to a data source privately](how-to-connect-to-data-source-privately.md).
managed-instance-apache-cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/faq.md
The settings for table metadata such as bloom filter, caching, read repair chanc
Yes. You can find a sample for deploying a cluster with a datacenter [here](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_cassandra_datacenter).
+### How can I add a single public endpoint to my Azure Managed Instance Cassandra Cluster?
+
+To achieve this, you can [create a load balancer](../load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md). When configuring the Backend pools of the load balancer, utilize all the IP addresses from the data center within your Managed Instance cluster. You might see errors in the logs when using java and other Cassandra drivers. Users use this approach to work around network restrictions when administering clusters with cqlsh. This approach might result in extra costs. Also, you should carefully assess how opting for a single endpoint can affect performance.
+ ## Next steps To learn about frequently asked questions in other APIs, see:
migrate How To Build A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-build-a-business-case.md
ms. Previously updated : 01/17/2023 Last updated : 01/24/2024
This article describes how to build a Business case for on-premises servers and
**Discovery Source** | **Details** | **Migration strategies that can be used to build a business case** | |
- Use more accurate data insights collected via **Azure Migrate appliance** | You need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md) or [Hyper-V](how-to-set-up-appliance-hyper-v.md) or [Physical/Bare-metal or other clouds](how-to-set-up-appliance-physical.md). The appliance discovers servers, SQL Server instance and databases, and ASP.NET webapps and sends metadata and performance (resource utilization) data to Azure Migrate. [Learn more](migrate-appliance.md). | Azure recommended to minimize cost, Migrate to all IaaS (Infrastructure as a Service), Modernize to PaaS (Platform as a Service)
- Build a quick business case using the **servers imported via a .csv file** | You need to provide the server inventory in a [.CSV file and import in Azure Migrate](tutorial-discover-import.md) to get a quick business case based on the provided inputs. You don't need to set up the Azure Migrate appliance to discover servers for this option. | Migrate to all IaaS (Infrastructure as a Service)
+ Use more accurate data insights collected via **Azure Migrate appliance** | You need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md) or [Hyper-V](how-to-set-up-appliance-hyper-v.md) or [Physical/Bare-metal or other clouds](how-to-set-up-appliance-physical.md). The appliance discovers servers, SQL Server instance and databases, and ASP.NET webapps and sends metadata and performance (resource utilization) data to Azure Migrate. [Learn more](migrate-appliance.md). | Azure recommended to minimize cost, Migrate to all IaaS (Infrastructure as a Service), Modernize to PaaS (Platform as a Service), Migrate to AVS (Azure VMware Solution)
+ Build a quick business case using the **servers imported via a .csv file** | You need to provide the server inventory in a [.CSV file and import in Azure Migrate](tutorial-discover-import.md) to get a quick business case based on the provided inputs. You don't need to set up the Azure Migrate appliance to discover servers for this option. | Migrate to all IaaS (Infrastructure as a Service), Migrate to AVS (Azure VMware Solution)
## Business case overview
There are three types of migration strategies that you can choose while building
| | **Azure recommended to minimize cost** | You can get the most cost efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - minimize cost from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service and Azure Kubernetes Service assessments depending on web app readiness and minimum cost.<br/><br/> For general servers, sizing and cost comes from Azure VM assessment. **Migrate to all IaaS (Infrastructure as a Service)** | You can get a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to SQL Server on Azure VM* report. <br/><br/> For general servers and servers hosting web apps, sizing and cost comes from Azure VM assessment.
-**Modernize to PaaS (Platform as a Service)** | You can get a PaaS preferred recommendation that means, the logic identifies workloads best fit for PaaS targets.<br/><br/> General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to Azure SQL MI* report.<br/><br/> For web apps, sizing and cost comes from Azure App Service and Azure Kubernetes Service assessments, with a preference to App Service. <br/><br/> For general servers, sizing and cost comes from Azure VM assessment.<br/><br/>
+**Modernize to PaaS (Platform as a Service)** | You can get a PaaS preferred recommendation that means, the logic identifies workloads best fit for PaaS targets.<br/><br/> General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to Azure SQL MI* report.<br/><br/> For web apps, sizing and cost comes from Azure App Service and Azure Kubernetes Service assessments, with a preference to App Service. <br/><br/> For general servers, sizing and cost comes from Azure VM assessment.
+**Migrate to AVS (Azure VMware Solution)** | You can get a quick lift and shift recommendation to AVS (Azure VMware Solution). | For all servers, sizing and cost comes from Azure VMware Solution assessment.<br/><br/>
> [!Note] > Although the Business case picks Azure recommendations from certain assessments, you won't be able to access the assessments directly. To deep dive into sizing, readiness and Azure cost estimates, you can create respective assessments for the servers or workloads.
There are three types of migration strategies that you can choose while building
- With the default *Azure recommended approach to minimize cost*, you can get the most cost-efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. - With *Migrate to all IaaS (Infrastructure as a Service)*, you can get a quick lift and shift recommendation to Azure IaaS. - With *Modernize to PaaS (Platform as a Service)*, you can get cost effective recommendations for Azure IaaS and more PaaS preferred targets in Azure PaaS.
+ - With *Migrate to AVS (Azure VMware Solution)*, you can get the most cost effective and compatible target recommendation for hosting workloads on AVS. Only Reserved Instances are available as a savings option for migrating to AVS.
1. In **Savings options**, specify the savings options combination that you want to be considered while optimizing your Azure costs and maximize savings. Based on the availability of the savings option in the chosen region and the targets, the business case recommends the appropriate savings options to maximize your savings on Azure. - Choose 'Reserved Instance', if your datacenter comprises most consistently running resources. - Choose 'Reserved Instance + Azure Savings Plan', if you want additional flexibility and automated cost optimization for workloads applicable for Azure Savings Plan (Compute targets including Azure VM and Azure App Service).
migrate How To View A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-view-a-business-case.md
ms. Previously updated : 12/12/2023 Last updated : 01/24/2024
There are four major reports that you need to review:
- Support status of the operating system and database licenses. - **On-premises vs Azure**: This report covers the breakdown of the total cost of ownership by cost categories and insights on savings. - **Azure IaaS**: This report covers the Azure and on-premises footprint of the servers and workloads recommended for migrating to Azure IaaS.
+- **On-premises vs AVS (Azure VMware Solution)**: In case you build a business case to ΓÇ£Migrate to AVSΓÇ¥, youΓÇÖll see this report which covers the AVS and on-premises footprint of the workloads for migrating to AVS.
- **Azure PaaS**: This report covers the Azure and on-premises footprint of the workloads recommended for migrating to Azure PaaS. ## View a business case
This card covers your potential total cost of ownership savings based on the cho
It covers the cost of running all the servers scoped in the business case using some of the industry benchmarks. It doesn't cover Facilities (lease/colocation/power) cost by default, but you can edit it in the on-premises cost assumptions section. It includes one time cost for some of the capital expenditures like hardware acquisition etc., and annual cost for other components that you might pay as operating expenses like maintenance etc. ### Estimated Azure cost
-It covers the cost of all servers and workloads that have been identified as ready for migration/modernization as per the recommendation. Refer to the respective *Azure IaaS* and *Azure PaaS* report for details. The Azure cost is calculated based on the right sized Azure configuration, ideal migration target and most suitable pricing offers for your workloads. You can override the migration strategy, target location or other settings in the 'Azure cost' assumptions to see how your savings could change by migrating to Azure.
+It covers the cost of all servers and workloads that have been identified as ready for migration/modernization as per the recommendation. Refer to the respective [Azure IaaS](how-to-view-a-business-case.md#azure-iaas-report) and [Azure PaaS](how-to-view-a-business-case.md#azure-paas-report) report for details. The Azure cost is calculated based on the right sized Azure configuration, ideal migration target and most suitable pricing offers for your workloads. You can override the migration strategy, target location or other settings in the 'Azure cost' assumptions to see how your savings could change by migrating to Azure.
### YoY estimated current vs future state cost As you plan to migrate to Azure in phases, this line chart shows your cashflow per year based on the estimated migration completed that year. By default, it's assumed that you'll migrate 0% in the current year, 20% in Year 1, 50% in Year 2, and 100% in Year 3.
As you plan to migrate to Azure in phases, this line chart shows your cashflow p
This card shows a static percentage of maximum savings you could get with Azure hybrid Benefits. ### Savings with Extended security updates
-It shows the potential savings with respect to extended security update license. It is the cost of extended security update license required to run Windows Server and SQL Server securely after the end of support of its licenses on-premises. Extended security updates are offered at no additional cost on Azure.
+It shows the potential savings with respect to extended security update license. It's the cost of extended security update license required to run Windows Server and SQL Server securely after the end of support of its licenses on-premises. Extended security updates are offered at no additional cost on Azure.
### Savings with security and management
It covers cost components for on-premises and Azure, savings, and insights to un
## Azure IaaS report
-**Azure tab**
+#### [Azure](#tab/iaas-azure)
This section contains the cost estimate by recommended target (Annual cost and also includes Compute, Storage, Network, labor components) and savings from Hybrid benefits. - IaaS cost estimate:
This section contains the cost estimate by recommended target (Annual cost and a
- **Compute and license cost**: This card shows the comparison of compute and license cost when using Azure hybrid benefit and without Azure hybrid benefit. - **Savings** - This card displays the estimated maximum savings when using Azure hybrid benefit and with extended security updates over a period of one year. - Azure VM:
- - **Estimated cost by savings options**: This card includes compute cost for Azure VMs. It is recommended that all idle servers are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance or 3 year Azure Savings Plan to maximize savings.
+ - **Estimated cost by savings options**: This card includes compute cost for Azure VMs. It's recommended that all idle servers are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance or 3 year Azure Savings Plan to maximize savings.
- **Recommended VM family**: This card covers the VM sizes recommended. The ones marked Unknown are the VMs that have some readiness issues and no SKUs could be found for them. - **Recommended storage type**: This card covers the storage cost distribution across different recommended storage types. - SQL Server on Azure VM:
This section assumes instance to SQL Server on Azure VM migration recommendation
- **Recommended VM family**: This card covers the VM sizes recommended. The ones marked Unknown are the VMs that have some readiness issues and no SKUs could be found for them. - **Recommended storage type**: This card covers the storage cost distribution across different recommended storage types.
-**On-premises tab**
+#### [On-premises](#tab/iaas-on-premises)
- On-premises footprint of the servers recommended to be migrated to Azure IaaS. - Contribution of Zombie servers in the on-premises cost. - Distribution of servers by OS, virtualization, and activity state. - Distribution by support status of OS licenses and OS versions. ++ ## Azure PaaS report
-**Azure tab**
+#### [Azure](#tab/paas-azure)
This section contains the cost estimate by recommended target (Annual cost and also includes Compute, Storage, Network, labor components) and savings from Hybrid benefits. - PaaS cost estimate:
This section contains the cost estimate by recommended target (Annual cost and a
- **Compute and license cost**: This card shows the comparison of compute and license cost when using Azure hybrid benefit and without Azure hybrid benefit. - **Savings** - This card displays the estimated maximum savings when using Azure hybrid benefit and with extended security updates over a period of one year. - Azure SQL:
- - **Estimated cost by savings options**: This card includes compute cost for Azure SQL MI. It is recommended that all idle SQL instances are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance to maximize savings.
+ - **Estimated cost by savings options**: This card includes compute cost for Azure SQL MI. It's recommended that all idle SQL instances are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance to maximize savings.
- **Distribution by recommended service tier** : This card covers the recommended service tier. - Azure App Service and App Service Container:
- - **Estimated cost by savings options**: This card includes Azure App Service Plans cost. It is recommended that the web apps are migrated using 3 year Reserved Instance or 3 year Savings Plan to maximize savings.
+ - **Estimated cost by savings options**: This card includes Azure App Service Plans cost. It's recommended that the web apps are migrated using 3 year Reserved Instance or 3 year Savings Plan to maximize savings.
- **Distribution by recommended plans** : This card covers the recommended App Service plan. - Azure Kubernetes Service:
- - **Estimated cost by savings options**: This card includes the cost of the recommended AKS node pools. It is recommended that the web apps are migrated using 3 year Reserved Instance or 3 year Savings Plan to maximize savings.
+ - **Estimated cost by savings options**: This card includes the cost of the recommended AKS node pools. It's recommended that the web apps are migrated using 3 year Reserved Instance or 3 year Savings Plan to maximize savings.
- **Distribution by recommended Node pool SKU**: This card covers the recommended SKUs for AKS node pools.
-**On-premises tab**
+#### [On-premises](#tab/paas-on-premises)
- On-premises footprint of the servers recommended to be migrated to Azure PaaS. - Contribution of Zombie SQL instances in the on-premises cost. - Distribution by support status of OS licenses and OS versions. - Distribution of SQL instances by SQL version and activity state. ++
+## On-premises vs AVS report
+It covers cost components for on-premises and AVS, savings, and insights to understand the savings better.
+
+## AVS report
+
+#### [AVS (Azure VMware Solution)](#tab/avs-azure)
+
+This section contains the cost estimate by recommended target (Annual cost includes Compute, Storage, Network, labor components) and savings from Hybrid benefits.
+- AVS cost estimate:
+ - **Estimated AVS cost**: This card includes the total cost of ownership for hosting all workloads on AVS including the AVS nodes cost (which includes storage cost), networking and labor cost. The node cost is computed by taking the most cost optimum AVS node SKU. A default CPU over-subscription of 4:1, 100% memory overcommit and compression and deduplication factor of 1.5 is assumed to get the compute cost of AVS. You can learn more about this [here](concepts-azure-vmware-solution-assessment-calculation.md#whats-in-an-azure-vmware-solution-assessment). External storage options like ANF arenΓÇÖt a part of the business case yet.
+ - **Compute and license cost**: This card shows the comparison of compute and license cost when using Azure hybrid benefit and without Azure hybrid benefit.
+- Savings and optimization:
+ - **Savings with 3-year RI**: This card shows the node cost with 3-year RI.
+ - **Savings with Azure Hybrid Benefit & Extended Security Updates**: This card displays the estimated maximum savings when using Azure hybrid benefit and with extended security updates over a period of one year.
+
+#### [On-premises](#tab/avs-on-premises)
+
+- On-premises footprint of the servers recommended to be migrated to AVS.
+- Contribution of Zombie servers in the on-premises cost.
+- Distribution of servers by OS, virtualization, and activity state.
+- Distribution by support status of OS licenses and OS versions.
++ ## Next steps - [Learn more](concepts-business-case-calculation.md) about how business cases are calculated.
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
ms. Previously updated : 12/28/2023 Last updated : 01/30/2024 #Customer intent: As a Hyper-V admin, I want to discover my on-premises servers on Hyper-V.
The **Database instances** displays the number of instances discovered by Azure
To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months.
+## Onboard to Azure Stack HCI (optional)
+
+> [!Note]
+> Perform this step only if you are migrating to [Azure Stack HCI](https://learn.microsoft.com/azure-stack/hci/overview).
+
+Provide the Azure Stack cluster information and the credentials to connect to the cluster. For more information, see [Download the Azure Stack HCI software](https://learn.microsoft.com/azure-stack/hci/deploy/download-azure-stack-hci-software).
++ ## Next steps
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
The following example shows how to connect to your server using the mysql comman
``` > [!IMPORTANT]
-> Setting the require_secure_transport to OFF doesn't mean encrypted connections aren't supported on the server side. If you set require_secure_transport to OFF on the Azure Database for MySQL flexible server instance, but if the client connects with the encrypted connection, it still is accepted. The following connection using mysql client to a Azure Database for MySQL flexible server instance configured with require_secure_transport=OFF also works as shown below.
+> Setting the require_secure_transport to OFF doesn't mean encrypted connections aren't supported on the server side. If you set require_secure_transport to OFF on the Azure Database for MySQL flexible server instance, but if the client connects with the encrypted connection, it still is accepted. The following connection using mysql client to an Azure Database for MySQL flexible server instance configured with require_secure_transport=OFF also works as shown below.
```bash mysql.exe -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=REQUIRED
openshift Howto Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-monitor-alerts.md
Configuring Resource Health alerts for an ARO cluster requires an alert rule. Al
1. Select **Resource health**, then select **Add resource health alert**.
+ :::image type="content" source="media/howto-monitor-alerts/resource-health.png" alt-text="Screenshot showing Resource health window with Add resource health alert button highlighted.":::
+ 1. Enter all applicable parameters for the alert rule in the various tabs of the window, including an **Alert rule name** in the **Details** tab. 1. Select **Review + Create**.
openshift Tutorial Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-delete-cluster.md
If you have access to multiple subscriptions, run `az account set -s {subscripti
## Delete the cluster
-In previous tutorials, the following variables were set.
+In previous tutorials, the following variable was set:
```bash
-CLUSTER=yourclustername
RESOURCEGROUP=yourresourcegroup ```
-Using these values, delete your cluster:
+Using this value, delete your cluster:
```azurecli
-az aro delete --resource-group $RESOURCEGROUP --name $CLUSTER
+az group delete --name $RESOURCEGROUP
```
-You'll then be prompted to confirm if you want to delete the cluster. After you confirm with `y`, it will take several minutes to delete the cluster. When the command finishes, the entire resource group and all resources inside it, including the cluster, will be deleted.
+You'll then be prompted to confirm if you are sure you want to perform this operation. After you confirm with `y`, it will take several minutes to delete the cluster. When the command finishes, the entire resource group and all resources inside it, including the cluster and the virtual network, will be deleted.
## Next steps
operator-insights Mcc Edr Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/mcc-edr-agent-configuration.md
agent_id: mcc-edr-agent01
# a unique name. # The name can then be referenced for secrets later in the config. secret_providers:
-ΓÇ» - name: dp_keyvault
-ΓÇ» ΓÇ» provider:
-ΓÇ» ΓÇ» ΓÇ» type: key_vault
-ΓÇ» ΓÇ» ΓÇ» vault_name: contoso-dp-kv
-ΓÇ» ΓÇ» ΓÇ» auth:
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» cert_path: /path/to/local/certkey.pkcs
+ - name: dp_keyvault
+ provider:
+ type: key_vault
+ vault_name: contoso-dp-kv
+ auth:
+ tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7
+ identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2
+ cert_path: /path/to/local/certkey.pkcs
# Source configuration. This controls how EDRs are ingested from # MCC. source:
operator-insights Sftp Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/sftp-agent-configuration.md
site_id: london-lab01
# Config for secrets providers. We support reading secrets from Azure Key Vault and from the VM's local filesystem. # Multiple secret providers can be defined and each must be given a unique name, which is referenced later in the config. # Two secret providers must be configured for the SFTP agent to run:
- # A secret provider of type `key_vault` which contains details required to connect to the Azure Key Vault and allow connection to the storage account.
- # A secret provider of type `file_system`, which specifies a directory on the VM where secrets for connecting to the SFTP server are stored.
+# A secret provider of type `key_vault` which contains details required to connect to the Azure Key Vault and allow connection to the storage account.
+# A secret provider of type `file_system`, which specifies a directory on the VM where secrets for connecting to the SFTP server are stored.
secret_providers:
-ΓÇ» - name: data_product_keyvault
-ΓÇ» ΓÇ» provider:
-ΓÇ» ΓÇ» ΓÇ» type: key_vault
-ΓÇ» ΓÇ» ΓÇ» vault_name: contoso-dp-kv
-ΓÇ» ΓÇ» ΓÇ» auth:
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» cert_path: /path/to/local/certkey.pkcs
+ - name: data_product_keyvault
+ provider:
+ type: key_vault
+ vault_name: contoso-dp-kv
+ auth:
+ tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7
+ identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2
+ cert_path: /path/to/local/certkey.pkcs
- name: local_file_system
- ΓÇ» provider:
+ provider:
# The file system provider specifies a folder in which secrets are stored. # Each secret must be an individual file without a file extension, where the secret name is the file name, and the file contains the secret only. type: file_system # The absolute path to the secrets directory secrets_directory: /path/to/secrets/directory file_sources:
-# Source configuration. This specifies which files are ingested from the SFTP server.
-# Multiple sources can be defined here (where they can reference different folders on the same SFTP server). Each source must have a unique identifier where any URL reserved characters in source_id must be percent-encoded.
-# A sink must be configured for each source.
-- source_id: sftp-source01
- source:
- sftp:
- # The IP address or hostname of the SFTP server.
- host: 192.0.2.0
- # Optional. The port to connect to on the SFTP server. Defaults to 22.
- port: 22
- # The path to a folder on the SFTP server that files will be uploaded to Azure Operator Insights from.
- base_path: /path/to/sftp/folder
- # The path on the VM to the 'known_hosts' file for the SFTP server.  This file must be in SSH format and contain details of any public SSH keys used by the SFTP server. This is required by the agent to verify it is connecting to the correct SFTP server.
- known_hosts_file: /path/to/known_hosts
- # The name of the user on the SFTP server which the agent will use to connect.
- user: sftp-user
- auth:
- # The name of the secret provider configured above which contains the secret for the SFTP user.
- secret_provider: local_file_system
- # The form of authentication to the SFTP server. This can take the values 'password' or 'ssh_key'. The appropriate field(s) must be configured below depending on which type is specified.
- type: password
- # Only for use with 'type: password'. The name of the file containing the password in the secrets_directory folder
- secret_name: sftp-user-password
- # Only for use with 'type: ssh_key'. The name of the file containing the SSH key in the secrets_directory folder
- key_secret: sftp-user-ssh-key
- # Optional. Only for use with 'type: ssh_key'. The passphrase for the SSH key. This can be omitted if the key is not protected by a passphrase.
- passphrase_secret_name: sftp-user-ssh-key-passphrase
- # Optional. A regular expression to specify which files in the base_path folder should be ingested. If not specified, the STFP agent will attempt to ingest all files in the base_path folder (subject to exclude_pattern, settling_time_secs and exclude_before_time).
- include_pattern: "*\.csv$"
- # Optional. A regular expression to specify any files in the base_path folder which should not be ingested. Takes priority over include_pattern, so files which match both regular expressions will not be ingested.
- exclude_pattern: '\.backup$'
- # A duration in seconds. During an upload run, any files last modified within the settling time are not selected for upload, as they may still be being modified.
- settling_time_secs: 60
- # A datetime that adheres to the RFC 3339 format. Any files last modified before this datetime will be ignored.
- exclude_before_time: "2022-12-31T21:07:14-05:00"
- # An expression in cron format, specifying when upload runs are scheduled for this source. All times refer to UTC. The cron schedule should include fields for: second, minute, hour, day of month, month, day of week, and year. E.g.:
- # `* /3 * * * * *` for once every 3 minutes
- # `0 30 5 * * * *` for 05:30 every day
- # `0 15 3 * * Fri,Sat *` for 03:15 every Friday and Saturday
- schedule: "*/30 * * * Apr-Jul Fri,Sat,Sun 2025"
- sink:
- auth:
- type: sas_token
- # This must reference a secret provider configured above.
- secret_provider: data_product_keyvault
- # The name of a secret in the corresponding provider.
- # This will be the name of a secret in the Key Vault.
- # This is created by the Data Product and should not be changed.
- secret_name: adls-sas-token
- # The container within the ingestion account. This *must* be exactly the name of the container that Azure Operator Insights expects.
- container_name: example-container
- # Optional. A string giving an optional base path to use in Azure Blob Storage. Reserved URL characters must be percent-encoded. It may be required depending on the Data Product.
- base_path: pmstats
- # Optional. How often, in hours, the sink should refresh its ADLS token. Defaults to 1.
- adls_token_cache_period_hours: 1
- # Optional. The maximum number of blobs that can be uploaded to ADLS in parallel. Further blobs will be queued in memory until an upload completes. Defaults to 10.
- # Note: This value is also the maximum number of concurrent SFTP reads for the associated source. Ensure your SFTP server can handle this many concurrent connections. If you set this to a value greater than 10 and are using an OpenSSH server, you may need to increase `MaxSessions` and/or `MaxStartups` in `sshd_config`.
- maximum_parallel_uploads: 10
- # Optional. The maximum size of each block that is uploaded to Azure.
- # Each blob is composed of one or more blocks. Defaults to 32MiB (=33554432 Bytes).
- block_size_in_bytes : 33554432
-```
+ # Source configuration. This specifies which files are ingested from the SFTP server.
+ # Multiple sources can be defined here (where they can reference different folders on the same SFTP server). Each source must have a unique identifier where any URL reserved characters in source_id must be percent-encoded.
+ # A sink must be configured for each source.
+ - source_id: sftp-source01
+ source:
+ sftp:
+ # The IP address or hostname of the SFTP server.
+ host: 192.0.2.0
+ # Optional. The port to connect to on the SFTP server. Defaults to 22.
+ port: 22
+ # The path to a folder on the SFTP server that files will be uploaded to Azure Operator Insights from.
+ base_path: /path/to/sftp/folder
+ # The path on the VM to the 'known_hosts' file for the SFTP server.  This file must be in SSH format and contain details of any public SSH keys used by the SFTP server. This is required by the agent to verify it is connecting to the correct SFTP server.
+ known_hosts_file: /path/to/known_hosts
+ # The name of the user on the SFTP server which the agent will use to connect.
+ user: sftp-user
+ auth:
+ # The name of the secret provider configured above which contains the secret for the SFTP user.
+ secret_provider: local_file_system
+ # The form of authentication to the SFTP server. This can take the values 'password' or 'ssh_key'. The appropriate field(s) must be configured below depending on which type is specified.
+ type: password
+ # Only for use with 'type: password'. The name of the file containing the password in the secrets_directory folder
+ secret_name: sftp-user-password
+ # Only for use with 'type: ssh_key'. The name of the file containing the SSH key in the secrets_directory folder
+ key_secret: sftp-user-ssh-key
+ # Optional. Only for use with 'type: ssh_key'. The passphrase for the SSH key. This can be omitted if the key is not protected by a passphrase.
+ passphrase_secret_name: sftp-user-ssh-key-passphrase
+ # Optional. A regular expression to specify which files in the base_path folder should be ingested. If not specified, the STFP agent will attempt to ingest all files in the base_path folder (subject to exclude_pattern, settling_time_secs and exclude_before_time).
+ include_pattern: "*\.csv$"
+ # Optional. A regular expression to specify any files in the base_path folder which should not be ingested. Takes priority over include_pattern, so files which match both regular expressions will not be ingested.
+ exclude_pattern: '\.backup$'
+ # A duration in seconds. During an upload run, any files last modified within the settling time are not selected for upload, as they may still be being modified.
+ settling_time_secs: 60
+ # A datetime that adheres to the RFC 3339 format. Any files last modified before this datetime will be ignored.
+ exclude_before_time: "2022-12-31T21:07:14-05:00"
+ # An expression in cron format, specifying when upload runs are scheduled for this source. All times refer to UTC. The cron schedule should include fields for: second, minute, hour, day of month, month, day of week, and year. E.g.:
+ # `* /3 * * * * *` for once every 3 minutes
+ # `0 30 5 * * * *` for 05:30 every day
+ # `0 15 3 * * Fri,Sat *` for 03:15 every Friday and Saturday
+ schedule: "*/30 * * * Apr-Jul Fri,Sat,Sun 2025"
+ sink:
+ auth:
+ type: sas_token
+ # This must reference a secret provider configured above.
+ secret_provider: data_product_keyvault
+ # The name of a secret in the corresponding provider.
+ # This will be the name of a secret in the Key Vault.
+ # This is created by the Data Product and should not be changed.
+ secret_name: adls-sas-token
+ # The container within the ingestion account. This *must* be exactly the name of the container that Azure Operator Insights expects.
+ container_name: example-container
+ # Optional. A string giving an optional base path to use in Azure Blob Storage. Reserved URL characters must be percent-encoded. It may be required depending on the Data Product.
+ base_path: pmstats
+ # Optional. How often, in hours, the sink should refresh its ADLS token. Defaults to 1.
+ adls_token_cache_period_hours: 1
+ # Optional. The maximum number of blobs that can be uploaded to ADLS in parallel. Further blobs will be queued in memory until an upload completes. Defaults to 10.
+ # Note: This value is also the maximum number of concurrent SFTP reads for the associated source. Ensure your SFTP server can handle this many concurrent connections. If you set this to a value greater than 10 and are using an OpenSSH server, you may need to increase `MaxSessions` and/or `MaxStartups` in `sshd_config`.
+ maximum_parallel_uploads: 10
+ # Optional. The maximum size of each block that is uploaded to Azure.
+ # Each blob is composed of one or more blocks. Defaults to 32MiB (=33554432 Bytes).
+ block_size_in_bytes : 33554432
+ ```
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
Previously updated : 01/16/2024 Last updated : 2/1/2024 # Limits in Azure Database for PostgreSQL - Flexible Server
When using Azure Database for PostgreSQL flexible server for a busy database wit
- Server storage can only be scaled in 2x increments, see [Compute and Storage](concepts-compute-storage.md) for details. - Decreasing server storage size is currently not supported. The only way to do is [dump and restore](../howto-migrate-using-dump-and-restore.md) it to a new Azure Database for PostgreSQL flexible server instance.
-### Server version upgrades
--- Automated migration between major database engine versions is currently not supported. If you would like to upgrade to the next major version, take a [dump and restore](../howto-migrate-using-dump-and-restore.md) it to a server that was created with the new engine version.
-
### Storage - Once configured, storage size can't be reduced. You have to create a new server with desired storage size, perform manual [dump and restore](../howto-migrate-using-dump-and-restore.md) and migrate your database(s) to the new server.
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-version-policy.md
Previously updated : 1/30/2024- Last updated : 2/1/2024 # Azure Database for PostgreSQL - Flexible Server versioning policy
Azure Database for PostgreSQL flexible server supports the following database ve
| Version | Azure Database for PostgreSQL single server | Azure Database for PostgreSQL flexible server | | -- | :: | :-: |
+| PostgreSQL 16 | | X |
| PostgreSQL 15 | | X | | PostgreSQL 14 | | X | | PostgreSQL 13 | | X |
Azure Database for PostgreSQL flexible server automatically performs minor versi
The table below provides the retirement details for PostgreSQL major versions. The dates follow the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
-| Version | What's New | Azure support start date | Retirement date (Azure)|
-| - | - | | - |
-| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
-| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021 |
-| [PostgreSQL 10 (retired)](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022 |
-| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2024 |
-| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024 |
-| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025 |
-| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | June 29, 2022 | November 12, 2026 |
-| [PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | May 15, 2023 | November 11, 2027 |
-
-## PostgreSQL 11 support in Azure Database for PostgreSQL single server and Azure Database for PostgreSQL flexible server
-
-Azure is extending support for PostgreSQL 11 in Azure Database for PostgreSQL single server and Azure Database for PostgreSQL flexible server by one more year until **November 9, 2024**.
--- You will be able to create and use your PostgreSQL 11 servers until November 9, 2024 without any restrictions. This extended support is provided to help you with more time to plan and [migrate to Azure Database for PostgreSQL flexible server](../migrate/concepts-single-to-flexible.md) for higher PostgreSQL versions.-- Until November 9, 2023, Azure will continue to update your PostgreSQL 11 server with PostgreSQL community provided minor versions.-- Between November 9, 2023 and November 9, 2024, you can continue to use your PostgreSQL 11 servers and create new Azure Database for PostgreSQL flexible server instances without any restrictions. However, other retired PostgreSQL engine [restrictions](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) apply.-- Beyond Nov 9 2024, all retired PostgreSQL engine [restrictions](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) apply.
-
+|Version|What's New |Azure support start date|Retirement date (Azure) |
+|-|-|-|-|
+|[PostgreSQL 16](https://www.postgresql.org/about/news/postgresql-16-released-2715/)|[Features](https://www.postgresql.org/docs/16/release-16.html)|15-Oct-23 |9-Nov-28 |
+|[PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/)|[Features](https://www.postgresql.org/docs/15/release-15.html)|15-May-23 |11-Nov-27 |
+|[PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/)|[Features](https://www.postgresql.org/docs/14/release-14.html)|29-Jun-22 |12-Nov-26 |
+|[PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/)|[Features](https://www.postgresql.org/docs/13/release-13.html)|25-May-21 |13-Nov-25 |
+|[PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/)|[Features](https://www.postgresql.org/docs/12/release-12.html)|22-Sep-20 |14-Nov-24 |
+|[PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/)|[Features](https://www.postgresql.org/docs/11/release-11.html)|24-Jul-19 |9-Nov-25 |
+|[PostgreSQL 10 (retired)](https://www.postgresql.org/about/news/postgresql-10-released-1786/)|[Features](https://wiki.postgresql.org/wiki/New_in_postgres_10)|4-Jun-18 |10-Nov-22 |
+|[PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)|[Features](https://www.postgresql.org/docs/9.5/release-9-5.html)|18-Apr-18 |11-Feb-21 |
+|[PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/)|[Features](https://wiki.postgresql.org/wiki/NewIn96)|18-Apr-18 |11-Nov-21 |
+
+## PostgreSQL 11 support
+
+Azure is extending its support for PostgreSQL 11 within both the Azure Database for PostgreSQL Single Server and Azure Database for PostgreSQL Flexible Server platforms. This extended support timeline is designed to provide more time for users to plan and [migrate to Azure Database for PostgreSQL flexible server](../migrate/concepts-single-to-flexible.md) for higher PostgreSQL versions.
+
+### Single Server Support
+- Until March 28, 2025, users can continue to create and utilize PostgreSQL 11 servers on the Azure Database for PostgreSQL Single Server, except for creation through the Azure portal. It's important to note that other [restrictions](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) associated with retired PostgreSQL engines still apply.
+- Azure will offer updates incorporating minor versions provided by the PostgreSQL community for PostgreSQL 11 servers until November 9, 2023.
+
+### Flexible Server Support
+- Users can create and operate PostgreSQL 11 servers on Azure Database for PostgreSQL Flexible Server until November 9, 2025.
+- Similar to the Single Server, updates with PostgreSQL community provided minor versions will be available for PostgreSQL 11 servers until November 9, 2023.
+- From November 9, 2023, to November 9, 2025, while users can continue using and creating new instances of PostgreSQL 11 on the Flexible Server, they will be subject to the [restrictions](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) of other retired PostgreSQL engines.
+
+This extension of Postgres 11 support is part of Azure's commitment to providing a seamless migration path and ensuring continued functionality for users.
+ ## Retired PostgreSQL engine versions not supported in Azure Database for PostgreSQL flexible server You might continue to run the retired version in Azure Database for PostgreSQL flexible server. However, note the following restrictions after the retirement date for each PostgreSQL database version:-- As the community won't be releasing any further bug fixes or security fixes, Azure Database for PostgreSQL flexible server won't patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. You might experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
+- As the community won't be releasing any further bug fixes or security fixes, Azure Database for PostgreSQL flexible server won't patch the retired database engine for any bugs or security issues, or otherwise take security measures regarding the retired database engine. You might experience security vulnerabilities or other issues as a result. However, Azure continues to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
- If any support issue you might experience relates to the PostgreSQL engine itself, as the community no longer provides the patches, we might not be able to provide you with support. In such cases, you have to upgrade your database to one of the supported versions. - You won't be able to create new database servers for the retired version. However, you'll be able to perform point-in-time recoveries and create read replicas for your existing servers. - New service capabilities developed by Azure Database for PostgreSQL flexible server might only be available to supported database server versions.-- Uptime SLAs will apply solely to Azure Database for PostgreSQL flexible server service-related issues and not to any downtime caused by database engine-related bugs. -- In the extreme event of a serious threat to the service caused by the PostgreSQL database engine vulnerability identified in the retired database version, Azure might choose to stop your database server to secure the service. In such case, you'll be notified to upgrade the server before bringing the server online.
+- Uptime SLAs apply solely to Azure Database for PostgreSQL flexible server service-related issues and not to any downtime caused by database engine-related bugs.
+- In the extreme event of a serious threat to the service caused by a PostgreSQL database engine vulnerability identified in the retired database version, Azure might choose to stop your database server to secure the service. In such a case, you're notified to upgrade the server before bringing the server online.
- New extensions introduced for Azure Database for PostgreSQL flexible server aren't supported on community-retired PostgreSQL versions.
private-link How To Approve Private Link Cross Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/how-to-approve-private-link-cross-subscription.md
Last updated 01/11/2024
-#customer intent: As a network administrator, I want to approve Private Link connections across Azure subscriptions.
+# Customer intent: As a network administrator, I want to approve Private Link connections across Azure subscriptions.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
>| Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.windows.net | web.core.windows.net | >| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.windows.net | dfs.core.windows.net | >| Azure File Sync (Microsoft.StorageSync/storageSyncServices) | afs | privatelink.afs.azure.net | afs.azure.net |
->| Azure Managed Disks (Microsoft.Compute/diskAccesses) | disks | privatelink.blob.core.windows.net | privatelink.blob.core.windows.net |
+>| Azure Managed Disks (Microsoft.Compute/diskAccesses) | disks | privatelink.blob.core.windows.net | blob.core.windows.net |
### Web
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- February 01, 2024: Added guidance for [SAP front-end printing to Universal Print](./universal-print-sap-frontend.md).
- January 24, 2024: Split [SAP RISE integration documentation](./rise-integration.md) into multiple segments for improved legibility, additional overview information added. - January 22, 2024: Changes in all high availability documentation to include guidelines for setting the "probeThreshold" property to 2 in the load balancer's health probe configuration. - January 21, 2024: Change recommendations around LARGEPAGES in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
sap Universal Print Sap Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/universal-print-sap-frontend.md
+
+ Title: SAP front-end printing with Universal Print
+description: Enabling SAP front-end printing with Universal Print
+++
+tags: azure-resource-manager
+++
+ vm-linux
+ Last updated : 01/31/2024++++
+# SAP front-end printing with Universal Print
+
+Printing from your SAP landscape is a requirement for many customers. Depending on your business, printing needs arise in different areas and SAP applications, for example data list printing, mass printing, or label printing. Such production and batch print scenarios are often solved with specialized hardware, drivers, and printing solutions. This article addresses options to use [Universal Print](/universal-print/fundamentals/universal-print-whatis) for front-end printing by SAP users.
+
+Universal Print is a cloud-based print solution that enables organizations to manage printers and printer drivers in a centralized manner. It removes the need for dedicated print servers and is available for use by company employees and applications. While Universal Print runs entirely on Microsoft Azure, there's no such requirement for the SAP systems that use it. Your SAP landscape can run on Azure, be located on-premises, or operate in any other cloud environment. You can use SAP systems deployed by SAP RISE. Similarly, browser-based SAP cloud services can be used with Universal Print in most front-end printing scenarios.
+
+## Prerequisites
+
+[SAP front-end printing](https://help.sap.com/docs/SAP_NETWEAVER_750/290ce8983cbc4848a9d7b6f5e77491b9/4e96bc2a7e9e40fee10000000a421937.html) sends output to a printer available to the user on their front-end device. In other words, it uses a printer accessible by the operating system of the same client computer that runs SAP GUI or a browser. To use Universal Print, you need access to one or more such printers:
+
+- A client OS with support for Universal Print
+- A Universal Print printer added to your Windows client
+- The ability to print to the Universal Print printer from the OS
+
+See the [Universal Print documentation](/universal-print/fundamentals/universal-print-getting-started#step-4-add-a-universal-print-printer-to-a-windows-device) for details on these prerequisites. As a result, one or more Universal Print printers are visible in your device's printer list. For SAP front-end printing, it's not necessary to make it your default printer.
+
+[![Example showing Universal Print printers in Windows 11 settings dialog.](./media/universtal-print-sap/frontend-os-printer.png)](./media/universtal-print-sap/frontend-os-printer.png#lightbox)
+
+## SAP web applications
+
+A web application such as SAP Fiori or SAP Web GUI is used to access and display SAP data. It doesn't matter whether you access the SAP system through an internal network or a public URL, or whether your SAP system is an ABAP or Java system or an SAP application running within SAP Business Technology Platform. All SAP application data displayed within a browser can be printed. The print job in Universal Print is created by the operating system and doesn't require any SAP configuration at all. There's no direct SAP integration or communication with Universal Print.
+
+![Diagram with connection between user's client device, Universal Print service and printer.](./media/universtal-print-sap/sap-frontend-to-universal-print-connection.png)
+
+## SAP GUI printing
+For SAP front-end printing, Universal Print relies on SAP GUI and [SAP printer access method G](https://help.sap.com/docs/SAP_NETWEAVER_750/290ce8983cbc4848a9d7b6f5e77491b9/4e740b270f6f34e1e10000000a42189e.html). Your SAP system likely has one or more SAP printers defined already for this purpose. One example is the SAP printer LOCL, defined in SAP transaction code SPAD.
+
+![Example dialog in SAP transaction SPAD entry screen.](./media/universtal-print-sap/frontend-sap-spad-1.png)
+
+![Example dialog in SAP transaction SPAD showing printer definition.](./media/universtal-print-sap/frontend-sap-spad-2.png)
+
+
+For Universal Print use, it's important that the access method (1) is set to 'G', as this method uses SAP GUI's integration into the operating system. For the host printer field (2), the value __DEFAULT calls the relevant default printer name. If you leave the option "No device selection at front end" (3) unchecked, you're prompted to select the printer from your OS printer list. With the option checked, print output goes directly to the OS default printer without extra user input.
+
+With such an SAP printer definition, SAP GUI uses the operating system's printer details. The operating system already knows about your added Universal Print printers. As with SAP web applications, there's no direct communication between the SAP system and the Universal Print APIs. There are no settings to configure in your SAP system beyond the available output device for front-end printing.
+
+When using SAP GUI for HTML and front-end printing, you can print to an SAP-defined printer, too. In the SAP system, you need a front-end printer with access method 'G' and a device type of PDF or a derivative. For more information, see [SAP's documentation](https://help.sap.com/docs/SAP_NETWEAVER_750/290ce8983cbc4848a9d7b6f5e77491b9/4e96c13b7e9e40fee10000000a421937.html). Such print output is displayed in the browser as a PDF from the SAP system. You then open the common OS printing dialog and select a Universal Print printer installed on your computer.
+
+## Limitations
+
+SAP defines front-end printing with several [constraints](https://help.sap.com/docs/SAP_NETWEAVER_750/290ce8983cbc4848a9d7b6f5e77491b9/4e96cd237e6240fde10000000a421937.html). It can't be used for background printing, nor should it be relied upon for production or mass printing. Verify that your SAP printer definition is correct, because printers with access method 'F' don't work correctly with current SAP releases. More details can be found in [SAP note 2028598 - Technical changes for front-end printing with access method F](https://me.sap.com/notes/2028598).
+++
+## Next steps
+Check out the documentation:
+
+- [SAP's print queue API](https://api.sap.com/api/API_CLOUD_PRINT_PULL_SRV/overview)
+- [Universal Print API](/graph/api/resources/print)
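As a hedged illustration of the Universal Print side, the Microsoft Graph cloud printing API can list the printers registered in your tenant. The access token and the `Printer.Read.All` permission are assumptions for this sketch, not steps required by the article:

```http
GET https://graph.microsoft.com/v1.0/print/printers
Authorization: Bearer {access-token}
```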
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-image-analysis.md
- ignite-2023 Previously updated : 06/24/2022 Last updated : 01/31/2024 # Image Analysis cognitive skill
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include all of the [generally available languages](../ai-services/computer-vision/language-support.md#image-analysis) of Azure AI Vision. |
+| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include a subset of the [generally available languages](../ai-services/computer-vision/language-support.md#image-analysis) of Azure AI Vision. When a language newly reaches general availability in the AI Vision service, there's an expected delay before it's fully integrated into this skill. |
| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood) or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. </li><li> *categories* - categorizes image content according to a [taxonomy](../ai-services/Computer-vision/Category-Taxonomy.md) defined by Azure AI services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age. </li><li>*objects* - detects various objects within an image, including the approximate location. </li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md). Refer to the [Azure AI Vision Image Analysis documentation](../ai-services/computer-vision/language-support.md#image-analysis) on which visual features are supported with each `defaultLanguageCode`.| | `details` | An array of strings indicating which domain-specific details to return. Valid visual feature types include: <ul><li>*celebrities* - identifies celebrities if detected in the image.</li><li>*landmarks* - identifies landmarks if detected in the image. </li></ul> |
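For context, the parameters above appear together in a skill definition. The following is a minimal sketch only; the context path, inputs, and output target names are illustrative assumptions rather than a complete skillset definition.

```json
{
  "@odata.type": "#Microsoft.Skills.Vision.ImageAnalysisSkill",
  "context": "/document/normalized_images/*",
  "defaultLanguageCode": "en",
  "visualFeatures": [ "tags", "description" ],
  "details": [ "celebrities", "landmarks" ],
  "inputs": [
    { "name": "image", "source": "/document/normalized_images/*" }
  ],
  "outputs": [
    { "name": "tags", "targetName": "imageTags" },
    { "name": "description", "targetName": "imageCaption" }
  ]
}
```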
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Last updated 09/27/2023
-# Relevance scoring for full text search (BM25)
+# Relevance in keyword search (BM25 scoring)
This article explains the BM25 relevance scoring algorithm used to compute search scores for [full text search](search-lucene-query-architecture.md). BM25 relevance is exclusive to full text search. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries aren't scored or ranked for relevance.
search Search Lucene Query Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-lucene-query-architecture.md
Last updated 10/09/2023
# Full text search in Azure AI Search
-Full text search is an approach in information retrieval that matches on plain text content stored in an index. For example, given a query string "hotels in San Diego on the beach", the search engine looks for content containing those terms. To make scans more efficient, query strings undergo lexical analysis: lower-casing all terms, removing stop words like "the", and reducing terms to primitive root forms. When matching terms are found, the search engine retrieves documents, ranks them in order of relevance, and returns the top results.
+Full text search is an approach in information retrieval that matches on plain text stored in an index. For example, given a query string "hotels in San Diego on the beach", the search engine looks for tokenized strings based on those terms. To make scans more efficient, query strings undergo lexical analysis: lower-casing all terms, removing stop words like "the", and reducing terms to primitive root forms. When matching terms are found, the search engine retrieves documents, ranks them in order of relevance, and returns the top results.
Query execution can be complex. This article is for developers who need a deeper understanding of how full text search works in Azure AI Search. For text queries, Azure AI Search seamlessly delivers expected results in most scenarios, but occasionally you might get a result that seems "off" somehow. In these situations, having a background in the four stages of Lucene query execution (query parsing, lexical analysis, document matching, scoring) can help you identify specific changes to query parameters or index configuration that produce the desired outcome.
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
All results are returned in plain text, including vectors in fields marked as `r
If you aren't sure whether your search index already has vector fields, look for:
-+ A non-empty `vectorSearch` property containing algorithms and other vector-related configurations embedded in the index schema.
++ A nonempty `vectorSearch` property containing algorithms and other vector-related configurations embedded in the index schema. + In the fields collection, look for fields of type `Collection(Edm.Single)` with a `dimensions` attribute, and a `vectorSearch` section in the index.
REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview)
In the following example, the vector is a representation of this query string: "what Azure services support full text search". The query targets the "contentVector" field. The actual vector has 1536 embeddings, so it's trimmed in this example for readability.
-In this API version, there's no pre-filter support or `vectorFilterMode` parameter. The filter criteria are applied after the search engine executes the vector query. The set of `"k"` nearest neighbors is retrieved, and then combined with the set of filtered results. As such, the value of `"k"` predetermines the surface over which the filter is applied. For `"k": 10`, the filter is applied to 10 most similar documents. For `"k": 100`, the filter iterates over 100 documents (assuming the index contains 100 documents that are sufficiently similar to the query).
+In this API version, there's no prefilter support or `vectorFilterMode` parameter. The filter criteria are applied after the search engine executes the vector query. The set of `"k"` nearest neighbors is retrieved, and then combined with the set of filtered results. As such, the value of `"k"` predetermines the surface over which the filter is applied. For `"k": 10`, the filter is applied to 10 most similar documents. For `"k": 100`, the filter iterates over 100 documents (assuming the index contains 100 documents that are sufficiently similar to the query).
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview
Search results are composed of "retrievable" fields from your search index. A re
+ All "retrievable" fields (a REST API default). + Fields explicitly listed in a "select" parameter on the query.
-The examples in this article used a "select" statement to specify text (non-vector) fields in the response.
+The examples in this article used a "select" statement to specify text (nonvector) fields in the response.
> [!NOTE] > Vectors aren't designed for readability, so avoid returning them in the response. Instead, choose non-vector fields that are representative of the search document. For example, if the query targets a "descriptionVector" field, return an equivalent text field if you have one ("description") in the response.
-### Number of results
+### Number of ranked results in a vector query response
-A query might match to any number of documents, as many as all of them if the search criteria are weak (for example "search=*" for a null query). Because it's seldom practical to return unbounded results, you should specify a maximum for the response:
+A vector query specifies the `k` parameter, which determines how many matches are returned in the results. The search engine always returns `k` matches. If `k` is larger than the number of documents in the index, then the number of documents determines the upper limit of what can be returned.
+
+If you're familiar with full text search, you know to expect zero results if the index doesn't contain a term or phrase. However, in vector search, the search operation is identifying nearest neighbors, and it will always return `k` results even if the nearest neighbors aren't that similar. So, it's possible to get results for nonsensical or off-topic queries, especially if you aren't using prompts to set boundaries. Less relevant results have a worse similarity score, but they're still the "nearest" vectors if there isn't anything closer. As such, a response with no meaningful results can still return `k` results, but each result's similarity score would be low.
+
+A [hybrid approach](hybrid-search-overview.md) that includes full text search can mitigate this problem. Another mitigation is to set a minimum threshold on the search score, but only if the query is a pure single vector query. Hybrid queries aren't conducive to minimum thresholds because the score ranges are much smaller and more volatile.
+
+Query parameters affecting result count include:
+ `"k": n` results for vector-only queries + `"top": n` results for hybrid queries that include a "search" parameter
Both "k" and "top" are optional. Unspecified, the default number of results in a
Ranking of results is computed by either:
-+ The similarity metric specified in the index `vectorSearch` section for a vector-only query. Valid values are `cosine` , `euclidean`, and `dotProduct`.
++ The similarity metric specified in the index `vectorSearch` section for a vector-only query. Valid values are `cosine`, `euclidean`, and `dotProduct`. + Reciprocal Rank Fusion (RRF) if there are multiple sets of search results. Azure OpenAI embedding models use cosine similarity, so if you're using Azure OpenAI embedding models, `cosine` is the recommended metric. Other supported ranking metrics include `euclidean` and `dotProduct`.
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Title: Vector search
-description: Describes concepts, scenarios, and availability of the vector search feature in Azure AI Search.
+description: Describes concepts, scenarios, and availability of vector capabilities in Azure AI Search.
Last updated 01/29/2024
-# Vector stores and vector search in Azure AI Search
+# Vectors in Azure AI Search
Vector search is an approach in information retrieval that stores numeric representations of content for search scenarios. Because the content is numeric rather than plain text, the search engine matches on vectors that are the most similar to the query, with no requirement for matching on exact terms.
-This article is a high-level introduction to vector support in Azure AI Search. It also explains integration with other Azure services and covers [terminology and concepts](#vector-search-concepts) related to vector search development.
+This article is a high-level introduction to vectors in Azure AI Search. It also explains integration with other Azure services and covers [terminology and concepts](#vector-search-concepts) related to vector search development.
We recommend this article for background, but if you'd rather get started, follow these steps:
In order to create effective embeddings for vector search, it's important to tak
### What is the embedding space?
-*Embedding space* is the corpus for vector queries. Within a search index, it's all of the vector fields populated with embeddings from the same embedding model. Machine learning models create the embedding space by mapping individual words, phrases, or documents (for natural language processing), images, or other forms of data into a representation comprised of a vector of real numbers representing a coordinate in a high-dimensional space. In this embedding space, similar items are located close together, and dissimilar items are located farther apart.
+*Embedding space* is the corpus for vector queries. Within a search index, an embedding space is all of the vector fields populated with embeddings from the same embedding model. Machine learning models create the embedding space by mapping individual words, phrases, or documents (for natural language processing), images, or other forms of data into a representation comprised of a vector of real numbers representing a coordinate in a high-dimensional space. In this embedding space, similar items are located close together, and dissimilar items are located farther apart.
For example, documents that talk about different species of dogs would be clustered close together in the embedding space. Documents about cats would be close together, but farther from the dogs cluster while still being in the neighborhood for animals. Dissimilar concepts such as cloud computing would be much farther away. In practice, these embedding spaces are abstract and don't have well-defined, human-interpretable meanings, but the core idea stays the same.
search Vector Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md
- ignite-2023 Previously updated : 10/24/2023 Last updated : 01/31/2024
-# Relevance and ranking in vector search
+# Relevance in vector search
-In vector query execution, the search engine looks for similar vectors to find the best candidates to return in search results. Depending on how you indexed the vector content, the search for relevant matches is either exhaustive, or constrained to near neighbors for faster processing. Once candidates are found, similarity metrics are used to score each result based on the strength of the match. This article explains the algorithms used to determine relevance and the similarity metrics used for scoring.
+In vector query execution, the search engine looks for similar vectors to find the best candidates to return in search results. Depending on how you indexed the vector content, the search for relevant matches is either exhaustive, or constrained to near neighbors for faster processing. Once candidates are found, similarity metrics are used to score each result based on the strength of the match.
-## Determine relevance in vector search
+This article explains the algorithms used to find relevant matches and the similarity metrics used for scoring. It also offers tips for improving relevance if search results don't meet expectations.
-The algorithms used in vector search are used to navigate the vector database and find matching vectors. Supported algorithms include exhaustive k-nearest neighbors (KNN) and Hierarchical Navigable Small World (HNSW).
+## Scope of a vector search
-Exhaustive KNN performs a brute-force search that enables users to search the entire vector space for matches that are most similar to the query. It does this by calculating the distances between all pairs of data points and finding the exact `k` nearest neighbors for a query point.
+Vector search algorithms include exhaustive k-nearest neighbors (KNN) and Hierarchical Navigable Small World (HNSW).
-HNSW is an algorithm used for efficient approximate nearest neighbor (ANN) search in high-dimensional spaces. It organizes data points into a hierarchical graph structure that enables fast neighbor queries by navigating through the graph while maintaining a balance between search accuracy and computational efficiency.
++ Exhaustive KNN performs a brute-force search that scans the entire vector space.
-Only fields marked as `searchable` in the index, or as `searchFields` in the query, are used for searching and scoring. Only fields marked as `retrievable`, or fields specified in `select` in the query, are returned in search results, along with their search score.
++ HNSW performs an [approximate nearest neighbor (ANN)](vector-search-overview.md#approximate-nearest-neighbors) search. +
+Only vector fields marked as `searchable` in the index, or as `searchFields` in the query, are used for searching and scoring.
### When to use exhaustive KNN
-This algorithm is intended for scenarios where high recall is of utmost importance, and users are willing to accept the trade-offs in search performance. Because it's computationally intensive, use exhaustive KNN for small to medium datasets, or when precision requirements outweigh query performance considerations.
+Exhaustive KNN calculates the distances between all pairs of data points and finds the exact `k` nearest neighbors for a query point. It's intended for scenarios where high recall is of utmost importance, and users are willing to accept the trade-offs in search performance. Because it's computationally intensive, use exhaustive KNN for small to medium datasets, or when precision requirements outweigh query performance considerations.
-Another use is to build a dataset to evaluate approximate nearest neighbor algorithm recall. Exhaustive KNN can be used to build the ground truth set of nearest neighbors.
+Another use case is to build a dataset to evaluate approximate nearest neighbor algorithm recall. Exhaustive KNN can be used to build the ground truth set of nearest neighbors.
Exhaustive KNN support is available through [2023-11-01 REST API](/rest/api/searchservice/search-service-api-versions#2023-11-01), [2023-10-01-Preview REST API](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview), and in Azure SDK client libraries that target either REST API version. ### When to use HNSW
-HNSW is recommended for most scenarios due to its efficiency when searching over larger data sets. Internally, HNSW creates extra data structures for faster search. However, you aren't locked into using them on every search. HNSW has several configuration parameters that can be tuned to achieve the throughput, latency, and recall objectives for your search application. For example, at query time, you can specify options for exhaustive search, even if the vector field is indexed for HNSW.
+During indexing, HNSW creates extra data structures for faster search, organizing data points into a hierarchical graph structure. HNSW has several configuration parameters that can be tuned to achieve the throughput, latency, and recall objectives for your search application. For example, at query time, you can specify options for exhaustive search, even if the vector field is indexed for HNSW.
+
+During query execution, HNSW enables fast neighbor queries by navigating through the graph. This approach strikes a balance between search accuracy and computational efficiency. HNSW is recommended for most scenarios due to its efficiency when searching over larger data sets.
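As a sketch of how the two algorithms can be declared side by side in an index definition (names and parameter values are illustrative, and the generally available 2023-11-01 schema is assumed):

```json
{
  "vectorSearch": {
    "algorithms": [
      {
        "name": "my-hnsw",
        "kind": "hnsw",
        "hnswParameters": { "metric": "cosine", "m": 4, "efConstruction": 400, "efSearch": 500 }
      },
      {
        "name": "my-exhaustive-knn",
        "kind": "exhaustiveKnn",
        "exhaustiveKnnParameters": { "metric": "cosine" }
      }
    ],
    "profiles": [
      { "name": "hnsw-profile", "algorithm": "my-hnsw" },
      { "name": "exhaustive-knn-profile", "algorithm": "my-exhaustive-knn" }
    ]
  }
}
```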
## How nearest neighbor search works
When vector fields are indexed for exhaustive KNN, the query executes against "a
### Creating the HNSW graph
-The goal of indexing a new vector into an HNSW graph is to add it to the graph structure in a manner that allows for efficient nearest neighbor search. The following steps summarize the process:
+During indexing, the search service constructs the HNSW graph. The goal of indexing a new vector into an HNSW graph is to add it to the graph structure in a manner that allows for efficient nearest neighbor search. The following steps summarize the process:
1. Initialization: Start with an empty HNSW graph, or the existing HNSW graph if it's not a new index.
The goal of indexing a new vector into an HNSW graph is to add it to the graph s
- Each node is connected to up to `m` neighbors that are nearby. This is the `m` parameter.
- - The number of data points that considered as candidate connections is governed by the `efConstruction` parameter. This dynamic list forms the set of closest points in the existing graph for the algorithm to consider. Higher `efConstruction` values result in more nodes being considered, which often leads to denser local neighborhoods for each vector.
+ - The number of data points considered as candidate connections is governed by the `efConstruction` parameter. This dynamic list forms the set of closest points in the existing graph for the algorithm to consider. Higher `efConstruction` values result in more nodes being considered, which often leads to denser local neighborhoods for each vector.
- These connections use the configured similarity `metric` to determine distance. Some connections are "long-distance" connections that connect across different hierarchical levels, creating shortcuts in the graph that enhance search efficiency. 1. Graph pruning and optimization: This can happen after indexing all vectors, and it improves navigability and efficiency of the HNSW graph.
-### Retrieving vectors with the HNSW algorithm
+### Navigating the HNSW graph at query time
-In the HNSW algorithm, a vector query search operation is executed by navigating through this hierarchical graph structure. The following summarize the steps in the process:
+A vector query navigates the hierarchical graph structure to scan for matches. The following steps summarize the process:
1. Initialization: The algorithm initiates the search at the top-level of the hierarchical graph. This entry point contains the set of vectors that serve as starting points for search.
The algorithm finds candidate vectors to evaluate similarity. To perform this ta
## Scores in a vector search results
-Whenever results are ranked, **`@search.score`** property contains the value used to order the results.
+Scores are calculated and assigned to each match, with the highest-scoring matches returned as the `k` results. The **`@search.score`** property contains the score. The following table shows the range within which a score falls.
| Search method | Parameter | Scoring metric | Range | ||--|-|-| | vector search | `@search.score` | Cosine | 0.333 - 1.00 |
-If you're using the `cosine` metric, it's important to note that the calculated `@search.score` isn't the cosine value between the query vector and the document vectors. Instead, Azure AI Search applies transformations such that the score function is monotonically decreasing, meaning score values will always decrease in value as the similarity becomes worse. This transformation ensures that search scores are usable for ranking purposes.
+For the `cosine` metric, it's important to note that the calculated `@search.score` isn't the cosine value between the query vector and the document vectors. Instead, Azure AI Search applies transformations such that the score function is monotonically decreasing, meaning score values always decrease as the similarity becomes worse. This transformation ensures that search scores are usable for ranking purposes.
There are some nuances with similarity scores:
double ScoreToSimilarity(double score)
Having the original cosine value can be useful in custom solutions that set up thresholds to trim results of low quality results.
-## Number of ranked results in a vector query response
+## Tips for relevance tuning
+
+If you aren't getting relevant results, experiment with changes to [query configuration](vector-search-how-to-query.md). There are no specific tuning features for vector queries, such as scoring profiles or field and term boosting:
+++ Experiment with [chunk size and overlap](vector-search-how-to-chunk-documents.md). Try increasing the chunk size and ensuring there's sufficient overlap to preserve context or continuity between chunks.+++ For HNSW, try different levels of `efConstruction` to change the internal composition of the proximity graph. The default is 400. The range is 100 to 1,000.
-A vector query specifies the `k` parameter, which determines how many nearest neighbors of the query vector should be found in vector space and returned in the results. If `k` is larger than the number of documents in the index, then the number of documents determines the upper limit of what can be returned.
++ Increase `k` results to feed more search results into a chat model, if you're using one.
-The search engine always returns `k` number of matches, as long as there are enough documents in the index. If you're familiar with full text search, you know to expect zero results if the index doesn't contain a term or phrase. However, in vector search, similarity is relative to the input query vector, not absolute. It's possible to get positive results for a nonsensical or off-topic query. Less relevant results have a worse similarity score, but they're still the "nearest" vectors if there isn't anything closer. As such, a response with no meaningful results can still return `k` results, but each result's similarity score would be low. A [hybrid approach](hybrid-search-overview.md) that includes full text search can mitigate this problem.
++ Try [hybrid queries](hybrid-search-how-to-query.md) with semantic ranking. In benchmark testing, this combination consistently produced the most relevant results. ## Next steps
search Vector Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-store.md
+
+ Title: Vector store database
+
+description: Describes concepts behind vector storage in Azure AI Search.
+++++
+ - ignite-2023
+ Last updated : 01/29/2024++
+# Vector storage in Azure AI Search
+
+Azure AI Search provides vector storage and configurations for [vector search](vector-search-overview.md) and [hybrid queries](hybrid-search-overview.md). Support is implemented at the field level, which means you can combine vector and nonvector fields in the same search corpus.
+
+Vectors are stored in a search index. Use the [Create Index REST API](/rest/api/searchservice/indexes/create-or-update) or an equivalent Azure SDK method to create the vector store.
+
+## Retrieval patterns
+
+In Azure AI Search, there are two patterns for working with the search engine's response. Your index schema should reflect your primary use case.
+++ Send the search results directly to the client app. In a direct response from the search engine, results are returned in a flattened row set, and you can choose which fields are included. It's expected that you would populate the vector store (search index) with nonvector content that's human readable so that you don't have to decode vectors for your response. The search engine matches on vectors, but returns nonvector values from the same search document.+++ Send the search results to a chat model and an orchestration layer that coordinates prompts and maintains chat history for a conversational approach.+
+In a chat solution, results are fed into prompt flows, and chat models like GPT and Text-Davinci use the search results, with or without their own training data, as grounding data for formulating the response. This approach is based on a [**Retrieval augmented generation (RAG)**](retrieval-augmented-generation-overview.md) architecture.
+
+## Basic schema for vectors
+
+An index schema for a vector store requires a name, a key field, one or more vector fields, and a vector configuration. Content fields are recommended for hybrid queries, or for returning human readable content that doesn't have to be decoded first. For more information about configuring a vector index, see [Create a vector store](vector-search-how-to-create-index.md).
+
+```json
+{
+ "name": "example-index",
+ "fields": [
+ { "name": "id", "type": "Edm.String", "searchable": false, "filterable": true, "retrievable": true, "key": true },
+ { "name": "content", "type": "Edm.String", "searchable": true, "retrievable": true, "analyzer": null },
+ { "name": "content_vector", "type": "Collection(Edm.Single)", "searchable": true, "filterable": false, "retrievable": true,
+ "dimensions": 1536, "vectorSearchProfile": null },
+ { "name": "metadata", "type": "Edm.String", "searchable": true, "filterable": false, "retrievable": true, "sortable": false, "facetable": false }
+ ],
+ "vectorSearch": {
+ "algorithms": [
+ {
+ "name": "default",
+ "kind": "hnsw",
+ "hnswParameters": {
+ "metric": "cosine",
+ "m": 4,
+ "efConstruction": 400,
+ "efSearch": 500
+ },
+ "exhaustiveKnnParameters": null
+ }
+ ],
+ "profiles": [],
+ "vectorizers": []
+ }
+}
+```
+
+The vector search algorithms specify the navigation structures used at query time. The structures are created during indexing, but used during queries.
+
+The content of your vector fields is determined by the [embedding step](vector-search-how-to-generate-embeddings.md) that vectorizes or encodes your content. If you use the same embedding model for all of your fields, you can [build vector queries](vector-search-how-to-query.md) that cover all of them.
+
+If you use search results as grounding data, where a chat model generates the answer to a query, design a schema that stores chunks of text. Data chunking is a requirement if source files are too large for the embedding model, but it's also efficient for chat if the original source files contain a variety of information.
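To populate a vector store like the `example-index` schema above, documents are pushed through the indexing endpoint. The following is a hedged sketch; the document values are placeholders, the vector is trimmed, and a real embedding would come from the embedding step with the full 1,536 dimensions.

```http
POST https://{{search-service-name}}.search.windows.net/indexes/example-index/docs/index?api-version=2023-11-01
Content-Type: application/json
api-key: {{admin-api-key}}

{
  "value": [
    {
      "@search.action": "mergeOrUpload",
      "id": "chunk-001",
      "content": "Azure AI Search supports full text, vector, and hybrid queries.",
      "content_vector": [ 0.018, -0.025, 0.093 ],
      "metadata": "{\"source\": \"overview.md\"}"
    }
  ]
}
```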
++
+## Next steps
+++ [Try the quickstart](search-get-started-vector.md)++ [Learn more about vector stores](vector-search-how-to-create-index.md)++ [Learn more about vector queries](vector-search-how-to-query.md)++ [Azure Cognitive Search and LangChain: A Seamless Integration for Enhanced Vector Search Capabilities](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/azure-cognitive-search-and-langchain-a-seamless-integration-for/ba-p/3901448)
security Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-overview.md
Finally, you can also use the Azure Storage Client Library for Java to perform c
#### Transparent Data Encryption
-[TDE](/sql/relational-databases/security/encryption/transparent-data-encryption-tde) is used to encrypt [SQL Server](https://www.microsoft.com/sql-server/sql-server-2016), [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview), and [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) data files in real time, using a Database Encryption Key (DEK), which is stored in the database boot record for availability during recovery.
+[TDE](/sql/relational-databases/security/encryption/transparent-data-encryption-tde) is used to encrypt [SQL Server](https://www.microsoft.com/sql-server), [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview), and [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) data files in real time, using a Database Encryption Key (DEK), which is stored in the database boot record for availability during recovery.
TDE protects data and log files, using AES and Triple Data Encryption Standard (3DES) encryption algorithms. Encryption of the database file is performed at the page level. The pages in an encrypted database are encrypted before they are written to disk and are decrypted when they're read into memory. TDE is now enabled by default on newly created Azure SQL databases.
sentinel Billing Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-monitor-costs.md
Usage
| where TimeGenerated > ago(32d) | where StartTime >= startofday(ago(31d)) and EndTime < startofday(now()) | where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) by Solution, DataType
+| summarize BillableDataGB = sum(Quantity) / 1000. by Solution, DataType
| extend Solution = iif(Solution == "SecurityInsights", "AzureSentinel", Solution) | sort by Solution asc, DataType asc ```
sentinel Connect Dns Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-dns-ama.md
To create filters:
:::image type="content" source="media/connect-dns-ama/windows-dns-ama-connector-create-filter.png" alt-text="Screenshot of creating a filter for the Windows D N S over A M A connector.":::
-1. To add complex filters, select **Add field to filter** and add the relevant field.
+1. Choose the values that you want to filter the field on from the values listed in the drop-down.
:::image type="content" source="media/connect-dns-ama/windows-dns-ama-connector-filter-fields.png" alt-text="Screenshot of adding fields to a filter for the Windows D N S over A M A connector.":::
-1. To add new filters, select **Add new filters**.
-1. To edit, or delete existing filters or fields, select the edit or delete icons in the table under the **Configuration** area. To add fields or filters, select **Add data collection filters** again.
-1. To save and deploy the filters to your connectors, select **Apply changes**.
+1. To add complex filters, select **Add exclude field to filter** and add the relevant field. See examples in the [Use advanced filters](#use-advanced-filters) section below.
+
+1. To add more new filters, select **Add new exclude filter**.
+
+1. When finished adding filters, select **Add**.
+
+1. Back on the main connector page, select **Apply changes** to save and deploy the filters to your connectors. To edit or delete existing filters or fields, select the edit or delete icons in the table under the **Configuration** area.
+
+1. To add fields or filters after your initial deployment, select **Add data collection filters** again.
### Set up the connector with the API
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
Microsoft Sentinel gives you a few different ways to [use threat intelligence fe
- Use one of many available integrated [threat intelligence platform (TIP) products](connect-threat-intelligence-tip.md). - [Connect to TAXII servers](connect-threat-intelligence-taxii.md) to take advantage of any STIX-compatible threat intelligence source. - Connect directly to the [Microsoft Defender Threat Intelligence](connect-mdti-data-connector.md) feed.-- Make use of any custom solutions that can communicate directly with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator).
+- Make use of any custom solutions that can communicate directly with the [Threat Intelligence Upload Indicators API](connect-threat-intelligence-upload-api.md).
- You can also connect to threat intelligence sources from playbooks, in order to enrich incidents with TI information that can help direct investigation and response actions. > [!TIP]
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## February 2024
+- [AWS and GCP data connectors now support Azure Government clouds](#aws-and-gcp-data-connectors-now-support-azure-government-clouds)
+- [Windows DNS Events via AMA connector now generally available (GA)](#windows-dns-events-via-ama-connector-now-generally-available-ga)
+ ### AWS and GCP data connectors now support Azure Government clouds Microsoft Sentinel data connectors for Amazon Web Services (AWS) and Google Cloud Platform (GCP) now include supporting configurations to ingest data into workspaces in Azure Government clouds.
The configurations for these connectors for Azure Government customers differs s
- [Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data](connect-aws.md) - [Ingest Google Cloud Platform log data into Microsoft Sentinel](connect-google-cloud-platform.md)
+### Windows DNS Events via AMA connector now generally available (GA)
+
+Windows DNS events can now be ingested into Microsoft Sentinel by using the Azure Monitor Agent with the now generally available data connector. This connector allows you to define Data Collection Rules (DCRs) and powerful, complex filters so that you ingest only the specific DNS records and fields you need.
+
+- For more information, see [Stream and filter data from Windows DNS servers with the AMA connector](connect-dns-ama.md).
+ ## January 2024 ### Reduce false positives for SAP systems with analytics rules
spring-apps Quickstart Provision Standard Consumption App Environment With Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-provision-standard-consumption-app-environment-with-virtual-network.md
Use the following steps to create an Azure Spring Apps instance in an Azure Cont
- Select the names for **Virtual network** and for **Infrastructure subnet** from the dropdown menus or use **Create new** as needed. - Set **Virtual IP** to **External**. You can set the value to **Internal** if you prefer to use only internal IP addresses available in the virtual network instead of a public static IP.
- :::image type="content" source="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/create-azure-container-apps-environment-virtual-network.png" alt-text="Screenshot of the Azure portal showing the Create Container Apps environment page with the Networking tab selected." lightbox="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/create-azure-container-apps-environment-virtual-network.png":::
- >[!NOTE] > The subnet associated with an Azure Container Apps environment requires a CIDR prefix of `/23` or higher.
spring-apps Quickstart Deploy Microservice Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-microservice-apps.md
description: Learn how to deploy microservice applications to Azure Spring Apps.
Previously updated : 01/10/2024 Last updated : 01/19/2023 zone_pivot_groups: spring-apps-tier-selection
The diagram shows the following architectural flows and relationships of the Pet
The Pet Clinic sample demonstrates the microservice architecture pattern. The following diagram shows the architecture of the PetClinic application on the Azure Spring Apps Standard plan. The diagram shows the following architectural flows and relationships of the Pet Clinic sample:
This article provides the following options for deploying to Azure Spring Apps:
- The **Azure portal** option is the easiest and the fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services. - The **Azure portal + Maven plugin** option is a more conventional way to create resources and deploy applications step by step. This option is suitable for Spring developers using Azure cloud services for the first time.
+- The **Azure CLI** option uses a powerful command line tool to manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services.
::: zone-end
This article provides the following options for deploying to Azure Spring Apps:
- (Optional) [Node.js](https://nodejs.org/en/download), version 16.20 or higher. - [Azure CLI](/cli/azure/install-azure-cli), version 2.45.0 or higher.
+### [Azure CLI](#tab/Azure-CLI-ent)
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.55.0 or higher.
+ ::: zone-end
The following sections describe how to validate the deployment.
### [Azure portal](#tab/Azure-portal-ent)
+### 5.1. Access the applications
+
+After the deployment finishes, you can find the Spring Cloud Gateway URL from the deployment outputs, as shown in the following screenshot:
++
+Open the gateway URL. The application should look similar to the following screenshot:
++
+### 5.2. Query the application logs
+
+After you browse each function of the Pet Clinic, the Log Analytics workspace collects logs of each application. You can check the logs by using custom queries, as shown in the following screenshot:
++
+### 5.3. Monitor the applications
+
+Application Insights monitors the application dependencies, as shown by the following application tracing map:
++
+You can find the Application Live View URL from the deployment outputs. Open the Application Live View URL to monitor application runtimes, as shown in the following screenshot:
+ ### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent) ### 5.1. Access the applications
-Using the endpoint assigned from Spring Cloud Gateway - for example, `
-https://<your-Azure-Spring-Apps-instance-name>-gateway-xxxxx.svc.azuremicroservices.io`. The application should look similar to the following screenshot:
+Use the endpoint assigned from Spring Cloud Gateway - for example, `https://<your-Azure-Spring-Apps-instance-name>-gateway-xxxxx.svc.azuremicroservices.io`. The application should look similar to the following screenshot:
:::image type="content" source="media/quickstart-deploy-microservice-apps/application-enterprise.png" alt-text="Screenshot of the PetClinic application running on the Azure Spring Apps Enterprise plan." lightbox="media/quickstart-deploy-microservice-apps/application-enterprise.png":::
Open the Application Live View URL exposed by the Developer Tools to monitor app
:::image type="content" source="media/quickstart-deploy-microservice-apps/application-live-view.png" alt-text="Screenshot of the Application Live View for the PetClinic application." lightbox="media/quickstart-deploy-microservice-apps/application-live-view.png":::
+### [Azure CLI](#tab/Azure-CLI-ent)
+
+### 5.1. Access the applications
+
+Use the following commands to retrieve the URL for Spring Cloud Gateway:
+
+```azurecli
+export GATEWAY_URL=$(az spring gateway show \
+ --service ${SPRING_APPS} \
+ --query properties.url \
+ --output tsv)
+echo "https://${GATEWAY_URL}"
+```
+
+The application should look similar to the following screenshot:
++
+### 5.2. Query the application logs
+
+After you browse each function of the Pet Clinic, the Log Analytics workspace collects logs of each application. You can check the logs by using custom queries, as shown in the following screenshot:
++
+### 5.3. Monitor the applications
+
+Application Insights monitors the application dependencies, as shown by the following application tracing map:
++
+Use the following commands to retrieve the URL for Application Live View:
+
+```azurecli
+export DEV_TOOL_URL=$(az spring dev-tool show \
+ --service ${SPRING_APPS} \
+ --query properties.url \
+ --output tsv)
+echo "https://${DEV_TOOL_URL}/app-live-view"
+```
+
+Open the Application Live View URL to monitor application runtimes, as shown in the following screenshot:
++ ::: zone-end
Open the Application Live View URL exposed by the Developer Tools to monitor app
Be sure to delete the resources you created in this article when you no longer need them. You can delete the Azure resource group, which includes all the resources in the resource group.
-Use the following steps to delete the entire resource group, including the newly created service instance:
+### [Azure portal](#tab/Azure-portal-ent)
+
+Use the following steps to delete the entire resource group:
-1. Locate your resource group in the Azure portal. On the navigation menu, select **Resource groups** and then select the name of your resource group.
+1. Locate your resource group in the Azure portal. On the navigation menu, select **Resource groups**, and then select the name of your resource group.
1. On the **Resource group** page, select **Delete**. Enter the name of your resource group in the text box to confirm deletion, then select **Delete**.
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+Use the following steps to delete the entire resource group:
+
+1. Locate your resource group in the Azure portal. On the navigation menu, select **Resource groups**, and then select the name of your resource group.
+
+1. On the **Resource group** page, select **Delete**. Enter the name of your resource group in the text box to confirm deletion, then select **Delete**.
+
+### [Azure CLI](#tab/Azure-CLI-ent)
+
+Use the following command to delete the resource group:
+
+```azurecli
+az group delete --name ${RESOURCE_GROUP}
+```
+++ ::: zone-end ## 7. Next steps
static-web-apps Assign Roles Microsoft Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/assign-roles-microsoft-graph.md
There's a function named *GetRoles* in the app's API. This function uses the use
| `rolesSource` | The URL where the login process gets a list of available roles. For the sample application the URL is `/api/GetRoles`. | | `userDetailsClaim` | The URL of the schema used to validate the login request. | | `openIdIssuer` | The Microsoft Entra login route, appended with your tenant ID. |
- | `clientIdSettingName` | Your Microsoft Entra tenant ID. |
+ | `clientIdSettingName` | Your Microsoft Entra client ID. |
| `clientSecretSettingName` | Your Microsoft Entra client secret value. | | `loginParameters` | To obtain an access token for Microsoft Graph, the `loginParameters` field must be configured with `resource=https://graph.microsoft.com`. |
static-web-apps Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/troubleshooting.md
If you see one of the following error messages in the error log, it's an indicat
| Error message | Description | | | |
-|App Directory Location: '/*folder*' is invalid. Could not detect this directory. | Verify your workflow reflects your repository structure. |
+|App Directory Location: '/*folder*' is invalid. Couldn't detect this directory. | Verify your workflow reflects your repository structure. |
| The app build failed to produce artifact folder: '*folder*'. | Ensure the `folder` property is configured correctly in your workflow file. |
-| Either no Api directory was specified, or the specified directory was not found. | Azure Functions isn't created, as the workflow doesn't define a value for the `api` folder. |
+| Either no API directory was specified, or the specified directory wasn't found. | Azure Functions isn't created, as the workflow doesn't define a value for the `api` folder. |
There are three folder locations specified in the workflow. Ensure these settings match both your project and any tools that transform your source code before deployment.
There are three folder locations specified in the workflow. Ensure these setting
| `api_location` |The root location of your Azure Functions application hosted by Azure Static Web Apps. This points to the root folder of all Azure Functions for your project, typically *api*. | > [!NOTE]
-> Error messages generated by an incorrect `api_location` configuration may still build successfully, as Azure Static Web Apps does not require serverless code.
+> Error messages generated by an incorrect `api_location` configuration may still build successfully, as Azure Static Web Apps doesn't require serverless code.
## Review server errors
-Use [Application Insights](../azure-monitor/app/app-insights-overview.md) to find runtime error messages. If you do not already have an instance created, refer to [Monitoring Azure Static Web Apps](monitor.md). Application Insights logs the full error message and stack trace generated by each error.
+Use [Application Insights](../azure-monitor/app/app-insights-overview.md) to find runtime error messages. If you don't already have an instance created, refer to [Monitoring Azure Static Web Apps](monitor.md). Application Insights logs the full error message and stack trace generated by each error.
> [!NOTE] > You can only view error messages that are generated after Application Insights is installed.
Use the following steps to add a new variable.
1. Set the **Value**. 1. Select **OK**. 1. Select **Save**.+
+## Review diagnostic reports
+The [diagnose and solve](diagnostics-overview.md) feature can guide you through steps to troubleshoot problems.
storage Azcopy Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/azcopy-cost-estimation.md
For each blob, AzCopy uses the [Get Blob Properties](/rest/api/storageservices/g
| Price of a single other operations (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 | | **Cost to get blob properties (2000 * operation price)** | **$0.00088** | **$0.00088** | **$0.00104** | | Price of a single write operation (price / 10,000) | $0.0000055 | $0.00001 | $0.000018 |
-| **Cost to write (1000 * operation price)** | **$3.53** | **$0.0055** | **$0.01** |
-| **Total cost (listing + properties + write)** | **$3.5309** | **$0.0064** | **$0.0110** |
+| **Cost to write (1000 * operation price)** | **$0.0055** | **$0.01** | **$0.018** |
+| **Total cost (listing + properties + write)** | **$0.0064** | **$0.0109** | **$0.0190** |
### Cost of copying blobs to another account in the same region
The following table shows the operations that are used by each AzCopy command. T
| Command | Scenario | Operations | ||-|--|
-| [azcopy bench](../common/storage-ref-azcopy-bench.md?toc=/azure/storage/blobs/toc.json) | Upload | [Put Block](/rest/api/storageservices/put-block-list) and [Put Block List](/rest/api/storageservices/put-block-list) |
-| [azcopy bench](../common/storage-ref-azcopy-bench.md?toc=/azure/storage/blobs/toc.json) | Download | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
-| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Upload | [Put Block](/rest/api/storageservices/put-block-list) and [Put Block List](/rest/api/storageservices/put-block-list), [Get Blob Properties](/rest/api/storageservices/get-blob-properties) |
-| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Download | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
-| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Perform a dry run | [List Blobs](/rest/api/storageservices/list-blobs) |
-| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy from Amazon S3| [Put Blob from URL](/rest/api/storageservices/put-blob-from-url) |
-| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy from Google Cloud Storage | [Put Blob from URL](/rest/api/storageservices/put-blob-from-url) |
-| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy to another container | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Copy Blob](/rest/api/storageservices/copy-blob) |
-| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Update local with changes to container | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
-| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Update container with changes to local file system | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), [Put Block](/rest/api/storageservices/put-block-list), and [Put Block List](/rest/api/storageservices/put-block-list) |
-| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Synchronize containers | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Copy Blob](/rest/api/storageservices/copy-blob) |
-| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set blob tier | [Set Blob Tier](/rest/api/storageservices/set-blob-tier) |
-| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set metadata | [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) |
-| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set blob tags | [Set Blob Tags](/rest/api/storageservices/set-blob-tags) |
-| [azcopy list](../common/storage-ref-azcopy-list.md?toc=/azure/storage/blobs/toc.json) | List blobs in a container| [List Blobs](/rest/api/storageservices/list-blobs) |
-| [azcopy make](../common/storage-ref-azcopy-make.md?toc=/azure/storage/blobs/toc.json) | Create a container | [Create Container](/rest/api/storageservices/create-container) |
-| [azcopy remove](../common/storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json) | Delete a container | [Delete Container](/rest/api/storageservices/delete-container) |
-| [azcopy remove](../common/storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json) | Delete a blob | [Delete Blob](/rest/api/storageservices/delete-blob) |
+| [azcopy bench](../common/storage-ref-azcopy-bench.md?toc=/azure/storage/blobs/toc.json) | Upload | [Put Block](/rest/api/storageservices/put-block-list) and [Put Block List](/rest/api/storageservices/put-block-list). Possibly [Put Blob](/rest/api/storageservices/put-blob) based on object size.|
+| [azcopy bench](../common/storage-ref-azcopy-bench.md?toc=/azure/storage/blobs/toc.json) | Download |[List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Upload | [Put Block](/rest/api/storageservices/put-block-list), [Put Block List](/rest/api/storageservices/put-block-list), and [Get Blob Properties](/rest/api/storageservices/get-blob-properties). Possibly [Put Blob](/rest/api/storageservices/put-blob) based on object size. |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Download | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Perform a dry run | [List Blobs](/rest/api/storageservices/list-blobs) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy from Amazon S3|[Put Blob from URL](/rest/api/storageservices/put-blob-from-url). Based on object size, could also be [Put Block From URL](/rest/api/storageservices/put-block-from-url) and [Put Block List](/rest/api/storageservices/put-block-list). |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy from Google Cloud Storage |[Put Blob from URL](/rest/api/storageservices/put-blob-from-url). Based on object size, could also be [Put Block From URL](/rest/api/storageservices/put-block-from-url) and [Put Block List](/rest/api/storageservices/put-block-list). |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy to another container |[List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Put Blob From URL](/rest/api/storageservices/put-blob-from-url). Based on object size, could also be [Put Block From URL](/rest/api/storageservices/put-block-from-url) and [Put Block List](/rest/api/storageservices/put-block-list). |
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Update local with changes to container |[List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Update container with changes to local file system |[List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), [Put Block](/rest/api/storageservices/put-block-list), and [Put Block List](/rest/api/storageservices/put-block-list). Possibly [Put Blob](/rest/api/storageservices/put-blob) based on object size. |
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Synchronize containers |[List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Put Blob From URL](/rest/api/storageservices/put-blob-from-url). Based on object size, could also be [Put Block From URL](/rest/api/storageservices/put-block-from-url) and [Put Block List](/rest/api/storageservices/put-block-list). |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set blob tier |[Set Blob Tier](/rest/api/storageservices/set-blob-tier) and [List Blobs](/rest/api/storageservices/list-blobs) (if targeting a virtual directory) |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set metadata |[Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) and [List Blobs](/rest/api/storageservices/list-blobs) (if targeting a virtual directory) |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set blob tags |[Set Blob Tags](/rest/api/storageservices/set-blob-tags) and [List Blobs](/rest/api/storageservices/list-blobs) (if targeting a virtual directory) |
+| [azcopy list](../common/storage-ref-azcopy-list.md?toc=/azure/storage/blobs/toc.json) | List blobs in a container|[List Blobs](/rest/api/storageservices/list-blobs) |
+| [azcopy make](../common/storage-ref-azcopy-make.md?toc=/azure/storage/blobs/toc.json) | Create a container |[Create Container](/rest/api/storageservices/create-container) |
+| [azcopy remove](../common/storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json) | Delete a container |[Delete Container](/rest/api/storageservices/delete-container) |
+| [azcopy remove](../common/storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json) | Delete a blob |[Get Blob Properties](/rest/api/storageservices/get-blob-properties). [List Blobs](/rest/api/storageservices/list-blobs) (if targeting a virtual directory), and [Delete Blob](/rest/api/storageservices/delete-blob) |
### Commands that target the Data Lake Storage endpoint
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md
This article provides best practice guidelines that help you optimize performanc
For general suggestions around structuring a data lake, see these articles: -- [Overview of Azure Data Lake Storage for the data management and analytics scenario](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-overview?toc=/azure/storage/blobs/toc.json)-- [Provision three Azure Data Lake Storage Gen2 accounts for each data landing zone](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-services?toc=/azure/storage/blobs/toc.json)
+- [Overview of Azure Data Lake Storage for the data management and analytics scenario](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-overview?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)
+- [Provision three Azure Data Lake Storage Gen2 accounts for each data landing zone](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-services?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)
## Find documentation
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Here's a breakdown of the costs. To find the price of each cost component, see [
||Storage cost of the blob and each blob version<sup>1</sup>| ||Cost of network egress<sup>3</sup>| --
-<sup>1</sup> See [Blob versioning pricing and Billing](versioning-overview.md#pricing-and-billing).
+<sup>1</sup> On the source account, if you haven't changed a blob or version's tier, then you're billed for unique blocks of data across that blob, its versions. See [Blob versioning pricing and Billing](versioning-overview.md#pricing-and-billing). At the destination account, for a version, you're billed for all of the blocks of a version whether or not those blocks are unique.
<sup>2</sup> This includes only blob versions created since the last replication completed.
-<sup>3</sup> See [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
-
+<sup>3</sup> Object replication copies the whole version to destination (not just the unique blocks of the version). This transfer incurs the cost of network egress. See [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
+> [!TIP]
+> To reduce the risk of an unexpected bill, enable object replication in an account that contains only a small number of objects. Then, measure the impact on cost before you enable the feature in a production setting.
## Next steps
storage Storage Use Azcopy Optimize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-optimize.md
File scans on some Linux systems don't execute fast enough to saturate all of th
You can increase throughput by setting the `AZCOPY_CONCURRENCY_VALUE` environment variable. This variable specifies the number of concurrent requests that can occur.
-If your computer has fewer than 5 CPUs, then the value of this variable is set to `32`. Otherwise, the default value is equal to 16 multiplied by the number of CPUs. The maximum default value of this variable is `300`, but you can manually set this value higher or lower.
+If your computer has fewer than 5 CPUs, then the value of this variable is set to `32`. Otherwise, the default value is equal to 16 multiplied by the number of CPUs. The maximum default value of this variable is `3000`, but you can manually set this value higher or lower.
| Operating system | Command | |--|--|
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 01/29/2024 Last updated : 02/01/2024
Azure Files and Azure File Sync are updated regularly to offer new features and
### 2024 quarter 1 (January, February, March)
+#### Metadata caching for premium SMB file shares is in public preview
+
+Metadata caching is an enhancement for SMB Azure premium file shares aimed to reduce metadata latency, increase available IOPS, and boost network throughput. [Learn more](smb-performance.md#metadata-caching-for-premium-smb-file-shares).
+ #### Snapshot support for NFS Azure premium file shares is generally available Customers using NFS Azure file shares can now take point-in-time snapshots of file shares. This enables users to roll back their entire filesystem to a previous point in time, or restore specific files that were accidentally deleted or corrupted. Customers using this feature can perform share-level snapshot management operations via the Azure portal, REST API, Azure PowerShell, and Azure CLI. This feature is now available in all Azure public cloud regions except West US 2. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots).
storage Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/smb-performance.md
Title: SMB performance - Azure Files
-description: Learn about different ways to improve performance for SMB Azure file shares, including SMB Multichannel.
+description: Learn about different ways to improve performance for premium SMB Azure file shares, including SMB Multichannel and metadata caching.
Previously updated : 08/31/2023 Last updated : 02/01/2024 + # Improve SMB Azure file share performance
-This article explains how you can improve performance for SMB Azure file shares, including using SMB Multichannel.
+
+This article explains how you can improve performance for premium SMB Azure file shares, including using SMB Multichannel and metadata caching (preview).
## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
This article explains how you can improve performance for SMB Azure file shares,
The following tips might help you optimize performance: -- Ensure that your storage account and your client are colocated in the same Azure region to reduce network latency.
+- Ensure that your storage account and your client are co-located in the same Azure region to reduce network latency.
- Use multi-threaded applications and spread load across multiple files. - Performance benefits of SMB Multichannel increase with the number of files distributing load. - Premium share performance is bound by provisioned share size (IOPS/egress/ingress) and single file limits. For details, see [Understanding provisioning for premium file shares](understanding-billing.md#provisioned-model).
The following tips might help you optimize performance:
Higher I/O sizes drive higher throughput and will have higher latencies, resulting in a lower number of net IOPS. Smaller I/O sizes will drive higher IOPS, but will result in lower net throughput and latencies. To learn more, see [Understand Azure Files performance](understand-performance.md). ## SMB Multichannel+ SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account kind) for Windows clients. On the service side, SMB Multichannel is disabled by default in Azure Files, but there's no additional cost for enabling it. ### Benefits+ SMB Multichannel enables clients to use multiple network connections that provide increased performance while lowering the cost of ownership. Increased performance is achieved through bandwidth aggregation over multiple NICs and utilizing Receive Side Scaling (RSS) support for NICs to distribute the I/O load across multiple CPUs. - **Increased throughput**:
To learn more about SMB Multichannel, refer to the [Windows documentation](/azur
This feature provides greater performance benefits to multi-threaded applications but typically doesn't help single-threaded applications. See the [Performance comparison](#performance-comparison) section for more details. ### Limitations+ SMB Multichannel for Azure file shares currently has the following restrictions:+ - Only supported on Windows clients that are using SMB 3.1.1. Ensure SMB client operating systems are patched to recommended levels. - Not currently supported or recommended for Linux clients. - Maximum number of channels is four, for details see [here](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json#cause-4-number-of-smb-channels-exceeds-four). ### Configuration+ SMB Multichannel only works when the feature is enabled on both client-side (your client) and service-side (your Azure storage account).
-On Windows clients, SMB Multichannel is enabled by default. You can verify your configuration by running the following PowerShell command:
+On Windows clients, SMB Multichannel is enabled by default. You can verify your configuration by running the following PowerShell command:
```PowerShell Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel
Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel
On your Azure storage account, you'll need to enable SMB Multichannel. See [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel). ### Disable SMB Multichannel+ In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB Multichannel. However, for some specific scenarios such as single-threaded workloads or for testing purposes, you might want to disable SMB Multichannel. See [Performance comparison](#performance-comparison) for more details. ### Verify SMB Multichannel is configured correctly
In most scenarios, particularly multi-threaded workloads, clients should see imp
1. Mount a file share to your client. 1. Generate load with your application. A copy tool such as robocopy /MT, or any performance tool such as Diskspd to read/write files can generate load.
-1. Open PowerShell as an admin and use the following command:
+1. Open PowerShell as an admin and use the following command:
`Get-SmbMultichannelConnection |fl` 1. Look for **MaxChannels** and **CurrentChannels** properties.
The load was generated against a single 128 GiB file. With SMB Multichannel enab
- For smaller I/O sizes, there was a slight impact of ~10% on performance with SMB Multichannel enabled. This could be mitigated by spreading the load over multiple files, or disabling the feature. - Performance is still bound by [single file limits](storage-files-scale-targets.md#file-scale-targets).
+## Metadata caching for premium SMB file shares
+
+Metadata caching is an enhancement for SMB Azure premium file shares aimed to reduce metadata latency, increase available IOPS, and boost network throughput. This preview feature improves the following metadata APIs and can be used from both Windows and Linux clients:
+
+- Create
+- Open
+- Close
+- Delete
+
+To onboard, [sign up for the public preview](https://aka.ms/PremiumFilesMetadataCachingPreview) and we'll provide you with additional details. Currently this preview feature is only available for premium SMB file shares (file shares in the FileStorage storage account kind). There are no additional costs associated with using this feature.
+
+### Regional availability
+
+Currently the metadata caching preview is only available in the following Azure regions.
+
+- Australia East
+- Brazil South East
+- France South
+- Germany West Central
+- Switzerland North
+- UAE Central
+- UAE North
+- US West Central
+
+### Performance improvements with metadata caching
+
+Most workloads or usage patterns that contain metadata can benefit from metadata caching. To determine if your workload contains metadata, you can [use Azure Monitor](analyze-files-metrics.md#monitor-utilization) to split the transactions by API dimension.
+
+Typical metadata-heavy workloads and usage patterns include:
+
+- Web/app services
+- DevOps tasks
+- Indexing/batch jobs
+- Virtual desktops with home directories or other workloads that are primarily interacting with many small files, directories, or handles
+
+The following diagrams depict potential results.
+
+#### Reduce metadata latency
+
+By caching file and directory paths for future lookups, metadata caching can reduce latency on frequently accessed files and directories by 30% or more for metadata-heavy workloads at scale.
++
+#### Increase available IOPS
+
+Metadata caching can increase available IOPS by more than 60% for metadata-heavy workloads at scale.
++
+#### Increase network throughput
+
+Metadata caching can increase network throughput by more than 60% for metadata-heavy workloads at scale.
++ ## Next steps+ - [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel) - See the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel) for SMB Multichannel
update-manager Deploy Manage Updates Using Updates View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/deploy-manage-updates-using-updates-view.md
+
+ Title: Deploy and manage updates using Updates view (preview).
+description: This article describes how to view the updates pending for your environment and then deploy and manage them using the Updates (preview) option in Azure Update Manager.
+++ Last updated : 01/18/2024+++
+# Deploy and manage updates using the Update view (preview)
++
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+
+This article describes how you can manage machines from an updates standpoint.
+
+The Updates blade (preview) allows you to manage machines from an updates viewpoint. It implies that you can see how many Linux and Windows updates are pending and the update applies to which machines. It also enables you to act on each of the pending updates. To view the latest pending updates on each of the machines, we recommend that you enable periodic assessment on all your machines. For more information, see [enable periodic assessment at scale using Policy](periodic-assessment-at-scale.md) or [enable using update settings](manage-update-settings.md).
+
+ :::image type="content" source="./media/deploy-manage-updates-using-updates-view/overview-pending-updates.png" alt-text="Screenshot that shows number of updates and the type of updates pending on your Windows and Linux machines." lightbox="./media/deploy-manage-updates-using-updates-view/overview-pending-updates.png":::
++
+## Classic use case
+
+This option is helpful when you discover a vulnerability and want to fix it by applying a specific update on all machines on which it was pending. For example, a vulnerability is discovered in software, which can potentially expose the customer's environment to risk like remote code extension. The Central IT team discovers this threat and want to secure their enterprise's environment by applying an update *abc* that would mitigate vulnerability. Using the Updates view, they can apply the update *abc* on all the impacted machines.
+
+ ## Summarized view
+
+In the **Overview** blade of Azure Update Manager, the Updates view provides a summary of pending updates. Select the individual updates to see a detailed view of each of the pending category of updates. Following is a screenshot that gives a summarized view of the pending updates on Windows and Linux machines.
+
+ :::image type="content" source="./media/deploy-manage-updates-using-updates-view/overview-pending-updates.png" alt-text="Screenshot that shows number of updates and the type of updates pending on your Windows and Linux machines." lightbox="./media/deploy-manage-updates-using-updates-view/overview-pending-updates.png":::
+
+## Updates list view
+
+You can use either the **Overview** blade or select the **Updates (preview)** blade that provides a list view of the updates pending in your environment. You can perform the following actions on this page:
+
+- Filter Windows and Linux updates by selecting the cards for each.
+- Filter updates by using the filter options at the top like **Resource group**, **Location**, **Resource type**, **Workloads**, **Update Classifications**
+- Edit columns, export data to csv or see the query powering this view using the options at the top.
+- Displays a ribbon at the top that shows the number of machines that don't have periodic assessment enabled on them and suggestion to enable periodic assessment on them.
+
+ > [!NOTE]
+ > We recommend you to enable periodic assessment to see the latest pending updates on the machines.
+
+ :::image type="content" source="./media/deploy-manage-updates-using-updates-view/updates-view.png" alt-text="Screenshot that shows the pending updates and various filter options from Updates." lightbox="./media/deploy-manage-updates-using-updates-view/updates-view.png":::
+
+- Select any row of the Machine(s) applicable column for a list view of all machines on which the update is applicable. Using this option, you can view all the machines on which the update is applicable and pending. You can trigger **One-time update** to install the update on demand or use the **Schedule updates** option to schedule update installation on a later date.
+
+ :::image type="content" source="./media/deploy-manage-updates-using-updates-view/schedule-updates-applicable-machines.png" alt-text="Screenshot that shows the machines for which updates are applicable and pending." lightbox="./media/deploy-manage-updates-using-updates-view/schedule-updates-applicable-machines.png":::
+
+- Multi-select updates from the **Updates** list view and perform **One-time updates** or **Schedule updates**.
+
+ :::image type="content" source="./media/deploy-manage-updates-using-updates-view/multi-updates-selection.png" alt-text="Screenshot that shows multi selection from list view." lightbox="./media/deploy-manage-updates-using-updates-view/multi-updates-selection.png":::
+
+1. **One-time update** - Allows you to install update(s) on the applicable machines on demand and can take instant action about the pending update(s). For more information on how to use One-time update, see [how to deploy on demand updates](deploy-updates.md#).
+
+ :::image type="content" source="./media/deploy-manage-updates-using-updates-view/install-one-time-updates.png" alt-text="Screenshot that shows how to install one-time updates." lightbox="./media/deploy-manage-updates-using-updates-view/install-one-time-updates.png":::
++
+1. **Schedule updates** - Allows you to install updates later, you have to select a future date on when you would like to install the update(s) and specify an end date when the schedule should end. For more information on scheduled updates, see [how to schedule updates](scheduled-patching.md).
+
+ :::image type="content" source="./media/deploy-manage-updates-using-updates-view/schedule-updates.png" alt-text="Screenshot that shows how to schedule updates." lightbox="./media/deploy-manage-updates-using-updates-view/schedule-updates.png":::
++
+## Next steps
+
+* [View updates for single machine](view-updates.md)
+* [Deploy updates now (on-demand) for single machine](deploy-updates.md)
+* [Schedule recurring updates](scheduled-patching.md)
+* [Manage update settings via Portal](manage-update-settings.md)
+* [Manage multiple machines using update Manager](manage-multiple-machines.md)
update-manager Guidance Migration Automation Update Management Azure Update Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-automation-update-management-azure-update-manager.md
description: Guidance overview on migration from Automation Update Management to
Previously updated : 01/23/2024 Last updated : 02/01/2024
Guidance to move various capabilities is provided in table below:
7 | Customize workflows using pre and post scripts. | Available as Automation runbooks. | We recommend that you try out the Public Preview for pre and post scripts on your non-production machines and use the feature on production workloads once the feature enters General Availability. |[Manage pre and post events (preview)](manage-pre-post-events.md) | | 8 | Create alerts based on updates data for your environment | Alerts can be set up on updates data stored in Log Analytics. | We recommend that you try out the Public Preview for alerts on your non-production machines and use the feature on production workloads once the feature enters General Availability. |[Create alerts (preview)](manage-alerts.md) | |
-## Scripts to migrate from Automation Update Management to Azure Update Manager
+## Scripts to migrate from Automation Update Management to Azure Update Manager (preview)
Using migration runbooks, you can automatically migrate all workloads (machines and schedules) from Automation Update Management to Azure Update Manager. This section details on how to run the script, what the script does at the backend, expected behavior, and any limitations, if applicable. The script can migrate all the machines and schedules in one automation account at one go. If you have multiple automation accounts, you have to run the runbook for all the automation accounts.
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
description: Learn how to add session hosts virtual machines to a host pool in A
Previously updated : 11/16/2023 Last updated : 01/24/2024 # Add session hosts to a host pool > [!IMPORTANT]
-> Using Azure Stack HCI with Azure Virtual Desktop is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Virtual Desktop for Azure Stack HCI is currently in preview for Azure Government and Azure China. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Once you've created a host pool, workspace, and an application group, you need to add session hosts to the host pool for your users to connect to. You may also need to add more session hosts for extra capacity. You can create new virtual machines (VMs) to use as session hosts and add them to a host pool natively using the Azure Virtual Desktop service in the Azure portal. Alternatively you can also create VMs outside of the Azure Virtual Desktop service, such as with an automated pipeline, then add them as session hosts to a host pool. When using Azure CLI or Azure PowerShell you'll need to create the VMs outside of Azure Virtual Desktop, then add them as session hosts to a host pool separately.
-For Azure Stack HCI (preview), you can also create new VMs to use as session hosts and add them to a host pool natively using the Azure Virtual Desktop service in the Azure portal. Alternatively, if you want to create the VMs outside of the Azure Virtual Desktop service, see [Create Arc virtual machines on Azure Stack HCI](/azure-stack/hci/manage/create-arc-virtual-machines), then add them as session hosts to a host pool separately.
+For Azure Stack HCI, you can also create new VMs to use as session hosts and add them to a host pool natively using the Azure Virtual Desktop service in the Azure portal. Alternatively, if you want to create the VMs outside of the Azure Virtual Desktop service, see [Create Arc virtual machines on Azure Stack HCI](/azure-stack/hci/manage/create-arc-virtual-machines), then add them as session hosts to a host pool separately.
This article shows you how to generate a registration key using the Azure portal, Azure CLI, or Azure PowerShell, then how to add session hosts to a host pool using the Azure Virtual Desktop service or add them to a host pool separately.
Here's how to create session hosts and register them to a host pool using the Az
|--|--| | Resource group | This automatically defaults to the resource group you chose your host pool to be in on the *Basics* tab, but you can also select an alternative. | | Name prefix | Enter a name for your session hosts, for example **hp01-sh**.<br /><br />This value is used as the prefix for your session hosts. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. |
- | Virtual machine type | Select **Azure Stack HCI virtual machine (Preview)**. |
+ | Virtual machine type | Select **Azure Stack HCI virtual machine**. |
| Custom location | Select the Azure Stack HCI cluster where you want to deploy your session hosts from the drop-down list. | | Images | Select the OS image you want to use from the list, or select **Manage VM images** to manage the images available on the cluster you selected. | | Number of VMs | Enter the number of virtual machines you want to deploy. You can add more later. |
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
Title: Azure Virtual Desktop for Azure Stack HCI (preview)
-description: Learn about using Azure Virtual Desktop for Azure Stack HCI (preview) to deploy session hosts where you need them.
+ Title: Azure Virtual Desktop for Azure Stack HCI
+description: Learn about using Azure Virtual Desktop for Azure Stack HCI to deploy session hosts where you need them.
Previously updated : 11/06/2023 Last updated : 01/24/2024
-# Azure Virtual Desktop for Azure Stack HCI (preview)
+# Azure Virtual Desktop for Azure Stack HCI
> [!IMPORTANT]
-> Azure Virtual Desktop for Azure Stack HCI is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Virtual Desktop for Azure Stack HCI is currently in preview for Azure Government and Azure China. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-With Azure Virtual Desktop for Azure Stack HCI (preview), you can deploy session hosts for Azure Virtual Desktop where you need them. If you already have an existing on-premises virtual desktop infrastructure (VDI) deployment, Azure Virtual Desktop for Azure Stack HCI can improve your experience. If you're already using Azure Virtual Desktop on Azure, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs.
+With Azure Virtual Desktop for Azure Stack HCI, you can deploy session hosts for Azure Virtual Desktop where you need them. If you already have an existing on-premises virtual desktop infrastructure (VDI) deployment, Azure Virtual Desktop for Azure Stack HCI can improve your experience. If you're already using Azure Virtual Desktop on Azure, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs.
Azure Virtual Desktop for Azure Stack HCI isn't an Azure Arc-enabled service. As such, it's not supported as a standalone service outside of Azure, in a multicloud environment, or on Azure Arc-enabled servers besides Azure Stack HCI virtual machines as described in this article.
virtual-desktop Deploy Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md
Previously updated : 11/16/2023 Last updated : 01/24/2024 # Deploy Azure Virtual Desktop > [!IMPORTANT]
-> Using Azure Stack HCI with Azure Virtual Desktop is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Virtual Desktop for Azure Stack HCI is currently in preview for Azure Government and Azure China. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
This article shows you how to deploy Azure Virtual Desktop on Azure or Azure Stack HCI by using the Azure portal, Azure CLI, or Azure PowerShell. To deploy Azure Virtual Desktop you: - Create a host pool.
For more information on the terminology used in this article, see [Azure Virtual
## Prerequisites
-Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required and supported, such as operating systems (OS), virtual networks, and identity providers. It also includes a list of the [supported Azure regions](prerequisites.md#azure-regions) in which you can deploy host pools, workspaces, and application groups. This list of regions is where the *metadata* for the host pool can be stored. However, session hosts can be located in any Azure region, and on-premises with [Azure Stack HCI (preview)](azure-stack-hci-overview.md). For more information about the types of data and locations, see [Data locations for Azure Virtual Desktop](data-locations.md).
+Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required and supported, such as operating systems (OS), virtual networks, and identity providers. It also includes a list of the [supported Azure regions](prerequisites.md#azure-regions) in which you can deploy host pools, workspaces, and application groups. This list of regions is where the *metadata* for the host pool can be stored. However, session hosts can be located in any Azure region, and on-premises with [Azure Stack HCI](azure-stack-hci-overview.md). For more information about the types of data and locations, see [Data locations for Azure Virtual Desktop](data-locations.md).
Select the relevant tab for your scenario for more prerequisites.
Here's how to create a host pool using the Azure portal.
| Add virtual machines | Select **Yes**. This shows several new options. | | Resource group | This automatically defaults to the resource group you chose your host pool to be in on the *Basics* tab, but you can also select an alternative. | | Name prefix | Enter a name for your session hosts, for example **hp01-sh**.<br /><br />This value is used as the prefix for your session hosts. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. |
- | Virtual machine type | Select **Azure Stack HCI virtual machine (Preview)**. |
+ | Virtual machine type | Select **Azure Stack HCI virtual machine**. |
| Custom location | Select the Azure Stack HCI cluster where you want to deploy your session hosts from the drop-down list. | | Images | Select the OS image you want to use from the list, or select **Manage VM images** to manage the images available on the cluster you selected. | | Number of VMs | Enter the number of virtual machines you want to deploy. You can add more later. |
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Previously updated : 01/04/2023 Last updated : 02/01/2023 # What's new in Azure Virtual Desktop?
Make sure to check back here often to keep up with new updates.
> [!TIP] > See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop.
+## January 2024
+
+There were no major releases or new features in January 2024.
+ ## December 2023 Here's what changed in December 2023:
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
From this example accumulation of Minutes Not Available, here's the calculation
- UltraSSD storage - VMs resuming from hibernation - VMs requiring vnet encryption
+- Pinned subscription cannot use the feature
- Only the subscription that created the reservation can use it. - Reservations are only available to paid Azure customers. Sponsored accounts such as Free Trial and Azure for Students aren't eligible to use this feature.
virtual-machines Sizes Previous Gen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-previous-gen.md
- Title: Azure VM sizes - previous generations | Microsoft Docs
-description: Lists the previous generations of sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks and NICs as well as storage throughput and network bandwidth for sizes in this series.
----- Previously updated : 12/20/2022----
-# Previous generations of virtual machine sizes
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-> [!TIP]
-> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-
-This section provides information on previous generations of virtual machine sizes. These sizes can still be used, but there are newer generations available.
-
-## F-series
-
-F-series is based on the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, which can achieve clock speeds as high as 3.1 GHz with the Intel Turbo Boost Technology 2.0. This is the same CPU performance as the Dv2-series of VMs.
-
-F-series VMs are an excellent choice for workloads that demand faster CPUs but do not need as much memory or temporary storage per vCPU. Workloads such as analytics, gaming servers, web servers, and batch processing will benefit from the value of the F-series.
-
-ACU: 210 - 250
-
-Premium Storage: Not Supported
-
-Premium Storage caching: Not Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max data disks/throughput: IOPS | Max NICs/Expected network bandwidth (Mbps) |
-||||||||
-| Standard_F1 | 1 | 2 | 16 | 3000/46/23 | 4/4x500 | 2/750 |
-| Standard_F2 | 2 | 4 | 32 | 6000/93/46 | 8/8x500 | 2/1500 |
-| Standard_F4 | 4 | 8 | 64 | 12000/187/93 | 16/16x500 | 4/3000 |
-| Standard_F8 | 8 | 16 | 128 | 24000/375/187 | 32/32x500 | 8/6000 |
-| Standard_F16 | 16 | 32 | 256 | 48000/750/375 | 64/64x500 | 8/12000 |
-
-## Fs-series <sup>1</sup>
-
-The Fs-series provides all the advantages of the F-series, in addition to Premium storage.
-
-ACU: 210 - 250
-
-Premium Storage: Supported
-
-Premium Storage caching: Supported
-
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs/Expected network bandwidth (Mbps) |
-|||||||||
-| Standard_F1s | 1 | 2 | 4 | 4 | 4000/32 (12) | 3200/48 | 2/750 |
-| Standard_F2s | 2 | 4 | 8 | 8 | 8000/64 (24) | 6400/96 | 2/1500 |
-| Standard_F4s | 4 | 8 | 16 | 16 | 16000/128 (48) | 12800/192 | 4/3000 |
-| Standard_F8s | 8 | 16 | 32 | 32 | 32000/256 (96) | 25600/384 | 8/6000 |
-| Standard_F16s | 16 | 32 | 64 | 64 | 64000/512 (192) | 51200/768 | 8/12000 |
-
-MBps = 10^6 bytes per second, and GiB = 1024^3 bytes.
-
-<sup>1</sup> The maximum disk throughput (IOPS or MBps) possible with a Fs series VM may be limited by the number, size, and striping of the attached disk(s). For details, see [Design for high performance](premium-storage-performance.md).
--
-## NVv2-series
-
-**Newer size recommendation**: [NVv3-series](nvv3-series.md)
-
-The NVv2-series virtual machines are powered by [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPUs and NVIDIA GRID technology with Intel Broadwell CPUs. These virtual machines are targeted for GPU accelerated graphics applications and virtual desktops where customers want to visualize their data, simulate results to view, work on CAD, or render and stream content. Additionally, these virtual machines can run single precision workloads such as encoding and rendering. NVv2 virtual machines support Premium Storage and come with twice the system memory (RAM) when compared with its predecessor NV-series.
-
-Each GPU in NVv2 instances comes with a GRID license. This license gives you the flexibility to use an NV instance as a virtual workstation for a single user, or 25 concurrent users can connect to the VM for a virtual application scenario.
-
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max NICs | Virtual Workstations | Virtual Applications |
-|||||||||||
-| Standard_NV6s_v2 | 6 | 112 | 320 | 1 | 8 | 12 | 4 | 1 | 25 |
-| Standard_NV12s_v2 | 12 | 224 | 640 | 2 | 16 | 24 | 8 | 2 | 50 |
-| Standard_NV24s_v2 | 24 | 448 | 1280 | 4 | 32 | 32 | 8 | 4 | 100 |
-
-## Older generations of virtual machine sizes
-
-This section provides information on older generations of virtual machine sizes. These sizes are still supported but won't receive additional capacity. Newer or alternative sizes are generally available. Refer to [Sizes for virtual machines in Azure](./sizes.md) to choose the VM size that best fits your needs.
-
-For more information on resizing a Linux VM, see [Resize a VM](resize-vm.md).
-
-<br>
-
-### Basic A
-
-**Newer size recommendation**: [Av2-series](av2-series.md)
-
-Premium Storage: Not Supported
-
-Premium Storage caching: Not Supported
-
-The basic tier sizes are primarily for development workloads and other applications that don't require load balancing, auto-scaling, or memory-intensive virtual machines.
-
-| Size – Size\Name | vCPU | Memory | NICs (Max) | Max temporary disk size | Max. data disks (1023 GB each) | Max. IOPS (300 per disk) |
-||||||||
-| A0\Basic_A0 | 1 | 768 MB | 2 | 20 GB | 1 | 1x300 |
-| A1\Basic_A1 | 1 | 1.75 GB | 2 | 40 GB | 2 | 2x300 |
-| A2\Basic_A2 | 2 | 3.5 GB | 2 | 60 GB | 4 | 4x300 |
-| A3\Basic_A3 | 4 | 7 GB | 2 | 120 GB | 8 | 8x300 |
-| A4\Basic_A4 | 8 | 14 GB | 2 | 240 GB | 16 | 16x300 |
-
-<br>
-
-### Standard A0 - A4 using CLI and PowerShell
-
-In the classic deployment model, some VM size names are slightly different in CLI and PowerShell:
-
-* Standard_A0 is ExtraSmall
-* Standard_A1 is Small
-* Standard_A2 is Medium
-* Standard_A3 is Large
-* Standard_A4 is ExtraLarge
-
-### A-series
-
-**Newer size recommendation**: [Av2-series](av2-series.md)
-
-ACU: 50-100
-
-Premium Storage: Not Supported
-
-Premium Storage caching: Not Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (HDD): GiB | Max data disks | Max data disk throughput: IOPS | Max NICs/Expected network bandwidth (Mbps) |
-| | | | | | | |
-| Standard_A0&nbsp;<sup>1</sup> | 1 | 0.768 | 20 | 1 | 1x500 | 2/100 |
-| Standard_A1 | 1 | 1.75 | 70 | 2 | 2x500 | 2/500 |
-| Standard_A2 | 2 | 3.5 | 135 | 4 | 4x500 | 2/500 |
-| Standard_A3 | 4 | 7 | 285 | 8 | 8x500 | 2/1000 |
-| Standard_A4 | 8 | 14 | 605 | 16 | 16x500 | 4/2000 |
-| Standard_A5 | 2 | 14 | 135 | 4 | 4x500 | 2/500 |
-| Standard_A6 | 4 | 28 | 285 | 8 | 8x500 | 2/1000 |
-| Standard_A7 | 8 | 56 | 605 | 16 | 16x500 | 4/2000 |
-
-<sup>1</sup> The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer deployments may impact the performance of your running workload. The relative performance is outlined below as the expected baseline, subject to an approximate variability of 15 percent.
-
-<br>
-
-### A-series - compute-intensive instances
-
-**Newer size recommendation**: [Av2-series](av2-series.md)
-
-ACU: 225
-
-Premium Storage: Not Supported
-
-Premium Storage caching: Not Supported
-
-The A8-A11 and H-series sizes are also known as *compute-intensive instances*. The hardware that runs these sizes is designed and optimized for compute-intensive and network-intensive applications, including high-performance computing (HPC) cluster applications, modeling, and simulations. The A8-A11 series uses Intel Xeon E5-2670 @ 2.6 GHz and the H-series uses Intel Xeon E5-2667 v3 @ 3.2 GHz.
-
-| Size | vCPU | Memory: GiB | Temp storage (HDD): GiB | Max data disks | Max data disk throughput: IOPS | Max NICs|
-||||||||
-| Standard_A8&nbsp;<sup>1</sup> | 8 | 56 | 382 | 32 | 32x500 | 2 |
-| Standard_A9&nbsp;<sup>1</sup> | 16 | 112 | 382 | 64 | 64x500 | 4 |
-| Standard_A10 | 8 | 56 | 382 | 32 | 32x500 | 2 |
-| Standard_A11 | 16 | 112 | 382 | 64 | 64x500 | 4 |
-
-<sup>1</sup> For MPI applications, dedicated RDMA backend network is enabled by FDR InfiniBand network, which delivers ultra-low-latency and high bandwidth.
-
-> [!NOTE]
-> The [A8 – A11 VMs are planned for retirement on 3/2021](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/). We strongly recommend not creating any new A8 – A11 VMs. Migrate any existing A8 – A11 VMs to newer and more powerful high-performance computing VM sizes such as H, HB, HC, and HBv2, or to general purpose compute VM sizes such as D, E, and F, for better price-performance.
-> For more information, see [HPC Migration Guide](https://azure.microsoft.com/resources/hpc-migration-guide/).
-
-<br>
-
-### D-series
-
-**Newer size recommendation**: [Dav4-series](dav4-dasv4-series.md), [Dv4-series](dv4-dsv4-series.md) and [Ddv4-series](ddv4-ddsv4-series.md)
-
-ACU: 160-250 <sup>1</sup>
-
-Premium Storage: Not Supported
-
-Premium Storage caching: Not Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max data disks/throughput: IOPS | Max NICs/Expected network bandwidth (Mbps) |
-||||||||
-| Standard_D1 | 1 | 3.5 | 50 | 3000/46/23 | 4/4x500 | 2/500 |
-| Standard_D2 | 2 | 7 | 100 | 6000/93/46 | 8/8x500 | 2/1000 |
-| Standard_D3 | 4 | 14 | 200 | 12000/187/93 | 16/16x500 | 4/2000 |
-| Standard_D4 | 8 | 28 | 400 | 24000/375/187 | 32/32x500 | 8/4000 |
-
-<sup>1</sup> The VM family can run on one of the following CPUs: 2.2 GHz Intel Xeon® E5-2660 v2, 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell), or 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell)
-
-<br>
-
-### D-series - memory optimized
-
-**Newer size recommendation**: [Dav4-series](dav4-dasv4-series.md), [Dv4-series](dv4-dsv4-series.md) and [Ddv4-series](ddv4-ddsv4-series.md)
-
-ACU: 160-250 <sup>1</sup>
-
-Premium Storage: Not Supported
-
-Premium Storage caching: Not Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max data disks/throughput: IOPS | Max NICs/Expected network bandwidth (Mbps) |
-||||||||
-| Standard_D11 | 2 | 14 | 100 | 6000/93/46 | 8/8x500 | 2/1000 |
-| Standard_D12 | 4 | 28 | 200 | 12000/187/93 | 16/16x500 | 4/2000 |
-| Standard_D13 | 8 | 56 | 400 | 24000/375/187 | 32/32x500 | 8/4000 |
-| Standard_D14 | 16 | 112 | 800 | 48000/750/375 | 64/64x500 | 8/8000 |
-
-<sup>1</sup> The VM family can run on one of the following CPUs: 2.2 GHz Intel Xeon® E5-2660 v2, 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell), or 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell)
-
-<br>
-
-### Preview: DC-series
-
-**Newer size recommendation**: [DCsv2-series](dcv2-series.md)
-
-Premium Storage: Supported
-
-Premium Storage caching: Supported
-
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported
-
-The DC-series uses the 3.7 GHz Intel Xeon E-2176G processor with SGX technology. With Intel Turbo Boost Technology, it can reach clock speeds of up to 4.7 GHz.
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max NICs / Expected network bandwidth (Mbps) |
-|||-||-|-|-|-|
-| Standard_DC2s | 2 | 8 | 100 | 2 | 4000 / 32 (43) | 3200 /48 | 2 / 1500 |
-| Standard_DC4s | 4 | 16 | 200 | 4 | 8000 / 64 (86) | 6400 /96 | 2 / 3000 |
-
-> [!IMPORTANT]
->
-> DC-series VMs are [generation 2 VMs](./generation-2.md#creating-a-generation-2-vm) and only support `Gen2` images.
--
-### DS-series
-
-**Newer size recommendation**: [Dasv4-series](dav4-dasv4-series.md), [Dsv4-series](dv4-dsv4-series.md) and [Ddsv4-series](ddv4-ddsv4-series.md)
-
-ACU: 160-250 <sup>1</sup>
-
-Premium Storage: Supported
-
-Premium Storage caching: Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs/Expected network bandwidth (Mbps) |
-|||||||||
-| Standard_DS1 | 1 | 3.5 | 7 | 4 | 4000/32 (43) | 3200/32 | 2/500 |
-| Standard_DS2 | 2 | 7 | 14 | 8 | 8000/64 (86) | 6400/64 | 2/1000 |
-| Standard_DS3 | 4 | 14 | 28 | 16 | 16000/128 (172) | 12800/128 | 4/2000 |
-| Standard_DS4 | 8 | 28 | 56 | 32 | 32000/256 (344) | 25600/256 | 8/4000 |
-
-<sup>1</sup> The VM family can run on one of the following CPUs: 2.2 GHz Intel Xeon® E5-2660 v2, 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell), or 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell)
-
-<br>
-
-### DS-series - memory optimized
-
-**Newer size recommendation**: [Dasv4-series](dav4-dasv4-series.md), [Dsv4-series](dv4-dsv4-series.md) and [Ddsv4-series](ddv4-ddsv4-series.md)
-
-ACU: 160-250 <sup>1,2</sup>
-
-Premium Storage: Supported
-
-Premium Storage caching: Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs/Expected network bandwidth (Mbps) |
-|||||||||
-| Standard_DS11 | 2 | 14 | 28 | 8 | 8000/64 (72) | 6400/64 | 2/1000 |
-| Standard_DS12 | 4 | 28 | 56 | 16 | 16000/128 (144) | 12800/128 | 4/2000 |
-| Standard_DS13 | 8 | 56 | 112 | 32 | 32000/256 (288) | 25600/256 | 8/4000 |
-| Standard_DS14 | 16 | 112 | 224 | 64 | 64000/512 (576) | 51200/512 | 8/8000 |
-
-<sup>1</sup> The maximum disk throughput (IOPS or MBps) possible with a DS series VM may be limited by the number, size and striping of the attached disk(s). For details, see [Design for high performance](premium-storage-performance.md).
-<sup>2</sup> The VM family can run on one of the following CPUs: 2.2 GHz Intel Xeon® E5-2660 v2, 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell), or 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell)
-
-<br>
-
-### Ls-series
-
-**Newer size recommendation**: [Lsv2-series](lsv2-series.md)
-
-The Ls-series offers up to 32 vCPUs, using the [Intel® Xeon® processor E5 v3 family](https://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-solutions.html). The Ls-series gets the same CPU performance as the G/GS-Series and comes with 8 GiB of memory per vCPU.
-
-The Ls-series does not support the creation of a local cache to increase the IOPS achievable by durable data disks. The high throughput and IOPS of the local disk make Ls-series VMs ideal for NoSQL stores such as Apache Cassandra and MongoDB, which replicate data across multiple VMs to achieve persistence in the event of a single VM failure.
-
-ACU: 180-240
-
-Premium Storage: Supported
-
-Premium Storage caching: Not Supported
-
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported
-
-| Size | vCPU | Memory (GiB) | Temp storage (GiB) | Max data disks | Max temp storage throughput (IOPS/MBps) | Max uncached disk throughput (IOPS/MBps) | Max NICs/Expected network bandwidth (Mbps) |
-|||||||||
-| Standard_L4s | 4 | 32 | 678 | 16 | 20000/200 | 5000/125 | 2/4000 |
-| Standard_L8s | 8 | 64 | 1388 | 32 | 40000/400 | 10000/250 | 4/8000 |
-| Standard_L16s | 16 | 128 | 2807 | 64 | 80000/800 | 20000/500 | 8/16000 |
-| Standard_L32s&nbsp;<sup>1</sup> | 32 | 256 | 5630 | 64 | 160000/1600 | 40000/1000 | 8/20000 |
-
-The maximum disk throughput possible with Ls-series VMs may be limited by the number, size, and striping of any attached disks. For details, see [Design for high performance](premium-storage-performance.md).
-
-<sup>1</sup> Instance is isolated to hardware dedicated to a single customer.
-
-### GS-series
-
-**Newer size recommendation**: [Easv4-series](eav4-easv4-series.md), [Esv4-series](ev4-esv4-series.md), [Edsv4-series](edv4-edsv4-series.md) and [M-series](m-series.md)
-
-ACU: 180 - 240 <sup>1</sup>
-
-Premium Storage: Supported
-
-Premium Storage caching: Supported
-
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs/Expected network bandwidth (Mbps) |
-|||||||||
-| Standard_GS1 | 2 | 28 | 56 | 8 | 10000/100 (264) | 5000/ 125 | 2/2000 |
-| Standard_GS2 | 4 | 56 | 112 | 16 | 20000/200 (528) | 10000/ 250 | 2/4000 |
-| Standard_GS3 | 8 | 112 | 224 | 32 | 40000/400 (1056) | 20000/ 500 | 4/8000 |
-| Standard_GS4&nbsp;<sup>3</sup> | 16 | 224 | 448 | 64 | 80000/800 (2112) | 40000/1000 | 8/16000 |
-| Standard_GS5&nbsp;<sup>2,&nbsp;3</sup> | 32 | 448 |896 | 64 |160000/1600 (4224) | 80000/2000 | 8/20000 |
-
-<sup>1</sup> The maximum disk throughput (IOPS or MBps) possible with a GS series VM may be limited by the number, size and striping of the attached disk(s). For details, see [Design for high performance](premium-storage-performance.md).
-
-<sup>2</sup> Isolation feature retired on 2/28/2022. For information, see the [retirement announcement](https://azure.microsoft.com/updates/the-g5-and-gs5-azure-vms-will-no-longer-be-hardwareisolated-on-28-february-2022/).
-
-<sup>3</sup> Constrained core sizes available.
-
-<br>
-
-### G-series
-
-**Newer size recommendation**: [Eav4-series](eav4-easv4-series.md), [Ev4-series](ev4-esv4-series.md) and [Edv4-series](edv4-edsv4-series.md) and [M-series](m-series.md)
-
-ACU: 180 - 240
-
-Premium Storage: Not Supported
-
-Premium Storage caching: Not Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max temp storage throughput: IOPS/Read MBps/Write MBps | Max data disks/throughput: IOPS | Max NICs/Expected network bandwidth (Mbps) |
-||||||||
-| Standard_G1 | 2 | 28 | 384 | 6000/93/46 | 8/8x500 | 2/2000 |
-| Standard_G2 | 4 | 56 | 768 | 12000/187/93 | 16/16x500 | 2/4000 |
-| Standard_G3 | 8 | 112 | 1536 | 24000/375/187 | 32/32x500 | 4/8000 |
-| Standard_G4 | 16 | 224 | 3072 | 48000/750/375 | 64/64x500 | 8/16000 |
-| Standard_G5&nbsp;<sup>1</sup> | 32 | 448 | 6144 | 96000/1500/750| 64/64x500 | 8/20000 |
-
-<sup>1</sup> Isolation feature retired on 2/28/2022. For information, see the [retirement announcement](https://azure.microsoft.com/updates/the-g5-and-gs5-azure-vms-will-no-longer-be-hardwareisolated-on-28-february-2022/).
-<br>
-
-### NV-series
-**Newer size recommendation**: [NVv3-series](nvv3-series.md) and [NVv4-series](nvv4-series.md)
-
-The NV-series virtual machines are powered by [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPUs and NVIDIA GRID technology for GPU-accelerated applications and virtual desktops where customers want to visualize their data or simulations. Users can run graphics-intensive workflows on NV instances to get superior graphics capability, and can additionally run single-precision workloads such as encoding and rendering. NV-series VMs are also powered by Intel Xeon E5-2690 v3 (Haswell) CPUs.
-
-Each GPU in NV instances comes with a GRID license. This license gives you the flexibility to use an NV instance as a virtual workstation for a single user, or 25 concurrent users can connect to the VM for a virtual application scenario.
-
-Premium Storage: Not Supported
-
-Premium Storage caching: Not Supported
-
-Live Migration: Not Supported
-
-Memory Preserving Updates: Not Supported
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max NICs | Virtual Workstations | Virtual Applications |
-|||||||||||
-| Standard_NV6 | 6 | 56 | 340 | 1 | 8 | 24 | 1 | 1 | 25 |
-| Standard_NV12 | 12 | 112 | 680 | 2 | 16 | 48 | 2 | 2 | 50 |
-| Standard_NV24 | 24 | 224 | 1440 | 4 | 32 | 64 | 4 | 4 | 100 |
-
-1 GPU = one-half M60 card.
-<br>
-
-### NC series
-**Newer size recommendation**: [NC T4 v3-series](nct4-v3-series.md)
-
-NC-series VMs are powered by the [NVIDIA Tesla K80](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/Tesla-K80-BoardSpec-07317-001-v05.pdf) card and the Intel Xeon E5-2690 v3 (Haswell) processor. Users can crunch through data faster by leveraging CUDA for energy exploration applications, crash simulations, ray traced rendering, deep learning, and more. The NC24r configuration provides a low latency, high-throughput network interface optimized for tightly coupled parallel computing workloads.
-
-[Premium Storage](premium-storage-performance.md): Not Supported<br>
-[Premium Storage caching](premium-storage-performance.md): Not Supported<br>
-[Live Migration](maintenance-and-updates.md): Not Supported<br>
-[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br>
-[VM Generation Support](generation-2.md): Generation 1<br>
-<br>
-
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max NICs |
-|||||||||
-| Standard_NC6 | 6 | 56 | 340 | 1 | 12 | 24 | 1 |
-| Standard_NC12 | 12 | 112 | 680 | 2 | 24 | 48 | 2 |
-| Standard_NC24 | 24 | 224 | 1440 | 4 | 48 | 64 | 4 |
-| Standard_NC24r* | 24 | 224 | 1440 | 4 | 48 | 64 | 4 |
-
-1 GPU = one-half K80 card.
-
-*RDMA capable
--
-<br>
--
-### NCv2 series
-**Newer size recommendation**: [NC T4 v3-series](nct4-v3-series.md) and [NC V100 v3-series](ncv3-series.md)
-
-NCv2-series VMs are powered by NVIDIA Tesla P100 GPUs. These GPUs can provide more than 2x the computational performance of the NC-series. Customers can take advantage of these updated GPUs for traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others. In addition to the GPUs, the NCv2-series VMs are also powered by Intel Xeon E5-2690 v4 (Broadwell) CPUs.
-
-The NC24rs v2 configuration provides a low latency, high-throughput network interface optimized for tightly coupled parallel computing workloads.
-
-[Premium Storage](premium-storage-performance.md): Supported<br>
-[Premium Storage caching](premium-storage-performance.md): Supported<br>
-[Live Migration](maintenance-and-updates.md): Not Supported<br>
-[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br>
-[VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported<br>
-
-> For this VM series, the vCPU (core) quota in your subscription is initially set to 0 in each region. [Request a vCPU quota increase](../azure-portal/supportability/regional-quota-requests.md) for this series in an [available region](https://azure.microsoft.com/regions/services/).
->
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max NICs |
-||||||||||
-| Standard_NC6s_v2 | 6 | 112 | 736 | 1 | 16 | 12 | 20000/200 | 4 |
-| Standard_NC12s_v2 | 12 | 224 | 1474 | 2 | 32 | 24 | 40000/400 | 8 |
-| Standard_NC24s_v2 | 24 | 448 | 2948 | 4 | 64 | 32 | 80000/800 | 8 |
-| Standard_NC24rs_v2* | 24 | 448 | 2948 | 4 | 64 | 32 | 80000/800 | 8 |
-
-1 GPU = one P100 card.
-
-*RDMA capable
-
-<br>
-
-### ND series
-**Newer size recommendation**: [NDv2-series](ndv2-series.md) and [NC V100 v3-series](ncv3-series.md)
-
-The ND-series virtual machines are a new addition to the GPU family, designed for AI and deep learning workloads. They offer excellent performance for training and inference. ND instances are powered by [NVIDIA Tesla P40](https://images.nvidia.com/content/pdf/tesla/184427-Tesla-P40-Datasheet-NV-Final-Letter-Web.pdf) GPUs and Intel Xeon E5-2690 v4 (Broadwell) CPUs. These instances provide excellent performance for single-precision floating point operations and for AI workloads that use Microsoft Cognitive Toolkit, TensorFlow, Caffe, and other frameworks. The ND-series also offers a much larger GPU memory size (24 GB per GPU), enabling you to fit much larger neural network models. Like the NC-series, the ND-series offers a configuration with a secondary low-latency, high-throughput network through RDMA and InfiniBand connectivity, so you can run large-scale training jobs spanning many GPUs.
-
-[Premium Storage](premium-storage-performance.md): Supported<br>
-[Premium Storage caching](premium-storage-performance.md): Supported<br>
-[Live Migration](maintenance-and-updates.md): Not Supported<br>
-[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br>
-[VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported<br>
-
-> For this VM series, the vCPU (core) quota per region in your subscription is initially set to 0. [Request a vCPU quota increase](../azure-portal/supportability/regional-quota-requests.md) for this series in an [available region](https://azure.microsoft.com/regions/services/).
->
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max NICs |
-||||||||||
-| Standard_ND6s | 6 | 112 | 736 | 1 | 24 | 12 | 20000/200 | 4 |
-| Standard_ND12s | 12 | 224 | 1474 | 2 | 48 | 24 | 40000/400 | 8 |
-| Standard_ND24s | 24 | 448 | 2948 | 4 | 96 | 32 | 80000/800 | 8 |
-| Standard_ND24rs* | 24 | 448 | 2948 | 4 | 96 | 32 | 80000/800 | 8 |
-
-1 GPU = one P40 card.
-
-*RDMA capable
-
-<br>
-
-## Next steps
-
-Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Av1 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/av1-series-retirement.md
+
+ Title: Av1-series retirement
+description: Retirement information for the Av1 series virtual machine sizes. Before retirement, migrate your workloads to Av2-series virtual machines.
++++ Last updated : 06/08/2022++++
+# Av1-series retirement
+
+On August 31, 2024, we retire Basic and Standard A-series virtual machines (VMs). Before that date, migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
+After that date, any remaining VMs of these sizes in your subscription will be set to a deallocated state. These VMs will be stopped and removed from the host, and they'll no longer be billed while in the deallocated state.
+
+> [!NOTE]
+> In some cases, you must deallocate the VM prior to resizing. This can happen if the new size is not available on the hardware cluster that is currently hosting the VM.
+
+## Migrate workloads to Av2-series VMs
+
+You can resize your virtual machines to the Av2-series by using the [Azure portal, PowerShell, or the CLI](../resize-vm.md). The following examples show how to resize your VM by using the Azure portal and PowerShell.
+
+> [!IMPORTANT]
+> Resizing a virtual machine results in a restart. We recommend that you perform actions that result in a restart during off-peak business hours.
+
+### Azure portal
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. Type *virtual machines* in the search.
+1. Under **Services**, select **Virtual machines**.
+1. In the **Virtual machines** page, select the virtual machine you want to resize.
+1. In the left menu, select **size**.
+1. Pick a new Av2 size from the list of available sizes and select **Resize**.
+
+### Azure PowerShell
+
+1. Set the resource group and VM name variables. Replace the values with information of the VM you want to resize.
+
+ ```powershell
+ $resourceGroup = "myResourceGroup"
+ $vmName = "myVM"
+ ```
+
+1. List the VM sizes that are available on the hardware cluster where the VM is hosted.
+
+ ```powershell
+ Get-AzVMSize -ResourceGroupName $resourceGroup -VMName $vmName
+ ```
+
+1. Resize the VM to the new size.
+
+ ```powershell
+ $vm = Get-AzVM -ResourceGroupName $resourceGroup -VMName $vmName
+ $vm.HardwareProfile.VmSize = "<newAv2VMsize>"
+ Update-AzVM -VM $vm -ResourceGroupName $resourceGroup
+ ```
+
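+To confirm that the Av2 size you want is available on the VM's current hardware cluster, you can filter the output of `Get-AzVMSize` from step 2. The following is a minimal sketch that assumes the `$resourceGroup` and `$vmName` variables set earlier:
+
+```powershell
+# Show only the Av2 sizes that the VM's current hardware cluster supports
+Get-AzVMSize -ResourceGroupName $resourceGroup -VMName $vmName |
+    Where-Object { $_.Name -like "Standard_A*_v2" } |
+    Select-Object Name, NumberOfCores, MemoryInMB
+```
+
+If the size you want isn't listed, deallocate the VM first; `Get-AzVMSize -Location <region>` shows every size the region offers.
+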
+## Help and support
+
+If you have questions, ask community experts in [Microsoft Q&A](/answers/topics/azure-virtual-machines.html). If you have a support plan and need technical help, create a support request:
+
+1. In the [Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page, select **Create a support request**. Follow the **New support request** page instructions. Use the following values:
+ * For **Issue type**, select **Technical**.
+ * For **Service**, select **My services**.
+ * For **Service type**, select **Virtual Machine running Windows/Linux**.
+ * For **Resource**, select your VM.
+ * For **Problem type**, select **Assistance with resizing my VM**.
+ * For **Problem subtype**, select the option that applies to you.
+
+Follow instructions in the **Solutions** and **Details** tabs, as applicable, and then **Review + create**.
+
+## Next steps
+
+Learn more about the [Av2-series VMs](../../av2-series.md)
virtual-machines Dedicated Host Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/dedicated-host-migration-guide.md
+
+ Title: Azure Dedicated Host SKU Retirement Migration Guide
+description: Walkthrough on how to migrate a retiring Dedicated Host SKU
+++++ Last updated : 07/12/2023++
+# Azure Dedicated Host SKU Retirement Migration Guide
+
+As hardware ages, it must be retired, and workloads must be migrated to newer, faster, and more efficient Azure Dedicated Host SKUs. Migrate legacy Dedicated Host SKUs to the newer Dedicated Host SKUs recommended in this guide.
+The main differences between the retiring Dedicated Host SKUs and the newly recommended Dedicated Host SKUs are:
+
+- Newer, more efficient processors
+- Increased RAM
+- Increased available vCPUs
+- Greater regional capacity compared to the retiring Dedicated Host SKUs
+
+Review the [FAQs](dedicated-host-retirement.md#faqs) before you start the migration. The next section covers which Dedicated Host SKUs to migrate to, to help with migration planning and execution.
+
+## Host SKUs being retired
+
+Some Azure Dedicated Host SKUs will be retired soon. Refer to the [Azure Dedicated Host SKU Retirement](dedicated-host-retirement.md#faqs) documentation to learn more.
+
+### Dsv3-Type1 and Dsv3-Type2
+
+The Dsv3-Type1 and Dsv3-Type2 run Dsv3-series VMs, which offer a combination of vCPU, memory, and temporary storage best suited for most general-purpose workloads.
+We recommend migrating your existing VMs to one of the following Dedicated Host SKUs:
+
+- Dsv3-Type3
+- Dsv3-Type4
+
+Neither the Dsv3-Type3 nor the Dsv3-Type4 is impacted by the 30 June 2023 retirement date. We recommend moving to either the Dsv3-Type3 or Dsv3-Type4 based on regional availability, pricing, and your organization's needs.
+
+### Esv3-Type1 and Esv3-Type2
+
+The Esv3-Type1 and Esv3-Type2 run Esv3-series VMs, which offer a combination of vCPU, memory, and temporary storage best suited for most memory-intensive workloads.
+We recommend migrating your existing VMs to one of the following Dedicated Host SKUs:
+
+- Esv3-Type3
+- Esv3-Type4
+
+Neither the Esv3-Type3 nor the Esv3-Type4 is impacted by the 30 June 2023 retirement date. We recommend moving to either the Esv3-Type3 or Esv3-Type4 based on regional availability, pricing, and your organization's needs.
+
+## Migrating to supported hosts
+
+To migrate your workloads and avoid Dedicated Host SKU retirement, follow the directions for your migration method of choice.
+
+### Automatic migration (Resize)
++
+### Manual migration
+
+This includes steps for manually placed VMs, automatically placed VMs, and virtual machine scale sets on your Dedicated Hosts:
+
+#### [Manually Placed VMs](#tab/manualVM)
+
+1. Choose a target Dedicated Host SKU to migrate to.
+2. Ensure you have quota for the VM family associated with the target Dedicated Host SKU in your given region.
+3. Provision a new Dedicated Host of the target Dedicated Host SKU in the same Host Group.
+4. Stop and deallocate the VM(s) on your old Dedicated Host.
+5. Reassign the VM(s) to the target Dedicated Host.
+6. Start the VM(s).
+7. Delete the old host.
+
+#### [Automatically Placed VMs](#tab/autoVM)
+
+1. Choose a target Dedicated Host SKU to migrate to.
+2. Ensure you have quota for the VM family associated with the target Dedicated Host SKU in your given region.
+3. Provision a new Dedicated Host of the target Dedicated Host SKU in the same Host Group.
+4. Stop and deallocate the VM(s) on your old Dedicated Host.
+5. Delete the old Dedicated Host.
+6. Start the VM(s).
+
+#### [Virtual Machine Scale Sets](#tab/VMSS)
+
+1. Choose a target Dedicated Host SKU to migrate to.
+2. Ensure you have quota for the VM family associated with the target Dedicated Host SKU in your given region.
+3. Provision a new Dedicated Host of the target Dedicated Host SKU in the same Host Group.
+4. Stop the virtual machine scale set on your old Dedicated Host.
+5. Delete the old Dedicated Host.
+6. Start the virtual machine scale set.
+++
+More detailed instructions can be found in the following sections.
+
+> [!NOTE]
+> **Certain steps differ for automatically placed VMs and virtual machine scale sets**. These differences are explicitly called out in the respective steps.
+
+#### Ensure quota for the target VM family
+
+Be sure that you have enough vCPU quota for the VM family of the Dedicated Host SKU that you'll be using. If you need quota, follow this guide to [request an increase in vCPU quota](../../../azure-portal/supportability/per-vm-quota-requests.md) for your target VM family in your target region. Select the Dsv3-series or Esv3-series as the VM family, depending on the target Dedicated Host SKU.
+
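+As a quick way to compare your current usage with the limit, you can query the compute usage for the region, for example with `Get-AzVMUsage`. This is a minimal sketch; the region and the `DSv3` filter are placeholders for your own target region and VM family:
+
+```powershell
+# Compare current vCPU usage with the quota limit for the target VM family
+Get-AzVMUsage -Location "eastus" |
+    Where-Object { $_.Name.LocalizedValue -match "DSv3" } |
+    Select-Object @{ Name = 'Family'; Expression = { $_.Name.LocalizedValue } }, CurrentValue, Limit
+```
+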
+#### Create a new Dedicated Host
+
+Within the same Host Group as the existing Dedicated Host, [create a Dedicated Host](../../dedicated-hosts-how-to.md#create-a-dedicated-host) of the target Dedicated Host SKU.
+
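+The following is a minimal PowerShell sketch of this step, assuming the Az.Compute module; the resource group, host group, host name, location, and SKU values are placeholders:
+
+```powershell
+# Create a new Dedicated Host of the target SKU in the existing host group
+New-AzHost `
+    -ResourceGroupName "myResourceGroup" `
+    -HostGroupName "myHostGroup" `
+    -Name "myNewDedicatedHost" `
+    -Location "eastus" `
+    -Sku "DSv3-Type4"
+```
+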
+#### Stop the VM(s) or virtual machine scale set
+
+##### [PowerShell](#tab/PS)
+
+Refer to the PowerShell documentation to [stop a VM through PowerShell](/powershell/module/servicemanagement/azure/stop-azurevm) or [stop a virtual machine scale set through PowerShell](/powershell/module/az.compute/stop-azvmss).
+
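+For example, a minimal sketch using the Az.Compute cmdlets (the resource group, VM, and scale set names are placeholders):
+
+```powershell
+# Stop (deallocate) a single VM
+Stop-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" -Force
+
+# Stop (deallocate) a virtual machine scale set
+Stop-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -Force
+```
+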
+##### [CLI](#tab/CLI)
+
+Refer to the Command Line Interface (CLI) documentation to [stop a VM through CLI](/cli/azure/vm#az-vm-stop) or [stop a virtual machine scale set through CLI](/cli/azure/vmss#az-vmss-stop).
+
+##### [Portal](#tab/Portal)
+
+On Azure portal, go through the following steps:
+
+1. Navigate to your VM or virtual machine scale set.
+2. On the top navigation bar, select **Stop**.
+++
+#### Reassign the VM(s) to the target Dedicated Host
+
+>[!NOTE]
+> **Skip this step for automatically placed VMs and virtual machine scale set.**
+
+Once the target Dedicated Host has been created and the VM has been stopped, [reassign the VM to the target Dedicated Host](../../dedicated-hosts-how-to.md#add-an-existing-vm).
+
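+As a rough sketch of what the reassignment can look like with the Az.Compute cmdlets (the exact, supported steps are in the linked article; the `Host` property assignment below is an assumption about the VM object model, and all names are placeholders):
+
+```powershell
+# Assumed pattern: point the deallocated VM at the target Dedicated Host, then update it
+$targetHost = Get-AzHost -ResourceGroupName "myResourceGroup" -HostGroupName "myHostGroup" -Name "myNewDedicatedHost"
+$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
+$vm.Host = New-Object Microsoft.Azure.Management.Compute.Models.SubResource
+$vm.Host.Id = $targetHost.Id
+Update-AzVM -ResourceGroupName "myResourceGroup" -VM $vm
+```
+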
+#### Start the VM(s) or virtual machine scale set
+
+>[!NOTE]
+>**Automatically placed VM(s) and virtual machine scale set require that you delete the old host _before_ starting the autoplaced VM(s) or virtual machine scale set.**
+
+##### [PowerShell](#tab/PS)
+Refer to the PowerShell documentation to [start a VM through PowerShell](/powershell/module/servicemanagement/azure/start-azurevm) or [start a virtual machine scale set through PowerShell](/powershell/module/az.compute/start-azvmss).
+
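+For example, a minimal sketch mirroring the stop step (placeholder names):
+
+```powershell
+# Start a single VM on the target Dedicated Host
+Start-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
+
+# Start a virtual machine scale set
+Start-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"
+```
+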
+##### [CLI](#tab/CLI)
+
+Refer to the Command Line Interface (CLI) documentation to [start a VM through CLI](/cli/azure/vm#az-vm-start) or [start a virtual machine scale set through CLI](/cli/azure/vmss#az-vmss-start).
+
+##### [Portal](#tab/Portal)
+
+On Azure portal, go through the following steps:
+
+1. Navigate to your VM or virtual machine scale set.
+2. On the top navigation bar, select **Start**.
+++
+#### Delete the old Dedicated Host
+
+Once all VMs have been migrated from your old Dedicated Host to the target Dedicated Host, [delete the old Dedicated Host](../../dedicated-hosts-how-to.md#deleting-a-host).
+
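+A minimal sketch of this cleanup step with the Az.Compute module (placeholder names):
+
+```powershell
+# Delete the old Dedicated Host after all VMs have been moved off it
+Remove-AzHost -ResourceGroupName "myResourceGroup" -HostGroupName "myHostGroup" -Name "myOldDedicatedHost"
+```
+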
+## Help and support
+
+If you have questions, ask community experts in [Microsoft Q&A](/answers/topics/azure-dedicated-host.html).
virtual-machines Dedicated Host Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/dedicated-host-retirement.md
+
+ Title: Azure Dedicated Host SKU Retirement
+description: Azure Dedicated Host SKU Retirement landing page
+++++ Last updated : 3/15/2021++
+# Azure Dedicated Host SKU Retirement
+
+We continue to modernize and optimize Azure Dedicated Host by using the latest innovations in processor and datacenter technologies. Azure Dedicated Host is a combination of a virtual machine (VM) series and a specific Intel or AMD-based physical server. As we innovate and work with our technology partners, we also need to plan how we retire aging technology.
+
+## UPDATE: Retirement timeline extension
+Considering the feedback from several Azure Dedicated Host customers that are running their critical workloads on SKUs that are scheduled for retirement, we have extended the retirement timeline from March 31, 2023 to June 30, 2023.
+We don't intend to extend the retirement timeline any further, and we recommend that all Azure Dedicated Host users running any of the listed SKUs migrate to newer-generation SKUs to avoid workload disruption.
+
+## Migrations required by 30 June 2023 [Updated]
+
+All hardware has a finite lifespan, including the underlying hardware for Azure Dedicated Host. As we continue to modernize Azure datacenters, hardware is decommissioned and eventually retired. The hardware that runs the following Dedicated Host SKUs is reaching end of life:
+
+- Dsv3-Type1
+- Dsv3-Type2
+- Esv3-Type1
+- Esv3-Type2
+
+As a result, we'll retire these Dedicated Host SKUs on 30 June 2023.
+
+## How does the retirement of Azure Dedicated Host SKUs affect you?
+
+The current retirement impacts the following Azure Dedicated Host SKUs:
+
+- Dsv3-Type1
+- Esv3-Type1
+- Dsv3-Type2
+- Esv3-Type2
+
+Note: If you're running a Dsv3-Type3, Dsv3-Type4, an Esv3-Type3, or an Esv3-Type4 Dedicated Host, you are not impacted.
+
+## What actions should you take?
+
+For manually placed VMs, you need to create a Dedicated Host of a newer SKU, stop the VMs on your existing Dedicated Host, reassign them to the new host, start the VMs, and delete the old host. For automatically placed VMs or for Virtual Machine Scale Sets, you need to create a Dedicated Host of a newer SKU, stop the VMs or Virtual Machine Scale Set, delete the old host, and then start the VMs or Virtual Machine Scale Set.
+
+Refer to the [Azure Dedicated Host Migration Guide](dedicated-host-migration-guide.md) for more detailed instructions. We recommend moving to the latest generation of Dedicated Host for your VM family.
+
+If you have any questions, contact us through customer support.
+
+## FAQs
+
+### Q: Will migration result in downtime?
+
+A: Yes, you would have to stop/deallocate your VMs or Virtual Machine Scale Sets before moving them to the target host.
+
+### Q: When will the other Dedicated Host SKUs retire?
+
+A: We'll announce Dedicated Host SKU retirements 12 months in advance of the official retirement date of a given Dedicated Host SKU.
+
+### Q: What are the milestones for the Dsv3-Type1, Dsv3-Type2, Esv3-Type1, and Esv3-Type2 retirement?
+
+A:
+
+| Date | Action |
+| - | --|
+| 15 March 2022 | Dsv3-Type1, Dsv3-Type2, Esv3-Type1, Esv3-Type2 retirement announcement |
+| 30 June 2023 | Dsv3-Type1, Dsv3-Type2, Esv3-Type1, Esv3-Type2 retirement |
+
+### Q: What happens to my Azure Reservation?
+
+A: You need to [exchange your reservation](../../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md#how-to-exchange-or-refund-an-existing-reservation) through the Azure portal to match the new Dedicated Host SKU.
+
+### Q: What would happen to my host if I do not migrate by June 30, 2023?
+
+A: After June 30, 2023, any Dedicated Host running on one of the SKUs marked for retirement will be set to a 'Host Pending Deallocate' state before the host is eventually deallocated. For more assistance, reach out to Azure support.
+
+### Q: What will happen to my VMs if a Host is automatically deallocated?
+
+A: If the underlying host is deallocated, the VMs that were running on the host are deallocated but not deleted. You can then either create a new host (of the same VM family) and allocate the VMs on it, or run the VMs on multitenant infrastructure.
virtual-machines Hb Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/hb-series-retirement.md
+
+ Title: HB-series retirement
+description: HB-series retirement started September 1, 2021.
+++++ Last updated : 12/7/2023++
+# Migrate your HB-series virtual machines by August 31, 2024
+
+Microsoft Azure has introduced HBv2 and HBv3-series virtual machines (VMs) for high-performance computing (HPC). For this reason, we recommend that you migrate workloads from original HB-series VMs to our newer offerings.
+
+Azure [HBv2](../../hbv2-series.md) and [HBv3](../../hbv3-series.md) VMs have greater memory bandwidth, improved remote direct memory access (RDMA) networking capabilities, larger and faster local solid-state drives, and better cost and performance across various HPC workloads. As a result, we're retiring our HB-series Azure VM sizes on August 31, 2024.
+
+## How does the HB-series migration affect me?
+
+After August 31, 2024, any remaining HB-series VMs in your subscription will be set to a deallocated state. They'll stop working and no longer incur billing charges.
+
+> [!NOTE]
+> This VM size retirement only affects the VM sizes in the HB series. This retirement announcement doesn't apply to the newer HBv2, HBv3, and HC-series VMs.
+
+## What actions should I take?
+
+You'll need to resize or deallocate your HB-series VMs. We recommend that you migrate workloads from the original HB-series VMs, including the HB-series Promo VMs, to our newer offerings.
+
+[HBv2](../../hbv2-series.md) and [HBv3](../../hbv3-series.md) VMs offer substantially higher levels of HPC workload performance and cost efficiency because of:
+
+- Large improvements in CPU core architecture.
+- Higher memory bandwidth.
+- Larger L3 caches.
+- Enhanced InfiniBand networking as compared to HB series.
+
+As a result, HBv2 and HBv3-series VMs generally offer substantially better performance per unit of cost (maximizing performance for a fixed amount of spend) and better cost per unit of performance (minimizing cost for a fixed amount of performance).
+
+All regions that contain HB-series VMs contain HBv2 and HBv3-series VMs. Existing workloads that run on HB-series VMs can be migrated without concern for geographic placement or for access to more services in those regions.
+
+[HB-series](../../hb-series.md) VMs won't be retired until August 31, 2024. We're providing this guide in advance to give you a long window to assess, plan, and execute your migration.
+
+### Recommendations for workload migration from HB-series VMs
+
+| Current VM size | Target VM size | Difference in specification |
+||||
+|Standard_HB60rs |Standard_HB120rs_v2 <br> Standard_HB120rs_v3 <br> Standard_HB120-64rs_v3 |Newer CPU: AMD Rome and Milan (+20-30% IPC) <br> Memory: Up to 2x more RAM <br> Memory bandwidth: Up to 30% more memory bandwidth <br> InfiniBand: 200 Gb HDR (2x higher bandwidth) <br> Max data disks: Up to 32 (+8x) |
+|Standard_HB60-45rs |Standard_HB120-96rs_v3 <br> Standard_HB120-64rs_v3 <br> Standard_HB120-32rs_v3 |Newer CPU: AMD Rome and Milan (+20-30% IPC) <br> Memory: Up to 2x more RAM <br> Memory bandwidth: Up to 30% more memory bandwidth <br> InfiniBand: 200 Gb HDR (2x higher bandwidth) <br> Max data disks: Up to 32 (+8x) |
+|Standard_HB60-30rs |Standard_HB120-32rs_v3 <br> Standard_HB120-16rs_v3 |Newer CPU: AMD Rome and Milan (+20-30% IPC) <br> Memory: Up to 2x more RAM <br> Memory bandwidth: Up to 30% more memory bandwidth <br> InfiniBand: 200 Gb HDR (2x higher bandwidth) <br> Max data disks: Up to 32 (+8x) |
+|Standard_HB60-15rs |Standard_HB120-16rs_v3 |Newer CPU: AMD Rome and Milan (+20-30% IPC) <br> Memory: Up to 2x more RAM <br> Memory bandwidth: Up to 30% more memory bandwidth <br> InfiniBand: 200 Gb HDR (2x higher bandwidth) <br> Max data disks: Up to 32 (+8x) |
+
+### Migration steps
+
+1. Choose a series and size for migration.
+1. Get a quota for the target VM series.
+1. Resize the current HB-series VM size to the target size.
+
+### Get a quota for the target VM family
+
+Follow the guide to [request an increase in vCPU quota by VM family](../../../azure-portal/supportability/per-vm-quota-requests.md).
+
+### Resize the current VM
+
+You can [resize the virtual machine](../resize-vm.md).
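+
+For example, the following minimal PowerShell sketch resizes an HB-series VM to one of the recommended HBv3 sizes, using the same pattern shown in the resize article; the resource group, VM name, and target size are placeholders:
+
+```powershell
+# Resize an HB-series VM to a target HBv3 size
+$resourceGroup = "myResourceGroup"
+$vmName = "myHbVm"
+
+$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
+$vm.HardwareProfile.VmSize = "Standard_HB120rs_v3"
+Update-AzVM -ResourceGroupName $resourceGroup -VM $vm
+```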
virtual-machines Nc Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/nc-series-retirement.md
+
+ Title: NC-series retirement
+description: NC-series retirement by September 6, 2023
++++ Last updated : 12/20/2022++
+# Migrate your NC and NC_Promo series virtual machines by September 6, 2023
+Based on feedback we've received from customers, we're happy to announce that we're extending the retirement date by one year to 6 September 2023 for the Azure NC-series virtual machines, to give you more time to plan your migration.
+
+As we continue to bring modern and optimized virtual machine instances to Azure using the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
+With this planning in mind, we're retiring our NC (v1) GPU VM sizes, powered by NVIDIA Tesla K80 GPUs on 6 September 2023.
+
+## How does the NC-series migration affect me?
+
+After 6 September 2023, any NC-size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host, and they'll no longer be billed while in the deallocated state.
+
+This VM size retirement only impacts the VM sizes in the [NC-series](../../nc-series.md). This doesn't impact the newer [NCv3](../../ncv3-series.md), [NCasT4 v3](../../nct4-v3-series.md), and [NC A100 v4](../../nc-a100-v4-series.md) series virtual machines.
++
+## What actions should I take?
+You need to resize or deallocate your NC virtual machines. We recommend moving your GPU workloads to another GPU Virtual Machine size. Learn more about migrating your workloads to another [GPU Accelerated Virtual Machine size](../../sizes-gpu.md).
+
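+To find which VMs in your subscription still use a retiring NC (v1) size, you can filter on the VM size. The following is a minimal Azure PowerShell sketch:
+
+```powershell
+# List VMs that still use an original NC-series (v1) size
+Get-AzVM |
+    Where-Object { $_.HardwareProfile.VmSize -match '^Standard_NC(6|12|24r?)$' } |
+    Select-Object Name, ResourceGroupName, @{ Name = 'Size'; Expression = { $_.HardwareProfile.VmSize } }
+```
+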
+## Help and support
+
+If you have questions, ask community experts in [Microsoft Q&A](/answers/topics/azure-virtual-machines.html). If you have a support plan and need technical help, create a support request:
+
+1. In the [Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page, select **Create a support request**. Follow the **New support request** page instructions. Use the following values:
+ * For **Issue type**, select **Technical**.
+ * For **Service**, select **My services**.
+ * For **Service type**, select **Virtual Machine running Windows/Linux**.
+ * For **Resource**, select your VM.
+ * For **Problem type**, select **Assistance with resizing my VM**.
+ * For **Problem subtype**, select the option that applies to you.
+
+1. Follow instructions in the **Solutions** and **Details** tabs, as applicable, and then **Review + create**.
+
+## Next steps
+
+[Learn more](../../n-series-migration.md) about migrating your workloads to other GPU Azure Virtual Machine sizes.
+
+If you have questions, contact us through customer support.
virtual-machines Ncv2 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/ncv2-series-retirement.md
+
+ Title: NCv2-series retirement
+description: NCv2-series retirement by September 6, 2023
++++ Last updated : 11/21/2022++
+# Migrate your NCv2 series virtual machines by September 6, 2023
+We're happy to announce that we're extending the retirement date by one year to September 6, 2023 for the Azure NCv2-series virtual machines, to give you more time to plan your migration.
+
+As we continue to bring modern and optimized virtual machine instances to Azure using the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
+
+We are retiring our NC (v2) GPU VM sizes, powered by NVIDIA Tesla P100 GPUs on 6 September 2023.
+
+## How does the NCv2-series migration affect me?
+
+After 6 September 2023, any NCv2-size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host, and they'll no longer be billed while in the deallocated state.
+
+This VM size retirement only impacts the VM sizes in the [NCv2-series](../../ncv2-series.md). This doesn't impact the newer [NCv3](../../ncv3-series.md), [NCasT4 v3](../../nct4-v3-series.md), and [NC A100 v4](../../nc-a100-v4-series.md) series virtual machines.
+
+## What actions should I take?
+You need to resize or deallocate your NCv2 virtual machines. We recommend moving your GPU workloads to another GPU Virtual Machine size. Learn more about migrating your workloads to another [GPU Accelerated Virtual Machine size](../../sizes-gpu.md).
+
+## Help and support
+
+If you have questions, ask community experts in [Microsoft Q&A](/answers/topics/azure-virtual-machines.html). If you have a support plan and need technical help, create a support request:
+
+- In the [Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page, select **Create a support request**. Follow the **New support request** page instructions. Use the following values:
+ * For **Issue type**, select **Technical**.
+ * For **Service**, select **My services**.
+ * For **Service type**, select **Virtual Machine running Windows/Linux**.
+ * For **Resource**, select your VM.
+ * For **Problem type**, select **Assistance with resizing my VM**.
+ * For **Problem subtype**, select the option that applies to you.
+
+Follow instructions in the **Solutions** and **Details** tabs, as applicable, and then **Review + create**.
+## Next steps
+
+[Learn more](../../n-series-migration.md) about migrating your workloads to other GPU Azure Virtual Machine sizes.
+
+If you have questions, contact us through customer support.
virtual-machines Nd Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/nd-series-retirement.md
+
+ Title: ND-series retirement
+description: ND-series retirement by September 6, 2023
++++ Last updated : 02/27/2023++
+# Migrate your ND series virtual machines by September 6, 2023
+Based on feedback we've received from customers, we're happy to announce that we're extending the retirement date by one year to 6 September 2023 for the Azure ND-series virtual machines, to give you more time to plan your migration.
+
+As we continue to bring modern and optimized virtual machine instances to Azure leveraging the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
+With this in mind, we're retiring our ND GPU VM sizes, powered by NVIDIA Tesla P40 GPUs on 6 September 2023.
+
+## How does the ND-series migration affect me?
+
+After 6 September 2023, any ND-size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host, and they'll no longer be billed while in the deallocated state.
+
+This VM size retirement only impacts the VM sizes in the [ND-series](../../nd-series.md). This retirement doesn't impact the newer [NCv3](../../ncv3-series.md), [NC T4 v3](../../nct4-v3-series.md), and [ND v2](../../ndv2-series.md) series virtual machines.
+
+## What actions should I take?
+You'll need to resize or deallocate your ND virtual machines. We recommend moving your GPU workloads to another GPU Virtual Machine size. Learn more about migrating your workloads to another [GPU Accelerated Virtual Machine size](../../sizes-gpu.md).
+
+## Next steps
+[Learn more](../../n-series-migration.md) about migrating your workloads to other GPU Azure Virtual Machine sizes.
+
+If you have questions, contact us through customer support.
virtual-machines Nv Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/nv-series-retirement.md
+
+ Title: NV series retirement
+description: NV series retirement starting September 6, 2023
++++ Last updated : 02/27/2023++
+# Migrate your NV and NV_Promo series virtual machines by September 6, 2023
+We're happy to announce that we're extending the retirement date by one year to September 6, 2023 for the Azure NV-series and NV_Promo-series virtual machines, to give you more time to plan your migration.
+
+We continue to bring modern and optimized virtual machine (VM) instances to Azure by using the latest innovations in datacenter technologies. As we innovate, we also thoughtfully plan how we retire aging hardware. With this context in mind, we're retiring our NV-series Azure VM sizes on September 6, 2023.
+
+## How does the NV series migration affect me?
+
+After September 6, 2023, any NV and NV_Promo-size VMs remaining in your subscription will be set to a deallocated state. These VMs will be stopped and removed from the host, and they'll no longer be billed while in the deallocated state.
+
+The current VM size retirement only affects the VM sizes in the [NV series](../../nv-series.md). This retirement doesn't affect the [NVv3](../../nvv3-series.md) and [NVv4](../../nvv4-series.md) series VMs.
+
+## What actions should I take?
+
+You'll need to resize or deallocate your NV VMs. We recommend moving your GPU visualizations or graphics workloads to another [GPU accelerated VM size](../../sizes-gpu.md).
+
+[Learn more](../../nv-series-migration-guide.md) about migrating your workloads to other GPU Azure VM sizes.
+
+If you have questions, contact us through customer support.
virtual-machines Previous Gen Sizes List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/previous-gen-sizes-list.md
+
+ Title: Previous generation Azure VM sizes
+description: A list containing all previous generation and capacity limited VM size series.
++++ Last updated : 01/31/2024++++
+# Previous generation Azure VM sizes
+
+This article provides a list of all sizes that are considered *previous-gen* or *capacity limited*. For sizes that require it, *migration guides* are available to help you move to replacement sizes.
+
+To learn more about size series retirement, see the [size series retirement overview](./retirement-overview.md).
+
+> [!NOTE]
+> *Previous generation* and *capacity limited* sizes **are not currently retired** and can still be used.
+
+## What are previous-gen sizes?
+Previous-generation virtual machine sizes can still be used, but newer generations are available. Capacity increases aren't guaranteed for previous-gen sizes. We recommend migrating to the latest-generation replacements.
+
+## What are capacity limited previous-gen sizes?
+Capacity limited virtual machine sizes are older sizes that are still fully supported, but they won't receive more capacity. Unlike other size series, which are deployed based on demand, capacity limited sizes are limited to the hardware that's currently deployed, which decreases as that hardware is phased out. There are newer or alternative sizes that are generally available.
+++
+## General purpose previous-gen sizes
+
+|Series name | Status | Migration guide |
+|---|---|---|
+| Basic A-series | Capacity limited | |
+| Standard A-series | Capacity limited | |
+| Compute-intensive A-series | Capacity limited | |
+| Standard D-series | Capacity limited | |
+| Preview DC-series | Capacity limited | |
+| DS-series | Capacity limited | |
+
+For a list of general purpose sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired general purpose sizes](./retired-sizes-list.md#general-purpose-retired-sizes).
+
+## Compute optimized previous-gen sizes
+
+|Series name | Status | Migration guide |
+||-|-|
+| F-series | Previous-gen | |
+| Fs-series | Previous-gen | |
+
+For a list of compute optimized sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired compute optimized sizes](./retired-sizes-list.md#compute-optimized-retired-sizes).
+
+## Memory optimized previous-gen sizes
+
+|Series name | Status | Migration guide |
+|---|---|---|
+| GS-series | Capacity limited | |
+| G-series | Capacity limited | |
+| Memory-optimized D-series | Capacity limited | |
+| Memory-optimized DS-series | Capacity limited | |
+
+For a list of memory optimized sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired memory optimized sizes](./retired-sizes-list.md#memory-optimized-retired-sizes).
+
+## Storage optimized previous-gen sizes
+
+|Series name | Status | Migration guide |
+|---|---|---|
+| Ls-series | Capacity limited | |
+
+For a list of storage optimized sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired storage optimized sizes](./retired-sizes-list.md#storage-optimized-retired-sizes).
+
+## GPU accelerated previous-gen sizes
+
+|Series name | Status | Migration guide |
+|-||-|
+| NVv2-series | Previous-gen | |
+
+For a list of GPU accelerated sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired GPU accelerated sizes](./retired-sizes-list.md#gpu-accelerated-retired-sizes).
+
+## FPGA accelerated previous-gen sizes
+
+Currently there are no previous-gen or capacity limited FPGA accelerated sizes.
+
+For a list of FPGA accelerated sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired fpga accelerated sizes](./retired-sizes-list.md#fpga-accelerated-retired-sizes).
+
+## HPC previous-gen sizes
+
+Currently there are no previous-gen or capacity limited HPC sizes.
+
+For a list of HPC sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired HPC sizes](./retired-sizes-list.md#hpc-retired-sizes).
+
+## ADH previous-gen sizes
+
+Currently there are no previous-gen or capacity limited ADH sizes.
+
+For a list of ADH sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired ADH sizes](./retired-sizes-list.md#adh-retired-sizes).
+
+## Next steps
+- For a list of retired sizes, see [Retired Azure VM sizes](./retired-sizes-list.md).
+- For more information on VM sizes, see [Sizes for virtual machines in Azure](../sizes.md).
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/resize-vm.md
+
+ Title: Resize a virtual machine
+description: Change the VM size used for an Azure virtual machine.
++++ Last updated : 01/31/2024+++++
+# Change the size of a virtual machine
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+
+This article shows you how to change an existing virtual machine's [VM size](../sizes.md).
+
+After you create a virtual machine (VM), you can scale the VM up or down by changing the VM size. In some cases, you must deallocate the VM first. Deallocation may be necessary if the new size isn't available on the same hardware cluster that is currently hosting the VM.
+
+![A diagram showing a smaller Azure VM icon with a growing arrow pointing to a new larger Azure VM icon.](./media/size-resize-vm.png "Resizing a VM")
+
+If your VM uses Premium Storage, make sure that you choose an **s** version of the size to get Premium Storage support. For example, choose Standard_E4**s**_v3 instead of Standard_E4_v3.
+
+## Change the VM size
+
+### [Portal](#tab/portal)
+1. Open the [Azure portal](https://portal.azure.com).
+1. Type *virtual machines* in the search. Under **Services**, select **Virtual machines**.
+    ![Screenshot of the Azure portal search bar.](./media/portal-vms-search.png)
+1. In the **Virtual machines** page, select the virtual machine you want to resize.
+    ![Screenshot of an example VM selected.](./media/portal-select-vm.png)
+1. In the left menu, select **Size**, and then pick a new compatible size from the list of available sizes.
+    ![Screenshot of the size selection in the Azure portal.](./media/portal-size-select.png)
+1. After picking a size, select **Resize**.
+    ![Screenshot of the resize button in the Azure portal.](./media/portal-resize-button.png)
+
+> [!NOTE]
+> If the virtual machine is currently running, changing its size will cause it to restart.
+
+If your VM is still running and you don't see the size you want in the list, stopping the virtual machine may reveal more sizes.
+
+ > [!WARNING]
+ > Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ >
+ > If you are resizing a production VM, consider using [Azure Capacity Reservations](../capacity-reservation-overview.md) to reserve Compute capacity in the region.
+
+
+### [PowerShell](#tab/powershell)
+1. Set the resource group and VM name variables. Replace the values with information of the VM you want to resize.
+
+ ```powershell
+ $resourceGroup = "myResourceGroup"
+ $vmName = "myVM"
+ ```
+
+1. List the VM sizes that are available on the hardware cluster where the VM is hosted.
+
+ ```powershell
+ Get-AzVMSize -ResourceGroupName $resourceGroup -VMName $vmName
+ ```
+
+1. Resize the VM to the new size.
+
+ ```powershell
+ $vm = Get-AzVM -ResourceGroupName $resourceGroup -VMName $vmName
+ $vm.HardwareProfile.VmSize = "<newVMSize>"
+ Update-AzVM -VM $vm -ResourceGroupName $resourceGroup
+ ```
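+
+If you want to confirm up front that a specific size is offered on the VM's current hardware cluster, you can filter the output of `Get-AzVMSize`. The following optional check is a minimal sketch that reuses the variables from step 1; the target size `Standard_E4s_v3` is only an example.
+
+```powershell
+# Optional check: is the target size offered on the VM's current hardware cluster?
+$targetSize = 'Standard_E4s_v3'   # example target size; replace with the size you want
+
+$offered = Get-AzVMSize -ResourceGroupName $resourceGroup -VMName $vmName |
+    Where-Object { $_.Name -eq $targetSize }
+
+if (-not $offered) {
+    Write-Warning "$targetSize isn't offered on the current cluster. Deallocate the VM to see all sizes available in the region."
+}
+```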
+
+**Use PowerShell to resize a VM not in an availability set.**
+
+This Azure Cloud Shell PowerShell script initializes the variables `$resourceGroup`, `$vmName`, and `$size` with the resource group name, VM name, and desired VM size, respectively. It then retrieves the VM object from Azure by using the `Get-AzVM` cmdlet, sets the `VmSize` property of the VM's hardware profile to the desired size, and applies the change with the `Update-AzVM` cmdlet.
+
+```azurepowershell-interactive
+# Set variables
+$resourceGroup = 'myResourceGroup'
+$vmName = 'myVM'
+$size = 'Standard_DS3_v2'
+# Get the VM
+$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
+# Change the VM size
+$vm.HardwareProfile.VmSize = $size
+# Update the VM
+Update-AzVM -ResourceGroupName $resourceGroup -VM $vm
+```
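+
+To confirm that the change took effect, you can query the VM's current size afterward. This quick check is a sketch that assumes the same variables as the script above.
+
+```azurepowershell-interactive
+# Check the VM's current size after the update
+(Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName).HardwareProfile.VmSize
+```
+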
+As an alternative to running the script in Azure Cloud Shell, you can run it locally on your machine. The local version of the PowerShell script includes extra steps to import the Azure PowerShell module and authenticate your Azure account.
+
+> [!NOTE]
+> The size change may require the VM to restart for it to take effect.
++
+```powershell
+# Import the Azure module
+Import-Module Az
+# Login to your Azure account
+Connect-AzAccount
+# Set variables
+$resourceGroup = 'myResourceGroup'
+$vmName = 'myVM'
+$size = 'Standard_DS3_v2'
+# Select the subscription
+Select-AzSubscription -SubscriptionId '<subscriptionID>'
+# Get the VM
+$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
+# Change the VM size
+$vm.HardwareProfile.VmSize = $size
+# Update the VM
+Update-AzVM -ResourceGroupName $resourceGroup -VM $vm
+```
+
+ > [!WARNING]
+ > Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ >
+ > If you are resizing a production VM, consider using [Azure Capacity Reservations](../capacity-reservation-overview.md) to reserve Compute capacity in the region.
++
+**Use PowerShell to resize a VM in an availability set**
+
+If the new size for a VM in an availability set isn't available on the hardware cluster currently hosting the VM, then you need to deallocate all VMs in the availability set to resize the VM. You also might need to update the size of other VMs in the availability set after one VM has been resized. To resize a VM in an availability set, run the below script. You can replace the values of `$resourceGroup`, `$vmName`, `$newVmSize`, and `$availabilitySetName` with your own.
+
+```azurepowershell-interactive
+# Set variables
+$resourceGroup = "myResourceGroup"
+$vmName = "myVM"
+$newVmSize = "<newVmSize>"
+$availabilitySetName = "<availabilitySetName>"
+
+# Check if the desired VM size is available
+$availableSizes = Get-AzVMSize `
+ -ResourceGroupName $resourceGroup `
+ -VMName $vmName |
+ Select-Object -ExpandProperty Name
+if ($availableSizes -notcontains $newVmSize) {
+ # Deallocate all VMs in the availability set
+ $as = Get-AzAvailabilitySet `
+ -ResourceGroupName $resourceGroup `
+ -Name $availabilitySetName
+ $virtualMachines = $as.VirtualMachinesReferences | Get-AzResource | Get-AzVM
+ # Wait for each VM to deallocate before resizing
+ $virtualMachines | Stop-AzVM -Force
+
+ # Resize and restart the VMs in the availability set
+ $virtualMachines | Foreach-Object { $_.HardwareProfile.VmSize = $newVmSize }
+ $virtualMachines | Update-AzVM
+ $virtualMachines | Start-AzVM
+ exit
+}
+
+# Resize the VM
+$vm = Get-AzVM `
+ -ResourceGroupName $resourceGroup `
+ -VMName $vmName
+$vm.HardwareProfile.VmSize = $newVmSize
+Update-AzVM `
+ -VM $vm `
+ -ResourceGroupName $resourceGroup
+```
+
+This script sets the variables `$resourceGroup`, `$vmName`, `$newVmSize`, and `$availabilitySetName`. It then checks if the desired VM size is available by using `Get-AzVMSize` and checking if the output contains the desired size. If the desired size isn't available, the script deallocates all VMs in the availability set, resizes them, and starts them again. If the desired size is available, the script resizes the VM.
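+
+To verify the result, you can list the current size of each VM in the availability set. This check is a sketch that assumes the variables from the script above.
+
+```azurepowershell-interactive
+# List the current size of each VM in the availability set
+$as = Get-AzAvailabilitySet -ResourceGroupName $resourceGroup -Name $availabilitySetName
+$as.VirtualMachinesReferences | Get-AzResource | Get-AzVM |
+    Select-Object Name, @{ Name = 'VmSize'; Expression = { $_.HardwareProfile.VmSize } }
+```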
++
+### [CLI](#tab/cli)
+
+To resize a VM, you need the latest [Azure CLI](/cli/azure/install-az-cli2) installed, and you must be signed in to an Azure account by using [az login](/cli/azure/reference-index).
+
+The following script checks whether the desired VM size is available before resizing. If the desired size isn't available, the script exits with an error message. If it is available, the script deallocates the VM, resizes it, and starts it again. Replace the values of `resourceGroup`, `vm`, and `size` with your own.
+
+```azurecli-interactive
+# Set variables
+resourceGroup=myResourceGroup
+vm=myVM
+size=Standard_DS3_v2
+
+# Check if the desired VM size is available
+if ! az vm list-vm-resize-options --resource-group $resourceGroup --name $vm --query "[].name" | grep -q $size; then
+ echo "The desired VM size is not available."
+ exit 1
+fi
+
+# Deallocate the VM
+az vm deallocate --resource-group $resourceGroup --name $vm
+
+# Resize the VM
+az vm resize --resource-group $resourceGroup --name $vm --size $size
+
+# Start the VM
+az vm start --resource-group $resourceGroup --name $vm
+```
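+
+To confirm the new size after the VM starts, you can query its hardware profile. This one-line check is a sketch that reuses the variables above.
+
+```azurecli-interactive
+# Check the VM's current size after the resize
+az vm show --resource-group $resourceGroup --name $vm --query "hardwareProfile.vmSize" --output tsv
+```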
+
+ > [!WARNING]
+ > Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ >
+ > If you are resizing a production VM, consider using [Azure Capacity Reservations](../capacity-reservation-overview.md) to reserve Compute capacity in the region.
+
+**Use Azure CLI to resize a VM in an availability set.**
+
+The following script sets the variables `resourceGroup`, `vmName`, `newVmSize`, and `availabilitySetName`. It then checks whether the desired VM size is available by using `az vm list-vm-resize-options`. If the size isn't available on the current hardware cluster, the script deallocates all VMs in the availability set, resizes them, and starts them again. If the size is available, the script resizes only the target VM.
++
+```azurecli-interactive
+# Set variables
+resourceGroup="myResourceGroup"
+vmName="myVM"
+newVmSize="<newVmSize>"
+availabilitySetName="<availabilitySetName>"
+
+# Check if the desired VM size is available
+availableSizes=$(az vm list-vm-resize-options \
+ --resource-group $resourceGroup \
+ --name $vmName \
+ --query "[].name" \
+ --output tsv)
+if [[ ! $availableSizes =~ $newVmSize ]]; then
+    # Deallocate all VMs in the availability set
+    vmIds=$(az vm availability-set show \
+        --resource-group $resourceGroup \
+        --name $availabilitySetName \
+        --query "virtualMachines[].id" \
+        --output tsv)
+    az vm deallocate --ids $vmIds
+
+    # Resize the VMs in the availability set
+    az vm resize --ids $vmIds --size $newVmSize
+
+    # Start the VMs in the availability set
+    az vm start --ids $vmIds
+    exit
+fi
+
+# Resize the VM
+az vm resize \
+ --resource-group $resourceGroup \
+ --name $vmName \
+ --size $newVmSize
+```
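+
+To verify the result, you can list the current size of each VM in the availability set. The following loop is a sketch that assumes the variables above.
+
+```azurecli-interactive
+# List the current size of each VM in the availability set
+vmIds=$(az vm availability-set show \
+    --resource-group $resourceGroup \
+    --name $availabilitySetName \
+    --query "virtualMachines[].id" \
+    --output tsv)
+for id in $vmIds; do
+    az vm show --ids $id --query "{name:name, size:hardwareProfile.vmSize}" --output tsv
+done
+```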
+
+### [Terraform](#tab/terraform)
+
+To resize a VM with Terraform, change the `size` argument in the `azurerm_linux_virtual_machine` or `azurerm_windows_virtual_machine` resource block to the new size. Run `terraform plan -out main.tfplan` to review the VM size change, and then run `terraform apply main.tfplan` to apply it and resize the VM.
+
+> [!IMPORTANT]
+> The following Terraform example modifies the size of an existing virtual machine when you're using the state file that created the original virtual machine. For the full Terraform code, see the [Windows Terraform quickstart](../windows/quick-create-terraform.md).
++
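+The following sketch shows the shape of such a change for a hypothetical Windows VM. Every name, credential, and ID in it is illustrative only; in your own configuration, only the `size` argument needs to change.
+
+```terraform
+# Hypothetical existing VM resource; only the "size" argument is being changed.
+resource "azurerm_windows_virtual_machine" "example" {
+  name                  = "myVM"
+  resource_group_name   = "myResourceGroup"
+  location              = "eastus"
+  size                  = "Standard_DS3_v2" # new size; previously, for example, "Standard_DS2_v2"
+  admin_username        = "azureadmin"
+  admin_password        = "<your-password>"                   # placeholder; store secrets in a variable or Azure Key Vault
+  network_interface_ids = ["<existing-network-interface-id>"] # placeholder for the VM's existing NIC
+
+  os_disk {
+    caching              = "ReadWrite"
+    storage_account_type = "Premium_LRS"
+  }
+
+  source_image_reference {
+    publisher = "MicrosoftWindowsServer"
+    offer     = "WindowsServer"
+    sku       = "2022-datacenter-azure-edition"
+    version   = "latest"
+  }
+}
+```
+
+After you update the argument, `terraform plan` should report the size change as an in-place update to the existing VM rather than a replacement, and `terraform apply` performs the resize.
+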
+ > [!WARNING]
+ > Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected.
+ >
+ > If you are resizing a production VM, consider using [Azure Capacity Reservations](../capacity-reservation-overview.md) to reserve Compute capacity in the region.
+++
+## Choose the right SKU
+
+When resizing a VM, choose the right SKU based on signals from the VM that indicate whether you need more CPU, memory, or storage capacity (an example metric query follows this list):
+
+- If the VM is running a CPU-intensive workload, such as a database server or a web server with high traffic, you may need to choose a SKU with more CPU cores.
+- If the VM is running a memory-intensive workload, such as a machine learning model or a big data application, you may need to choose a SKU with more memory.
+- If the VM is running out of storage capacity, you may need to choose a SKU with more storage.
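+
+For example, to look at the CPU signal, you can query the VM's *Percentage CPU* metric from Azure Monitor. The following Azure CLI query is a sketch that assumes a VM named `myVM` in the resource group `myResourceGroup`.
+
+```azurecli-interactive
+# Sketch: average CPU for the VM over the last day, sampled hourly
+vmId=$(az vm show --resource-group myResourceGroup --name myVM --query id --output tsv)
+az monitor metrics list \
+    --resource $vmId \
+    --metric "Percentage CPU" \
+    --offset 1d \
+    --interval PT1H \
+    --aggregation Average \
+    --output table
+```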
++
+For more information on choosing the right SKU, you can use the following resources:
+- [Sizes for VMs in Azure](../sizes.md): This article lists all the VM sizes available in Azure.
+- [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/): This tool helps you find the right VM SKU based on your workload type, OS and software, and deployment region.
+++
+## Limitations
+
+You can't resize a VM size that has a local temp disk to a VM size with no local temp disk and vice versa.
+
+The only combinations allowed for resizing are:
+
+- VM (with local temp disk) -> VM (with local temp disk); and
+- VM (with no local temp disk) -> VM (with no local temp disk).
+
+For a workaround, see [How do I migrate from a VM size with local temp disk to a VM size with no local temp disk?](../azure-vms-no-temp-disk.yml#how-do-i-migrate-from-a-vm-size-with-local-temp-disk-to-a-vm-size-with-no-local-temp-disk). You can use the same approach to resize a VM that has no local temp disk to a size that does: create a snapshot of the VM's disk, create a disk from the snapshot, and then create a VM from that disk by using an appropriate [VM size](../sizes.md) that supports a local temp disk, as sketched in the example that follows.
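+
+The following Azure CLI commands sketch those steps. The resource names, OS type, and target size are placeholders for illustration only.
+
+```azurecli-interactive
+# 1. Snapshot the OS disk of the source VM
+osDiskId=$(az vm show --resource-group myResourceGroup --name myVM \
+    --query "storageProfile.osDisk.managedDisk.id" --output tsv)
+az snapshot create --resource-group myResourceGroup --name myVM-os-snapshot --source $osDiskId
+
+# 2. Create a managed disk from the snapshot
+az disk create --resource-group myResourceGroup --name myVM-os-disk-copy --source myVM-os-snapshot
+
+# 3. Create a new VM from the disk with a size that has a local temp disk
+az vm create --resource-group myResourceGroup --name myVM-resized \
+    --attach-os-disk myVM-os-disk-copy --os-type Windows --size Standard_D4ds_v5
+```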
++
+## Next steps
+
+- For more scalability, run multiple VM instances and scale out.
+- For more SKU selection information, see [Sizes for virtual machines in Azure](../sizes.md).
+- To determine VM sizes by workload type, OS and software, or deployment region, see [Azure VM Selector](https://azure.microsoft.com/pricing/vm-selector/).
+- For more information on Virtual Machine Scale Sets (VMSS) sizes, see [Automatically scale machines in a VMSS](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md).
+- For more cost management planning information, see the [Plan and manage your Azure costs](/training/modules/plan-manage-azure-costs/1-introduction) module.
virtual-machines Retired Sizes List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/retired-sizes-list.md
+
+ Title: Retired Azure VM sizes
+description: A list containing all retired and soon to be retired VM size series and their replacement series.
+ Last updated: 01/31/2024
+# Retired Azure VM sizes
+
+This article provides a list of all VM sizes that are retired or announced for retirement. For sizes that require it, migration guides are available to help you move to replacement sizes.
+
+To learn more about size series retirement, see the [size series retirement overview](./retirement-overview.md).
+
+> [!NOTE]
+> Series with *Retirement Status* listed as **Retired** are **no longer available** and can't be provisioned.
+>
+> If you are currently using one of the size series listed as *Retired*, view the migration guide to switch to a replacement series as soon as possible.
+
+Series with *Retirement Status* listed as **Announced** are still available, but will be retired on the *Planned Retirement Date*. It's recommended that you plan your migration to a replacement series well before the listed retirement date.
+
+*Capacity limited* series and *previous-gen* series aren't retired and are still fully supported, but they have limitations similar to series that are announced for retirement. For a list of previous-gen sizes, see [previous generation Azure VM sizes](./previous-gen-sizes-list.md).
+
+## General purpose retired sizes
+
+|Series name | Retirement Status |Retirement Announcement Date | Planned Retirement Date | Migration Guide |
+|-|-|--|-|--|
+| Av1-series | **Announced** | 11/02/23 | 8/31/24 | [Av1-series Retirement](./migration-guides/av1-series-retirement.md) |
+
+## Compute optimized retired sizes
+
+Currently there are no compute optimized series retired or announced for retirement.
+
+## Memory optimized retired sizes
+
+Currently there are no memory optimized series retired or announced for retirement.
+
+## Storage optimized retired sizes
+
+Currently there are no storage optimized series retired or announced for retirement.
+
+## GPU accelerated retired sizes
+
+| Series name | Retirement Status |Retirement Announcement Date | Planned Retirement Date | Migration Guide |
+|-|-|--|-|--|
+| NV-series | **Retired** | - | 9/6/23 | [NV-series Retirement](./migration-guides/nv-series-retirement.md) |
+| NC-series | **Retired** | - | 9/6/23 | [NC-series Retirement](./migration-guides/nc-series-retirement.md) |
+| NCv2-series | **Retired** | - | 9/6/23 | [NCv2-series Retirement](./migration-guides/ncv2-series-retirement.md) |
+| ND-series | **Retired** | - | 9/6/23 | [ND-series Retirement](./migration-guides/nd-series-retirement.md) |
+
+## FPGA accelerated retired sizes
+
+Currently there are no FPGA accelerated series retired or announced for retirement.
+
+## HPC retired sizes
+
+| Series name | Retirement Status |Retirement Announcement Date | Planned Retirement Date | Migration Guide
+|-|-|--|-|--|
+| HB-Series | **Announced** | 12/07/23 | 8/31/24 | [NV-series Retirement](./migration-guides/nv-series-retirement.md) |
+
+## ADH retired sizes
+
+| Series name | Retirement Status |Retirement Announcement Date | Planned Retirement Date | Migration Guide
+|-|-|--|-|--|
+| Dsv3-Type1 | **Retired** | - | 6/30/23 | [Dedicated Host SKU Retirement](./migration-guides/dedicated-host-retirement.md) |
+| Dsv3-Type2 | **Retired** | - | 6/30/23 | [Dedicated Host SKU Retirement](./migration-guides/dedicated-host-retirement.md) |
+| Esv3-Type1 | **Retired** | - | 6/30/23 | [Dedicated Host SKU Retirement](./migration-guides/dedicated-host-retirement.md) |
+| Esv3-Type2 | **Retired** | - | 6/30/23 | [Dedicated Host SKU Retirement](./migration-guides/dedicated-host-retirement.md) |
++
+## Next steps
+- For a list of older and capacity limited sizes, see [Previous generation Azure VM sizes](./previous-gen-sizes-list.md).
+- For more information on VM sizes, see [Sizes for virtual machines in Azure](../sizes.md).
virtual-machines Retirement Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/retirement-overview.md
+
+ Title: Previous-gen and retired VM sizes
+description: Overview of the retirement process for virtual machine sizes and information on previous-gen sizes.
+ Last updated: 01/31/2024
+# Previous-gen and retired VM sizes
+
+This article provides an overview of the retirement process for virtual machine sizes and explains the reasoning behind it. As Azure hardware ages, the VM sizes that run on it are first moved to "previous-gen" status, then eventually retired and made unavailable.
+
+![A diagram showing a greyed out Azure VM icon with an arrow pointing to a new sparkling Azure VM icon.](./media/size-retirement-new-vm.png "Moving from old to new VM sizes")
+
+When hardware begins the retirement process, migrate your workloads to newer generation hardware, which provides better performance and reliability. Doing so helps you avoid issues that can arise from using outdated hardware and keeps your workloads running smoothly and efficiently.
+
+## Previous-gen sizes
+
+Previous generation sizes **are not currently retired** and can still be used. These sizes are still fully supported, but they won't receive more capacity. It's recommended to migrate to the latest generation replacements as soon as possible. For a list of sizes that are considered "previous-gen", see the [list of previous-gen sizes](./previous-gen-sizes-list.md).
+
+## Retired sizes
+
+Retiring older hardware over time is necessary to ensure that the latest technology is available on Azure and that the hardware remains reliable, secure, and efficient. It also enables features and capabilities that may not be present on previous generations of hardware.
+
+Retired sizes are **no longer available** and can't be used. For a list of retired sizes, see the [list of retired sizes](./retired-sizes-list.md).
+
+## Migrate to newer sizes
+
+Migrating to newer sizes allows you to keep up with the latest hardware available on Azure. You can [resize your VM](./resize-vm.md) to a newer size using the Azure portal, Azure PowerShell, Azure CLI, or Terraform.
+
+## Next steps
+- For more information on VM sizes, see [Sizes for virtual machines in Azure](../sizes.md).
+- For a list of retired sizes, see [Retired Azure VM sizes](./retired-sizes-list.md).
+- For a list of previous-gen sizes, see [Previous generation Azure VM sizes](./previous-gen-sizes-list.md).
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Azure offers trusted launch as a seamless way to improve the security of [genera
> [!NOTE]
> - Installation of the **CUDA & GRID drivers on Secure Boot enabled Windows VMs** does not require any extra steps.
-> - Installation of the **CUDA driver on Secure Boot enabled Ubuntu VMs** requires extra steps documented at [Install NVIDIA GPU drivers on N-series VMs running Linux](./linux/n-series-driver-setup.md#install-cuda-driver-on-ubuntu-with-secure-boot-enabled). Secure Boot should be disabled for installing CUDA Drivers on other Linux VMs.
+> - Installation of the **CUDA driver on Secure Boot enabled Ubuntu VMs** requires extra steps documented at [Install NVIDIA GPU drivers on N-series VMs running Linux](./linux/n-series-driver-setup.md#install-cuda-drivers-on-n-series-vms). Secure Boot should be disabled for installing CUDA Drivers on other Linux VMs.
> - Installation of the **GRID driver** requires secure boot to be disabled for Linux VMs.
> - **Not Supported** size families do not support [generation 2](generation-2.md) VMs. Change VM Size to equivalent **Supported size families** for enabling Trusted Launch.
virtual-network-manager How To Configure Cross Tenant Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-cli.md
Last updated 03/22/2023
-#customerintent: As a cloud admin, I need to manage multiple tenants from a single network manager so that I can easily manage all network resources governed by Azure Virtual Network Manager.
+# Customer intent: As a cloud admin, I need to manage multiple tenants from a single network manager so that I can easily manage all network resources governed by Azure Virtual Network Manager.
# Configure a cross-tenant connection in Azure Virtual Network Manager Preview - CLI
virtual-network-manager How To Configure Cross Tenant Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-portal.md
Last updated 03/22/2023
-#customerintent: As a cloud admin, I need to manage multiple tenants from a single network manager so that I can easily manage all network resources governed by Azure Virtual Network Manager.
+# Customer intent: As a cloud admin, I need to manage multiple tenants from a single network manager so that I can easily manage all network resources governed by Azure Virtual Network Manager.
# Configure a cross-tenant connection in Azure Virtual Network Manager Preview - portal
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
Last updated 08/24/2023
-#customer-intent: As an cloud engineer with Basic public IP services, I need guidance and direction on migrating my workloads off basic to Standard SKUs
+# Customer intent: As a cloud engineer with Basic public IP services, I need guidance and direction on migrating my workloads from Basic to Standard SKUs.
# Upgrading a basic public IP address to Standard SKU - Guidance
virtual-network Virtual Network Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-encryption-overview.md
Last updated 01/17/2024
-# customer intent: As a network administrator, I want to learn about encryption in Azure Virtual Network so that I can secure my network traffic.
+# Customer intent: As a network administrator, I want to learn about encryption in Azure Virtual Network so that I can secure my network traffic.
Virtual network encryption has the following requirements:
| D-series | **[Dv4 and Dsv4-series](/azure/virtual-machines/dv4-dsv4-series)**, **[Ddv4 and Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series)**, **[Dav4 and Dasv4-series](/azure/virtual-machines/dav4-dasv4-series)** |
| D-series V5 | **[Dv5 and Dsv5-series](/azure/virtual-machines/dv5-dsv5-series)**, **[Ddv5 and Ddsv5-series](/azure/virtual-machines/ddv5-ddsv5-series)** |
| E-series | **[Ev4 and Esv4-series](/azure/virtual-machines/ev4-esv4-series)**, **[Edv4 and Edsv4-series](/azure/virtual-machines/edv4-edsv4-series)**, **[Eav4 and Easv4-series](/azure/virtual-machines/eav4-easv4-series)** |
- | E-series V5 | **[Ev4 and Esv4-series](/azure/virtual-machines/ev5-esv5-series)**, **[Edv4 and Edsv4-series](/azure/virtual-machines/edv5-edsv5-series)** |
+ | E-series V5 | **[Ev5 and Esv5-series](/azure/virtual-machines/ev5-esv5-series)**, **[Edv5 and Edsv5-series](/azure/virtual-machines/edv5-edsv5-series)** |
| LSv3 | **[LSv3-series](/azure/virtual-machines/lsv3-series)** |
| M-series | **[Mv2-series](/azure/virtual-machines/mv2-series)**, **[Msv3 and Mdsv3 Medium Memory Series](/azure/virtual-machines/msv3-mdsv3-medium-series)** |
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
Last updated 05/28/2023
-#customer intent: As a cloud architect, I need to know how to use virtual network peering for connecting virtual networks. This will allow me to design connectivity correctly, understand future scalability options, and limitations.
+# Customer intent: As a cloud architect, I need to know how to use virtual network peering for connecting virtual networks. This will allow me to design connectivity correctly and understand future scalability options and limitations.
# Virtual network peering