Updates from: 02/02/2024 02:15:09
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c B2c Global Identity Funnel Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-funnel-based-design.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I need to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
+# Customer intent: I'm a developer, and I need to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
# Build a global identity solution with funnel-based approach
active-directory-b2c B2c Global Identity Proof Of Concept Funnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-funnel.md
Last updated 01/26/2024
-#customer intent: As a developer, I want to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
+# Customer intent: As a developer, I want to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
# Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration
active-directory-b2c B2c Global Identity Proof Of Concept Regional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-regional.md
Last updated 01/24/2024
-#customer intent: I'm a developer implementing Azure Active Directory B2C, and I want to configure region-based sign-up, sign-in, and password reset journeys. My goal is for users to be directed to the correct region and their data managed accordingly.
+# Customer intent: I'm a developer implementing Azure Active Directory B2C, and I want to configure region-based sign-up, sign-in, and password reset journeys. My goal is for users to be directed to the correct region and their data managed accordingly.
# Azure Active Directory B2C global identity framework proof of concept for region-based configuration
active-directory-b2c B2c Global Identity Region Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-region-based-design.md
Last updated 01/26/2024
-#customer intent: I'm a developer implementing a global identity solution. I need to understand the different scenarios and workflows for region-based design approach in Azure AD B2C. My goal is to design and implement the authentication and sign-up processes effectively for users from different regions.
+# Customer intent: I'm a developer implementing a global identity solution. I need to understand the different scenarios and workflows for region-based design approach in Azure AD B2C. My goal is to design and implement the authentication and sign-up processes effectively for users from different regions.
# Build a global identity solution with region-based approach
active-directory-b2c B2c Global Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-solutions.md
Last updated 01/26/2024
-#customer intent: I'm a developer building a customer-facing application. I need to understand the different approaches to implement an identity platform using Azure AD B2C tenants for a globally operating business model. I want to make an informed decision about the architecture that best suits my application's requirements.
+# Customer intent: I'm a developer building a customer-facing application. I need to understand the different approaches to implement an identity platform using Azure AD B2C tenants for a globally operating business model. I want to make an informed decision about the architecture that best suits my application's requirements.
# Azure Active Directory B2C global identity framework
active-directory-b2c External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/external-identities-videos.md
Last updated 01/26/2024
-#customer intent: I'm a developers working with Azure Active Directory B2C. I need videos that provide a deep-dive into the architecture and features of the service. My goal is to gain a better understanding of how to implement and utilize Azure AD B2C in my applications.
+# Customer intent: I'm a developer working with Azure Active Directory B2C. I need videos that provide a deep-dive into the architecture and features of the service. My goal is to gain a better understanding of how to implement and utilize Azure AD B2C in my applications.
# Microsoft Azure Active Directory B2C external identity video series
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure AD B2C, and I want to configure an identity verification and proofing provider. I need to combat identity fraud and create a trusted user experience for account registration.
+# Customer intent: I'm a developer integrating Azure AD B2C, and I want to configure an identity verification and proofing provider. I need to combat identity fraud and create a trusted user experience for account registration.
# Identity verification and proofing partners
active-directory-b2c Partner Akamai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai.md
Last updated 01/26/2024
-#customer intent: I'm an IT admin, and I want to configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access. I want to enable Azure AD B2C authentication for end users accessing private applications secured by Akamai Enterprise Application Access.
+# Customer intent: I'm an IT admin, and I want to configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access. I want to enable Azure AD B2C authentication for end users accessing private applications secured by Akamai Enterprise Application Access.
# Configure Azure Active Directory B2C with Akamai Web Application Protector
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with the Arkose Labs platform. I need to configure the integration, so I can protect against bot attacks, account takeover, and fraudulent account openings.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with the Arkose Labs platform. I need to configure the integration, so I can protect against bot attacks, account takeover, and fraudulent account openings.
active-directory-b2c Partner Asignio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm a developer integrating Asignio with Azure AD B2C for multifactor authentication. I want to configure an application with Asignio and set it up as an identity provider (IdP) in Azure AD B2C, so I can provide a passwordless, soft biometric, and multifactor authentication experience to customers.
+# Customer intent: I'm a developer integrating Asignio with Azure AD B2C for multifactor authentication. I want to configure an application with Asignio and set it up as an identity provider (IdP) in Azure AD B2C, so I can provide a passwordless, soft biometric, and multifactor authentication experience to customers.
# Configure Asignio with Azure Active Directory B2C for multifactor authentication
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm a developer integrating Azure Active Directory B2C with Transmit Security BindID. I need instructions to configure integration, so I can enable passwordless authentication using FIDO2 biometrics for my application.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with Transmit Security BindID. I need instructions to configure integration, so I can enable passwordless authentication using FIDO2 biometrics for my application.
# Configure Transmit Security with Azure Active Directory B2C for passwordless authentication
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure AD B2C authentication with BioCatch technology. I need to configure the custom UI, policies, and user journey. My goal is to enhance the security of my Customer Identity and Access Management (CIAM) system by analyzing user physical and cognitive behaviors.
+# Customer intent: I'm a developer integrating Azure AD B2C authentication with BioCatch technology. I need to configure the custom UI, policies, and user journey. My goal is to enhance the security of my Customer Identity and Access Management (CIAM) system by analyzing user physical and cognitive behaviors.
# Tutorial: Configure BioCatch with Azure Active Directory B2C
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bloksec.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm a developer integrating Azure Active Directory B2C with BlokSec for passwordless authentication. I need to configure integration, so I can simplify user sign-in and protect against identity-related attacks.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with BlokSec for passwordless authentication. I need to configure integration, so I can simplify user sign-in and protect against identity-related attacks.
# Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication
active-directory-b2c Partner Cloudflare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-cloudflare.md
Last updated 01/26/2024
-#customer intent: I'm a developer configuring Azure AD B2C with Cloudflare WAF. I need to enable and configure the Web Application Firewall, so I can protect my application from malicious attacks such as SQL Injection and cross-site scripting (XSS).
+# Customer intent: I'm a developer configuring Azure AD B2C with Cloudflare WAF. I need to enable and configure the Web Application Firewall, so I can protect my application from malicious attacks such as SQL Injection and cross-site scripting (XSS).
# Tutorial: Configure Cloudflare Web Application Firewall with Azure Active Directory B2C
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C with Datawiza Access Proxy (DAP). My goal is to enable single sign-on (SSO) and granular access control for on-premises legacy applications, without rewriting them.
+# Customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C with Datawiza Access Proxy (DAP). My goal is to enable single sign-on (SSO) and granular access control for on-premises legacy applications, without rewriting them.
# Tutorial: Configure Azure Active Directory B2C with Datawiza to provide secure hybrid access
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
-#customer intent: As an Azure AD B2C administrator, I want to integrate Deduce with Azure AD B2C authentication. I want to combat identity fraud and create a trusted user experience for my organization.
+# Customer intent: As an Azure AD B2C administrator, I want to integrate Deduce with Azure AD B2C authentication. I want to combat identity fraud and create a trusted user experience for my organization.
# Configure Azure Active Directory B2C with Deduce to combat identity fraud and create a trusted user experience
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I want to integrate Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C. I need to assess risk during attempts to create fraudulent accounts and sign-ins, and then block or challenge suspicious attempts.
+# Customer intent: I'm a developer, and I want to integrate Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C. I need to assess risk during attempts to create fraudulent accounts and sign-ins, and then block or challenge suspicious attempts.
# Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm an Azure AD B2C administrator, and I want to configure eID-Me as an identity provider (IdP). My goal is to enable users to verify their identity and sign in using eID-Me.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to configure eID-Me as an identity provider (IdP). My goal is to enable users to verify their identity and sign in using eID-Me.
# Configure Azure Active Directory B2C with Bluink eID-Me for identity verification
active-directory-b2c Partner Experian https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md
Last updated 01/26/2024
-#customer intent: I'm an Azure AD B2C administrator, and I want to integrate Experian CrossCore with Azure AD B2C. I need to verify user identification and perform risk analysis based on user attributes during sign-up.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to integrate Experian CrossCore with Azure AD B2C. I need to verify user identification and perform risk analysis based on user attributes during sign-up.
# Tutorial: Configure Experian with Azure Active Directory B2C
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md
Last updated 01/26/2024
-#customer intent: As an IT admin responsible for securing applications, I want to integrate Azure Active Directory B2C with F5 BIG-IP Access Policy Manager. I want to expose legacy applications securely to the internet with preauthentication, Conditional Access, and single sign-on (SSO) capabilities.
+# Customer intent: As an IT admin responsible for securing applications, I want to integrate Azure Active Directory B2C with F5 BIG-IP Access Policy Manager. I want to expose legacy applications securely to the internet with preauthentication, Conditional Access, and single sign-on (SSO) capabilities.
# Tutorial: Enable secure hybrid access for applications with Azure Active Directory B2C and F5 BIG-IP
active-directory-b2c Partner Grit App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-app-proxy.md
-#customer intent: I'm an application developer using header-based authentication, and I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I need to enable modern authentication experiences, enhance security, and save on licensing costs.
+# Customer intent: I'm an application developer using header-based authentication, and I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I need to enable modern authentication experiences, enhance security, and save on licensing costs.
# Migrate applications using header-based authentication to Azure Active Directory B2C with Grit's app proxy
active-directory-b2c Partner Grit Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-authentication.md
-#customer intent: As an application developer using header-based authentication, I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I want to enable modern authentication experiences, enhance security, and save on licensing costs.
+# Customer intent: As an application developer using header-based authentication, I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I want to enable modern authentication experiences, enhance security, and save on licensing costs.
# Configure Grit's biometric authentication with Azure Active Directory B2C
active-directory-b2c Partner Grit Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-editor.md
-#customer intent: I'm an Azure AD B2C administrator, and I want to use the Visual IEF Editor tool to create, modify, and deploy Azure AD B2C policies, without writing code.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to use the Visual IEF Editor tool to create, modify, and deploy Azure AD B2C policies, without writing code.
active-directory-b2c Partner Grit Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-iam.md
-#customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C authentication with the Grit IAM B2B2C solution. I need to provide secure and user-friendly identity and access management for my customers.
+# Customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C authentication with the Grit IAM B2B2C solution. I need to provide secure and user-friendly identity and access management for my customers.
# Tutorial: Configure the Grit IAM B2B2C solution with Azure Active Directory B2C
active-directory-b2c Partner Haventec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-haventec.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Haventec Authenticate with Azure AD B2C. I need instructions to configure integration, so I can enable single-step, multi-factor passwordless authentication for my web and mobile applications.
+# Customer intent: I'm a developer integrating Haventec Authenticate with Azure AD B2C. I need instructions to configure integration, so I can enable single-step, multi-factor passwordless authentication for my web and mobile applications.
# Tutorial: Configure Haventec Authenticate with Azure Active Directory B2C for single-step, multi-factor passwordless authentication
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating HYPR with Azure AD B2C. I want a tutorial to configure the Azure AD B2C policy to enable passwordless authentication using HYPR for my customer applications.
+# Customer intent: I'm a developer integrating HYPR with Azure AD B2C. I want a tutorial to configure the Azure AD B2C policy to enable passwordless authentication using HYPR for my customer applications.
# Tutorial for configuring HYPR with Azure Active Directory B2C
active-directory-b2c Partner Idemia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idemia.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm an Azure AD B2C administrator, and I want to configure IDEMIA Mobile ID integration with Azure AD B2C. I want users to authenticate using biometric authentication services and benefit from a trusted, government-issued digital ID.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to configure IDEMIA Mobile ID integration with Azure AD B2C. I want users to authenticate using biometric authentication services and benefit from a trusted, government-issued digital ID.
# Tutorial: Configure IDEMIA Mobile ID with Azure Active Directory B2C
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md
Last updated 01/26/2024
-#customer intent: I'm an Azure AD B2C administrator, and I want to integrate Jumio with Azure AD B2C. I need to enable real-time automated ID verification for user accounts and protect customer data.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to integrate Jumio with Azure AD B2C. I need to enable real-time automated ID verification for user accounts and protect customer data.
# Tutorial for configuring Jumio with Azure Active Directory B2C
active-directory-b2c Partner Keyless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure AD B2C with Keyless for passwordless authentication. I need to configure Keyless with Azure AD B2C, so I can provide a secure and convenient passwordless authentication experience for my customer applications.
+# Customer intent: I'm a developer integrating Azure AD B2C with Keyless for passwordless authentication. I need to configure Keyless with Azure AD B2C, so I can provide a secure and convenient passwordless authentication experience for my customer applications.
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with LexisNexis ThreatMetrix. I want to configure the API and UI components, so I can verify user identities and perform risk analysis based on user attributes and device profiling information.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with LexisNexis ThreatMetrix. I want to configure the API and UI components, so I can verify user identities and perform risk analysis based on user attributes and device profiling information.
# Tutorial for configuring LexisNexis with Azure Active Directory B2C
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
Last updated 01/26/2024
-#customer intent: As an administrator managing customer accounts in Azure AD B2C, I want to configure TheAccessHub Admin Tool with Azure AD B2C. My goal is to migrate customer accounts, administer CSR requests, synchronize data, and customize notifications.
+# Customer intent: As an administrator managing customer accounts in Azure AD B2C, I want to configure TheAccessHub Admin Tool with Azure AD B2C. My goal is to migrate customer accounts, administer CSR requests, synchronize data, and customize notifications.
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I want to configure Nevis with Azure Active Directory B2C for passwordless authentication. I need to enable customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
+# Customer intent: I'm a developer, and I want to configure Nevis with Azure Active Directory B2C for passwordless authentication. I need to enable customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
# Tutorial to configure Nevis with Azure Active Directory B2C for passwordless authentication
active-directory-b2c Partner Nok Nok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nok-nok.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party authentication provider. I want to learn how to configure Nok Nok Passport as an identity provider (IdP) in Azure AD B2C. My goal is to enable passwordless FIDO authentication for my users.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party authentication provider. I want to learn how to configure Nok Nok Passport as an identity provider (IdP) in Azure AD B2C. My goal is to enable passwordless FIDO authentication for my users.
# Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication
active-directory-b2c Partner Onfido https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with Onfido. I need to configure the Onfido service to verify identity in the sign-up or sign-in flow. My goal is to meet Know Your Customer and identity requirements and provide a reliable onboarding experience, while reducing fraud.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with Onfido. I need to configure the Onfido service to verify identity in the sign-up or sign-in flow. My goal is to meet Know Your Customer and identity requirements and provide a reliable onboarding experience, while reducing fraud.
# Tutorial for configuring Onfido with Azure Active Directory B2C
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
Last updated 01/26/2024
-#customer intent: I'm a developer, and I want to learn how to configure Ping Identity with Azure Active Directory B2C for secure hybrid access (SHA). I need to extend the capabilities of Azure AD B2C and enable secure hybrid access using PingAccess and PingFederate.
+# Customer intent: I'm a developer, and I want to learn how to configure Ping Identity with Azure Active Directory B2C for secure hybrid access (SHA). I need to extend the capabilities of Azure AD B2C and enable secure hybrid access using PingAccess and PingFederate.
# Tutorial: Configure Ping Identity with Azure Active Directory B2C for secure hybrid access
active-directory-b2c Partner Saviynt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md
Last updated 01/26/2024
-#customer intent: As a security manager, I want to integrate Azure Active Directory B2C with Saviynt. I need visibility, security, and governance over user life-cycle management and access control.
+# Customer intent: As a security manager, I want to integrate Azure Active Directory B2C with Saviynt. I need visibility, security, and governance over user life-cycle management and access control.
# Tutorial to configure Saviynt with Azure Active Directory B2C
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md
Last updated 01/26/2024
-#customer intent: As an IT admin, I want to integrate Azure Active Directory B2C with StrataMaverics Identity Orchestrator. I need to protect on-premises applications and enable customer single sign-on (SSO) to hybrid apps.
+# Customer intent: As an IT admin, I want to integrate Azure Active Directory B2C with Strata Maverics Identity Orchestrator. I need to protect on-premises applications and enable customer single sign-on (SSO) to hybrid apps.
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md
zone_pivot_groups: b2c-policy-type
-#customer intent: I'm a developer integrating Azure AD B2C authentication with Trusona Authentication Cloud. I want to configure Trusona Authentication Cloud as an identity provider (IdP) in Azure AD B2C, so I can enable passwordless authentication and provide a better user experience for my web application users.
+# Customer intent: I'm a developer integrating Azure AD B2C authentication with Trusona Authentication Cloud. I want to configure Trusona Authentication Cloud as an identity provider (IdP) in Azure AD B2C, so I can enable passwordless authentication and provide a better user experience for my web application users.
# Configure Trusona Authentication Cloud with Azure Active Directory B2C
active-directory-b2c Partner Typingdna https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-typingdna.md
Last updated 01/26/2024
-#customer intent: I'm an Azure AD B2C administrator, and I want to integrate TypingDNA with Azure AD B2C. I need to comply with Payment Services Directive 2 (PSD2) transaction requirements through keystroke dynamics and strong customer authentication.
+# Customer intent: I'm an Azure AD B2C administrator, and I want to integrate TypingDNA with Azure AD B2C. I need to comply with Payment Services Directive 2 (PSD2) transaction requirements through keystroke dynamics and strong customer authentication.
# Tutorial for configuring TypingDNA with Azure Active Directory B2C
active-directory-b2c Partner Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-web-application-firewall.md
Last updated 01/26/2024
-#customer intent: I'm a developer configuring Azure Active Directory B2C with Azure Web Application Firewall. I want to enable the WAF service for my B2C tenant with a custom domain, so I can protect my web applications from common exploits and vulnerabilities.
+# Customer intent: I'm a developer configuring Azure Active Directory B2C with Azure Web Application Firewall. I want to enable the WAF service for my B2C tenant with a custom domain, so I can protect my web applications from common exploits and vulnerabilities.
active-directory-b2c Partner Whoiam Rampart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md
-#customer intent: I'm a developer integrating WhoIAM Rampart with Azure AD B2C. I need to configure and integrate Rampart with Azure AD B2C using custom policies. My goal is to enable an integrated helpdesk and invitation-gated user registration experience for my application.
+# Customer intent: I'm a developer integrating WhoIAM Rampart with Azure AD B2C. I need to configure and integrate Rampart with Azure AD B2C using custom policies. My goal is to enable an integrated helpdesk and invitation-gated user registration experience for my application.
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
Last updated 01/26/2024
-#customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party identity management system. I need a tutorial to configure WhoIAM Branded Identity Management System (BRIMS) with Azure AD B2C. My goal is to enable user verification with voice, SMS, and email in my application.
+# Customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party identity management system. I need a tutorial to configure WhoIAM Branded Identity Management System (BRIMS) with Azure AD B2C. My goal is to enable user verification with voice, SMS, and email in my application.
# Tutorial to configure Azure Active Directory B2C with WhoIAM
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
Last updated 01/26/2024
-#customer intent: As an Azure AD B2C administrator, I want to configure xID as an identity provider, so users can sign in using xID and authenticate with their digital identity on their device.
+# Customer intent: As an Azure AD B2C administrator, I want to configure xID as an identity provider, so users can sign in using xID and authenticate with their digital identity on their device.
# Configure xID with Azure Active Directory B2C for passwordless authentication
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
Last updated 01/26/2024
-#customer intent: As an IT admin, I want to integrate Azure Active Directory B2C authentication with Zscaler Private Access. I need to provide secure access to private applications and assets without the need for a virtual private network (VPN).
+# Customer intent: As an IT admin, I want to integrate Azure Active Directory B2C authentication with Zscaler Private Access. I need to provide secure access to private applications and assets without the need for a virtual private network (VPN).
# Tutorial: Configure Zscaler Private Access with Azure Active Directory B2C
ai-services Liveness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/liveness.md
The liveness detection solution successfully defends against a variety of spoof
- Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- You need the key and endpoint from the resource you create to connect your application to the Face service. You'll paste your key and endpoint into the code later in the quickstart.
- You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-- Access to the Azure AI Vision SDK for mobile (IOS and Android). To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page.
+- Access to the Azure AI Vision Face Client SDK for mobile (iOS and Android). To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page.
## Perform liveness detection
The high-level steps involved in liveness with verification orchestration are il
```json
Request:
- curl --location '<insert-api-endpoint>/face/v1.1-preview.1/detectlivenesswithverify/singlemodal' \
+ curl --location '<insert-api-endpoint>/face/v1.1-preview.1/detectlivenesswithverify/singlemodal/sessions/3847ffd3-4657-4e6c-870c-8e20de52f567' \
--header 'Content-Type: multipart/form-data' \
--header 'apim-recognition-model-preview-1904: true' \
--header 'Authorization: Bearer.<session-authorization-token> \
ai-services Concept Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-image-retrieval.md
Multi-modal embedding has a variety of applications in different fields, includi
## What are vector embeddings?
-Vector embeddings are a way of representing content&mdash;text or images&mdash;as vectors of real numbers in a high-dimensional space. Vector embeddings are often learned from large amounts of textual and visual data using machine learning algorithms, such as neural networks. Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or context in which it commonly appears.
+Vector embeddings are a way of representing content&mdash;text or images&mdash;as vectors of real numbers in a high-dimensional space. Vector embeddings are often learned from large amounts of textual and visual data using machine learning algorithms, such as neural networks.
+
+Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or context in which it commonly appears. In Azure AI Vision, image and text vector embeddings have 1024 dimensions.
> [!NOTE]
> Vector embeddings can only be meaningfully compared if they are from the same model type.
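As an illustration only (this sketch isn't part of the article), a text string could be vectorized with a REST call along these lines; the `retrieval:vectorizeText` route, API version, and resource name are assumptions to verify against the current reference documentation. The response carries a 1024-dimension vector that can be compared, for example by cosine similarity, only against vectors produced by the same model.

```bash
# Hedged sketch: request a text embedding from the multi-modal embeddings API.
# The endpoint, API version, and key variable are placeholders/assumptions.
curl -X POST \
  "https://<your-resource>.cognitiveservices.azure.com/computervision/retrieval:vectorizeText?api-version=2023-04-01-preview" \
  -H "Ocp-Apim-Subscription-Key: $VISION_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "a photo of a mountain lake at sunrise"}'
```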
The image and video retrieval services return a field called "relevance." The te
> [!IMPORTANT]
> The relevance score is a good measure to rank results such as images or video frames with respect to a single query. However, the relevance score cannot be accurately compared across queries. Therefore, it's not possible to easily map the relevance score to a confidence level. It's also not possible to trivially create a threshold algorithm to eliminate irrelevant results based solely on the relevance score.
+## Input requirements
+
+**Image input**
+- The file size of the image must be less than 20 megabytes (MB)
+- The dimensions of the image must be greater than 10 x 10 pixels and less than 16,000 x 16,000 pixels
+
+**Text input**
+- The text string must be between (inclusive) one word and 70 words.
+
## Next steps
Enable Multi-modal embeddings for your search service and follow the steps to generate vector embeddings for text and images.
ai-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-image-analysis.md
Image Analysis works on images that meet the following requirements:
- The file size of the image must be less than 20 megabytes (MB)
- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
+> [!TIP]
+> Input requirements for multi-modal embeddings are different and are listed in [Multi-modal embeddings](/azure/ai-services/computer-vision/concept-image-retrieval#input-requirements)
#### [Version 3.2](#tab/3-2)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/whats-new.md
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* [Native document support](native-document-support/use-native-documents.md) is now available in `2023-11-15-preview` public preview.
+## December 2023
+
+* [Text Analytics for health](./text-analytics-for-health/overview.md) new model 2023-12-01 is now available.
+* New Relation Type: `BodySiteOfExamination`
+ * Quality enhancements to support radiology documents
+ * Significant latency improvements
+ * Various bug fixes: Improvements across NER, Entity Linking, Relations and Assertion Detection
+
## November 2023
* [Named Entity Recognition Container](./named-entity-recognition/how-to/use-containers.md) is now Generally Available (GA).
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
The default content filtering configuration is set to filter at the medium sever
| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
|-|--|--|--|
| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
-| Medium, high | Yes | Yes | Default setting. Content detected at severity level low is not filtered, content at medium and high is filtered.|
+| Medium, high | Yes | Yes | Default setting. Content detected at severity level low isn't filtered, content at medium and high is filtered.|
| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.|
| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
-<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Content filtering control does not apply to content filters for DALL-E (preview) or GPT-4 Turbo with Vision (preview). Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu).
+<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Content filtering control doesn't apply to content filters for DALL-E (preview) or GPT-4 Turbo with Vision (preview). Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu).
Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
For details on the inference REST API endpoints for Azure OpenAI and how to crea
}
```
-## Streaming
+## Content streaming
-Azure OpenAI Service includes a content filtering system that works alongside core models. The following section describes the AOAI streaming experience and options in the context of content filters.
+This section describes the Azure OpenAI content streaming experience and options. With approval, you have the option to receive content from the API as it's generated, instead of waiting for chunks of content that have been verified to pass your content filters.
### Default
-The content filtering system is integrated and enabled by default for all customers. In the default streaming scenario, completion content is buffered, the content filtering system runs on the buffered content, and – depending on content filtering configuration – content is either returned to the user if it does not violate the content filtering policy (Microsoft default or custom user configuration), or it's immediately blocked which returns a content filtering error, without returning harmful completion content. This process is repeated until the end of the stream. Content was fully vetted according to the content filtering policy before returned to the user. Content is not returned token-by-token in this case, but in "content chunks" of the respective buffer size.
+The content filtering system is integrated and enabled by default for all customers. In the default streaming scenario, completion content is buffered, the content filtering system runs on the buffered content, and – depending on the content filtering configuration – content is either returned to the user if it doesn't violate the content filtering policy (Microsoft's default or a custom user configuration), or it's immediately blocked and returns a content filtering error, without returning the harmful completion content. This process is repeated until the end of the stream. Content is fully vetted according to the content filtering policy before it's returned to the user. Content isn't returned token-by-token in this case, but in "content chunks" of the respective buffer size.
### Asynchronous modified filter
-Customers who have been approved for modified content filters can choose Asynchronous Modified Filter as an additional option, providing a new streaming experience. In this case, content filters are run asynchronously, completion content is returned immediately with a smooth token-by-token streaming experience. No content is buffered, the content filters run asynchronously, which allows for zero latency in this context.
+Customers who have been approved for modified content filters can choose the asynchronous modified filter as an additional option, providing a new streaming experience. In this case, content filters are run asynchronously, and completion content is returned immediately with a smooth token-by-token streaming experience. No content is buffered, which allows for zero latency.
-> [!NOTE]
-> Customers must be aware that while the feature improves latency, it can bring a trade-off in terms of the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and the content filtering signal in case of a policy violation are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user.
+Customers must be aware that while the feature improves latency, it's a trade-off against the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and policy violation signals are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user.
-**Annotations**: Annotations and content moderation messages are continuously returned during the stream. We strongly recommend to consume annotations and implement additional AI content safety mechanisms, such as redacting content or returning additional safety information to the user.
+**Annotations**: Annotations and content moderation messages are continuously returned during the stream. We strongly recommend you consume annotations in your app and implement additional AI content safety mechanisms, such as redacting content or returning additional safety information to the user.
-**Content filtering signal**: The content filtering error signal is delayed; in case of a policy violation, it's returned as soon as it's available, and the stream is stopped. The content filtering signal is guaranteed within ~1,000-character windows in case of a policy violation.
+**Content filtering signal**: The content filtering error signal is delayed. In case of a policy violation, it's returned as soon as it's available, and the stream is stopped. The content filtering signal is guaranteed within a ~1,000-character window of the policy-violating content.
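To make the streaming behavior concrete, here is a minimal, hedged sketch (not taken from the article) of calling a chat completions deployment with `"stream": true` from a terminal; the resource name, deployment name, and API version are placeholders. With the default filter, the `data:` events arrive in vetted chunks; with the asynchronous modified filter, tokens arrive immediately and annotation messages like those shown later in this section follow separately.

```bash
# Hedged sketch: stream a chat completion and watch the raw "data:" events,
# which is where completion tokens and annotation messages arrive.
# Resource name, deployment name, and API version are assumptions.
curl -N "https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment>/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "What is color?"}], "stream": true}'
```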
-Approval for Modified Content Filtering is required for access to Streaming – Asynchronous Modified Filter. The application can be found [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it via Azure OpenAI Studio please follow the instructions [here](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select "Asynchronous Modified Filter" in the Streaming section, as shown in the below screenshot.
+Approval for modified content filtering is required for access to the asynchronous modified filter. The application can be found [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it in Azure OpenAI Studio, follow the [Content filter how-to guide](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration, and select **Asynchronous Modified Filter** in the Streaming section.
-### Overview
+### Comparison of content filtering modes
-| Category | Streaming - Default | Streaming - Asynchronous Modified Filter |
+| | Streaming - Default | Streaming - Asynchronous Modified Filter |
|---|---|---|
|Status |GA |Public Preview |
-| Access | Enabled by default, no action needed |Customers approved for Modified Content Filtering can configure directly via Azure OpenAI Studio (as part of a content filtering configuration; applied on deployment-level) |
-| Eligibility |All customers |Customers approved for Modified Content Filtering |
-|Modality and Availability |Text; all GPT-models |Text; all GPT-models except gpt-4-vision |
+| Eligibility |All customers |Customers approved for modified content filtering |
+| How to enable | Enabled by default, no action needed |Customers approved for modified content filtering can configure it directly in Azure OpenAI Studio (as part of a content filtering configuration, applied at the deployment level) |
+|Modality and availability |Text; all GPT models |Text; all GPT models except gpt-4-vision |
|Streaming experience |Content is buffered and returned in chunks |Zero latency (no buffering, filters run asynchronously) |
-|Content filtering signal |Immediate filtering signal |Delayed filtering signal (in up to ~1,000 char increments) |
-|Content filtering configurations |Supports default and any customer-defined filter setting (including optional models) |Supports default and any customer-defined filter setting (including optional models) |
+|Content filtering signal |Immediate filtering signal |Delayed filtering signal (in up to ~1,000-character increments) |
+|Content filtering configurations |Supports default and any customer-defined filter setting (including optional models) |Supports default and any customer-defined filter setting (including optional models) |
-### Annotations and sample response stream
+### Annotations and sample responses
#### Prompt annotation message
data: {
#### Annotation message
-The text field will always be an empty string, indicating no new tokens. Annotations will only be relevant to already-sent tokens. There may be multiple Annotation Messages referring to the same tokens.
+The text field will always be an empty string, indicating no new tokens. Annotations will only be relevant to already-sent tokens. There may be multiple annotation messages referring to the same tokens.
-"start_offset" and "end_offset" are low-granularity offsets in text (with 0 at beginning of prompt) which the annotation is relevant to.
+`"start_offset"` and `"end_offset"` are low-granularity offsets in text (with 0 at beginning of prompt) to mark which text the annotation is relevant to.
-"check_offset" represents how much text has been fully moderated. It is an exclusive lower bound on the end_offsets of future annotations. It is nondecreasing.
+`"check_offset"` represents how much text has been fully moderated. It's an exclusive lower bound on the `"end_offset"` values of future annotations. It's non-decreasing.
```json
data: {
data: {
```
-### Sample response stream
+#### Sample response stream (passes filters)
-Below is a real chat completion response using Asynchronous Modified Filter. Note how prompt annotations are not changed; completion tokens are sent without annotations; and new annotation messages are sent without tokens, instead associated with certain content filter offsets.
+Below is a real chat completion response using asynchronous modified filter. Note how the prompt annotations aren't changed, completion tokens are sent without annotations, and new annotation messages are sent without tokens&mdash;they are instead associated with certain content filter offsets.
`{"temperature": 0, "frequency_penalty": 0, "presence_penalty": 1.0, "top_p": 1.0, "max_tokens": 800, "messages": [{"role": "user", "content": "What is color?"}], "stream": true}`
data: {"id":"","object":"","created":0,"model":"","choices":[{"index":0,"finish_
data: [DONE]
```
-### Sample response stream (blocking)
+#### Sample response stream (blocked by filters)
`{"temperature": 0, "frequency_penalty": 0, "presence_penalty": 1.0, "top_p": 1.0, "max_tokens": 800, "messages": [{"role": "user", "content": "Tell me the lyrics to \"Hey Jude\"."}], "stream": true}`
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Last updated 1/26/2024
zone_pivot_groups: speech-cli-rest
-#customer intent: As a user who implements audio transcription, I want create transcriptions in bulk so that I don't have to submit audio content repeatedly.
+# Customer intent: As a user who implements audio transcription, I want to create transcriptions in bulk so that I don't have to submit audio content repeatedly.
# Create a batch transcription
aks Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/artifact-streaming.md
Now that you enabled Artifact Streaming on a premium ACR and connected that to a
* Check if your node pool has Artifact Streaming enabled using the [`az aks nodepool show`][az-aks-nodepool-show] command.

    ```azurecli-interactive
- az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool grep ArtifactStreamingConfig
+ az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool --query artifactStreamingProfile
    ```

    In the output, check that the `Enabled` field is set to `true`.
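For a quicker check, a narrower query can print only the flag. This is a hedged sketch that assumes the node pool surfaces an `enabled` property under `artifactStreamingProfile` in the CLI's JSON output; the resource names are placeholders.

```bash
# Hedged sketch: print only the artifact streaming flag for the node pool.
az aks nodepool show \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name myNodePool \
  --query "artifactStreamingProfile.enabled" \
  --output tsv
```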
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Service (AKS)
-description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks.
+ Title: Use a customer-managed key to encrypt Azure managed disks in Azure Kubernetes Service (AKS)
+description: Bring your own keys (BYOK) to encrypt managed OS and data disks in AKS.
Previously updated : 11/24/2023 Last updated : 02/01/2024
-# Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service (AKS)
+# Bring your own keys (BYOK) with Azure managed disks in Azure Kubernetes Service (AKS)
-Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For more control over encryption keys, you can supply customer-managed keys to use for encryption at rest for both the OS and data disks for your AKS clusters.
+Azure encrypts all data in a managed disk at rest. By default, data is encrypted with Microsoft-managed keys. For more control over encryption keys, you can supply customer-managed keys to use for encryption at rest for both the OS and data disks for your AKS clusters.
Learn more about customer-managed keys on [Linux][customer-managed-keys-linux] and [Windows][customer-managed-keys-windows].
Learn more about customer-managed keys on [Linux][customer-managed-keys-linux] a
## Limitations
-* Encryption of OS disk with customer-managed keys can only be enabled when creating an AKS cluster.
+* Encryption of an OS disk with customer-managed keys can only be enabled when creating an AKS cluster.
* Virtual nodes are not supported.
-* When encrypting ephemeral OS disk-enabled node pool with customer-managed keys, if you want to rotate the key in Azure Key Vault, you need to:
+* When encrypting an ephemeral OS disk-enabled node pool with customer-managed keys, if you want to rotate the key in Azure Key Vault, you need to:
    * Scale down the node pool count to 0
    * Rotate the key
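As a rough illustration of that rotation sequence (a sketch, not the article's own procedure; the resource names, node count, and use of `az keyvault key rotate` are assumptions), the steps might look like this from the CLI:

```bash
# Hedged sketch of the rotate-key sequence for an ephemeral OS disk node pool.
# Resource names and the final node count are placeholders.
az aks nodepool scale --resource-group myResourceGroup --cluster-name myAKSCluster \
  --name mynodepool --node-count 0

# Rotate (create a new version of) the key in Azure Key Vault; assumes permissions and a rotation policy are in place.
az keyvault key rotate --vault-name myKeyVault --name myDiskEncryptionKey

# Scale the node pool back to its original size.
az aks nodepool scale --resource-group myResourceGroup --cluster-name myAKSCluster \
  --name mynodepool --node-count 3
```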
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Last updated 01/16/2024
Application development continues to move toward a container-based approach, increasing our need to orchestrate and manage resources. As the leading platform, Kubernetes provides reliable scheduling of fault-tolerant application workloads. Azure Kubernetes Service (AKS), a managed Kubernetes offering, further simplifies container-based application deployment and management. This article introduces core concepts:
+
* Kubernetes infrastructure components:
- * *control plane*
- * *nodes*
- * *node pools*
-* Workload resources:
- * *pods*
- * *deployments*
- * *sets*
+
+ * *control plane*
+ * *nodes*
+ * *node pools*
+
+* Workload resources:
+
+ * *pods*
+ * *deployments*
+ * *sets*
+
* Group resources using *namespaces*.
## What is Kubernetes?
When you create an AKS cluster, the following namespaces are available:
| *kube-system* | Where core resources exist, such as network features like DNS and proxy, or the Kubernetes dashboard. You typically don't deploy your own applications into this namespace. |
| *kube-public* | Typically not used, but can be used for resources to be visible across the whole cluster, and can be viewed by any user. |
- For more information, see [Kubernetes namespaces][kubernetes-namespaces].
## Next steps
This article covers some of the core Kubernetes components and how they apply to AKS clusters. For more information on core Kubernetes and AKS concepts, see the following articles:
-- [Kubernetes / AKS access and identity][aks-concepts-identity]
-- [Kubernetes / AKS security][aks-concepts-security]
-- [Kubernetes / AKS virtual networks][aks-concepts-network]
-- [Kubernetes / AKS storage][aks-concepts-storage]
-- [Kubernetes / AKS scale][aks-concepts-scale]
+- [AKS access and identity][aks-concepts-identity]
+- [AKS security][aks-concepts-security]
+- [AKS virtual networks][aks-concepts-network]
+- [AKS storage][aks-concepts-storage]
+- [AKS scale][aks-concepts-scale]
<!-- EXTERNAL LINKS --> [cluster-api-provider-azure]: https://github.com/kubernetes-sigs/cluster-api-provider-azure
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
This article is intended to help you quickly get to deployment. Before going to
* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed. > [!NOTE]
-> This guidance can also be executed from a local developer command line with Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+> You can also execute this guidance from the [Azure Cloud Shell](/azure/cloud-shell/quickstart). This approach has all the prerequisite tools pre-installed, with the exception of Docker.
+>
+> [![Image of button to launch Cloud Shell in a new window.](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
* If running the commands in this guide locally (instead of Azure Cloud Shell): * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, Windows Subsystem for Linux).
The following steps guide you to create a Liberty runtime on AKS. After completi
1. Create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. 1. Select *East US* as **Region**.
+
+ Create environment variables in your shell for the resource group names for the cluster and the database.
-1. Select **Next**, enter the **AKS** pane. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults.
+ ### [Bash](#tab/in-bash)
+
+ ```bash
+ export RESOURCE_GROUP_NAME=<your-resource-group-name>
+ export DB_RESOURCE_GROUP_NAME=<your-resource-group-name>
+ ```
+
+ ### [PowerShell](#tab/in-powershell)
+
+ ```powershell
+ $Env:RESOURCE_GROUP_NAME="<your-resource-group-name>"
+ $Env:DB_RESOURCE_GROUP_NAME="<your-resource-group-name>"
+ ```
+
+
+
+1. Select **Next**, enter the **AKS** pane. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. The remaining values do not need to be changed from their default values.
1. Select **Next**, enter the **Load Balancing** pane. Next to **Connect to Azure Application Gateway?** select **Yes**. This section lets you customize the following deployment options.
- 1. You can customize the **virtual network** and **subnet** into which the deployment will place the resources. Leave these values at their defaults.
+ 1. You can customize the **virtual network** and **subnet** into which the deployment will place the resources. The remaining values do not need to be changed from their default values.
1. You can provide the **TLS/SSL certificate** presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Don't go to production using a self-signed certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](../active-directory/develop/howto-create-self-signed-certificate.md). 1. You can select **Enable cookie based affinity**, also known as sticky sessions. We want sticky sessions enabled for this article, so ensure this option is selected.
To avoid Azure charges, you should clean up unnecessary resources. When the clus
```bash az group delete --name $RESOURCE_GROUP_NAME --yes --no-wait
-az group delete --name <db-resource-group> --yes --no-wait
+az group delete --name $DB_RESOURCE_GROUP_NAME --yes --no-wait
```
### [PowerShell](#tab/in-powershell)
```powershell
az group delete --name $Env:RESOURCE_GROUP_NAME --yes --no-wait
-az group delete --name <db-resource-group> --yes --no-wait
+az group delete --name $Env:DB_RESOURCE_GROUP_NAME --yes --no-wait
```
aks Long Term Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/long-term-support.md
Title: Long term support for Azure Kubernetes Service (AKS)
-description: Learn about Azure Kubernetes Service (AKS) Long term support for Kubernetes
+ Title: Long-term support for Azure Kubernetes Service (AKS)
+description: Learn about Azure Kubernetes Service (AKS) long-term support for Kubernetes
Previously updated : 08/16/2023 Last updated : 01/24/2024
-#Customer intent: As a cluster operator or developer, I want to understand how Long Term Support for Kubernetes on AKS works.
+#Customer intent: As a cluster operator or developer, I want to understand how long-term support for Kubernetes on AKS works.
-# Long term support
-The Kubernetes community releases a new minor version approximately every four months, with a support window for each version for one year. This support in terms of Azure Kubernetes Service (AKS) is called "Community Support."
+# Long-term support
-AKS supports versions of Kubernetes that are within this Community Support window, to push bug fixes and security updates from community releases.
+The Kubernetes community releases a new minor version approximately every four months, with a support window for each version for one year. In Azure Kubernetes Service (AKS), this support window is called "Community support."
-While innovation delivered with this release cadence provides huge benefits to you, it challenges you to keep up to date with Kubernetes releases, which can be made more difficult based on the number of AKS clusters you have to maintain.
+AKS supports versions of Kubernetes that are within this Community support window, to push bug fixes and security updates from community releases.
+While innovation delivered with this release cadence provides huge benefits to you, it challenges you to keep up to date with Kubernetes releases, which can be made more difficult based on the number of AKS clusters you have to maintain.
## AKS support types
-After approximately one year, the Kubernetes version exits Community Support and your AKS clusters are now at-risk as bug fixes and security updates become unavailable.
-AKS provides one year Community Support and one year of Long Term Support (LTS) to back port security fixes from the community upstream in our public repository. Our upstream LTS working group contributes efforts back to the community to provide our customers with a longer support window.
+After approximately one year, the Kubernetes version exits Community support and your AKS clusters are now at risk as bug fixes and security updates become unavailable.
+
+AKS provides one year of Community support and one year of long-term support (LTS) to backport security fixes from the community upstream in our public repository. Our upstream LTS working group contributes efforts back to the community to provide our customers with a longer support window.
LTS is intended to give you an extended period to plan and test upgrades, covering two years from the General Availability of the designated Kubernetes version.
-| | Community Support |Long Term Support |
+| | Community support |Long-term support |
||||
| **When to use** | When you can keep up with upstream Kubernetes releases | When you need control over when to migrate from one version to another |
| **Support versions** | Three GA minor versions | One Kubernetes version (currently *1.27*) for two years |
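
To check which Kubernetes versions and support plans are currently offered in a region, you can query the AKS API; a minimal sketch, noting that the exact output columns depend on your Azure CLI version:

```azurecli
# List available Kubernetes versions for a region; recent CLI versions also show the support plan (for example, AKSLongTermSupport)
az aks get-versions --location eastus --output table
```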
+## Enable long-term support
-## Enable Long Term Support
-
-Enabling and disabling Long Term Support is a combination of moving your cluster to the Premium tier and explicitly selecting the LTS support plan.
+Enabling and disabling long-term support is a combination of moving your cluster to the Premium tier and explicitly selecting the LTS support plan.
> [!NOTE]
-> While it's possible to enable LTS when the cluster is in Community Support, you'll be charged once you enable the Premium tier.
+> While it's possible to enable LTS when the cluster is in Community support, you'll be charged once you enable the Premium tier.
### Create a cluster with LTS enabled
-```
+
+```azurecli
az aks create --resource-group myResourceGroup --name myAKSCluster --tier premium --k8s-support-plan AKSLongTermSupport --kubernetes-version 1.27
```
> [!NOTE]
-> Enabling and disabling LTS is a combination of moving your cluster to the Premium tier, as well as enabling Long Term Support. Both must either be turned on or off.
+> Enabling and disabling LTS is a combination of moving your cluster to the Premium tier, as well as enabling long-term support. Both must either be turned on or off.
### Enable LTS on an existing cluster
-```
+
+```azurecli
az aks update --resource-group myResourceGroup --name myAKSCluster --tier premium --k8s-support-plan AKSLongTermSupport
```
### Disable LTS on an existing cluster
-```
+
+```azurecli
az aks update --resource-group myResourceGroup --name myAKSCluster --tier [free|standard] --k8s-support-plan KubernetesOfficial
```
## Long-term support, add-ons, and features
-The AKS team currently tracks add-on versions where Kubernetes community support exists. Once a version leaves Community Support, we rely on Open Source projects for managed add-ons to continue that support. Due to various external factors, some add-ons and features may not support Kubernetes versions outside these upstream Community Support windows.
+
+The AKS team currently tracks add-on versions where Kubernetes Community support exists. Once a version leaves Community support, we rely on open source projects for managed add-ons to continue that support. Due to various external factors, some add-ons and features may not support Kubernetes versions outside these upstream Community support windows.
See the following table for a list of add-ons and features that aren't supported and the reason why.
See the following table for a list of add-ons and features that aren't supported
||
| Istio | The Istio support cycle is short (six months), and there will not be maintenance releases for Kubernetes 1.27 |
| Keda | Unable to guarantee future version compatibility with Kubernetes 1.27 |
-| Calico | Requires Calico Enterprise agreement past Community Support |
-| Cillium | Requires Cillium Enterprise agreement past Community Support |
+| Calico | Requires Calico Enterprise agreement past Community support |
+| Cillium | Requires Cillium Enterprise agreement past Community support |
| Azure Linux | Support timeframe for Azure Linux 2 ends during this LTS cycle |
| Key Management Service (KMS) | KMSv2 replaces KMS during this LTS cycle |
| Dapr | AKS extensions are not supported |
See the following table for a list of add-ons and features that aren't supported
| Open Service Mesh | OSM will be deprecated |
| AAD Pod Identity | Deprecated in favor of Workload Identity |
-
> [!NOTE]
->You can't move your cluster to Long Term support if any of these add-ons or features are enabled.
->Whilst these AKS managed add-ons aren't supported by Microsoft, you're able to install the Open Source versions of these on your cluster if you wish to use it past Community Support.
+>You can't move your cluster to long-term support if any of these add-ons or features are enabled.
+>While these AKS managed add-ons aren't supported by Microsoft, you can install the open-source versions of these add-ons on your cluster if you wish to use them past Community support.
## How we decide the next LTS version
+
Versions of Kubernetes LTS are available for two years from General Availability. We mark a later version of Kubernetes as LTS based on the following criteria:
+
* Sufficient time has passed for customers to migrate from the prior LTS version to the current one
* The previous version has had a two-year support window

Read the AKS release notes to stay informed of when you're able to plan your migration.

### Migrate from LTS to Community support
+
Using LTS is a way to extend your window to plan a Kubernetes version upgrade. You may want to migrate to a version of Kubernetes that is within the [standard support window](supported-kubernetes-versions.md#kubernetes-version-support-policy). To move from an LTS enabled cluster to a version of Kubernetes that is within the standard support window, you need to disable LTS on the cluster:
-```
+```azurecli
az aks update --resource-group myResourceGroup --name myAKSCluster --tier [free|standard] --k8s-support-plan KubernetesOfficial
```
And then upgrade the cluster to a later supported version:
-```
+```azurecli
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.28.3
```
+
> [!NOTE]
> Kubernetes 1.28.3 is used as an example here. Check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.

There are approximately two years between one LTS version and the next. Because there's no upstream support for migrating more than two minor versions, there's a high likelihood your application depends on Kubernetes APIs that have been deprecated. We recommend you thoroughly test your application on the target LTS Kubernetes version and carry out a blue/green deployment from one version to another.

### Migrate from LTS to the next LTS release
-The upstream Kubernetes community supports a two minor version upgrade path. The process migrates the objects in your Kubernetes cluster as part of the upgrade process, and provides a tested, and accredited migration path.
+
+The upstream Kubernetes community supports a two-minor-version upgrade path. The process migrates the objects in your Kubernetes cluster as part of the upgrade process, and provides a tested, and accredited migration path.
For customers that wish to carry out an in-place migration, the AKS service will migrate your control plane from the previous LTS version to the latest, and then migrate your data plane. To carry out an in-place upgrade to the latest LTS version, you need to specify an LTS enabled Kubernetes version as the upgrade target.
-```
+```azurecli
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.30.2
```
> [!NOTE]
-> Kubernetes 1.30.2 is used as an example here, please check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.
+> Kubernetes 1.30.2 is used as an example version in this article. Check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.
aks Monitor Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md
This article helps you understand this new feature, how to implement it, and how
- [Private link](../azure-monitor/logs/private-link-security.md) isn't supported.
- Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported.
- The cluster must use [managed identity authentication](use-managed-identity.md).
-- This feature is currently available in the following regions: West US 2, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, and Germany West Central.
+- This feature is currently available in the following regions: West US 2, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, Germany West Central, Israel Central, Italy North, Japan East, JioIndia West, Korea Central, Malaysia South, Mexico Central, and North Central.
### Install or update the `aks-preview` Azure CLI extension
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
spec:
paths: - path: /blog backend:
- service
+ service:
name: blogservice port: 80 - path: /store backend:
- service
+ service:
name: storeservice port: 80 ```
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
Title: Develop on Azure Kubernetes Service (AKS) with Helm
description: Use Helm with AKS and Azure Container Registry to package and run application containers in a cluster. Previously updated : 01/18/2024 Last updated : 01/25/2024 # Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm
You need to store your container images in an Azure Container Registry (ACR) to
az group create --name myResourceGroup --location eastus ```
-2. Create an Azure Container Registry using the [az acr create][az-acr-create] command. The following example creates an ACR named *myhelmacr* with the *Basic* SKU.
+2. Create an Azure Container Registry with a unique name by calling the [az acr create][az-acr-create] command. The following example creates an ACR named *myhelmacr* with the *Basic* SKU.
```azurecli-interactive az acr create --resource-group myResourceGroup --name myhelmacr --sku Basic
You need to store your container images in an Azure Container Registry (ACR) to
New-AzResourceGroup -Name myResourceGroup -Location eastus ```
-2. Create an Azure Container Registry using the [New-AzContainerRegistry][new-azcontainerregistry] cmdlet. The following example creates an ACR named *myhelmacr* with the *Basic* SKU.
+2. Create an Azure Container Registry with a unique name by calling the [New-AzContainerRegistry][new-azcontainerregistry] cmdlet. The following example creates an ACR named *myhelmacr* with the *Basic* SKU.
```azurepowershell-interactive
- New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myhelmacr -Sku Basic
+ New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myhelmacr -Sku Basic -Location eastus
``` Your output should look similar to the following condensed example output. Take note of your *loginServer* value for your ACR to use in a later step.
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
Previously updated : 11/30/2023 Last updated : 01/31/2024
Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $api
-TargetBlobName $blobName -AccessType "UserAssignedManagedIdentity" ` -identityClientId $identityid ```
-Backup is a long-running operation that may take several minutes to complete.
+Backup is a long-running operation that may take several minutes to complete. During this time the API gateway continues to handle requests, but the state of the service is Updating.
### [REST](#tab/rest)
In the body of the request, specify the target storage account name, blob contai
Set the value of the `Content-Type` request header to `application/json`.
-Backup is a long-running operation that may take several minutes to complete. If the request succeeded and the backup process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the backup is in progress, you continue to receive a `202 Accepted` status code. A Response code of `200 OK` indicates successful completion of the backup operation.
+Backup is a long-running operation that may take several minutes to complete. If the request succeeded and the backup process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the backup is in progress, you continue to receive a `202 Accepted` status code. During this time the API gateway continues to handle requests, but the state of the service is Updating. A Response code of `200 OK` indicates successful completion of the backup operation.
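
If it's useful to see that 202/Location polling pattern end to end, here's a rough sketch using `curl` with an Azure CLI token. The subscription, resource group, service name, API version, and request body fields are placeholders based on the parameters described above; check them against the API Management REST API reference before use.

```bash
TOKEN=$(az account get-access-token --query accessToken --output tsv)

# Start the backup; the 202 response carries a Location header that points at the operation status
LOCATION=$(curl -sSi -X POST \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/backup?api-version=2022-08-01" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"storageAccount":"<storage-account>","containerName":"<container>","backupName":"<blob-name>","accessType":"SystemAssignedManagedIdentity"}' \
  | tr -d '\r' | awk 'tolower($1)=="location:" {print $2}')

# Poll the status URL; 202 means the backup is still running, 200 means it completed
curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer $TOKEN" "$LOCATION"
```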
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
Previously updated : 12/21/2021 Last updated : 01/29/2024
API Management uses a public IP address for a connection outside the VNet or a p
* When a request is sent from API Management to a public (internet-facing) backend, a public IP address will always be visible as the origin of the request.
-## IP addresses of Consumption tier API Management service
+## IP addresses of Consumption, Basic v2, and Standard v2 tier API Management service
-If your API Management service is a Consumption tier service, it doesn't have a dedicated IP address. Consumption tier service runs on a shared infrastructure and without a deterministic IP address.
+If your API Management instance is created in a service tier that runs on a shared infrastructure, it doesn't have a dedicated IP address. Currently, instances in the following service tiers run on a shared infrastructure and without a deterministic IP address: Consumption, Basic v2 (preview), Standard v2 (preview).
-If you need to add the outbound IP addresses used by your Consumption tier instance to an allowlist, you can add the instance's data center (Azure region) to an allowlist. You can [download a JSON file that lists IP addresses for all Azure data centers](https://www.microsoft.com/download/details.aspx?id=56519). Then find the JSON fragment that applies to the region that your instance runs in.
+If you need to add the outbound IP addresses used by your Consumption, Basic v2, or Standard v2 tier instance to an allowlist, you can add the instance's data center (Azure region) to an allowlist. You can [download a JSON file that lists IP addresses for all Azure data centers](https://www.microsoft.com/download/details.aspx?id=56519). Then find the JSON fragment that applies to the region that your instance runs in.
For example, the following JSON fragment is what the allowlist for Western Europe might look like:
api-management Api Management Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-migrate.md
Last updated 08/20/2021
-#customerintent: As an Azure service administrator, I want to move my service resources to another Azure region.
+# Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
# How to move Azure API Management across regions
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
A subscriber can use an API Management subscription key in one of two ways:
> **Ocp-Apim-Subscription-Key** is the default name of the subscription key header, and **subscription-key** is the default name of the query parameter. If desired, you may modify these names in the settings for each API. For example, in the portal, update these names on the **Settings** tab of an API. > [!NOTE]
-> When included in a request header or query parameter, the subscription key by default is passed to the backend and may be exposed in backend monitoring logs or other systems. If this is considered sensitive data, you can configure a policy in the `outbound` section to remove the subscription key header ([`set-header`](set-header-policy.md)) or query parameter ([`set-query-parameter`](set-query-parameter-policy.md)).
+> When included in a request header or query parameter, the subscription key by default is passed to the backend and may be exposed in backend monitoring logs or other systems. If this is considered sensitive data, you can configure a policy at the end of the `inbound` section to remove the subscription key header ([`set-header`](set-header-policy.md)) or query parameter ([`set-query-parameter`](set-query-parameter-policy.md)).
## Enable or disable subscription requirement for API or product access
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
Most new instances created in service tiers other than the Consumption tier are
## What are the compute platforms for API Management?
-The following table summarizes the compute platforms currently used in the **Consumption**, **Developer**, **Basic**, **Standard**, and **Premium** tiers of API Management.
+The following table summarizes the compute platforms currently used in the **Consumption**, **Developer**, **Basic**, **Standard**, and **Premium** tiers of API Management. This table doesn't apply to the [v2 pricing tiers (preview)](#what-about-the-v2-pricing-tiers).
| Version | Description | Architecture | Tiers |
| -| -| -- | - |
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
--+ Last updated 02/18/2021
For a conceptual overview of API authorization, see [Authentication and authoriz
## Aims
-We're going to see how API Management can be used in a simplified scenario with Azure Functions and Azure AD B2C. You'll create a JavaScript (JS) app calling an API, that signs in users with Azure AD B2C. Then you'll use API Management's validate-jwt, CORS, and Rate Limit By Key policy features to protect the Backend API.
+We're going to see how API Management can be used in a simplified scenario with Azure Functions and Azure AD B2C. You'll create a JavaScript (JS) app calling an API that signs in users with Azure AD B2C. Then you'll use API Management's validate-jwt, CORS, and Rate Limit By Key policy features to protect the Backend API.
For defense in depth, we then use EasyAuth to validate the token again inside the back-end API and ensure that API management is the only service that can call the Azure Functions backend.
Here's a quick overview of the steps:
1. Create the sign-up and sign-in policies to allow users to sign in with Azure AD B2C 1. Configure API Management with the new Azure AD B2C Client IDs and keys to Enable OAuth2 user authorization in the Developer Console 1. Build the Function API
-1. Configure the Function API to enable EasyAuth with the new Azure AD B2C Client IDΓÇÖs and Keys and lock down to APIM VIP
+1. Configure the Function API to enable EasyAuth with the new Azure AD B2C Client IDs and Keys and lock down to APIM VIP
1. Build the API Definition in API Management
1. Set up OAuth2 for the API Management API configuration
1. Set up the **CORS** policy and add the **validate-jwt** policy to validate the OAuth token for every incoming request
1. Build the calling application to consume the API
1. Upload the JS SPA Sample
-1. Configure the Sample JS Client App with the new Azure AD B2C Client IDΓÇÖs and keys
+1. Configure the Sample JS Client App with the new Azure AD B2C Client IDs and keys
1. Test the Client Application > [!TIP]
Open the Azure AD B2C blade in the portal and do the following steps.
> > We still have no IP security applied. If you have a valid key and OAuth2 token, anyone can call this from anywhere; ideally, we want to force all requests to come via API Management. >
- > If you're using APIM Consumption tier then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management Standard SKU and above [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the Azure API Management Consumption tier, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for the Consumption tier - steps 12-17 below do not apply.
+ > If you're using the API Management Consumption, Basic v2, or Standard v2 tier, then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-basic-v2-and-standard-v2-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management dedicated tiers, [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the tiers that run on shared infrastructure, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for these tiers, steps 12-17 below do not apply.
1. Close the 'Authentication' blade from the App Service / Functions portal.
1. Open the *API Management blade of the portal*, then open *your instance*.
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
In this article, you learn how to:
With this selection, the API is exposed as SOAP, and API consumers have to use SOAP rules. If you want to "restify" the API, follow the steps in [Import a SOAP API and convert it to REST](restify-soap-api.md). ![Create SOAP API from WSDL specification](./media/import-soap-api/pass-through.png)
-1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**.
+1. The following API settings are filled automatically based on information from the SOAP API: **Display name**, **Name**, **Description**. Operations are filled automatically with **Display name**, **URL**, and **Description**, and receive a system-generated **Name**.
1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. For more information about API settings, see [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
api-management Restify Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/restify-soap-api.md
Complete the following quickstart: [Create an Azure API Management instance](get
![SOAP to REST](./media/restify-soap-api/soap-to-rest.png)
-1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**.
+1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**. Operations are filled automatically with **Display name**, **URL**, and **Description**, and receive a system-generated **Name**.
1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. For more information about API settings, see [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
Operations can be called directly from the Azure portal, which provides a conven
## Next steps > [!div class="nextstepaction"]
-> [Transform and protect a published API](transform-api.md)
+> [Transform and protect a published API](transform-api.md)
app-service Manage Create Arc Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md
The [custom location](../azure-arc/kubernetes/custom-locations.md) in Azure is u
```bash CUSTOM_LOCATION_NAME="my-custom-location" # Name of the custom location
- CONNECTED_CLUSTER_ID=$(az connectedk8s show --resource-group $GROUP_NAME --name $CLUSTER_NAME-query id --output tsv)
+ CONNECTED_CLUSTER_ID=$(az connectedk8s show --resource-group $GROUP_NAME --name $CLUSTER_NAME --query id --output tsv)
``` # [PowerShell](#tab/powershell)
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
Select **On** for either **Application Logging (Filesystem)** or **Application L
The **Filesystem** option is for temporary debugging purposes, and turns itself off in 12 hours. The **Blob** option is for long-term logging, and needs a blob storage container to write logs to. The **Blob** option also includes additional information in the log messages, such as the ID of the origin VM instance of the log message (`InstanceId`), thread ID (`Tid`), and a more granular timestamp ([`EventTickCount`](/dotnet/api/system.datetime.ticks)).
-> [!NOTE]
-> If your Azure Storage account is secured by firewall rules, see [Networking considerations](#networking-considerations).
- > [!NOTE] > Currently only .NET application logs can be written to the blob storage. Java, PHP, Node.js, Python application logs can only be stored on the App Service file system (without code modifications to write logs to external storage). >
To enable web server logging for Windows apps in the [Azure portal](https://port
For **Web server logging**, select **Storage** to store logs on blob storage, or **File System** to store logs on the App Service file system.
-> [!NOTE]
-> If your Azure Storage account is secured by firewall rules, see [Networking considerations](#networking-considerations).
- In **Retention Period (Days)**, set the number of days the logs should be retained. > [!NOTE]
The following table shows the supported log types and descriptions:
## Networking considerations -- App Service logs aren't supported using Regional VNet integration, our recommendation is to use the Diagnostic settings feature.-
-If you secure your Azure Storage account by [only allowing selected networks](../storage/common/storage-network-security.md#change-the-default-network-access-rule), it can receive logs from App Service only if both of the following are true:
--- The Azure Storage account is in a different Azure region from the App Service app.-- All outbound addresses of the App Service app are [added to the Storage account's firewall rules](../storage/common/storage-network-security.md#managing-ip-network-rules). To find the outbound addresses for your app, see [Find outbound IPs](overview-inbound-outbound-ips.md#find-outbound-ips).
+For Diagnostic Settings restrictions, refer to the [official Diagnostic Settings documentation regarding destination limits](../azure-monitor/essentials/diagnostic-settings.md#destination-limitations).
## <a name="nextsteps"></a> Next steps * [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md) * [How to Monitor Azure App Service](web-sites-monitor.md) * [Troubleshooting Azure App Service in Visual Studio](troubleshoot-dotnet-visual-studio.md)
-* [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Previously updated : 03/04/2022 Last updated : 02/01/2024
Application Gateway integration with Key Vault offers many benefits, including:
Application Gateway currently supports software-validated certificates only. Hardware security module (HSM)-validated certificates aren't supported.
-After Application Gateway is configured to use Key Vault certificates, its instances retrieve the certificate from Key Vault and install them locally for TLS termination. The instances poll Key Vault at four-hour intervals to retrieve a renewed version of the certificate, if it exists. If an updated certificate is found, the TLS/SSL certificate that's currently associated with the HTTPS listener is automatically rotated.
+After Application Gateway is configured to use Key Vault certificates, its instances retrieve the certificate from Key Vault and install them locally for TLS termination. The instances poll Key Vault at four-hour intervals to retrieve a renewed version of the certificate, if it exists. If an updated certificate is found, the TLS/SSL certificate that's associated with the HTTPS listener is automatically rotated.
> [!TIP]
-> Any change to Application Gateway will force a check against Key Vault to see if any new versions of certificates are available. This includes, but not limited to, changes to Frontend IP Configurations, Listeners, Rules, Backend Pools, Resource Tags, and more. If an updated certificate is found, the new certificate will immediately be presented.
+> Any change to Application Gateway forces a check against Key Vault to see if any new versions of certificates are available. This includes, but not limited to, changes to Frontend IP Configurations, Listeners, Rules, Backend Pools, Resource Tags, and more. If an updated certificate is found, the new certificate is immediately presented.
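
For example, because tag changes count as a change to the gateway, one lightweight way to nudge a re-check is to update a tag; this is a sketch with placeholder names rather than a documented refresh command:

```azurecli
# Touch a tag on the Application Gateway to trigger a sync against Key Vault
az network application-gateway update --resource-group MyResourceGroup --name MyApplicationGateway \
  --set tags.KeyVaultSync="$(date -u +%Y%m%dT%H%M%SZ)"
```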
-Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway will automatically rotate the certificate if a newer version is available in your Key Vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`. You may refer to the PowerShell steps provided in the [section below](#key-vault-azure-role-based-access-control-permission-model).
+Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway automatically rotates the certificate if a newer version is available in your Key Vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`. You may refer to the PowerShell steps provided in the [following section](#key-vault-azure-role-based-access-control-permission-model).
The Azure portal supports only Key Vault certificates, not secrets. Application Gateway still supports referencing secrets from Key Vault, but only through non-portal resources like PowerShell, the Azure CLI, APIs, and Azure Resource Manager templates (ARM templates).
You can either create a new user-assigned managed identity or reuse an existing
Define access policies to use the user-assigned managed identity with your Key Vault: 1. In the Azure portal, go to **Key Vault**.
-1. Select the Key Vault that contains your certificate.
-1. If you're using the permission model **Vault access policy**: Select **Access Policies**, select **+ Add Access Policy**, select **Get** for **Secret permissions**, and choose your user-assigned managed identity for **Select principal**. Then select **Save**.
+2. Select the Key Vault that contains your certificate.
+3. If you're using the permission model **Vault access policy**: Select **Access Policies**, select **+ Add Access Policy**, select **Get** for **Secret permissions**, and choose your user-assigned managed identity for **Select principal**. Then select **Save**.
  If you're using **Azure role-based access control**, follow the article [Assign a managed identity access to a resource](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) and assign the user-assigned managed identity the **Key Vault Secrets User** role on the Azure Key Vault.
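
  If you prefer scripting that role assignment instead of using the portal, a sketch along these lines could work; the identity, resource group, and vault names are placeholders:

  ```azurecli
  # Look up the principal ID of the user-assigned managed identity
  PRINCIPAL_ID=$(az identity show --resource-group MyResourceGroup --name MyManagedIdentity --query principalId --output tsv)

  # Grant the identity read access to secrets in the Key Vault (Azure RBAC permission model)
  az role assignment create \
    --assignee-object-id "$PRINCIPAL_ID" \
    --assignee-principal-type ServicePrincipal \
    --role "Key Vault Secrets User" \
    --scope "$(az keyvault show --name MyKeyVault --query id --output tsv)"
  ```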
+> [!NOTE]
+> If you have Key Vaults for your HTTPS listener that use different identities, creating or updating the listener requires checking the certificates associated with each identity. In order for the operation to be successful, you must [grant permission](../key-vault/general/rbac-guide.md) to all identities.
### Verify Firewall Permissions to Key Vault

As of March 15, 2021, Key Vault recognizes Application Gateway as a trusted service by leveraging User Managed Identities for authentication to Azure Key Vault. With the use of service endpoints and enabling the trusted services option for Key Vault's firewall, you can build a secure network boundary in Azure. You can deny access to traffic from all networks (including internet traffic) to Key Vault but still make Key Vault accessible for an Application Gateway resource under your subscription.
When you're using a restricted Key Vault, use the following steps to configure A
> Steps 1-3 are not required if your Key Vault has a Private Endpoint enabled. The application gateway can access the Key Vault using the private IP address. > [!IMPORTANT]
-> If using Private Endpoints to access Key Vault, you must link the privatelink.vaultcore.azure.net private DNS zone, containing the corresponding record to the referenced Key Vault, to the virtual network containing Application Gateway. Custom DNS servers may continue to be used on the virtual network instead of the Azure DNS provided resolvers, however the private dns zone will need to remain linked to the virtual network as well.
+> If using Private Endpoints to access Key Vault, you must link the privatelink.vaultcore.azure.net private DNS zone, containing the corresponding record to the referenced Key Vault, to the virtual network containing Application Gateway. Custom DNS servers may continue to be used on the virtual network instead of the Azure DNS provided resolvers, however the private DNS zone needs to remain linked to the virtual network as well.
1. In the Azure portal, in your Key Vault, select **Networking**.
-1. On the **Firewalls and virtual networks** tab, select **Selected networks**.
-1. For **Virtual networks**, select **+ Add existing virtual networks**, and then add the virtual network and subnet for your Application Gateway instance. If prompted, ensure the _Do not configure 'Microsoft.KeyVault' service endpoint(s) at this time_ checkbox is unchecked to ensure the `Microsoft.KeyVault` service endpoint is enabled on the subnet.
-1. Select **Yes** to allow trusted services to bypass the Key Vault's firewall.
+2. On the **Firewalls and virtual networks** tab, select **Selected networks**.
+3. For **Virtual networks**, select **+ Add existing virtual networks**, and then add the virtual network and subnet for your Application Gateway instance. If prompted, ensure the _Do not configure 'Microsoft.KeyVault' service endpoint(s) at this time_ checkbox is unchecked to ensure the `Microsoft.KeyVault` service endpoint is enabled on the subnet.
+4. Select **Yes** to allow trusted services to bypass the Key Vault's firewall.
![Screenshot that shows selections for configuring Application Gateway to use firewalls and virtual networks.](media/key-vault-certs/key-vault-firewall.png)
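
The same network configuration can also be scripted; as a rough sketch with placeholder names, assuming the Application Gateway subnet already exists:

```azurecli
# Enable the Microsoft.KeyVault service endpoint on the Application Gateway subnet
az network vnet subnet update --resource-group MyResourceGroup --vnet-name MyVNet --name MyAppGwSubnet \
  --service-endpoints Microsoft.KeyVault

# Restrict the vault to selected networks and allow trusted Azure services to bypass the firewall
az keyvault update --name MyKeyVault --default-action Deny --bypass AzureServices

# Add the Application Gateway subnet to the vault's network rules
az keyvault network-rule add --name MyKeyVault --resource-group MyResourceGroup --vnet-name MyVNet --subnet MyAppGwSubnet
```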
$appgw = Get-AzApplicationGateway -Name MyApplicationGateway -ResourceGroupName
Set-AzApplicationGatewayIdentity -ApplicationGateway $appgw -UserAssignedIdentityId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyManagedIdentity"
# Get the secret ID from Key Vault
$secret = Get-AzKeyVaultSecret -VaultName "MyKeyVault" -Name "CertificateName"
-$secretId = $secret.Id.Replace($secret.Version, "") # Remove the secret version so AppGW will use the latest version in future syncs
+$secretId = $secret.Id.Replace($secret.Version, "") # Remove the secret version so Application Gateway uses the latest version in future syncs
# Specify the secret ID from Key Vault
Add-AzApplicationGatewaySslCertificate -KeyVaultSecretId $secretId -ApplicationGateway $appgw -Name $secret.Name
# Commit the changes to the Application Gateway
Under **Choose a certificate** select the certificate named in the previous step
## Investigating and resolving Key Vault errors > [!NOTE]
-> It is important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. In case your application gateway is unable to access the associated key vault or locate the certificate object in it, it will automatically put that listener in a disabled state.
+> It is important to consider any impact on your application gateway resource when making changes or revoking access to your Key Vault resource. If your application gateway is unable to access the associated key vault or locate the certificate object in it, the application gateway automatically sets the listener to a disabled state.
>
-> You can identify this user-driven event by viewing the Resource Health for your Application Gateway. [Learn more](../application-gateway/disabled-listeners.md).
+> You can identify this user-driven event by viewing the Resource Health for your application gateway. [Learn more](../application-gateway/disabled-listeners.md).
Azure Application Gateway doesn't just poll for the renewed certificate version on Key Vault at every four-hour interval. It also logs any error and is integrated with Azure Advisor to surface any misconfiguration with a recommendation for its fix. 1. Sign-in to your Azure portal
-1. Select Advisor
-1. Select Operational Excellence category from the left menu.
-1. You will find a recommendation titled **Resolve Azure Key Vault issue for your Application Gateway**, if your gateway is experiencing this issue. Ensure the correct Subscription is selected from the drop-down options above.
-1. Select it to view the error details, the associated key vault resource and the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to fix your exact issue.
+2. Select Advisor
+3. Select Operational Excellence category from the left menu.
+4. You find a recommendation titled **Resolve Azure Key Vault issue for your Application Gateway**, if your gateway is experiencing this issue. Ensure the correct subscription is selected from the drop-down options above.
+5. Select it to view the error details, the associated key vault resource and the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to fix your exact issue.
By identifying such an event through Azure Advisor or Resource Health, you can quickly resolve any configuration problems with your Key Vault. We strongly recommend you take advantage of [Azure Advisor](../advisor/advisor-alerts-portal.md) and [Resource Health](../service-health/resource-health-alert-monitor-guide.md) alerts to stay informed when a problem is detected.
-For Advisor alert, use "Resolve Azure Key Vault issue for your Application Gateway" in the recommendation type as shown below.</br>
+For Advisor alert, use "Resolve Azure Key Vault issue for your Application Gateway" in the recommendation type shown:</br>
![Diagram that shows steps for Advisor alert.](media/key-vault-certs/advisor-alert.png)
-You can configure the Resource health alert as illustrated below.</br>
+You can configure the Resource health alert as illustrated:</br>
![Diagram that shows steps for Resource health alert.](media/key-vault-certs/resource-health-alert.png) ## Next steps
azure-app-configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview.md
The easiest way to add an App Configuration store to your application is through
| ASP.NET Core | App Configuration [provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration) for .NET Core | ASP.NET Core [quickstart](./quickstart-aspnet-core-app.md) | | .NET Framework and ASP.NET | App Configuration [builder](https://go.microsoft.com/fwlink/?linkid=2074663) for .NET | .NET Framework [quickstart](./quickstart-dotnet-app.md) | | Java Spring | App Configuration [provider](https://go.microsoft.com/fwlink/?linkid=2180917) for Spring Cloud | Java Spring [quickstart](./quickstart-java-spring-app.md) |
-| JavaScript/Node.js | App Configuration [client](https://go.microsoft.com/fwlink/?linkid=2103664) for JavaScript | Javascript/Node.js [quickstart](./quickstart-javascript.md)|
-| Python | App Configuration [client](https://go.microsoft.com/fwlink/?linkid=2103727) for Python | Python [quickstart](./quickstart-python.md) |
+| JavaScript/Node.js | App Configuration [provider](https://github.com/Azure/AppConfiguration-JavaScriptProvider) for JavaScript | Javascript/Node.js [quickstart](./quickstart-javascript-provider.md)|
+| Python | App Configuration [provider](https://pypi.org/project/azure-appconfiguration-provider/) for Python | Python [quickstart](./quickstart-python-provider.md) |
| Other | App Configuration [REST API](/rest/api/appconfiguration/) | None | ## Next steps
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote Pow
Using `az arcappliance` commands from remote PowerShell isn't currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
-### Resource bridge cannot be updated
+### Resource bridge configurations cannot be updated
In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it again.
For example, if you specified the wrong location, or subscription during deploym
To resolve this issue, delete the appliance and update the appliance YAML file. Then redeploy and create the resource bridge.
+### Appliance Network Unavailable
+
+If Arc resource bridge is experiencing a network communication problem, you may see an "Appliance Network Unavailable" error when trying to perform an action that interacts with the resource bridge or an extension operating on top of it. This error can also surface as "Error while dialing dial tcp xx.xx.xxx.xx:55000: connect: no route to host", which typically indicates a network communication problem. Communication from the host to the Arc resource bridge VM might need to be opened with the help of your network administrator, or a temporary network issue might be preventing the host from reaching the Arc resource bridge VM; once the network issue is resolved, you can retry the operation.
+ ### Connection closed before server preface received When there are multiple attempts to deploy Arc resource bridge, expired credentials left on the management machine might cause future deployments to fail. The error will contain the message `Unavailable desc = connection closed before server preface received`. This error will surface in various `az arcappliance` commands including `validate`, `prepare` and `delete`.
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Before upgrading an Arc resource bridge, the following prerequisites must be met
- The appliance VM must be online, its status is "Running" and the [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be valid. -- There must be sufficient space on the management machine (~3.5 GB) and appliance VM (35 GB) to download required images. For VMware, a new template is created.
+- There must be sufficient space on the management machine (~3.5 GB) and appliance VM (35 GB) to download required images.
+
+- For Arc-enabled VMware, upgrading the resource bridge requires 200GB of free space on the datastore. A new template is also created.
- The outbound connection from the Appliance VM IPs (`k8snodeippoolstart/end`, VM IP 1/2) to `msk8s.sb.tlu.dl.delivery.mp.microsoft.com`, port 443 must be enabled. Be sure the full list of [required endpoints for Arc resource bridge](network-requirements.md) are also enabled.
azure-arc Deliver Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md
To enroll Azure Arc-enabled servers eligible for ESUs at no additional cost, fol
This linking will not trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores. The expectation is that the license only includes cores for production and billed servers. Any additional cores will be charged and result in over-billing.
+> [!IMPORTANT]
+> Adding these tags to your license will NOT make the license free or reduce the number of license cores that are chargeable. These tags allow you to link your Azure machines to existing licenses that are already configured with payable cores without needing to create any new licenses or add additional cores to your free machines.
**Example:** You have 8 Windows Server 2012 R2 Standard instances, each with 8 physical cores. 6 of these Windows Server 2012 R2 Standard machines are for production, and 2 of these Windows Server 2012 R2 Standard machines are eligible for free ESUs through the Visual Studio Dev Test subscription. You should first provision and activate a regular ESU License for Windows Server 2012/R2 that's Standard edition and has 48 physical cores. You should link this regular, production ESU license to your 6 production servers. Next, you should use this existing license, not add any more cores or provision a separate license, and link this license to your 2 non-production Windows Server 2012 R2 standard machines. You should tag the license and the 2 non-production Windows Server 2012 R2 Standard machines with Name: "ESU Usage" and Value: "WS2012 VISUAL STUDIO DEV TEST".
azure-functions Functions Add Output Binding Storage Queue Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs.md
Because you're using a Queue storage output binding, you need the Storage bindin
# [Isolated worker model](#tab/isolated-process) ```bash
- Install-Package /dotnet/api/microsoft.azure.webjobs.blobattribute.Queues -IncludePrerelease
+ Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues
``` # [In-process model](#tab/in-process) ```bash
azure-functions Functions Kubernetes Keda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-kubernetes-keda.md
The Azure Functions runtime provides flexibility in hosting where and how you want. [KEDA](https://keda.sh) (Kubernetes-based Event Driven Autoscaling) pairs seamlessly with the Azure Functions runtime and tooling to provide event driven scale in Kubernetes. > [!IMPORTANT]
-> Azure Functions on Kubernetes using KEDA is an open-source effort that you can use free of cost. Best-effort support is provided by contributors and from the community, so please use [GitHub issues in the Azure Functions repository](https://github.com/Azure/Azure-Functions/issues) to report bugs and raise feature requests. Azure Functions deployment to Azure Container Apps, which runs on managed Kubernetes clusters in Azure, is currently in preview. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
+> Running your containerized function apps on Kubernetes, either by using KEDA or by direct deployment, is an open-source effort that you can use free of cost. Best-effort support is provided by contributors and from the community by using [GitHub issues in the Azure Functions repository](https://github.com/Azure/Azure-Functions/issues). Please use these issues to report bugs and raise feature requests. Containerized function app deployments to Azure Container Apps, which runs on managed Kubernetes clusters in Azure, is currently in preview. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
## How Kubernetes-based functions work
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Last updated 7/19/2023
-#customer-intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines.
+# Customer intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines.
# Azure Monitor Agent overview
azure-monitor Azure Monitor Agent Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-performance.md
Last updated 4/07/2023
-#customer-intent: As a deployment engineer, I can scope the resources required to scale my gateway data colletors the use the Azure Monitor Agent.
+# Customer intent: As a deployment engineer, I can scope the resources required to scale my gateway data collectors that use the Azure Monitor Agent.
# Azure Monitor Agent Performance Benchmark
azure-monitor Troubleshooter Ama Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-linux.md
Last updated 12/14/2023
-# customer-intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own.
+# Customer intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own.
# How to use the Linux operating system (OS) Azure Monitor Agent Troubleshooter
azure-monitor Troubleshooter Ama Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-windows.md
Last updated 12/14/2023
-# customer-intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own.
+# Customer intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own.
# How to use the Windows operating system (OS) Azure Monitor Agent Troubleshooter
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
The alert condition for stateful alerts is `fired`, until it is considered resol
For stateful alerts, while the alert itself is deleted after 30 days, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved.
-Stateful log alerts have these limitations:
-- they can trigger up to 300 alerts per evaluation.-- you can have a maximum of 6000 alerts with the `fired` alert condition.
+Stateful log alerts have limitations. For details, see [Azure Monitor service limits](https://learn.microsoft.com/azure/azure-monitor/service-limits#alerts).
This table describes when a stateful alert is considered resolved:
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
Four different configmaps can be configured to provide scrape configuration and
2. [`ama-metrics-prometheus-config`](https://aka.ms/azureprometheus-addon-rs-configmap) (**Recommended**) This config map can be used to provide Prometheus scrape config for addon replica. Addon runs a singleton replica, and any cluster level services can be discovered and scraped by providing scrape jobs in this configmap. You can take the sample configmap from the above git hub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster. 3. [`ama-metrics-prometheus-config-node`](https://aka.ms/azureprometheus-addon-ds-configmap) (**Advanced**)
- This config map can be used to provide Prometheus scrape config for addon DaemonSet that runs on every **Linux** node in the cluster, and any node level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use `$NODE_IP` variable in your scrape config, which gets substituted by corresponding node's ip address in DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics addon DaemonSet. **Please be careful when you use discoveries in scrape config in this node level config map, as every node in the cluster will setup & discover the target(s) and will collect redundant metrics**.
+ This configmap can be used to provide the Prometheus scrape config for the addon DaemonSet that runs on every **Linux** node in the cluster. Any node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which is substituted by the corresponding node's IP address in the DaemonSet pod running on each node. This gives you access to scrape anything that runs on that node from the metrics addon DaemonSet. **Be careful when you use discoveries in the scrape config in this node-level configmap, because every node in the cluster will set up and discover the target(s) and will collect redundant metrics.**
You can take the sample configmap from the above git hub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster 4. [`ama-metrics-prometheus-config-node-windows`](https://aka.ms/azureprometheus-addon-ds-configmap-windows) (**Advanced**)
- This config map can be used to provide Prometheus scrape config for addon DaemonSet that runs on every **Windows** node in the cluster, and node level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use `$NODE_IP` variable in your scrape config, which will be substituted by corresponding node's ip address in DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics addon DaemonSet. **Please be careful when you use discoveries in scrape config in this node level config map, as every node in the cluster will setup & discover the target(s) and will collect redundant metrics**.
+ This configmap can be used to provide the Prometheus scrape config for the addon DaemonSet that runs on every **Windows** node in the cluster. Node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which is substituted by the corresponding node's IP address in the DaemonSet pod running on each node. This gives you access to scrape anything that runs on that node from the metrics addon DaemonSet. **Be careful when you use discoveries in the scrape config in this node-level configmap, because every node in the cluster will set up and discover the target(s) and will collect redundant metrics.**
You can take the sample configmap from the above git hub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster ## Metrics add-on settings configmap
metric_relabel_configs:
regex: '.+' ```
+### TLS-based scraping
+
+If you have a Prometheus instance served over TLS and you want to scrape metrics from it, set the scheme to `https` and set the TLS settings in your configmap or respective CRD. You can use the `tls_config` configuration property inside a custom scrape job to configure the TLS settings through either a CRD or a configmap. You need to provide a CA certificate to validate the target server's certificate with. The CA certificate is used to verify the authenticity of the server's certificate when Prometheus connects to the target over TLS, and helps ensure that the server's certificate is signed by a trusted authority.
+
+Create the secret in the `kube-system` namespace first, and then create the configmap or CRD in the `kube-system` namespace. The order of secret creation matters: if there's no secret but there is a valid CRD or configmap, the collector log contains errors such as `no file found for cert....`.
+
+The following sections describe how to provide the TLS config settings through a configmap or a CRD.
+
+- To provide the TLS config settings in a configmap, create the self-signed certificate and key inside the `/etc/prometheus/certs` directory of your mTLS-enabled app.
+  An example `tls_config` inside the configmap looks like this:
+
+```yaml
+tls_config:
+ ca_file: /etc/prometheus/certs/client-cert.pem
+ cert_file: /etc/prometheus/certs/client-cert.pem
+ key_file: /etc/prometheus/certs/client-key.pem
+ insecure_skip_verify: false
+```
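+
+For illustration only, a minimal HTTPS scrape job that uses this `tls_config` might look like the following sketch. The job name, target address, and port are placeholders rather than values taken from the add-on defaults:
+
+```yaml
+scrape_configs:
+  - job_name: my-tls-app                       # placeholder job name
+    scheme: https                              # connect to the target over TLS
+    tls_config:
+      ca_file: /etc/prometheus/certs/client-cert.pem    # CA used to validate the server certificate
+      cert_file: /etc/prometheus/certs/client-cert.pem  # client certificate presented for mTLS
+      key_file: /etc/prometheus/certs/client-key.pem    # private key for the client certificate
+      insecure_skip_verify: false
+    static_configs:
+      - targets: ["my-tls-app.default.svc.cluster.local:8443"]   # placeholder target
+```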
+
+- To provide the TLS config settings in a CRD, create the self-signed certificate and key inside the `/etc/prometheus/certs` directory of your mTLS-enabled app.
+  An example `tlsConfig` inside a PodMonitor looks like this:
+
+```yaml
+tlsConfig:
+ ca:
+ secret:
+ key: "client-cert.pem" # since it is self-signed
+ name: "ama-metrics-mtls-secret"
+ cert:
+ secret:
+ key: "client-cert.pem"
+ name: "ama-metrics-mtls-secret"
+ keySecret:
+ key: "client-key.pem"
+ name: "ama-metrics-mtls-secret"
+ insecureSkipVerify: false
+```
+> [!NOTE]
+> For CRD-based scraping, make sure that the certificate file name and key name inside the mTLS app follow this format, for example: `secret_kube-system_ama-metrics-mtls-secret_cert-name.pem` and `secret_kube-system_ama-metrics-mtls-secret_key-name.pem`.
+> The CRD needs to be created in the `kube-system` namespace.
+> The secret name must be exactly `ama-metrics-mtls-secret`, created in the `kube-system` namespace. An example command for creating the secret: `kubectl create secret generic ama-metrics-mtls-secret --from-file=secret_kube-system_ama-metrics-mtls-secret_client-cert.pem=secret_kube-system_ama-metrics-mtls-secret_client-cert.pem --from-file=secret_kube-system_ama-metrics-mtls-secret_client-key.pem=secret_kube-system_ama-metrics-mtls-secret_client-key.pem -n kube-system`
+
+To read more about TLS authentication, the following documents might be helpful:
+
+- [Generating TLS certificates](https://o11y.eu/blog/prometheus-server-tls/)
+- [`tls_config` configuration reference](https://prometheus.io/docs/alerting/latest/configuration/#tls_config)
+ ## Next steps [Setup Alerts on Prometheus metrics](./container-insights-metric-alerts.md)<br>
azure-monitor Activity Log Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log-insights.md
Last updated 12/11/2023
-#customer-intent: As an IT manager, I want to understand how I can use activity log insights to monitor changes to resources and resource groups in an Azure subscription.
+# Customer intent: As an IT manager, I want to understand how I can use activity log insights to monitor changes to resources and resource groups in an Azure subscription.
# Monitor changes to resources and resource groups with Azure Monitor activity log insights
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
Azure Monitor collects metrics from the following sources. After these metrics a
- **Azure resources**: Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a [distinct set of metrics](./metrics-supported.md) without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition. - **Applications**: Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is being used. Values include _Server response time_ and _Browser exceptions_.-- **Virtual machine agents**: Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and for Linux virtual machines by using the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/).
+- **Virtual machine agents**: Metrics are collected from the guest operating system of a virtual machine. You can enable guest OS metrics for Windows virtual machines by using the [Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview). Azure Monitor Agent replaces the legacy agents: the [Windows diagnostic extension](../agents/diagnostics-extension-overview.md) and the [InfluxData Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/) for Linux virtual machines.
- **Custom metrics**: You can define metrics in addition to the standard metrics that are automatically available. You can [define custom metrics in your application](../app/api-custom-events-metrics.md) that's monitored by Application Insights. You can also create custom metrics for an Azure service by using the [custom metrics API](./metrics-store-custom-rest-api.md). - **Kubernetes clusters**: Kubernetes clusters typically send metric data to a local Prometheus server that you must maintain. [Azure Monitor managed service for Prometheus ](prometheus-metrics-overview.md) provides a managed service that collects metrics from Kubernetes clusters and store them in Azure Monitor Metrics.
azure-monitor Migrate To Batch Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-batch-api.md
Last updated 05/07/2023
-#customer-intent: As a customer, I want to understand how to migrate from the metrics API to the getBatch API
+# Customer intent: As a customer, I want to understand how to migrate from the metrics API to the getBatch API
# How to migrate from the metrics API to the getBatch API
azure-monitor Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/insights-overview.md
Last updated 10/15/2022 + # Azure Monitor Insights overview Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as *curated visualizations* with the larger more complex of them being called *Insights*.
The following table lists the available curated visualizations and information a
|Name with docs link| State | [Azure portal link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description | |:--|:--|:--|:--| |**Compute**||||
- | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure VMs and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
-| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
+|[Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) |General Availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) |Monitors your Azure VMs and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
+|[Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) |GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) |Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
|**Networking**||||
- | [Azure Network Insights](../../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by searching for your website name. |
+|[Azure Network Insights](../../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by searching for your website name. |
|**Storage**||||
- | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
+| [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
| [Azure Backup](../../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. | |**Databases**|||| | [Azure Cosmos DB Insights](../../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | | [Azure Monitor for Azure Cache for Redis (preview)](../../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. | |**Analytics**|||| | [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
- | [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
+| [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
|**Security**||||
- | [Azure Key Vault Insights (preview)](../../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
+| [Azure Key Vault Insights](../../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
|**Monitor**||||
- | [Azure Monitor Application Insights](../app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
-| [Azure activity Log Insights](../essentials/activity-log-insights.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
+| [Azure Monitor Application Insights](../app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
+| [Azure Activity Log Insights](../essentials/activity-log-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides a set of dashboards that monitor changes to resources and resource groups in an Azure subscription. |
| [Azure Monitor for Resource Groups](resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. | |**Integration**|||| | [Azure Service Bus Insights](../../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- [Azure IoT Edge](../../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
+|[Azure IoT Edge](../../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
|**Workloads**|||| | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. | | [Azure Monitor for SAP solutions](../../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | Preview | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. | |**Other**|||| | [Azure Virtual Desktop Insights](../../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. |
-| [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
+| [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | GA| [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
| [Windows Update for Business](/windows/deployment/update/wufb-reports-overview) | GA | [Yes](https://ms.portal.azure.com/#view/AppInsightsExtension/WorkbookViewerBlade/Type/updatecompliance-insights/ComponentId/Azure%20Monitor/GalleryResourceType/Azure%20Monitor/ConfigurationId/community-Workbooks%2FUpdateCompliance%2FUpdateComplianceHub) | Detailed deployment monitoring, compliance assessment and failure troubleshooting for all Windows 10/11 devices.| |**Not in Azure portal Insight hub**||||
-| [Azure Monitor Workbooks for Microsoft Entra ID](../../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | General availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Microsoft Entra ID provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. |
+| [Azure Monitor Workbooks for Microsoft Entra ID](../../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) |GA| [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Microsoft Entra ID provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. |
| [Azure HDInsight](../../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status.| --- ## Next steps - Reference some of the insights listed above to review their functionality
azure-monitor Aiops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/aiops-machine-learning.md
Last updated 02/28/2023
-#customer-intent: As a DevOps manager or data scientist, I want to understand which AIOps features Azure Monitor offers and how to implement a machine learning pipeline on data in Azure Monitor Logs so that I can use artifical intelligence to improve service quality and reliability of my IT environment.
+# Customer intent: As a DevOps manager or data scientist, I want to understand which AIOps features Azure Monitor offers and how to implement a machine learning pipeline on data in Azure Monitor Logs so that I can use artificial intelligence to improve the service quality and reliability of my IT environment.
# Detect and mitigate potential issues using AIOps and machine learning in Azure Monitor
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
Last updated 06/05/2023
-#customer-intent: As an IT manager, I want to understand the data and service resilience benefits Azure Monitor availability zones provide to ensure my data and services are sufficiently protected in the event of datacenter failure.
+# Customer intent: As an IT manager, I want to understand the data and service resilience benefits Azure Monitor availability zones provide to ensure my data and services are sufficiently protected in the event of datacenter failure.
# Enhance data and service resilience in Azure Monitor Logs with availability zones
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Title: Set a table's log data plan to Basic Logs or Analytics Logs
description: Learn how to use Basic Logs and Analytics Logs to reduce costs and take advantage of advanced features and analytics capabilities in Azure Monitor Logs. -+ Last updated 12/17/2023
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Application Gateways | [AGWAccessLogs](/azure/azure-monitor/reference/tables/AGWAccessLogs)<br>[AGWPerformanceLogs](/azure/azure-monitor/reference/tables/AGWPerformanceLogs)<br>[AGWFirewallLogs](/azure/azure-monitor/reference/tables/AGWFirewallLogs) | | Application Gateway for Containers | [AGCAccessLogs](/azure/azure-monitor/reference/tables/AGCAccessLogs) | | Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) |
-| Bare Metal Machines | [NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) |
+| Bare Metal Machines | [NCBMSecurityDefenderLogs](/azure/azure-monitor/reference/tables/ncbmsecuritydefenderlogs)<br>[NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) |
| Chaos Experiments | [ChaosStudioExperimentEventLogs](/azure/azure-monitor/reference/tables/ChaosStudioExperimentEventLogs) | | Cloud HSM | [CHSMManagementAuditLogs](/azure/azure-monitor/reference/tables/CHSMManagementAuditLogs) | | Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) |
azure-monitor Ingest Logs Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingest-logs-event-hub.md
Last updated 12/28/2023
-# customer-intent: As a DevOps engineer, I want to ingest data from an event hub into a Log Analytics workspace so that I can monitor logs that I send to Azure Event Hubs.
+# Customer intent: As a DevOps engineer, I want to ingest data from an event hub into a Log Analytics workspace so that I can monitor logs that I send to Azure Event Hubs.
# Tutorial: Ingest events from Azure Event Hubs into Azure Monitor Logs (Public Preview)
azure-monitor Migrate Splunk To Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md
Last updated 01/27/2023
-#customer-intent: As an IT manager, I want to understand the steps required to migrate my Splunk deployment to Azure Monitor Logs so that I can decide whether to migrate and plan and execute my migration.
+# Customer intent: As an IT manager, I want to understand the steps required to migrate my Splunk deployment to Azure Monitor Logs so that I can decide whether to migrate and plan and execute my migration.
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
description: Search jobs are asynchronous log queries in Azure Monitor that make
Last updated 10/01/2022
-#customer-intent: As a data scientist or workspace administrator, I want an efficient way to search through large volumes of data in a table, including archived and basic logs.
+# Customer intent: As a data scientist or workspace administrator, I want an efficient way to search through large volumes of data in a table, including archived and basic logs.
# Run search jobs in Azure Monitor
azure-netapp-files Access Smb Volume From Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/access-smb-volume-from-windows-client.md
You can use Microsoft Entra ID with the Hybrid Authentication Management module
>[!NOTE] >Using Microsoft Entra ID for authenticating [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md) allows Microsoft Entra users to access Azure NetApp Files SMB shares. This means your end users can access Azure NetApp Files SMB shares without requiring a line-of-sight to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined VMs. Cloud-only identities aren't currently supported. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](understand-guidelines-active-directory-domain-service-site.md). ## Requirements and considerations
The configuration process takes you through five processes:
1. Under **Computers**, right-click on the computer account created as part of the Azure NetApp Files volume then select **Properties**. 1. Under **Attribute Editor,** locate `servicePrincipalName`. In the Multi-valued string editor, add the CIFS SPN value using the CIFS/FQDN format. <a name='register-a-new-azure-ad-application'></a>
The configuration process takes you through five processes:
1. Assign a **Name**. Under select the **Supported account type**, choose **Accounts in this organizational directory only (Single tenant)**. 1. Select **Register**. 1. Configure the permissions for the application. From your **App Registrations**, select **API Permissions** then **Add a permission**. 1. Select **Microsoft Graph** then **Delegated Permissions**. Under **Select Permissions**, select **openid** and **profile** under **OpenId permissions**.
- :::image type="content" source="../media/azure-netapp-files/api-permissions.png" alt-text="Screenshot to register API permissions." lightbox="../media/azure-netapp-files/api-permissions.png":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/api-permissions.png" alt-text="Screenshot to register API permissions." lightbox="./media/access-smb-volume-from-windows-client/api-permissions.png":::
1. Select **Add permission**. 1. From **API Permissions**, select **Grant admin consent for...**.
- :::image type="content" source="../media/azure-netapp-files/grant-admin-consent.png" alt-text="Screenshot to grant API permissions." lightbox="../media/azure-netapp-files/grant-admin-consent.png ":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/grant-admin-consent.png" alt-text="Screenshot to grant API permissions." lightbox="./media/access-smb-volume-from-windows-client/grant-admin-consent.png ":::
1. From **Authentication**, under **App instance property lock**, select **Configure** then deselect the checkbox labeled **Enable property lock**.
- :::image type="content" source="../media/azure-netapp-files/authentication-registration.png" alt-text="Screenshot of app registrations." lightbox="../media/azure-netapp-files/authentication-registration.png":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/authentication-registration.png" alt-text="Screenshot of app registrations." lightbox="./media/access-smb-volume-from-windows-client/authentication-registration.png":::
1. From **Overview**, make note of the **Application (client) ID**, which is required later.
The configuration process takes you through five processes:
* Value name: KERBEROS.MICROSOFTONLINE.COM * Value: .contoso.com
- :::image type="content" source="../media/azure-netapp-files/define-host-name-to-kerberos.png" alt-text="Screenshot to define how-name-to-Kerberos real mappings." lightbox="../media/azure-netapp-files/define-host-name-to-kerberos.png":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/define-host-name-to-kerberos.png" alt-text="Screenshot to define host-name-to-Kerberos realm mappings." lightbox="./media/access-smb-volume-from-windows-client/define-host-name-to-kerberos.png":::
### Mount the Azure NetApp Files SMB volumes
The configuration process takes you through five processes:
2. Mount the Azure NetApp Files SMB volume using the info provided in the Azure portal. For more information, see [Mount SMB volumes for Windows VMs](mount-volumes-vms-smb.md). 3. Confirm the mounted volume is using Kerberos authentication and not NTLM authentication. Open a command prompt, issue the `klist` command; observe the output in the cloud TGT (krbtgt) and CIFS server ticket information.
- :::image type="content" source="../media/azure-netapp-files/klist-output.png" alt-text="Screenshot of CLI output." lightbox="../media/azure-netapp-files/klist-output.png":::
+ :::image type="content" source="./media/access-smb-volume-from-windows-client/klist-output.png" alt-text="Screenshot of CLI output." lightbox="./media/access-smb-volume-from-windows-client/klist-output.png":::
## Further information
azure-netapp-files Application Volume Group Add Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-add-hosts.md
Building a multiple-host SAP HANA database always starts with creating a volume
Click **Next: Volume Group**.
- [ ![Screenshot that shows the HANA section for adding hosts.](../media/azure-netapp-files/application-multiple-hosts-sap-hana.png) ](../media/azure-netapp-files/application-multiple-hosts-sap-hana.png#lightbox)
+ [ ![Screenshot that shows the HANA section for adding hosts.](./media/application-volume-group-add-hosts/application-multiple-hosts-sap-hana.png) ](./media/application-volume-group-add-hosts/application-multiple-hosts-sap-hana.png#lightbox)
3. In the **Volume group** tab, provide identical input as you did when you created the first HANA host.
Building a multiple-host SAP HANA database always starts with creating a volume
Click **Next: Review + Create**.
- [ ![Screenshot that shows the Volumes section for adding hosts.](../media/azure-netapp-files/application-multiple-hosts-volumes.png) ](../media/azure-netapp-files/application-multiple-hosts-volumes.png#lightbox)
+ [ ![Screenshot that shows the Volumes section for adding hosts.](./media/application-volume-group-add-hosts/application-multiple-hosts-volumes.png) ](./media/application-volume-group-add-hosts/application-multiple-hosts-volumes.png#lightbox)
4. In the **Review + Create** tab, the `{HostId}` placeholder is replaced with the individual numbers for each of the volume groups that will be created. You can click **Next Group** to navigate through all volume groups that are being created (one for each host). You can also click a particular volume to view its details.
- [ ![Screenshot that shows the Review and Create section for adding hosts.](../media/azure-netapp-files/application-multiple-review-create.png) ](../media/azure-netapp-files/application-multiple-review-create.png#lightbox)
+ [ ![Screenshot that shows the Review and Create section for adding hosts.](./media/application-volume-group-add-hosts/application-multiple-review-create.png) ](./media/application-volume-group-add-hosts/application-multiple-review-create.png#lightbox)
5. After you navigate through the volume groups, click **Create All Groups** to create all the volumes for the HANA hosts you are adding.
- [ ![Screenshot that shows the Create All Groups button.](../media/azure-netapp-files/application-multiple-create-groups.png) ](../media/azure-netapp-files/application-multiple-create-groups.png#lightbox)
+ [ ![Screenshot that shows the Create All Groups button.](./media/application-volume-group-add-hosts/application-multiple-create-groups.png) ](./media/application-volume-group-add-hosts/application-multiple-create-groups.png#lightbox)
The **Create Volume Group** page shows the added volume groups with the "Creating" status.
azure-netapp-files Application Volume Group Add Volume Secondary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-add-volume-secondary.md
The HANA System Replication (HSR) functionality enables SAP HANA databases to sy
The following diagram illustrates the concept of HSR:
- ![Diagram that explains HANA System Replication.](../media/azure-netapp-files/application-hana-system-replication.png)
+ ![Diagram that explains HANA System Replication.](./media/application-volume-group-add-volume-secondary/application-hana-system-replication.png)
To enable HSR, the configuration of the secondary SAP HANA system must be identical to the primary SAP HANA system. That is, if the primary system is a single-host HANA system, then the secondary SAP HANA system also needs to be a single-hosts system. The same applies for multiple host systems.
This section shows an example of creating a single-host, secondary SAP HANA syst
Click **Next: Volume Group** to continue.
- [ ![Screenshot that shows the HANA section in HSR configuration.](../media/azure-netapp-files/application-secondary-sap-hana.png) ](../media/azure-netapp-files/application-secondary-sap-hana.png#lightbox)
+ [ ![Screenshot that shows the HANA section in HSR configuration.](./media/application-volume-group-add-volume-secondary/application-secondary-sap-hana.png) ](./media/application-volume-group-add-volume-secondary/application-secondary-sap-hana.png#lightbox)
3. In the **Volume group** tab, provide information for creating the volume group:
This section shows an example of creating a single-host, secondary SAP HANA syst
Click **Next: Volumes**.
- [ ![Screenshot that shows the Tags section of the Volume Group tab.](../media/azure-netapp-files/application-secondary-volume-group-tags.png) ](../media/azure-netapp-files/application-secondary-volume-group-tags.png#lightbox)
+ [ ![Screenshot that shows the Tags section of the Volume Group tab.](./media/application-volume-group-add-volume-secondary/application-secondary-volume-group-tags.png) ](./media/application-volume-group-add-volume-secondary/application-secondary-volume-group-tags.png#lightbox)
6. The **Volumes** tab displays information about the volumes that are being created. The volume naming convention includes an `"HA-"` prefix to indicate that the volume belongs to the secondary system of an HSR setup.
- [ ![Screenshot that shows the Volume Group tab.](../media/azure-netapp-files/application-secondary-volumes-tags.png) ](../media/azure-netapp-files/application-secondary-volumes-tags.png#lightbox)
+ [ ![Screenshot that shows the Volume Group tab.](./media/application-volume-group-add-volume-secondary/application-secondary-volumes-tags.png) ](./media/application-volume-group-add-volume-secondary/application-secondary-volumes-tags.png#lightbox)
7. In the **Volumes** tab, you can select each volume to view or change the volume details, including the protocol and tag for the volume. In the **Tags** section of a volume, you can populate the `HSRPartnerStorageResourceId` tag with the resource ID of the corresponding primary volume. This action only marks the primary volume; it does not validate the provided resource ID.
- [ ![Screenshot that shows the tag details.](../media/azure-netapp-files/application-secondary-volumes-tag-details.png) ](../media/azure-netapp-files/application-secondary-volumes-tag-details.png#lightbox)
+ [ ![Screenshot that shows the tag details.](./media/application-volume-group-add-volume-secondary/application-secondary-volumes-tag-details.png) ](./media/application-volume-group-add-volume-secondary/application-secondary-volumes-tag-details.png#lightbox)
Click **Volumes** to return to the Volumes overview page.
azure-netapp-files Application Volume Group Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-delete.md
This article describes how to delete an application volume group.
1. Click **Application volume groups**. Select the volume group you want to delete.
- [![Screenshot that shows Application Volume Groups list.](../media/azure-netapp-files/application-volume-group-list.png) ](../media/azure-netapp-files/application-volume-group-list.png#lightbox)
+ [![Screenshot that shows Application Volume Groups list.](./media/application-volume-group-delete/application-volume-group-list.png) ](./media/application-volume-group-delete/application-volume-group-list.png#lightbox)
2. To delete the volume group, click **Delete**. If you are prompted, type the volume group name to confirm the deletion.
- [![Screenshot that shows Application Volume Groups deletion.](../media/azure-netapp-files/application-volume-group-delete.png)](../media/azure-netapp-files/application-volume-group-delete.png#lightbox)
+ [![Screenshot that shows Application Volume Groups deletion.](./media/application-volume-group-delete/application-volume-group-delete.png)](./media/application-volume-group-delete/application-volume-group-delete.png#lightbox)
## Next steps
azure-netapp-files Application Volume Group Deploy First Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-deploy-first-host.md
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
1. From your NetApp account, select **Application volume groups**, then **+Add Group**.
- [ ![Screenshot that shows how to add a group.](../media/azure-netapp-files/application-volume-group-add-group.png) ](../media/azure-netapp-files/application-volume-group-add-group.png#lightbox)
+ [ ![Screenshot that shows how to add a group.](./media/application-volume-group-deploy-first-host/application-volume-group-add-group.png) ](./media/application-volume-group-deploy-first-host/application-volume-group-add-group.png#lightbox)
2. In Deployment Type, select **SAP HANA** then **Next**.
- [ ![Screenshot that shows the Create Volume Group window.](../media/azure-netapp-files/application-volume-group-create-group.png) ](../media/azure-netapp-files/application-volume-group-create-group.png#lightbox)
+ [ ![Screenshot that shows the Create Volume Group window.](./media/application-volume-group-deploy-first-host/application-volume-group-create-group.png) ](./media/application-volume-group-deploy-first-host/application-volume-group-create-group.png#lightbox)
3. In the **SAP HANA** tab, provide HANA-specific information:
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
Select **Next: Volume Group**.
- [ ![Screenshot that shows the SAP HANA tag.](../media/azure-netapp-files/application-sap-hana-tag.png) ](../media/azure-netapp-files/application-sap-hana-tag.png#lightbox)
+ [ ![Screenshot that shows the SAP HANA tag.](./media/application-volume-group-deploy-first-host/application-sap-hana-tag.png) ](./media/application-volume-group-deploy-first-host/application-sap-hana-tag.png#lightbox)
4. In the **Volume group** tab, provide information for creating the volume group:
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
Select **Next: Tags**.
- [ ![Screenshot that shows the Volume Group tag.](../media/azure-netapp-files/application-volume-group-tag.png) ](../media/azure-netapp-files/application-volume-group-tag.png#lightbox)
+ [ ![Screenshot that shows the Volume Group tag.](./media/application-volume-group-deploy-first-host/application-volume-group-tag.png) ](./media/application-volume-group-deploy-first-host/application-volume-group-tag.png#lightbox)
5. In the **Tags** section of the Volume Group tab, you can add tags as needed for the volumes. Select **Next: Protocol**.
- [ ![Screenshot that shows how to add tags.](../media/azure-netapp-files/application-add-tags.png) ](../media/azure-netapp-files/application-add-tags.png#lightbox)
+ [ ![Screenshot that shows how to add tags.](./media/application-volume-group-deploy-first-host/application-add-tags.png) ](./media/application-volume-group-deploy-first-host/application-add-tags.png#lightbox)
6. In the **Protocols** section of the Volume Group tab, you can modify the **Export Policy**, which should be common to all volumes. Select **Next: Volumes**.
- [ ![Screenshot that shows the protocols tags.](../media/azure-netapp-files/application-protocols-tag.png) ](../media/azure-netapp-files/application-protocols-tag.png#lightbox)
+ [ ![Screenshot that shows the protocols tags.](./media/application-volume-group-deploy-first-host/application-protocols-tag.png) ](./media/application-volume-group-deploy-first-host/application-protocols-tag.png#lightbox)
7. The **Volumes** tab summarizes the volumes that are being created with proposed volume name, quota, and throughput.
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
The creation for the data-backup and log-backup volumes is optional.
- [ ![Screenshot that shows a list of volumes being created.](../media/azure-netapp-files/application-volume-list.png) ](../media/azure-netapp-files/application-volume-list.png#lightbox)
+ [ ![Screenshot that shows a list of volumes being created.](./media/application-volume-group-deploy-first-host/application-volume-list.png) ](./media/application-volume-group-deploy-first-host/application-volume-list.png#lightbox)
8. In the **Volumes** tab, you can select each volume to view or change the volume details. For example, select "data-*volume-name*".
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
Select **Next: Protocols** to review the protocol settings.
- [ ![Screenshot that shows the Basics tab of Create a Volume Group page.](../media/azure-netapp-files/application-create-volume-basics-tab.png) ](../media/azure-netapp-files/application-create-volume-basics-tab.png#lightbox)
+ [ ![Screenshot that shows the Basics tab of Create a Volume Group page.](./media/application-volume-group-deploy-first-host/application-create-volume-basics-tab.png) ](./media/application-volume-group-deploy-first-host/application-create-volume-basics-tab.png#lightbox)
9. In the **Protocols** tab of a volume, you can modify **File path** (the export name where the volume can be mounted) and **Export policy** as needed.
Be sure to follow the **[pinning recommendations](https://aka.ms/HANAPINNING)**
Select the **Tags** tab if you want to specify tags for a volume. Or select **Volumes** to return to the Volumes overview page.
- [ ![Screenshot that shows the Protocol tab of Create a Volume Group page.](../media/azure-netapp-files/application-create-volume-protocol-tab.png) ](../media/azure-netapp-files/application-create-volume-protocol-tab.png#lightbox)
+ [ ![Screenshot that shows the Protocol tab of Create a Volume Group page.](./media/application-volume-group-deploy-first-host/application-create-volume-protocol-tab.png) ](./media/application-volume-group-deploy-first-host/application-create-volume-protocol-tab.png#lightbox)
10. The **Volumes** page displays volume details.
- [ ![Screenshot that shows Volumes page with volume details.](../media/azure-netapp-files/application-volume-details.png) ](../media/azure-netapp-files/application-volume-details.png#lightbox)
+ [ ![Screenshot that shows Volumes page with volume details.](./media/application-volume-group-deploy-first-host/application-volume-details.png) ](./media/application-volume-group-deploy-first-host/application-volume-details.png#lightbox)
If you want to remove the optional volumes (marked with a `*`), such as data-backup volume or log-backup volume from the volume group, select the volume then select **Remove volume**. Confirm the removal in the dialog box that appears. > [!IMPORTANT] > You cannot add a removed volume back to the volume group again. You need to stop and restart the application volume group configuration.
- [ ![Screenshot that shows how to remove a volume.](../media/azure-netapp-files/application-volume-remove.png) ](../media/azure-netapp-files/application-volume-remove.png#lightbox)
+ [ ![Screenshot that shows how to remove a volume.](./media/application-volume-group-deploy-first-host/application-volume-remove.png) ](./media/application-volume-group-deploy-first-host/application-volume-remove.png#lightbox)
Select **Volumes** to return to the Volume overview page. Select **Next: Review + create**. 11. The **Review + Create** tab lists all the volumes and how they will be created. Select **Create Volume Group** to start the volume group creation.
- [ ![Screenshot that shows the Review and Create tab.](../media/azure-netapp-files/application-review-create.png) ](../media/azure-netapp-files/application-review-create.png#lightbox)
+ [ ![Screenshot that shows the Review and Create tab.](./media/application-volume-group-deploy-first-host/application-review-create.png) ](./media/application-volume-group-deploy-first-host/application-review-create.png#lightbox)
12. The **Volume Groups** deployment workflow starts, and the progress is displayed. This process can take a few minutes to complete.
- [ ![Screenshot that shows the Deployment in Progress window.](../media/azure-netapp-files/application-deployment-in-progress.png) ](../media/azure-netapp-files/application-deployment-in-progress.png#lightbox)
+ [ ![Screenshot that shows the Deployment in Progress window.](./media/application-volume-group-deploy-first-host/application-deployment-in-progress.png) ](./media/application-volume-group-deploy-first-host/application-deployment-in-progress.png#lightbox)
You can display the list of volume groups to see the new volume group. You can select the new volume group to see the details and status of each of the volumes being created. Creating a volume group is an "all-or-none" operation. If one volume cannot be created, all remaining volumes will be removed as well.
- [ ![Screenshot that shows the new volume group.](../media/azure-netapp-files/application-new-volume-group.png) ](../media/azure-netapp-files/application-new-volume-group.png#lightbox)
+ [ ![Screenshot that shows the new volume group.](./media/application-volume-group-deploy-first-host/application-new-volume-group.png) ](./media/application-volume-group-deploy-first-host/application-new-volume-group.png#lightbox)
## Next steps
azure-netapp-files Application Volume Group Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-disaster-recovery.md
Instead of using HANA System Replication (HSR), you can use cross-region replica
The following diagram illustrates cross-region replication between the source and destination HANA servers. Cross-region replication is asynchronous. As such, not all volumes need to be replicated.
- ![Diagram that shows cross-region replication between the source and destination HANA servers.](../media/azure-netapp-files/application-cross-region-replication.png)
+ ![Diagram that shows cross-region replication between the source and destination HANA servers.](./media/application-volume-group-disaster-recovery/application-cross-region-replication.png)
> [!NOTE]
> When you use an HA deployment with HSR at the primary side, you can choose to replicate not only the primary HANA system as described in this section, but also the HANA secondary system using cross-region replication. To automatically adapt the naming convention, you select both the **HSR secondary** and **Disaster recovery destination** options in the Create a Volume Group screen. The prefix will then be changed to `DR2-`.
The following example adds volumes to an SAP HANA system. The system serves as a
Click **Next: Volume Group**.
- [ ![Screenshot that shows the Create a Volume Group page in a cross-region replication configuration.](../media/azure-netapp-files/application-cross-region-create-volume.png) ](../media/azure-netapp-files/application-cross-region-create-volume.png#lightbox)
+ [ ![Screenshot that shows the Create a Volume Group page in a cross-region replication configuration.](./media/application-volume-group-disaster-recovery/application-cross-region-create-volume.png) ](./media/application-volume-group-disaster-recovery/application-cross-region-create-volume.png#lightbox)
3. In the **Volume group** tab, provide information for creating the volume group:
The following example adds volumes to an SAP HANA system. The system serves as a
5. In the **Replication** section of the Volume Group tab, the Replication Schedule field defaults to "Multiple" (disabled). The default replication schedules are different for the replicated volumes. As such, you can modify the replication schedules only for each volume individually from the Volumes tab, and not globally for the entire volume group.
- [ ![Screenshot that shows Multiple field is disabled in Create a Volume Group page.](../media/azure-netapp-files/application-cross-region-multiple-disabled.png) ](../media/azure-netapp-files/application-cross-region-multiple-disabled.png#lightbox)
+ [ ![Screenshot that shows Multiple field is disabled in Create a Volume Group page.](./media/application-volume-group-disaster-recovery/application-cross-region-multiple-disabled.png) ](./media/application-volume-group-disaster-recovery/application-cross-region-multiple-disabled.png#lightbox)
Click **Next: Tags**.
The following example adds volumes to an SAP HANA system. The system serves as a
The default type for the data-backup volume is DP, but this setting can be changed to RW.
- [ ![Screenshot that shows volume types in Create a Volume Group page.](../media/azure-netapp-files/application-cross-region-volume-types.png) ](../media/azure-netapp-files/application-cross-region-volume-types.png#lightbox)
+ [ ![Screenshot that shows volume types in Create a Volume Group page.](./media/application-volume-group-disaster-recovery/application-cross-region-volume-types.png) ](./media/application-volume-group-disaster-recovery/application-cross-region-volume-types.png#lightbox)
8. Click each volume with the DP type to specify the **Source volume ID**. For more information, see [Locate the source volume resource ID](cross-region-replication-create-peering.md#locate-the-source-volume-resource-id). You can optionally change the default replication schedule of a volume. See [Replication schedules, RTO, and RPO](#replication-schedules-rto-and-rpo) for the replication schedule options.
- [ ![Screenshot that shows the Replication tab in Create a Volume Group page.](../media/azure-netapp-files/application-cross-region-replication-tab.png) ](../media/azure-netapp-files/application-cross-region-replication-tab.png#lightbox)
+ [ ![Screenshot that shows the Replication tab in Create a Volume Group page.](./media/application-volume-group-disaster-recovery/application-cross-region-replication-tab.png) ](./media/application-volume-group-disaster-recovery/application-cross-region-replication-tab.png#lightbox)
9. After you create the volume group, set up replication by following instructions in [Authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume).
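As a rough CLI equivalent of steps 8 and 9, the sketch below creates a data-protection (DP) destination volume that references the source volume's resource ID and then authorizes replication from the source side. All resource names, the subnet, the quota, and the `daily` schedule are placeholder assumptions; verify the exact flags with `az netappfiles volume create --help` and `az netappfiles volume replication approve --help` for your CLI version.

```bash
# Create the destination (DP) volume in the DR region, pointing at the source volume.
az netappfiles volume create \
  --resource-group myDRRG \
  --account-name mydraccount \
  --pool-name mydrpool \
  --name DR-SID-data-mnt00001 \
  --location westus2 \
  --service-level Premium \
  --usage-threshold 4000 \
  --file-path dr-sid-data-mnt00001 \
  --vnet mydrvnet \
  --subnet mydrsubnet \
  --protocol-types NFSv4.1 \
  --endpoint-type dst \
  --remote-volume-resource-id "<source-volume-resource-id>" \
  --replication-schedule daily

# Authorize replication on the source volume by passing the destination volume's resource ID.
az netappfiles volume replication approve \
  --resource-group mySourceRG \
  --account-name mysourceaccount \
  --pool-name mysourcepool \
  --name SID-data-mnt00001 \
  --remote-volume-resource-id "<destination-volume-resource-id>"
```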
In this scenario, you typically don't change roles for primary and secondary s
The following diagram describes this scenario:
-[ ![Diagram that shows replication for only the primary HANA database volumes.](../media/azure-netapp-files/replicate-only-primary-database-volumes.png) ](../media/azure-netapp-files/replicate-only-primary-database-volumes.png#lightbox)
+[ ![Diagram that shows replication for only the primary HANA database volumes.](./media/application-volume-group-disaster-recovery/replicate-only-primary-database-volumes.png) ](./media/application-volume-group-disaster-recovery/replicate-only-primary-database-volumes.png#lightbox)
In this scenario, a DR setup must include only the volumes of the primary HANA system. With the daily replication of the primary data volume and the log backups of both the primary and secondary systems, the system can be recovered at the DR site. In the diagram, a single volume is used for the log backups of the primary and secondary systems.
For reasons other than HA, you might want to periodically switch roles between t
The following diagram describes this scenario:
-[ ![Diagram that shows replication for both the primary and the secondary HANA database volumes.](../media/azure-netapp-files/replicate-both-primary-secondary-database-volumes.png) ](../media/azure-netapp-files/replicate-both-primary-secondary-database-volumes.png#lightbox)
+[ ![Diagram that shows replication for both the primary and the secondary HANA database volumes.](./media/application-volume-group-disaster-recovery/replicate-both-primary-secondary-database-volumes.png) ](./media/application-volume-group-disaster-recovery/replicate-both-primary-secondary-database-volumes.png#lightbox)
In this scenario, you might want to replicate both sets of volumes from the primary and secondary HANA systems as shown in the diagram.
azure-netapp-files Application Volume Group Manage Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-manage-volumes.md
You can manage a volume from its volume group. You can resize, delete, or change
1. From your NetApp account, select **Application volume groups**. Click a volume group to display the volumes in the group. Select the volume that you want to resize, delete, or change throughput for. The volume overview is displayed.
- [![Screenshot that shows Application Volume Groups overview page.](../media/azure-netapp-files/application-volume-group-overview.png)](../media/azure-netapp-files/application-volume-group-overview.png#lightbox)
+ [![Screenshot that shows Application Volume Groups overview page.](./media/application-volume-group-manage-volumes/application-volume-group-overview.png)](./media/application-volume-group-manage-volumes/application-volume-group-overview.png#lightbox)
1. To resize the volume, click **Resize** and specify the quota in GiB.
- ![Screenshot that shows the Update Volume Quota window.](../media/azure-netapp-files/application-volume-resize.png)
+ ![Screenshot that shows the Update Volume Quota window.](./media/application-volume-group-manage-volumes/application-volume-resize.png)
2. To change the throughput for the volume, click **Change throughput** and specify the intended throughput in MiB/s.
- ![Screenshot that shows the Change Throughput window.](../media/azure-netapp-files/application-volume-change-throughput.png)
+ ![Screenshot that shows the Change Throughput window.](./media/application-volume-group-manage-volumes/application-volume-change-throughput.png)
3. To delete the volume in the volume group, click **Delete**. If you are prompted, type the volume name to confirm the deletion.

   > [!IMPORTANT]
   > The volume deletion operation cannot be undone.
- ![Screenshot that shows the Delete Volume window.](../media/azure-netapp-files/application-volume-delete.png)
+ ![Screenshot that shows the Delete Volume window.](./media/application-volume-group-manage-volumes/application-volume-delete.png)
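The same three operations can be scripted. The sketch below is a minimal Azure CLI equivalent with placeholder names; `--usage-threshold` is the volume quota in GiB, and the throughput flag applies only to volumes in manual QoS capacity pools (confirm the flag with `az netappfiles volume update --help` on your CLI version).

```bash
# Resize the volume quota to 500 GiB (placeholder names throughout).
az netappfiles volume update \
  --resource-group myRG --account-name myaccount1 \
  --pool-name mypool1 --name myvol1 \
  --usage-threshold 500

# Change throughput (manual QoS pools only); the value is in MiB/s.
az netappfiles volume update \
  --resource-group myRG --account-name myaccount1 \
  --pool-name mypool1 --name myvol1 \
  --throughput-mibps 64

# Delete the volume. This cannot be undone.
az netappfiles volume delete \
  --resource-group myRG --account-name myaccount1 \
  --pool-name mypool1 --name myvol1
```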
## Next steps
azure-netapp-files Auxiliary Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/auxiliary-groups.md
Accept requests from the kernel to map user id numbers into lists of group
When an access request is made, only 16 GIDs are passed in the RPC portion of the packet. Any GID beyond the limit of 16 is dropped by the protocol. Extended GIDs in Azure NetApp Files can only be used with external name services such as LDAP.
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
Azure Government users can access Azure NetApp Files by pointing their browsers to **portal.azure.us**. The portal site name is **Microsoft Azure Government**. For more information, see [Connect to Azure Government using portal](../azure-government/documentation-government-get-started-connect-with-portal.md).
-![Screenshot that shows the Azure Government portal highlighting portal.azure.us as the URL.](../media/azure-netapp-files/azure-government.jpg)
+![Screenshot that shows the Azure Government portal highlighting portal.azure.us as the URL.](./media/azure-government/azure-government.jpg)
From the Azure Government portal, you can access Azure NetApp Files the same way you would in the Azure portal. For example, you can enter **Azure NetApp Files** in the portal's **Search resources** box, and then select **Azure NetApp Files** from the list that appears.
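If you manage Azure NetApp Files from the command line instead of the portal, point the Azure CLI at the Azure Government cloud before signing in. A minimal sketch:

```bash
# Switch the CLI to the Azure Government cloud, then sign in.
az cloud set --name AzureUSGovernment
az login

# Optional: confirm the active cloud.
az cloud show --query name --output tsv
```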
azure-netapp-files Azure Netapp Files Configure Export Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-configure-export-policy.md
Before modifying policy rules with NFS Kerberos enabled, see [Export policy rule
* **Read-only** and **Read/Write**: If you use Kerberos encryption with NFSv4.1, follow the instructions in [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md). For performance impact of Kerberos, see [Performance impact of Kerberos on NFSv4.1 volumes](performance-impact-kerberos.md).
- ![Kerberos security options](../media/azure-netapp-files/kerberos-security-options.png)
+ ![Kerberos security options](./media/azure-netapp-files-configure-export-policy/kerberos-security-options.png)
* **Root Access**: Specify whether the `root` account can access the volume. By default, Root Access is set to **On**, and the `root` account has access to the volume. This option is not available for NFSv4.1 Kerberos volumes.
- ![Export policy](../media/azure-netapp-files/azure-netapp-files-export-policy.png)
+ ![Export policy](./media/azure-netapp-files-configure-export-policy/azure-netapp-files-export-policy.png)
* **Chown Mode**: Modify the change ownership mode as needed to set the ownership management capabilities of files and directories. Two options are available:
Before modifying policy rules with NFS Kerberos enabled, see [Export policy rule
Registration requirements and considerations apply for setting **`Chown Mode`**. Follow the instructions in [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
- ![Screenshot that shows the change ownership mode option.](../media/azure-netapp-files/chown-mode-export-policy.png)
+ ![Screenshot that shows the change ownership mode option.](./media/azure-netapp-files-configure-export-policy/chown-mode-export-policy.png)
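Export policy rules can also be managed with the Azure CLI. The sketch below appends a read/write NFSv4.1 rule for one client subnet; the names and the CIDR are placeholders, and the flags for Kerberos security options, root access, and chown mode vary by CLI version, so check `az netappfiles volume export-policy add --help` before relying on them.

```bash
# Add an export policy rule allowing read/write NFSv4.1 access for a client subnet.
az netappfiles volume export-policy add \
  --resource-group myRG \
  --account-name myaccount1 \
  --pool-name mypool1 \
  --volume-name myvol1 \
  --rule-index 2 \
  --allowed-clients 10.0.0.0/24 \
  --nfsv41 true \
  --nfsv3 false \
  --unix-read-write true \
  --unix-read-only false

# Review the resulting policy.
az netappfiles volume export-policy list \
  --resource-group myRG --account-name myaccount1 \
  --pool-name mypool1 --volume-name myvol1
```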
## Next steps

* [Understand NAS permissions in Azure NetApp Files](network-attached-storage-permissions.md)
azure-netapp-files Azure Netapp Files Configure Nfsv41 Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-configure-nfsv41-domain.md
The root user mapping can illustrate what happens if there is a mismatch between
In the following directory listing example, the user `root` mounts a volume on a Linux client that uses its default configuration `localdomain` for the ID authentication domain, which is different from Azure NetApp Files' default configuration of `defaultv4iddomain.com`. In the listing of the files in the directory, `file1` shows as being mapped to `nobody`, when it should be owned by the root user.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
1. Select **Configure**.
1. To use the default domain `defaultv4iddomain.com`, select the box next to **Use Default NFSv4 ID Domain**. To use another domain, clear the checkbox and provide the name of the NFSv4.1 ID domain.
- :::image type="content" source="../media/azure-netapp-files/nfsv4-id-domain.png" alt-text="Screenshot with field to set NFSv4 domain." lightbox="../media/azure-netapp-files/nfsv4-id-domain.png":::
+ :::image type="content" source="./media/azure-netapp-files-configure-nfsv41-domain/nfsv4-id-domain.png" alt-text="Screenshot with field to set NFSv4 domain." lightbox="./media/azure-netapp-files-configure-nfsv41-domain/nfsv4-id-domain.png":::
1. Select **Save**.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
The following example shows the resulting user/group change:
-![Screenshot that shows an example of the resulting user/group change.](../media/azure-netapp-files/azure-netapp-files-nfsv41-resulting-config.png)
+![Screenshot that shows an example of the resulting user/group change.](./media/azure-netapp-files-configure-nfsv41-domain/azure-netapp-files-nfsv41-resulting-config.png)
As the example shows, the user/group has now changed from `nobody` to `root`.
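On the Linux client side, the NFSv4 ID domain is commonly set in `/etc/idmapd.conf`. A minimal sketch of matching the client to the Azure NetApp Files default domain is shown below; the file location and the `nfsidmap` utility are standard Linux NFS tooling, but package and service names vary by distribution.

```bash
# /etc/idmapd.conf on the NFS client: set the domain to match Azure NetApp Files.
#   [General]
#   Domain = defaultv4iddomain.com

# After editing the file, clear the client's ID-mapping cache so the change takes effect.
sudo nfsidmap -c
```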
Azure NetApp Files supports local users and groups (created locally on the NFS c
In the following example, `Host1` has three user accounts (`testuser01`, `testuser02`, `testuser03`):
-![Screenshot that shows that Host1 has three existing test user accounts.](../media/azure-netapp-files/azure-netapp-files-nfsv41-host1-users.png)
+![Screenshot that shows that Host1 has three existing test user accounts.](./media/azure-netapp-files-configure-nfsv41-domain/azure-netapp-files-nfsv41-host1-users.png)
On `Host2`, no corresponding user accounts exist, but the same volume is mounted on both hosts:
-![Resulting configuration for NFSv4.1](../media/azure-netapp-files/azure-netapp-files-nfsv41-host2-users.png)
+![Resulting configuration for NFSv4.1](./media/azure-netapp-files-configure-nfsv41-domain/azure-netapp-files-nfsv41-host2-users.png)
To resolve this issue, either create the missing accounts on the NFS client or configure your NFS clients to use the LDAP server that Azure NetApp Files is using for centrally managed UNIX identities.
azure-netapp-files Azure Netapp Files Cost Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-cost-model.md
If your capacity pool size requirements fluctuate (for example, because of varia
For example, suppose you use Premium capacity for 24 hours (1 day) at 10 TiB, 96 hours (4 days) at 24 TiB, four periods of 6 hours (1 day in total) at 5 TiB, 480 hours (20 days) at 6 TiB, and the month's remaining hours at 0 TiB. A dynamic cloud consumption deployment profile looks different from a traditional static on-premises consumption profile:
-[ ![Bar chart that shows dynamic versus static capacity pool provisioning.](../media/azure-netapp-files/cost-model-example-one-capacity.png) ](../media/azure-netapp-files/cost-model-example-one-capacity.png#lightbox)
+[ ![Bar chart that shows dynamic versus static capacity pool provisioning.](./media/azure-netapp-files-cost-model/cost-model-example-one-capacity.png) ](./media/azure-netapp-files-cost-model/cost-model-example-one-capacity.png#lightbox)
When costs are billed at $0.000403 per GiB/hour ([pricing depending on the region](https://azure.microsoft.com/pricing/details/netapp/)), the monthly cost breakdown looks like this:
When costs are billed at $0.000403 per GiB/hour ([pricing depending on the regio
* 6 TiB x 480 hours x $0.000403 per GiB/hour = $1,188.50
* Total = **$2,238.33**
-[ ![Bar chart that shows static versus dynamic service level cost model.](../media/azure-netapp-files/cost-model-example-one-pricing.png) ](../media/azure-netapp-files/cost-model-example-one-pricing.png#lightbox)
+[ ![Bar chart that shows static versus dynamic service level cost model.](./media/azure-netapp-files-cost-model/cost-model-example-one-pricing.png) ](./media/azure-netapp-files-cost-model/cost-model-example-one-pricing.png#lightbox)
This scenario constitutes a monthly savings of $4,892.64 compared to static provisioning.
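Each line item above follows the same formula: provisioned capacity in GiB x hours provisioned x the hourly rate per GiB. For example, the 6-TiB, 480-hour line can be reproduced with a quick shell calculation (the rate shown is the example Premium rate used in this article):

```bash
# 6 TiB = 6 * 1024 GiB, provisioned for 480 hours at $0.000403 per GiB/hour.
echo "6 * 1024 * 480 * 0.000403" | bc
# Prints 1188.495360, i.e. roughly $1,188.50.
```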
If your capacity pool size requirements remain the same but performance requirem
Consider a scenario where the capacity requirement is a constant 24 TiB. But your performance needs fluctuate between 384 hours (16 days) of Standard service level, 120 hours (5 days) of Premium service level, 168 hours (7 days) of Ultra service level, and then back to 48 hours (2 days) of standard service level performance. In this scenario, a dynamic cloud consumption deployment profile looks different compared to a traditional static on-premises consumption profile:
-[ ![Bar chart that shows provisioning with and without dynamic service level change.](../media/azure-netapp-files/cost-model-example-two-capacity.png) ](../media/azure-netapp-files/cost-model-example-two-capacity.png#lightbox)
+[ ![Bar chart that shows provisioning with and without dynamic service level change.](./media/azure-netapp-files-cost-model/cost-model-example-two-capacity.png) ](./media/azure-netapp-files-cost-model/cost-model-example-two-capacity.png#lightbox)
In this case, when costs are billed at $0.000202 per GiB/hour (Standard), $0.000403 per GiB/hour (Premium) and $0.000538 per GiB/hour (Ultra) respectively ([pricing depending on the region](https://azure.microsoft.com/pricing/details/netapp/)), the monthly cost breakdown looks like this:
In this case, when costs are billed at $0.000202 per GiB/hour (Standard), $0.000
* 24 TiB x 48 hours x $0.000202 per GiB/hour = $238.29
* Total = **$5,554.37**
-[ ![Bar chart that shows static versus dynamic service level change cost model.](../media/azure-netapp-files/cost-model-example-two-pricing.png) ](../media/azure-netapp-files/cost-model-example-two-pricing.png#lightbox)
+[ ![Bar chart that shows static versus dynamic service level change cost model.](./media/azure-netapp-files-cost-model/cost-model-example-two-pricing.png) ](./media/azure-netapp-files-cost-model/cost-model-example-two-pricing.png#lightbox)
This scenario constitutes a monthly savings of $3,965.39 compared to static provisioning.
The following diagram illustrates the concepts.
* 7.9 TiB of capacity is used (3.5 TiB, 400 GiB, 4 TiB in Volumes 1, 2, and 3).
* The capacity pool has 100 GiB of unprovisioned capacity remaining.

## Next steps
azure-netapp-files Azure Netapp Files Create Netapp Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-netapp-account.md
You must register your subscription for using the NetApp Resource Provider. For
* **Resource group**: Use an existing resource group or create a new one.
* **Location**: Select the region where you want the account and its child resources to be located.
- ![Screenshot that shows New NetApp account.](../media/azure-netapp-files/azure-netapp-files-new-netapp-account.png)
+ ![Screenshot that shows New NetApp account.](./media/azure-netapp-files-create-netapp-account/azure-netapp-files-new-netapp-account.png)
1. Select **Create**. The NetApp account you created now appears in the Azure NetApp Files pane.
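You can create the same NetApp account with the Azure CLI. A minimal sketch, with placeholder resource group, account name, and region:

```bash
# Create a NetApp account in an existing resource group.
az netappfiles account create \
  --resource-group myRG \
  --name myaccount1 \
  --location eastus
```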
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
Before creating an SMB volume, you need to create an Active Directory connection
1. Select the **Volumes** blade from the Capacity Pools blade.
- ![Navigate to Volumes](../media/azure-netapp-files/azure-netapp-files-navigate-to-volumes.png)
+ ![Navigate to Volumes](./media/shared/azure-netapp-files-navigate-to-volumes.png)
2. Select **+ Add volume** to create a volume. The Create a Volume window appears.
Before creating an SMB volume, you need to create an Active Directory connection
If you haven't delegated a subnet, you can select **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
- ![Create subnet](../media/azure-netapp-files/azure-netapp-files-create-subnet.png)
+ ![Create subnet](./media/shared/azure-netapp-files-create-subnet.png)
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
Before creating an SMB volume, you need to create an Active Directory connection
For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
- ![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
+ ![Show advanced selection](./media/shared/volume-create-advanced-selection.png)
4. Select **Protocol** and complete the following information:
   * Select **SMB** as the protocol type for the volume.
Before creating an SMB volume, you need to create an Active Directory connection
**Custom applications are not supported with SMB Continuous Availability.**
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-protocol-smb.png" alt-text="Screenshot showing the Protocol tab of creating an SMB volume." lightbox="../media/azure-netapp-files/azure-netapp-files-protocol-smb.png":::
+ :::image type="content" source="./media/azure-netapp-files-create-volumes-smb/azure-netapp-files-protocol-smb.png" alt-text="Screenshot showing the Protocol tab of creating an SMB volume." lightbox="./media/azure-netapp-files-create-volumes-smb/azure-netapp-files-protocol-smb.png":::
5. Select **Review + Create** to review the volume details. Then select **Create** to create the SMB volume.
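For scripted deployments, an SMB volume can also be created with the Azure CLI once the Active Directory connection exists on the account. The following is a minimal sketch with placeholder names; `--usage-threshold` is the quota in GiB, and `CIFS` is the protocol value used for SMB.

```bash
# Create an SMB (CIFS) volume in an existing capacity pool and delegated subnet.
az netappfiles volume create \
  --resource-group myRG \
  --account-name myaccount1 \
  --pool-name mypool1 \
  --name mysmbvol1 \
  --location eastus \
  --service-level Premium \
  --usage-threshold 1024 \
  --file-path mysmbvol1 \
  --vnet myvnet1 \
  --subnet mysubnet1 \
  --protocol-types CIFS
```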
Access to an SMB volume is managed through permissions.
You can set permissions for a file or folder by using the **Security** tab of the object's properties in the Windows SMB client.
-![Set file and folder permissions](../media/azure-netapp-files/set-file-folder-permissions.png)
+![Set file and folder permissions](./media/azure-netapp-files-create-volumes-smb/set-file-folder-permissions.png)
### Modify SMB share permissions
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
This article shows you how to create an NFS volume. For SMB volumes, see [Create
1. Select the **Volumes** blade from the Capacity Pools blade. Select **+ Add volume** to create a volume.
- ![Navigate to Volumes](../media/azure-netapp-files/azure-netapp-files-navigate-to-volumes.png)
+ ![Navigate to Volumes](./media/shared/azure-netapp-files-navigate-to-volumes.png)
2. In the Create a Volume window, select **Create**, and provide information for the following fields under the Basics tab:
   * **Volume name**
This article shows you how to create an NFS volume. For SMB volumes, see [Create
If you have not delegated a subnet, you can select **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each Virtual Network, only one subnet can be delegated to Azure NetApp Files.
- ![Create subnet](../media/azure-netapp-files/azure-netapp-files-create-subnet.png)
+ ![Create subnet](./media/shared/azure-netapp-files-create-subnet.png)
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
- ![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
+ ![Show advanced selection](./media/shared/volume-create-advanced-selection.png)
> [!NOTE]
> By default, the `.snapshot` directory path is hidden from NFSv4.1 clients. Enabling the **Hide snapshot path** option will hide the `.snapshot` directory from NFSv3 clients; the directory will still be accessible.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* Optionally, [configure export policy for the NFS volume](azure-netapp-files-configure-export-policy.md).
- ![Specify NFS protocol](../media/azure-netapp-files/azure-netapp-files-protocol-nfs.png)
+ ![Specify NFS protocol](./media/azure-netapp-files-create-volumes/azure-netapp-files-protocol-nfs.png)
4. Select **Review + Create** to review the volume details. Select **Create** to create the volume.
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
You must delegate a subnet to Azure NetApp Files. When you create a volume, you
* **Address range**: Specify the IP address range.
* **Subnet delegation**: Select **Microsoft.NetApp/volumes**.
- ![Subnet delegation](../media/azure-netapp-files/azure-netapp-files-subnet-delegation.png)
+ ![Subnet delegation](./media/azure-netapp-files-delegate-subnet/azure-netapp-files-subnet-delegation.png)
You can also create and delegate a subnet when you [create a volume for Azure NetApp Files](azure-netapp-files-create-volumes.md).
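The delegated subnet can also be created with the Azure CLI. The sketch below assumes the virtual network already exists; the names and the address prefix are placeholders.

```bash
# Create a subnet delegated to Azure NetApp Files volumes.
az network vnet subnet create \
  --resource-group myRG \
  --vnet-name myvnet1 \
  --name anf-delegated-subnet \
  --address-prefixes 10.0.1.0/28 \
  --delegations Microsoft.NetApp/volumes
```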
azure-netapp-files Azure Netapp Files Manage Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-manage-snapshots.md
Azure NetApp Files supports creating on-demand [snapshots](snapshots-introductio
1. Go to the volume that you want to create a snapshot for. Select **Snapshots**.
- ![Screenshot that shows how to navigate to the snapshots blade.](../media/azure-netapp-files/azure-netapp-files-navigate-to-snapshots.png)
+ ![Screenshot that shows how to navigate to the snapshots blade.](./media/azure-netapp-files-manage-snapshots/azure-netapp-files-navigate-to-snapshots.png)
2. Select **+ Add snapshot** to create an on-demand snapshot for a volume.
- ![Screenshot that shows how to add a snapshot.](../media/azure-netapp-files/azure-netapp-files-add-snapshot.png)
+ ![Screenshot that shows how to add a snapshot.](./media/azure-netapp-files-manage-snapshots/azure-netapp-files-add-snapshot.png)
3. In the New Snapshot window, provide a name for the new snapshot that you are creating.
- ![Screenshot that shows the New Snapshot window.](../media/azure-netapp-files/azure-netapp-files-new-snapshot.png)
+ ![Screenshot that shows the New Snapshot window.](./media/azure-netapp-files-manage-snapshots/azure-netapp-files-new-snapshot.png)
4. Select **OK**.
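An on-demand snapshot can also be taken from the Azure CLI. A minimal sketch with placeholder names (some CLI versions also require `--location`):

```bash
# Create an on-demand snapshot of a volume.
az netappfiles snapshot create \
  --resource-group myRG \
  --account-name myaccount1 \
  --pool-name mypool1 \
  --volume-name myvol1 \
  --name mysnapshot-$(date +%Y%m%d%H%M)

# List snapshots for the volume.
az netappfiles snapshot list \
  --resource-group myRG --account-name myaccount1 \
  --pool-name mypool1 --volume-name myvol1 --output table
```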
azure-netapp-files Azure Netapp Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-metrics.md
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
- From Azure Monitor, select **Metrics** and select a capacity pool or volume. Then select **Metric** to view the available metrics:
- :::image type="content" source="../media/azure-netapp-files/metrics-select-pool-volume.png" alt-text="Screenshot that shows how to access Azure NetApp Files metrics for capacity pools or volumes." lightbox="../media/azure-netapp-files/metrics-select-pool-volume.png":::
+ :::image type="content" source="./media/azure-netapp-files-metrics/metrics-select-pool-volume.png" alt-text="Screenshot that shows how to access Azure NetApp Files metrics for capacity pools or volumes." lightbox="./media/azure-netapp-files-metrics/metrics-select-pool-volume.png":::
- From the Azure NetApp Files capacity pool or volume, select **Metrics**. Then select **Metric** to view the available metrics:
- :::image type="content" source="../media/azure-netapp-files/metrics-navigate-volume.png" alt-text="Snapshot that shows how to navigate to the Metric pull-down." lightbox="../media/azure-netapp-files/metrics-navigate-volume.png":::
+ :::image type="content" source="./media/azure-netapp-files-metrics/metrics-navigate-volume.png" alt-text="Snapshot that shows how to navigate to the Metric pull-down." lightbox="./media/azure-netapp-files-metrics/metrics-navigate-volume.png":::
## <a name="capacity_pools"></a>Usage metrics for capacity pools
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
Consider repurposing the volume and delegating a different volume with a larger size and/or in a higher service level to meet your application requirements. If it's an NFS volume, consider changing mount options to reduce data flow if your application supports those changes.
- :::image type="content" source="../media/azure-netapp-files/throughput-limit-reached.png" alt-text="Screenshot that shows Azure NetApp Files metrics a line graph demonstrating throughput limit reached." lightbox="../media/azure-netapp-files/throughput-limit-reached.png":::
+ :::image type="content" source="./media/azure-netapp-files-metrics/throughput-limit-reached.png" alt-text="Screenshot that shows Azure NetApp Files metrics a line graph demonstrating throughput limit reached." lightbox="./media/azure-netapp-files-metrics/throughput-limit-reached.png":::
## Performance metrics for volumes
azure-netapp-files Azure Netapp Files Mount Unmount Volumes For Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md
For more information about how NFS operates in Azure NetApp Files, see [Understa
1. Review the [Linux NFS mount options best practices](performance-linux-mount-options.md).
2. Select the **Volumes** pane and then the NFS volume that you want to mount.
3. To mount the NFS volume using a Linux client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume.
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png" alt-text="Screenshot of Mount instructions." lightbox="../media/azure-netapp-files/azure-netapp-files-mount-instructions-nfs.png":::
+ :::image type="content" source="./media/azure-netapp-files-mount-unmount-volumes-for-virtual-machines/azure-netapp-files-mount-instructions-nfs.png" alt-text="Screenshot of Mount instructions." lightbox="./media/azure-netapp-files-mount-unmount-volumes-for-virtual-machines/azure-netapp-files-mount-instructions-nfs.png":::
* Ensure that you use the `vers` option in the `mount` command to specify the NFS protocol version that corresponds to the volume you want to mount. For example, if the NFS version is NFSv4.1: `sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys $MOUNTTARGETIPADDRESS:/$VOLUMENAME $MOUNTPOINT`
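To make the mount persistent across reboots, the same options can go in `/etc/fstab`. A minimal sketch using the same placeholders as the mount command above:

```bash
# Append an /etc/fstab entry so the volume mounts at boot (substitute the placeholders first).
echo "$MOUNTTARGETIPADDRESS:/$VOLUMENAME $MOUNTPOINT nfs rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys 0 0" | sudo tee -a /etc/fstab

# Mount everything from fstab and verify.
sudo mount -a
mount | grep "$MOUNTPOINT"
```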
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Configuring UDRs on the source VM subnets with the address prefix of delegated s
The following diagram illustrates an Azure-native environment:

### Local VNet
In the diagram above, although VM 3 can connect to Volume 1, VM 4 can't connect
The following diagram illustrates an Azure-native environment with cross-region VNet peering. With Standard network features, VMs are able to connect to volumes in another region via global or cross-region VNet peering. The above diagram adds a second region to the configuration in the [local VNet peering section](#vnet-peering). For VNet 4 in this diagram, an Azure NetApp Files volume is created in a delegated subnet and can be mounted on VM5 in the application subnet.
In the diagram, VM2 in Region 1 can connect to Volume 3 in Region 2. VM5 in Regi
The following diagram illustrates a hybrid environment:

In the hybrid scenario, applications in on-premises datacenters need access to resources in Azure, whether you're extending your datacenter to Azure, using Azure native services, or planning for disaster recovery. See [VPN Gateway planning options](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json#planningtable) for information on how to connect multiple on-premises resources to resources in Azure through a site-to-site VPN or ExpressRoute.
azure-netapp-files Azure Netapp Files Performance Metrics Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-metrics-volumes.md
Ensure that you choose the correct service level and volume quota size for the e
You should perform the benchmark testing in the same VNet as Azure NetApp Files. The example below demonstrates the recommendation:
-![VNet recommendations](../media/azure-netapp-files/azure-netapp-files-benchmark-testing-vnet.png)
+![VNet recommendations](./media/azure-netapp-files-performance-metrics-volumes/azure-netapp-files-benchmark-testing-vnet.png)
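FIO, which is also referenced later in this document, is a common way to run such a benchmark from a VM in the same VNet. The following is a minimal sketch of a 64-KiB sequential read test against a mounted Azure NetApp Files volume; the mount path, data size, job count, and runtime are placeholders to adjust for your own test.

```bash
# 64 KiB sequential read test against a mounted volume (placeholder path and sizes).
fio --name=anf-seqread \
    --directory=/mnt/anfvolume \
    --rw=read \
    --bs=64k \
    --size=4G \
    --numjobs=4 \
    --iodepth=16 \
    --ioengine=libaio \
    --direct=1 \
    --group_reporting \
    --runtime=60 --time_based
```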
## Performance benchmarking tools
You can view historical data for the following information:
You can access Azure NetApp Files counters on a per-volume basis from the Metrics page, as shown below:
-![Azure Monitor metrics](../media/azure-netapp-files/azure-netapp-files-benchmark-monitor-metrics.png)
+![Azure Monitor metrics](./media/azure-netapp-files-performance-metrics-volumes/azure-netapp-files-benchmark-monitor-metrics.png)
You can also create a dashboard in Azure Monitor for Azure NetApp Files by going to the Metrics page, filtering for NetApp, and specifying the volume counters of interest:
-![Azure Monitor dashboard](../media/azure-netapp-files/azure-netapp-files-benchmark-monitor-dashboard.png)
+![Azure Monitor dashboard](./media/azure-netapp-files-performance-metrics-volumes/azure-netapp-files-benchmark-monitor-dashboard.png)
### Azure Monitor API access
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Use the Azure portal, PowerShell, or the Azure CLI to [register for NetApp Resou
1. In the Azure portal's search box, enter **Azure NetApp Files** and then select **Azure NetApp Files** from the list that appears.
- ![Select Azure NetApp Files](../media/azure-netapp-files/azure-netapp-files-select-azure-netapp-files.png)
+ ![Select Azure NetApp Files](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-select-azure-netapp-files.png)
2. Select **+ Create** to create a new NetApp account.
Use the Azure portal, PowerShell, or the Azure CLI to [register for NetApp Resou
3. Select **Create new** to create a new resource group. Enter **myRG1** for the resource group name. Select **OK**.
4. Select your account location.
- ![New NetApp Account window](../media/azure-netapp-files/azure-netapp-files-new-account-window.png)
+ ![New NetApp Account window](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-new-account-window.png)
- ![Resource group window](../media/azure-netapp-files/azure-netapp-files-resource-group-window.png)
+ ![Resource group window](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-resource-group-window.png)
4. Select **Create** to create your new NetApp account.
The following code snippet shows how to create a NetApp account in an Azure Reso
1. From the Azure NetApp Files management blade, select your NetApp account (**myaccount1**).
- ![Screenshot of selecting NetApp account menu.](../media/azure-netapp-files/azure-netapp-files-select-netapp-account.png)
+ ![Screenshot of selecting NetApp account menu.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-select-netapp-account.png)
2. From the Azure NetApp Files management blade of your NetApp account, select **Capacity pools**.
- ![Screenshot of Capacity pool selection interface.](../media/azure-netapp-files/azure-netapp-files-click-capacity-pools.png)
+ ![Screenshot of Capacity pool selection interface.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-click-capacity-pools.png)
3. Select **+ Add pools**.
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot of new capacity pool options.":::
+ :::image type="content" source="./media/shared/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot of new capacity pool options.":::
4. Provide information for the capacity pool:
   * Enter **mypool1** as the pool name.
The following code snippet shows how to create a capacity pool in an Azure Resou
1. From the Azure NetApp Files management blade of your NetApp account, select **Volumes**.
- ![Screenshot of select volumes interface.](../media/azure-netapp-files/azure-netapp-files-click-volumes.png)
+ ![Screenshot of select volumes interface.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-click-volumes.png)
2. Select **+ Add volume**.
- ![Screenshot of add volumes interface.](../media/azure-netapp-files/azure-netapp-files-click-add-volumes.png)
+ ![Screenshot of add volumes interface.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-click-add-volumes.png)
3. In the Create a Volume window, provide information for the volume:
   1. Enter **myvol1** as the volume name.
The following code snippet shows how to create a capacity pool in an Azure Resou
   * Select **OK** to create the VNet.
5. In Subnet, select the newly created VNet (**myvnet1**) as the delegate subnet.
- ![Screenshot of create a volume window.](../media/azure-netapp-files/azure-netapp-files-create-volume-window.png)
+ ![Screenshot of create a volume window.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-create-volume-window.png)
- ![Screenshot of create a virtual network window.](../media/azure-netapp-files/azure-netapp-files-create-virtual-network-window.png)
+ ![Screenshot of create a virtual network window.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-create-virtual-network-window.png)
4. Select **Protocol**, and then complete the following actions:
   * Select **NFS** as the protocol type for the volume.
The following code snippet shows how to create a capacity pool in an Azure Resou
* Select the NFS version (**NFSv3** or **NFSv4.1**) for the volume. See [considerations](azure-netapp-files-create-volumes.md#considerations) and [best practice](azure-netapp-files-create-volumes.md#best-practice) about NFS versions.
- ![Screenshot of NFS protocol for selection.](../media/azure-netapp-files/azure-netapp-files-quickstart-protocol-nfs.png)
+ ![Screenshot of NFS protocol for selection.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-quickstart-protocol-nfs.png)
5. Select **Review + create** to display information for the volume you are creating.
6. Select **Create** to create the volume. The created volume appears in the Volumes blade.
- ![Screenshot of volume creation confirmation.](../media/azure-netapp-files/azure-netapp-files-create-volume-created.png)
+ ![Screenshot of volume creation confirmation.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-create-volume-created.png)
# [PowerShell](#tab/azure-powershell)
When you are done and if you want to, you can delete the resource group. The act
2. In the list of resource groups, select the resource group (myRG1) that you want to delete.
- ![Screenshot of the resource groups menu.](../media/azure-netapp-files/azure-netapp-files-azure-navigate-to-resource-groups.png)
+ ![Screenshot of the resource groups menu.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-azure-navigate-to-resource-groups.png)
3. In the resource group page, select **Delete resource group**.
- ![Screenshot that highlights the Delete resource group button.](../media/azure-netapp-files/azure-netapp-files-azure-delete-resource-group.png)
+ ![Screenshot that highlights the Delete resource group button.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-azure-delete-resource-group.png)
   A window opens and displays a warning about the resources that will be deleted with the resource group.

4. Enter the name of the resource group (myRG1) to confirm that you want to permanently delete the resource group and all resources in it, and then select **Delete**.
- ![Screenshot showing confirmation of deleting resource group.](../media/azure-netapp-files/azure-netapp-files-azure-confirm-resource-group-deletion.png)
+ ![Screenshot showing confirmation of deleting resource group.](./media/azure-netapp-files-quickstart-set-up-account-create-volumes/azure-netapp-files-azure-confirm-resource-group-deletion.png)
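The same cleanup can be done from the Azure CLI, which deletes the resource group and everything in it. A one-line sketch using the quickstart's resource group name:

```bash
# Delete the quickstart resource group and all resources it contains (irreversible).
az group delete --name myRG1 --yes --no-wait
```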
# [PowerShell](#tab/azure-powershell)
azure-netapp-files Azure Netapp Files Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-register.md
To use the Azure NetApp Files service, you need to register the NetApp Resource
1. From the Azure portal, click the Azure Cloud Shell icon on the upper right-hand corner:
- ![Azure Cloud Shell icon](../media/azure-netapp-files/azure-netapp-files-azure-cloud-shell.png)
+ ![Azure Cloud Shell icon](./media/azure-netapp-files-register/azure-netapp-files-azure-cloud-shell.png)
2. If you have multiple subscriptions on your Azure account, select the one that you want to configure for Azure NetApp Files:
To use the Azure NetApp Files service, you need to register the NetApp Resource
6. In the Subscriptions blade, click your subscription ID.
7. In the settings of the subscription, click **Resource providers** to verify that Microsoft.NetApp Provider indicates the Registered status:
- ![Registered Microsoft.NetApp](../media/azure-netapp-files/azure-netapp-files-registered-resource-providers.png)
+ ![Registered Microsoft.NetApp](./media/azure-netapp-files-register/azure-netapp-files-registered-resource-providers.png)
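The same registration and verification can be done without the portal. A minimal Azure CLI sketch:

```bash
# Register the NetApp Resource Provider for the selected subscription.
az provider register --namespace Microsoft.NetApp

# Check the registration status; it should eventually report "Registered".
az provider show --namespace Microsoft.NetApp --query registrationState --output tsv
```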
## Next steps
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
1. From the NetApp Account view, go to **Capacity pools**, and select the capacity pool that you want to resize.
2. Right-click the capacity pool name or select the "…" icon at the end of the capacity pool row to display the context menu. Select **Resize**.
- ![Screenshot that shows pool context menu.](../media/azure-netapp-files/resize-pool-context-menu.png)
+ ![Screenshot that shows pool context menu.](./media/azure-netapp-files-resize-capacity-pools-or-volumes/resize-pool-context-menu.png)
3. In the Resize pool window, specify the pool size. Select **OK**.
- ![Screenshot that shows Resize pool window.](../media/azure-netapp-files/resize-pool-window.png)
+ ![Screenshot that shows Resize pool window.](./media/azure-netapp-files-resize-capacity-pools-or-volumes/resize-pool-window.png)
## Resize a volume using the Azure portal
You can change the size of a volume as necessary. A volume's capacity consumptio
1. From the NetApp Account view, go to **Volumes**, and select the volume that you want to resize.
2. Right-click the volume name or select the "…" icon at the end of the volume's row to display the context menu. Select **Resize**.
- ![Screenshot that shows volume context menu.](../media/azure-netapp-files/resize-volume-context-menu.png)
+ ![Screenshot that shows volume context menu.](./media/azure-netapp-files-resize-capacity-pools-or-volumes/resize-volume-context-menu.png)
3. In the Update volume quota window, specify the quota for the volume. Select **OK**.
- ![Screenshot that shows Update Volume Quota window.](../media/azure-netapp-files/resize-volume-quota-window.png)
+ ![Screenshot that shows Update Volume Quota window.](./media/azure-netapp-files-resize-capacity-pools-or-volumes/resize-volume-quota-window.png)
## Resizing the capacity pool or a volume using Azure CLI
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
You can create an Azure support request to increase the adjustable limits from t
2. For **Subscription**, select your subscription.
3. For **Quota Type**, select **Storage: Azure NetApp Files limits**.
- ![Screenshot that shows the Problem Description tab.](../media/azure-netapp-files/support-problem-descriptions.png)
+ ![Screenshot that shows the Problem Description tab.](./media/shared/support-problem-descriptions.png)
3. Under the **Additional details** tab, select **Enter details** in the Request Details field.
- ![Screenshot that shows the Details tab and the Enter Details field.](../media/azure-netapp-files/quota-additional-details.png)
+ ![Screenshot that shows the Details tab and the Enter Details field.](./media/shared/quota-additional-details.png)
4. To request a limit increase, provide the following information in the Quota Details window that appears:
   1. In **Quota Type**, select the type of resource you want to increase.
You can create an Azure support request to increase the adjustable limits from t
3. Enter a value to request an increase for the quota type you specified.
- ![Screenshot that shows how to display and request increase for regional quota.](../media/azure-netapp-files/quota-details-regional-request.png)
+ ![Screenshot that shows how to display and request increase for regional quota.](./media/azure-netapp-files-resource-limits/quota-details-regional-request.png)
5. Select **Save and continue**. Select **Review + create** to create the request.
azure-netapp-files Azure Netapp Files Service Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md
The throughput limit for a volume is determined by the combination of the follow
The following diagram shows throughput limit examples of volumes in an auto QoS capacity pool:
-![Service level illustration](../media/azure-netapp-files/azure-netapp-files-service-levels.png)
+![Service level illustration](./media/azure-netapp-files-service-levels/azure-netapp-files-service-levels.png)
* In Example 1, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 2 TiB of quota will be assigned a throughput limit of 128 MiB/s (2 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
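In an auto QoS pool, the throughput limit is simply the volume quota in TiB multiplied by the service level's throughput per TiB (16 MiB/s for Standard, 64 MiB/s for Premium, and 128 MiB/s for Ultra). Example 1 can be reproduced with a quick shell calculation:

```bash
# Throughput limit = volume quota (TiB) x MiB/s per TiB for the service level.
QUOTA_TIB=2
echo "Standard: $((QUOTA_TIB * 16)) MiB/s"
echo "Premium:  $((QUOTA_TIB * 64)) MiB/s"   # matches Example 1: 128 MiB/s
echo "Ultra:    $((QUOTA_TIB * 128)) MiB/s"
```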
For example, for an SAP HANA system, this capacity pool can be used to create th
The following diagram illustrates the scenarios for the SAP HANA volumes:
-![QoS SAP HANA volume scenarios](../media/azure-netapp-files/qos-sap-hana-volume-scenarios.png)
+![QoS SAP HANA volume scenarios](./media/azure-netapp-files-service-levels/qos-sap-hana-volume-scenarios.png)
## Next steps
azure-netapp-files Azure Netapp Files Set Up Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md
Creating a capacity pool enables you to create volumes within it.
1. Go to the management blade for your NetApp account, and then, from the navigation pane, click **Capacity pools**.
- ![Navigate to capacity pool](../media/azure-netapp-files/azure-netapp-files-navigate-to-capacity-pool.png)
+ ![Navigate to capacity pool](./media/azure-netapp-files-set-up-capacity-pool/azure-netapp-files-navigate-to-capacity-pool.png)
2. Select **+ Add pools** to create a new capacity pool. The New Capacity Pool window appears.
Creating a capacity pool enables you to create volumes within it.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot showing the New Capacity Pool window.":::
+ :::image type="content" source="./media/shared/azure-netapp-files-new-capacity-pool.png" alt-text="Screenshot showing the New Capacity Pool window.":::
4. Select **Create**.
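A capacity pool can also be created with the Azure CLI. The following is a minimal sketch with placeholder names; note that the unit expected by `--size` (TiB versus bytes) has differed across CLI versions, so confirm with `az netappfiles pool create --help`.

```bash
# Create a 4 TiB Premium capacity pool under an existing NetApp account.
az netappfiles pool create \
  --resource-group myRG \
  --account-name myaccount1 \
  --name mypool1 \
  --location eastus \
  --service-level Premium \
  --size 4
```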
azure-netapp-files Azure Netapp Files Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-smb-performance.md
With SMB Multichannel disabled on the client, pure 4 KiB read and write tests we
The command `netstat -na | findstr 445` proved that additional connections were established with increments from `1` to `4` to `8` and to `16`. Four CPU cores were fully utilized for SMB during each test, as confirmed by the perfmon `Per Processor Network Activity Cycles` statistic (not included in this article.)
-![Chart that shows random I/O comparison of SMB Multichannel.](../media/azure-netapp-files/azure-netapp-files-random-io-tests.png)
+![Chart that shows random I/O comparison of SMB Multichannel.](./media/azure-netapp-files-smb-performance/azure-netapp-files-random-io-tests.png)
The Azure virtual machine does not affect SMB (nor NFS) storage I/O limits. As shown in the following chart, the D32ds instance type has a limited rate of 308,000 for cached storage IOPS and 51,200 for uncached storage IOPS. However, the graph above shows significantly more I/O over SMB.
-![Chart that shows random I/O comparison test.](../media/azure-netapp-files/azure-netapp-files-random-io-tests-list.png)
+![Chart that shows random I/O comparison test.](./media/azure-netapp-files-smb-performance/azure-netapp-files-random-io-tests-list.png)
#### Sequential IO

Tests similar to the random I/O tests described previously were performed with 64-KiB sequential I/O. Although the increases in client connection count per RSS network interface beyond 4 had no noticeable effect on random I/O, the same does not apply to sequential I/O. As the following graph shows, each increase is associated with a corresponding increase in read throughput. Write throughput remained flat due to network bandwidth restrictions placed by Azure for each instance type/size.
-![Chart that shows throughput test comparison.](../media/azure-netapp-files/azure-netapp-files-sequential-io-tests.png)
+![Chart that shows throughput test comparison.](./media/azure-netapp-files-smb-performance/azure-netapp-files-sequential-io-tests.png)
Azure places network rate limits on each virtual machine type/size. The rate limit is imposed on outbound traffic only. The number of NICs present on a virtual machine has no bearing on the total amount of bandwidth available to the machine. For example, the D32ds instance type has an imposed network limit of 16,000 Mbps (2,000 MiB/s). As the sequential graph above shows, the limit affects the outbound traffic (writes) but not multichannel reads.
-![Chart that shows sequential I/O comparison test.](../media/azure-netapp-files/azure-netapp-files-sequential-io-tests-list.png)
+![Chart that shows sequential I/O comparison test.](./media/azure-netapp-files-smb-performance/azure-netapp-files-sequential-io-tests-list.png)
## SMB Signing
SMB Signing is supported for all SMB protocol versions that are supported by Azu
SMB Signing has a deleterious effect upon SMB performance. Among other potential causes of the performance degradation, the digital signing of each packet consumes additional client-side CPU as the perfmon output below shows. In this case, Core 0 appears responsible for SMB, including SMB Signing. A comparison with the non-multichannel sequential read throughput numbers in the previous section shows that SMB Signing reduces overall throughput from 875 MiB/s to approximately 250 MiB/s.
-![Chart that shows SMB Signing performance impact.](../media/azure-netapp-files/azure-netapp-files-smb-signing-performance.png)
+![Chart that shows SMB Signing performance impact.](./media/azure-netapp-files-smb-performance/azure-netapp-files-smb-signing-performance.png)
## Performance for a single instance with a 1-TB dataset
To provide more detailed insight into workloads with read/write mixes, the follo
The following chart shows the results for 4k random I/O, with a single VM instance and a read/write mix at 10% intervals:
-![Chart that shows Windows 2019 standard _D32ds_v4 4K random IO test.](../media/azure-netapp-files/smb-performance-standard-4k-random-io.png)
+![Chart that shows Windows 2019 standard _D32ds_v4 4K random IO test.](./media/azure-netapp-files-smb-performance/smb-performance-standard-4k-random-io.png)
The following chart shows the results for sequential I/O:
-![Chart that shows Windows 2019 standard _D32ds_v4 64K sequential throughput.](../media/azure-netapp-files/smb-performance-standard-64k-throughput.png)
+![Chart that shows Windows 2019 standard _D32ds_v4 64K sequential throughput.](./media/azure-netapp-files-smb-performance/smb-performance-standard-64k-throughput.png)
## Performance when scaling out using 5 VMs with a 1-TB dataset
These tests with 5 VMs use the same testing environment as the single VM, with e
The following chart shows the results for random I/O:
-![Chart that shows Windows 2019 standard _D32ds_v4 4K 5-instance randio IO test.](../media/azure-netapp-files/smb-performance-standard-4k-random-io-5-instances.png)
+![Chart that shows Windows 2019 standard _D32ds_v4 4K 5-instance randio IO test.](./media/azure-netapp-files-smb-performance/smb-performance-standard-4k-random-io-5-instances.png)
The following chart shows the results for sequential I/O:
-![Chart that shows Windows 2019 standard _D32ds_v4 64K 5-instance sequential throughput.](../media/azure-netapp-files/smb-performance-standard-64k-throughput-5-instances.png)
+![Chart that shows Windows 2019 standard _D32ds_v4 64K 5-instance sequential throughput.](./media/azure-netapp-files-smb-performance/smb-performance-standard-64k-throughput-5-instances.png)
## How to monitor Hyper-V ethernet adapters
One strategy used in testing with FIO is to set `numjobs=16`. Doing so forks eac
You can check for activity on each of the adapters in Windows Performance Monitor by selecting **Performance Monitor > Add Counters > Network Interface > Microsoft Hyper-V Network Adapter**.
-![Screenshot that shows Performance Monitor Add Counter interface.](../media/azure-netapp-files/smb-performance-performance-monitor-add-counter.png)
+![Screenshot that shows Performance Monitor Add Counter interface.](./media/azure-netapp-files-smb-performance/smb-performance-performance-monitor-add-counter.png)
After you have data traffic running in your volumes, you can monitor your adapters in Windows Performance Monitor. If you do not use all of these 16 virtual adapters, you might not be maximizing your network bandwidth capacity.
-![Screenshot that shows Performance Monitor output.](../media/azure-netapp-files/smb-performance-performance-monitor-output.png)
+![Screenshot that shows Performance Monitor output.](./media/azure-netapp-files-smb-performance/smb-performance-performance-monitor-output.png)
## SMB encryption
With SMB Multichannel enabled, an SMB3 client establishes multiple TCP connectio
To see if your Azure virtual machine NICs support RSS, run the command `Get-SmbClientNetworkInterface` as follows and check the field `RSS Capable`:
-![Screenshot that shows RSS output for Azure virtual machine.](../media/azure-netapp-files/azure-netapp-files-formance-rss-support.png)
+![Screenshot that shows RSS output for Azure virtual machine.](./media/azure-netapp-files-smb-performance/azure-netapp-files-formance-rss-support.png)
## Multiple NICs on SMB clients
You should not configure multiple NICs on your client for SMB. The SMB client wi
As the output of `Get-SmbClientNetworkInterace` below shows, the virtual machine has 2 network interfaces--15 and 12. As shown under the following command `Get-SmbMultichannelConnection`, even though there are two RSS-capable NICS, only interface 12 is used in connection with the SMB share; interface 15 is not in use.
-![Screeshot that shows output for RSS-capable NICS.](../media/azure-netapp-files/azure-netapp-files-rss-capable-nics.png)
+![Screeshot that shows output for RSS-capable NICS.](./media/azure-netapp-files-smb-performance/azure-netapp-files-rss-capable-nics.png)
## Next steps
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
Azure NetApp FilesΓÇÖ integration with Azure native services like Azure Kubernet
The following diagram depicts the categorization of reference architectures, blueprints and solutions on this page as laid out in the above introduction: **Azure NetApp Files key use cases** In summary, Azure NetApp Files is a versatile and scalable storage service that provides an ideal platform for migrating various workload categories, running specialized workloads, and integrating with Azure native services. Azure NetApp FilesΓÇÖ high-performance, security, and scalability features make it a reliable choice for businesses looking to run their applications and workloads in Azure.
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
Before creating a volume in Azure NetApp Files, you must purchase and set up a p
## <a name="conceptual_diagram_of_storage_hierarchy"></a>Conceptual diagram of storage hierarchy The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes. ## <a name="azure_netapp_files_account"></a>NetApp accounts
azure-netapp-files Backup Configure Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-manual.md
If you havenΓÇÖt done so, enable the backup functionality for the volume before
3. In the Configure Backup page, toggle the **Enabled** setting to **On**. 4. Select **OK**.
-![Screenshot that shows the Enabled setting of Configure Backups window.](../media/azure-netapp-files/backup-configure-enabled.png)
+![Screenshot that shows the Enabled setting of Configure Backups window.](./media/shared/backup-configure-enabled.png)
## Create a manual backup for a volume
If you havenΓÇÖt done so, enable the backup functionality for the volume before
When you create a manual backup, a snapshot is also created on the volume using the same name you specified for the backup. This snapshot represents the current state of the active file system. It is transferred to Azure storage. Once the backup completes, the manual backup entry appears in the list of backups for the volume.
-![Screenshot that shows the New Backup window.](../media/azure-netapp-files/backup-new.png)
+![Screenshot that shows the New Backup window.](./media/backup-configure-manual/backup-new.png)
## Next steps
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-policy-based.md
To enable a policy-based (scheduled) backup:
2. Select your Azure NetApp Files account. 3. Select **Backups**.
- <!-- :::image type="content" source="../media/azure-netapp-files/backup-navigate.png" alt-text="Screenshot that shows how to navigate to Backups option." lightbox="../media/azure-netapp-files/backup-navigate.png"::: -->
+ <!-- :::image type="content" source="./media/backup-configure-policy-based/backup-navigate.png" alt-text="Screenshot that shows how to navigate to Backups option." lightbox="./media/backup-configure-policy-based/backup-navigate.png"::: -->
4. Select **Backup Policies**. 5. Select **Add**.
To enable a policy-based (scheduled) backup:
The minimum value for **Daily Backups to Keep** is 2.
- :::image type="content" source="../media/azure-netapp-files/backup-policy-window-daily.png" alt-text="Screenshot that shows the Backup Policy window." lightbox="../media/azure-netapp-files/backup-policy-window-daily.png":::
+ :::image type="content" source="./media/backup-configure-policy-based/backup-policy-window-daily.png" alt-text="Screenshot that shows the Backup Policy window." lightbox="./media/backup-configure-policy-based/backup-policy-window-daily.png":::
### Example of a valid configuration
To enable the backup functionality for a volume:
The Vault information is prepopulated.
- :::image type="content" source="../media/azure-netapp-files/backup-configure-enabled.png" alt-text="Screenshot showing Configure Backups window." lightbox="../media/azure-netapp-files/backup-configure-enabled.png":::
+ :::image type="content" source="./media/shared/backup-configure-enabled.png" alt-text="Screenshot showing Configure Backups window." lightbox="./media/shared/backup-configure-enabled.png":::
## Next steps
azure-netapp-files Backup Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-delete.md
If you need to delete backups to free up space, select an older backup from the
2. Navigate to **Backups**. 3. From the backup list, select the backup to delete. Click the three dots (`…`) to the right of the backup, then click **Delete** from the Action menu.
- ![Screenshot that shows the Delete menu for backups.](../media/azure-netapp-files/backup-action-menu-delete.png)
+ ![Screenshot that shows the Delete menu for backups.](./media/backup-delete/backup-action-menu-delete.png)
## Next steps
azure-netapp-files Backup Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-disable.md
If a volume is deleted but the backup policy wasnΓÇÖt disabled before the volume
3. Select **Configure**. 4. In the Configure Backups page, toggle the **Enabled** setting to **Off**. Enter the volume name to confirm, and click **OK**.
- ![Screenshot that shows the Restore to with Configure Backups window with backup disabled.](../media/azure-netapp-files/backup-configure-backups-disable.png)
+ ![Screenshot that shows the Restore to with Configure Backups window with backup disabled.](./media/backup-disable/backup-configure-backups-disable.png)
## Next steps
azure-netapp-files Backup Manage Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-manage-policies.md
To modify the backup policy settings:
2. Select **Backup Policies** then select the three dots (`…`) to the right of a backup policy. Select **Edit**.
- :::image type="content" source="../media/azure-netapp-files/backup-policies-edit.png" alt-text="Screenshot that shows context sensitive menu of Backup Policies." lightbox="../media/azure-netapp-files/backup-policies-edit.png":::
+ :::image type="content" source="./media/backup-manage-policies/backup-policies-edit.png" alt-text="Screenshot that shows context sensitive menu of Backup Policies." lightbox="./media/backup-manage-policies/backup-policies-edit.png":::
3. In the Modify Backup Policy window, update the number of backups you want to keep for daily, weekly, and monthly backups. Enter the backup policy name to confirm the action. Click **Save**.
- :::image type="content" source="../media/azure-netapp-files/backup-modify-policy.png" alt-text="Screenshot showing the Modify Backup Policy window." lightbox="../media/azure-netapp-files/backup-modify-policy.png":::
+ :::image type="content" source="./media/backup-manage-policies/backup-modify-policy.png" alt-text="Screenshot showing the Modify Backup Policy window." lightbox="./media/backup-manage-policies/backup-modify-policy.png":::
> [!NOTE] > After backups are enabled and have taken effect for the scheduled frequency, you cannot change the backup retention count to `0`. A minimum number of `1` retention is required for the backup policy. See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) for details.
A backup policy can be suspended so that it does not perform any new backup oper
1. Toggle **Policy State** to **Disabled**, enter the policy name to confirm, and click **Save**.
- ![Screenshot that shows the Modify Backup Policy window with Policy State disabled.](../media/azure-netapp-files/backup-modify-policy-disabled.png)
+ ![Screenshot that shows the Modify Backup Policy window with Policy State disabled.](./media/backup-manage-policies/backup-modify-policy-disabled.png)
### Suspend a backup policy for a specific volume
A backup policy can be suspended so that it does not perform any new backup oper
3. Select **Configure**. 4. In the Configure Backups page, toggle **Policy State** to **Suspend**, enter the volume name to confirm, and click **OK**.
- ![Screenshot that shows the Configure Backups window with the Suspend Policy State.](../media/azure-netapp-files/backup-modify-policy-suspend.png)
+ ![Screenshot that shows the Configure Backups window with the Suspend Policy State.](./media/backup-manage-policies/backup-modify-policy-suspend.png)
## Next steps
azure-netapp-files Backup Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md
See [Requirements and considerations for Azure NetApp Files backup](backup-requi
2. From the backup list, select the backup to restore. Select the three dots (`…`) to the right of the backup, then select **Restore to new volume** from the Action menu.
- :::image type="content" source="../media/azure-netapp-files/backup-restore-new-volume.png" alt-text="Screenshot of selecting restore backup to new volume." lightbox="../media/azure-netapp-files/backup-restore-new-volume.png":::
+ :::image type="content" source="./media/backup-restore-new-volume/backup-restore-new-volume.png" alt-text="Screenshot of selecting restore backup to new volume." lightbox="./media/backup-restore-new-volume/backup-restore-new-volume.png":::
3. In the Create a Volume page that appears, provide information for the fields in the page as applicable, and select **Review + Create** to begin restoring the backup to a new volume.
See [Requirements and considerations for Azure NetApp Files backup](backup-requi
* The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation fails.
- ![Screenshot that shows the Create a Volume page.](../media/azure-netapp-files/backup-restore-create-volume.png)
+ ![Screenshot that shows the Create a Volume page.](./media/backup-restore-new-volume/backup-restore-create-volume.png)
4. The Volumes page displays the new volume. In the Volumes page, the **Originated from** field identifies the name of the snapshot used to create the volume.
azure-netapp-files Backup Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-search.md
If a volume is deleted, its backups are still retained. The backups are listed i
A partial search is supported; you donΓÇÖt have to specify the entire backup name. The search filters the backups based on the search string.
- :::image type="content" source="../media/azure-netapp-files/backup-search-vault.png" alt-text="Screenshot that shows a list of backups in a vault." lightbox="../media/azure-netapp-files/backup-search-vault.png":::
+ :::image type="content" source="./media/backup-search/backup-search-vault.png" alt-text="Screenshot that shows a list of backups in a vault." lightbox="./media/backup-search/backup-search-vault.png":::
## Search backups at volume level
You can display and search backups at the volume level:
A partial search is supported; you donΓÇÖt have to specify the entire backup name. The search filters the backups based on the search string.
- :::image type="content" source="../media/azure-netapp-files/backup-search-volume-level.png" alt-text="Screenshot that shows a list of backup for a volume." lightbox="../media/azure-netapp-files/backup-search-volume-level.png":::
+ :::image type="content" source="./media/backup-search/backup-search-volume-level.png" alt-text="Screenshot that shows a list of backup for a volume." lightbox="./media/backup-search/backup-search-volume-level.png":::
## Next steps
azure-netapp-files Backup Vault Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-vault-manage.md
Although it's possible to create multiple backup vaults in your Azure NetApp Fil
1. Select **+ Add Backup Vault**. Assign a name to your backup vault then select **Create**.
- :::image type="content" source="../media/azure-netapp-files/backup-vault-create.png" alt-text="Screenshot of backup vault creation." lightbox="../media/azure-netapp-files/backup-vault-create.png":::
+ :::image type="content" source="./media/backup-vault-manage/backup-vault-create.png" alt-text="Screenshot of backup vault creation." lightbox="./media/backup-vault-manage/backup-vault-create.png":::
## Migrate backups to a backup vault
If you have existing backups, you must migrate them to a backup vault before you
If there are backups from volumes that have been deleted that you want to migrate, select **Include backups from Deleted Volumes**. This option will only be enabled if backups from deleted volumes are present.
- :::image type="content" source="../media/azure-netapp-files/backup-vault-assign.png" alt-text="Screenshot of backup vault assignment." lightbox="../media/azure-netapp-files/backup-vault-assign.png":::
+ :::image type="content" source="./media/backup-vault-manage/backup-vault-assign.png" alt-text="Screenshot of backup vault assignment." lightbox="./media/backup-vault-manage/backup-vault-assign.png":::
1. Navigate to the **Backup Vault** menu to view and manage your backups.
If you have existing backups, you must migrate them to a backup vault before you
1. Navigate to the **Backup Vault** menu. 1. Identify the backup vault you want to delete and select the three dots `...` next to the backup's name. Select **Delete**.
- :::image type="content" source="../media/azure-netapp-files/backup-vault-delete.png" alt-text="Screenshot of deleting a backup vault." lightbox="../media/azure-netapp-files/backup-vault-delete.png":::
+ :::image type="content" source="./media/backup-vault-manage/backup-vault-delete.png" alt-text="Screenshot of deleting a backup vault." lightbox="./media/backup-vault-manage/backup-vault-delete.png":::
## Next steps
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Customer-managed keys for Azure NetApp Files volume encryption enable you to use
The following diagram demonstrates how customer-managed keys work with Azure NetApp Files: 1. Azure NetApp Files grants permissions to encryption keys to a managed identity. The managed identity is either a user-assigned managed identity that you create and manage or a system-assigned managed identity associated with the NetApp account. 2. You configure encryption with a customer-managed key for the NetApp account.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
The **Encryption** page enables you to manage encryption settings for your NetApp account. It includes an option to let you set your NetApp account to use your own encryption key, which is stored in [Azure Key Vault](../key-vault/general/basic-concepts.md). This setting provides a system-assigned identity to the NetApp account, and it adds an access policy for the identity with the required key permissions.
- :::image type="content" source="../media/azure-netapp-files/encryption-menu.png" alt-text="Screenshot of the encryption menu." lightbox="../media/azure-netapp-files/encryption-menu.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/encryption-menu.png" alt-text="Screenshot of the encryption menu." lightbox="./media/configure-customer-managed-keys/encryption-menu.png":::
1. When you set your NetApp account to use customer-managed key, you have two ways to specify the Key URI: * The **Select from key vault** option allows you to select a key vault and a key.
- :::image type="content" source="../media/azure-netapp-files/select-key.png" alt-text="Screenshot of the select a key interface." lightbox="../media/azure-netapp-files/select-key.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/select-key.png" alt-text="Screenshot of the select a key interface." lightbox="./media/configure-customer-managed-keys/select-key.png":::
* The **Enter key URI** option allows you to enter manually the key URI.
- :::image type="content" source="../media/azure-netapp-files/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="../media/azure-netapp-files/key-enter-uri.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="./media/configure-customer-managed-keys/key-enter-uri.png":::
1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, both options are available. Otherwise, only the user-assigned option is available. * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically with the following process: A system-assigned identity is added to your NetApp account. An access policy is to be created on your Azure Key Vault with key permissions Get, Encrypt, Decrypt.
- :::image type="content" source="../media/azure-netapp-files/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="../media/azure-netapp-files/encryption-system-assigned.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="./media/configure-customer-managed-keys/encryption-system-assigned.png":::
* If you choose **User-assigned**, you must select an identity. Choose **Select an identity** to open a context pane where you select a user-assigned managed identity.
- :::image type="content" source="../media/azure-netapp-files/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="../media/azure-netapp-files/encryption-user-assigned.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="./media/configure-customer-managed-keys/encryption-user-assigned.png":::
If you've configured your Azure Key Vault to use Vault access policy, the Azure portal configures the NetApp account automatically with the following process: The user-assigned identity you select is added to your NetApp account. An access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, Decrypt.
You can use an Azure Key Vault that is configured to use Azure role-based access
1. In your Azure account, navigate to **Key vaults** then **Access policies**. 1. To create an access policy, under **Permission model**, select **Azure role-based access-control**.
- :::image type="content" source="../media/azure-netapp-files/rbac-permission.png" alt-text="Screenshot of access configuration menu." lightbox="../media/azure-netapp-files/rbac-permission.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/rbac-permission.png" alt-text="Screenshot of access configuration menu." lightbox="./media/configure-customer-managed-keys/rbac-permission.png":::
1. When creating the user-assigned role, there are three permissions required for customer-managed keys: 1. `Microsoft.KeyVault/vaults/keys/read` 1. `Microsoft.KeyVault/vaults/keys/encrypt/action`
You can use an Azure Key Vault that is configured to use Azure role-based access
1. Once the custom role is created and available to use with the key vault, you apply it to the user-assigned identity.
- :::image type="content" source="../media/azure-netapp-files/rbac-review-assign.png" alt-text="Screenshot of RBAC review and assign menu." lightbox="../media/azure-netapp-files/rbac-review-assign.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/rbac-review-assign.png" alt-text="Screenshot of RBAC review and assign menu." lightbox="./media/configure-customer-managed-keys/rbac-review-assign.png":::
## Create an Azure NetApp Files volume using customer-managed keys
You can use an Azure Key Vault that is configured to use Azure role-based access
You must select a key vault private endpoint as well. The dropdown menu displays private endpoints in the selected Virtual network. If there's no private endpoint for your key vault in the selected virtual network, then the dropdown is empty, and you won't be able to proceed. If so, see to [Azure Private Endpoint](../private-link/private-endpoint-overview.md).
- :::image type="content" source="../media/azure-netapp-files/keys-create-volume.png" alt-text="Screenshot of create volume menu." lightbox="../media/azure-netapp-files/keys-create-volume.png":::
+ :::image type="content" source="./media/configure-customer-managed-keys/keys-create-volume.png" alt-text="Screenshot of create volume menu." lightbox="./media/configure-customer-managed-keys/keys-create-volume.png":::
1. Continue to complete the volume creation process. Refer to: * [Create an NFS volume](azure-netapp-files-create-volumes.md)
You can use an Azure Key Vault that is configured to use Azure role-based access
If you have already configured your NetApp account for customer-managed keys and have one or more volumes encrypted with customer-managed keys, you can change the key that is used to encrypt all volumes under the NetApp account. You can select any key that is in the same key vault. Changing key vaults isn't supported. 1. Under your NetApp account, navigate to the **Encryption** menu. Under the **Current key** input field, select the **Rekey** link. 1. In the **Rekey** menu, select one of the available keys from the dropdown menu. The chosen key must be different from the current key. 1. Select **OK** to save. The rekey operation may take several minutes.
azure-netapp-files Configure Kerberos Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-kerberos-encryption.md
The following requirements apply to NFSv4.1 client encryption:
> [!IMPORTANT] > You cannot modify the Kerberos enablement selection after the volume is created.
- ![Create NFSv4.1 Kerberos volume](../media/azure-netapp-files/create-kerberos-volume.png)
+ ![Create NFSv4.1 Kerberos volume](./media/configure-kerberos-encryption/create-kerberos-volume.png)
2. Select **Export Policy** to match the desired level of access and security option (Kerberos 5, Kerberos 5i, or Kerberos 5p) for the volume.
The following requirements apply to NFSv4.1 client encryption:
AD Server and KDC IP can be the same server. This information is used to create the SPN computer account used by Azure NetApp Files. After the computer account is created, Azure NetApp Files will use DNS Server records to locate additional KDC servers as needed.
- ![Kerberos Realm](../media/azure-netapp-files/kerberos-realm.png)
+ ![Kerberos Realm](./media/configure-kerberos-encryption/kerberos-realm.png)
3. Click **Join** to save the configuration.
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
For example:
- ![Mount instructions for Kerberos volumes](../media/azure-netapp-files/mount-instructions-kerberos-volume.png)
+ ![Mount instructions for Kerberos volumes](./media/configure-kerberos-encryption/mount-instructions-kerberos-volume.png)
3. Create the directory (mount point) for the new volume.
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
The following information is passed to the server in the query:
>[!NOTE] >If the POSIX attributes are not set up correctly, user and group lookup operations may fail, and users may be squashed to `nobody` when accessing NFS volumes.
- ![Screenshot of Multi-valued String Editor that shows multiple values specified for Object Class.](../media/azure-netapp-files/multi-valued-string-editor.png)
+ ![Screenshot of Multi-valued String Editor that shows multiple values specified for Object Class.](./media/shared/multi-valued-string-editor.png)
You can manage POSIX attributes by using the Active Directory Users and Computers MMC snap-in. The following example shows the Active Directory Attribute Editor. See [Access Active Directory Attribute Editor](create-volumes-dual-protocol.md#access-active-directory-attribute-editor) for details.
- ![Active Directory Attribute Editor](../media/azure-netapp-files/active-directory-attribute-editor.png)
+ ![Active Directory Attribute Editor](./media/shared/active-directory-attribute-editor.png)
4. If you want to configure an LDAP-integrated NFSv4.1 Linux client, see [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md).
The following information is passed to the server in the query:
6. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
- ![Screenshot that shows Create a Volume page with LDAP option.](../media/azure-netapp-files/create-nfs-ldap.png)
+ ![Screenshot that shows Create a Volume page with LDAP option.](./media/configure-ldap-extended-groups/create-nfs-ldap.png)
7. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows: 1. Select **Active Directory connections**. On an existing Active Directory connection, select the context menu (the three dots `…`), and select **Edit**. 2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option.
- ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
+ ![Screenshot that shows the Allow local NFS users with LDAP option](./media/shared/allow-local-nfs-users-with-ldap.png)
8. <a name="ldap-search-scope"></a>Optional - If you have large topologies, and you use the Unix security style with a dual-protocol volume or LDAP with extended groups, you can use the **LDAP Search Scope** option to avoid "access denied" errors on Linux clients for Azure NetApp Files.
The following information is passed to the server in the query:
* If a user is a member of more than 256 groups, only 256 groups will be listed. * Refer to [errors for LDAP volumes](troubleshoot-volumes.md#errors-for-ldap-volumes) if you run into errors.
- ![Screenshot that shows options related to LDAP Search Scope](../media/azure-netapp-files/ldap-search-scope.png)
+ ![Screenshot that shows options related to LDAP Search Scope](./media/configure-ldap-extended-groups/ldap-search-scope.png)
## Next steps
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md
If you do not have a root CA certificate, you need to generate one and export it
3. Export the root CA certificate. Root CA certificates can be exported from the Personal or Trusted Root Certification Authorities directory, as shown in the following examples:
- ![screenshot that shows personal certificates](../media/azure-netapp-files/personal-certificates.png)
- ![screenshot that shows trusted root certification authorities](../media/azure-netapp-files/trusted-root-certification-authorities.png)
+ ![screenshot that shows personal certificates](./media/configure-ldap-over-tls/personal-certificates.png)
+ ![screenshot that shows trusted root certification authorities](./media/configure-ldap-over-tls/trusted-root-certification-authorities.png)
Ensure that the certificate is exported in the Base-64 encoded X.509 (.CER) format:
- ![Certificate Export Wizard](../media/azure-netapp-files/certificate-export-wizard.png)
+ ![Certificate Export Wizard](./media/configure-ldap-over-tls/certificate-export-wizard.png)
## Enable LDAP over TLS and upload root CA certificate
If you do not have a root CA certificate, you need to generate one and export it
2. In the **Join Active Directory** or **Edit Active Directory** window that appears, select the **LDAP over TLS** checkbox to enable LDAP over TLS for the volume. Then select **Server root CA Certificate** and upload the [generated root CA certificate](#generate-and-export-root-ca-certificate) to use for LDAP over TLS.
- ![Screenshot that shows the LDAP over TLS option](../media/azure-netapp-files/ldap-over-tls-option.png)
+ ![Screenshot that shows the LDAP over TLS option](./media/configure-ldap-over-tls/ldap-over-tls-option.png)
Ensure that the certificate authority name can be resolved by DNS. This name is the "Issued By" or "Issuer" field on the certificate:
- ![Screenshot that shows certificate information](../media/azure-netapp-files/certificate-information.png)
+ ![Screenshot that shows certificate information](./media/configure-ldap-over-tls/certificate-information.png)
If you uploaded an invalid certificate, and you have existing AD configurations, SMB volumes, or Kerberos volumes, an error similar to the following occurs:
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
This section shows you how to set the network features option when you create a
The following screenshot shows a volume creation example for a region that supports the Standard network features capabilities:
- ![Screenshot that shows volume creation for Standard network features.](../media/azure-netapp-files/network-features-create-standard.png)
+ ![Screenshot that shows volume creation for Standard network features.](./media/configure-network-features/network-features-create-standard.png)
The following screenshot shows a volume creation example for a region that does *not* support the Standard network features capabilities:
- ![Screenshot that shows volume creation for Basic network features.](../media/azure-netapp-files/network-features-create-basic.png)
+ ![Screenshot that shows volume creation for Basic network features.](./media/configure-network-features/network-features-create-basic.png)
2. Before completing the volume creation process, you can display the specified network features setting in the **Review + Create** tab of the Create a Volume screen. Select **Create** to complete the volume creation.
- ![Screenshot that shows the Review and Create tab of volume creation.](../media/azure-netapp-files/network-features-review-create-tab.png)
+ ![Screenshot that shows the Review and Create tab of volume creation.](./media/configure-network-features/network-features-review-create-tab.png)
3. You can select **Volumes** to display the network features setting for each volume:
- [ ![Screenshot that shows the Volumes page displaying the network features setting.](../media/azure-netapp-files/network-features-volume-list.png)](../media/azure-netapp-files/network-features-volume-list.png#lightbox)
+ [ ![Screenshot that shows the Volumes page displaying the network features setting.](./media/configure-network-features/network-features-volume-list.png)](./media/configure-network-features/network-features-volume-list.png#lightbox)
## <a name="edit-network-features-option-for-existing-volumes"></a> Edit network features option for existing volumes (preview)
This feature currently doesn't support SDK.
1. Select **Change network features**. 1. The **Edit network features** window displays the volumes that are in the same network sibling set. Confirm whether you want to modify the network features option.
- :::image type="content" source="../media/azure-netapp-files/edit-network-features.png" alt-text="Screenshot showing the Edit Network Features window." lightbox="../media/azure-netapp-files/edit-network-features.png":::
+ :::image type="content" source="./media/configure-network-features/edit-network-features.png" alt-text="Screenshot showing the Edit Network Features window." lightbox="./media/configure-network-features/edit-network-features.png":::
### Update Terraform-managed Azure NetApp Files volume from Basic to Standard
Updating the network features of your volume alters the underlying network sibli
The name of the state file in your Terraform module is `terraform.tfstate`. It contains the arguments and their values of all deployed resources in the module. Below is highlighted the `network_features` argument with value ΓÇ£BasicΓÇ¥ for an Azure NetApp Files Volume in a `terraform.tfstate` example file: Do _not_ manually update the `terraform.tfstate` file. Likewise, the `network_features` argument in the `*.tf` and `*.tf.json` configuration files should also not be updated until you follow the steps outlined here as this would cause a mismatch in the arguments of the remote volume and the local configuration file representing that remote volume. When Terraform detects a mismatch between the arguments of remote resources and local configuration files representing those remote resources, Terraform can destroy the remote resources and reprovision them with the arguments in the local configuration files. This can cause data loss in a volume.
Changing the network features for an Azure NetApp Files Volume can impact the ne
1. Select the **Change network features**. ***Do **not** select Save.*** 1. Record the paths of the affected volumes then select **Cancel**. All Terraform configuration files that define these volumes need to be updated, meaning you need to find the Terraform configuration files that define these volumes. The configuration files representing the affected volumes might not be in the same Terraform module.
You must modify the configuration files for each affected volume managed by Terr
1. Locate the affected Terraform-managed volumes configuration files. 1. Add the `ignore_changes = [network_features]` to the volume's `lifecycle` configuration block. If the `lifecycle` block does not exist in that volumeΓÇÖs configuration, add it.
- :::image type="content" source="../media/azure-netapp-files/terraform-lifecycle.png" alt-text="Screenshot of the lifecycle configuration." lightbox="../media/azure-netapp-files/terraform-lifecycle.png":::
+ :::image type="content" source="./media/configure-network-features/terraform-lifecycle.png" alt-text="Screenshot of the lifecycle configuration." lightbox="./media/configure-network-features/terraform-lifecycle.png":::
1. Repeat for each affected Terraform-managed volume.
The `ignore_changes` feature is intended to be used when a resourceΓÇÖs referenc
1. Select the **Change network features**. 1. In the **Action** field, confirm that it reads **Change to Standard**.
- :::image type="content" source="../media/azure-netapp-files/change-network-features-standard.png" alt-text="Screenshot of confirm change of network features." lightbox="../media/azure-netapp-files/change-network-features-standard.png":::
+ :::image type="content" source="./media/configure-network-features/change-network-features-standard.png" alt-text="Screenshot of confirm change of network features." lightbox="./media/configure-network-features/change-network-features-standard.png":::
1. Select **Save**. 1. Wait until you receive a notification that the network features update has completed. In your **Notifications**, the message reads "Successfully updated network features. Network features for network sibling set have successfully updated to ΓÇÿStandardΓÇÖ." 1. In the terminal, run `terraform plan` to view any potential changes. The output should indicate that the infrastructure matches the configuration with a message reading "No changes. Your infrastructure matches the configuration."
- :::image type="content" source="../media/azure-netapp-files/terraform-plan-output.png" alt-text="Screenshot of terraform plan command output." lightbox="../media/azure-netapp-files/terraform-plan-output.png":::
+ :::image type="content" source="./media/configure-network-features/terraform-plan-output.png" alt-text="Screenshot of terraform plan command output." lightbox="./media/configure-network-features/terraform-plan-output.png":::
>[!IMPORTANT] > As a safety precaution, execute `terraform plan` before executing `terraform apply`. The command `terraform plan` allows you to create a ΓÇ£planΓÇ¥ file, which contains the changes to your remote resources. This plan allows you to know if any of your affected volumes will be destroyed by running `terraform apply`.
The `ignore_changes` feature is intended to be used when a resourceΓÇÖs referenc
Observe the change in the value of the `network_features` argument in the `terraform.tfstate` files, which changed from "Basic" to "Standard":
- :::image type="content" source="../media/azure-netapp-files/updated-terraform-module.png" alt-text="Screenshot of updated Terraform module." lightbox="../media/azure-netapp-files/updated-terraform-module.png":::
+ :::image type="content" source="./media/configure-network-features/updated-terraform-module.png" alt-text="Screenshot of updated Terraform module." lightbox="./media/configure-network-features/updated-terraform-module.png":::
#### Update Terraform-managed Azure NetApp Files volumesΓÇÖ configuration file for configuration parity
Once you've update the volumes' network features, you must also modify the `netw
1. In the configuration file, set `network_features` to "Standard" and remove the `ignore_changes = [network_features]` line from the `lifecycle` block.
- :::image type="content" source="../media/azure-netapp-files/terraform-network-features-standard.png" alt-text="Screenshot of Terraform module with Standard network features." lightbox="../media/azure-netapp-files/terraform-network-features-standard.png":::
+ :::image type="content" source="./media/configure-network-features/terraform-network-features-standard.png" alt-text="Screenshot of Terraform module with Standard network features." lightbox="./media/configure-network-features/terraform-network-features-standard.png":::
1. Repeat for each affected Terraform-managed volume. 1. Verify that the updated configuration files accurately represent the configuration of the remote resources by running `terraform plan`. Confirm the output reads "No changes."
azure-netapp-files Configure Unix Permissions Change Ownership Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-unix-permissions-change-ownership-mode.md
The change ownership mode (**`Chown Mode`**) functionality enables you to set th
The following example shows the Create a Volume screen for an NFS volume.
- ![Screenshots that shows the Create a Volume screen for NFS.](../media/azure-netapp-files/unix-permissions-create-nfs-volume.png)
+ ![Screenshots that shows the Create a Volume screen for NFS.](./media/configure-unix-permissions-change-ownership-mode/unix-permissions-create-nfs-volume.png)
2. For existing NFS or dual-protocol volumes, you can set or modify **Unix permissions** and **change ownership mode** as follows: 1. To modify Unix permissions, right-click the **volume**, and select **Edit**. In the Edit window that appears, specify a value for **Unix Permissions**.
- ![Screenshots that shows the Edit screen for Unix permissions.](../media/azure-netapp-files/unix-permissions-edit.png)
+ ![Screenshots that shows the Edit screen for Unix permissions.](./media/configure-unix-permissions-change-ownership-mode/unix-permissions-edit.png)
2. To modify the change ownership mode, click the **volume**, click **Export policy**, then modify the **`Chown Mode`** setting.
- ![Screenshots that shows the Export Policy screen.](../media/azure-netapp-files/chown-mode-edit.png)
+ ![Screenshots that shows the Export Policy screen.](./media/configure-unix-permissions-change-ownership-mode/chown-mode-edit.png)
## Next steps
azure-netapp-files Configure Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-virtual-wan.md
Refer to [What is Azure Virtual WAN?](../virtual-wan/virtual-wan-about.md) to le
The following diagram shows the concept of deploying Azure NetApp Files volume in one or more spokes of a Virtual WAN and accessing the volumes globally. This article will explain how to deploy and access an Azure NetApp Files volume over Virtual WAN.
Deploying Azure NetApp Files volume with Standard network features in a Virtual
This diagram shows routing traffic from on-premises to an Azure NetApp Files volume in a Virtual WAN spoke VNet via a Virtual WAN hub with a VPN gateway and an Azure firewall deployed inside the virtual hub. To learn how to install an Azure Firewall in a Virtual WAN hub, refer [Configure Azure Firewall in a Virtual WAN hub](../virtual-wan/howto-firewall.md).
To force the Azure NetApp Files-bound traffic through Azure Firewall in the Virt
The following image of the Azure portal shows an example virtual hub of effective routes. In the first item, the IP address is listed as 10.2.0.5/32. The static routing entry's destination prefix is `<IP-Azure NetApp Files-Volume>/32`, and the next hop is `Azure-Firewall-in-hub`. > [!IMPORTANT] > Azure NetApp Files mount leverages private IP addresses within a delegated [subnet](azure-netapp-files-network-topologies.md#subnets). The specific IP address entry is required, even if a CIDR to which the Azure NetApp Files volume IP address belongs is pointing to the Azure Firewall as its next hop. For example, 10.2.0.5/32 should be listed even though 10.0.0.0/8 is listed with the Azure Firewall as the next hop.
To identify the private IP address associated with your Azure NetApp Files volum
1. Navigate to the **Volumes** in your Azure NetApp Files subscription. 1. Identify the volume you're looking for. The private IP address associated with an Azure NetApp Files volume is listed as part of the mount path of the volume. ### Edit virtual hub effective routes
You can effect changes to a virtual hub's effective routes by adding routes expl
1. In the virtual hub, navigate to **Route Tables**. 1. Select the route table you want to edit.
- :::image type="content" source="../media/azure-netapp-files/virtual-hub-route-table.png" alt-text="Screenshot of virtual hub route table.":::
+ :::image type="content" source="./media/configure-virtual-wan/virtual-hub-route-table.png" alt-text="Screenshot of virtual hub route table.":::
1. Choose a **Route name** then add the **Destination prefix** and **Next hop**.
- :::image type="content" source="../media/azure-netapp-files/route-table-edit.png" alt-text="Screenshot of route table edits.":::
+ :::image type="content" source="./media/configure-virtual-wan/route-table-edit.png" alt-text="Screenshot of route table edits.":::
1. Save your changes. ## Next steps
azure-netapp-files Convert Nfsv3 Nfsv41 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/convert-nfsv3-nfsv41.md
This section shows you how to convert the NFSv3 volume to NFSv4.1.
2. Select **Edit**. 3. In the Edit window that appears, select **NSFv4.1** in the **Protocol type** pulldown.
- ![screenshot that shows the Edit menu with the Protocol Type field](../media/azure-netapp-files/edit-protocol-type.png)
+ ![screenshot that shows the Edit menu with the Protocol Type field](./media/convert-nfsv3-nfsv41/edit-protocol-type.png)
3. Wait for the conversion operation to complete.
This section shows you how to convert the NFSv4.1 volume to NFSv3.
2. Select **Edit**. 3. In the Edit window that appears, select **NSFv3** in the **Protocol type** pulldown.
- ![screenshot that shows the Edit menu with the Protocol Type field](../media/azure-netapp-files/edit-protocol-type.png)
+ ![screenshot that shows the Edit menu with the Protocol Type field](./media/convert-nfsv3-nfsv41/edit-protocol-type.png)
3. Wait for the conversion operation to complete.
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Azure NetApp Files supports three [service levels](azure-netapp-files-service-le
The following diagram illustrates an application with a volume enabled for cool access. In the initial write, data blocks are assigned a "warm" temperature value (in the diagram, red data blocks) and exist on the "hot" tier. As the data resides on the volume, a temperature scan monitors the activity of each block. When a data block is inactive, the temperature scan decreases the value of the block until it has been inactive for the number of days specified in the cooling period. The cooling period can be between 7 and 183 days; it has a default value of 31 days. Once marked "cold," the tiering scan collects blocks and packages them into 4-MB objects, which are moved to Azure storage fully transparently. To the application and users, those cool blocks still appear online. Tiered data appears to be online and continues to be available to users and applications by transparent and automated retrieval from the cool tier.
This test had a large dataset and ran several days starting the worst-case most-
The following chart shows a test that ran over 2.5 days on the 10-TB working dataset that has been 100% cooled and the buffers cleared (absolute worst-case aged data). ### 64k sequential-read test
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
1. From your NetApp account, select **Active Directory connections**, then select **Join**.
- ![Screenshot showing the Active Directory connections menu. The join button is highlighted.](../media/azure-netapp-files/azure-netapp-files-active-directory-connections.png)
+ ![Screenshot showing the Active Directory connections menu. The join button is highlighted.](./media/create-active-directory-connections/azure-netapp-files-active-directory-connections.png)
>[!NOTE] >Azure NetApp Files supports only one Active Directory connection within the same region and the same subscription.
Several features of Azure NetApp Files require that you have an Active Directory
If you're using Azure NetApp Files with Microsoft Entra Domain Services, the organizational unit path is `OU=AADDC Computers`
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-join-active-directory.png" alt-text="Screenshot of the Join Active Directory input fields.":::
+ :::image type="content" source="./media/create-active-directory-connections/azure-netapp-files-join-active-directory.png" alt-text="Screenshot of the Join Active Directory input fields.":::
* <a name="aes-encryption"></a>**AES Encryption** This option enables AES encryption authentication support for the admin account of the AD connection.
- ![Screenshot of the AES description field. The field is a checkbox.](../media/azure-netapp-files/active-directory-aes-encryption.png)
+ ![Screenshot of the AES description field. The field is a checkbox.](./media/create-active-directory-connections/active-directory-aes-encryption.png)
See [Requirements for Active Directory connections](#requirements-for-active-directory-connections) for requirements.
Several features of Azure NetApp Files require that you have an Active Directory
>[!NOTE] >DNS PTR records for the AD DS computer account(s) must be created in the AD DS **Organizational Unit** specified in the Azure NetApp Files AD connection for LDAP Signing to work.
- ![Screenshot of the LDAP signing checkbox.](../media/azure-netapp-files/active-directory-ldap-signing.png)
+ ![Screenshot of the LDAP signing checkbox.](./media/create-active-directory-connections/active-directory-ldap-signing.png)
* **Allow local NFS users with LDAP** This option enables local NFS client users to access to NFS volumes. Setting this option disables extended groups for NFS volumes. It also limits the number of groups to 16. For more information, see [Allow local NFS users with LDAP to access a dual-protocol volume](create-volumes-dual-protocol.md#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume).
Several features of Azure NetApp Files require that you have an Active Directory
The **Group Membership Filter** option allows you to create a custom search filter for users who are members of specific AD DS groups.
- ![Screenshot of the LDAP search scope field, showing a checked box.](../media/azure-netapp-files/ldap-search-scope-checked.png)
+ ![Screenshot of the LDAP search scope field, showing a checked box.](./media/create-active-directory-connections/ldap-search-scope-checked.png)
See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
Several features of Azure NetApp Files require that you have an Active Directory
* <a name="backup-policy-users"></a> **Backup policy users** This option grants addition security privileges to AD DS domain users or groups that require elevated backup privileges to support backup, restore, and migration workflows in Azure NetApp Files. The specified AD DS user accounts or groups will have elevated NTFS permissions at the file or folder level.
- ![Screenshot of the Backup policy users field showing an empty text input field.](../media/azure-netapp-files/active-directory-backup-policy-users.png)
+ ![Screenshot of the Backup policy users field showing an empty text input field.](./media/create-active-directory-connections/active-directory-backup-policy-users.png)
The following privileges apply when you use the **Backup policy users** setting:
Several features of Azure NetApp Files require that you have an Active Directory
* **Security privilege users** <!-- SMB CA share feature --> This option grants security privilege (`SeSecurityPrivilege`) to AD DS domain users or groups that require elevated privileges to access Azure NetApp Files volumes. The specified AD DS users or groups will be allowed to perform certain actions on SMB shares that require security privilege not assigned by default to domain users.
- ![Screenshot showing the Security privilege users box of Active Directory connections window.](../media/azure-netapp-files/security-privilege-users.png)
+ ![Screenshot showing the Security privilege users box of Active Directory connections window.](./media/create-active-directory-connections/security-privilege-users.png)
The following privilege applies when you use the **Security privilege users** setting:
Several features of Azure NetApp Files require that you have an Active Directory
>[!NOTE] >The domain admins are automatically added to the Administrators privilege users group.
- ![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
+ ![Screenshot that shows the Administrators box of Active Directory connections window.](./media/create-active-directory-connections/active-directory-administrators.png)
The following privileges apply when you use the **Administrators privilege users** setting:
Several features of Azure NetApp Files require that you have an Active Directory
* Credentials, including your **username** and **password**
- ![Screenshot that shows Active Directory credentials fields showing username, password and confirm password fields.](../media/azure-netapp-files/active-directory-credentials.png)
+ ![Screenshot that shows Active Directory credentials fields showing username, password and confirm password fields.](./media/create-active-directory-connections/active-directory-credentials.png)
>[!IMPORTANT] >Although Active Directory supports 256-character passwords, Active Directory passwords with Azure NetApp Files **cannot** exceed 64 characters.
Several features of Azure NetApp Files require that you have an Active Directory
The Active Directory connection you created appears.
- ![Screenshot of the Active Directory connections menu showing a successfully created connection.](../media/azure-netapp-files/azure-netapp-files-active-directory-connections-created.png)
+ ![Screenshot of the Active Directory connections menu showing a successfully created connection.](./media/create-active-directory-connections/azure-netapp-files-active-directory-connections-created.png)
## <a name="shared_ad"></a>Map multiple NetApp accounts in the same subscription and region to an AD connection
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
### Steps 1. Navigate to the volume **Overview** menu. Select **Reset Active Directory Account**. Alternately, navigate to the **Volumes** menu. Identify the volume for which you want to reset the Active Directory account and select the three dots (`...`) at the end of the row. Select **Reset Active Directory Account**. 2. A warning message that explains the implications of this action will pop up. Type **yes** in the text box to proceed. ## Next steps
azure-netapp-files Create Cross Zone Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-cross-zone-replication.md
This process requires that your account is subscribed to the [availability zone
> [!IMPORTANT] > Logical availability zones for the subscription without Azure NetApp Files presence are marked `(Unavailable)` and are greyed out.
- :::image type="content" source="../media/azure-netapp-files/create-volume-availability-zone.png" alt-text="Screenshot of the 'Create a Zone' menu requires you to select an availability zone." lightbox="../media/azure-netapp-files/create-volume-availability-zone.png":::
+ :::image type="content" source="./media/create-cross-zone-replication/create-volume-availability-zone.png" alt-text="Screenshot of the 'Create a Zone' menu requires you to select an availability zone." lightbox="./media/create-cross-zone-replication/create-volume-availability-zone.png":::
1. Follow the steps indicated in the interface to create the volume. The **Review + Create** page shows the selected availability zone you specified.
- :::image type="content" source="../media/azure-netapp-files/zone-replication-review-create.png" alt-text="Screenshot showing the need to confirm selection of correct availability zone in the Review and Create page." lightbox="../media/azure-netapp-files/zone-replication-review-create.png":::
+ :::image type="content" source="./media/create-cross-zone-replication/zone-replication-review-create.png" alt-text="Screenshot showing the need to confirm selection of correct availability zone in the Review and Create page." lightbox="./media/create-cross-zone-replication/zone-replication-review-create.png":::
1. After you create the volume, the **Volume Overview** page includes availability zone information for the volume.
- :::image type="content" source="../media/azure-netapp-files/zone-replication-volume-overview.png" alt-text="The selected availability zone will display when you create the volume." lightbox="../media/azure-netapp-files/zone-replication-volume-overview.png":::
+ :::image type="content" source="./media/create-cross-zone-replication/zone-replication-volume-overview.png" alt-text="The selected availability zone will display when you create the volume." lightbox="./media/create-cross-zone-replication/zone-replication-volume-overview.png":::
## Create the data replication volume in another availability zone of the same region
This process requires that your account is subscribed to the [availability zone
1. Create the data replication volume (the destination volume) _in another availability zone, but in the same region as the source volume_. In the **Basics** tab of the **Create a new protection volume** page, select an available availability zone. > [!IMPORTANT] > Logical availability zones for the subscription without Azure NetApp Files presence are marked `(Unavailable)` and are greyed out.
- :::image type="content" source="../media/azure-netapp-files/zone-replication-create-new-volume.png" alt-text="Select an availability zone for the cross-zone replication volume." lightbox="../media/azure-netapp-files/zone-replication-create-new-volume.png":::
+ :::image type="content" source="./media/create-cross-zone-replication/zone-replication-create-new-volume.png" alt-text="Select an availability zone for the cross-zone replication volume." lightbox="./media/create-cross-zone-replication/zone-replication-create-new-volume.png":::
## Complete cross-zone replication configuration
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
1. Click the **Volumes** blade from the Capacity Pools blade. Click **+ Add volume** to create a volume.
- ![Navigate to Volumes](../media/azure-netapp-files/azure-netapp-files-navigate-to-volumes.png)
+ ![Navigate to Volumes](./media/shared/azure-netapp-files-navigate-to-volumes.png)
2. In the Create a Volume window, click **Create**, and provide information for the following fields under the Basics tab: * **Volume name**
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
If you have not delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
- ![Create subnet](../media/azure-netapp-files/azure-netapp-files-create-subnet.png)
+ ![Create subnet](./media/shared/azure-netapp-files-create-subnet.png)
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
- ![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
+ ![Show advanced selection](./media/shared/volume-create-advanced-selection.png)
3. Click the **Protocol** tab, and then complete the following actions: * Select **Dual-protocol** as the protocol type for the volume.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
* Optionally, [configure export policy for the volume](azure-netapp-files-configure-export-policy.md).
- ![Specify dual-protocol](../media/azure-netapp-files/create-volume-protocol-dual.png)
+ ![Specify dual-protocol](./media/create-volumes-dual-protocol/create-volume-protocol-dual.png)
4. Click **Review + Create** to review the volume details. Then click **Create** to create the volume.
The **Allow local NFS users with LDAP** option in Active Directory connections e
2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option.
- ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
+ ![Screenshot that shows the Allow local NFS users with LDAP option](./media/shared/allow-local-nfs-users-with-ldap.png)
## Manage LDAP POSIX Attributes You can manage POSIX attributes such as UID, Home Directory, and other values by using the Active Directory Users and Computers MMC snap-in. The following example shows the Active Directory Attribute Editor:
-![Active Directory Attribute Editor](../media/azure-netapp-files/active-directory-attribute-editor.png)
+![Active Directory Attribute Editor](./media/shared/active-directory-attribute-editor.png)
You need to set the following attributes for LDAP users and LDAP groups: * Required attributes for LDAP users:
You need to set the following attributes for LDAP users and LDAP groups:
The values specified for `objectClass` are separate entries. For example, in Multi-valued String Editor, `objectClass` would have separate values (`user` and `posixAccount`) specified as follows for LDAP users:
-![Screenshot of Multi-valued String Editor that shows multiple values specified for Object Class.](../media/azure-netapp-files/multi-valued-string-editor.png)
+![Screenshot of Multi-valued String Editor that shows multiple values specified for Object Class.](./media/shared/multi-valued-string-editor.png)
Microsoft Entra Domain Services doesn't allow you to modify the objectClass POSIX attribute on users and groups created in the organizational AADDC Users OU. As a workaround, you can create a custom OU and create users and groups in the custom OU.
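As a quick way to verify the POSIX attributes described above outside of the MMC snap-in, you can query them over LDAP from a Linux client. This is a minimal sketch; the domain controller name, bind account, and base DN are placeholders and should be replaced with your own values:

```bash
# Query the uid, uidNumber, gidNumber, and objectClass attributes for a single LDAP user.
# dc1.contoso.com and the bind DN are placeholders; substitute your environment's values.
ldapsearch -LLL \
  -H ldap://dc1.contoso.com \
  -D "CN=admin,CN=Users,DC=contoso,DC=com" -W \
  -b "DC=contoso,DC=com" \
  "(&(objectClass=posixAccount)(uid=UNIXuser))" \
  uid uidNumber gidNumber objectClass
```

The output should list both `user` and `posixAccount` under `objectClass` when the multi-valued attribute is set as described.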
On a Windows system, you can access the Active Directory Attribute Editor as fol
1. Click **Start**, navigate to **Windows Administrative Tools**, and then click **Active Directory Users and Computers** to open the Active Directory Users and Computers window. 2. Click the domain name that you want to view, and then expand the contents. 3. To display the advanced Attribute Editor, enable the **Advanced Features** option in the Active Directory Users Computers **View** menu.
- ![Screenshot that shows how to access the Attribute Editor Advanced Features menu.](../media/azure-netapp-files/attribute-editor-advanced-features.png)
+ ![Screenshot that shows how to access the Attribute Editor Advanced Features menu.](./media/create-volumes-dual-protocol/attribute-editor-advanced-features.png)
4. Double-click **Users** on the left pane to see the list of users. 5. Double-click a particular user to see its **Attribute Editor** tab.
azure-netapp-files Cross Region Replication Create Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-create-peering.md
Before you begin, ensure that you have reviewed the [requirements and considerat
You need to obtain the resource ID of the source volume that you want to replicate. 1. Go to the source volume, and select **Properties** under Settings to display the source volume resource ID.
- ![Locate source volume resource ID](../media/azure-netapp-files/cross-region-replication-source-volume-resource-id.png)
+ ![Locate source volume resource ID](./media/cross-region-replication-create-peering/cross-region-replication-source-volume-resource-id.png)
2. Copy the resource ID to the clipboard. You will need it later.
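If you prefer scripting, a hedged Azure CLI alternative for retrieving the same resource ID is shown below; the resource group, account, pool, and volume names are placeholders:

```bash
# Print the full resource ID of the source volume; values in angle brackets are placeholders.
az netappfiles volume show \
  --resource-group <source-resource-group> \
  --account-name <source-netapp-account> \
  --pool-name <source-capacity-pool> \
  --name <source-volume> \
  --query id --output tsv
```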
You can also select an existing NetApp account in a different region.
4. Create the data replication volume by selecting **Volumes** under Storage Service in the destination NetApp account. Then select the **+ Add data replication** button.
- ![Add data replication](../media/azure-netapp-files/cross-region-replication-add-data-replication.png)
+ ![Add data replication](./media/cross-region-replication-create-peering/cross-region-replication-add-data-replication.png)
5. In the Create a Volume page that appears, complete the following fields under the **Basics** tab: * Volume name
For the NFS protocol, ensure that the export policy rules satisfy the requiremen
8. Under the **Replication** tab, paste in the source volume resource ID that you obtained in [Locate the source volume resource ID](#locate-the-source-volume-resource-id), and then select the desired replication schedule. There are three options for the replication schedule: every 10 minutes, hourly, and daily.
- ![Create volume replication](../media/azure-netapp-files/cross-region-replication-create-volume-replication.png)
+ ![Create volume replication](./media/cross-region-replication-create-peering/cross-region-replication-create-volume-replication.png)
9. Select **Review + Create**, then select **Create** to create the data replication volume.
- ![Review and create replication](../media/azure-netapp-files/cross-region-replication-review-create-replication.png)
+ ![Review and create replication](./media/cross-region-replication-create-peering/cross-region-replication-review-create-replication.png)
## Authorize replication from the source volume
To authorize the replication, you need to obtain the resource ID of the replicat
3. Select the replication destination volume, go to **Properties** under Settings, and locate the **Resource ID** of the destination volume. Copy the destination volume resource ID to the clipboard.
- ![Properties resource ID](../media/azure-netapp-files/cross-region-replication-properties-resource-id.png)
+ ![Properties resource ID](./media/cross-region-replication-create-peering/cross-region-replication-properties-resource-id.png)
4. In Azure NetApp Files, go to the replication source account and source capacity pool. 5. Locate the replication source volume and select it. Navigate to **Replication** under Storage Service then select **Authorize**.
- ![Authorize replication](../media/azure-netapp-files/cross-region-replication-authorize.png)
+ ![Authorize replication](./media/cross-region-replication-create-peering/cross-region-replication-authorize.png)
6. In the Authorize field, paste the destination replication volume resource ID that you obtained in Step 3, then select **OK**.
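The authorization step can also be scripted. The following is a sketch using the Azure CLI replication commands; the names are placeholders, and the exact command set depends on your CLI version:

```bash
# Authorize (approve) replication on the SOURCE volume by passing the DESTINATION volume's resource ID.
az netappfiles volume replication approve \
  --resource-group <source-resource-group> \
  --account-name <source-netapp-account> \
  --pool-name <source-capacity-pool> \
  --name <source-volume> \
  --remote-volume-resource-id "/subscriptions/<subscription-id>/resourceGroups/<dest-rg>/providers/Microsoft.NetApp/netAppAccounts/<dest-account>/capacityPools/<dest-pool>/volumes/<dest-volume>"
```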
azure-netapp-files Cross Region Replication Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-delete.md
You can terminate the replication connection between the source and the destinat
4. Type **Yes** when prompted and click **Break**.
- ![Break replication peering](../media/azure-netapp-files/cross-region-replication-break-replication-peering.png)
+ ![Break replication peering](./media/shared/cross-region-replication-break-replication-peering.png)
1. To delete volume replication, select **Replication** from the source or the destination volume.
You can terminate the replication connection between the source and the destinat
3. Confirm deletion by typing **Yes** and clicking **Delete**.
- ![Delete replication](../media/azure-netapp-files/cross-region-replication-delete-replication.png)
+ ![Delete replication](./media/cross-region-replication-delete/cross-region-replication-delete-replication.png)
## Delete source or destination volumes
If you want to delete the source or destination volume, you must perform the fol
2. Delete the destination or source volume as needed by right-clicking the volume name and select **Delete**.
- ![Screenshot that shows right-click menu of a volume.](../media/azure-netapp-files/cross-region-replication-delete-volume.png)
+ ![Screenshot that shows right-click menu of a volume.](./media/cross-region-replication-delete/cross-region-replication-delete-volume.png)
## Next steps
azure-netapp-files Cross Region Replication Display Health Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-display-health-status.md
You can view replication status on the source volume or the destination volume.
* **Total progress** – Shows the total number of cumulative bytes transferred over the lifetime of the relationship. This amount is the actual bytes transferred, and it might differ from the logical space that the source and destination volumes report.
- ![Replication health status](../media/azure-netapp-files/cross-region-replication-health-status.png)
+ ![Replication health status](./media/cross-region-replication-display-health-status/cross-region-replication-health-status.png)
> [!NOTE] > Replication relationship shows health status as *unhealthy* if previous replication jobs are not complete. This status is a result of large volumes being transferred with a lower transfer window (for example, a ten-minute transfer time for a large volume). In this case, the relationship status shows *transferring* and health status shows *unhealthy*.
Create [alert rules in Azure Monitor](../azure-monitor/alerts/alerts-overview.md
* If your replication schedule is daily, enter 103,680 (24 hours * 60 minutes * 60 seconds * 1.2). 9. Select **Review + create**. The alert rule is ready for use. ## Next steps
azure-netapp-files Cross Region Replication Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-manage-disaster-recovery.md
When you need to activate the destination volume (for example, when you want to
4. Type **Yes** when prompted and click the **Break** button.
- ![Break replication peering](../media/azure-netapp-files/cross-region-replication-break-replication-peering.png)
+ ![Break replication peering](./media/shared/cross-region-replication-break-replication-peering.png)
5. Mount the destination volume by following the steps in [Mount or unmount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md). This step enables a client to access the destination volume.
After disaster recovery, you can reactivate the source volume by performing a re
2. Type **Yes** when prompted and click **OK**.
- ![Resync replication](../media/azure-netapp-files/cross-region-replication-resync-replication.png)
+ ![Resync replication](./media/cross-region-replication-manage-disaster-recovery/cross-region-replication-resync-replication.png)
3. Monitor the source volume health status by following steps in [Display health status of replication relationship](cross-region-replication-display-health-status.md). When the source volume health status shows the following values, the reverse resync operation is complete, and changes made at the destination volume are now captured on the source volume:
azure-netapp-files Default Individual User Group Quotas Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/default-individual-user-group-quotas-introduction.md
The following subsections describe and depict the behavior of the various quota
A default user quota automatically applies a quota limit to *all users* accessing the volume without creating separate quotas for each target user. Each user can only consume the amount of storage as defined by the default user quota setting. No single user can exhaust the volume's capacity, as long as the default user quota is less than the volume quota. The following diagram depicts this behavior.
### Individual user quota
An individual user quota applies a quota to an *individual target user* accessing the volume. You can specify the target user by a UNIX user ID (UID) or a Windows security identifier (SID), depending on volume protocol (NFS or SMB). You can define multiple individual user quota settings on a volume. Each user can only consume the amount of storage as defined by their individual user quota setting. No single user can exhaust the volume's capacity, as long as the individual user quota is less than the volume quota. Individual user quotas override a default user quota, where applicable. The following diagram depicts this behavior.
### Combining default and individual user quotas
You can create quota exceptions for specific users by allowing those users less or more capacity than a default user quota setting by combining default and individual user quota settings. In the following example, individual user quotas are set for `user1`, `user2`, and `user3`. Any other user is subject to the default user quota setting. The individual quota settings can be smaller or larger than the default user quota setting. The following diagram depicts this behavior.
### Default group quota
A default group quota automatically applies a quota limit to *all users within all groups* accessing the volume without creating separate quotas for each target group. The total consumption for all users in any group can't exceed the group quota limit. Group quotas aren't applicable to SMB and dual-protocol volumes. A single user can potentially consume the entire group quota. The following diagram depicts this behavior.
### Individual group quota
An individual group quota applies a quota to *all users within an individual target group* accessing the volume. The total consumption for all users *in that group* can't exceed the group quota limit. Group quotas aren't applicable to SMB and dual-protocol volumes. You specify the group by a UNIX group ID (GID). Individual group quotas override default group quotas where applicable. The following diagram depicts this behavior.
### Combining individual and default group quota
You can create quota exceptions for specific groups by allowing those groups less or more capacity than a default group quota setting by combining default and individual group quota settings. Group quotas aren't applicable to SMB and dual-protocol volumes. In the following example, individual group quotas are set for `group1` and `group2`. Any other group is subject to the default group quota setting. The individual group quota settings can be smaller or larger than the default group quota setting. The following diagram depicts this scenario.
### Combining default and individual user and group quotas
You can combine the previously described quota options to achieve very specific quota definitions: (optionally) start by defining a default group quota, followed by individual group quotas matching your requirements.
Then you can further tighten individual user consumption by first (optionally) defining a default user quota, followed by individual user quotas matching individual user requirements. Group quotas aren't applicable to SMB and dual-protocol volumes. In the following example, a default group quota has been set as well as individual group quotas for `group1` and `group2`. Furthermore, a default user quota has been set as well as individual quotas for `user1`, `user2`, `user3`, `user5`, and `userZ`. The following diagram depicts this scenario.
## Observing user quota settings and consumption
Windows users can observe their user quota and consumption in Windows Explorer a
* Administrator view:
- :::image type="content" source="../media/azure-netapp-files/user-quota-administrator-view.png" alt-text="Screenshot showing administrator view of user quota and consumption.":::
+ :::image type="content" source="./media/default-individual-user-group-quotas-introduction/user-quota-administrator-view.png" alt-text="Screenshot showing administrator view of user quota and consumption.":::
* User view:
- :::image type="content" source="../media/azure-netapp-files/user-quota-user-view.png" alt-text="Screenshot showing user view of user quota and consumption.":::
+ :::image type="content" source="./media/default-individual-user-group-quotas-introduction/user-quota-user-view.png" alt-text="Screenshot showing user view of user quota and consumption.":::
### Linux client
Linux users can observe their *user* quota and consumption by using the [`quota(1)`](https://man7.org/linux/man-pages/man1/quota.1.html) command. Assume a scenario where a 2-TiB volume with a 100-MiB default or individual user quota has been configured. On the client, this scenario is represented as follows: Azure NetApp Files currently doesn't support group quota reporting. However, you know you've reached your group's quota limit when you receive a `Disk quota exceeded` error in writing to the volume while you haven't reached your user quota yet. In the following scenario, users `user4` and `user5` are members of `group2`. The group `group2` has a 200-MiB default or individual group quota assigned. The volume is already populated with 150 MiB of data owned by user `user4`. User `user5` appears to have a 100-MiB quota available as reported by the `quota(1)` command, but `user5` can't consume more than 50 MiB due to the remaining group quota for `group2`. User `user5` receives a `Disk quota exceeded` error message after writing 50 MiB, despite not reaching the user quota. > [!IMPORTANT] > For quota reporting to work, the client needs access to port 4049/UDP on the Azure NetApp Files volumes' storage endpoint. When using NSGs with standard network features on the Azure NetApp Files delegated subnet, make sure that access is enabled.
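For reference, a minimal quota check from an NFS client uses the standard `quota` utility mentioned above:

```bash
# Report the calling user's quota and consumption on all mounted file systems, in human-readable units.
# Quota reporting requires access to UDP port 4049 on the Azure NetApp Files storage endpoint.
quota -s
```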
azure-netapp-files Disable Showmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/disable-showmount.md
The disable showmount capability is currently in preview. If you're using this f
3. Confirm that you've disabled the showmount in the **Overview** menu of your Azure subscription. The attribute **Disable Showmount** displays as true if the operation succeeded.
- :::image type="content" source="../media/azure-netapp-files/disable-showmount.png" alt-text="Screenshot of the Azure interface depicting the disable showmount option." lightbox="../media/azure-netapp-files/disable-showmount.png":::
+ :::image type="content" source="./media/disable-showmount/disable-showmount.png" alt-text="Screenshot of the Azure interface depicting the disable showmount option." lightbox="./media/disable-showmount/disable-showmount.png":::
4. If you need to enable showmount, unregister the feature.
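Feature registration of this kind is typically handled with the Azure CLI. The feature name below (`ANFDisableShowmount`) is an assumption and may differ, so verify it against the current documentation before use:

```bash
# Register the preview feature on the subscription (feature name is assumed; confirm before use).
az feature register --namespace Microsoft.NetApp --name ANFDisableShowmount

# Check the registration state; re-run until it reports "Registered".
az feature show --namespace Microsoft.NetApp --name ANFDisableShowmount --query properties.state

# Refresh the resource provider registration so the change takes effect.
az provider register --namespace Microsoft.NetApp
```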
azure-netapp-files Double Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md
If you are using this feature for the first time, you need to [register for the
When you create a volume in a double-encryption capacity pool, the default key management (the **Encryption key source** field) is `Microsoft Managed Key`, and the other choice is `Customer Managed Key`. Using customer-managed keys requires additional preparation of an Azure Key Vault and other details. For more information about using volume encryption with customer managed keys, see [Configure customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md). ## Supported regions
azure-netapp-files Dual Protocol Permission Behaviors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dual-protocol-permission-behaviors.md
One such example is if your storage resides in Azure NetApp Files, while your NA
The following figure shows an example of that kind of configuration. ## Next steps
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
The capacity pool that you want to move the volume to must already exist. The ca
1. On the Volumes page, right-click the volume whose service level you want to change. Select **Change Pool**.
- ![Right-click volume](../media/azure-netapp-files/right-click-volume.png)
+ ![Right-click volume](./media/dynamic-change-volume-service-level/right-click-volume.png)
2. In the Change pool window, select the capacity pool you want to move the volume to.
- ![Change pool](../media/azure-netapp-files/change-pool.png)
+ ![Change pool](./media/dynamic-change-volume-service-level/change-pool.png)
3. Select **OK**.
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
You can enable the SMB Continuous Availability (CA) feature when you [create a n
1. Select the SMB volume that you want to have SMB CA enabled. Then select **Edit**. 1. On the Edit window that appears, select the **Enable Continuous Availability** checkbox.
- ![Snapshot that shows the Enable Continuous Availability option.](../media/azure-netapp-files/enable-continuous-availability.png)
+ ![Snapshot that shows the Enable Continuous Availability option.](./media/enable-continuous-availability-existing-smb/enable-continuous-availability.png)
1. Reboot the Windows systems connecting to the existing SMB share.
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
The Azure NetApp Files service has a policy that automatically updates the passw
To see when the password was last updated on the Azure NetApp Files SMB computer account, check the `pwdLastSet` property on the computer account using the [Attribute Editor](create-volumes-dual-protocol.md#access-active-directory-attribute-editor) in the **Active Directory Users and Computers** utility:
-![Screenshot that shows the Active Directory Users and Computers utility](../media/azure-netapp-files/active-directory-users-computers-utility.png)
+![Screenshot that shows the Active Directory Users and Computers utility](./media/faq-smb/active-directory-users-computers-utility.png)
>[!NOTE] > Due to an interoperability issue with the [April 2022 Monthly Windows Update](
azure-netapp-files Lightweight Directory Access Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/lightweight-directory-access-protocol.md
LDAP can be a name mapping resource, if the LDAP schema attributes on the LDAP s
In the following example, a user has a Windows name of `asymmetric` and needs to map to a UNIX identity of `UNIXuser`. To achieve that in Azure NetApp Files, open an instance of the [Active Directory Users and Computers MMC](/troubleshoot/windows-server/system-management-components/remote-server-administration-tools). Then, find the desired user and open the properties box. (Doing so requires [enabling the Attribute Editor](http://directoryadmin.blogspot.com/2019/02/attribute-editor-tab-missing-in-active.html)). Navigate to the Attribute Editor tab and find the UID field, then populate the UID field with the desired UNIX user name `UNIXuser` and click **Add** and **OK** to confirm. After this action is done, files written from Windows SMB shares by the Windows user `asymmetric` will be owned by `UNIXuser` from the NFS side. The following example shows Windows SMB owner `asymmetric`: The following example shows NFS owner `UNIXuser` (mapped from Windows user `asymmetric` using LDAP):
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
You can deploy new volumes in the logical availability zone of your choice. You
> [!IMPORTANT] > Logical availability zones for the subscription without Azure NetApp Files presence are marked `(Unavailable)` and are greyed out.
- [ ![Screenshot that shows the Availability Zone menu.](../media/azure-netapp-files/availability-zone-menu-drop-down.png) ](../media/azure-netapp-files/availability-zone-menu-drop-down.png#lightbox)
+ [ ![Screenshot that shows the Availability Zone menu.](./media/manage-availability-zone-volume-placement/availability-zone-menu-drop-down.png) ](./media/manage-availability-zone-volume-placement/availability-zone-menu-drop-down.png#lightbox)
3. Follow the UI to create the volume. The **Review + Create** page shows the selected availability zone you specified.
- [ ![Screenshot that shows the Availability Zone review.](../media/azure-netapp-files/availability-zone-display-down.png) ](../media/azure-netapp-files/availability-zone-display-down.png#lightbox)
+ [ ![Screenshot that shows the Availability Zone review.](./media/manage-availability-zone-volume-placement/availability-zone-display-down.png) ](./media/manage-availability-zone-volume-placement/availability-zone-display-down.png#lightbox)
4. Navigate to **Properties** to confirm your availability zone configuration.
- :::image type="content" source="../media/azure-netapp-files/availability-zone-volume-overview.png" alt-text="Screenshot of volume properties interface." lightbox="../media/azure-netapp-files/availability-zone-volume-overview.png":::
+ :::image type="content" source="./media/manage-availability-zone-volume-placement/availability-zone-volume-overview.png" alt-text="Screenshot of volume properties interface." lightbox="./media/manage-availability-zone-volume-placement/availability-zone-volume-overview.png":::
## Populate an existing volume with availability zone information
You can deploy new volumes in the logical availability zone of your choice. You
> [!IMPORTANT] > Availability zone information can only be populated as provided. You can't select an availability zone or move the volume to another availability zone by using this feature. If you want to move this volume to another availability zone, consider using [cross-zone replication](create-cross-zone-replication.md) (after populating the volume with the availability zone information). >
- > :::image type="content" source="../media/azure-netapp-files/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone window." lightbox="../media/azure-netapp-files/populate-availability-zone.png":::
+ > :::image type="content" source="./media/manage-availability-zone-volume-placement/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone window." lightbox="./media/manage-availability-zone-volume-placement/populate-availability-zone.png":::
## Populate availability zone for Terraform-managed volumes
The populate availability zone feature requires a `zone` property on the volume
1. In the Azure portal, locate the Terraform module. In the volume **Overview**, select **Populate availability zone** and make note of the availability zone. Do _not_ select save.
- :::image type="content" source="../media/azure-netapp-files/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone menu." lightbox="../media/azure-netapp-files/populate-availability-zone.png":::
+ :::image type="content" source="./media/manage-availability-zone-volume-placement/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone menu." lightbox="./media/manage-availability-zone-volume-placement/populate-availability-zone.png":::
1. In the volume's configuration file (`main.tf`), add a value for `zone`, entering the numerical value you retrieved in the previous step. For example, if the volume's availability zone is 2, enter `zone = 2`. Save the file. 1. Return to the Azure portal. Select **Save** to populate the availability zone.
azure-netapp-files Manage Billing Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-billing-tags.md
Billing tags are assigned at the capacity pool level, not volume level.
> [!IMPORTANT] > Tag data is replicated globally. As such, do not use tag names or values that could compromise the security of your resources. For example, do not use tag names that contain personal or sensitive information.
- ![Snapshot that shows the Tags window of a capacity pool.](../media/azure-netapp-files/billing-tags-capacity-pool.png)
+ ![Snapshot that shows the Tags window of a capacity pool.](./media/manage-billing-tags/billing-tags-capacity-pool.png)
3. You can display and download information about tagged resources by using the [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) portal: 1. Click **Cost Analysis** and select the **Cost by resource** view.
- [ ![Screenshot that shows Cost Analysis of Azure Cost Management](../media/azure-netapp-files/cost-analysis.png) ](../media/azure-netapp-files/cost-analysis.png#lightbox)
+ [ ![Screenshot that shows Cost Analysis of Azure Cost Management](./media/manage-billing-tags/cost-analysis.png) ](./media/manage-billing-tags/cost-analysis.png#lightbox)
2. To download an invoice, select **Invoices** and then select the **Download** button.
- [ ![Screenshot that shows Invoices of Azure Cost Management](../media/azure-netapp-files/azure-cost-invoices.png) ](../media/azure-netapp-files/azure-cost-invoices.png#lightbox)
+ [ ![Screenshot that shows Invoices of Azure Cost Management](./media/manage-billing-tags/azure-cost-invoices.png) ](./media/manage-billing-tags/azure-cost-invoices.png#lightbox)
1. In the Download window that appears, download usage details. The downloaded `csv` file will include capacity pool billing tags for the corresponding resources.
- ![Snapshot that shows the Download window of Azure Cost Management.](../media/azure-netapp-files/invoice-download.png)
+ ![Snapshot that shows the Download window of Azure Cost Management.](./media/manage-billing-tags/invoice-download.png)
- [ ![Screenshot that shows the downloaded spreadsheet.](../media/azure-netapp-files/spreadsheet-download.png) ](../media/azure-netapp-files/spreadsheet-download.png#lightbox)
+ [ ![Screenshot that shows the downloaded spreadsheet.](./media/manage-billing-tags/spreadsheet-download.png) ](./media/manage-billing-tags/spreadsheet-download.png#lightbox)
## Next steps
azure-netapp-files Manage Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md
Before creating or enabling a cool-access volume, you need to configure a Standa
1. Check the **Enable Cool Access** checkbox, then select **Create**. When you select **Enable Cool Access**, the UI automatically selects the auto QoS type. The manual QoS type isn't supported for Standard service with cool access.
- :::image type="content" source="../media/azure-netapp-files/cool-access-new-capacity-pool.png" alt-text="Screenshot that shows the New Capacity Pool window with the Enable Cool Access option selected." lightbox="../media/azure-netapp-files/cool-access-new-capacity-pool.png":::
+ :::image type="content" source="./media/manage-cool-access/cool-access-new-capacity-pool.png" alt-text="Screenshot that shows the New Capacity Pool window with the Enable Cool Access option selected." lightbox="./media/manage-cool-access/cool-access-new-capacity-pool.png":::
#### <a name="enable-cool-access-existing-pool"></a> Enable cool access on an existing capacity pool
You can enable cool access support on an existing Standard service-level capacit
2. Select **Enable Cool Access**:
- :::image type="content" source="../media/azure-netapp-files/cool-access-existing-pool.png" alt-text="Screenshot that shows the right-click menu on an existing capacity pool. The menu enables you to select the Enable Cool Access option." lightbox="../media/azure-netapp-files/cool-access-existing-pool.png":::
+ :::image type="content" source="./media/manage-cool-access/cool-access-existing-pool.png" alt-text="Screenshot that shows the right-click menu on an existing capacity pool. The menu enables you to select the Enable Cool Access option." lightbox="./media/manage-cool-access/cool-access-existing-pool.png":::
### Configure a volume for cool access
Standard storage with cool access can be enabled during the creation of a volume
* When the cool access setting is disabled on the volume, you can't modify the cool access retrieval policy setting on the volume. * Once you disable the cool access setting on the volume, the cool access retrieval policy setting automatically reverts to `Default`.
- :::image type="content" source="../media/azure-netapp-files/cool-access-new-volume.png" alt-text="Screenshot that shows the Create a Volume page. Under the basics tab, the Enable Cool Access checkbox is selected. The options for the cool access retrieval policy are displayed. " lightbox="../media/azure-netapp-files/cool-access-new-volume.png":::
+ :::image type="content" source="./media/manage-cool-access/cool-access-new-volume.png" alt-text="Screenshot that shows the Create a Volume page. Under the basics tab, the Enable Cool Access checkbox is selected. The options for the cool access retrieval policy are displayed. " lightbox="./media/manage-cool-access/cool-access-new-volume.png":::
1. Follow one of the following articles to complete the volume creation: * [Create an NFS volume](azure-netapp-files-create-volumes.md)
In a Standard service-level, cool-access enabled capacity pool, you can enable a
* Once you disable the cool access setting on the volume, the cool access retrieval policy setting automatically reverts to `Default`.
- :::image type="content" source="../media/azure-netapp-files/cool-access-existing-volume.png" alt-text="Screenshot that shows the Enable Cool Access window with the Enable Cool Access field selected. " lightbox="../media/azure-netapp-files/cool-access-existing-volume.png":::
+ :::image type="content" source="./media/manage-cool-access/cool-access-existing-volume.png" alt-text="Screenshot that shows the Enable Cool Access window with the Enable Cool Access field selected. " lightbox="./media/manage-cool-access/cool-access-existing-volume.png":::
### <a name="modify_cool"></a>Modify cool access configuration for a volume
azure-netapp-files Manage Default Individual User Group Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md
Quota rules only come into effect on the CRR/CZR destination volume after the re
1. From the Azure portal, navigate to the volume for which you want to create a quota rule. Select **User and group quotas** in the navigation pane, then click **Add** to create a quota rule for a volume.
- ![Screenshot that shows the New Quota window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-new-quota.png)
+ ![Screenshot that shows the New Quota window of Users and Group Quotas.](./media/manage-default-individual-user-group-quotas/user-group-quotas-new-quota.png)
2. In the **New quota** window that appears, provide information for the following fields, then click **Create**.
Quota rules only come into effect on the CRR/CZR destination volume after the re
1. On the Azure portal, navigate to the volume whose quota rule you want to edit or delete. Select `…` at the end of the quota rule row, then select **Edit** or **Delete** as appropriate.
- ![Screenshot that shows the Edit and Delete options of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-delete-edit.png)
+ ![Screenshot that shows the Edit and Delete options of Users and Group Quotas.](./media/manage-default-individual-user-group-quotas/user-group-quotas-delete-edit.png)
1. If you're editing a quota rule, update **Quota Limit** in the Edit User Quota Rule window that appears.
- ![Screenshot that shows the Edit User Quota Rule window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-edit-rule.png)
+ ![Screenshot that shows the Edit User Quota Rule window of Users and Group Quotas.](./media/manage-default-individual-user-group-quotas/user-group-quotas-edit-rule.png)
1. If you're deleting a quota rule, confirm the deletion by selecting **Yes**.
- ![Screenshot that shows the Confirm Delete window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-confirm-delete.png)
+ ![Screenshot that shows the Confirm Delete window of Users and Group Quotas.](./media/manage-default-individual-user-group-quotas/user-group-quotas-confirm-delete.png)
## Next steps * [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md)
azure-netapp-files Manage Manual Qos Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-manual-qos-capacity-pool.md
You can change a capacity pool that currently uses the auto QoS type to use the
3. Select **Change QoS type**. Then set **New QoS Type** to **Manual**. Select **OK**.
-![Change QoS type](../media/azure-netapp-files/change-qos-type.png)
+![Change QoS type](./media/manage-manual-qos-capacity-pool/change-qos-type.png)
## Monitor the throughput of a manual QoS capacity pool
If a volume is contained in a manual QoS capacity pool, you can modify the allot
2. Select **Change throughput**. Specify the **Throughput (MiB/S)** that you want. Select **OK**.
- ![Change QoS throughput](../media/azure-netapp-files/change-qos-throughput.png)
+ ![Change QoS throughput](./media/manage-manual-qos-capacity-pool/change-qos-throughput.png)
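The same changes can be made from the Azure CLI. This is a sketch under the assumption that your CLI version exposes the QoS and throughput parameters shown; resource names are placeholders:

```bash
# Switch an existing capacity pool from auto to manual QoS (assumed parameter: --qos-type).
az netappfiles pool update \
  --resource-group <resource-group> \
  --account-name <netapp-account> \
  --name <capacity-pool> \
  --qos-type Manual

# Assign a specific throughput, in MiB/s, to a volume in the manual QoS pool.
az netappfiles volume update \
  --resource-group <resource-group> \
  --account-name <netapp-account> \
  --pool-name <capacity-pool> \
  --name <volume> \
  --throughput-mibps 128
```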
## Next steps
azure-netapp-files Manage Smb Share Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-smb-share-access-control-lists.md
There are two ways to view share settings:
You must have the mount path. You can retrieve this in the Azure portal by navigating to the **Overview** menu of the volume for which you want to configure share ACLs. Identify the **Mount path**. ## View SMB share ACLs with advanced permissions
Advanced permissions for files, folders, and shares on an Azure NetApp File volu
1. In Windows Explorer, use the mount path to open the volume. Right-click on the volume, select **Properties**. Switch to the **Security** tab then select **Advanced**.
- :::image type="content" source="../media/azure-netapp-files/security-advanced-tab.png" alt-text="Screenshot of security tab." lightbox="../media/azure-netapp-files/security-advanced-tab.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/security-advanced-tab.png" alt-text="Screenshot of security tab." lightbox="./media/manage-smb-share-access-control-lists/security-advanced-tab.png":::
1. In the new window that pops up, switch to the **Share** tab to view the share-level ACLs. You cannot modify share-level ACLs. >[!NOTE] >Azure NetApp Files doesn't support Windows audit ACLs. Azure NetApp Files ignores any audit ACL applied to files or directories hosted on Azure NetApp Files volumes.
- :::image type="content" source="../media/azure-netapp-files/view-permissions.png" alt-text="Screenshot of the permissions tab." lightbox="../media/azure-netapp-files/view-permissions.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/view-permissions.png" alt-text="Screenshot of the permissions tab." lightbox="./media/manage-smb-share-access-control-lists/view-permissions.png":::
- :::image type="content" source="../media/azure-netapp-files/view-shares.png" alt-text="Screenshot of the share tab." lightbox="../media/azure-netapp-files/view-shares.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/view-shares.png" alt-text="Screenshot of the share tab." lightbox="./media/manage-smb-share-access-control-lists/view-shares.png":::
## Modify share-level ACLs with the Microsoft Management Console
You can only modify the share ACLs in Azure NetApp Files with the Microsoft Mana
1. In the Computer Management window, right-click **Computer management (local)** then select **Connect to another computer**.
- :::image type="content" source="../media/azure-netapp-files/computer-management-local.png" alt-text="Screenshot of the computer management window." lightbox="../media/azure-netapp-files/computer-management-local.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/computer-management-local.png" alt-text="Screenshot of the computer management window." lightbox="./media/manage-smb-share-access-control-lists/computer-management-local.png":::
1. In the **Another computer** field, enter the fully qualified domain name (FQDN).
You can only modify the share ACLs in Azure NetApp Files with the Microsoft Mana
1. Once connected, expand **System Tools** then select **Shared Folders > Shares**. 1. To manage share permissions, right-click on the name of the share you want to modify from the list and select **Properties**.
- :::image type="content" source="../media/azure-netapp-files/share-folder.png" alt-text="Screenshot of the share folder." lightbox="../media/azure-netapp-files/share-folder.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/share-folder.png" alt-text="Screenshot of the share folder." lightbox="./media/manage-smb-share-access-control-lists/share-folder.png":::
1. Add, remove, or modify the share ACLs as appropriate.
- :::image type="content" source="../media/azure-netapp-files/add-share.png" alt-text="Screenshot showing how to add a share." lightbox="../media/azure-netapp-files/add-share.png":::
+ :::image type="content" source="./media/manage-smb-share-access-control-lists/add-share.png" alt-text="Screenshot showing how to add a share." lightbox="./media/manage-smb-share-access-control-lists/add-share.png":::
## Next step
azure-netapp-files Monitor Volume Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/monitor-volume-capacity.md
You can use Windows clients to check the used and available capacity of a volume
* Go to File Explorer, right-click the mapped drive, and select **Properties** to display capacity.
- [ ![Screenshot that shows Explorer drive properties and volume properties.](../media/azure-netapp-files/monitor-explorer-drive-properties.png) ](../media/azure-netapp-files/monitor-explorer-drive-properties.png#lightbox)
+ [ ![Screenshot that shows Explorer drive properties and volume properties.](./media/monitor-volume-capacity/monitor-explorer-drive-properties.png) ](./media/monitor-volume-capacity/monitor-explorer-drive-properties.png#lightbox)
* Use the `dir` command at the command prompt:
- ![Screenshot that shows using the dir command to display capacity.](../media/azure-netapp-files/monitor-volume-properties-dir-command.png)
+ ![Screenshot that shows using the dir command to display capacity.](./media/monitor-volume-capacity/monitor-volume-properties-dir-command.png)
The *available space* is accurate using File Explorer or the `dir` command. However, the *consumed/used space* will be an estimate when snapshots are generated on the volume. The [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-consumption-of-snapshots) counts towards the total consumed space on the volume. To get the absolute volume consumption, including the capacity used by snapshots, use the [Azure NetApp Metrics](azure-netapp-files-metrics.md#volumes) in the Azure portal.
The `-h` option shows the size, including used and available space in human read
The following snapshot shows volume capacity reporting in Linux:
-![Screenshot that shows volume capacity reporting in Linux.](../media/azure-netapp-files/monitor-volume-properties-linux-command.png)
+![Screenshot that shows volume capacity reporting in Linux.](./media/monitor-volume-capacity/monitor-volume-properties-linux-command.png)
The *available space* is accurate using the `df` command. However, the *consumed/used space* will be an estimate when snapshots are generated on the volume. The [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-consumption-of-snapshots) counts towards the total consumed space on the volume. To get the absolute volume consumption, including the capacity used by snapshots, use the [Azure NetApp Metrics](azure-netapp-files-metrics.md#volumes) in the Azure portal.
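For reference, the Linux-side check described above is just the standard `df` utility; the mount path below is illustrative:

```bash
# Display size, used, and available space for the mounted volume in human-readable units.
# Used space is an estimate when snapshots exist on the volume.
df -h /mnt/anfvolume
```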
azure-netapp-files Mount Volumes Vms Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/mount-volumes-vms-smb.md
You can mount an SMB file for Windows virtual machines (VMs).
1. Select the **Volumes** menu and then the SMB volume that you want to mount. 1. To mount the SMB volume using a Windows client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume.
- :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png" alt-text="Screenshot of Mount instructions." lightbox="../media/azure-netapp-files/azure-netapp-files-mount-instructions-smb.png":::
+ :::image type="content" source="./media/mount-volumes-vms-smb/azure-netapp-files-mount-instructions-smb.png" alt-text="Screenshot of Mount instructions." lightbox="./media/mount-volumes-vms-smb/azure-netapp-files-mount-instructions-smb.png":::
## Next steps
azure-netapp-files Network Attached File Permissions Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-nfs.md
Mode bit permissions in NFS provide basic permissions for files and folders, usi
Numeric values are applied to different segments of an access control: owner, group and everyone else, meaning that there are no granular user access controls in place for basic NFSv3. The following image shows an example of how a mode bit access control might be constructed for use with an NFSv3 object. Azure NetApp Files doesn't support POSIX ACLs. Thus granular ACLs are only possible with NFSv3 when using an NTFS security style volume with valid UNIX to Windows name mappings via a name service such as Active Directory LDAP. Alternately, you can use NFSv4.1 with Azure NetApp Files and NFSv4.1 ACLs.
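As an illustration of the three permission segments (owner, group, everyone else) described above, the following commands set and verify a typical mode on an NFSv3 file; the path is hypothetical:

```bash
# rwx for the owner (7), r-x for the group (5), r-- for everyone else (4) = mode 754.
chmod 754 /mnt/anfvolume/report.txt

# Verify the resulting mode bits, owner, and group.
ls -l /mnt/anfvolume/report.txt
```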
azure-netapp-files Network Attached File Permissions Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-smb.md
SMB volumes in Azure NetApp Files can leverage NTFS security styles to make use
NTFS ACLs provide granular permissions and ownership for files and folders by way of access control entries (ACEs). Directory permissions can also be set to enable or disable inheritance of permissions. For a complete overview of NTFS-style ACLs, see [Microsoft Access Control overview](/windows/security/identity-protection/access-control/access-control).
azure-netapp-files Network Attached File Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions.md
Folders can be assigned inheritance flags, which means that parent folder permis
* In Windows SMB shares, inheritance is controlled in the advanced permission view.
* For NFSv3, permission inheritance doesn't work via ACL, but instead can be mimicked using umask and setgid flags.
* With NFSv4.1, permission inheritance can be handled using inheritance flags on ACLs.
azure-netapp-files Network Attached Storage Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-concept.md
Network Attached Storage (NAS) is a way for a centralized storage system to present data to multiple networked clients across a WAN or LAN. Datasets in a NAS environment can be structured (data in a well-defined format, such as databases) or unstructured (data not stored in a structured database format, such as images, media files, logs, home directories, etc.). Regardless of the structure, the data is served through a standard conversation between a NAS client and the Azure NetApp Files NAS services. The conversation happens following these basic steps:
azure-netapp-files Network Attached Storage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-permissions.md
The initial entry point to be secured in a NAS environment is access to the shar
Since the most restrictive permissions override other permissions, and a share is the main entry point to the volume (with the fewest access controls), share permissions should abide by a funnel logic, where the share allows more access than the underlying files and folders. The funnel logic enacts more granular, restrictive controls. ## NFS export policies
Volumes in Azure NetApp Files are shared out to NFS clients by exporting a path
An export policy is a container for a set of access rules that are listed in order of desired access. These rules control access to NFS shares by using client IP addresses or subnets. If a client isn't listed in an export policy rule (either allowing or explicitly denying access), then that client is unable to mount the NFS export. Since the rules are read in sequential order, if a more restrictive policy rule is applied to a client (for example, by way of a subnet), then it's read and applied first. Subsequent policy rules that allow more access are ignored. This diagram shows a client that has an IP of 10.10.10.10 getting read-only access to a volume because the subnet 0.0.0.0/0 (every client in every subnet) is set to read-only and is listed first in the policy.
### Export policy rule options available in Azure NetApp Files
The order of export policy rules determines how they are applied. The first rule
Consider the following example:
- The first rule in the index includes *all clients* in *all subnets* by way of the default policy rule using 0.0.0.0/0 as the **Allowed clients** entry. That rule allows "Read & Write" access to all clients for that Azure NetApp Files NFSv3 volume.
- The second rule in the index explicitly lists NFS client 10.10.10.10 and is configured to limit access to "Read only," with no root access (root is squashed).
To fix this and set access to the desired level, the rules can be re-ordered to
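Export policy rules can also be created or adjusted from the Azure CLI. The following sketch adds a read-only rule for a single client, assuming the `export-policy rule add` subcommand and the parameters shown are available in your CLI version; resource names are placeholders:

```bash
# Add a read-only NFSv3 rule for one client ahead of a broader read-write rule.
az netappfiles volume export-policy rule add \
  --resource-group <resource-group> \
  --account-name <netapp-account> \
  --pool-name <capacity-pool> \
  --name <volume> \
  --rule-index 1 \
  --allowed-clients 10.10.10.10 \
  --unix-read-only true \
  --unix-read-write false \
  --nfsv3 true \
  --nfsv41 false \
  --cifs false
```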
SMB shares enable end users to access SMB or dual-protocol volumes in Azure NetApp Files. Access controls for SMB shares are limited in the Azure NetApp Files control plane to only SMB security options such as access-based enumeration and non-browsable share functionality. These security options are configured during volume creation with the **Edit volume** functionality. Share-level permission ACLs are managed through a Windows MMC console rather than through Azure NetApp Files.
Azure NetApp Files offers multiple share properties to enhance security for admi
[Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) is an Azure NetApp Files SMB volume feature that limits enumeration of files and folders (that is, listing the contents) in SMB only to users with allowed access on the share. For instance, if a user doesn't have access to read a file or folder in a share with access-based enumeration enabled, then the file or folder doesn't show up in directory listings. In the following example, a user (`smbuser`) doesn't have access to read a folder named "ABE" in an Azure NetApp Files SMB volume. Only `contosoadmin` has access. In the below example, access-based enumeration is disabled, so the user has access to the `ABE` directory of `SMBVolume`. In the next example, access-based enumeration is enabled, so the `ABE` directory of `SMBVolume` doesn't display for the user. The permissions also extend to individual files. In the below example, access-based enumeration is disabled and `ABE-file` displays to the user. With access-based enumeration enabled, `ABE-file` doesn't display to the user.
#### Non-browsable shares
The non-browsable shares feature in Azure NetApp Files limits clients from brows
In the following image, the non-browsable share property isn't enabled for `SMBVolume`, so the volume displays in the listing of the file server (using `\\servername`). With non-browsable shares enabled on `SMBVolume` in Azure NetApp Files, the same view of the file server excludes `SMBVolume`. In the next image, the share `SMBVolume` has non-browsable shares enabled in Azure NetApp Files. When that is enabled, this is the view of the top level of the file server. Even though the volume in the listing cannot be seen, it remains accessible if the user knows the file path.
#### SMB3 encryption
SMB3 encryption is an Azure NetApp Files SMB volume feature that enforces encryption over the wire for SMB clients for greater security in NAS environments. The following image shows a screen capture of network traffic when SMB encryption is disabled. Sensitive information, such as file names and file handles, is visible. When SMB Encryption is enabled, the packets are marked as encrypted, and no sensitive information can be seen. Instead, it's shown as "Encrypted SMB3 data."
#### SMB share ACLs
azure-netapp-files Network Attached Storage Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-protocols.md
In the following illustration, `user1` authenticates to Azure NetApp Files to ac
In this instance, `user1` gets full control on their own folder (`user1-dir`) and no access to the `HR` folder. This setting is based on the security ACLs specified in the file system, and `user1` will get the expected access regardless of which protocol they're accessing the volumes from. ### Considerations for Azure NetApp Files dual-protocol volumes
azure-netapp-files Network File System Group Memberships https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-file-system-group-memberships.md
The following example shows the output from Active Directory with a user's DN
The following example shows the Windows group member field: The following example shows `LDAPsearch` of all groups where `User1` is a member: You can also query group memberships for a user in Azure NetApp Files by selecting the **LDAP Group ID List** link under **Support + troubleshooting** on the volume menu. ## Group limits in NFS
The options to extend the group limitation work the same way that the `manage-gi
The following example shows an RPC packet with 16 GIDs. Any GID past the limit of 16 is dropped by the protocol. With extended groups in Azure NetApp Files, when a new NFS request comes in, information about the user's group membership is requested.
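A quick way to check whether a user would be affected by the 16-GID AUTH_SYS limit is to count their group memberships on a client joined to the same directory. A minimal sketch follows; the user name is an assumption:

```bash
# Count the number of groups user1 belongs to; more than 16 exceeds the AUTH_SYS RPC limit
id -G user1 | wc -w

# List the group names for easier review
id -Gn user1
```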
azure-netapp-files Nfs Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/nfs-access-control-lists.md
The NFSv4.x protocol can provide access control in the form of [access control lists (ACLs)](/windows/win32/secauthz/access-control-lists), which are conceptually similar to the ACLs used in [SMB via Windows NTFS permissions](network-attached-file-permissions-smb.md). An NFSv4.x ACL consists of individual [Access Control Entries (ACEs)](/windows/win32/secauthz/access-control-entries), each of which provides an access control directive to the server. Each NFSv4.x ACL is created with the format `type:flags:principal:permissions`.
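On a Linux NFSv4.1 client, ACEs in this format can typically be viewed and set with the `nfs4_getfacl` and `nfs4_setfacl` utilities from the nfs4-acl-tools package. A hedged sketch follows; the mount path, principal, and permission letters are illustrative assumptions:

```bash
# Show the current NFSv4.x ACL on a file; each line is an ACE in type:flags:principal:permissions form
nfs4_getfacl /mnt/anfvol/file1

# Add an Allow ACE granting user1 read, write, append, execute, and attribute permissions
nfs4_setfacl -a "A::user1@contoso.com:rwaxtTnNcy" /mnt/anfvol/file1
```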
When a local user or group ACL is set, any user or group that corresponds to the
The credentials passed from the client to the server can be seen in a packet capture, as shown below. **Caveats:**
chown: changing ownership of 'testdir': Operation not permitted
The export policy rule on the volume can be modified to change this behavior. In the **Export policy** menu for the volume, modify **Chown mode** to "unrestricted." Once modified, ownership can be changed by users other than root if they have appropriate access rights. This requires the "Take Ownership" NFSv4.x ACL permission (designated by the letter "o"). Ownership can also be changed if the user changing ownership currently owns the file or folder.
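As an illustration of the behavior before and after the export policy change, a hedged sketch follows; the mount path and user names are assumptions:

```bash
# With Chown mode set to "restricted" (the default), a non-root user cannot give a file or folder away
chown user2 /mnt/anfvol/testdir
# chown: changing ownership of 'testdir': Operation not permitted

# After the export policy rule is changed to "unrestricted", the same command succeeds for a user
# who currently owns testdir or who holds the take-ownership ("o") NFSv4.x ACL permission
chown user2 /mnt/anfvol/testdir
```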
Root access with NFSv4.x ACLs can't be limited unless [root is squashed](network
To configure root squashing, navigate to the **Export policy** menu on the volume, then change "Root access" to "off" for the policy rule. Disabling root access squashes root to the anonymous user `nfsnobody:65534`. Root access is then unable to change ownership.
azure-netapp-files Performance Azure Vmware Solution Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-azure-vmware-solution-datastore.md
If you stripe volumes across multiple disks, ensure the backup software or disas
To understand how well a single AVS VM scales as more virtual disks are added, tests were performed with one, two, four, and eight datastores (each containing a single VMDK). The following diagram shows that a single disk averaged around 73,040 IOPS (scaling from 100% write / 0% read, to 0% write / 100% read). When this test was increased to two drives, performance increased by 75.8% to 128,420 IOPS. Increasing to four drives began to show diminishing returns of what a single VM, sized as tested, could push. The peak IOPS observed were 147,000 IOPS with 100% random reads. ### Single-host scaling - Single datastore
It scales poorly to increase the number of VMs driving IO to a single datastore
Increasing the block size (to 64 KB) for large block workloads had comparable results, reaching a peak of 2148 MiB/s (single VM, single VMDK) and 2138 MiB/s (4 VMs, 16 VMDKs). ### Single-host scaling - Multiple datastores From the context of a single AVS host, while a single datastore allowed the VMs to drive about 76,000 IOPS, spreading the workloads across two datastores increased total throughput by 76% on average. Moving beyond two datastores to four resulted in a 163% increase (over one datastore, a 49% increase from two to four) as shown in the following diagram. Even though there were still performance gains, increasing beyond eight datastores showed diminishing returns. ### Multi-host scaling - Single datastore A single datastore from a single host produced over 2000 MiB/s of sequential 64-KB throughput. Distributing the same workload across all four hosts produced a peak gain of 135%, driving over 5000 MiB/s. This outcome likely represents the upper ceiling of a single Azure NetApp Files volume's throughput performance. Decreasing the block size from 64 KB to 8 KB and rerunning the same iterations resulted in four VMs producing 195,000 IOPS, as shown in the following diagram. Performance scales as both the number of hosts and the number of datastores increase, because the number of network flows is a factor of hosts multiplied by datastores. ### Multi-host scaling - Multiple datastores A single datastore with four VMs spread across four hosts produced over 5000 MiB/s of sequential 64-KB IO. For more demanding workloads, each VM is moved to a dedicated datastore, producing over 10,500 MiB/s in total, as shown in the following diagram. For small-block, random workloads, a single datastore produced 195,000 random 8-KB IOPS. Scaling to four datastores produced over 530,000 random 8K IOPS. ## Implications and recommendations
azure-netapp-files Performance Benchmarks Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-azure-vmware-solution.md
Traffic latency from AVS to Azure NetApp Files datastores varies from sub-millis
In a single AVS host scenario, the AVS to Azure NetApp Files datastore I/O occurs over a single network flow. The following graphs compare the throughput and IOPs of a single virtual machine with the aggregated throughput and IOPs of four virtual machines. In the subsequent scenarios, the number of network flows increases as more hosts and datastores are added. ## One-to-multiple Azure NetApp Files datastores with a single AVS host The following graphs compare the throughput of a single virtual machine on a single Azure NetApp Files datastore with the aggregated throughput of four Azure NetApp Files datastores. In both scenarios, each virtual machine has a VMDK on each Azure NetApp Files datastore. The following graphs compare the IOPs of a single virtual machine on a single Azure NetApp Files datastore with the aggregated IOPs of eight Azure NetApp Files datastores. In both scenarios, each virtual machine has a VMDK on each Azure NetApp Files datastore. ## Scale-out Azure NetApp Files datastores with multiple AVS hosts
The following graph shows the aggregated throughput and IOPs of 16 virtual machi
Nearly identical results were achieved with a single virtual machine on each host with four VMDKs per virtual machine and each of those VMDKs on a separate datastore. ## Next steps
azure-netapp-files Performance Benchmarks Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-linux.md
The graph below represents a 64-kibibyte (KiB) sequential workload and a 1 TiB w
The graph illustrates the read percentage decreasing in 10% steps, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
-![Linux workload throughput](../media/azure-netapp-files/performance-benchmarks-linux-workload-throughput.png)
+![Linux workload throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-workload-throughput.png)
### Linux workload IOPS
The following graph represents a 4-kibibyte (KiB) random workload and a 1 TiB wo
This graph illustrates the read percentage decreasing in 10% steps, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
-![Linux workload IOPS](../media/azure-netapp-files/performance-benchmarks-linux-workload-iops.png)
+![Linux workload IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-workload-iops.png)
## Linux scale-up
The graphs compare the advantages of `nconnect` to a non-`connected` mounted vol
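For context, the difference between the two test configurations comes down to the NFS mount options. A minimal sketch follows, assuming an NFSv3 volume, a Linux kernel that supports `nconnect` (5.3 or later), and placeholder IP address, export path, and mount point; the two commands are alternatives, not meant to be run together:

```bash
# Baseline mount: a single TCP connection between the client and the volume
sudo mount -t nfs -o rw,hard,vers=3,rsize=262144,wsize=262144,tcp 10.0.0.4:/benchvol /mnt/benchvol

# nconnect mount: up to 8 TCP connections for the same volume, increasing parallelism
sudo mount -t nfs -o rw,hard,vers=3,rsize=262144,wsize=262144,tcp,nconnect=8 10.0.0.4:/benchvol /mnt/benchvol
```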
The following graphs show 64-KiB sequential reads of ~3,500 MiB/s reads with `nconnect`, roughly 2.3X non-`nconnect`.
-![Linux read throughput](../media/azure-netapp-files/performance-benchmarks-linux-read-throughput.png)
+![Linux read throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-read-throughput.png)
### Linux write throughput The following graphs show sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. 1,500 MiB/s is roughly both the sequential write volume upper limit and the D32s_v4 instance egress limit.
-![Linux write throughput](../media/azure-netapp-files/performance-benchmarks-linux-write-throughput.png)
+![Linux write throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-write-throughput.png)
### Linux read IOPS The following graphs show 4-KiB random reads of ~200,000 read IOPS with `nconnect`, roughly 3X non-`nconnect`.
-![Linux read IOPS](../media/azure-netapp-files/performance-benchmarks-linux-read-iops.png)
+![Linux read IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-read-iops.png)
### Linux write IOPS The following graphs show 4-KiB random writes of ~135,000 write IOPS with `nconnect`, roughly 3X non-`nconnect`.
-![Linux write IOPS](../media/azure-netapp-files/performance-benchmarks-linux-write-iops.png)
+![Linux write IOPS](./media/performance-benchmarks-linux/performance-benchmarks-linux-write-iops.png)
## Next steps
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
A concurrency level as low as 155 is sufficient to achieve 155,000 Oracle DB NFS
* Considering a latency of 0.5 ms, a concurrency of 55 is needed to achieve 110,000 IOPS. * Considering a latency of 1 ms, a concurrency of 155 is needed to achieve 155,000 IOPS.
-![Oracle DNFS latency curve](../media/azure-netapp-files/performance-oracle-dnfs-latency-curve.png)
+![Oracle DNFS latency curve](./media/shared/performance-oracle-dnfs-latency-curve.png)
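These concurrency figures are consistent with Little's Law, where required concurrency is approximately IOPS multiplied by latency in seconds: 110,000 x 0.0005 s ≈ 55 outstanding I/Os, and 155,000 x 0.001 s = 155 outstanding I/Os.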
See [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md) for details.
Use the following `tcpdump` command to capture the mount command:
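This excerpt doesn't include the command itself. A representative capture command, assuming interface `eth0`, NFS traffic on port 2049, and an arbitrary output file, might look like the following:

```bash
# Capture NFS traffic (including the mount exchange) to a file for later analysis in Wireshark
sudo tcpdump -i eth0 -s 0 -w /tmp/nfs-mount.pcap port 2049
```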
Using Wireshark, the packets of interest are as follows:
-![Screenshot that shows packets of interest.](../media/azure-netapp-files/performance-packets-interest.png)
+![Screenshot that shows packets of interest.](./media/performance-linux-concurrency-session-slots/performance-packets-interest.png)
Within these two packets, look at the `max_reqs` field within the middle section of the trace file.
Packet 12 (client maximum requests) shows that the client had a `max_session_slots` value of 64. In the next section, notice that the server supports a concurrency of 180 for the session. The session ends up negotiating the lower of the two provided values.
-![Screenshot that shows max session slots for Packet 12.](../media/azure-netapp-files/performance-max-session-packet-12.png)
+![Screenshot that shows max session slots for Packet 12.](./media/performance-linux-concurrency-session-slots/performance-max-session-packet-12.png)
The following example shows Packet 14 (server maximum requests):
-![Screenshot that shows max session slots for Packet 14.](../media/azure-netapp-files/performance-max-session-packet-14.png)
+![Screenshot that shows max session slots for Packet 14.](./media/performance-linux-concurrency-session-slots/performance-max-session-packet-14.png)
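On a Linux client, the negotiated value can be influenced through the `max_session_slots` parameter of the `nfs` kernel module. A hedged sketch follows; the value 180 is only an illustration matching the server-side concurrency mentioned above:

```bash
# Show the client's current NFSv4.1 session slot limit (64 by default on many distributions)
cat /sys/module/nfs/parameters/max_session_slots

# Persistently raise the limit; takes effect after the nfs module is reloaded or the host reboots
echo "options nfs max_session_slots=180" | sudo tee /etc/modprobe.d/nfsclient.conf
```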
## Next steps
azure-netapp-files Performance Oracle Multiple Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-multiple-volumes.md
The following charts capture the performance profile of a single E104ids_v5 Azur
The following diagram depicts the architecture that testing was completed against; note the Oracle database spread across multiple Azure NetApp Files volumes and endpoints. #### Single-host storage IO The following diagram shows a 100% randomly selected workload with a database buffer hit ratio of about 8%. SLOB2 was able to drive approximately 850,000 I/O requests per second while maintaining a submillisecond DB file sequential read event latency. With a database block size of 8K, that amounts to approximately 6,800 MiB/s of storage throughput. #### Single-host throughput The following diagram demonstrates that, for bandwidth-intensive sequential IO workloads such as full table scans or RMAN activities, Azure NetApp Files can deliver the full bandwidth capabilities of the E104ids_v5 VM itself. >[!NOTE] >As the compute instance is at the theoretical maximum of its bandwidth, adding additional application concurrency results only in increased client-side latency. This results in SLOB2 workloads exceeding the targeted completion timeframe; therefore, the thread count was capped at six.
The following charts capture the performance profile of three E104ids_v5 Azure V
The following diagram depicts the architecture that testing was completed against; note the three Oracle databases spread across multiple Azure NetApp Files volumes and endpoints. Endpoints can be dedicated to a single host as shown with Oracle VM 1 or shared among hosts as shown with Oracle VM 2 and Oracle VM 3. #### Multi-host storage IO The following diagram shows a 100% randomly selected workload with a database buffer hit ratio of about 8%. SLOB2 was able to drive approximately 850,000 I/O requests per second across all three hosts individually. SLOB2 was able to accomplish this while executing in parallel to a collective total of about 2,500,000 I/O requests per second with each host still maintaining a submillisecond db file sequential read event latency. With a database block size of 8K, this amounts to approximately 20,000 MiB/s between the three hosts. #### Multi-host throughput The following diagram demonstrates that for sequential workloads, Azure NetApp Files can still deliver the full bandwidth capabilities of the E104ids_v5 VM itself even as it scales outward. SLOB2 was able to drive I/O totaling over 30,000 MiB/s across the three hosts while running in parallel. #### Real-world performance
In the scenario where multiple NICs are configured, you need to determine which
Use the following process to identify the mapping between a configured network interface and its associated virtual interface. This process validates that accelerated networking is enabled for a specific NIC on your Linux machine and displays the physical ingress speed the NIC can potentially achieve. 1. Execute the `ip a` command:
- :::image type="content" alt-text="Screenshot of output of ip a command." source="../media/azure-netapp-files/ip-a-command-output.png":::
+ :::image type="content" alt-text="Screenshot of output of ip a command." source="./media/performance-oracle-multiple-volumes/ip-a-command-output.png":::
1. List the `/sys/class/net/` directory of the NIC ID you are verifying (`eth0` in the example) and `grep` for the word lower:

    ```bash
    ls /sys/class/net/eth0 | grep lower
    lower_eth1
    ```

1. Execute the `ethtool` command against the ethernet device identified as the lower device in the previous step.
- :::image type="content" alt-text="Screenshot of output of settings for eth1." source="../media/azure-netapp-files/ethtool-output.png":::
+ :::image type="content" alt-text="Screenshot of output of settings for eth1." source="./media/performance-oracle-multiple-volumes/ethtool-output.png":::
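A hedged sketch of that last step, assuming the lower device found above is `eth1`:

```bash
# Query the lower (virtual function) device; the Speed field shows the potential ingress bandwidth
ethtool eth1
```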
#### Azure VM: Network vs. disk bandwidth limits
A level of expertise is required when reading Azure VM performance limits docume
A sample chart is shown for reference: ### Azure NetApp Files
Automatic Storage Management (ASM) is supported for NFS volumes. Though typicall
An ASM over dNFS configuration was used to produce all test results discussed in this article. The following diagram illustrates the ASM file layout within the Azure NetApp Files volumes and the file allocation to the ASM disk groups. There are some limitations with the use of ASM over Azure NetApp Files NFS mounted volumes when it comes to storage snapshots that can be overcome with certain architectural considerations. Contact your Azure NetApp Files specialist or cloud solutions architect for an in-depth review of these considerations.
In Oracle version 12.2 and above, an Exadata specific addition will be included
* The top cells by percentage CPU are displayed in descending order of percentage CPU * Average: 39.34% CPU, 28.57% user, 10.77% sys
- :::image type="content" alt-text="Screenshot of a table showing top cells by percentage CPU." source="../media/azure-netapp-files/exadata-top-cells.png":::
+ :::image type="content" alt-text="Screenshot of a table showing top cells by percentage CPU." source="./media/performance-oracle-multiple-volumes/exadata-top-cells.png":::
* Single cell physical block reads * Flash cache usage
azure-netapp-files Performance Oracle Single Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-single-volumes.md
This article addresses the following topics about Oracle in the cloud. These top
The following diagram illustrates the environment used for testing. For consistency and simplicity, Ansible playbooks were used to deploy all elements of the test bed.
-![Oracle testing environment](../media/azure-netapp-files/performance-oracle-test-environment.png)
+![Oracle testing environment](./media/performance-oracle-single-volumes/performance-oracle-test-environment.png)
### Virtual machine configuration
A PDB was created for the SLOB database.
The following diagram shows the tablespace named PERFIO, 600 GB in size (20 data files, 30 GB each), created to host four SLOB user schemas. Each user schema was 125 GB in size.
-![Oracle database](../media/azure-netapp-files/performance-oracle-tablespace.png)
+![Oracle database](./media/performance-oracle-single-volumes/performance-oracle-tablespace.png)
## Performance metrics
This scenario was running on an Azure VM Standard_D32s_v3 (Intel E5-2673 v4 @ 2.
As shown in the following diagram, the Oracle DNFS client delivered up to 2.8x more throughput than the regular Linux kNFS Client:
-![Linux kNFS Client compared with Oracle Direct NFS](../media/azure-netapp-files/performance-oracle-kfns-compared-dnfs.png)
+![Linux kNFS Client compared with Oracle Direct NFS](./media/performance-oracle-single-volumes/performance-oracle-kfns-compared-dnfs.png)
The following diagram shows the latency curve for the read operations. In this context, the bottleneck for the kNFS client is the single NFS TCP socket connection established between the client and the NFS server (the Azure NetApp Files volume).
-![Linux kNFS Client compared with Oracle Direct NFS latency curve](../media/azure-netapp-files/performance-oracle-latency-curve.png)
+![Linux kNFS Client compared with Oracle Direct NFS latency curve](./media/performance-oracle-single-volumes/performance-oracle-latency-curve.png)
The DNFS client was able to push more IO requests/sec due to its ability to create hundreds of TCP socket connections, thereby taking advantage of parallelism. As described in [Azure NetApp Files configuration](#anf_config), each additional TiB of capacity allocated allows for an additional 128 MiB/s of bandwidth. DNFS topped out at 1 GiB/s of throughput, which is the limit imposed by the 8-TiB capacity selection. Given more capacity, more throughput would have been driven. Throughput is only one of the considerations. Another consideration is latency, which has the primary impact on user experience. As the following diagram shows, latency increases can be expected far more rapidly with kNFS than with DNFS.
-![Linux kNFS Client compared with Oracle Direct NFS read latency](../media/azure-netapp-files/performance-oracle-read-latency.png)
+![Linux kNFS Client compared with Oracle Direct NFS read latency](./media/performance-oracle-single-volumes/performance-oracle-read-latency.png)
Histograms provide excellent insight into database latencies. The following diagram provides a complete view from the perspective of the recorded "db file sequential read", while using DNFS at the highest concurrency data point (32 threads/schema). As shown in the following diagram, 47% of all read operations were honored between 512 microseconds and 1000 microseconds, while 90% of all read operations were served at a latency below 2 ms.
-![Linux kNFS Client compared with Oracle Direct NFS histograms](../media/azure-netapp-files/performance-oracle-histogram-read-latency.png)
+![Linux kNFS Client compared with Oracle Direct NFS histograms](./media/performance-oracle-single-volumes/performance-oracle-histogram-read-latency.png)
In conclusion, it's clear that DNFS is a must-have when it comes to improving the performance of an Oracle database instance on NFS.
DNFS is capable of consuming far more bandwidth than what is provided by an 8-TB
The following diagram shows a configuration for an 80% select and 20% update workload, with a database buffer hit ratio of 8%. SLOB was able to drive a single volume to 200,000 NFS I/O requests per second. Considering that each operation is 8 KiB in size, the system under test was able to deliver ~200,000 IO requests/sec or 1600 MiB/s.
-![Oracle DNFS throughput](../media/azure-netapp-files/performance-oracle-dnfs-throughput.png)
+![Oracle DNFS throughput](./media/performance-oracle-single-volumes/performance-oracle-dnfs-throughput.png)
The following read latency curve diagram shows that, as the read throughput increases, the latency increases smoothly below the 1-ms line, and it hits the knee of the curve at ~165,000 average read IO requests/sec at the average read latency of ~1.3 ms. This value is an incredible latency value for an I/O rate unachievable with almost any other technology in the Azure Cloud.
-![Oracle DNFS latency curve](../media/azure-netapp-files/performance-oracle-dnfs-latency-curve.png)
+![Oracle DNFS latency curve](./media/shared/performance-oracle-dnfs-latency-curve.png)
#### Sequential I/O As shown in the following diagram, not all I/O is random in nature; an RMAN backup or a full table scan, for example, is a workload requiring as much bandwidth as it can get. Using the same configuration as described previously but with the volume resized to 32 TiB, the following diagram shows that a single Oracle DB instance can drive upwards of 3,900 MB/s of throughput, very close to the performance quota of the 32-TiB Azure NetApp Files volume (128 MB/s * 32 = 4,096 MB/s).
-![Oracle DNFS I/O](../media/azure-netapp-files/performance-oracle-dnfs-io.png)
+![Oracle DNFS I/O](./media/performance-oracle-single-volumes/performance-oracle-dnfs-io.png)
In summary, Azure NetApp Files helps you take your Oracle databases to the cloud. It delivers on performance when the database demands it. You can dynamically and non-disruptively resize your volume quota at any time.
azure-netapp-files Regional Capacity Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/regional-capacity-quota.md
You can click **Quota** under Settings of Azure NetApp Files to display the curr
For example:
-![Screenshot that shows how to display quota information.](../media/azure-netapp-files/quota-display.png)
+![Screenshot that shows how to display quota information.](./media/regional-capacity-quota/quota-display.png)
## Request regional capacity quota increase
azure-netapp-files Request Region Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/request-region-access.md
In some special situations, you might need to explicitly request access to a reg
2. For **Subscription**, select your subscription. 3. For **Quota Type**, select **Storage: Azure NetApp Files limits**.
- ![Screenshot that shows the Problem Description tab.](../media/azure-netapp-files/support-problem-descriptions.png)
+ ![Screenshot that shows the Problem Description tab.](./media/shared/support-problem-descriptions.png)
3. Under the **Additional details** tab, click **Enter details** in the Request Details field.
- ![Screenshot that shows the Details tab and the Enter Details field.](../media/azure-netapp-files/quota-additional-details.png)
+ ![Screenshot that shows the Details tab and the Enter Details field.](./media/shared/quota-additional-details.png)
4. To request region access, provide the following information in the Quota Details window that appears: 1. In **Quota Type**, select **Region Access**. 2. In **Region Requested**, select your region.
- ![Screenshot that shows the Quota Details window for requesting region access.](../media/azure-netapp-files/quota-details-region-access.png)
+ ![Screenshot that shows the Quota Details window for requesting region access.](./media/request-region-access/quota-details-region-access.png)
5. Click **Save and continue**. Click **Review + create** to create the request.
azure-netapp-files Snapshots Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-delete.md
You can delete snapshots that you no longer need to keep.
1. Go to the **Snapshots** menu of a volume. Right-click the snapshot you want to delete. Select **Delete**.
- ![Screenshot that describes the right-click menu of a snapshot](../media/azure-netapp-files/snapshot-right-click-menu.png)
+ ![Screenshot that describes the right-click menu of a snapshot](./media/shared/snapshot-right-click-menu.png)
2. In the Delete Snapshot window, confirm that you want to delete the snapshot by clicking **Yes**.
- ![Screenshot that confirms snapshot deletion](../media/azure-netapp-files/snapshot-confirm-delete.png)
+ ![Screenshot that confirms snapshot deletion](./media/snapshots-delete/snapshot-confirm-delete.png)
## Next steps
azure-netapp-files Snapshots Edit Hide Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-edit-hide-path.md
The Hide Snapshot Path option controls whether the snapshot path of a volume is
## Steps 1. To view the Hide Snapshot Path option setting of a volume, select the volume. The **Hide snapshot path** field shows whether the option is enabled.
- ![Screenshot that describes the Hide Snapshot Path field.](../media/azure-netapp-files/hide-snapshot-path-field.png)
+ ![Screenshot that describes the Hide Snapshot Path field.](./media/snapshots-edit-hide-path/hide-snapshot-path-field.png)
2. To edit the Hide Snapshot Path option, click **Edit** on the volume page and modify the **Hide snapshot path** option as needed.
- ![Screenshot that describes the Edit volume snapshot option.](../media/azure-netapp-files/volume-edit-snapshot-options.png)
+ ![Screenshot that describes the Edit volume snapshot option.](./media/snapshots-edit-hide-path/volume-edit-snapshot-options.png)
## Next steps
azure-netapp-files Snapshots Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-introduction.md
The following diagrams illustrate the concepts:
1. Files consist of metadata and data blocks written to a volume. In this illustration, there are three files, each consisting of three blocks: file 1, file 2, and file 3.
- [![Volume contains three files, file1, file2 and file3, each consisting of three data blocks.](../media/azure-netapp-files/single-file-snapshot-restore-one.png)](../media/azure-netapp-files/single-file-snapshot-restore-one.png#lightbox)
+ [![Volume contains three files, file1, file2 and file3, each consisting of three data blocks.](./media/snapshots-introduction/single-file-snapshot-restore-one.png)](./media/snapshots-introduction/single-file-snapshot-restore-one.png#lightbox)
2. A snapshot `Snapshot1` is taken, which copies the metadata and only the pointers to the blocks that represent the files:
- [![Snapshot1 is created, which is a copy of the volume metadata and only the pointers to the data blocks (in file1, file2 and file3).](../media/azure-netapp-files/single-file-snapshot-restore-two.png)](../media/azure-netapp-files/single-file-snapshot-restore-two.png#lightbox)
+ [![Snapshot1 is created, which is a copy of the volume metadata and only the pointers to the data blocks (in file1, file2 and file3).](./media/snapshots-introduction/single-file-snapshot-restore-two.png)](./media/snapshots-introduction/single-file-snapshot-restore-two.png#lightbox)
3. Files on the volume continue to change, and new files are added. Modified data blocks are written as new data blocks on the volume. The blocks that were previously captured in `Snapshot1` remain unchanged:
- [![Changes to file2 and file3 are written to new data blocks, and a new file file4 is created. Blocks that were previously captured in Snapshot1 remain unchanged.](../media/azure-netapp-files/single-file-snapshot-restore-three.png)](../media/azure-netapp-files/single-file-snapshot-restore-three.png#lightbox)
+ [![Changes to file2 and file3 are written to new data blocks, and a new file file4 is created. Blocks that were previously captured in Snapshot1 remain unchanged.](./media/snapshots-introduction/single-file-snapshot-restore-three.png)](./media/snapshots-introduction/single-file-snapshot-restore-three.png#lightbox)
4. A new snapshot `Snapshot2` is taken to capture the changes and additions:
- [ ![The latest changes are captured in Snapshot2 for a second point in time view of the volume (and the files within).](../media/azure-netapp-files/single-file-snapshot-restore-four.png) ](../media/azure-netapp-files/single-file-snapshot-restore-four.png#lightbox)
+ [ ![The latest changes are captured in Snapshot2 for a second point in time view of the volume (and the files within).](./media/snapshots-introduction/single-file-snapshot-restore-four.png) ](./media/snapshots-introduction/single-file-snapshot-restore-four.png#lightbox)
When a snapshot is taken, the pointers to the data blocks are copied, and modifications are written to new data locations. The snapshot pointers continue to point to the original data blocks that the file occupied when the snapshot was taken, giving you a live and a historical view of the data. If you were to create a new snapshot, the current pointers (that is, the ones created after the most recent additions and modifications) are copied to a new snapshot `Snapshot2`. This creates access to three generations of data (the live data, `Snapshot2`, and `Snapshot1`, in order of age) without taking up the volume space that three full copies would require.
Meanwhile, the data blocks that are pointed to from snapshots remain stable and
The following diagram shows a volume's snapshots and used space over time:
-[ ![Diagram that shows a volume's snapshots and used space over time](../media/azure-netapp-files/snapshots-used-space-over-time.png)](../media/azure-netapp-files/snapshots-used-space-over-time.png#lightbox)
+[ ![Diagram that shows a volume's snapshots and used space over time](./media/snapshots-introduction/snapshots-used-space-over-time.png)](./media/snapshots-introduction/snapshots-used-space-over-time.png#lightbox)
Because a volume snapshot records only the block changes since the latest snapshot, it provides the following key benefits:
Azure NetApp Files supports [cross-region replication](cross-region-replication-
The following diagram shows snapshot traffic in cross-region replication scenarios:
-[ ![Diagram that shows snapshot traffic in cross-region replication scenarios](../media/azure-netapp-files/snapshot-traffic-cross-region-replication.png)](../media/azure-netapp-files/snapshot-traffic-cross-region-replication.png#lightbox)
+[ ![Diagram that shows snapshot traffic in cross-region replication scenarios](./media/snapshots-introduction/snapshot-traffic-cross-region-replication.png)](./media/snapshots-introduction/snapshot-traffic-cross-region-replication.png#lightbox)
## How snapshots can be vaulted for long-term retention and cost savings
To enable snapshot vaulting on your Azure NetApp Files volume, [configure a back
The following diagram shows how snapshot data is transferred from the Azure NetApp Files volume to Azure NetApp Files backup storage, hosted on Azure storage.
-[ ![Diagram that shows snapshot data transferred from the Azure NetApp Files volume to Azure NetApp Files backup storage](../media/azure-netapp-files/snapshot-data-transfer-backup-storage.png) ](../media/azure-netapp-files/snapshot-data-transfer-backup-storage.png#lightbox)
+[ ![Diagram that shows snapshot data transferred from the Azure NetApp Files volume to Azure NetApp Files backup storage](./media/snapshots-introduction/snapshot-data-transfer-backup-storage.png) ](./media/snapshots-introduction/snapshot-data-transfer-backup-storage.png#lightbox)
The Azure NetApp Files backup functionality is designed to keep a longer history of backups as indicated in this simplified example. Notice how the backup repository on the right contains more and older snapshots than the protected volume and snapshots on the left.
You can restore Azure NetApp Files snapshots to separate, independent volumes (c
The following diagram shows a new volume created by restoring (cloning) a snapshot:
-[![Diagram that shows a new volume created by restoring a snapshot](../media/azure-netapp-files/snapshot-restore-clone-new-volume.png)
-](../media/azure-netapp-files/snapshot-restore-clone-new-volume.png#lightbox)
+[![Diagram that shows a new volume created by restoring a snapshot](./media/snapshots-introduction/snapshot-restore-clone-new-volume.png)
+](./media/snapshots-introduction/snapshot-restore-clone-new-volume.png#lightbox)
The same operation can be performed on replicated snapshots to a disaster-recovery (DR) volume. Any snapshot can be restored to a new volume, even when cross-region replication remains active or in progress. This capability enables non-disruptive creation of test and development environments in a DR region, putting the data to use, whereas the replicated volumes would otherwise be used only for DR purposes. This use case enables test and development to be isolated from production, eliminating potential impact on production environments. The following diagram shows volume restoration (cloning) by using DR target volume snapshot while cross-region replication is taking place:
-[![Diagram that shows volume restoration using DR target volume snapshot](../media/azure-netapp-files/snapshot-restore-clone-target-volume.png)](../media/azure-netapp-files/snapshot-restore-clone-target-volume.png#lightbox)
+[![Diagram that shows volume restoration using DR target volume snapshot](./media/snapshots-introduction/snapshot-restore-clone-target-volume.png)](./media/snapshots-introduction/snapshot-restore-clone-target-volume.png#lightbox)
When you restore a snapshot to a new volume, the Volume overview page displays the name of the snapshot used to create the new volume in the **Originated from** field. See [Restore a snapshot to a new volume](snapshots-restore-new-volume.md) about volume restore operations.
Reverting a volume snapshot is near-instantaneous and takes only a few seconds t
The following diagram shows a volume reverting to an earlier snapshot:
-[![Diagram that shows a volume reverting to an earlier snapshot](../media/azure-netapp-files/snapshot-volume-revert.png)
-](../media/azure-netapp-files/snapshot-volume-revert.png#lightbox)
+[![Diagram that shows a volume reverting to an earlier snapshot](./media/snapshots-introduction/snapshot-volume-revert.png)
+](./media/snapshots-introduction/snapshot-volume-revert.png#lightbox)
> [!IMPORTANT]
If the [Snapshot Path visibility](snapshots-edit-hide-path.md) is not set to `hi
The following diagram shows file or directory access to a snapshot using a client:
-[![Diagram that shows file or directory access to a snapshot](../media/azure-netapp-files/snapshot-file-directory-access.png)](../media/azure-netapp-files/snapshot-file-directory-access.png#lightbox)
+[![Diagram that shows file or directory access to a snapshot](./media/snapshots-introduction/snapshot-file-directory-access.png)](./media/snapshots-introduction/snapshot-file-directory-access.png#lightbox)
In the diagram, Snapshot 1 consumes only the delta blocks between the active volume and the moment of snapshot creation. But when you access the snapshot via the volume snapshot path, the data will *appear* as if it's the full volume capacity at the time of the snapshot creation. By accessing the snapshot folders, you can restore data by copying files and directories out of a snapshot of choice.
Similarly, snapshots in target cross-region replication volumes can be accessed
The following diagram shows snapshot access in cross-region replication scenarios:
-[![Diagram that shows snapshot access in cross-region replication](../media/azure-netapp-files/snapshot-access-cross-region-replication.png)](../media/azure-netapp-files/snapshot-access-cross-region-replication.png#lightbox)
+[![Diagram that shows snapshot access in cross-region replication](./media/snapshots-introduction/snapshot-access-cross-region-replication.png)](./media/snapshots-introduction/snapshot-access-cross-region-replication.png#lightbox)
See [Restore a file from a snapshot using a client](snapshots-restore-file-client.md) about restoring individual files or directories from snapshots.
The following diagram describes how single-file snapshot restore works:
When a single file is restored in-place (`file2`) or to a new file in the volume (`file2'`), only the *pointers* to existing blocks previously captured in a snapshot are reverted. This operation eliminates the copying of any data blocks and is near-instantaneous, irrespective of the size of the file (the number of blocks in the file).
- [![Individual files can be restored from any snapshot by reverting block pointers to an existing file (file2) or to a new file (file2') by creating new file metadata and pointers to blocks in the snapshot.](../media/azure-netapp-files/single-file-snapshot-restore-five.png)](../media/azure-netapp-files/single-file-snapshot-restore-five.png#lightbox)
+ [![Individual files can be restored from any snapshot by reverting block pointers to an existing file (file2) or to a new file (file2') by creating new file metadata and pointers to blocks in the snapshot.](./media/snapshots-introduction/single-file-snapshot-restore-five.png)](./media/snapshots-introduction/single-file-snapshot-restore-five.png#lightbox)
### Restoring volume backups from vaulted snapshots
You can [search for backups](backup-search.md) at the volume level or the NetApp
The following diagram illustrates the operation of restoring a selected vaulted snapshot to a new volume:
-[![Diagram that shows restoring a selected vaulted snapshot to a new volume](../media/azure-netapp-files/snapshot-restore-vaulted-new-volume.png)](../media/azure-netapp-files/snapshot-restore-vaulted-new-volume.png#lightbox)
+[![Diagram that shows restoring a selected vaulted snapshot to a new volume](./media/snapshots-introduction/snapshot-restore-vaulted-new-volume.png)](./media/snapshots-introduction/snapshot-restore-vaulted-new-volume.png#lightbox)
### Restoring individual files or directories from vaulted snapshots
When a snapshot is deleted, all pointers from that snapshot to existing data blo
The following diagram shows the effect on storage consumption of Snapshot 3 deletion from a volume:
-[![Diagram that shows storage consumption effect of snapshot deletion](../media/azure-netapp-files/snapshot-delete-storage-consumption.png)](../media/azure-netapp-files/snapshot-delete-storage-consumption.png#lightbox)
+[![Diagram that shows storage consumption effect of snapshot deletion](./media/snapshots-introduction/snapshot-delete-storage-consumption.png)](./media/snapshots-introduction/snapshot-delete-storage-consumption.png#lightbox)
Be sure to [monitor volume and snapshot consumption](azure-netapp-files-metrics.md#volumes) and understand how the application, active volume, and snapshot consumption interact.
azure-netapp-files Snapshots Manage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-manage-policy.md
A snapshot policy enables you to specify the snapshot creation frequency in hour
1. From the NetApp Account view, select **Snapshot policy**.
- ![Screenshot that shows how to navigate to Snapshot Policy.](../media/azure-netapp-files/snapshot-policy-navigation.png)
+ ![Screenshot that shows how to navigate to Snapshot Policy.](./media/snapshots-manage-policy/snapshot-policy-navigation.png)
2. In the Snapshot Policy window, set Policy State to **Enabled**.
The following example shows hourly snapshot policy configuration.
- ![Screenshot that shows the hourly snapshot policy.](../media/azure-netapp-files/snapshot-policy-hourly.png)
+ ![Screenshot that shows the hourly snapshot policy.](./media/snapshots-manage-policy/snapshot-policy-hourly.png)
The following example shows daily snapshot policy configuration.
- ![Screenshot that shows the daily snapshot policy.](../media/azure-netapp-files/snapshot-policy-daily.png)
+ ![Screenshot that shows the daily snapshot policy.](./media/snapshots-manage-policy/snapshot-policy-daily.png)
The following example shows weekly snapshot policy configuration.
- ![Screenshot that shows the weekly snapshot policy.](../media/azure-netapp-files/snapshot-policy-weekly.png)
+ ![Screenshot that shows the weekly snapshot policy.](./media/snapshots-manage-policy/snapshot-policy-weekly.png)
The following example shows monthly snapshot policy configuration.
- ![Screenshot that shows the monthly snapshot policy.](../media/azure-netapp-files/snapshot-policy-monthly.png)
+ ![Screenshot that shows the monthly snapshot policy.](./media/snapshots-manage-policy/snapshot-policy-monthly.png)
4. Select **Save**.
You cannot apply a snapshot policy to a destination volume in cross-region repli
1. Go to the **Volumes** page, right-click the volume that you want to apply a snapshot policy to, and select **Edit**.
- ![Screenshot that shows the Volumes right-click menu.](../media/azure-netapp-files/volume-right-cick-menu.png)
+ ![Screenshot that shows the Volumes right-click menu.](./media/snapshots-manage-policy/volume-right-cick-menu.png)
2. In the Edit window, under **Snapshot policy**, select a policy to use for the volume. Select **OK** to apply the policy.
- ![Screenshot that shows the Snapshot policy menu.](../media/azure-netapp-files/snapshot-policy-edit.png)
+ ![Screenshot that shows the Snapshot policy menu.](./media/snapshots-manage-policy/snapshot-policy-edit.png)
## Modify a snapshot policy
You can modify an existing snapshot policy to change the policy state, snapshot
2. Right-click the snapshot policy you want to modify, then select **Edit**.
- ![Screenshot that shows the Snapshot policy right-click menu.](../media/azure-netapp-files/snapshot-policy-right-click-menu.png)
+ ![Screenshot that shows the Snapshot policy right-click menu.](./media/snapshots-manage-policy/snapshot-policy-right-click-menu.png)
3. Make the changes in the Snapshot Policy window that appears, then select **Save**.
You can delete a snapshot policy that you no longer want to keep.
2. Right-click the snapshot policy you want to modify, then select **Delete**.
- ![Screenshot that shows the Delete menu item.](../media/azure-netapp-files/snapshot-policy-right-click-menu.png)
+ ![Screenshot that shows the Delete menu item.](./media/snapshots-manage-policy/snapshot-policy-right-click-menu.png)
3. Select **Yes** to confirm that you want to delete the snapshot policy.
- ![Screenshot that shows snapshot policy delete confirmation.](../media/azure-netapp-files/snapshot-policy-delete-confirm.png)
+ ![Screenshot that shows snapshot policy delete confirmation.](./media/snapshots-manage-policy/snapshot-policy-delete-confirm.png)
## Next steps
azure-netapp-files Snapshots Restore File Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-file-client.md
NFSv4.1 does not show the `.snapshot` directory (`ls -la`). However, when the Hi
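For NFS clients, restoring a file amounts to copying it out of the snapshot directory. A minimal sketch follows, assuming a mount path of `/mnt/anfvol` and a snapshot named `daily-snap-01` (both placeholders):

```bash
# The .snapshot directory is reachable by name even when it isn't shown in directory listings
ls /mnt/anfvol/.snapshot/daily-snap-01

# Copy the file you want to restore back into the active file system
cp /mnt/anfvol/.snapshot/daily-snap-01/report.docx /mnt/anfvol/report.docx
```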
1. If the `~snapshot` directory of the volume is hidden, [show hidden items](https://support.microsoft.com/help/4028316/windows-view-hidden-files-and-folders-in-windows-10) in the parent directory to display `~snapshot`.
- ![Screenshot that shows hidden items of a directory.](../media/azure-netapp-files/snapshot-show-hidden.png)
+ ![Screenshot that shows hidden items of a directory.](./media/snapshots-restore-file-client/snapshot-show-hidden.png)
2. Navigate to the subdirectory within `~snapshot` to find the file you want to restore. Right-click the file. Select **Copy**.
- ![Screenshot that shows how to copy a file to restore.](../media/azure-netapp-files/snapshot-copy-file-restore.png)
+ ![Screenshot that shows how to copy a file to restore.](./media/snapshots-restore-file-client/snapshot-copy-file-restore.png)
3. Return to the parent directory. Right-click in the parent directory and select `Paste` to paste the file to the directory.
- ![Screenshot that shows how to paste a file to restore.](../media/azure-netapp-files/snapshot-paste-file-restore.png)
+ ![Screenshot that shows how to paste a file to restore.](./media/snapshots-restore-file-client/snapshot-paste-file-restore.png)
4. You can also right-click the parent directory, select **Properties**, click the **Previous Versions** tab to see the list of snapshots, and select **Restore** to restore a file.
- ![Screenshot that shows the properties previous versions.](../media/azure-netapp-files/snapshot-properties-previous-version.png)
+ ![Screenshot that shows the properties previous versions.](./media/snapshots-restore-file-client/snapshot-properties-previous-version.png)
## Next steps
azure-netapp-files Snapshots Restore File Single https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-file-single.md
The restore operation does not create directories in the process. If the specifi
3. Right-click the snapshot that you want to use for restoring files, and then select **Restore Files** from the menu.
- [ ![Snapshot that shows how to access the Restore Files menu item.](../media/azure-netapp-files/snapshot-restore-files-menu.png) ](../media/azure-netapp-files/snapshot-restore-files-menu.png#lightbox)
+ [ ![Snapshot that shows how to access the Restore Files menu item.](./media/snapshots-restore-file-single/snapshot-restore-files-menu.png) ](./media/snapshots-restore-file-single/snapshot-restore-files-menu.png#lightbox)
5. In the Restore Files window that appears, provide the following information: 1. In the **File Paths** field, specify the file or files to restore by using their full paths.
The restore operation does not create directories in the process. If the specifi
3. Click **Restore** to begin the restore operation.
- ![Snapshot the Restore Files window.](../media/azure-netapp-files/snapshot-restore-files-window.png)
+ ![Snapshot the Restore Files window.](./media/snapshots-restore-file-single/snapshot-restore-files-window.png)
## Examples The following examples show you how to specify files from a volume snapshot for restore.
azure-netapp-files Snapshots Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-new-volume.md
1. Select **Snapshots** from the Volume page to display the snapshot list. 2. Right-click the snapshot to restore and select **Restore to new volume** from the menu option.
- ![Screenshot that shows the Restore New Volume menu.](../media/azure-netapp-files/azure-netapp-files-snapshot-restore-to-new-volume.png)
+ ![Screenshot that shows the Restore New Volume menu.](./media/snapshots-restore-new-volume/azure-netapp-files-snapshot-restore-to-new-volume.png)
3. In the **Create a Volume** page, provide information for the new volume.
By default, the new volume includes a reference to the snapshot that was used for the restore operation from the original volume from Step 2, referred to as the *base snapshot*. This base snapshot does *not* consume any additional space because of [how snapshots work](snapshots-introduction.md). If you don't want the new volume to contain this base snapshot, select **Delete base snapshot** during the new volume creation.
- :::image type="content" source="../media/azure-netapp-files/snapshot-restore-new-volume.png" alt-text="Screenshot showing the Create a Volume window for restoring a volume from a snapshot.":::
+ :::image type="content" source="./media/snapshots-restore-new-volume/snapshot-restore-new-volume.png" alt-text="Screenshot showing the Create a Volume window for restoring a volume from a snapshot.":::
4. Select **Review+create**. Select **Create**. The Volumes page displays the new volume to which the snapshot restores. Refer to the **Originated from** field to see the name of the snapshot used to create the volume.
azure-netapp-files Snapshots Revert Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-revert-volume.md
The revert functionality is also available in configurations with volume replica
1. Go to the **Snapshots** menu of a volume. Right-click the snapshot you want to use for the revert operation. Select **Revert volume**.
- ![Screenshot that describes the right-click menu of a snapshot.](../media/azure-netapp-files/snapshot-right-click-menu.png)
+ ![Screenshot that describes the right-click menu of a snapshot.](./media/shared/snapshot-right-click-menu.png)
2. In the Revert Volume to Snapshot window, type the name of the volume, and click **Revert**. The volume is now restored to the point in time of the selected snapshot.
-![Screenshot that shows the Revert Volume to Snapshot window.](../media/azure-netapp-files/snapshot-revert-volume.png)
+![Screenshot that shows the Revert Volume to Snapshot window.](./media/snapshots-revert-volume/snapshot-revert-volume.png)
## Next steps
azure-netapp-files Solutions Benefits Azure Netapp Files Electronic Design Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-electronic-design-automation.md
The 12-volume scenario demonstrates a general decrease in latency over the six-v
The following graph illustrates the latency and operations rate for the EDA workload on Azure NetApp Files.
-![Latency and operations rate for the EDA workload on Azure NetApp Files](../media/azure-netapp-files/solutions-electronic-design-automation-workload-latency-operation-rate.png)
+![Latency and operations rate for the EDA workload on Azure NetApp Files](./media/solutions-benefits-azure-netapp-files-electronic-design-automation/solutions-electronic-design-automation-workload-latency-operation-rate.png)
The following graph illustrates the latency and throughput for the EDA workload on Azure NetApp Files.
-![Latency and throughput for the EDA workload on Azure NetApp Files](../media/azure-netapp-files/solutions-electronic-design-automation-workload-latency-throughput.png)
+![Latency and throughput for the EDA workload on Azure NetApp Files](./media/solutions-benefits-azure-netapp-files-electronic-design-automation/solutions-electronic-design-automation-workload-latency-throughput.png)
## Layout of test scenarios
azure-netapp-files Solutions Benefits Azure Netapp Files Oracle Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-oracle-database.md
The following summary explains how Oracle Direct NFS works at a high level:
* The traditional NFS client uses a single network flow as shown below:
- ![Traditional NFS client using a single network flow](../media/azure-netapp-files/solutions-traditional-nfs-client-using-single-network-flow.png)
+ ![Traditional NFS client using a single network flow](./media/solutions-benefits-azure-netapp-files-oracle-database/solutions-traditional-nfs-client-using-single-network-flow.png)
Oracle Direct NFS further improves performance by load-balancing network traffic across multiple network flows. As tested and shown below, 650 distinct network connections were established dynamically by the Oracle Database:
- ![Oracle Direct NFS improving performance](../media/azure-netapp-files/solutions-oracle-direct-nfs-performance-load-balancing.png)
+ ![Oracle Direct NFS improving performance](./media/solutions-benefits-azure-netapp-files-oracle-database/solutions-oracle-direct-nfs-performance-load-balancing.png)
The [Oracle FAQ for Direct NFS](http://www.orafaq.com/wiki/Direct_NFS) shows that Oracle dNFS is an optimized NFS client. It provides fast and scalable access to NFS storage that is located on NAS storage devices (accessible over TCP/IP). dNFS is built into the database kernel just like ASM, which is used primarily with DAS or SAN storage. As such, *the guideline is to use dNFS when implementing NAS storage and use ASM when implementing SAN storage.*
dNFS is the default option in Oracle 18c.
dNFS is available starting with Oracle Database 11g. The diagram below compares dNFS with native NFS. When you use dNFS, an Oracle database that runs on an Azure virtual machine can drive more I/O than the native NFS client.
-![Oracle and Azure NetApp Files comparison of dNFS with native NFS](../media/azure-netapp-files/solutions-oracle-azure-netapp-files-comparing-dnfs-native-nfs.png)
+![Oracle and Azure NetApp Files comparison of dNFS with native NFS](./media/solutions-benefits-azure-netapp-files-oracle-database/solutions-oracle-azure-netapp-files-comparing-dnfs-native-nfs.png)
You can enable or disable dNFS by running two commands and restarting the database.
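The commonly documented way to toggle dNFS is to relink the Oracle binary with the dNFS make target and then restart the instance. A hedged sketch follows, run as the Oracle software owner with `$ORACLE_HOME` set:

```bash
cd $ORACLE_HOME/rdbms/lib

# Enable Oracle Direct NFS (relinks the oracle binary), then restart the database
make -f ins_rdbms.mk dnfs_on

# Disable Oracle Direct NFS
make -f ins_rdbms.mk dnfs_off
```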
azure-netapp-files Solutions Benefits Azure Netapp Files Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-sql-server.md
The two sets of graphics in this section show the TCO example. The number and t
The first set of graphics shows the overall cost of the solution using a 1-TiB database size, comparing the D16s_v4 to the D64, the D8 to the D32, and the D4 to the D16. The projected IOPs for each configuration are indicated by a green or yellow line and correspond to the right-hand Y axis.
-[ ![Graphic that shows overall cost of the solution using a 1-TiB database size.](../media/azure-netapp-files/solution-sql-server-cost-1-tib.png) ](../media/azure-netapp-files/solution-sql-server-cost-1-tib.png#lightbox)
+[ ![Graphic that shows overall cost of the solution using a 1-TiB database size.](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-cost-1-tib.png) ](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-cost-1-tib.png#lightbox)
The second set of graphics shows the overall cost using a 50-TiB database. The comparisons are otherwise the same, for example, the D16 with Azure NetApp Files compared with the D64 with block storage.
-[ ![Graphic that shows overall cost using a 50-TiB database size.](../media/azure-netapp-files/solution-sql-server-cost-50-tib.png) ](../media/azure-netapp-files/solution-sql-server-cost-50-tib.png#lightbox)
+[ ![Graphic that shows overall cost using a 50-TiB database size.](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-cost-50-tib.png) ](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-cost-50-tib.png#lightbox)
## Performance, and lots of it
With Azure NetApp Files, each of the instances in the D class can meet or exceed
The following diagram summarizes the S3B CPU limits test:
-![Diagram that shows average CPU percentage for single-instance SQL Server over Azure NetApp Files.](../media/azure-netapp-files/solution-sql-server-single-instance-average-cpu.png)
+![Diagram that shows average CPU percentage for single-instance SQL Server over Azure NetApp Files.](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-single-instance-average-cpu.png)
Scalability is only part of the story. The other part is latency. It's one thing for smaller virtual machines to have the ability to drive much higher I/O rates; it's another thing to do so with low single-digit latencies, as shown below.
The following diagram shows the latency for single-instance SQL Server over Azure NetApp Files:
-![Diagram that shows latency for single-instance SQL Server over Azure NetApp Files.](../media/azure-netapp-files/solution-sql-server-single-instance-latency.png)
+![Diagram that shows latency for single-instance SQL Server over Azure NetApp Files.](./media/solutions-benefits-azure-netapp-files-sql-server/solution-sql-server-single-instance-latency.png)
## SSB testing tool
azure-netapp-files Solutions Windows Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-windows-virtual-desktop.md
This recommendation is confirmed by a 500-user LoginVSI test, logging approximat
As an example, at 62 users per D16as_V4 virtual machine, Azure NetApp Files can easily support 60,000 users per environment. Testing to evaluate the upper limit of the D32as_v4 virtual machine is ongoing. If the Azure Virtual Desktop user per vCPU recommendation holds true for the D32as_v4, more than 120,000 users would fit within 1,000 virtual machines before breaching [the 1,000 IP VNet limit](./azure-netapp-files-network-topologies.md), as shown in the following figure.
-![Azure Virtual Desktop pooled desktop scenario](../media/azure-netapp-files/solutions-pooled-desktop-scenario.png)
+![Azure Virtual Desktop pooled desktop scenario](./media/solutions-windows-virtual-desktop/solutions-pooled-desktop-scenario.png)
### Personal desktop scenario
In a personal desktop scenario, the following figure shows the general-purpose architectural recommendation. Users are mapped to specific desktop pods and each pod has just under 1,000 virtual machines, leaving room for IP addresses propagating from the management VNet. Azure NetApp Files can easily handle 900+ personal desktops per single-session host pool VNet, with the actual number of virtual machines being equal to 1,000 minus the number of management hosts found in the Hub VNet. If more personal desktops are needed, it's easy to add more pods (host pools and virtual networks), as shown in the following figure.
-![Azure Virtual Desktop personal desktop scenario](../media/azure-netapp-files/solutions-personal-desktop-scenario.png)
+![Azure Virtual Desktop personal desktop scenario](./media/solutions-windows-virtual-desktop/solutions-personal-desktop-scenario.png)
When building a pod-based architecture like this, it's important to assign users to the correct pod at sign-in so that they always find their user profiles.
azure-netapp-files Storage Service Add Ons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/storage-service-add-ons.md
The **Storage service add-ons** portal menu of Azure NetApp Files provides a "
Clicking a category (for example, **NetApp add-ons**) under **Storage service add-ons** displays tiles for available add-ons in that category. Clicking an add-on tile in the category takes you to a landing page for quick access to that add-on and directs you to the add-on installation page.
-![Snapshot that shows how to access to the storage service add-ons menu.](../media/azure-netapp-files/storage-service-add-ons.png)
+![Snapshot that shows how to access to the storage service add-ons menu.](./media/storage-service-add-ons/storage-service-add-ons.png)
## Next steps
azure-netapp-files Troubleshoot Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-diagnose-solve-problems.md
You can use Azure **diagnose and solve problems** tool to troubleshoot issues of
The following screenshot shows an example of issue types that you can troubleshoot for Azure NetApp Files:
- :::image type="content" source="../media/azure-netapp-files/troubleshoot-issue-types.png" alt-text="Screenshot that shows an example of issue types in diagnose and solve problems page." lightbox="../media/azure-netapp-files/troubleshoot-issue-types.png":::
+ :::image type="content" source="./media/troubleshoot-diagnose-solve-problems/troubleshoot-issue-types.png" alt-text="Screenshot that shows an example of issue types in diagnose and solve problems page." lightbox="./media/troubleshoot-diagnose-solve-problems/troubleshoot-issue-types.png":::
3. After specifying the problem type, select an option (problem subtype) from the pull-down menu to describe the specific problem you are experiencing. Then follow the on-screen directions to troubleshoot the problem.
- :::image type="content" source="../media/azure-netapp-files/troubleshoot-diagnose-pull-down.png" alt-text="Screenshot that shows the pull-down menu for problem subtype selection." lightbox="../media/azure-netapp-files/troubleshoot-diagnose-pull-down.png":::
+ :::image type="content" source="./media/troubleshoot-diagnose-solve-problems/troubleshoot-diagnose-pull-down.png" alt-text="Screenshot that shows the pull-down menu for problem subtype selection." lightbox="./media/troubleshoot-diagnose-solve-problems/troubleshoot-diagnose-pull-down.png":::
This page presents general guidelines and relevant resources for the problem subtype you select. In some situations, you might be prompted to fill out a questionnaire to trigger diagnostics. If issues are identified, the tool presents a diagnosis and possible solutions.
- :::image type="content" source="../media/azure-netapp-files/troubleshoot-problem-subtype.png" alt-text="Screenshot that shows the capacity pool troubleshoot page." lightbox="../media/azure-netapp-files/troubleshoot-problem-subtype.png":::
+ :::image type="content" source="./media/troubleshoot-diagnose-solve-problems/troubleshoot-problem-subtype.png" alt-text="Screenshot that shows the capacity pool troubleshoot page." lightbox="./media/troubleshoot-diagnose-solve-problems/troubleshoot-problem-subtype.png":::
For more information about using this tool, see [Diagnose and solve problems tool - Azure App Service](../app-service/overview-diagnostics.md).
azure-netapp-files Troubleshoot File Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-file-locks.md
You can break file locks for all files in a volume or break all file locks initi
1. Select **Break File Locks**.
- :::image type="content" source="../media/azure-netapp-files/break-file-locks.png" alt-text="Screenshot of break file locks portal." lightbox="../media/azure-netapp-files/break-file-locks.png":::
+ :::image type="content" source="./media/troubleshoot-file-locks/break-file-locks.png" alt-text="Screenshot of break file locks portal." lightbox="./media/troubleshoot-file-locks/break-file-locks.png":::
1. Confirm you understand that breaking file locks may be disruptive.
azure-netapp-files Troubleshoot User Access Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-user-access-ldap.md
Validating user access is helpful for scenarios such as ensuring POSIX attribute
1. In the volume page for the LDAP-enabled volume, select **LDAP Group ID List** under **Support & Troubleshooting**.
1. Enter the user ID and select **Get group IDs**.
- :::image type="content" source="../media/azure-netapp-files/troubleshoot-ldap-user-id.png" alt-text="Screenshot of the LDAP group ID list portal." lightbox="../media/azure-netapp-files/troubleshoot-ldap-user-id.png":::
+ :::image type="content" source="./media/troubleshoot-user-access-ldap/troubleshoot-ldap-user-id.png" alt-text="Screenshot of the LDAP group ID list portal." lightbox="./media/troubleshoot-user-access-ldap/troubleshoot-ldap-user-id.png":::
1. The portal will display up to 256 results even if the user is in more than 256 groups. You can search for a specific group ID in the results.
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
An AD DS site topology is a logical representation of the network where Azure Ne
The following diagram shows a sample network topology (`sample-network-topology.png`). In the sample network topology, an on-premises AD DS domain (`anf.local`) is extended into an Azure virtual network. The on-premises network is connected to the Azure virtual network using an Azure ExpressRoute circuit.
Azure NetApp Files can only use one AD DS site to determine which domain control
In the Active Directory Sites and Services tool, verify that the AD DS domain controllers deployed into the AD DS subnet are assigned to the `ANF` site. To create the subnet object that maps to the AD DS subnet in the Azure virtual network, right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**. In the **New Object - Subnet** dialog, enter the 10.0.0.0/24 IP address range for the AD DS subnet in the **Prefix** field. Select `ANF` as the site object for the subnet. Select **OK** to create the subnet object and assign it to the `ANF` site. To verify that the new subnet object is assigned to the correct site, right-click the 10.0.0.0/24 subnet object and select **Properties**. The **Site** field should show the `ANF` site object. To create the subnet object that maps to the Azure NetApp Files delegated subnet in the Azure virtual network, right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**.
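If you prefer to script the same site and subnet mapping, the following is a minimal PowerShell sketch, assuming the ActiveDirectory RSAT module is available on a domain-joined management host; the `ANF` site name and the 10.0.0.0/24 AD DS subnet come from this example, while the 10.0.1.0/24 range stands in for the Azure NetApp Files delegated subnet:

```powershell
# Create the AD DS site used by Azure NetApp Files (skip if the ANF site already exists).
New-ADReplicationSite -Name "ANF"

# Map the AD DS subnet in the Azure virtual network to the ANF site.
New-ADReplicationSubnet -Name "10.0.0.0/24" -Site "ANF"

# Map the Azure NetApp Files delegated subnet (placeholder range) to the same site.
New-ADReplicationSubnet -Name "10.0.1.0/24" -Site "ANF"

# Verify the subnet-to-site assignments.
Get-ADReplicationSubnet -Filter * | Select-Object Name, Site
```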
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
Azure availability zones are highly available, fault tolerant, and more scalable
The use of high availability (HA) architectures with availability zones is now a default and best practice recommendation in [Azure's Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of high availability (HA) or failure domain (zone) isolation. Azure NetApp Files' [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy volumes in availability zones of your choice, in alignment with Azure compute and other services in the same zone.
azure-netapp-files Use Dfs N And Dfs Root Consolidation With Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files.md
If you already have a DFS Namespace in place, no special steps are required to u
| File share type | SMB | NFS | dual-protocol* |
|-|:-:|:-:|:-:|
-| Azure NetApp Files | ![Yes](../media/azure-netapp-files/icons/yes-icon.png) | ![No](../media/azure-netapp-files/icons/no-icon.png) | ![Yes](../media/azure-netapp-files/icons/yes-icon.png) |
+| Azure NetApp Files | ![Yes](./media/shared/yes-icon.png) | ![No](./media/shared/no-icon.png) | ![Yes](./media/shared/yes-icon.png) |
> [!IMPORTANT]
> This functionality applies to the SMB side of Azure NetApp Files dual-protocol volumes.
For all DFS Namespace types, the **DFS Namespaces** server role must be installe
8. In the **Server Roles** section, select the **DFS Namespaces** role from the role list under **File and Storage Services** > **File and iSCSI Services**.
-![A screenshot of the **Add Roles and Features** wizard with the **DFS Namespaces** role selected.](../media/azure-netapp-files/azure-netapp-files-dfs-namespaces-install.png)
+![A screenshot of the **Add Roles and Features** wizard with the **DFS Namespaces** role selected.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-namespaces-install.png)
9. Click **Next** until the **Install** button is available
Install-WindowsFeature -Name "FS-DFS-Namespace", "RSAT-DFS-Mgmt-Con"
If you don't need to take over an existing legacy file server, a domain-based namespace is recommended. Domain-based namespaces are hosted as part of AD and have a UNC path containing the name of your domain, for example, `\\contoso.com\corporate\finance`, if your domain is `contoso.com`. The following graphic shows an example of this architecture.
-![A screenshot of the architecture for DFS-N with Azure NetApp Files volumes.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-architecture-example.png)
+![A screenshot of the architecture for DFS-N with Azure NetApp Files volumes.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-domain-architecture-example.png)
>[!IMPORTANT]
The basic unit of management for DFS Namespaces is the namespace. The namespace
5. The **Namespace Type** section allows you to choose between a **Domain-based namespace** and a **Stand-alone namespace**. Select a domain-based namespace. Refer to [namespace types](#namespace-types) above for more information on choosing between namespace types.
-![A screenshot of selecting domain-based namespace **New Namespace Wizard**.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-namespace-type.png)
+![A screenshot of selecting domain-based namespace **New Namespace Wizard**.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-domain-namespace-type.png)
6. Select **Create** to create the namespace and **Close** when the dialog completes.
You can think of DFS Namespaces folders as analogous to file shares.
1. In the DFS Management console, select the namespace you just created and select **New Folder**. The resulting **New Folder** dialog allows you to create both the folder and its targets.
-![A screenshot of the **New Folder** domain-based dialog.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-folder-targets.png)
+![A screenshot of the **New Folder** domain-based dialog.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-domain-folder-targets.png)
2. In the textbox labeled **Name**, provide the name of the share.
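If you'd rather script the namespace and folder creation, here's a minimal PowerShell sketch using the DFSN cmdlets; the `contoso.com` domain and `finance` folder match the earlier example, while the DFS-N server name (`dfsserver`), its pre-created `corporate` share, and the Azure NetApp Files SMB volume path are placeholders:

```powershell
# Create the domain-based namespace root (Windows Server 2008 mode).
# The \\dfsserver\corporate share must already exist on the DFS-N server.
New-DfsnRoot -Path "\\contoso.com\corporate" -TargetPath "\\dfsserver\corporate" -Type DomainV2

# Create a folder in the namespace and point it at the Azure NetApp Files SMB volume.
New-DfsnFolder -Path "\\contoso.com\corporate\finance" -TargetPath "\\anf-smb-volume.contoso.com\finance"
```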
Root consolidation may only be used with standalone namespaces. If you already h
This section outlines the steps to configure DFS Namespace root consolidation on a standalone server. For a highly available architecture, work with your Microsoft technical team to configure Windows Server failover clustering and an Azure Load Balancer as required. The following graphic shows an example of a highly available architecture.
-![A screenshot of the architecture for root consolidation with Azure NetApp Files.](../media/azure-netapp-files/azure-netapp-files-root-consolidation-architecture-example.png)
+![A screenshot of the architecture for root consolidation with Azure NetApp Files.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-root-consolidation-architecture-example.png)
### Enabling root consolidation
In order for DFS Namespaces to respond to existing file server names, **you must
5. In the textbox labeled **Fully qualified domain name (FQDN) for the target host**, enter the name of the DFS-N server you have set up. You can use the **Browse** button to help you select the server if desired.
-![A screenshot depicting the **New Resource Record** for a CNAME DNS entry.](../media/azure-netapp-files/azure-netapp-files-root-consolidation-cname.png)
+![A screenshot depicting the **New Resource Record** for a CNAME DNS entry.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-root-consolidation-cname.png)
6. Select **OK** to create the CNAME record for your server.
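The same CNAME record can also be created with PowerShell on, or remotely against, the DNS server; a minimal sketch, assuming the `contoso.com` zone and placeholder names for the retired file server (`legacyfileserver`) and the DFS-N server (`dfsserver`):

```powershell
# Point the old file server name at the DFS-N server so existing UNC paths keep resolving.
Add-DnsServerResourceRecordCName -ZoneName "contoso.com" -Name "legacyfileserver" -HostNameAlias "dfsserver.contoso.com"
```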
To take over an existing server name with root consolidation, the name of the na
6. Select the desired namespace type for your environment and select **Next**. The wizard then summarizes the namespace to be created.
-![A screenshot of selecting standalone namespace in the **New Namespace Wizard**.](../media/azure-netapp-files/azure-netapp-files-dfs-namespace-type.png)
+![A screenshot of selecting standalone namespace in the **New Namespace Wizard**.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-namespace-type.png)
7. Select **Create** to create the namespace and **Close** when the dialog completes.
You can think of DFS Namespaces folders as analogous to file shares.
1. In the DFS Management console, select the namespace you just created and select **New Folder**. The resulting **New Folder** dialog allows you to create both the folder and its targets.
-![A screenshot of the **New Folder** dialog.](../media/azure-netapp-files/azure-netapp-files-dfs-folder-targets.png)
+![A screenshot of the **New Folder** dialog.](./media/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files/azure-netapp-files-dfs-folder-targets.png)
2. In the textbox labeled **Name**, provide the name of the share.
azure-netapp-files Volume Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-delete.md
This article describes how to delete an Azure NetApp Files volume.
1. From the Azure portal, under the storage service, select **Volumes**. Locate the volume you want to delete.
2. Right-click the volume name and select **Delete**.
- ![Screenshot that shows right-click menu for deleting a volume.](../media/azure-netapp-files/volume-delete.png)
+ ![Screenshot that shows right-click menu for deleting a volume.](./media/volume-delete/volume-delete.png)
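You can also delete a volume with the Az.NetAppFiles PowerShell module; a minimal sketch, assuming placeholder names for the resource group, NetApp account, capacity pool, and volume:

```azurepowershell
# Delete the volume. Deletion is irreversible, so make sure no clients still mount it.
Remove-AzNetAppFilesVolume -ResourceGroupName "myRG" -AccountName "myNetAppAccount" -PoolName "myPool" -Name "myVolume"
```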
## Next steps
azure-netapp-files Volume Hard Quota Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-hard-quota-guidelines.md
Windows clients can check the used and available capacity of a volume by using t
The following examples show the volume capacity reporting in Windows *before* the changed behavior:
-![Screenshots that show example storage capacity of a volume before behavior change.](../media/azure-netapp-files/hard-quota-windows-capacity-before.png)
+![Screenshots that show example storage capacity of a volume before behavior change.](./media/volume-hard-quota-guidelines/hard-quota-windows-capacity-before.png)
You can also use the `dir` command at the command prompt as shown below:
-![Screenshot that shows using a command to display storage capacity for a volume before behavior change.](../media/azure-netapp-files/hard-quota-command-capacity-before.png)
+![Screenshot that shows using a command to display storage capacity for a volume before behavior change.](./media/volume-hard-quota-guidelines/hard-quota-command-capacity-before.png)
The following examples show the volume capacity reporting in Windows *after* the changed behavior:
-![Screenshots that show example storage capacity of a volume after behavior change.](../media/azure-netapp-files/hard-quota-windows-capacity-after.png)
+![Screenshots that show example storage capacity of a volume after behavior change.](./media/volume-hard-quota-guidelines/hard-quota-windows-capacity-after.png)
The following example shows the `dir` command output:
-![Screenshot that shows using a command to display storage capacity for a volume after behavior change.](../media/azure-netapp-files/hard-quota-command-capacity-after.png)
+![Screenshot that shows using a command to display storage capacity for a volume after behavior change.](./media/volume-hard-quota-guidelines/hard-quota-command-capacity-after.png)
##### Linux
Linux clients can check the used and available capacity of a volume by using the
The following example shows volume capacity reporting in Linux *before* the changed behavior:
-![Screenshot that shows using Linux to display storage capacity for a volume before behavior change.](../media/azure-netapp-files/hard-quota-linux-capacity-before.png)
+![Screenshot that shows using Linux to display storage capacity for a volume before behavior change.](./media/volume-hard-quota-guidelines/hard-quota-linux-capacity-before.png)
The following example shows volume capacity reporting in Linux *after* the changed behavior:
-![Screenshot that shows using Linux to display storage capacity for a volume after behavior change.](../media/azure-netapp-files/hard-quota-linux-capacity-after.png)
+![Screenshot that shows using Linux to display storage capacity for a volume after behavior change.](./media/volume-hard-quota-guidelines/hard-quota-linux-capacity-after.png)
### Configure alerts using ANFCapacityManager
You can configure the following key alerting settings:
The following illustration shows the alert configuration:
-![Illustration that shows alert configuration by using ANFCapacityManager.](../media/azure-netapp-files/hard-quota-anfcapacitymanager-configuration.png)
+![Illustration that shows alert configuration by using ANFCapacityManager.](./media/volume-hard-quota-guidelines/hard-quota-anfcapacitymanager-configuration.png)
After installing ANFCapacityManager, you can expect the following behavior: When an Azure NetApp Files capacity pool or volume is created, modified, or deleted, the Logic App will automatically create, modify, or delete a capacity-based Metric Alert rule with the name `ANF_Pool_poolname` or `ANF_Volume_poolname_volname`.
You can [change the size of a volume](azure-netapp-files-resize-capacity-pools-o
2. Right-click the name of the volume that you want to resize or select the `…` icon at the end of the volume's row to display the context menu.
3. Use the context menu options to resize or delete the volume.
- ![Screenshot that shows context menu options for a volume.](../media/azure-netapp-files/hard-quota-volume-options.png)
+ ![Screenshot that shows context menu options for a volume.](./media/volume-hard-quota-guidelines/hard-quota-volume-options.png)
- ![Screenshot that shows the Update Volume Quota window.](../media/azure-netapp-files/hard-quota-update-volume-quota.png)
+ ![Screenshot that shows the Update Volume Quota window.](./media/volume-hard-quota-guidelines/hard-quota-update-volume-quota.png)
In some cases, the hosting capacity pool does not have sufficient capacity to resize the volumes. However, you can [change the capacity pool size](azure-netapp-files-resize-capacity-pools-or-volumes.md#resizing-the-capacity-pool-or-a-volume-using-azure-cli) in 1-TiB increments or decrements. The capacity pool size cannot be smaller than 4 TiB. *Resizing the capacity pool changes the purchased Azure NetApp Files capacity.*
In some cases, the hosting capacity pool does not have sufficient capacity to re
2. Right-click the capacity pool name or select the `…` icon at the end of the capacity pool’s row to display the context menu.
3. Use the context menu options to resize or delete the capacity pool.
- ![Screenshot that shows context menu options for a capacity pool.](../media/azure-netapp-files/hard-quota-pool-options.png)
+ ![Screenshot that shows context menu options for a capacity pool.](./media/volume-hard-quota-guidelines/hard-quota-pool-options.png)
- ![Screenshot that shows the Resize Pool window.](../media/azure-netapp-files/hard-quota-update-resize-pool.png)
+ ![Screenshot that shows the Resize Pool window.](./media/volume-hard-quota-guidelines/hard-quota-update-resize-pool.png)
##### CLI or PowerShell
You can use the [Azure NetApp Files CLI tools](azure-netapp-files-sdk-cli.md#cli
To manage Azure NetApp Files resources using Azure CLI, you can open the Azure portal and select the Azure **Cloud Shell** link in the top of the menu bar:
-[ ![Screenshot that shows how to access Cloud Shell link.](../media/azure-netapp-files/hard-quota-update-cloud-shell-link.png) ](../media/azure-netapp-files/hard-quota-update-cloud-shell-link.png#lightbox)
+[ ![Screenshot that shows how to access Cloud Shell link.](./media/volume-hard-quota-guidelines/hard-quota-update-cloud-shell-link.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-cloud-shell-link.png#lightbox)
This action will open the Azure Cloud Shell:
-[ ![Screenshot that shows Cloud Shell window.](../media/azure-netapp-files/hard-quota-update-cloud-shell-window.png) ](../media/azure-netapp-files/hard-quota-update-cloud-shell-window.png#lightbox)
+[ ![Screenshot that shows Cloud Shell window.](./media/volume-hard-quota-guidelines/hard-quota-update-cloud-shell-window.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-cloud-shell-window.png#lightbox)
The following examples use the commands to [show](/cli/azure/netappfiles/volume#az-netappfiles-volume-show) and [update](/cli/azure/netappfiles/volume#az-netappfiles-volume-update) the size of a volume:
-[ ![Screenshot that shows using PowerShell to show volume size.](../media/azure-netapp-files/hard-quota-update-powershell-volume-show.png) ](../media/azure-netapp-files/hard-quota-update-powershell-volume-show.png#lightbox)
+[ ![Screenshot that shows using PowerShell to show volume size.](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-volume-show.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-volume-show.png#lightbox)
-[ ![Screenshot that shows using PowerShell to update volume size.](../media/azure-netapp-files/hard-quota-update-powershell-volume-update.png) ](../media/azure-netapp-files/hard-quota-update-powershell-volume-update.png#lightbox)
+[ ![Screenshot that shows using PowerShell to update volume size.](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-volume-update.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-volume-update.png#lightbox)
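For reference, a minimal sketch of the same show and update operations with the Az.NetAppFiles PowerShell module; the resource names are placeholders, and it assumes the volume quota is set through the `UsageThreshold` property in bytes:

```azurepowershell
# Show the current volume, including its quota (UsageThreshold, in bytes).
Get-AzNetAppFilesVolume -ResourceGroupName "myRG" -AccountName "myNetAppAccount" -PoolName "myPool" -Name "myVolume"

# Resize the volume to 500 GiB (value supplied in bytes).
Update-AzNetAppFilesVolume -ResourceGroupName "myRG" -AccountName "myNetAppAccount" -PoolName "myPool" -Name "myVolume" -UsageThreshold (500 * 1024 * 1024 * 1024)
```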
The following examples use the commands to [show](/cli/azure/netappfiles/pool#az-netappfiles-pool-show) and [update](/cli/azure/netappfiles/pool#az-netappfiles-pool-update) the size of a capacity pool:
-[ ![Screenshot that shows using PowerShell to show capacity pool size.](../media/azure-netapp-files/hard-quota-update-powershell-pool-show.png) ](../media/azure-netapp-files/hard-quota-update-powershell-pool-show.png#lightbox)
+[ ![Screenshot that shows using PowerShell to show capacity pool size.](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-pool-show.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-pool-show.png#lightbox)
-[ ![Screenshot that shows using PowerShell to update capacity pool size.](../media/azure-netapp-files/hard-quota-update-powershell-pool-update.png) ](../media/azure-netapp-files/hard-quota-update-powershell-pool-update.png#lightbox)
+[ ![Screenshot that shows using PowerShell to update capacity pool size.](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-pool-update.png) ](./media/volume-hard-quota-guidelines/hard-quota-update-powershell-pool-update.png#lightbox)
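A matching sketch for the capacity pool, again with placeholder names and assuming the pool size is set through the `PoolSize` property in bytes (pools resize in 1-TiB increments and can't be smaller than 4 TiB):

```azurepowershell
# Show the current capacity pool, including its provisioned size (in bytes).
Get-AzNetAppFilesPool -ResourceGroupName "myRG" -AccountName "myNetAppAccount" -Name "myPool"

# Resize the capacity pool to 6 TiB (value supplied in bytes).
Update-AzNetAppFilesPool -ResourceGroupName "myRG" -AccountName "myNetAppAccount" -Name "myPool" -PoolSize (6 * 1024 * 1024 * 1024 * 1024)
```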
#### Automated
You can configure the following key capacity management setting:
* **AutoGrow Percent Increase** - Percent of the existing volume size to automatically grow a volume if it reaches the specified **% Full Threshold**. A value of 0 (zero) will disable the AutoGrow feature. A value between 10 and 100 is recommended.
- ![Screenshot that shows Set Volume Auto Growth Percent window.](../media/azure-netapp-files/hard-quota-volume-anfcapacitymanager-auto-grow-percent.png)
+ ![Screenshot that shows Set Volume Auto Growth Percent window.](./media/volume-hard-quota-guidelines/hard-quota-volume-anfcapacitymanager-auto-grow-percent.png)
## FAQ
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Next, we collect more details about the problem. Providing thorough and detailed
In some cases, you may see additional options. For example, for certain types of Virtual Machine problem types, you can choose whether to [allow access to a virtual machine's memory](#memory-dump-collection).
-1. In the **Support method** section, select the **Severity** level, depending on the business impact. The [maximum available severity level and time to respond](https://azure.microsoft.com/support/plans/response/) depends on your [support plan](https://azure.microsoft.com/support/plans) and the country/region in which you're located, including the timing of business hours in that country/region.
+1. In the **Support method** section, select the **Support plan** and the **Severity** level based on the business impact. The [maximum available severity level and time to respond](https://azure.microsoft.com/support/plans/response/) depends on your [support plan](https://azure.microsoft.com/support/plans) and the country/region in which you're located, including the timing of business hours in that country/region.
+
+ > [!TIP]
+ > To add a support plan that requires an **Access ID** and **Contract ID**, select **Help + Support** > **Support plans** > **Link support benefits**. When a limited support plan expires or has no support incidents remaining, it won't be available to select.
+ 1. Provide your preferred contact method, your availability, and your preferred support language. Confirm that your country/region setting is accurate, as this setting affects the business hours in which a support engineer can work on your request.
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
You need the latest versions of the following software:
- [Visual Studio](/visualstudio/install/install-visual-studio), or [Visual Studio Code](./install.md#visual-studio-code-and-bicep-extension). The Visual Studio community version, available for free, installs .NET 6.0, .NET Core 3.1, .NET SDK, MSBuild, .NET Framework 4.8, NuGet package manager, and C# compiler. From the installer, select **Workloads** > **.NET desktop development**. With Visual Studio Code, you also need the extensions for [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) and [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
- [PowerShell](/powershell/scripting/install/installing-powershell) or a command-line shell for your operating system.
+If your environment doesn't have nuget.org configured as a package feed, depending on how `nuget.config` is configured, you might need to run the following command:
+
+```powershell
+dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org
+```
+
+In certain environments, using a single package feed helps prevent problems arising from packages with the same ID and version containing different contents in different feeds. For Azure Artifacts users, this can be done using the [upstream sources feature](/azure/devops/artifacts/concepts/upstream-sources).
+ ## MSBuild tasks and Bicep packages
From your continuous integration (CI) pipeline, you can use MSBuild tasks and CLI packages to convert Bicep files and Bicep parameter files into JSON. The functionality relies on the following NuGet packages:
You can find the latest version from these pages. For example:
:::image type="content" source="./media/msbuild-bicep-file/bicep-nuget-package-version.png" alt-text="Screenshot showing how to find the latest Bicep NuGet package version." border="true":::
-The latest NuGet package versions match the latest [Bicep CLI](./bicep-cli.md) version.
+The latest NuGet package versions match the latest [Bicep CLI](./bicep-cli.md) version.
- **Azure.Bicep.MSBuild**
- When included in project file's `PackageReference` property, the `Azure.Bicep.MSBuild` package imports the Bicep task used for invoking the Bicep CLI.
+ When included in a project file's `PackageReference` property, the `Azure.Bicep.MSBuild` package imports the Bicep task used for invoking the Bicep CLI.
```xml <ItemGroup>
The latest NuGet package versions match the latest [Bicep CLI](./bicep-cli.md) v
- **Azure.Bicep.CommandLine**
- The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. The following example references the package for Windows.
+ The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. The following example references the package for Windows.
```xml <ItemGroup>
Build a project in .NET with the dotnet CLI.
New-Item -Name .\msBuildDemo -ItemType Directory Set-Location -Path .\msBuildDemo ```+ 1. Run the `dotnet` command to create a new console with the .NET 6 framework. ```powershell
Build a project in .NET Core 3.1 using the dotnet CLI.
New-Item -Name .\msBuildDemo -ItemType Directory Set-Location -Path .\msBuildDemo ```+ 1. Run the `dotnet` command to create a new console with the .NET 6 framework. ```powershell
You need a Bicep file and a BicepParam file to be converted to JSON.
Replace `{prefix}` with a string value used as a prefix for the storage account name. - ### Run MSBuild Run MSBuild to convert the Bicep file and the Bicep parameter file to JSON.
Run MSBuild to convert the Bicep file and the Bicep parameter file to JSON.
dotnet build .\msBuildDemo.csproj ```
- or
+ or
```powershell dotnet restore .\msBuildDemo.csproj
backup About Restore Microsoft Azure Recovery Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-restore-microsoft-azure-recovery-services.md
Using the MARS agent you can:
- **[Restore all backed up files in a volume](restore-all-files-volume-mars.md):** This option recovers all backed up data in a specified volume from the recovery point in Azure Backup. It allows a faster transfer speed (up to 40 MBPS).<br>We recommend using this option for recovering large amounts of data, or entire volumes.
- **[Restore a specific set of backed up files and folders in a volume using PowerShell](backup-client-automation.md#restore-data-from-azure-backup):** If the paths to the files and folders relative to the volume root are known, this option allows you to restore the specified set of files and folders from a recovery point, using the faster transfer speed of the full volume restore. However, this option doesn't provide the convenience of browsing files and folders in the recovery point using the Instant Restore option.
- **[Restore individual files and folders using Instant Restore](backup-azure-restore-windows-server.md):** This option allows quick access to the backup data by mounting the volume in the recovery point as a drive. You can then browse and copy files and folders. This option offers a copy speed of up to 6 MBPS, which is suitable for recovering individual files and folders of total size less than 80 GB. Once the required files are copied, you can unmount the recovery point.
-- **Cross Region Restore for MARS (preview)**: If your Recovery Services vault uses GRS resiliency and has the [Cross Region Restore setting turned on](backup-create-recovery-services-vault.md#set-cross-region-restore), you can restore the backup data from the secondary region.
+- **Cross Region Restore for MARS**: If your Recovery Services vault uses GRS resiliency and has the [Cross Region Restore setting turned on](backup-create-recovery-services-vault.md#set-cross-region-restore), you can restore the backup data from the secondary region.
-## Cross Region Restore (preview)
+## Cross Region Restore
Cross Region Restore (CRR) allows you to restore MARS backup data from a secondary region, which is an Azure paired region. This enables you to conduct drills for audit and compliance, and recover data during the unavailability of the primary region in Azure in the case of a disaster.
backup Backup Create Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-create-recovery-services-vault.md
For more information about backup and restore with Cross Region Restore, see the
- [Cross Region Restore for Azure VMs](backup-azure-arm-restore-vms.md#cross-region-restore)
- [Cross Region Restore for SQL Server databases](restore-sql-database-azure-vm.md#cross-region-restore)
- [Cross Region Restore for SAP HANA databases](sap-hana-db-restore.md#cross-region-restore)
-- [Cross Region Restore for MARS (Preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore-preview)
+- [Cross Region Restore for MARS (Preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore)
## Set encryption settings
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
Title: Overview of the Backup vaults description: An overview of Backup vaults. Previously updated : 07/05/2023 Last updated : 02/01/2024
This section discusses the options available for encrypting your backup data sto
By default, all your data is encrypted using platform-managed keys. You don't need to take any explicit action from your end to enable this encryption. It applies to all workloads being backed up to your Backup vault.
-## Cross Region Restore support for PostgreSQL using Azure Backup (preview)
+## Cross Region Restore support for PostgreSQL using Azure Backup
Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region. However, Cross Region Restore enables you to access and perform restores from the secondary region recovery points even when no outage occurs in the primary region; thus, enables you to perform drills to assess regional resiliency.
-Learn [how to perform Cross Region Restore](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal-preview).
+Learn [how to perform Cross Region Restore](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal).
>[!Note] >- Cross Region Restore is now available for PostgreSQL backups protected in Backup vaults.
->- Backup vaults enabled with Cross Region Restore will be automatically charged at [RA-GRS rates](https://azure.microsoft.com/pricing/details/backup/) for the PostgreSQL backups stored in the vault once the feature is generally available.
+>- Backup vaults enabled with Cross Region Restore are automatically charged at [RA-GRS rates](https://azure.microsoft.com/pricing/details/backup/) for the PostgreSQL backups stored in the vault.
## Next steps
-- [Create and manage Backup vault](create-manage-backup-vault.md)
+- [Create and manage Backup vault](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal).
backup Create Manage Backup Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/create-manage-backup-vault.md
Title: Create and manage Backup vaults description: Learn how to create and manage the Backup vaults. Previously updated : 08/10/2023 Last updated : 02/01/2024
Troubleshoot the following common issues you might encounter during Backup vault
**Cause**: You may face this error if you try to move multiple Backup vaults in a single attempt.
-**Recommentation**: Ensure that only one Backup vault is selected for every move operation.
+**Recommendation**: Ensure that only one Backup vault is selected for every move operation.
#### UserErrorBackupVaultResourceMoveNotAllowedUntilResourceProvisioned
Troubleshoot the following common issues you might encounter during Backup vault
**Recommendation**: Remove the Managed Identity from the existing Tenant; move the resource and add it again to the new one.
-## Perform Cross Region Restore using Azure portal (preview)
+## Perform Cross Region Restore using Azure portal
-Follow these steps:
+The Cross Region Restore option allows you to restore data in a secondary Azure paired region. To configure Cross Region Restore for the backup vault: 
1. Sign in to [Azure portal](https://portal.azure.com/).
-1. [Create a new Backup vault](create-manage-backup-vault.md#create-backup-vault) or choose an existing Backup vault, and then enable Cross Region Restore by going to **Properties** > **Cross Region Restore (Preview)**, and choose **Enable**.
+1. [Create a new Backup vault](create-manage-backup-vault.md#create-backup-vault) or choose an existing Backup vault, and then enable Cross Region Restore by going to **Properties** > **Cross Region Restore**, and choose **Enable**.
:::image type="content" source="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png" alt-text="Screenshot shows how to enable Cross Region Restore for PostgreSQL database." lightbox="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png":::
Follow these steps:
:::image type="content" source="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png" alt-text="Screenshot shows how to check availability for the recovery points in the secondary region." lightbox="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png":::
-1. The recovery points available in the secondary region are now listed.
+ The recovery points available in the secondary region are now listed.
- Choose **Restore to secondary region**.
+1. Select **Restore to secondary region**.
:::image type="content" source="./media/backup-vault-overview/initiate-restore-to-secondary-region.png" alt-text="Screenshot shows how to initiate restores to the secondary region." lightbox="./media/backup-vault-overview/initiate-restore-to-secondary-region.png":::
Follow these steps:
:::image type="content" source="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png" alt-text="Screenshot shows how to monitor the postgresql restore to the secondary region." lightbox="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png":::
+> [!NOTE]
+> Cross Region Restore is currently only available for PostgreSQL servers.
+ ## Cross Subscription Restore using Azure portal
Some datasources of Backup vault support restore to a subscription different from that of the source machine. Cross Subscription Restore (CSR) is enabled for existing vaults by default, and you can use it if supported for the intended datasource.
You can also select the state of CSR during the creation of Backup vault.
>- CSR once permanently disabled on a vault can't be re-enabled because it's an irreversible operation.
>- If CSR is disabled but not permanently disabled, then you can reverse the operation by selecting **Vault** > **Properties** > **Cross Subscription Restore** > **Enable**.
>- If a Backup vault is moved to a different subscription when CSR is disabled or permanently disabled, restore to the original subscription will fail.
-
++ ## Next steps -- [Configure backup on Azure PostgreSQL databases](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases)
+- [Configure backup on Azure PostgreSQL databases](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases)
backup Quick Cross Region Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-cross-region-restore.md
+
+ Title: Quickstart - Restore a PostgreSQL database across regions using Azure Backup
+description: Learn how to restore a PostgreSQL database across regions by using Azure Backup.
+ Last updated : 02/01/2024++++
+# Quickstart: Restore a PostgreSQL database across regions by using Azure Backup
+
+This quickstart describes how to enable Cross Region Restore on your Backup vault to restore the data to an alternate region when the primary region is down.
+
+The Cross Region Restore option allows you to restore data in a secondary [Azure paired region](/azure/availability-zones/cross-region-replication-azure) even when no outage occurs in the primary region; thus, enabling you to perform drills when there's an audit or compliance requirement.
+
+> [!NOTE]
+>- Currently, Geo-redundant Storage (GRS) vaults with Cross Region Restore enabled can't be changed to Zone-redundant Storage (ZRS) or Locally redundant Storage (LRS) after the protection starts for the first time.
+>- Cross Region Restore (CRR) with Cross Subscription Restore (CSR) is currently not supported.
+
+## Prerequisites
+
+To begin with the Cross Region Restore, ensure that:
+
+- You have a Backup vault with Cross Region Restore configured. [Create one](./create-manage-backup-vault.md#create-a-backup-vault) if you don't have a Backup vault.
+- A PostgreSQL database is protected by using Azure Backup, and at least one full backup has been run. To protect and back up a database, see [Back up Azure Database for PostgreSQL server](backup-azure-database-postgresql.md).
+
+## Restore the database using Azure portal
+
+To restore the database to the secondary region using the Azure portal, follow these steps:
+
+1. Sign in to [Azure portal](https://portal.azure.com/).
+1. To check the available recovery point in the secondary region, go to the **Backup center** > **Backup Instances**.
+1. Filter to **Azure Database for PostgreSQL servers**, then filter **Instance Region** as *Secondary Region*.
+1. Select the required Backup instance.
+
+ The recovery points available in the secondary region are now listed.
+
+1. Select **Restore to secondary region** to review the target region selected, and then select the appropriate recovery point and restore parameters.
+ You can also trigger restores from the respective backup instance.
+
+ :::image type="content" source="./media/create-manage-backup-vault/restore-to-secondary-region.png" alt-text="Screenshot showing how to restore to secondary region." lightbox="./media/create-manage-backup-vault/restore-to-secondary-region.png":::
++
+
+1. Once the restore starts, you can monitor the completion of the restore operation under **Backup Jobs** of the Backup vault by filtering **Datasource type** to *Azure Database for PostgreSQL servers*  and **Instance Region** to *Secondary Region*.
+
+## Next steps
+
+- [Learn about the Cross Region Restore](./tutorial-cross-region-restore.md) feature.
backup Quick Secondary Region Restore Postgresql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-secondary-region-restore-postgresql-powershell.md
+
+ Title: Quickstart - Cross region restore for PostgreSQL database with PowerShell by using Azure Backup
+description: In this Quickstart, learn how to restore PostgreSQL database across region with the Azure PowerShell module.
+ms.devlang: azurecli
+ Last updated : 02/01/2024+++++
+# Quickstart: Restore Azure Database for PostgreSQL server across regions with PowerShell by using Azure Backup
+
+This quickstart describes how to configure and perform cross-region restore for Azure Database for PostgreSQL server with Azure PowerShell.
+
+[Azure Backup](backup-overview.md) allows you to back up and restore the Azure Database for PostgreSQL server. The [Azure PowerShell AZ](/powershell/azure/new-azureps-module-az) module allows you to create and manage Azure resources from the command line or in scripts. If you want to restore the PostgreSQL database across regions by using the Azure portal, see [this Quickstart](quick-cross-region-restore.md).
+
+## Enable Cross Region Restore for Backup vault
+
+To enable the Cross Region Restore feature on the Backup vault that has Geo-redundant Storage enabled, run the following cmdlet:
+
+```azurepowershell
+Update-AzDataProtectionBackupVault -SubscriptionId $subscriptionId -ResourceGroupName $resourceGroupName -VaultName $vaultName -CrossRegionRestoreState $CrossRegionRestoreState
+```
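A minimal sketch of the variables this cmdlet expects; all resource names are placeholders, and `Enabled` is the assumed value for turning the feature on:

```azurepowershell
$subscriptionId = "00000000-0000-0000-0000-000000000000"  # subscription that contains the Backup vault
$resourceGroupName = "myResourceGroup"                     # resource group of the Backup vault
$vaultName = "myBackupVault"                               # GRS-enabled Backup vault
$CrossRegionRestoreState = "Enabled"                       # assumed accepted value to turn on Cross Region Restore
```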
+
+>[!Note]
+>You can't disable Cross Region Restore once protection has started with this feature enabled.
++
+## Configure restore for the PostgreSQL database to a secondary region
+
+To restore the database to a secondary region after enabling Cross Region Restore, run the following cmdlets:
+
+1. Fetch the backup instances from secondary region.
+
+ ```azurepowershell
+ Search-AzDataProtectionBackupInstanceInAzGraph -Subscription $subscriptionId -ResourceGroup $resourceGroupName -Vault $vaultName -DatasourceType AzureDatabaseForPostgreSQL
+ ```
+
+2. Once you identify the backed-up instance, fetch the relevant recovery point by using the `Get-AzDataProtectionRecoveryPoint` cmdlet.
+
+ ```azurepowershell
+    $recoveryPointsCrr = Get-AzDataProtectionRecoveryPoint -BackupInstanceName $instance.Name -ResourceGroupName $resourceGroupName -VaultName $vaultName -SubscriptionId $subscriptionId -UseSecondaryRegion
+ ```
+
+3. Prepare the restore request.
+
+ To restore the database, follow one of the following methods:
+
+ **Restore as database**
+
+ Follow these steps:
+
+ 1. Create the Azure Resource Manager ID for the new PostgreSQL database. You need to create this with the [target PostgreSQL server to which permissions are assigned](/azure/backup/restore-postgresql-database-ps#set-up-permissions). Additionally, create the required *PostgreSQL database name*.
+
+ For example, you can name a PostgreSQL database as `emprestored21` under a target PostgreSQL server `targetossserver` in a resource group `targetrg` with a different subscription.
+
+ ```azurepowershell
+        $targetResourceId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.DBforPostgreSQL/servers/targetossserver/databases/emprestored21"
+ ```
+    2. Use the `Initialize-AzDataProtectionRestoreRequest` cmdlet to prepare the restore request with relevant details.
+
+ ```azurepowershell
+ $OssRestoreReq = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDatabaseForPostgreSQL -SourceDataStore VaultStore -RestoreLocation $vault.ReplicatedRegion[0] -RestoreType AlternateLocation -RecoveryPoint $recoveryPointsCrr[0].Property.RecoveryPointId -TargetResourceId $targetResourceId -SecretStoreURI $secretURI -SecretStoreType AzureKeyVault
+ ```
+
+ **Restore as files**
+
+ Follow these steps:
+
+ 1. Fetch the *Uniform Resource Identifier (URI)* of the container, in the [storage account to which permissions are assigned](/azure/backup/restore-postgresql-database-ps#set-up-permissions).
+
+ For example, a container named `testcontainerrestore` under a storage account `testossstorageaccount` with a different subscription.
+
+ ```azurepowershell
+        $targetContainerURI = "https://testossstorageaccount.blob.core.windows.net/testcontainerrestore"
+ ```
+
+    2. Use the `Initialize-AzDataProtectionRestoreRequest` cmdlet to prepare the restore request with relevant details.
+
+ ```azurepowershell
+ $OssRestoreReq = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDatabaseForPostgreSQL -SourceDataStore VaultStore -RestoreLocation $vault.ReplicatedRegion[0] -RestoreType RestoreAsFiles -RecoveryPoint $recoveryPointsCrr[0].Property.RecoveryPointId -TargetContainerURI $targetContainerURI -FileNamePrefix $fileNamePrefix
+ ```
+
+## Validate the PostgreSQL database restore configuration
+
+To validate the probability of success for the restore operation, run the following cmdlet:
+
+```azurepowershell
+$validate = Test-AzDataProtectionBackupInstanceRestore -ResourceGroupName $ResourceGroupName -Name $instance[0].Name -VaultName $VaultName -RestoreRequest $OssRestoreReq -SubscriptionId $SubscriptionId -RestoreToSecondaryRegion #-Debug
+```
+
+## Trigger the restore operation
+
+To trigger the restore operation, run the following cmdlet:
+
+```azurepowershell
+$restoreJob = Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $instance.Name -ResourceGroupName $ResourceGroupName -VaultName $vaultName -SubscriptionId $SubscriptionId -Parameter $OssRestoreReq -RestoreToSecondaryRegion # -Debug
+```
+
+## Track the restore job
+
+To monitor the restore job progress, choose one of the methods:
+
+- To get the complete list of Cross Region Restore jobs from the secondary region, run the following cmdlet:
+
+ ```azurepowershell
+ $job = Get-AzDataProtectionJob -ResourceGroupName $resourceGroupName -SubscriptionId $subscriptionId -VaultName $vaultName -UseSecondaryRegion
+ ```
+
+- To get a single job detail, run the following cmdlet:
+
+ ```azurepowershell
+    # Fetch the Cross Region Restore jobs from the secondary region, then pick the job you want (the first one here) to view its details.
+    $job = Get-AzDataProtectionJob -ResourceGroupName $resourceGroupName -SubscriptionId $subscriptionId -VaultName $vaultName -UseSecondaryRegion
+    $job[0] | Format-List
+ ```
+
+## Next steps
+
+- Learn how to [configure and run Cross Region Restore for Azure database for PostgreSQL](tutorial-cross-region-restore.md).
backup Restore Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql.md
Title: Restore Azure Database for PostgreSQL description: Learn about how to restore Azure Database for PostgreSQL backups. Previously updated : 01/21/2022 Last updated : 02/01/2024
az role assignment create --assignee $VaultMSI_AppId --role "Storage Blob Data
``` Replace the assignee parameter with the _Application ID_ of the vault's MSI and the scope parameter to refer to your specific container. To get the **Application ID** of the vault MSI, select **All applications** under **Application type**. Search for the vault name and copy the Application ID.
- :::image type="content" source="./media/restore-azure-database-postgresql/select-application-type-for-id-inline.png" alt-text="Screenshot showing the process to get the Application I D of the vault MSI." lightbox="./media/restore-azure-database-postgresql/select-application-type-for-id-expanded.png":::
+ :::image type="content" source="./media/restore-azure-database-postgresql/select-application-type-for-id-inline.png" alt-text="Screenshot showing the process to get the Application ID of the vault MSI." lightbox="./media/restore-azure-database-postgresql/select-application-type-for-id-expanded.png":::
- :::image type="content" source="./media/restore-azure-database-postgresql/copy-vault-id-inline.png" alt-text="Screenshot showing the process to copy the Application I D of the vault." lightbox="./media/restore-azure-database-postgresql/copy-vault-id-expanded.png":::
+ :::image type="content" source="./media/restore-azure-database-postgresql/copy-vault-id-inline.png" alt-text="Screenshot showing the process to copy the Application ID of the vault." lightbox="./media/restore-azure-database-postgresql/copy-vault-id-expanded.png":::
+## Restore databases across regions
+
+As one of the restore options, Cross Region Restore (CRR) allows you to restore Azure Database for PostgreSQL servers in a secondary region, which is an Azure-paired region.
+
+### Considerations
+
+- To begin using the feature, read the [Before you start](create-manage-backup-vault.md#before-you-start) section.
+- To check if Cross Region Restore is enabled, see the [Configure Cross Region Restore](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal) section.
++
+### View backup instances in secondary region
+
+If CRR is enabled, you can view the backup instances in the secondary region.
+
+1. From the [Azure portal](https://portal.azure.com/), go to **Backup Vault** > **Backup Instances**.
+1. Select the filter as **Instance Region == Secondary Region**.
++
+ :::image type="content" source="./media/create-manage-backup-vault/select-secondary-region-as-instance-region.png" alt-text="Screenshot showing the selection of the secondary region as the instance region." lightbox="./media/create-manage-backup-vault/select-secondary-region-as-instance-region.png":::
+
+ >[!Note]
+    > Only Backup Management Types supporting the CRR feature are listed. Currently, restoring primary region data to a secondary region is supported only for PostgreSQL servers.
++
+### Restore in secondary region
+
+The secondary region restore experience is similar to the primary region restore.
+
+When you configure details in the **Restore Configuration** pane, you're prompted to provide only secondary region parameters. So, a vault should already exist in the secondary region and the PostgreSQL server should be registered to the vault in the secondary region.
+
+Follow these steps:
++
+1. Select **Backup Instance name** to view details.
+2. Select **Restore to secondary region**.
+
+ :::image type="content" source="./media/create-manage-backup-vault/restore-to-secondary-region.png" alt-text="Screenshot showing how to restore to secondary region." lightbox="./media/create-manage-backup-vault/restore-to-secondary-region.png":::
+
+1. Select the restore point, the region, and the resource group.
+1. Select **Restore**.
+ >[!Note]
+ > - After the restore is triggered in the data transfer phase, the restore job can't be canceled.
+    > - The roles required to perform a cross-region restore operation are the *Backup Operator* role in the subscription and *Contributor (write)* access on the source and target virtual machines. To view backup jobs, *Backup Reader* is the minimum permission required in the subscription.
+    > - The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (which can be set to a minimum of 15 minutes).
++
+### Monitoring secondary region restore jobs
+
+1. In the Azure portal, go to **Monitoring + Reporting** > **Backup Jobs**.
+2. Filter **Instance Region** to **Secondary Region** to view the jobs in the secondary region.
+
+ :::image type="content" source="./media/create-manage-backup-vault/view-jobs-in-secondary-region.png" alt-text="Screenshot showing how to view jobs in secondary region." lightbox="./media/create-manage-backup-vault/view-jobs-in-secondary-region.png":::
++ ## Next steps [Troubleshoot PostgreSQL database backup by using Azure Backup](backup-azure-database-postgresql-troubleshoot.md)
backup Tutorial Cross Region Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-cross-region-restore.md
+
+ Title: Tutorial - Configure and run Cross Region Restore for Azure Database for PostgreSQL
+description: Learn how to configure and run Cross Region Restore for Azure Database for PostgreSQL using Azure Backup.
+ Last updated : 02/01/2024++++
+# Tutorial: Configure and run Cross Region Restore for Azure Database for PostgreSQL by using Azure Backup
+
+This tutorial describes how to enable and run Cross Region Restore to restore Azure Database for PostgreSQL servers in a secondary region.
+
+The Cross Region Restore option allows you to restore data in a secondary [Azure paired region](/azure/availability-zones/cross-region-replication-azure) even when no outage occurs in the primary region; thus, enabling you to perform drills to assess regional resiliency.
+
+> [!NOTE]
+>- Currently, a Geo-redundant Storage (GRS) vault with Cross Region Restore enabled can't be changed to Zone-redundant Storage (ZRS) or Locally-redundant Storage (LRS) after the protection starts for the first time.
+>- Secondary region Recovery Point Objective (RPO) is currently *36 hours*. This is because the RPO in the primary region is 24 hours, and it can take up to 12 hours to replicate the backup data from the primary to the secondary region.
+
+## Considerations
+
+Before you begin Cross Region Restore for PostgreSQL server, see the following information:
+
+- Cross Region Restore is supported only for a Backup vault that uses Storage Redundancy = Geo-redundant.
+- Azure Database for PostgreSQL servers are supported. You can restore databases or their files.
+- Review the [support matrix](./backup-support-matrix.md) for a list of supported managed types and regions.
+- The Cross Region Restore option incurs additional charges. [Learn more about pricing](https://azure.microsoft.com/pricing/details/backup/).
+- Once you enable Cross Region Restore, it might take up to 48 hours for the backup items to be available in secondary regions.
+- Review the [permissions required to use Cross Region Restore](backup-rbac-rs-vault.md#minimum-role-requirements-for-azure-vm-backup).
+
+A vault created with GRS redundancy includes the option to configure the Cross Region Restore feature. Every GRS vault has a banner that links to the documentation.
+
+## Enable Cross Region Restore on a Backup vault
+
+The Cross Region Restore option allows you to restore data in a secondary Azure paired region.
+
+To configure Cross Region Restore for the backup vault, follow these steps:
+
+1. Sign in to [Azure portal](https://portal.azure.com/).
+1. [Create a new Backup vault](create-manage-backup-vault.md#create-backup-vault) or choose an existing Backup vault.
+1. Enable Cross Region Restore:
+ 1. Select **Properties** (under Manage).
+ 1. Under **Vault Settings**, select **Update** for Cross Region Restore.
+ 1. Under **Cross Region Restore**, select **Enable**.
+
+ :::image type="content" source="./media/tutorial-cross-region-restore/update-for-cross-region-restore.png" alt-text="Screenshot showing the selection of update for cross region restore.":::
+
+ :::image type="content" source="./media/tutorial-cross-region-restore/enable-cross-region-restore.png" alt-text="Screenshot shows the Enable cross region restore option.":::
+
+## View backup instances in secondary region
+
+If CRR is enabled, you can view the backup instances in the secondary region.
+
+Follow these steps:
+
+1. From the [Azure portal](https://portal.azure.com/), go to your Backup vault.
+1. Select **Backup instances** under **Manage**.
+1. Select **Instance Region** == *Secondary Region* on the filters.
+
+ :::image type="content" source="./media/tutorial-cross-region-restore/backup-instances-secondary-region.png" alt-text="Screenshot showing the secondary region filter." lightbox="./media/tutorial-cross-region-restore/backup-instances-secondary-region.png":::
++
+## Restore the database to the secondary region
+
+To restore the database to the secondary region, follow these steps:
+
+1. Go to the Backup vault's **Overview** pane, and then configure a backup for the PostgreSQL database.
+ > [!Note]
+ > Once the backup is complete in the primary region, it can take up to 12 hours for the recovery point in the primary region to get replicated to the secondary region.
+1. To check the availability of the recovery point in the secondary region, go to the **Backup center** > **Backup Instances**.
+1. Filter to **Azure Database for PostgreSQL servers**, then filter Instance region as **Secondary Region**, and then select the required Backup Instance.
+ :::image type="content" source="./media/create-manage-backup-vault/view-jobs-in-secondary-region.png" alt-text="Screenshot showing how to view jobs in secondary region." lightbox="./media/create-manage-backup-vault/view-jobs-in-secondary-region.png":::
+
+ The recovery points available in the secondary region are now listed.
+
+1. Select **Restore to secondary region**.
+
+ You can also trigger restores from the respective backup instance.
+1. Select **Restore to secondary region** to review the selected target region, and then select the appropriate recovery point and restore parameters.
+1. Once the restore starts, you can monitor the completion of the restore operation under **Backup Jobs** of the Backup vault by filtering the jobs workload type to **Azure Database for PostgreSQL servers** and the instance region to **Secondary Region**.
++
+## Next steps
+
+For more information about backup and restore with Cross Region Restore, see:
+
+- [Cross Region Restore for PostgreSQL Servers](create-manage-backup-vault.md#perform-cross-region-restore-using-azure-portal).
+- [Restore Azure Database for PostgreSQL backups](./restore-azure-database-postgresql.md).
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about the new features in the Azure Backup service. Previously updated : 12/25/2023 Last updated : 02/01/2024 - ignite-2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- January 2024
+ - [Cross Region Restore support for PostgreSQL by using Azure Backup is now generally available](#cross-region-restore-support-for-postgresql-by-using-azure-backup-is-now-generally-available)
- December 2023 - [Vaulted backup and Cross Region Restore for support for AKS (preview)](#vaulted-backup-and-cross-region-restore-for-support-for-aks-preview) - November 2023
You can learn more about the new releases by bookmarking this page or by [subscr
- [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Cross Region Restore support for PostgreSQL by using Azure Backup is now generally available
+
+Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region. However, Cross Region Restore enables you to access and perform restores from the secondary region recovery points even when no outage occurs in the primary region; thus, enabling you to perform drills to assess regional resiliency.
+
+For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup).
+ ## Vaulted backup and Cross Region Restore for support for AKS (preview) Azure Backup supports storing AKS backups offsite, which is protected against tenant compromise, malicious attacks and ransomware threats. Along with backup stored in a vault, you can also use the backups in a regional disaster scenario and recover backups.
For more information, see [Save and manage MARS agent passphrase securely in Azu
You can now restore data from the secondary region for MARS Agent backups using Cross Region Restore on Recovery Services vaults with Geo-redundant storage (GRS) replication. You can use this capability to do recovery drills from the secondary region for audit or compliance. If disasters cause partial or complete unavailability of the primary region, you can directly access the backup data from the secondary region.
-For more information, see [Cross Region Restore for MARS (preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore-preview).
+For more information, see [Cross Region Restore for MARS (preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore).
## SAP HANA System Replication database backup support is now generally available
For more information, see [Back up a HANA system with replication enabled](sap-h
Azure Backup allows you to replicate your backups to an additional Azure paired region by using Geo-redundant Storage (GRS) to protect your backups from regional outages. When you enable the backups with GRS, the backups in the secondary region become accessible only when Microsoft declares an outage in the primary region.
-For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup-preview).
+For more information, see [Cross Region Restore support for PostgreSQL using Azure Backup](backup-vault-overview.md#cross-region-restore-support-for-postgresql-using-azure-backup).
## Microsoft Azure Backup Server v4 is now generally available
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
### Phone number leasing charges |Number type |Monthly fee | |--|--|
+|Geographic |USD 3.00/mo |
|Toll-Free |USD 16.00/mo | + ### Usage charges |Number type |To make calls* |To receive calls| |-||-|
communication-services Migrating To Azure Communication Services Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/migrating-to-azure-communication-services-calling.md
Azure Communication Services offers various call types. The type of call you cho
## Installation
-### Install the Azure Communication Services calling SDK
+### Install the Azure Communication Services Calling SDK
Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript. ```console
Call creation and start are synchronous. The `call` instance enables you to subs
call.on('stateChanged', async () => { console.log(`Call state changed: ${call.state}`) }); ```
-### Azure Communication Services 1:1 Call
+#### 1:1 Call
To call another Azure Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's `CommunicationUserIdentifier` that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md). ```javascript
const userCallee = { communicationUserId: '\<Azure_Communication_Services_USER_I
const oneToOneCall = callAgent.startCall([userCallee]); ```
-### Azure Communication Services Room Call
+#### Rooms Call
To join a `Room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the `join` method and pass the context instance. ```javascript
const call = callAgent.join(context);
``` A **Room** offers application developers better control over who can join a call, when they meet and how they collaborate. To learn more about **Rooms**, see the [Rooms overview](../concepts/rooms/room-concept.md), or see [Quickstart: Join a room call](../quickstarts/rooms/join-rooms-call.md).
-### Azure Communication Services Group Call
+#### Group Call
To start a new group call or join an ongoing group call, use the `join` method and pass an object with a `groupId` property. The `groupId` value must be a GUID. ```javascript
const context = { groupId: '<GUID>'};
const call = callAgent.join(context); ```
-### Azure Communication Services Teams call
+#### Teams call
Start a synchronous one-to-one or group call using the `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events. ```javascript
callAgent.on('callsUpdated', (event) => {
For Azure Communication Services Teams implementation, see how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call).
-## Adding participants to call
+## Adding and removing participants in a call
### Twilio
remoteParticipant.on('stateChanged', () => {
}); ```
-## Video
+## Video calling
-### Starting and stopping video
+## Starting and stopping video
-#### Twilio
+### Twilio
```javascript const videoTrack = await twilioVideo.createLocalVideoTrack({ constraints });
localParticipant.unpublishTrack(videoTrack);
Then create a new Video Track with the correct constraints.
-#### Azure Communication Services
+### Azure Communication Services
To start a video while on a call, you need to enumerate cameras using the `getCameras` method on the `deviceManager` object. Then create a new instance of `LocalVideoStream` with the desired camera and pass the `LocalVideoStream` object into the `startVideo` method of an existing call object: ```javascript
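+
+The following is a minimal illustrative sketch, not the article's exact snippet; `deviceManager` and `call` are assumed to come from the earlier setup steps, and `LocalVideoStream` comes from `@azure/communication-calling`:
+
+```javascript
+// Enumerate the available cameras and pick one (the first camera is used here for brevity).
+const cameras = await deviceManager.getCameras();
+const camera = cameras[0];
+
+// Wrap the camera in a LocalVideoStream and start sending video on the existing call.
+const localVideoStream = new LocalVideoStream(camera);
+await call.startVideo(localVideoStream);
+
+// Stop sending video when you're done.
+await call.stopVideo(localVideoStream);
+```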
await blurProcessor.loadModel();
``` As soon as the model is loaded, you can add the background to the video track using the `addProcessor` method:-
-| videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' }); |
-||
+```javascript
+videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' });
+```
#### Azure Communication Services
-Use the npm install command to install the Azure Communication Services Effects SDK for JavaScript.
+Use the `npm install` command to install the [Azure Communication Services Effects SDK](../quickstarts/voice-video-calling/get-started-video-effects.md?pivots=platform-web) for JavaScript.
```console npm install @azure/communication-calling-effects --save ```
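+
+A minimal sketch of applying a background blur with that package (illustrative only; it assumes an existing `localVideoStream` from the earlier video steps and follows the pattern in the linked quickstart):
+
+```javascript
+import { Features } from '@azure/communication-calling';
+import { BackgroundBlurEffect } from '@azure/communication-calling-effects';
+
+// Get the video effects feature API from the local video stream.
+const videoEffectsFeature = localVideoStream.feature(Features.VideoEffects);
+
+// Create and start the background blur effect on the outgoing video.
+const backgroundBlurEffect = new BackgroundBlurEffect();
+await videoEffectsFeature.startEffects(backgroundBlurEffect);
+
+// Stop all running effects when they're no longer needed.
+await videoEffectsFeature.stopEffects();
+```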
You can learn more about ensuring precall readiness in [Pre-Call diagnostics](..
## Event listeners
-Twilio
+### Twilio
```javascript twilioRoom.on('participantConnected', (participant) => {
connectors Compare Built In Azure Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/compare-built-in-azure-connectors.md
ms.suite: integration
Last updated 01/04/2024-
-# As a developer, I want to understand the differences between built-in and Azure connectors in Azure Logic Apps (Standard).
+# Customer intent: As a developer, I want to understand the differences between built-in and Azure connectors in Azure Logic Apps (Standard).
# Differences between built-in operations and Azure connectors in Azure Logic Apps (Standard)
connectors Connectors Azure Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-application-insights.md
Last updated 01/10/2024 tags: connectors
-# As a developer, I want to get telemetry from an Application Insights resource to use with my workflow in Azure Logic Apps.
+# Customer intent: As a developer, I want to get telemetry from an Application Insights resource to use with my workflow in Azure Logic Apps.
# Connect to Azure Application Insights from workflows in Azure Logic Apps
connectors Connectors Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-monitor-logs.md
Last updated 01/10/2024 tags: connectors
-# As a developer, I want to get log data from my Log Analytics workspace or telemetry from my Application Insights resource to use with my workflow in Azure Logic Apps.
+# Customer intent: As a developer, I want to get log data from my Log Analytics workspace or telemetry from my Application Insights resource to use with my workflow in Azure Logic Apps.
# Connect to Log Analytics or Application Insights from workflows in Azure Logic Apps
connectors Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/introduction.md
Last updated 01/10/2024
-# As a developer, I want to learn how connectors help me access data, events, and resources in other apps, services, systems, and platforms from my workflow in Azure Logic Apps.
+# Customer intent: As a developer, I want to learn how connectors help me access data, events, and resources in other apps, services, systems, and platforms from my workflow in Azure Logic Apps.
# What are connectors in Azure Logic Apps
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
Subnet address ranges can't overlap with the following ranges reserved by Azure
- 172.31.0.0/16 - 192.0.2.0/24
+If you created your container apps environment with a custom service CIDR, make sure your container app's subnet (or any peered subnet) doesn't conflict with your custom service CIDR range.
+ ### Subnet configuration with CLI
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
Your virtual networks in Azure have default route tables in place when you creat
| Setting | Action | |--|--|
- | **Address prefix** | Select the virtual network for your container app. |
- | **Next hop type** | Select the subnet your for container app. |
+ | **Virtual network** | Select the virtual network for your container app. |
+ | **Subnet** | Select the subnet for your container app. |
1. Select **OK**.
container-registry Container Registry Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-streaming.md
Last updated 12/14/2023
-#customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
+# Customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
# Artifact streaming in Azure Container Registry (Preview)
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
az rest `
--url $endpoint ` --body "{}"
+```
+ #### [API for MongoDB](#tab/mongodb/azure-powershell) + For **provisioned throughput** containers, use `Invoke-AzCosmosDBMongoDBCollectionMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation. ```azurepowershell-interactive+ $parameters = @{ ResourceGroupName = "<resource-group-name>" AccountName = "<cosmos-account-name>"
$parameters = @{
Name = "<cosmos-container-name>" WhatIf = $true }+ Invoke-AzCosmosDBMongoDBCollectionMerge @parameters ``` Start the merge by running the same command without the `-WhatIf` parameter. - ```azurepowershell-interactive $parameters = @{ ResourceGroupName = "<resource-group-name>"
az cosmosdb mongodb database merge \
```
-```http-interactive
-POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/mongodbDatabases/{databaseName}/partitionMerge?api-version=2023-11-15-preview
-```
--
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
The **Access control (IAM)** pane in the Azure portal is used to configure Azure
:::image type="content" source="./media/role-based-access-control/database-security-identity-access-management-rbac.png" alt-text="Access control (IAM) in the Azure portal - demonstrating database security."::: + ## Custom roles In addition to the built-in roles, users may also create [custom roles](../role-based-access-control/custom-roles.md) in Azure and apply these roles to service principals across all subscriptions within their Active Directory tenant. Custom roles provide users a way to create Azure role definitions with a custom set of resource provider operations. To learn which operations are available for building custom roles for Azure Cosmos DB see, [Azure Cosmos DB resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftdocumentdb)
In addition to the built-in roles, users may also create [custom roles](../role-
> [!NOTE] > Custom role assignments may not always be visible in the Azure portal.
+> [!WARNING]
+> Account keys aren't automatically rotated or revoked after management RBAC changes. These keys give access to data plane operations. When you remove a user's access to the keys, it's recommended to rotate the keys as well. For data plane RBAC, the Azure Cosmos DB backend rejects requests once the roles or claims no longer match. If a user requires temporary access to data plane operations, it's recommended to use [Azure Cosmos DB RBAC](how-to-setup-rbac.md) for the data plane.
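+
+As an illustration only (not part of the original article), key rotation can be scripted with the Azure CLI. The resource group and account names below are placeholders:
+
+```azurecli-interactive
+# Regenerate the primary key for the account; repeat with other --key-kind values as needed.
+az cosmosdb keys regenerate \
+    --resource-group "<resource-group-name>" \
+    --name "<cosmos-account-name>" \
+    --key-kind primary
+```
+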
+ ## <a id="prevent-sdk-changes"></a>Preventing changes from the Azure Cosmos DB SDKs The Azure Cosmos DB resource provider can be locked down to prevent any changes to resources from a client connecting using the account keys (that is applications connecting via the Azure Cosmos DB SDK). This feature may be desirable for users who want higher degrees of control and governance for production environments. Preventing changes from the SDK also enables features such as resource locks and diagnostic logs for control plane operations. The clients connecting from Azure Cosmos DB SDK will be prevented from changing any property for the Azure Cosmos DB accounts, databases, containers, and throughput. The operations involving reading and writing data to Azure Cosmos DB containers themselves are not impacted.
cost-management-billing Migrate Ea Marketplace Store Charge Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-marketplace-store-charge-api.md
+
+ Title: Migrate from EA Marketplace Store Charge API
+
+description: This article has information to help you migrate from the EA Marketplace Store Charge API.
++ Last updated : 01/31/2024++++++
+# Migrate from EA Marketplace Store Charge API
+
+EA customers who were previously using the Enterprise Reporting consumption.azure.com API to [get their marketplace store charges](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge) need to migrate to a replacement Azure Resource Manager API. This article helps you migrate by using the following instructions. It also explains the contract differences between the old API and the new API.
+
+Endpoints to migrate off:
+
+|Endpoint|API Comments|
+|---|---|
+| `/v3/enrollments/{enrollmentNumber}/marketplacecharges` | • API method: GET <br><br> • Synchronous (non polling) <br><br> • Data format: JSON |
+| `/v3/enrollments/{enrollmentNumber}/billingPeriods/{billingPeriod}/marketplacecharges` | • API method: GET <br><br> • Synchronous (non polling) <br><br> • Data format: JSON |
+| `/v3/enrollments/{enrollmentNumber}/marketplacechargesbycustomdate?startTime=2017-01-01&endTime=2017-01-10` | • API method: GET <br><br> • Synchronous (non polling) <br><br> • Data format: JSON |
+
+## Assign permissions to an SPN to call the API
+
+Before calling the API, you need to configure a service principal with the correct permission. You use the service principal to call the API. For more information, see [Assign permissions to ACM APIs](cost-management-api-permissions.md).
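+
+For illustration, after the service principal has been granted access, you can sign in with it from the Azure CLI and then call the API with that identity (for example, through `az rest`). The IDs and secret below are placeholders:
+
+```azurecli-interactive
+# Sign in as the service principal that was granted Cost Management API permissions.
+az login --service-principal \
+    --username "<app-id>" \
+    --password "<client-secret>" \
+    --tenant "<tenant-id>"
+```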
+
+### Call the Marketplaces API
+
+Use the following request URIs when calling the new Marketplaces API. All Azure and Marketplace charges are merged into a single file that is available through the new solutions. You can identify which charges are *Azure* versus *Marketplace* charges by using the `PublisherType` field that is available in the new dataset.
+
+Your enrollment number should be used as the `billingAccountId`.
+
+#### Supported requests
+
+You can call the API using the following scopes:
+
+- Department: `/providers/Microsoft.Billing/departments/{departmentId}`
+- Enrollment: `/providers/Microsoft.Billing/billingAccounts/{billingAccountId}`
+- EnrollmentAccount: `/providers/Microsoft.Billing/enrollmentAccounts/{enrollmentAccountId}`
+- Management Group: `/providers/Microsoft.Management/managementGroups/{managementGroupId}`
+- Subscription: `/subscriptions/{subscriptionId}/`
+
+For subscription, billing account, department, enrollment account, and management group scopes you can also add a billing period to the scope using `/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}`. For example, to specify a billing period at the department scope, use `/providers/Microsoft.Billing/departments/{departmentId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}`.
+
+[List Marketplaces](/rest/api/consumption/marketplaces/list#marketplaceslistresult)
+
+```http
+GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/marketplaces
+```
+
+With optional parameters:
+
+```http
+https://management.azure.com/{scope}/providers/Microsoft.Consumption/marketplaces?$filter={$filter}&$top={$top}&$skiptoken={$skiptoken}
+```
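+
+For example, a request that lists marketplace charges for a single billing period at the enrollment scope might look like the following sketch. The enrollment number `1234567` and billing period `201702` are placeholders, not values from this article, and the `$filter`/`$top` parameters shown earlier can be appended in the same way:
+
+```http
+GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/1234567/providers/Microsoft.Billing/billingPeriods/201702/providers/Microsoft.Consumption/marketplaces
+```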
+
+#### Response body changes
+
+Old response:
++
+```json
+[
+ {
+ "id": "id",
+ "subscriptionGuid": "00000000-0000-0000-0000-000000000000",
+ "subscriptionName": "subName",
+ "meterId": "2core",
+ "usageStartDate": "2015-09-17T00:00:00Z",
+ "usageEndDate": "2015-09-17T23:59:59Z",
+ "offerName": "Virtual LoadMaster&trade; (VLM) for Azure",
+ "resourceGroup": "Res group",
+ "instanceId": "id",
+ "additionalInfo": "{\"ImageType\":null,\"ServiceType\":\"Medium\"}",
+ "tags": "",
+ "orderNumber": "order",
+ "unitOfMeasure": "",
+ "costCenter": "100",
+ "accountId": 100,
+ "accountName": "Account Name",
+ "accountOwnerId": "account@live.com",
+ "departmentId": 101,
+ "departmentName": "Department 1",
+ "publisherName": "Publisher 1",
+ "planName": "Plan name",
+ "consumedQuantity": 1.15,
+ "resourceRate": 0.1,
+ "extendedCost": 1.11,
+ "isRecurringCharge": "False"
+ },
+ ...
+ ]
+```
+
+New response:
+
+```json
+ {
+ "id": "/subscriptions/subid/providers/Microsoft.Billing/billingPeriods/201702/providers/Microsoft.Consumption/marketPlaces/marketplacesId1",
+ "name": "marketplacesId1",
+ "type": "Microsoft.Consumption/marketPlaces",
+ "tags": {
+ "env": "newcrp",
+ "dev": "tools"
+ },
+ "properties": {
+ "accountName": "Account1",
+ "additionalProperties": "additionalProperties",
+ "costCenter": "Center1",
+ "departmentName": "Department1",
+ "billingPeriodId": "/subscriptions/subid/providers/Microsoft.Billing/billingPeriods/201702",
+ "usageStart": "2017-02-13T00:00:00Z",
+ "usageEnd": "2017-02-13T23:59:59Z",
+ "instanceName": "shared1",
+ "instanceId": "/subscriptions/subid/resourceGroups/Default-Web-eastasia/providers/Microsoft.Web/sites/shared1",
+ "currency": "USD",
+ "consumedQuantity": 0.00328,
+ "pretaxCost": 0.67,
+ "isEstimated": false,
+ "meterId": "00000000-0000-0000-0000-000000000000",
+ "offerName": "offer1",
+ "resourceGroup": "TEST",
+ "orderNumber": "00000000-0000-0000-0000-000000000000",
+ "publisherName": "xyz",
+ "planName": "plan2",
+ "resourceRate": 0.24,
+ "subscriptionGuid": "00000000-0000-0000-0000-000000000000",
+ "subscriptionName": "azure subscription",
+ "unitOfMeasure": "10 Hours",
+ "isRecurringCharge": false
+ }
+ }
+
+```
+
+## Next steps
+
+- Read the [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](migrate-ea-reporting-arm-apis-overview.md) article.
cost-management-billing Migrate Ea Usage Details Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-usage-details-api.md
description: This article has information to help you migrate from the EA Usage Details APIs. Previously updated : 11/17/2023 Last updated : 01/30/2024
The table below summarizes the different APIs that you may be using today to ing
| `/v3/enrollments/{enrollmentNumber}/usagedetails/submit?billingPeriod={billingPeriod}` | - API method: POST<br> - Asynchronous (polling based)<br> - Data format: CSV | | `/v3/enrollments/{enrollmentNumber}/usagedetails/submit?startTime=2017-04-01&endTime=2017-04-10` | - API method: POST<br> - Asynchronous (polling based)<br> - Data format: CSV |
-## Enterprise Marketplace Store Charge APIs to migrate off
-
-In addition to the usage details APIs outlined above, you'll need to migrate off the [Enterprise Marketplace Store Charge APIs](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge). All Azure and Marketplace charges have been merged into a single file that is available through the new solutions. You can identify which charges are *Azure* versus *Marketplace* charges by using the `PublisherType` field that is available in the new dataset. The table below outlines the applicable APIs. All of the following APIs are behind the *https://consumption.azure.com* endpoint.
-
-| Endpoint | API Comments |
-| | |
-| `/v3/enrollments/{enrollmentNumber}/marketplacecharges` | - API method: GET<br> - Synchronous (non polling)<br> - Data format: JSON |
-| `/v3/enrollments/{enrollmentNumber}/billingPeriods/{billingPeriod}/marketplacecharges` | - API method: GET<br> - Synchronous (non polling)<br> - Data format: JSON |
-| `/v3/enrollments/{enrollmentNumber}/marketplacechargesbycustomdate?startTime=2017-01-01&endTime=2017-01-10` | - API method: GET<br> - Synchronous (non polling)<br> - Data format: JSON |
- ## Data field mapping The table below provides a summary of the old fields available in the solutions you're currently using along with the field to use in the new solutions.
cost-management-billing Tutorial Improved Exports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-improved-exports.md
+
+ Title: Tutorial - Improved exports experience - Preview
+description: This tutorial helps you create automatic exports for your actual and amortized costs in the Cost and Usage Specification standard (FOCUS) format.
++ Last updated : 01/31/2023++++++
+# Tutorial: Improved exports experience - Preview
+
+This tutorial helps you create automatic exports using the improved exports experience that can be enabled from [Cost Management labs](enable-preview-features-cost-management-labs.md#exports-preview) by selecting the **Exports (preview)** button. The improved Exports experience is designed to streamline your FinOps practice by automating the export of other cost-impacting datasets. The updated exports are optimized to handle large datasets while enhancing the user experience.
+
+Review [Azure updates](https://azure.microsoft.com/updates/) to see when the feature becomes generally available.
+
+## Improved functionality
+
+The improved Exports feature supports new datasets including price sheets, reservation recommendations, reservation details, and reservation transactions. Also, you can download cost and usage details using the open-source FinOps Open Cost and Usage Specification [FOCUS](https://focus.finops.org/) format. It combines actual and amortized costs and reduces data processing times and storage and compute costs.
+FinOps datasets are often large and challenging to manage. Exports improve file manageability, reduce download latency, and help save on storage and network charges with the following functionality:
+
+- File partitioning, which breaks the file into manageable smaller chunks.
+- File overwrite, which replaces the previous day's file with an updated file each day in daily export.
+
+The Exports feature has an updated user interface, which helps you to easily create multiple exports for various cost management datasets to Azure storage using a single, simplified create experience. Exports let you choose the latest or any of the earlier dataset schema versions when you create a new export. Supporting multiple versions ensures that the data processing layers that you built on for existing datasets are reused while you adopt the latest API functionality. You can selectively export historical data by rerunning an existing Export job for a historical period. So you don't have to create a new one-time export for a specific date range. You can enhance security and compliance by configuring exports to storage accounts behind a firewall. The Azure Storage firewall provides access control for the public endpoint of the storage account.
+
+## Prerequisites
+
+Data export is available for various Azure account types, including [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) and [Microsoft Customer Agreement (MCA)](get-started-partners.md) customers. To view the full list of supported account types, see [Understand Cost Management data](understand-cost-mgt-data.md). The following Azure permissions, or scopes, are supported per subscription for data export by user and group. For more information about scopes, see [Understand and work with scopes](understand-work-scopes.md).
+
+- Owner - Can create, modify, or delete scheduled exports for a subscription.
+- Contributor - Can create, modify, or delete their own scheduled exports. Can modify the name of scheduled exports created by others.
+- Reader - Can schedule exports that they have permission to.
+ - **For more information about scopes, including access needed to configure exports for Enterprise Agreement and Microsoft Customer agreement scopes, see [Understand and work with scopes](understand-work-scopes.md)**.
+
+For Azure Storage accounts:
+- Write permissions are required to change the configured storage account, independent of permissions on the export.
+- Your Azure storage account must be configured for blob or file storage.
+- Don't configure exports to a storage container that is configured as a destination in an [object replication rule](../../storage/blobs/object-replication-overview.md#object-replication-policies-and-rules).
+- To export to storage accounts with configured firewalls, you need other privileges on the storage account. The other privileges are only required during export creation or modification. They are:
+ - Owner role on the storage account.
+ Or
+ - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions.
+ Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.
+- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
+ :::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
+
+If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
+
+Enable the new Exports experience from Cost Management labs by selecting **Exports (preview)**. For more information about how to enable Exports (preview), see [Explore preview features](enable-preview-features-cost-management-labs.md#explore-preview-features). The preview feature is being deployed progressively.
+
+## Create exports
+
+You can create multiple exports of various data types using the following steps.
+
+### Choose a scope and navigate to Exports
+
+1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
+2. Search for **Cost Management**.
+3. Select a billing scope.
+4. In the left navigation menu, select **Exports**.
+ - **For Partners**: Sign in as a partner at the billing account scope or on a customer's tenant. Then you can export data to an Azure Storage account that is linked to your partner storage account. However, you must have an active subscription in your CSP tenant.
+5. Set the schedule frequency.
+
+### Create new exports
+
+On the Exports page, at the top of the page, select **+ Create**.
+
+### Fill in export details
+
+1. On the Add export page, select the **Type of data**, the **Dataset version**, and enter an **Export name**. Optionally, enter an **Export description**.
+2. For **Type of data**, when you select **Reservation recommendations**, select values for the other fields that appear:
+ - Reservation scope
+ - Resource type
+ - Look back period
+3. Depending on the **Type of data** and **Frequency** that you select, you might need to specify more fields to define the date range in UTC format.
+4. Select **Add** to see the export listed on the Basic tab.
++
+### Optionally add more exports
+
+You can create up to 10 exports when you select **+ Add new exports**.
+
+Select **Next** when you're ready to define the destination.
+
+### Define the export destination
+
+1. On the Destination tab, select the **Storage type**. The default is Azure blob storage.
+2. Specify your Azure storage account subscription. Choose an existing resource group or create a new one.
+3. Select the Storage account name or create a new one.
+4. If you create a new storage account, choose an Azure region.
+5. Specify the storage container and directory path for the export file.
+6. File partitioning is enabled by default. It splits large files into smaller ones.
+7. **Overwrite data** is enabled by default. For daily exports, it replaces the previous day's file with an updated file.
+8. Select **Next** to move to the **Review + create** tab.
++
+### Review and create
+
+Review your export configuration and make any necessary changes. When done, select **Review + create** to complete the process.
+
+## Manage exports
+
+You can view and manage your exports by navigating to the Exports page where a summary of details for each export appears, including:
+
+- Type of data
+- Schedule status
+- Data version
+- Last run time
+- Frequency
+- Storage account
+- Estimated next run date and time
+
+You can perform the following actions by selecting the ellipsis (**…**) on the right side of the page or by selecting the individual export.
+
+- Run now - Queues an unplanned export to run at the next available moment, regardless of the scheduled run time.
+- Export selected dates - Reruns an export for a historical date range instead of creating a new one-time export. You can extract up to 13 months of historical data in three-month chunks. This option isn't available for price sheets.
+- Disable - Temporarily suspends the export job.
+- Delete - Permanently removes the export.
+- Refresh - Updates the Run history.
++
+### Schedule frequency
+
+All types of data support various schedule frequency options, as described in the following table.
+
+| **Type of data** | **Frequency options** |
+| | |
+| Price sheet | • One-time export <br> • Current month <br> • Daily export of the current month |
+| Reservation details | • One-time export <br> • Daily export of month-to-date costs <br> • Monthly export of last month's costs |
+| Reservation recommendations | • One-time export <br> • Daily export |
+| Reservation transactions | • One-time export <br> • Daily export <br> • Monthly export of last month's data |
+| Cost and usage details (actual)<br> Cost and usage details (amortized) <br> Cost and usage details (FOCUS)<br> Cost and usage details (usage only) | • One-time export <br> • Daily export of month-to-date costs <br> • Monthly export of last month's costs <br> • Monthly export of last billing month's costs |
+
+## Understand data types
+
+- Cost and usage details (actual) - Select this option to export standard usage and purchase charges.
+- Cost and usage details (amortized) - Select this option to export amortized costs for purchases like Azure reservations and Azure savings plan for compute.
+- Cost and usage details (FOCUS) - Select this option to export cost and usage details using the open-source FinOps Open Cost and Usage Specification ([FOCUS](https://focus.finops.org/)) format. It combines actual and amortized costs. This format reduces data processing time and storage and compute charges for exports. The management group scope isn't supported for Cost and usage details (FOCUS) exports.
+- Cost and usage details (usage only) - Select this option to export standard usage charges without purchase information. Although you can't use this option when creating new exports, existing exports using this option are still supported.
+- Price sheet - Select this option to download your organization's Azure pricing.
+- Reservation details - Select this option to export the current list of all available reservations.
+- Reservation recommendations - Select this option to export the list of reservation recommendations, which help with rate optimization.
+- Reservation transactions - Select this option to export the list of all reservation purchases, exchanges, and refunds.
+
+Agreement types, scopes, and required roles are explained at [Understand and work with scopes](understand-work-scopes.md).
+
+| **Data types** | **Supported agreement** | **Supported scopes** |
+| | | |
+| Cost and usage (actual) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go (PAYG) <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • Microsoft Partner Agreement (MPA) - Customer, subscription, and resource group |
+| Cost and usage (amortized) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go (PAYG) <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • MPA - Customer, subscription, and resource group |
+| Cost and usage (FOCUS) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Enrollment, department, account, subscription, and resource group <br> • MCA - Billing account, billing profile, invoice section, subscription, and resource group <br> • MPA - Customer, subscription, resource group. **NOTE**: The management group scope isn't supported for Cost and usage details (FOCUS) exports. |
+| All available prices | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
+| Reservation recommendations | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
+| Reservation transactions | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
+| Reservation details | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
+
+## Limitations
+
+The improved exports experience currently has the following limitations.
+
+- The new Exports experience doesn't fully support the management group scope and it has feature limitations.
+- Azure internal and MOSP billing scopes and subscriptions don't support FOCUS datasets.
+- Shared access signature (SAS) key-based cross-tenant export is only supported for Microsoft partners at the billing account scope. It isn't supported for other partner scenarios, such as any other scope, EA indirect contracts, or Azure Lighthouse.
+
+## Next steps
+
+- Learn more about exports at [Tutorial: Create and manage exported data](tutorial-export-acm-data.md).
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
Microsoft Fabric Lakehouse connector supports the following file formats. Refer
- [JSON format](format-json.md) - [ORC format](format-orc.md) - [Parquet format](format-parquet.md)
+
+To use the Fabric Lakehouse file-based connector as an inline dataset type, you need to choose the right inline dataset type for your data. You can use DelimitedText, Avro, JSON, ORC, or Parquet, depending on your data format.
### Microsoft Fabric Lakehouse Table in mapping data flow
sink(allowSchemaDrift: true,
skipDuplicateMapOutputs: true) ~> CustomerTable ```
+For the Fabric Lakehouse table-based connector as an inline dataset type, use Delta as the dataset type. This allows you to read data from and write data to Fabric Lakehouse tables.
## Related content
defender-for-cloud Agentless Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-azure.md
Vulnerability assessment for Azure, powered by Microsoft Defender Vulnerability
> [!NOTE] > This feature supports scanning of images in the Azure Container Registry (ACR) only. Images that are stored in other container registries should be imported into ACR for coverage. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images).
-In every subscription where this capability is enabled, all images stored in ACR that meet the following criteria for scan triggers are scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ACR as well as images that are currently running in AKS that were pulled from an ACR registry. Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours.
+In every subscription where this capability is enabled, all images stored in ACR that meet the criteria for scan triggers are scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ACR as well as images that are currently running in AKS that were pulled from an ACR registry or any other Defender for Cloud supported registry (ECR, GCR, or GAR). Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours.
Container vulnerability assessment powered by Microsoft Defender Vulnerability Management has the following capabilities:
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
Previously updated : 01/14/2024 Last updated : 01/28/2024
Sensitive data discovery is available in the Defender CSPM, Defender for Storage
- Existing plan status shows as ΓÇ£PartialΓÇ¥ rather than ΓÇ£FullΓÇ¥ if one or more extensions aren't turned on. - The feature is turned on at the subscription level. - If sensitive data discovery is turned on, but Defender CSPM isn't enabled, only storage resources will be scanned.
+- If a subscription is enabled with Defender CSPM and you also scanned the same resources with Microsoft Purview, the Purview scan results are ignored, and Microsoft Defender for Cloud's scanning results are displayed for the supported resource type.
## What's supported
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
Previously updated : 10/26/2023 Last updated : 01/28/2024 # About data-aware security posture
defender-for-cloud Configure Servers Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-servers-coverage.md
Title: Configure monitoring coverage
+ Title: Configure Defender for Servers features
description: Learn how to configure the different monitoring components that are available in Defender for Servers in Microsoft Defender for Cloud. Previously updated : 01/25/2024 Last updated : 02/01/2024
-# Configure monitoring coverage
+# Configure Defender for Servers features
Microsoft Defender for Cloud's Defender for Servers plans contain components that monitor your environments to provide extended coverage on your servers. Each of these components can be enabled, disabled, or configured to meet your specific requirements.
Microsoft Defender for Cloud's Defender for Servers plans contains components th
When you enable Defender for Servers plan 2, all of these components are toggled to **On** by default.
+> [!NOTE]
+> The Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). All Defender for Servers features that depend on it, including those described on the [Enable Defender for Endpoint (Log Analytics)](endpoint-protection-recommendations-technical.md) page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md) before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+ ## Configure Log Analytics agent After enabling the Log Analytics agent, you'll be presented with the option to select which workspace should be utilized.
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
Title: Continuous export of alerts and recommendations to Log Analytics or Azure Event Hubs
-description: Learn how to configure continuous export of security alerts and recommendations to Log Analytics or Azure Event Hubs
+ Title: Set up continuous export of alerts and recommendations
+description: Learn how to set up continuous export of Microsoft Defender for Cloud security alerts and recommendations to Log Analytics in Azure Monitor or to Azure Event Hubs.
Last updated 06/19/2023
# Continuously export Microsoft Defender for Cloud data
-Microsoft Defender for Cloud generates detailed security alerts and recommendations. To analyze the information in these alerts and recommendations, you can export them to Azure Log Analytics, Event Hubs, or to another [SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md). You can stream the alerts and recommendations as they're generated or define a schedule to send periodic snapshots of all of the new data.
+Microsoft Defender for Cloud generates detailed security alerts and recommendations. To analyze the information that's in these alerts and recommendations, you can export them to Log Analytics in Azure Monitor, to Azure Event Hubs, or to another Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), or IT classic [deployment model solution](export-to-siem.md). You can stream the alerts and recommendations as they're generated or define a schedule to send periodic snapshots of all new data.
-With **continuous export**, you can fully customize what information to export and where it goes. For example, you can configure it so that:
+When you set up continuous export, you can fully customize what information to export and where the information goes. For example, you can configure it so that:
-- All high severity alerts are sent to an Azure event hub-- All medium or higher severity findings from vulnerability assessment scans of your SQL servers are sent to a specific Log Analytics workspace-- Specific recommendations are delivered to an event hub or Log Analytics workspace whenever they're generated-- The secure score for a subscription is sent to a Log Analytics workspace whenever the score for a control changes by 0.01 or more
+- All high-severity alerts are sent to an Azure event hub.
+- All medium or higher-severity findings from vulnerability assessment scans of your computers running SQL Server are sent to a specific Log Analytics workspace.
+- Specific recommendations are delivered to an event hub or Log Analytics workspace whenever they're generated.
+- The secure score for a subscription is sent to a Log Analytics workspace whenever the score for a control changes by 0.01 or more.
-This article describes how to configure continuous export to Log Analytics workspaces or Azure event hubs.
+This article describes how to set up continuous export to a Log Analytics workspace or to an event hub in Azure.
> [!TIP]
-> Defender for Cloud also offers the option to perform a one-time, manual export to CSV. Learn more in [Manual one-time export of alerts and recommendations](#manual-one-time-export-of-alerts-and-recommendations).
+> Defender for Cloud also offers the option to do a onetime, manual export to a comma-separated values (CSV) file. Learn more in [Manually export alerts and recommendations](#manually-export-alerts-and-recommendations).
## Availability |Aspect|Details| |-|:-|
-|Release state:|General availability (GA)|
+|Release status:|General availability (GA)|
|Pricing:|Free|
-|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the [Azure Policy 'DeployIfNotExist' policies](#configure-continuous-export-at-scale-using-the-supplied-policies), you need the permissions that allow you to assign policies</li><li>To export data to Event Hubs, you need Write permission on the Event Hubs Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions)</li></ul></li></ul>|
+|Required roles and permissions:|<ul><li>Security Admin or Owner for the resource group.</li><li>Write permissions for the target resource.</li><li>If you use the [Azure Policy DeployIfNotExist policies](#set-up-continuous-export-at-scale-by-using-provided-policies), you must have permissions that let you assign policies.</li><li>To export data to Event Hubs, you must have Write permissions on the Event Hubs policy.</li><li>To export to a Log Analytics workspace:<ul><li>If it *has the SecurityCenterFree solution*, you must have a minimum of Read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`.</li><li>If it *doesn't have the SecurityCenterFree solution*, you must have write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`.</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](/previous-versions/azure/azure-monitor/insights/solutions).</li></ul></li></ul>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)|

## What data types can be exported?
-Continuous export can export the following data types whenever they change:
+You can use continuous export to export the following data types whenever they change:
- Security alerts.
- Security recommendations.
-- Security findings. Findings can be thought of as 'sub' recommendations and belong to a 'parent' recommendation. For example:
- - The recommendations [System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f) and [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27) each has one 'sub' recommendation per outstanding system update.
- - The recommendation [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f) has a 'sub' recommendation for every vulnerability identified by the vulnerability scanner.
+- Security findings.
+
+ Findings can be thought of as "sub" recommendations and belong to a "parent" recommendation. For example:
+
+ - The recommendations [System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f) and [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27) each have one sub recommendation per outstanding system update.
+ - The recommendation [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f) has a sub recommendation for every vulnerability that the vulnerability scanner identifies.
+ > [!NOTE]
- > If you're configuring a continuous export with the REST API, always include the parent with the findings.
+ > If you're configuring continuous export by using the REST API, always include the parent with the findings.
+
- Secure score per subscription or per control.
- Regulatory compliance data.
-## Set up a continuous export
+<a name="set-up-a-continuous-export"></a>
+
+## Set up continuous export
+
+You can set up continuous export on the Microsoft Defender for Cloud pages in the Azure portal, by using the REST API, or at scale by using provided Azure Policy templates.
-You can configure continuous export from the Microsoft Defender for Cloud pages in Azure portal, via the REST API, or at scale using the supplied Azure Policy templates.
+### [Azure portal](#tab/azure-portal)
-### [**Use the Azure portal**](#tab/azure-portal)
+<a name="configure-continuous-export-from-the-defender-for-cloud-pages-in-azure-portal"></a>
-### Configure continuous export from the Defender for Cloud pages in Azure portal
+### Set up continuous export on the Defender for Cloud pages in the Azure portal
-If you're setting up a continuous export to Log Analytics or Azure Event Hubs:
+To set up a continuous export to Log Analytics or Azure Event Hubs by using the Azure portal:
-1. From Defender for Cloud's menu, open **Environment settings**.
+1. On the Defender for Cloud resource menu, select **Environment settings**.
-1. Select the specific subscription for which you want to configure the data export.
+1. Select the subscription that you want to configure data export for.
-1. From the sidebar of the settings page for that subscription, select **Continuous export**.
+1. In the resource menu under **Settings**, select **Continuous export**.
- :::image type="content" source="./media/continuous-export/continuous-export-options-page.png" alt-text="Export options in Microsoft Defender for Cloud." lightbox="./media/continuous-export/continuous-export-options-page.png":::
+ :::image type="content" source="./media/continuous-export/continuous-export-options-page.png" alt-text="Screenshot that shows the export options in Microsoft Defender for Cloud." lightbox="./media/continuous-export/continuous-export-options-page.png":::
- Here you see the export options. There's a tab for each available export target, either event hub or Log Analytics workspace.
+ The export options appear. There's a tab for each available export target, either event hub or Log Analytics workspace.
-1. Select the data type you'd like to export and choose from the filters on each type (for example, export only high severity alerts).
+1. Select the data type you'd like to export, and choose from the filters on each type (for example, export only high-severity alerts).
1. Select the export frequency:
- - **Streaming** – assessments are sent when a resource's health state is updated (if no updates occur, no data is sent).
- - **Snapshots** – a snapshot of the current state of the selected data types that are sent once a week per subscription. To identify snapshot data, look for the field ``IsSnapshot``.
- If your selection includes one of these recommendations, you can include the vulnerability assessment findings together with them:
+ - **Streaming**. Assessments are sent when a resource's health state is updated (if no updates occur, no data is sent).
+ - **Snapshots**. A snapshot of the current state of the selected data types is sent once a week per subscription. To identify snapshot data, look for the field **IsSnapshot**.
+
+ If your selection includes one of these recommendations, you can include the vulnerability assessment findings with them:
- [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37)
- [SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f97aa83c-9b63-4f9a-99f6-b22c4398f936)
- [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)
- [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f)
- [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27)
- To include the findings with these recommendations, enable the **include security findings** option.
+ To include the findings with these recommendations, set **Include security findings** to **Yes**.
- :::image type="content" source="./media/continuous-export/include-security-findings-toggle.png" alt-text="Include security findings toggle in continuous export configuration." :::
+ :::image type="content" source="./media/continuous-export/include-security-findings-toggle.png" alt-text="Screenshot that shows the Include security findings toggle in a continuous export configuration." :::
-1. From the "Export target" area, choose where you'd like the data saved. Data can be saved in a target of a different subscription (for example, on a Central Event Hubs instance or a central Log Analytics workspace).
+1. Under **Export target**, choose where you'd like the data saved. Data can be saved in a target of a different subscription (for example, in a central Event Hubs instance or in a central Log Analytics workspace).
- You can also send the data to an [Event hubs or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hubs-or-log-analytics-workspace-in-another-tenant).
+ You can also send the data to an [event hub or Log Analytics workspace in a different tenant](#export-data-to-an-event-hub-or-log-analytics-workspace-in-another-tenant).
1. Select **Save**.

> [!NOTE]
-> Log analytics supports records that are only up to 32KB in size. When the data limit is reached, you will see an alert telling you that the `Data limit has been exceeded`.
+> Log Analytics supports only records that are up to 32 KB in size. When the data limit is reached, an alert displays the message **Data limit has been exceeded**.
-### [**Use the REST API**](#tab/rest-api)
+### [REST API](#tab/rest-api)
-### Configure continuous export using the REST API
+### Set up continuous export by using the REST API
-Continuous export can be configured and managed via the Microsoft Defender for Cloud [automations API](/rest/api/defenderforcloud/automations). Use this API to create or update rules for exporting to any of the following possible destinations:
+You can set up and manage continuous export by using the Microsoft Defender for Cloud [automations API](/rest/api/defenderforcloud/automations). Use this API to create or update rules for exporting to any of the following destinations:
- Azure Event Hubs
- Log Analytics workspace
- Azure Logic Apps
-You can also send the data to an [Event Hubs or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hubs-or-log-analytics-workspace-in-another-tenant).
+You also can send the data to an [event hub or Log Analytics workspace in a different tenant](#export-data-to-an-event-hub-or-log-analytics-workspace-in-another-tenant).
-Here are some examples of options that you can only use in the API:
+Here are some examples of options that you can use only in the API:
-- **Greater volume** - You can create multiple export configurations on a single subscription with the API. The **Continuous Export** page in the Azure portal supports only one export configuration per subscription.
+- **Greater volume**: You can create multiple export configurations on a single subscription by using the API. The **Continuous Export** page in the Azure portal supports only one export configuration per subscription.
-- **Additional features** - The API offers parameters that aren't shown in the Azure portal. For example, you can add tags to your automation resource and define your export based on a wider set of alert and recommendation properties than the ones offered in the **Continuous Export** page in the Azure portal.
+- **Additional features**: The API offers parameters that aren't shown in the Azure portal. For example, you can add tags to your automation resource and define your export based on a wider set of alert and recommendation properties than the ones that are offered on the **Continuous export** page in the Azure portal.
-- **More focused scope** - The API provides a more granular level for the scope of your export configurations. When defining an export with the API, you can do so at the resource group level. If you're using the **Continuous Export** page in the Azure portal, you have to define it at the subscription level.
+- **Focused scope**: The API offers you a more granular level for the scope of your export configurations. When you define an export by using the API, you can define it at the resource group level. If you're using the **Continuous export** page in the Azure portal, you must define it at the subscription level.
> [!TIP]
- > These API-only options are not shown in the Azure portal. If you use them, there'll be a banner informing you that other configurations exist.
+ > These API-only options are not shown in the Azure portal. If you use them, a banner informs you that other configurations exist.
+
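As a concrete illustration of the automations API flow described in this tab, here's a minimal sketch in Python that creates an export rule sending security alerts to a Log Analytics workspace. The resource names and IDs are placeholders, and the request body and `api-version` are assumptions to verify against the automations API reference rather than a definitive implementation.

```python
# Minimal sketch (not a definitive implementation): create a continuous
# export automation with the Microsoft.Security/automations REST API.
# The api-version, property names, and all IDs below are assumptions or
# placeholders - verify them against the automations API reference.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"                              # placeholder
resource_group = "<resource-group>"                                # placeholder
automation_name = "exportAlertsToWorkspace"                        # any name
workspace_resource_id = "<log-analytics-workspace-resource-id>"    # placeholder

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Security"
    f"/automations/{automation_name}?api-version=2019-01-01-preview"
)

body = {
    "location": "westeurope",  # region of the automation resource
    "properties": {
        "isEnabled": True,
        # Scope: the subscription that the export rule applies to.
        "scopes": [{"scopePath": f"/subscriptions/{subscription_id}"}],
        # Source: export security alerts (other event sources exist).
        "sources": [{"eventSource": "Alerts"}],
        # Action: send the exported data to a Log Analytics workspace.
        "actions": [
            {
                "actionType": "Workspace",
                "workspaceResourceId": workspace_resource_id,
            }
        ],
    },
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json().get("id"))
```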
+### [Azure Policy](#tab/azure-policy)
-### [**Deploy at scale with Azure Policy**](#tab/azure-policy)
+<a name="configure-continuous-export-at-scale-using-the-supplied-policies"></a>
-### Configure continuous export at scale using the supplied policies
+### Set up continuous export at scale by using provided policies
-Automating your organization's monitoring and incident response processes can greatly improve the time it takes to investigate and mitigate security incidents.
+Automating your organization's monitoring and incident response processes can help you reduce the time it takes to investigate and mitigate security incidents.
-To deploy your continuous export configurations across your organization, use the supplied Azure Policy 'DeployIfNotExist' policies to create and configure continuous export procedures.
+To deploy your continuous export configurations across your organization, use the provided Azure Policy `DeployIfNotExist` policies to create and configure continuous export procedures.
-**To implement these policies**:
+To implement these policies:
-1. Select the policy you want to apply from this table:
+1. In the following table, choose a policy to apply:
|Goal |Policy |Policy ID |
||||
|Continuous export to Log Analytics workspace|[Deploy export to Log Analytics workspace for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fffb6f416-7bd2-4488-8828-56585fef2be9)|ffb6f416-7bd2-4488-8828-56585fef2be9|

> [!TIP]
- > You can also find these by searching Azure Policy:
+ > You can also find the policies by searching Azure Policy:
> > 1. Open Azure Policy.
- > :::image type="content" source="./media/continuous-export/opening-azure-policy.png" alt-text="Accessing Azure Policy.":::
- > 1. From the Azure Policy menu, select **Definitions** and search for them by name.
+ >
+ > :::image type="content" source="./media/continuous-export/opening-azure-policy.png" alt-text="Screenshot that shows accessing Azure Policy.":::
+ >
+ > 1. On the Azure Policy menu, select **Definitions** and search for the policies by name.
-1. From the relevant Azure Policy page, select **Assign**.
- :::image type="content" source="./media/continuous-export/export-policy-assign.png" alt-text="Assigning the Azure Policy.":::
+1. On the relevant page in Azure Policy, select **Assign**.
+
+ :::image type="content" source="./media/continuous-export/export-policy-assign.png" alt-text="Screenshot that shows assigning the Azure Policy.":::
+
+1. Select each tab and set the parameters to meet your requirements:
+
+ 1. On the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the management group that contains the subscriptions that use the continuous export configuration.
+
+ 1. On the **Parameters** tab, set the resource group and data type details.
-1. Open each tab and set the parameters as desired:
- 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that use continuous export configuration.
- 1. In the **Parameters** tab, set the resource group and data type details.
> [!TIP]
- > Each parameter has a tooltip explaining the options available to you.
+ > Each parameter has a tooltip that explains the options that are available.
+ >
+ > The Azure Policy **Parameters** tab (1) provides access to configuration options that are similar to options that you can access on the Defender for Cloud **Continuous export** page (2).
>
- > Azure Policy's parameters tab (1) provides access to similar configuration options as Defender for Cloud's continuous export page (2).
- > :::image type="content" source="./media/continuous-export/azure-policy-next-to-continuous-export.png" alt-text="Comparing the parameters in continuous export with Azure Policy." lightbox="./media/continuous-export/azure-policy-next-to-continuous-export.png":::
- 1. Optionally, to apply this assignment to existing subscriptions, open the **Remediation** tab and select the option to create a remediation task.
-1. Review the summary page and select **Create**.
+ > :::image type="content" source="./media/continuous-export/azure-policy-next-to-continuous-export.png" alt-text="Screenshot that shows comparing the parameters in continuous export with Azure Policy." lightbox="./media/continuous-export/azure-policy-next-to-continuous-export.png":::
+ >
+
+ 1. Optionally, to apply this assignment to existing subscriptions, select the **Remediation** tab, and then select the option to create a remediation task.
+
+1. Review the summary page, and then select **Create**.
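If you prefer to script the assignment instead of using the portal, the following is a minimal sketch that assigns the Log Analytics export policy from the preceding table at a management group scope by calling the Azure Resource Manager policy assignments API. The `api-version`, location, and empty parameters object are assumptions to adapt: supply the parameter values that the policy definition expects, and note that a `DeployIfNotExist` assignment created outside the portal also needs its managed identity granted the roles that the definition requires before remediation can run.

```python
# Sketch only: assign the "Deploy export to Log Analytics workspace ..."
# policy (definition ID taken from the table above) at management group scope.
# The api-version, location, and parameter values are assumptions to verify
# against the policy definition and the policy assignments API reference.
import requests
from azure.identity import DefaultAzureCredential

management_group_id = "<management-group-id>"        # placeholder
assignment_name = "continuous-export-to-workspace"   # any name
policy_definition_id = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "ffb6f416-7bd2-4488-8828-56585fef2be9"            # ID from the table above
)

scope = f"/providers/Microsoft.Management/managementGroups/{management_group_id}"
url = (
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Authorization/policyAssignments/{assignment_name}"
    "?api-version=2022-06-01"
)

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

body = {
    "location": "westeurope",                # required when an identity is used
    "identity": {"type": "SystemAssigned"},  # DeployIfNotExist needs an identity
    "properties": {
        "displayName": "Continuous export to Log Analytics workspace",
        "policyDefinitionId": policy_definition_id,
        # The parameter names depend on the policy definition; inspect the
        # definition and provide the values it expects (resource group,
        # workspace details, exported data types, and so on).
        "parameters": {},
    },
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json().get("id"))
```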
-## Exporting to a Log Analytics workspace
+## Export to a Log Analytics workspace
If you want to analyze Microsoft Defender for Cloud data inside a Log Analytics workspace or use Azure alerts together with Defender for Cloud alerts, set up continuous export to your Log Analytics workspace.

### Log Analytics tables and schemas
-Security alerts and recommendations are stored in the *SecurityAlert* and *SecurityRecommendation* tables respectively.
+Security alerts and recommendations are stored in the **SecurityAlert** and **SecurityRecommendation** tables, respectively.
-The name of the Log Analytics solution containing these tables depends on whether you've enabled the enhanced security features: Security ('Security and Audit') or SecurityCenterFree.
+The name of the Log Analytics solution that contains these tables depends on whether you enabled the enhanced security features: Security (the Security and Audit solution) or SecurityCenterFree.
> [!TIP]
-> To see the data on the destination workspace, you must enable one of these solutions **Security and Audit** or **SecurityCenterFree**.
+> To see the data on the destination workspace, you must enable one of these solutions: Security and Audit or SecurityCenterFree.
-![The *SecurityAlert* table in Log Analytics.](./media/continuous-export/log-analytics-securityalert-solution.png)
+![Screenshot that shows the SecurityAlert table in Log Analytics.](./media/continuous-export/log-analytics-securityalert-solution.png)
-To view the event schemas of the exported data types, visit the [Log Analytics table schemas](https://aka.ms/ASCAutomationSchemas).
+To view the event schemas of the exported data types, see [Log Analytics table schemas](https://aka.ms/ASCAutomationSchemas).
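As a quick way to confirm that exported data is arriving, you can query these tables directly. The following minimal sketch uses the `azure-monitor-query` library; the workspace ID is a placeholder, and the query is only an illustrative filter on the **SecurityAlert** table.

```python
# Sketch: query exported Defender for Cloud data in a Log Analytics workspace.
# Requires: pip install azure-monitor-query azure-identity
# The workspace ID is a placeholder; the query is an illustrative example.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
SecurityAlert
| where AlertSeverity == "High"
| project TimeGenerated, AlertName, AlertSeverity, Description
| take 20
"""

response = client.query_workspace(
    workspace_id="<workspace-id>",   # placeholder: the Log Analytics workspace GUID
    query=query,
    timespan=timedelta(days=7),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```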
-## Export data to an Azure Event Hubs or Log Analytics workspace in another tenant
+## Export data to an event hub or Log Analytics workspace in another tenant
-You ***cannot*** configure data to be exported to a log analytics workspace in another tenant when using Azure Policy to assign the configuration. This process only works with the REST API, and the configuration is unsupported in the Azure portal (due to requiring multitenant context). Azure Lighthouse ***does not*** resolve this issue with Policy, although you can use Lighthouse as the authentication method.
+You *can't* configure data to be exported to a Log Analytics workspace in another tenant if you use Azure Policy to assign the configuration. This process works only when you use the REST API to assign the configuration, and the configuration is unsupported in the Azure portal (because it requires a multitenant context). Azure Lighthouse *doesn't* resolve this issue with Azure Policy, although you can use Azure Lighthouse as the authentication method.
-When collecting data into a tenant, you can analyze the data from one central location.
+When you collect data in a tenant, you can analyze the data from one central location.
-To export data to an Azure Event Hubs or Log Analytics workspace in a different tenant:
+To export data to an event hub or Log Analytics workspace in a different tenant:
-1. In the tenant that has the Azure Event Hubs or Log Analytics workspace, [invite a user](../active-directory/external-identities/what-is-b2b.md#easily-invite-guest-users-from-the-azure-portal) from the tenant that hosts the continuous export configuration, or alternatively configure Azure Lighthouse for the source and destination tenant.
-1. If using Microsoft Entra B2B Guest access, ensure that the user accepts the invitation to access the tenant as a guest.
-1. If you're using a Log Analytics Workspace, assign the user in the workspace tenant one of these roles: Owner, Contributor, Log Analytics Contributor, Sentinel Contributor, or Monitoring Contributor.
-1. Create and submit the request to the Azure REST API to configure the required resources. You'll need to manage the bearer tokens in both the context of the local (workspace) and the remote (continuous export) tenant.
+1. In the tenant that has the event hub or Log Analytics workspace, [invite a user](../active-directory/external-identities/what-is-b2b.md#easily-invite-guest-users-from-the-azure-portal) from the tenant that hosts the continuous export configuration, or you can configure Azure Lighthouse for the source and destination tenant.
+1. If you use business-to-business (B2B) guest user access in Microsoft Entra ID, ensure that the user accepts the invitation to access the tenant as a guest.
+1. If you use a Log Analytics workspace, assign the user in the workspace tenant one of these roles: Owner, Contributor, Log Analytics Contributor, Sentinel Contributor, or Monitoring Contributor.
+1. Create and submit the request to the Azure REST API to configure the required resources. You must manage the bearer tokens in the context of both the local (workspace) tenant and the remote (continuous export) tenant, as shown in the sketch after these steps.
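The following is a minimal sketch, assuming a client app registration that can authenticate to both tenants, of how the two bearer tokens in the last step might be acquired with the `azure-identity` library. The tenant IDs, client ID, and secret are placeholders, and the continuous export configuration itself is then created with an automations API request like the one shown earlier in this article.

```python
# Sketch: acquire separate Azure Resource Manager tokens for the workspace
# (local) tenant and the continuous export (remote) tenant. The tenant IDs,
# client ID, and secret are placeholders for an app registration that can
# authenticate to both tenants (for example, through a guest invitation or
# Azure Lighthouse).
from azure.identity import ClientSecretCredential

ARM_SCOPE = "https://management.azure.com/.default"

workspace_tenant_credential = ClientSecretCredential(
    tenant_id="<workspace-tenant-id>",
    client_id="<app-client-id>",
    client_secret="<app-client-secret>",
)
export_tenant_credential = ClientSecretCredential(
    tenant_id="<continuous-export-tenant-id>",
    client_id="<app-client-id>",
    client_secret="<app-client-secret>",
)

workspace_token = workspace_tenant_credential.get_token(ARM_SCOPE).token
export_token = export_tenant_credential.get_token(ARM_SCOPE).token

# Use workspace_token for calls made in the workspace tenant (for example,
# validating the target workspace), and export_token for the automations API
# call that creates the continuous export configuration.
```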
## Continuously export to an event hub behind a firewall
-You can enable continuous export as a trusted service, so that you can send data to an event hub that has an Azure Firewall enabled.
+You can enable continuous export as a trusted service so that you can send data to an event hub that has Azure Firewall enabled.
-**To grant access to continuous export as a trusted service**:
+To grant access to continuous export as a trusted service:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Microsoft Defender for Cloud** > **Environmental settings**.
+1. Go to **Microsoft Defender for Cloud** > **Environment settings**.
1. Select the relevant resource.
:::image type="content" source="media/continuous-export/export-as-trusted.png" alt-text="Screenshot that shows where the checkbox is located to select export as trusted service.":::
-You need to add the relevant role assignment on the destination Event Hubs.
+You must add the relevant role assignment to the destination event hub.
-**To add the relevant role assignment on the destination Event Hub**:
+To add the relevant role assignment to the destination event hub:
-1. Navigate to the selected Event Hubs.
+1. Go to the selected event hub.
-1. Select **Access Control** > **Add role assignment**
+1. In the resource menu, select **Access control (IAM)** > **Add role assignment**.
- :::image type="content" source="media/continuous-export/add-role-assignment.png" alt-text="Screenshot that shows where the add role assignment button is found." lightbox="media/continuous-export/add-role-assignment.png":::
+ :::image type="content" source="media/continuous-export/add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment button." lightbox="media/continuous-export/add-role-assignment.png":::
1. Select **Azure Event Hubs Data Sender**.

1. Select the **Members** tab.
-1. Select **+ Select members**.
+1. Choose **+ Select members**.
-1. Search for and select **Windows Azure Security Resource Provider**.
+1. Search for and then select **Windows Azure Security Resource Provider**.
:::image type="content" source="media/continuous-export/windows-security-resource.png" alt-text="Screenshot that shows you where to enter and search for Microsoft Azure Security Resource Provider." lightbox="media/continuous-export/windows-security-resource.png":::
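If you want to script the preceding role assignment instead of using the portal, the following is a minimal sketch that uses the `azure-mgmt-authorization` library. The role definition GUID for Azure Event Hubs Data Sender and the service principal object ID are assumptions or placeholders; confirm both in your tenant before use.

```python
# Sketch: grant the Azure Event Hubs Data Sender role on the destination
# event hub namespace to the Defender for Cloud service principal.
# Requires: pip install azure-mgmt-authorization azure-identity
# The role definition GUID and the principal object ID are assumptions or
# placeholders - confirm both in your tenant. SDK versions can also differ
# slightly in the shape of the create parameters.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"   # placeholder
event_hub_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.EventHub/namespaces/<namespace-name>"
)

# Object ID of the "Windows Azure Security Resource Provider" service
# principal in your tenant (look it up in Microsoft Entra ID).
security_provider_object_id = "<service-principal-object-id>"

# Built-in "Azure Event Hubs Data Sender" role definition (verify this GUID).
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/2b629674-e913-4c01-ae53-ef4638d8f975"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

assignment = client.role_assignments.create(
    scope=event_hub_scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id=security_provider_object_id,
    ),
)
print(assignment.id)
```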
## View exported alerts and recommendations in Azure Monitor
-You might also choose to view exported Security Alerts and/or recommendations in [Azure Monitor](../azure-monitor/alerts/alerts-overview.md).
+You might also choose to view exported security alerts or recommendations in [Azure Monitor](../azure-monitor/alerts/alerts-overview.md).
-Azure Monitor provides a unified alerting experience for various Azure alerts including Diagnostic Log, Metric alerts, and custom alerts based on Log Analytics workspace queries.
+Azure Monitor provides a unified alerting experience for various Azure alerts, including diagnostic log alerts, metric alerts, and custom alerts that are based on Log Analytics workspace queries.
-To view alerts and recommendations from Defender for Cloud in Azure Monitor, configure an Alert rule based on Log Analytics queries (Log Alert):
+To view alerts and recommendations from Defender for Cloud in Azure Monitor, configure an alert rule that's based on Log Analytics queries (a log alert rule):
-1. From Azure Monitor's **Alerts** page, select **New alert rule**.
+1. On the Azure Monitor **Alerts** page, select **New alert rule**.
- ![Azure Monitor's alerts page.](./media/continuous-export/azure-monitor-alerts.png)
+ ![Screenshot that shows the Azure Monitor alerts page.](./media/continuous-export/azure-monitor-alerts.png)
-1. In the create rule page, configure your new rule (in the same way you'd configure a [log alert rule in Azure Monitor](../azure-monitor/alerts/alerts-unified-log.md)):
+1. On the **Create rule** pane, set up your new rule the same way you'd configure a [log alert rule in Azure Monitor](../azure-monitor/alerts/alerts-unified-log.md):
- For **Resource**, select the Log Analytics workspace to which you exported security alerts and recommendations.
- - For **Condition**, select **Custom log search**. In the page that appears, configure the query, lookback period, and frequency period. In the search query, you can type *SecurityAlert* or *SecurityRecommendation* to query the data types that Defender for Cloud continuously exports to as you enable the Continuous export to Log Analytics feature.
+ - For **Condition**, select **Custom log search**. On the page that appears, configure the query, lookback period, and frequency period. In the search query, you can enter **SecurityAlert** or **SecurityRecommendation** to query the data types that Defender for Cloud continuously exports to the workspace when the continuous export to Log Analytics feature is enabled.
+
+ - Optionally, create an [action group](../azure-monitor/alerts/action-groups.md) to trigger. Action groups can automate sending an email, creating an ITSM ticket, running a webhook, and more, based on an event in your environment.
+
+ ![Screenshot that shows the Azure Monitor create alert rule pane.](./media/continuous-export/azure-monitor-alert-rule.png)
- - Optionally, configure the [Action Group](../azure-monitor/alerts/action-groups.md) that you'd like to trigger. Action groups can trigger email sending, ITSM tickets, WebHooks, and more.
- ![Azure Monitor alert rule.](./media/continuous-export/azure-monitor-alert-rule.png)
+Depending on your continuous export rules and the condition that you defined in your Azure Monitor alert rule, the Defender for Cloud alerts or recommendations appear in Azure Monitor alerts, and the action group is triggered automatically (if you provided one).
-The Microsoft Defender for Cloud alerts or recommendations appears (depending on your configured continuous export rules and the condition you defined in your Azure Monitor alert rule) in Azure Monitor alerts, with automatic triggering of an action group (if provided).
+<a name="manual-one-time-export-of-alerts-and-recommendations"></a>
-## Manual one-time export of alerts and recommendations
+## Manually export alerts and recommendations
-To download a CSV report for alerts or recommendations, open the **Security alerts** or **Recommendations** page and select the **Download CSV report** button.
+To download a CSV file that lists alerts or recommendations, go to the **Security alerts** page or the **Recommendations** page, and then select the **Download CSV report** button.
> [!TIP]
-> Due to Azure Resource Graph limitations, the reports are limited to a file size of 13K rows. If you're seeing errors related to too much data being exported, try limiting the output by selecting a smaller set of subscriptions to be exported.
+> Due to Azure Resource Graph limitations, the reports are limited to a file size of 13,000 rows. If you see errors related to too much data being exported, try limiting the output by selecting a smaller set of subscriptions to be exported.
> [!NOTE]
> These reports contain alerts and recommendations for resources from the currently selected subscriptions.
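If the row limit is a constraint, one alternative is to pull the same recommendation (assessment) data programmatically from Azure Resource Graph and page through the results. The following is a minimal sketch that uses the `azure-mgmt-resourcegraph` library; the query, the projected columns, and the subscription ID are illustrative assumptions rather than a documented export path.

```python
# Sketch: page through Defender for Cloud assessment (recommendation) data in
# Azure Resource Graph as an alternative to the CSV download.
# Requires: pip install azure-mgmt-resourcegraph azure-identity
# The query and subscription ID are illustrative; adjust the projection to
# the columns that you need.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

client = ResourceGraphClient(DefaultAzureCredential())

query = """
securityresources
| where type == "microsoft.security/assessments"
| project id, name, displayName = properties.displayName,
          statusCode = properties.status.code
"""

skip_token = None
while True:
    result = client.resources(
        QueryRequest(
            subscriptions=["<subscription-id>"],   # placeholder
            query=query,
            options=QueryRequestOptions(skip_token=skip_token),
        )
    )
    for row in result.data:   # typically a list of dicts
        print(row)
    skip_token = result.skip_token
    if not skip_token:
        break
```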
-## Next steps
+## Related content
In this article, you learned how to configure continuous exports of your recommendations and alerts. You also learned how to download your alerts data as a CSV file.
-For related material, see the following documentation:
+For related content:
- Learn more about [workflow automation templates](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation).
-- [Azure Event Hubs documentation](../event-hubs/index.yml)
-- [Microsoft Sentinel documentation](../sentinel/index.yml)
-- [Azure Monitor documentation](../azure-monitor/index.yml)
-- [Export data types schemas](https://aka.ms/ASCAutomationSchemas)
+- See the [Azure Event Hubs documentation](../event-hubs/index.yml).
+- Learn more about [Microsoft Sentinel](../sentinel/index.yml).
+- Review the [Azure Monitor documentation](../azure-monitor/index.yml).
+- Learn how to [export data types schemas](https://aka.ms/ASCAutomationSchemas).
- Check out [common questions](faq-general.yml) about continuous export.
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Title: Workbooks gallery
-description: Learn how to create rich, interactive reports of your Microsoft Defender for Cloud data with the integrated Azure Monitor Workbooks gallery
+ Title: Use Azure Monitor gallery workbooks with Defender for Cloud data
+description: Learn how to create rich, interactive reports for your Microsoft Defender for Cloud data by using workbooks from the integrated Azure Monitor workbooks gallery.
Last updated 12/06/2023
-# Create rich, interactive reports of Defender for Cloud data
+# Create rich, interactive reports of Defender for Cloud data by using workbooks
-[Azure Workbooks](../azure-monitor/visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.
+[Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) provide a flexible canvas that you can use to analyze data and create rich, visual reports in the Azure portal. In workbooks, you can access multiple data sources from across Azure and combine them into unified, interactive experiences.
-Workbooks provide a rich set of capabilities for visualizing your Azure data. For detailed examples of each visualization type, see the [visualizations examples and documentation](../azure-monitor/visualize/workbooks-text-visualizations.md).
+Workbooks provide a rich set of capabilities for visualizing your Azure data. For detailed information about each visualization type, see the [visualizations examples and documentation](../azure-monitor/visualize/workbooks-text-visualizations.md).
-Within Microsoft Defender for Cloud, you can access the built-in workbooks to track your organization's security posture. You can also build custom workbooks to view a wide range of data from Defender for Cloud or other supported data sources.
+In Microsoft Defender for Cloud, you can access built-in workbooks to track your organization's security posture. You can also build custom workbooks to view a wide range of data from Defender for Cloud or other supported data sources.
-For pricing, check out the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+For pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Prerequisites
-**Required roles and permissions**: To save workbooks, you must have at least [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) permissions on the target resource group
+**Required roles and permissions**: To save a workbook, you must have at least [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) permissions for the relevant resource group.
**Cloud availability**: :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds :::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)
-## Workbooks gallery in Microsoft Defender for Cloud
+<a name="workbooks-gallery-in-microsoft-defender-for-cloud"></a>
-With the integrated Azure Workbooks functionality, Microsoft Defender for Cloud makes it straightforward to build your own custom, interactive workbooks. Defender for Cloud also includes a gallery with the following workbooks ready for your customization:
+## Use Defender for Cloud gallery workbooks
-- ['Coverage' workbook](#use-the-coverage-workbook) - Track the coverage of Defender for Cloud plans and extensions across your environments and subscriptions.
-- ['Secure Score Over Time' workbook](#use-the-secure-score-over-time-workbook) - Track your subscriptions' scores and changes to recommendations for your resources
-- ['System Updates' workbook](#use-the-system-updates-workbook) - View missing system updates by resources, OS, severity, and more
-- ['Vulnerability Assessment Findings' workbook](#use-the-vulnerability-assessment-findings-workbook) - View the findings of vulnerability scans of your Azure resources
-- ['Compliance Over Time' workbook](#use-the-compliance-over-time-workbook) - View the status of a subscription's compliance with the regulatory or industry standards you've selected
-- ['Active Alerts' workbook](#use-the-active-alerts-workbook) - View active alerts by severity, type, tag, MITRE ATT&CK tactics, and location.
-- Price Estimation workbook - View monthly consolidated price estimations for Microsoft Defender for Cloud plans based on the resource telemetry in your own environment. These numbers are estimates based on retail prices and don't provide actual billing data.
-- Governance workbook - The governance report in the governance rules settings lets you track progress of the rules effective in the organization.
-- ['DevOps Security (Preview)' workbook](#use-the-devops-security-workbook) - View a customizable foundation that helps you visualize the state of your DevOps posture for the connectors you've configured.
+In Defender for Cloud, you can use integrated Azure workbooks functionality to build custom, interactive workbooks that display your security data. Defender for Cloud includes a workbooks gallery that has the following workbooks ready for you to customize:
-In addition to the built-in workbooks, you can also find other useful workbooks found under the "Community" category, which is provided as is with no SLA or support. Choose one of the supplied workbooks or create your own.
+- [Coverage workbook](#coverage-workbook): Track the coverage of Defender for Cloud plans and extensions across your environments and subscriptions.
+- [Secure Score Over Time workbook](#secure-score-over-time-workbook): Track your subscription scores and changes to recommendations for your resources.
+- [System Updates workbook](#system-updates-workbook): View missing system updates by resource, OS, severity, and more.
+- [Vulnerability Assessment Findings workbook](#vulnerability-assessment-findings-workbook): View the findings of vulnerability scans of your Azure resources.
+- [Compliance Over Time workbook](#compliance-over-time-workbook): View the status of a subscription's compliance with regulatory standards or industry standards that you select.
+- [Active Alerts workbook](#active-alerts-workbook): View active alerts by severity, type, tag, MITRE ATT&CK tactics, and location.
+- Price Estimation workbook: View monthly, consolidated price estimations for Defender for Cloud plans based on the resource telemetry in your environment. The numbers are estimates that are based on retail prices and don't represent actual billing or invoice data.
+- Governance workbook: Use the governance report in the governance rules settings to track progress of the rules that affect your organization.
+- [DevOps Security (preview) workbook](#devops-security-workbook): View a customizable foundation that helps you visualize the state of your DevOps posture for the connectors that you set up.
+Along with built-in workbooks, you can find useful workbooks in the **Community** category. These workbooks are provided as-is and have no SLA or support. You can choose one of the provided workbooks or create your own workbook.
+ > [!TIP]
-> Use the **Edit** button to customize any of the supplied workbooks to your satisfaction. When you're done editing, select **Save** and your changes will be saved to a new workbook.
+> To customize any of the workbooks, select the **Edit** button. When you're done editing, select **Save**. The changes are saved in a new workbook.
+>
+> :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-supplied-workbooks.png" alt-text="Screenshot that shows how to edit a supplied workbook to customize it for your needs.":::
>
-> :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-supplied-workbooks.png" alt-text="Editing the supplied workbooks to customize them for your particular needs.":::
-### Use the 'Coverage' workbook
+<a name="use-the-coverage-workbook"></a>
-Enabling Defender for Cloud across multiple subscriptions and environments (Azure, AWS, and GCP) can make it hard to keep track of which plans are active. This is especially true if you have multiple subscriptions and environments.
+### Coverage workbook
-The Coverage workbook allows you to keep track of which Defender for Cloud plans are active on which parts of your environments. This workbook can help you to ensure that your environments and subscriptions are fully protected. By having access to detailed coverage information, you can also identify any areas that might need other protection and take action to address those areas.
+If you enable Defender for Cloud across multiple subscriptions and environments (Azure, Amazon Web Services, and Google Cloud Platform), you might find it challenging to keep track of which plans are active. This is especially true if you have many subscriptions and environments.
+The Coverage workbook helps you keep track of which Defender for Cloud plans are active in which parts of your environments. This workbook can help you ensure that your environments and subscriptions are fully protected. By having access to detailed coverage information, you can identify areas that might need more protection so that you can take action to address those areas.
-This workbook allows you to select a subscription (or all subscriptions) from the dropdown menu and view:
+
+In this workbook, you can select a subscription (or all subscriptions), and then view the following tabs:
- **Additional information**: Shows release notes and an explanation of each toggle.
-- **Relative coverage**: Shows the percentage of subscriptions/connectors that have a particular Defender for Cloud plan enabled.
+- **Relative coverage**: Shows the percentage of subscriptions or connectors that have a specific Defender for Cloud plan enabled.
- **Absolute coverage**: Shows each plan's status per subscription.
-- **Detailed coverage** - Shows additional settings that can/need to be enabled on relevant plans in order to get each plan's full value.
+- **Detailed coverage**: Shows additional settings that can be enabled or that must be enabled on relevant plans to get each plan's full value.
+
+You also can select the Azure, Amazon Web Services, or Google Cloud Platform environment in each or all subscriptions to see which plans and extensions are enabled for the environments.
-You can also select which environment (Azure, AWS or GCP) under each or all subscriptions to see which plans and extensions are enabled under that environment.
+<a name="use-the-secure-score-over-time-workbook"></a>
-### Use the 'Secure Score Over Time' workbook
+### Secure Score Over Time workbook
-This workbook uses secure score data from your Log Analytics workspace. That data needs to be exported from the continuous export tool as described in [Configure continuous export from the Defender for Cloud pages in Azure portal](continuous-export.md?tabs=azure-portal).
+The Secure Score Over Time workbook uses secure score data from your Log Analytics workspace. The data must be exported by using the continuous export tool as described in [Set up continuous export for Defender for Cloud in the Azure portal](continuous-export.md?tabs=azure-portal).
-When you set up the continuous export, set the export frequency to both **streaming updates** and **snapshots**.
+When you set up continuous export, under **Export frequency**, select both **Streaming updates** and **Snapshots (Preview)**.
> [!NOTE]
-> Snapshots get exported weekly, so you'll need to wait at least one week for the first snapshot to be exported before you can view data in this workbook.
+> Snapshots are exported weekly. There's a delay of at least one week after the first snapshot is exported before you can view data in the workbook.
> [!TIP]
-> To configure continuous export across your organization, use the supplied Azure Policy 'DeployIfNotExist' policies described in [Configure continuous export at scale](continuous-export.md?tabs=azure-policy).
+> To configure continuous export across your organization, use the provided `DeployIfNotExist` policies in Azure Policy that are described in [Set up continuous export at scale](continuous-export.md?tabs=azure-policy).
-The secure score over time workbook has five graphs for the subscriptions reporting to the selected workspaces:
+The Secure Score Over Time workbook has five graphs for the subscriptions that report to the selected workspaces:
|Graph |Example |
|||
-|**Score trends for the last week and month**<br>Use this section to monitor the current score and general trends of the scores for your subscriptions.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-1.png" alt-text="Trends for secure score on the built-in workbook.":::|
-|**Aggregated score for all selected subscriptions**<br>Hover your mouse over any point in the trend line to see the aggregated score at any date in the selected time range.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-2.png" alt-text="Aggregated score for all selected subscriptions.":::|
-|**Recommendations with the most unhealthy resources**<br>This table helps you triage the recommendations that have had the most resources changed to unhealthy over the selected period.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-3.png" alt-text="Recommendations with the most unhealthy resources.":::|
-|**Scores for specific security controls**<br>Defender for Cloud's security controls is logical groupings of recommendations. This chart shows you, at a glance, the weekly scores for all of your controls.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-4.png" alt-text="Scores for your security controls over the selected time period.":::|
-|**Resources changes**<br>Recommendations with the most resources that have changed state (healthy, unhealthy, or not applicable) during the selected period are listed here. Select any recommendation from the list to open a new table listing the specific resources.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-5.png" alt-text="Recommendations with the most resources that have changed health state.":::|
+|**Score trends for the last week and month**<br>Use this section to monitor the current score and general trends of the scores for your subscriptions.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-1.png" alt-text="Screenshot that shows trends for secure score on the built-in workbook.":::|
+|**Aggregated score for all selected subscriptions**<br>Hover your mouse over any point in the trend line to see the aggregated score at any date in the selected time range.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-2.png" alt-text="Screenshot that shows an aggregated score for all selected subscriptions.":::|
+|**Recommendations with the most unhealthy resources**<br>This table helps you triage the recommendations that had the most resources that changed to an unhealthy status in the selected period.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-3.png" alt-text="Screenshot that shows recommendations that have the most unhealthy resources.":::|
+|**Scores for specific security controls**<br>The security controls in Defender for Cloud are logical groupings of recommendations. This chart shows you at a glance the weekly scores for all your controls.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-4.png" alt-text="Screenshot that shows scores for your security controls over the selected time period.":::|
+|**Resources changes**<br>Recommendations that have the most resources that changed state (healthy, unhealthy, or not applicable) during the selected period are listed here. Select any recommendation in the list to open a new table that lists the specific resources.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-5.png" alt-text="Screenshot that shows recommendations that have the most resources that changed health state during the selected period.":::|
-### Use the 'System Updates' workbook
+### System Updates workbook
-This workbook is based on the security recommendation "System updates should be installed on your machines".
+The System Updates workbook is based on the security recommendation that system updates should be installed on your machines. The workbook helps you identify machines that have updates to apply.
-The workbook helps you identify machines with outstanding updates.
+You can view the update status for the selected subscriptions in two lists:
-You can view the situation for the selected subscriptions according to:
+- A list of resources that have outstanding updates to apply.
+- A list of updates that are missing from your resources.
-- The list of resources with outstanding updates
-- The list of updates missing from your resources
+### Vulnerability Assessment Findings workbook
-### Use the 'Vulnerability Assessment Findings' workbook
-
-Defender for Cloud includes vulnerability scanners for your machines, containers in container registries, and SQL servers.
+Defender for Cloud includes vulnerability scanners for your machines, containers in container registries, and computers running SQL Server.
Learn more about using these scanners:
Findings for each resource type are reported in separate recommendations:
- [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37)
- [SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f97aa83c-9b63-4f9a-99f6-b22c4398f936)
-This workbook gathers these findings and organizes them by severity, resource type, and category.
+The Vulnerability Assessment Findings workbook gathers these findings and organizes them by severity, resource type, and category.
-### Use the 'Compliance Over Time' workbook
+### Compliance Over Time workbook
-Microsoft Defender for Cloud continually compares the configuration of your resources with requirements in industry standards, regulations, and benchmarks. Built-in standards include NIST SP 800-53, SWIFT CSP CSCF v2020, Canada Federal PBMM, HIPAA HITRUST, and more. You can select the specific standards relevant to your organization using the regulatory compliance dashboard. Learn more in [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
+Microsoft Defender for Cloud continually compares the configuration of your resources with requirements in industry standards, regulations, and benchmarks. Built-in standards include NIST SP 800-53, SWIFT CSP CSCF v2020, Canada Federal PBMM, HIPAA HITRUST, and more. You can select standards that are relevant to your organization by using the regulatory compliance dashboard. Learn more in [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
-This workbook tracks your compliance status over time with the various standards you've added to your dashboard.
+The Compliance Over Time workbook tracks your compliance status over time by using the various standards that you add to your dashboard.
-When you select a standard from the overview area of the report, the lower pane reveals a more detailed breakdown:
+When you select a standard from the overview area of the report, the lower pane displays a more detailed breakdown:
-You can keep drilling down - right down to the recommendation level - to view the resources that have passed or failed each control.
+To view the resources that passed or failed each control, you can keep drilling down, all the way to the recommendation level.
> [!TIP]
-> For each panel of the report, you can export the data to Excel with the "Export to Excel" option.
+> For each panel of the report, you can export the data to Excel by using the **Export to Excel** option.
>
-> :::image type="content" source="media/custom-dashboards-azure-workbooks/export-workbook-data.png" alt-text="Exporting compliance workbook data to Excel.":::
+> :::image type="content" source="media/custom-dashboards-azure-workbooks/export-workbook-data.png" alt-text="Screenshot that shows how to export a compliance workbook data to Excel.":::
+
+<a name="use-the-active-alerts-workbook"></a>
-### Use the 'Active Alerts' workbook
+### Active Alerts workbook
-This workbook displays the active security alerts for your subscriptions on one dashboard. Security alerts are the notifications that Defender for Cloud generates when it detects threats on your resources. Defender for Cloud prioritizes, and lists the alerts, along with information needed for quick investigation and remediation.
+The Active Alerts workbook displays the active security alerts for your subscriptions on one dashboard. Security alerts are the notifications that Defender for Cloud generates when it detects threats against your resources. Defender for Cloud prioritizes and lists the alerts with the information that you need to quickly investigate and remediate.
-This workbook benefits you by letting you understand the active threats on your environment, and allows you to prioritize between the active alerts.
+This workbook benefits you by helping you be aware of and prioritize the active threats in your environment.
> [!NOTE]
-> Most workbooks use Azure Resource Graph (ARG) to query their data. For example, to display the Map View, Log Analytics workspace is used to query the data. [Continuous export](continuous-export.md) should be enabled, and export the security alerts to the Log Analytics workspace.
+> Most workbooks use Azure Resource Graph to query data. For example, to display the map view, data is queried from a Log Analytics workspace. [Continuous export](continuous-export.md) should be enabled so that security alerts are exported to the Log Analytics workspace.
-You can view the active alerts by severity, resource group, or tag.
+You can view active alerts by severity, resource group, and tag.
You can also view your subscription's top alerts by attacked resources, alert types, and new alerts.
+
+To see more details about an alert, select the alert.
-You can get more details on any of these alerts by selecting it.
+The **MITRE ATT&CK tactics** tab lists alerts in the order of the kill chain and the number of alerts that the subscription has at each stage.
-The MITRE ATT&CK tactics display by the order of the kill-chain, and the number of alerts the subscription has at each stage.
+You can see all the active alerts in a table and filter by columns.
-You can see all of the active alerts in a table with the ability to filter by columns. Select an alert to view button appears.
+To see details for a specific alert, select the alert in the table, and then select the **Open Alert View** button.
-By selecting the Open Alert View button, you can see all the details of that specific alert.
+To see all alerts by location in a map view, select the **Map View** tab.
-By selecting Map View, you can also see all alerts based on their location.
+Select a location on the map to view all the alerts for that location.
-Select a location on the map to view all of the alerts for that location.
+To view the details for an alert, select an alert, and then select the **Open Alert View** button.
-You can see the details for that alert with the Open Alert View button.
+<a name="use-the-devops-security-workbook"></a>
-### Use the 'DevOps Security' workbook
+### DevOps Security workbook
-This workbook provides a customizable visual report of your DevOps security posture. You can use this workbook to view insights into your repositories with the highest number of CVEs and weaknesses, active repositories that have Advanced Security disabled, security posture assessments of your DevOps environment configurations, and much more. Customize and add your own visual reports using the rich set of data in Azure Resource Graph to fit the business needs of your security team.
+The DevOps Security workbook provides a customizable visual report of your DevOps security posture. You can use this workbook to view insights about your repositories that have the highest number of common vulnerabilities and exposures (CVEs) and weaknesses, active repositories that have Advanced Security turned off, security posture assessments of your DevOps environment configurations, and much more. Customize and add your own visual reports by using the rich set of data in Azure Resource Graph to fit the business needs of your security team.
> [!NOTE]
-> You must have a [GitHub connector](quickstart-onboard-github.md), [GitLab connector](quickstart-onboard-gitlab.md), or an [Azure DevOps connector](quickstart-onboard-devops.md), connected to your environment in order to utilize this workbook
+> To use this workbook, your environment must have a [GitHub connector](quickstart-onboard-github.md), [GitLab connector](quickstart-onboard-gitlab.md), or [Azure DevOps connector](quickstart-onboard-devops.md).
-**To deploy the workbook**:
+To deploy the workbook:
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to **Microsoft Defender for Cloud** > **Workbooks**.
+1. Go to **Microsoft Defender for Cloud** > **Workbooks**.
1. Select the **DevOps Security (Preview)** workbook.
-The workbook will load and show you the Overview tab where you can see the number of exposed secrets, code security and DevOps security. All of these findings are broken down by total for each repository and the severity.
+The workbook loads and displays the **Overview** tab. On this tab, you can see the number of exposed secrets, the code security, and DevOps security. The findings are shown by total for each repository and by severity.
-Select the Secrets tab to view the count by secret type.
+To view the count by secret type, select the **Secrets** tab.
-The Code tab displays your count findings by tool and repository and your code scanning by severity.
+The **Code** tab displays the findings count by tool and repository. It shows the results of your code scanning by severity.
-The Open Source Security (OSS) Vulnerabilities tab displays your OSS vulnerabilities by severity and the count of findings by repository.
+The **OSS Vulnerabilities** tab displays Open Source Security (OSS) vulnerabilities by severity and the count of findings by repository.
-The Infrastructure as Code tab displays your findings by tool and repository.
+The **Infrastructure as Code** tab displays your findings by tool and repository.
-The Posture tab displays your security posture by severity and repository.
+The **Posture** tab displays security posture by severity and repository.
-The Threats and Tactics tab displays the total count of threats and tactics and by repository.
+The **Threats & Tactics** tab displays the count of threats and tactics by repository and the total count.
## Import workbooks from other workbook galleries
-To move workbooks that you've built in other Azure services into your Microsoft Defender for Cloud workbooks gallery:
+To move workbooks that you build in other Azure services into your Microsoft Defender for Cloud workbook gallery:
-1. Open the target workbook.
+1. Open the workbook that you want to import.
-1. From the toolbar, select **Edit**.
+1. On the toolbar, select **Edit**.
- :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks.png" alt-text="Editing a workbook.":::
+ :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks.png" alt-text="Screenshot that shows how to edit a workbook.":::
-1. From the toolbar, select **</>** to enter the Advanced Editor.
+1. On the toolbar, select **</>** to open the advanced editor.
- :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks-advanced-editor.png" alt-text="Launching the advanced editor to get the Gallery Template JSON code.":::
+ :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks-advanced-editor.png" alt-text="Screenshot that shows how to open the advanced editor to copy the gallery template JSON code.":::
-1. Copy the workbook's Gallery Template JSON.
+1. In the workbook gallery template, select all the JSON in the file and copy it.
+
+1. Open the workbook gallery in Defender for Cloud, and then select **New** on the menu bar.
+
+1. Select **</>** to open the advanced editor.
+
+1. Paste the entire gallery template JSON code.
-1. Open the workbooks gallery in Defender for Cloud and from the menu bar select **New**.
-1. Select the **</>** to enter the Advanced Editor.
-1. Paste in the entire Gallery Template JSON.
1. Select **Apply**.
-1. From the toolbar, select **Save As**.
- :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks-save-as.png" alt-text="Saving the workbook to the gallery in Defender for Cloud.":::
+1. On the toolbar, select **Save As**.
+
+ :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks-save-as.png" alt-text="Screenshot that shows saving the workbook to the gallery in Defender for Cloud.":::
+
+1. To save changes to the workbook, enter or select the following information:
+
+ - A name for the workbook.
+ - The Azure region to use.
+ - Any relevant information about the subscription, resource group, and sharing.
-1. Enter the required details for saving the workbook:
- 1. A name for the workbook
- 1. The desired region
- 1. Subscription, resource group, and sharing as appropriate.
+To find the saved workbook, go to the **Recently modified workbooks** category.
-You'll find your saved workbook in the **Recently modified workbooks** category.
+## Related content
-## Next steps
+This article describes the Defender for Cloud integrated Azure workbooks page that has built-in reports and the option to build your own custom, interactive reports.
-This article described Defender for Cloud's integrated Azure Workbooks page with built-in reports and the option to build your own custom, interactive reports.
+- Learn more about [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md).
-- Learn more about [Azure Workbooks](../azure-monitor/visualize/workbooks-overview.md)
+Built-in workbooks get their data from Defender for Cloud recommendations.
-- The built-in workbooks pull their data from Defender for Cloud's recommendations. Learn about the many security recommendations in [Security recommendations - a reference guide](recommendations-reference.md)
+- Learn about the many security recommendations in [Security recommendations: A reference guide](recommendations-reference.md).
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
To learn more about implementation details such as supported operating systems,
When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and collected automatically through Azure infrastructure with no additional cost or configuration considerations. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers: - **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an AKS Security profile.-- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an AKS add-on. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an AKS add-on. It's only installed on one node in the cluster. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
:::image type="content" source="./media/defender-for-containers/architecture-aks-cluster.png" alt-text="Diagram of high-level architecture of the interaction between Microsoft Defender for Containers, Azure Kubernetes Service, and Azure Policy." lightbox="./media/defender-for-containers/architecture-aks-cluster.png":::
When you enable the agentless discovery for Kubernetes extension, the following
These components are required in order to receive the full protection offered by Microsoft Defender for Containers: -- **[Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview)** - An agent based solution that connects your clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](/azure/azure-arc/kubernetes/extensions). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
+- **[Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview)** - An agent-based solution, installed on one node in the cluster, that connects your clusters to Defender for Cloud. Defender for Cloud is then able to deploy the following two agents as [Arc extensions](/azure/azure-arc/kubernetes/extensions):
- **Defender agent**: The DaemonSet that is deployed on each node, collects host signals using [eBPF technology](https://ebpf.io/) and Kubernetes audit logs, to provide runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension. -- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](/azure/defender-for-cloud/kubernetes-workload-protections) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. It's only installed on one node in the cluster. For more information, see [Protect your Kubernetes workloads](/azure/defender-for-cloud/kubernetes-workload-protections) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for Arc-enabled Kubernetes clusters is a preview feature.
These components are required in order to receive the full protection offered by
When Defender for Cloud protects a cluster hosted in Elastic Kubernetes Service, the collection of audit log data is agentless. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers: - **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [AWS account's CloudWatch](https://aws.amazon.com/cloudwatch/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.-- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent-based solution, installed on one node in the cluster, that connects your clusters to Defender for Cloud. Defender for Cloud is then able to deploy the following two agents as [Arc extensions](/azure/azure-arc/kubernetes/extensions):
- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension.-- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. It's only installed on one node in the cluster. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for AWS EKS clusters is a preview feature.
When Defender for Cloud protects a cluster hosted in Google Kubernetes Engine, t
- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [GCP Cloud Logging](https://cloud.google.com/logging/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis. -- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your GKE clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions.
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent-based solution, installed on one node in the cluster, that connects your clusters to Defender for Cloud. Defender for Cloud is then able to deploy the following two agents as [Arc extensions](/azure/azure-arc/kubernetes/extensions):
- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension.-- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
+- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. It only needs to be installed on one node in the cluster. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).
> [!NOTE] > Defender for Containers support for GCP GKE clusters is a preview feature.
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Title: Endpoint protection recommendations
+ Title: Assessment checks for endpoint detection and response solutions
description: How the endpoint protection solutions are discovered and identified as healthy. Previously updated : 06/15/2023 Last updated : 02/01/2024
-# Endpoint protection assessment and recommendations in Microsoft Defender for Cloud
-> [!NOTE]
-> As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md), before the retirement date. For more information about the roadmap for each of the features that are currently rely on Log Analytics Agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+# Assessment checks for endpoint detection and response solutions
Microsoft Defender for Cloud provides health assessments of [supported](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) versions of Endpoint protection solutions. This article explains the scenarios that lead Defender for Cloud to generate the following two recommendations: - [Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) - [Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000)
+> [!NOTE]
+> As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md), before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+ > [!TIP] > At the end of 2021, we revised the recommendation that installs endpoint protection. One of the changes affects how the recommendation displays machines that are powered off. In the previous version, machines that were turned off appeared in the 'Not applicable' list. In the newer recommendation, they don't appear in any of the resources lists (healthy, unhealthy, or not applicable). ## Windows Defender -- Defender for Cloud recommends **Endpoint protection should be installed on your machines** when [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and the result is **AMServiceEnabled: False**--- Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and any of the following occurs:-
- - Any of the following properties are false:
-
- - **AMServiceEnabled**
- - **AntispywareEnabled**
- - **RealTimeProtectionEnabled**
- - **BehaviorMonitorEnabled**
- - **IoavProtectionEnabled**
- - **OnAccessProtectionEnabled**
- - If one or both of the following properties are 7 or more:
-
- - **AntispywareSignatureAge**
- - **AntivirusSignatureAge**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and the result is **AMServiceEnabled: False** |
+| **Endpoint protection health issues should be resolved on your machines** | [Get-MpComputerStatus](/powershell/module/defender/get-mpcomputerstatus) runs and any of the following occurs: <br><br> Any of the following properties are false: <br><br> - **AMServiceEnabled** <br> - **AntispywareEnabled** <br> - **RealTimeProtectionEnabled** <br> - **BehaviorMonitorEnabled** <br> - **IoavProtectionEnabled** <br> - **OnAccessProtectionEnabled** <br> <br> If one or both of the following properties are 7 or more: <br><br> - **AntispywareSignatureAge** <br> - **AntivirusSignatureAge** |
## Microsoft System Center endpoint protection -- Defender for Cloud recommends **Endpoint protection should be installed on your machines** when importing **SCEPMpModule ("$env:ProgramFiles\Microsoft Security Client\MpProvider\MpProvider.psd1")** and running **Get-MProtComputerStatus** results in **AMServiceEnabled = false**.--- Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when **Get-MprotComputerStatus** runs and any of the following occurs:-
- - At least one of the following properties is false:
-
- - **AMServiceEnabled**
- - **AntispywareEnabled**
- - **RealTimeProtectionEnabled**
- - **BehaviorMonitorEnabled**
- - **IoavProtectionEnabled**
- - **OnAccessProtectionEnabled**
-
- - If one or both of the following Signature Updates are greater or equal to 7:
-
- - **AntispywareSignatureAge**
- - **AntivirusSignatureAge**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | importing **SCEPMpModule ("$env:ProgramFiles\Microsoft Security Client\MpProvider\MpProvider.psd1")** and running **Get-MProtComputerStatus** results in **AMServiceEnabled = false** |
+| **Endpoint protection health issues should be resolved on your machines** | **Get-MprotComputerStatus** runs and any of the following occurs: <br><br> At least one of the following properties is false: <br><br> - **AMServiceEnabled** <br> - **AntispywareEnabled** <br> - **RealTimeProtectionEnabled** <br> - **BehaviorMonitorEnabled** <br> - **IoavProtectionEnabled** <br> - **OnAccessProtectionEnabled** <br><br> If one or both of the following Signature Updates are greater or equal to 7: <br><br> - **AntispywareSignatureAge** <br> - **AntivirusSignatureAge** |
## Trend Micro -- Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
- - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent** exists
- - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent\InstallationFolder** exists
- - The **dsa_query.cmd** file is found in the Installation Folder
- - Running **dsa_query.cmd** results with **Component.AM.mode: on - Trend Micro Deep Security Agent detected**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br><br> - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent** exists <br> - **HKLM:\SOFTWARE\TrendMicro\Deep Security Agent\InstallationFolder** exists <br> - The **dsa_query.cmd** file is found in the Installation Folder <br> - Running **dsa_query.cmd** results with **Component.AM.mode: on - Trend Micro Deep Security Agent detected** |
## Symantec endpoint protection
-Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
--- **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"**-- **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\ASRunningStatus = 1**-
-Or
--- **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"**-- **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\ASRunningStatus = 1**-
-Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
--- Check Symantec Version >= 12: Registry location: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion" -Value "PRODUCTVERSION"**-- Check Real-Time Protection status: **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\AV\Storages\Filesystem\RealTimeScan\OnOff == 1**-- Check Signature Update status: **HKLM\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LatestVirusDefsDate <= 7 days**-- Check Full Scan status: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LastSuccessfulScanDateTime <= 7 days**-- Find signature version number Path to signature version for Symantec 12: **Registry Paths+ "CurrentVersion\SharedDefs" -Value "SRTSP"**-- Path to signature version for Symantec 14: **Registry Paths+ "CurrentVersion\SharedDefs\SDSDefs" -Value "SRTSP"**-
-Registry Paths:
--- **"HKLM:\Software\Symantec\Symantec Endpoint Protection" + $Path;**-- **"HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection" + $Path**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br> <br> - **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"** <br> - **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\ASRunningStatus = 1** <br> Or <br> - **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\CurrentVersion\PRODUCTNAME = "Symantec Endpoint Protection"** <br> - **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\ASRunningStatus = 1**|
+| **Endpoint protection health issues should be resolved on your machines** | any of the following checks aren't met: <br> <br> - Check Symantec Version >= 12: Registry location: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion" -Value "PRODUCTVERSION"** <br> - Check Real-Time Protection status: **HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection\AV\Storages\Filesystem\RealTimeScan\OnOff == 1** <br> - Check Signature Update status: **HKLM\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LatestVirusDefsDate <= 7 days** <br> - Check Full Scan status: **HKLM:\Software\Symantec\Symantec Endpoint Protection\CurrentVersion\public-opstate\LastSuccessfulScanDateTime <= 7 days** <br> - Find signature version number Path to signature version for Symantec 12: **Registry Paths+ "CurrentVersion\SharedDefs" -Value "SRTSP"** <br> - Path to signature version for Symantec 14: **Registry Paths+ "CurrentVersion\SharedDefs\SDSDefs" -Value "SRTSP"** <br><br> Registry Paths: <br> <br> - **"HKLM:\Software\Symantec\Symantec Endpoint Protection" + $Path;** <br> - **"HKLM:\Software\Wow6432Node\Symantec\Symantec Endpoint Protection" + $Path** |
## McAfee endpoint protection for Windows
-Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
--- **HKLM:\SOFTWARE\McAfee\Endpoint\AV\ProductVersion** exists-- **HKLM:\SOFTWARE\McAfee\AVSolution\MCSHIELDGLOBAL\GLOBAL\enableoas = 1**-
-Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
--- McAfee Version: **HKLM:\SOFTWARE\McAfee\Endpoint\AV\ProductVersion >= 10**-- Find Signature Version: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "dwContentMajorVersion"**-- Find Signature date: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "szContentCreationDate" >= 7 days**-- Find Scan date: **HKLM:\Software\McAfee\Endpoint\AV\ODS -Value "LastFullScanOdsRunTime" >= 7 days**
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br><br> - **HKLM:\SOFTWARE\McAfee\Endpoint\AV\ProductVersion** exists <br> - **HKLM:\SOFTWARE\McAfee\AVSolution\MCSHIELDGLOBAL\GLOBAL\enableoas = 1**|
+| **Endpoint protection health issues should be resolved on your machines** | any of the following checks aren't met: <br> <br> - McAfee Version: **HKLM:\SOFTWARE\McAfee\Endpoint\AV\ProductVersion >= 10** <br> - Find Signature Version: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "dwContentMajorVersion"** <br> - Find Signature date: **HKLM:\Software\McAfee\AVSolution\DS\DS -Value "szContentCreationDate" >= 7 days** <br> - Find Scan date: **HKLM:\Software\McAfee\Endpoint\AV\ODS -Value "LastFullScanOdsRunTime" >= 7 days** |
## McAfee Endpoint Security for Linux Threat Prevention
-Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
--- File **/opt/McAfee/ens/tp/bin/mfetpcli** exists-- **"/opt/McAfee/ens/tp/bin/mfetpcli --version"** output is: **McAfee name = McAfee Endpoint Security for Linux Threat Prevention and McAfee version >= 10**-
-Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
--- **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **Quick scan, Full scan** and both of the scans <= 7 days-- **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **DAT and engine Update time** and both of them <= 7 days-- **"/opt/McAfee/ens/tp/bin/mfetpcli --getoasconfig --summary"** returns **On Access Scan** status
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br> <br> - File **/opt/McAfee/ens/tp/bin/mfetpcli** exists <br> - **"/opt/McAfee/ens/tp/bin/mfetpcli --version"** output is: **McAfee name = McAfee Endpoint Security for Linux Threat Prevention and McAfee version >= 10** |
+| **Endpoint protection health issues should be resolved on your machines** | any of the following checks aren't met: <br> <br> - **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **Quick scan, Full scan** and both of the scans <= 7 days <br> - **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **DAT and engine Update time** and both of them <= 7 days <br> - **"/opt/McAfee/ens/tp/bin/mfetpcli --getoasconfig --summary"** returns **On Access Scan** status |
## Sophos Antivirus for Linux
-Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met:
--- File **/opt/sophos-av/bin/savdstatus** exits or search for customized location **"readlink $(which savscan)"**-- **"/opt/sophos-av/bin/savdstatus --version"** returns Sophos name = **Sophos Anti-Virus and Sophos version >= 9**-
-Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met:
--- **"/opt/sophos-av/bin/savlog --maxage=7 | grep -i "Scheduled scan .\* completed" | tail -1"**, returns a value-- **"/opt/sophos-av/bin/savlog --maxage=7 | grep "scan finished"** | tail -1", returns a value-- **"/opt/sophos-av/bin/savdstatus --lastupdate"** returns lastUpdate, which should be <= 7 days-- **"/opt/sophos-av/bin/savdstatus -v"** is equal to **"On-access scanning is running"**-- **"/opt/sophos-av/bin/savconfig get LiveProtection"** returns enabled
+| Recommendation | Appears when |
+|--|--|
+| **Endpoint protection should be installed on your machines** | any of the following checks aren't met: <br> <br> - File **/opt/sophos-av/bin/savdstatus** exists or search for customized location **"readlink $(which savscan)"** <br> - **"/opt/sophos-av/bin/savdstatus --version"** returns Sophos name = **Sophos Anti-Virus and Sophos version >= 9** |
+| **Endpoint protection health issues should be resolved on your machines** | any of the following checks aren't met: <br> <br> - **"/opt/sophos-av/bin/savlog --maxage=7 \| grep -i "Scheduled scan .\* completed" \| tail -1"**, returns a value <br> - **"/opt/sophos-av/bin/savlog --maxage=7 \| grep "scan finished"** \| tail -1", returns a value <br> - **"/opt/sophos-av/bin/savdstatus --lastupdate"** returns lastUpdate, which should be <= 7 days <br> - **"/opt/sophos-av/bin/savdstatus -v"** is equal to **"On-access scanning is running"** <br> - **"/opt/sophos-av/bin/savconfig get LiveProtection"** returns enabled |
## Troubleshoot and support
defender-for-cloud Iac Template Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-template-mapping.md
Title: Map IaC templates from code to cloud
-description: Learn how to map your Infrastructure as Code templates to your cloud resources.
+ Title: Map Infrastructure as Code templates from code to cloud
+description: Learn how to map your Infrastructure as Code (IaC) templates to your cloud resources.
Last updated 11/03/2023
# Map Infrastructure as Code templates to cloud resources
-Mapping Infrastructure as Code (IaC) templates to cloud resources ensures consistent, secure, and auditable infrastructure provisioning. It enables rapid response to security threats and a security-by-design approach. If there are misconfigurations in runtime resources, this mapping allows remediation at the template level, ensuring no drift and facilitating deployment via CI/CD methodology.
+Mapping Infrastructure as Code (IaC) templates to cloud resources helps you ensure consistent, secure, and auditable infrastructure provisioning. It supports rapid response to security threats and a security-by-design approach. You can use mapping to discover misconfigurations in runtime resources. Then, remediate at the template level to help ensure no drift and to facilitate deployment via CI/CD methodology.
## Prerequisites
-To allow Microsoft Defender for Cloud to map Infrastructure as Code template to cloud resources, you need:
+To set Microsoft Defender for Cloud to map IaC templates to cloud resources, you need:
-- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Azure DevOps](quickstart-onboard-devops.md) environment onboarded into Microsoft Defender for Cloud.
+- An Azure account with Defender for Cloud configured. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An [Azure DevOps](quickstart-onboard-devops.md) environment set up in Defender for Cloud.
- [Defender Cloud Security Posture Management (CSPM)](tutorial-enable-cspm-plan.md) enabled.-- Configure your Azure Pipelines to run [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).-- Tag your supported Infrastructure as Code templates and your cloud resources. (Open-source tools like [Yor_trace](https://github.com/bridgecrewio/yor) can be used to automatically tag Infrastructure as Code templates)
- - Supported cloud platforms: AWS, Azure, GCP.
- - Supported source code management systems: Azure DevOps.
- - Supported template languages: Azure Resource Manager, Bicep, CloudFormation, Terraform.
+- Azure Pipelines set up to run the [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- IaC templates and cloud resources set up with tag support. You can use open-source tools like [Yor_trace](https://github.com/bridgecrewio/yor) to automatically tag IaC templates. For an example of a tagged template, see the sketch after the note at the end of this section.
+ - Supported cloud platforms: Microsoft Azure, Amazon Web Services, Google Cloud Platform
+ - Supported source code management systems: Azure DevOps
+ - Supported template languages: Azure Resource Manager, Bicep, CloudFormation, Terraform
> [!NOTE]
-> Microsoft Defender for Cloud will only use the following tags from Infrastructure as Code templates for mapping:
-
-> - yor_trace
-> - mapping_tag
+> Microsoft Defender for Cloud uses only the following tags from IaC templates for mapping:
+>
+> - `yor_trace`
+> - `mapping_tag`
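For reference, here's a minimal sketch of how one of these mapping tags might appear in a supported template, using CloudFormation as the example format. The resource, its name, and the GUID value are placeholders; tools like Yor generate a unique value for each resource.

```yaml
# Hypothetical CloudFormation fragment with a Yor-generated mapping tag.
# The bucket and the GUID are placeholders, not values from your environment.
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      Tags:
        - Key: yor_trace
          Value: 00000000-0000-0000-0000-000000000000
```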
## See the mapping between your IaC template and your cloud resources
-To see the mapping between your IaC template and your cloud resources in the [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md):
+To see the mapping between your IaC template and your cloud resources in [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md):
1. Sign in to the [Azure portal](https://portal.azure.com/).+ 1. Go to **Microsoft Defender for Cloud** > **Cloud Security Explorer**.
-1. Search for and select all your cloud resources from the drop-down menu.
-1. Select + to add other filters to your query.
-1. Add the subfilter **Provisioned by** from the category **Identity & Access**.
-1. Select **Code repositories** from the category **DevOps**.
-1. After building your query, select **Search** to run the query.
-Alternatively, you can use the built-in template named “Cloud resources provisioned by IaC templates with high severity misconfigurations”.
+1. In the dropdown menu, search for and select all your cloud resources.
+
+1. To add more filters to your query, select **+**.
+
+1. In the **Identity & Access** category, add the subfilter **Provisioned by**.
+
+1. In the **DevOps** category, select **Code repositories**.
+
+1. After you build your query, select **Search** to run the query.
-![Screenshot of IaC Mapping Cloud Security Explorer template.](media/iac-template-mapping/iac-mapping.png)
+Alternatively, select the built-in template **Cloud resources provisioned by IaC templates with high severity misconfigurations**.
+ > [!NOTE]
-> Please note that mapping between your Infrastructure as Code templates to your cloud resources can take up to 12 hours to appear in the Cloud Security Explorer.
+> Mapping between your IaC templates and your cloud resources might take up to 12 hours to appear in Cloud Security Explorer.
## (Optional) Create sample IaC mapping tags
-To create sample IaC mapping tags within your code repositories, follow these steps:
+To create sample IaC mapping tags in your code repositories:
+
+1. In your repository, add an IaC template that includes tags.
+
+ You can start with a [sample template](https://github.com/microsoft/security-devops-azdevops/tree/main/samples/IaCMapping).
+
+1. To commit directly to the main branch or create a new branch for this commit, select **Save**.
+
+1. Confirm that you included the **Microsoft Security DevOps** task in your Azure pipeline.
-1. Add an **IaC template with tags** to your repository. To use an example template, see [here](https://github.com/microsoft/security-devops-azdevops/tree/main/samples/IaCMapping).
-1. Select **save** to commit directly to the main branch or create a new branch for this commit.
-1. Include the **Microsoft Security DevOps** task in your Azure pipeline.
-1. Verify that the **pipeline logs** show a finding saying **“An IaC tag(s) was found on this resource”**. This means that Defender for Cloud successfully discovered tags.
+1. Verify that pipeline logs show a finding that says **An IaC tag(s) was found on this resource**. The finding indicates that Defender for Cloud successfully discovered tags.
-## Next steps
+## Related content
- Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md).
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
Title: Discover misconfigurations in Infrastructure as Code
-description: Learn how to use DevOps security in Defender for Cloud to discover misconfigurations in Infrastructure as Code (IaC)
+ Title: Scan for misconfigurations in Infrastructure as Code
+description: Learn how to use Microsoft Security DevOps scanning with Microsoft Defender for Cloud to find misconfigurations in Infrastructure as Code (IaC) in a connected GitHub repository or Azure DevOps project.
Last updated 01/24/2023
-# Discover misconfigurations in Infrastructure as Code (IaC)
+# Scan your connected GitHub repository or Azure DevOps project
-Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps extension, you can configure the YAML configuration file to run a single tool or multiple tools. For example, you can set up the action or extension to run Infrastructure as Code (IaC) scanning tools only. This can help reduce pipeline run time.
+You can set up Microsoft Security DevOps to scan your connected GitHub repository or Azure DevOps project. Use a GitHub action or an Azure DevOps extension to run Microsoft Security DevOps only on your Infrastructure as Code (IaC) source code, and help reduce your pipeline runtime.
+
+This article shows you how to apply a template YAML configuration file to scan your connected repository or project specifically for IaC security issues by using Microsoft Security DevOps rules.
## Prerequisites -- Configure Microsoft Security DevOps for GitHub and/or Azure DevOps based on your source code management system:
- - [Microsoft Security DevOps GitHub action](github-action.md)
- - [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
-- Ensure you have an IaC template in your repository.
+- For Microsoft Security DevOps, set up the GitHub action or the Azure DevOps extension based on your source code management system:
+ - If your repository is in GitHub, set up the [Microsoft Security DevOps GitHub action](github-action.md).
+ - If you manage your source code in Azure DevOps, set up the [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- Ensure that you have an IaC template in your repository.
+
+<a name="configure-iac-scanning-and-view-the-results-in-github"></a>
-## Configure IaC scanning and view the results in GitHub
+## Set up and run a GitHub action to scan your connected IaC source code
+
+To set up an action and view scan results in GitHub:
1. Sign in to [GitHub](https://www.github.com).
-1. Navigate to **`your repository's home page`** > **.github/workflows** > **msdevopssec.yml** that was created in the [prerequisites](github-action.md#configure-the-microsoft-security-devops-github-action-1).
+1. Go to the main page of your repository.
+
+1. In the file directory, select **.github** > **workflows** > **msdevopssec.yml**.
+
+ For more information about working with an action in GitHub, see [Prerequisites](github-action.md#configure-the-microsoft-security-devops-github-action-1).
+
+1. Select the **Edit this file** (pencil) icon.
+
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/workflow-yaml.png" alt-text="Screenshot that highlights the Edit this file icon for the msdevopssec.yml file." lightbox="media/tutorial-iac-vulnerabilities/workflow-yaml.png":::
-1. Select **Edit file**.
+1. In the **Run analyzers** section of the YAML file, add this code (a sketch of the complete step appears after this procedure):
- :::image type="content" source="media/tutorial-iac-vulnerabilities/workflow-yaml.png" alt-text="Screenshot that shows where to find the edit button for the msdevopssec.yml file." lightbox="media/tutorial-iac-vulnerabilities/workflow-yaml.png":::
+ ```yaml
+ with:
+ categories: 'IaC'
+ ```
-1. Under the Run Analyzers section, add:
+ > [!NOTE]
+ > Values are case sensitive.
- ```yml
- with:
- categories: 'IaC'
- ```
+ Here's an example:
- > [!NOTE]
- > Categories are case sensitive.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/add-to-yaml.png" alt-text="Screenshot that shows the information that needs to be added to the yaml file.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/add-to-yaml.png" alt-text="Screenshot that shows the information to add to the YAML file.":::
-1. Select **Start Commit**.
+1. Select **Commit changes . . .** .
1. Select **Commit changes**.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/commit-change.png" alt-text="Screenshot that shows where to select commit change on the GitHub page.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/commit-change.png" alt-text="Screenshot that shows where to select Commit changes on the GitHub page.":::
-1. (Optional) Add an IaC template to your repository. Skip if you already have an IaC template in your repository.
+1. (Optional) Add an IaC template to your repository. If you already have an IaC template in your repository, skip this step.
- For example, [commit an IaC template to deploy a basic Linux web application](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux) to your repository.
+ For example, commit an IaC template that you can use to [deploy a basic Linux web application](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux).
- 1. Select `azuredeploy.json`.
+ 1. Select the **azuredeploy.json** file.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the azuredeploy.json file is located.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the azuredeploy.json file is located.":::
- 1. Select **Raw**.
+ 1. Select **Raw**.
- 1. Copy all the information in the file.
+ 1. Copy all the information in the file, like in the following example:
```json {
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
"type": "string", "defaultValue": "AzureLinuxApp", "metadata": {
- "description": "Base name of the resource such as web app name and app service plan "
+ "description": "The base name of the resource, such as the web app name or the App Service plan."
}, "minLength": 2 },
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
"type": "string", "defaultValue": "S1", "metadata": {
- "description": "The SKU of App Service Plan "
+ "description": "The SKU of the App Service plan."
} }, "linuxFxVersion": { "type": "string", "defaultValue": "php|7.4", "metadata": {
- "description": "The Runtime stack of current web app"
+ "description": "The runtime stack of the current web app."
} }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": {
- "description": "Location for all resources."
+ "description": "The location for all resources."
} } },
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
} ```
- 1. On GitHub, navigate to your repository.
+ 1. In your GitHub repository, go to the **.github/workflows** folder.
- 1. **Select Add file** > **Create new file**.
+ 1. Select **Add file** > **Create new file**.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/create-file.png" alt-text="Screenshot that shows you where to navigate to, to create a new file." lightbox="media/tutorial-iac-vulnerabilities/create-file.png":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/create-file.png" alt-text="Screenshot that shows you how to create a new file." lightbox="media/tutorial-iac-vulnerabilities/create-file.png":::
- 1. Enter a name for the file.
+ 1. Enter a name for the file.
- 1. Paste the copied information into the file.
+ 1. Paste the copied information in the file.
- 1. Select **Commit new file**.
+ 1. Select **Commit new file**.
- The file is now added to your repository.
+ The template file is added to your repository.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/file-added.png" alt-text="Screenshot that shows that the new file you created has been added to your repository.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/file-added.png" alt-text="Screenshot that shows that the new file you created is added to your repository.":::
-1. Confirm the Microsoft Security DevOps scan completed:
- 1. Select **Actions**.
- 1. Select the workflow to see the results.
+1. Verify that the Microsoft Security DevOps scan is finished:
-1. Navigate to **Security** > **Code scanning alerts** to view the results of the scan (filter by tool as needed to see just the IaC findings).
+ 1. For the repository, select **Actions**.
-## Configure IaC scanning and view the results in Azure DevOps
+ 1. Select the workflow to see the action status.
-**To view the results of the IaC scan in Azure DevOps**:
+1. To view the results of the scan, go to **Security** > **Code scanning alerts**.
+
+ You can filter by tool to see only the IaC findings.
+
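For reference, here's a minimal sketch of how the complete **Run analyzers** step might look after you add the `categories` input. The step name and action reference shown here are assumptions; keep the values that already exist in your msdevopssec.yml workflow file.

```yaml
# Sketch of the assumed "Run analyzers" step in msdevopssec.yml.
# Keep the existing step name and action reference from your workflow.
- name: Run Microsoft Security DevOps
  uses: microsoft/security-devops-action@latest
  id: msdo
  with:
    categories: 'IaC'
```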
+<a name="configure-iac-scanning-and-view-the-results-in-azure-devops"></a>
+
+## Set up and run an Azure DevOps extension to scan your connected IaC source code
+
+To set up an extension and view scan results in Azure DevOps:
1. Sign in to [Azure DevOps](https://dev.azure.com/).
-1. Select the desired project
+1. Select your project.
-1. Select **Pipeline**.
+1. Select **Pipelines**.
-1. Select the pipeline where the Microsoft Security DevOps Azure DevOps Extension is configured.
+1. Select the pipeline where your Azure DevOps extension for Microsoft Security DevOps is configured.
-1. **Edit** the pipeline configuration YAML file adding the following lines:
+1. Select **Edit pipeline**.
-1. Add the following lines to the YAML file
+1. In the pipeline YAML configuration file, below the `displayName` line for the **MicrosoftSecurityDevOps@1** task, add this code (a sketch of the complete task appears after this procedure):
- ```yml
- inputs:
- categories: 'IaC'
- ```
+ ```yaml
+ inputs:
+ categories: 'IaC'
+ ```
- :::image type="content" source="media/tutorial-iac-vulnerabilities/addition-to-yaml.png" alt-text="Screenshot showing you where to add this line to the YAML file.":::
+ Here's an example:
-1. Select **Save**.
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/addition-to-yaml.png" alt-text="Screenshot that shows where to add the IaC categories line in the pipeline configuration YAML file.":::
-1. (Optional) Add an IaC template to your repository. Skip if you already have an IaC template in your repository.
+1. Select **Save**.
-1. Select **Save** to commit directly to the main branch or Create a new branch for this commit.
+1. (Optional) Add an IaC template to your Azure DevOps project. If you already have an IaC template in your project, skip this step.
-1. Select **Pipeline** > **`Your created pipeline`** to view the results of the IaC scan.
+1. Choose whether to commit directly to the main branch or to create a new branch for the commit, and then select **Save**.
-1. Select any result to see the details.
+1. To view the results of the IaC scan, select **Pipelines**, and then select the pipeline you modified.
-## View details and remediation information on IaC rules included with Microsoft Security DevOps
+1. To see more details, select a specific pipeline run.
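For reference, here's a minimal sketch of how the complete **MicrosoftSecurityDevOps@1** task might look after you add the `inputs` block. The `displayName` value here is an assumption; keep the task definition that already exists in your pipeline and add only the `inputs` lines.

```yaml
# Sketch of the assumed task definition in the pipeline YAML file.
# Only the inputs block is new; the rest should match your existing task.
steps:
- task: MicrosoftSecurityDevOps@1
  displayName: 'Microsoft Security DevOps'
  inputs:
    categories: 'IaC'
```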
-The IaC scanning tools that are included with Microsoft Security DevOps, are [Template Analyzer](https://github.com/Azure/template-analyzer) (which contains [PSRule](https://aka.ms/ps-rule-azure)) and [Terrascan](https://github.com/tenable/terrascan).
+## View details and remediation information for applied IaC rules
-Template Analyzer runs rules on ARM and Bicep templates. You can learn more about [Template Analyzer's rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-rules.md#built-in-rules).
+The IaC scanning tools that are included with Microsoft Security DevOps are [Template Analyzer](https://github.com/Azure/template-analyzer) ([PSRule](https://aka.ms/ps-rule-azure) is included in Template Analyzer) and [Terrascan](https://github.com/tenable/terrascan).
-Terrascan runs rules on ARM, CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform templates. You can learn more about the [Terrascan rules](https://runterrascan.io/docs/policies/).
+Template Analyzer runs rules on Azure Resource Manager templates (ARM templates) and Bicep templates. For more information, see the [Template Analyzer rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-rules.md#built-in-rules).
-## Learn more
+Terrascan runs rules on ARM templates and templates for CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform. For more information, see the [Terrascan rules](https://runterrascan.io/docs/policies/).
-- Learn more about [Template Analyzer](https://github.com/Azure/template-analyzer).-- Learn more about [PSRule](https://aka.ms/ps-rule-azure).-- Learn more about [Terrascan](https://runterrascan.io/).
+To learn more about the IaC scanning tools that are included with Microsoft Security DevOps, see:
-In this tutorial you learned how to configure the Microsoft Security DevOps GitHub Action and Azure DevOps Extension to scan for Infrastructure as Code (IaC) security misconfigurations and how to view the results.
+- [Template Analyzer](https://github.com/Azure/template-analyzer)
+- [PSRule](https://aka.ms/ps-rule-azure)
+- [Terrascan](https://runterrascan.io/)
-## Next steps
+## Related content
-Learn more about [DevOps security](defender-for-devops-introduction.md).
+In this article, you learned how to set up a GitHub action and an Azure DevOps extension for Microsoft Security DevOps to scan for IaC security misconfigurations and how to view the results.
-Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
+To get more information:
-Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
+- Learn more about [DevOps security](defender-for-devops-introduction.md).
+- Learn how to [connect your GitHub repository](quickstart-onboard-github.md) to Defender for Cloud.
+- Learn how to [connect your Azure DevOps project](quickstart-onboard-devops.md) to Defender for Cloud.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 01/21/2024 Last updated : 02/01/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
+| [Changes in endpoint protection recommendations](#changes-in-endpoint-protection-recommendations) | February 1, 2024 | February 28, 2024 |
| [Change in pricing for multicloud container threat detection](#change-in-pricing-for-multicloud-container-threat-detection) | January 30, 2024 | April 2024 | | [Enforcement of Defender CSPM for Premium DevOps Security Capabilities](#enforcement-of-defender-cspm-for-premium-devops-security-value) | January 29, 2024 | March 2024 | | [Update to agentless VM scanning built-in Azure role](#update-to-agentless-vm-scanning-built-in-azure-role) |January 14, 2024 | February 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Changes in endpoint protection recommendations
+
+**Announcement date: February 1, 2024**
+
+**Estimated date of change: February 2024**
+
+As use of the Azure Monitor Agent (AMA) and the Log Analytics agent (also known as the Microsoft Monitoring Agent (MMA)) is [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269), existing endpoint recommendations that rely on those agents will be replaced with new recommendations. The new recommendations rely on [agentless machine scanning](concept-agentless-data-collection.md), which allows them to discover and assess the configuration of supported endpoint detection and response solutions and to offer remediation steps if issues are found.
+
+These public preview recommendations will be deprecated.
+
+| Recommendation | Agent | Deprecation date | Replacement recommendation |
+|--|--|--|--|
+| [Endpoint protection should be installed on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) (public) | MMA/AMA | February 2024 | New agentless recommendations. |
+| [Endpoint protection health issues should be resolved on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) (public)| MMA/AMA | February 2024 | New agentless recommendations. |
+
+The current generally available recommendations will remain supported until August 2024.
+
+As part of that deprecation, we'll be introducing new agentless endpoint protection recommendations. These recommendations will be available in Defender for Servers Plan 2 and the Defender CSPM plan. They will support Azure and multicloud machines. On-premises machines are not supported.
+
+| Preliminary recommendation name | Estimated release date |
+|--|--|
+| Endpoint Detection and Response (EDR) solution should be installed on Virtual Machines | February 2024 |
+| Endpoint Detection and Response (EDR) solution should be installed on EC2s | February 2024 |
+| Endpoint Detection and Response (EDR) solution should be installed on Virtual Machines (GCP) | February 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on virtual machines | February 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on EC2s | February 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on GCP virtual machines | February 2024 |
+ ## Change in pricing for multicloud container threat detection **Announcement date: January 30, 2024**
deployment-environments Best Practice Catalog Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/best-practice-catalog-structure.md
Last updated 11/27/2023
-#customer intent: As a platform engineer, I want to structure my catalog so that Azure Deployment Environments can find and cache environment definitions efficiently.
+# Customer intent: As a platform engineer, I want to structure my catalog so that Azure Deployment Environments can find and cache environment definitions efficiently.
deployment-environments Concept Environment Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environment-yaml.md
Last updated 11/17/2023
-#customer intent: As a developer, I want to know which parameters I can assign for parameters in environment.yaml.
+# Customer intent: As a developer, I want to know which values I can assign to parameters in environment.yaml.
deployment-environments How To Create Environment With Azure Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-environment-with-azure-developer.md
Last updated 01/26/2023
-#customer intent: As a developer, I want to be able to create an enviroment by using AZD so that I can create my coding environment.
+# Customer intent: As a developer, I want to be able to create an environment by using AZD so that I can create my coding environment.
deployment-environments How To Schedule Environment Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-schedule-environment-deletion.md
Last updated 11/10/2023
-#customer intent: As a developer, I want automatically delete my environment on a specific date so that I can keep resources current.
+# Customer intent: As a developer, I want to automatically delete my environment on a specific date so that I can keep resources current.
deployment-environments Overview What Is Azure Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md
Azure Deployment Environments enables usage [scenarios](./concept-environments-s
Developers have the following self-service experience when working with [environments](./concept-environments-key-concepts.md#environments).
->[!NOTE]
-> Developers have a CLI-based experience to create and manage environments for Azure Deployment Environments.
- - Deploy a preconfigured environment for any stage of the development cycle. - Spin up a sandbox environment to explore Azure. - Create platform as a service (PaaS) and infrastructure as a service (IaaS) environments quickly and easily by following a few simple steps. - Deploy environments right from where they work.
+Developers create and manage environments for Azure Deployment Environments through the [developer portal](./quickstart-create-access-environments.md), with the [Azure CLI](./how-to-create-access-environments.md), or with the [Azure Developer CLI](./how-to-create-environment-with-azure-developer.md).
+ ### Platform engineering scenarios Azure Deployment Environments helps your platform engineer apply the right set of policies and settings on various types of environments, control the resource configuration that developers can create, and track environments across projects. They perform the following tasks:
Azure Deployment Environments provides the following benefits to creating, confi
Capture and share IaC templates in source control within your team or organization, to easily create on-demand environments. Promote collaboration through inner-sourcing of templates from source control repositories. - **Compliance and governance**:
-Platform engineering teams can curate environment templates to enforce enterprise security policies and map projects to Azure subscriptions, identities, and permissions by environment types.
+Platform engineering teams can curate environment definitions to enforce enterprise security policies and map projects to Azure subscriptions, identities, and permissions by environment types.
- **Project-based configurations**:
-Create and organize environment templates by the types of applications that development teams are working on, rather than using an unorganized list of templates or a traditional IaC setup.
+Create and organize environment definitions by the types of applications that development teams are working on, rather than using an unorganized list of templates or a traditional IaC setup.
- **Worry-free self-service**: Enable your development teams to quickly and easily create app infrastructure (PaaS, serverless, and more) resources by using a set of preconfigured templates. You can also track costs on these resources to stay within your budget.
dev-box How To Configure Intune Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-intune-conditional-access-policies.md
Last updated 12/20/2023
-#customer intent: As a platform engineer, I want to configure conditional access policies in Microsoft Intune so that I can control access to dev boxes.
+# Customer intent: As a platform engineer, I want to configure conditional access policies in Microsoft Intune so that I can control access to dev boxes.
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
In this tutorial, you download and use a remote desktop client application to connect to a dev box.
-Remote Desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a Remote Desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android.
+Remote desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a remote desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android.
+
+> [!TIP]
+> Many remote desktop apps allow you to [use multiple monitors](tutorial-configure-multiple-monitors.md) when you connect to your dev box.
Alternately, you can connect to your dev box through the browser from the Microsoft Dev Box developer portal.
event-grid Event Hubs Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-hubs-integration.md
Title: 'Tutorial: Send Event Hubs data to data warehouse - Event Grid' description: Shows how to migrate Event Hubs captured data from Azure Blob Storage to Azure Synapse Analytics, specifically a dedicated SQL pool, using Azure Event Grid and Azure Functions. Previously updated : 11/14/2022 Last updated : 01/31/2024 ms.devlang: csharp
To complete this tutorial, you must have:
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Visual studio](https://www.visualstudio.com/vs/) with workloads for: .NET desktop development, Azure development, ASP.NET and web development, Node.js development, and Python development. - Download the [EventHubsCaptureEventGridDemo sample project](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) to your computer.
- - WindTurbineDataGenerator – A simple publisher that sends sample wind turbine data to a capture-enabled event hub
- - FunctionDWDumper – An Azure Function that receives a notification from Azure Event Grid when an Avro file is captured to the Azure Storage blob. It receives the blob's URI path, reads its contents, and pushes this data to Azure Synapse Analytics (dedicated SQL pool).
+ - WindTurbineDataGenerator – A simple publisher that sends sample wind turbine data to an event hub with the Capture feature enabled.
+ - FunctionDWDumper – An Azure function that receives a notification from Azure Event Grid when an Avro file is captured to the Azure Storage blob. It receives the blob's URI path, reads its contents, and pushes this data to Azure Synapse Analytics (dedicated SQL pool).
## Deploy the infrastructure In this step, you deploy the required infrastructure with a [Resource Manager template](https://github.com/Azure/azure-docs-json-samples/blob/master/event-grid/EventHubsDataMigration.json). When you deploy the template, the following resources are created:
In this step, you deploy the required infrastructure with a [Resource Manager te
} ``` 2. Deploy all the resources mentioned in the previous section (event hub, storage account, functions app, Azure Synapse Analytics) by running the following CLI command:
- 1. Copy and paste the command into the Cloud Shell window. Alternatively, you may want to copy/paste into an editor of your choice, set values, and then copy the command to the Cloud Shell.
+ 1. Copy and paste the command into the Cloud Shell window. Alternatively, you can copy/paste into an editor of your choice, set values, and then copy the command to the Cloud Shell. If you see an error caused by an Azure resource name, delete the resource group, fix the name, and retry the command.
> [!IMPORTANT] > Specify values for the following entities before running the command:
In this step, you deploy the required infrastructure with a [Resource Manager te
--template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/event-grid/EventHubsDataMigration.json \ --parameters eventHubNamespaceName=<event-hub-namespace> eventHubName=hubdatamigration sqlServerName=<sql-server-name> sqlServerUserName=<user-name> sqlServerPassword=<password> sqlServerDatabaseName=<database-name> storageName=<unique-storage-name> functionAppName=<app-name> ```
- 3. Press **ENTER** in the Cloud Shell window to run the command. This process may take a while since you're creating a bunch of resources. In the result of the command, ensure that there have been no failures.
+ 3. Press **ENTER** in the Cloud Shell window to run the command. This process might take a while since you're creating several resources. In the result of the command, ensure that there have been no failures.
1. Close the Cloud Shell by selecting the **Cloud Shell** button in the portal (or) **X** button in the top-right corner of the Cloud Shell window. ### Verify that the resources are created
First, get the publish profile for the Functions app from the Azure portal. Then
1. On the **Resource Group** page, select the **Azure Functions app** in the list of resources.
- :::image type="content" source="media/event-hubs-functions-synapse-analytics/select-function-app.png" alt-text="Screenshot showing the selection of the function app in the list of resources for a resource group.":::
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/select-function-app.png" lightbox="media/event-hubs-functions-synapse-analytics/select-function-app.png" alt-text="Screenshot showing the selection of the function app in the list of resources for a resource group.":::
1. On the **Function App** page for your app, select **Get publish profile** on the command bar.
- :::image type="content" source="media/event-hubs-functions-synapse-analytics/get-publish-profile.png" alt-text="Screenshot showing the selection of the **Get Publish Profile** button on the command bar of the function app page.":::
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/get-publish-profile.png" lightbox="media/event-hubs-functions-synapse-analytics/get-publish-profile.png" alt-text="Screenshot showing the selection of the **Get Publish Profile** button on the command bar of the function app page.":::
1. Download and save the file into the **FunctionEGDDumper** subfolder of the **EventHubsCaptureEventGridDemo** folder. ### Use the publish profile to publish the Functions app
First, get the publish profile for the Functions app from the Azure portal. Then
:::image type="content" source="media/event-hubs-functions-synapse-analytics/import-profile.png" alt-text="Screenshot showing the selection **Import Profile** on the **Publish** dialog box."::: 1. On the **Import profile** tab, select the publish settings file that you saved earlier in the **FunctionEGDWDumper** folder, and then select **Finish**. 1. When Visual Studio has configured the profile, select **Publish**. Confirm that the publishing succeeded.
-2. In the web browser that has the **Azure Function** page open, select **Functions** on the left menu. Confirm that the **EventGridTriggerMigrateData** function shows up in the list. If you don't see it, try publishing from Visual Studio again, and then refresh the page in the portal.
+2. In the web browser that has the **Azure Function** page open, select **Functions** in the middle pane. Confirm that the **EventGridTriggerMigrateData** function shows up in the list. If you don't see it, try publishing from Visual Studio again, and then refresh the page in the portal.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-function-creation.png" alt-text="Screenshot showing the confirmation of function creation.":::
After publishing the function, you're ready to subscribe to the event.
1. Verify that the event subscription is created. Switch to the **Event Subscriptions** tab on the **Events** page for the Event Hubs namespace. :::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-event-subscription.png" alt-text="Screenshot showing the Event Subscriptions tab on the Events page." lightbox="media/event-hubs-functions-synapse-analytics/confirm-event-subscription.png":::
-1. Select the App Service plan (not the App Service) in the list of resources in the resource group.
 ## Run the app to generate data You've finished setting up your event hub, dedicated SQL pool (formerly SQL Data Warehouse), Azure function app, and event subscription. Before running an application that generates data for the event hub, you need to configure a few values.
This section helps you with monitoring or troubleshooting the solution.
### View captured data in the storage account 1. Navigate to the resource group and select the storage account used for capturing event data.
-1. On the **Storage account** page, select **Storage Explorer (preview**) on the left menu.
+1. On the **Storage account** page, select **Storage browser** on the left menu.
 1. Expand **BLOB CONTAINERS**, and select **windturbinecapture**. 1. Open the folder named the same as your **Event Hubs namespace** in the right pane. 1. Open the folder named the same as your event hub (**hubdatamigration**).
This section helps you with monitoring or troubleshooting the solution.
### Verify that the Event Grid trigger invoked the function 1. Navigate to the resource group and select the function app.
-1. Select **Functions** on the left menu.
+1. Select the **Functions** tab in the middle pane.
1. Select the **EventGridTriggerMigrateData** function from the list. 1. On the **Function** page, select **Monitor** on the left menu. 1. Select **Configure** to configure application insights to capture invocation logs.
expressroute Configure Expressroute Private Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/configure-expressroute-private-peering.md
Last updated 01/02/2024
-#customer intent: As a network engineer, I want to establish a private connection from my on-premises network to my Azure virtual network using ExpressRoute.
+# Customer intent: As a network engineer, I want to establish a private connection from my on-premises network to my Azure virtual network using ExpressRoute.
# Tutorial: Establish a private connection from on-premises to an Azure virtual network using ExpressRoute
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
See the recommendation for [High availability and failover with Azure ExpressRou
Yes. Office 365 GCC service endpoints are reachable through the Azure US Government ExpressRoute. However, you first need to open a support ticket on the Azure portal to provide the prefixes you intend to advertise to Microsoft. Your connectivity to Office 365 GCC services will be established after the support ticket is resolved.
+### Can I have ExpressRoute private peering in an Azure Government environment with virtual network gateways in the Azure commercial cloud?
+
+No, it's not possible to establish ExpressRoute private peering in an Azure Government environment with a virtual network gateway in the Azure commercial cloud. Furthermore, the scope of ExpressRoute Government Microsoft peering is limited to public IPs within Azure Government regions and doesn't extend to the broader ranges of commercial public IPs.
+ ## Route filters for Microsoft peering ### Are Azure service routes advertised when I first configure Microsoft peering?
expressroute Expressroute Howto Macsec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-macsec.md
Follow these steps to begin the configuration:
``` > [!NOTE]
- > CKN must be an even-length string up to 64 hexadecimal digits (0-9, A-F).
- >
- > CAK length depends on cipher suite specified:
- > * For GcmAes128 and GcmAesXpn128, the CAK must be an even-length string with 32 hexadecimal digits (0-9, A-F).
- > * For GcmAes256 and GcmAesXpn256, the CAK must be an even-length string with 64 hexadecimal digits (0-9, A-F).
+ > * CKN must be an even-length string up to 64 hexadecimal digits (0-9, A-F).
+ > * CAK length depends on cipher suite specified:
+ > * For GcmAes128 and GcmAesXpn128, the CAK must be an even-length string with 32 hexadecimal digits (0-9, A-F).
+ > * For GcmAes256 and GcmAesXpn256, the CAK must be an even-length string with 64 hexadecimal digits (0-9, A-F).
+ > * For the CAK, the full length of the key must be used. If the key is shorter than the required length, zeros are added to the end of the key to meet the length requirement. For example, a CAK of `1234` becomes `12340000...` for both the 128-bit and 256-bit ciphers.
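The padding and length rules in this note can be sanity-checked locally. The following is a minimal, hypothetical Python sketch (not part of any Azure SDK or of this article) that validates a CKN and zero-pads a short CAK according to the cipher suite:

```python
# Hypothetical helpers that mirror the CKN/CAK rules described in the note above.
CAK_HEX_DIGITS = {"GcmAes128": 32, "GcmAesXpn128": 32, "GcmAes256": 64, "GcmAesXpn256": 64}

def validate_ckn(ckn: str) -> None:
    """CKN must be an even-length hexadecimal string of up to 64 digits."""
    if not (0 < len(ckn) <= 64 and len(ckn) % 2 == 0):
        raise ValueError("CKN must be an even-length string of up to 64 hex digits")
    int(ckn, 16)  # raises ValueError if any character isn't a hex digit

def pad_cak(cak: str, cipher: str) -> str:
    """Zero-pad a short CAK to the full length required by the cipher suite."""
    required = CAK_HEX_DIGITS[cipher]
    int(cak, 16)  # validate that the CAK is hexadecimal
    if len(cak) > required:
        raise ValueError(f"CAK is longer than {required} hex digits for {cipher}")
    return cak.ljust(required, "0")  # zeros are appended to the end of the key

validate_ckn("1000")
print(pad_cak("1234", "GcmAes128"))  # "1234" padded with zeros to 32 hex digits
```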
1. Grant the user identity the authorization to perform the `GET` operation.
frontdoor Classic Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/classic-overview.md
Last updated 08/09/2023
-# customer intent: As an IT admin, I want to learn about Front Door and what I can use it for.
+# Customer intent: As an IT admin, I want to learn about Front Door and what I can use it for.
# What is Azure Front Door (classic)?
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Title: How to disable events and delete events enabled workspaces - Azure Health Data Services
-description: Learn how to disable events and delete events enabled workspaces.
+ Title: Disable events for the FHIR or DICOM service in Azure Health Data Services
+description: Disable events for the FHIR or DICOM service in Azure Health Data Services by deleting an event subscription. Learn why and how to stop sending notifications from your data and resources.
-+ Previously updated : 09/26/2023 Last updated : 01/31/2024
-# How to disable events and delete event enabled workspaces
+# Disable events
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)]
-In this article, learn how to disable events and delete events enabled workspaces.
+Events in Azure Health Data Services allow you to monitor and respond to changes in your data and resources. By creating an event subscription, you can specify the conditions and actions for sending notifications to various endpoints.
-## Disable events
+However, there may be situations where you want to temporarily or permanently stop receiving notifications from an event subscription. For example, you might want to pause notifications during maintenance or testing, or delete the event subscription if you no longer need it.
-To disable events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
+To disable events from sending notifications for an **Event Subscription**, you need to delete the subscription.
-1. Select the **Event Subscription** to be deleted. In this example, we're selecting an Event Subscription named **fhir-events**.
+1. In the Azure portal on the left pane, select **Events**.
- :::image type="content" source="media/disable-delete-workspaces/select-event-subscription.png" alt-text="Screenshot of Events Subscriptions and select event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-event-subscription.png":::
+1. Select **Event Subscriptions**.
-2. Select **Delete** and confirm the **Event Subscription** deletion.
+1. Select the **Event Subscription** you want to disable notifications for. In the example, the event subscription is named **azuredocsdemo-fhir-events-subscription**.
- :::image type="content" source="media/disable-delete-workspaces/select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-subscription-delete.png":::
+ :::image type="content" source="media/disable-delete-workspaces/select-event-subscription.png" alt-text="Screenshot showing selection of event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-event-subscription.png":::
-3. If you have multiple **Event Subscriptions**, follow the steps to delete the **Event Subscriptions** so that no **Event Subscriptions** remain.
+1. Choose **Delete**.
- :::image type="content" source="media/disable-delete-workspaces/no-event-subscriptions-found.png" alt-text="Screenshot of Event Subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/no-event-subscriptions-found.png":::
+ :::image type="content" source="media/disable-delete-workspaces/select-subscription-delete-sml.png" alt-text="Screenshot showing confirmation of the event subscription to be deleted." lightbox="media/disable-delete-workspaces/select-subscription-delete-lrg.png":::
-> [!NOTE]
-> The FHIR service will automatically go into an **Updating** status to disable events when a full delete of **Event Subscriptions** is executed. The FHIR service will remain online while the operation is completing, however, you won't be able to make any further configuration changes to the FHIR service until the updating has completed.
+1. If there are multiple event subscriptions, repeat these steps to delete each one until the message **No Event Subscriptions Found** is displayed in the **Name** field.
-## Delete events enabled workspaces
+ :::image type="content" source="media/disable-delete-workspaces/no-event-subscriptions-found-sml.png" alt-text="Screenshot showing deletion of all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/no-event-subscriptions-found-lrg.png":::
-To avoid errors and successfully delete events enabled workspaces, follow these steps and in this specific order:
+> [!NOTE]
+> When you delete all event subscriptions, the FHIR or DICOM service disables events and goes into **Updating** status. The FHIR or DICOM service stays online during the update, but you can't change the configuration until it completes.
-1. Delete all workspace associated child resources (for example: DICOM services, FHIR services, and MedTech services).
-2. Delete all workspace associated **Event Subscriptions**.
-3. Delete workspace.
+## Delete events-enabled workspaces
-## Next steps
+To delete events-enabled workspaces without errors, do these steps in this exact order:
+
+1. Delete all child resources associated with the workspace (for example, FHIR&reg; services, DICOM&reg; services, and MedTech services).
-In this article, you learned how to disable events and delete events enabled workspaces.
+1. [Delete all event subscriptions](#disable-events) associated with the workspace.
-To learn about how to troubleshoot events, see
+1. Delete the workspace.
+
+## Next steps
-> [!div class="nextstepaction"]
-> [Troubleshoot events](events-troubleshooting-guide.md)
+[Troubleshoot events](events-troubleshooting-guide.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
Title: Frequently asked questions about events - Azure Health Data Services
-description: Learn about the frequently asked questions about events.
+ Title: Events FAQ for Azure Health Data Services
+description: Get answers to common questions about the events capability in the FHIR and DICOM services in Azure Health Data Services. Find out how events work, what types of events are supported, and how to subscribe to events by using Azure Event Grid.
-+ Previously updated : 07/11/2023 Last updated : 01/31/2024
-# Frequently asked questions about events
+# Events FAQ
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)]
-## Events: The basics
+Events let you subscribe to data changes in the FHIR&reg; or DICOM&reg; service and get notified through Azure Event Grid. You can use events to trigger workflows, automate tasks, send alerts, and more. In this FAQ, you'll find answers to some common questions about events.
-## Can I use events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
+**Can I use events with a non-Microsoft FHIR or DICOM service?**
-No. The Azure Health Data Services events feature only currently supports the Azure Health Data Services FHIR and DICOM services.
+No. The Events capability only supports the Azure Health Data Services FHIR and DICOM services.
-## What FHIR resource changes does events support?
+**What FHIR resource changes are supported by events?**
-Events are generated from the following FHIR service types:
+Events are generated from these FHIR service types:
-* **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
+- **FhirResourceCreated**. The event emitted after a FHIR resource is created.
-* **FhirResourceUpdated** - The event emitted after a FHIR resource gets updated successfully.
+- **FhirResourceUpdated**. The event emitted after a FHIR resource is updated.
-* **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully.
+- **FhirResourceDeleted**. The event emitted after a FHIR resource is soft deleted.
-For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
+For more information about delete types in the FHIR service, see [FHIR REST API capabilities for Azure Health Data Services](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
-## Does events support FHIR bundles?
+**Does events support FHIR bundles?**
-Yes. The events feature is designed to emit notifications of data changes at the FHIR resource level.
+Yes. The events capability emits notifications of data changes at the FHIR resource level.
-Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-type.html) in the following ways:
+Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-type.html):
-* **Batch**: An event is emitted for each successful data change operation in a bundle. If one of the operations generates an error, no event is emitted for that operation. For example: the batch bundle contains five operations, however, there's an error with one of the operations. Events are emitted for the four successful operations with no event emitted for the operation that generated an error.
+- **Batch**. An event is emitted for each successful data change operation in a bundle. If an operation generates an error, no event is emitted for that operation. For example, if a batch bundle contains five operations and one of them generates an error, events are emitted for the four successful operations and no event is emitted for the operation that generated the error.
-* **Transaction**: An event is emitted for each successful bundle operation as long as there are no errors. If there are any errors within a transaction bundle, then no events are emitted. For example: the transaction bundle contains five operations, however, there's an error with one of the operations. No events are emitted for that bundle.
+- **Transaction**. An event is emitted for each successful bundle operation as long as there are no errors. If there are any errors within a transaction bundle, no events are emitted. For example, if a transaction bundle contains five operations and one of them generates an error, no events are emitted for that bundle. The sketch after the following note illustrates both rules.
> [!NOTE]
-> Events are not sent in the sequence of the data operations in the FHIR bundle.
+> Events aren't sent in the sequence of the data operations in the FHIR bundle.
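As a worked illustration of the batch and transaction rules above, here's a hypothetical Python sketch (not an Azure API) that computes how many events would be emitted for a bundle:

```python
def emitted_event_count(bundle_type: str, operation_succeeded: list) -> int:
    """Return how many change events would be emitted for a bundle, following
    the batch/transaction rules described above (illustrative only)."""
    if bundle_type == "transaction":
        # Any failure means no events are emitted for the whole bundle.
        return len(operation_succeeded) if all(operation_succeeded) else 0
    if bundle_type == "batch":
        # One event per successful operation; failed operations emit nothing.
        return sum(operation_succeeded)
    raise ValueError(f"unsupported bundle type: {bundle_type}")

# Five operations, one of which fails:
results = [True, True, False, True, True]
print(emitted_event_count("batch", results))        # 4
print(emitted_event_count("transaction", results))  # 0
```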
-## What DICOM image changes does events support?
+**What DICOM image changes does events support?**
Events are generated from the following DICOM service types:
-* **DicomImageCreated** - The event emitted after a DICOM image gets created successfully.
+- **DicomImageCreated**. The event emitted after a DICOM image is created.
-* **DicomImageDeleted** - The event emitted after a DICOM image gets deleted successfully.
+- **DicomImageDeleted**. The event emitted after a DICOM image is deleted.
-* **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully.
+- **DicomImageUpdated**. The event emitted after a DICOM image is updated. For more information, see [Update DICOM files](../dicom/update-files.md).
-## What is the payload of an events message?
+**What is the payload of an events message?**
-For a detailed description of the events message structure and both required and nonrequired elements, see [Events message structures](events-message-structure.md).
+For a description of the events message structure and required and nonrequired elements, see [Events message structures](events-message-structure.md).
-## What is the throughput for the events messages?
+**What is the throughput for events messages?**
-The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace.
+The throughput of the FHIR or DICOM service and the Event Grid governs the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace.
-## How am I charged for using events?
+**How am I charged for using events?**
There are no extra charges for using [Azure Health Data Services events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
-## How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately?
+**How do I subscribe separately to multiple FHIR or DICOM services in the same workspace?**
-You can use the Event Grid filtering feature. There are unique identifiers in the event message payload to differentiate different accounts and workspaces. You can find a global unique identifier for workspace in the `source` field, which is the Azure Resource ID. You can locate the unique FHIR account name in that workspace in the `data.resourceFhirAccount` field. You can locate the unique DICOM account name in that workspace in the `data.serviceHostName` field. When you create a subscription, you can use the filtering operators to select the events you want to get in that subscription.
+Use the Event Grid filtering feature. There are unique identifiers in the event message payload to differentiate accounts and workspaces. You can find a global unique identifier for workspace in the `source` field, which is the Azure Resource ID. You can locate the unique FHIR account name in that workspace in the `data.resourceFhirAccount` field. You can locate the unique DICOM account name in the workspace in the `data.serviceHostName` field. When you create a subscription, use the filtering operators to select the events you want to include in the subscription.
:::image type="content" source="media\event-grid\event-grid-filters.png" alt-text="Screenshot of the Event Grid filters tab." lightbox="media\event-grid\event-grid-filters.png":::
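In addition to filtering at the Event Grid subscription, a subscriber can inspect the same fields after delivery. The following Python sketch is purely illustrative; the function and routing keys are hypothetical, and only the `source`, `data.resourceFhirAccount`, and `data.serviceHostName` field names come from the payload described above:

```python
def route_event(event: dict) -> str:
    """Hypothetical router for a subscriber that receives events from several
    services in the same workspace."""
    workspace_id = event.get("source", "")  # Azure Resource ID of the workspace
    data = event.get("data", {})
    fhir_account = data.get("resourceFhirAccount")
    dicom_account = data.get("serviceHostName")
    if fhir_account:
        return f"fhir:{workspace_id}:{fhir_account}"
    if dicom_account:
        return f"dicom:{workspace_id}:{dicom_account}"
    return f"unknown:{workspace_id}"

sample = {"source": "/subscriptions/.../workspaces/contoso", "data": {"resourceFhirAccount": "contoso-fhir"}}
print(route_event(sample))  # fhir:/subscriptions/.../workspaces/contoso:contoso-fhir
```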
-## Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts?
+**Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts?**
-Yes. We recommend that you use different subscribers for each individual FHIR or DICOM service to process in isolated scopes.
+Yes. We recommend that you use different subscribers for each FHIR or DICOM service to enable processing in isolated scopes.
-## Is Event Grid compatible with HIPAA and HITRUST compliance obligations?
+**Is the Event Grid compatible with HIPAA and HITRUST compliance requirements?**
-Yes. Event Grid supports customer's Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
+Yes. Event Grid supports Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
-## What is the expected time to receive an events message?
+**How long does it take to receive an events message?**
-On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) has been met.
+On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) is reached.
-## Is it possible to receive duplicate events messages?
+**Is it possible to receive duplicate events messages?**
-Yes. The Event Grid guarantees at least one events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. In this situation, the Event Grid considers that as a delivery failure and resends the events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
+Yes. The Event Grid guarantees at-least-once delivery of events messages with its push mode. In some cases, the event delivery request might return a transient failure status code. In this situation, the Event Grid considers it a delivery failure and resends the events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
-Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID or the combination of all fields in the `data` property of the message content are unique per each event. The developer can rely on them to deduplicate.
+Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID, or the combination of all fields in the `data` property of the message content, is unique for each event. You can rely on them to deduplicate.
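For illustration, a minimal deduplication sketch might track processed event IDs. The following Python snippet is hypothetical (a real subscriber would use a durable store rather than an in-memory set, and the event type shown is only an example value):

```python
seen_event_ids = set()

def handle_event_once(event: dict) -> bool:
    """Process an event only once, using the event ID for deduplication.
    Returns True if the event was processed, False if it was a duplicate delivery."""
    event_id = event["id"]
    if event_id in seen_event_ids:
        return False  # duplicate delivery from a retried push
    seen_event_ids.add(event_id)
    print(f"processing event {event_id}")  # placeholder for real business logic
    return True

# A retried delivery of the same event is ignored:
evt = {"id": "e1", "eventType": "Microsoft.HealthcareApis.FhirResourceCreated", "data": {}}
print(handle_event_once(evt))  # True
print(handle_event_once(evt))  # False
```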
-## More frequently asked questions
-
-[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
-
-[FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml)
-
-[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
-
-[FAQs about Azure Health Data Services MedTech service](../iot/iot-connector-faqs.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Your transparent gateway is now configured and ready to start forwarding telemet
## Provision a downstream device
-IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
+IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS to provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python installed and internet connectivity. Check the [Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python/blob/main/README.md) for current Python version requirements. The [Azure Cloud Shell](https://shell.azure.com/) has Python pre-installed:
1. Run the following command to install the `azure.iot.device` module:
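The install command itself is truncated in this excerpt. As a hedged sketch (not the article's exact script), manual registration of `thermostat1` through DPS with the `azure.iot.device` module might look like the following; the ID scope and device key are placeholders you supply from your IoT Central application:

```python
from azure.iot.device import ProvisioningDeviceClient

# Placeholder values -- supply the ID scope and derived device key from your
# IoT Central application. This is an illustrative sketch, not the full article's script.
ID_SCOPE = "<your IoT Central ID scope>"
REGISTRATION_ID = "thermostat1"
DEVICE_KEY = "<derived device key for thermostat1>"

client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id=REGISTRATION_ID,
    id_scope=ID_SCOPE,
    symmetric_key=DEVICE_KEY,
)

result = client.register()  # registers the device and returns the assignment result
print(result.status, result.registration_state.assigned_hub)
```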
iot-operations Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/reference/glossary.md
Last updated 01/10/2024
-#customer intent: As a user of Azure IoT Operations, I want learn about the terminology associated with Azure IoT Operations so that I can use the terminology correctly.
+# Customer intent: As a user of Azure IoT Operations, I want to learn about the terminology associated with Azure IoT Operations so that I can use the terminology correctly.
iot-operations Tutorial Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/view-analyze-data/tutorial-anomaly-detection.md
description: Learn how to detect anomalies in real time in your manufacturing pr
Previously updated : 12/18/2023 Last updated : 02/01/2024 #CustomerIntent: As an OT, I want to configure my Azure IoT Operations deployment to detect anomalies in real time in my manufacturing process.
By default, the anomaly detection service uses preset estimated control limits.
## Transform and enrich the measurement data
-<!-- TODO: Clarify here where the anomaly detection takes place -->
- To transform the measurement data from your production lines into a structure that the anomaly detector can use, you use Data Processor pipelines. In this tutorial, you create three pipelines:
To create the ERP reference data pipeline that ingests the data from the HTTP en
| Field | Value | |-|--|
- | Name | `erp-input` |
+ | Name | `HTTP Endpoint - ERP data` |
| Method | `GET` | | URL | `http://callout-svc-http:3333/ref_data` | | Authentication | `None` |
To create the ERP reference data pipeline that ingests the data from the HTTP en
1. Select **Add destination** and then select **Reference datasets**.
+1. Name the stage _Reference dataset - erp-data_.
+ 1. Select **erp-data** in the **Dataset** field, and select **Apply**. 1. Select **Save** to save the pipeline.
To create the _opcua-anomaly-pipeline_ pipeline:
| Field | Value | |--||
+ | Name | `MQ - ContosoLLC/#` |
| Broker | `tls://aio-mq-dmqtt-frontend:8883` | | Topic | `ContosoLLC/#` | | Data format | `JSON` | Select **Apply**. The simulated production line assets send measurements to the MQ broker in the cluster. This input stage configuration subscribes to all the topics under the `ContosoLLC` topic in the MQ broker. This topic receives measurement data from the Redmond, Seattle, and Tacoma sites.
-1. Add a **Transform** stage after the source stage with the following JQ expressions. This transform reorganizes the data and makes it easier to read:
+1. Add a **Transform** stage after the source stage. Name the stage _Transform - Reorganize message_ and add the following JQ expressions. This transform reorganizes the data and makes it easier to read:
```jq .payload[0].Payload |= with_entries(.value |= .Value) |
To create the _opcua-anomaly-pipeline_ pipeline:
Select **Apply**.
-1. Use the **Stages** list on the left to add an **Enrich** stage after the transform stage, and select it from the pipeline diagram. This stage enriches the measurements from the simulated production line assets with reference data from the `erp-data` dataset. This stage uses a condition to determine when to add the ERP data. Open the **Add condition** options and add the following information:
+1. Use the **Stages** list on the left to add an **Enrich** stage after the transform stage, and select it from the pipeline diagram. Name the stage _Enrich - Add ERP data_. This stage enriches the measurements from the simulated production line assets with reference data from the `erp-data` dataset. This stage uses a condition to determine when to add the ERP data. Open the **Add condition** options and add the following information:
| Field | Value | |-|-|
To create the _opcua-anomaly-pipeline_ pipeline:
Select **Apply**.
-1. Add a **Transform** stage after the enrich stage with the following JQ expressions. This transform stage reorganizes the data and makes it easier to read. These JQ expressions move the enrichment data to the same flat path as the real-time data to make it easier to export to Azure Data Explorer:
+1. Add a **Transform** stage after the enrich stage with the following JQ expressions. Name the stage _Transform - Flatten ERP data_. This transform stage reorganizes the data and makes it easier to read. These JQ expressions move the enrichment data to the same flat path as the real-time data to make it easier to export to Azure Data Explorer:
```jq .payload.Payload |= . + .enrich |
To create the _opcua-anomaly-pipeline_ pipeline:
| Field | Value | |--||
- | Name | `Call out HTTP - Anomaly` |
+ | Name | `Call out HTTP - Anomaly detection` |
| Method | `POST` | | URL | `http://anomaly-svc:3333/anomaly` | | Authentication | None |
To create the _opcua-anomaly-pipeline_ pipeline:
| Field | Value | |-||
+ | Name | `MQ - processed-output` |
| Broker | `tls://aio-mq-dmqtt-frontend:8883` | | Topic | `processed-output` | | Data format | `JSON` |
The next step is to create a Data Processor pipeline that sends the transformed
1. In the pipeline diagram, select **Configure source** and then select **MQ**. Enter the information from the following table:
- | Field | Value |
- |-|-|
- | Name | processed-mq-data |
- | Broker | tls://aio-mq-dmqtt-frontend:8883 |
- | Topic | processed-output |
- | Data Format | JSON |
+ | Field | Value |
+ |-||
+ | Name | `MQ - processed-output` |
+ | Broker | `tls://aio-mq-dmqtt-frontend:8883` |
+ | Topic | `processed-output` |
+ | Data Format | `JSON` |
Select **Apply**.
The next step is to create a Data Processor pipeline that sends the transformed
1. To connect the source and destination stages, select the red dot at the bottom of the source stage and drag it to the red dot at the top of the destination stage.
-1. Select **Add destination** and then select Azure Data Explorer.
-
-1. Use the information in the following table to configure the destination stage:
-
- | Field | Value |
- |-|-|
- | Cluster URL | To find this value, navigate to your cluster at [Azure Data Explorer](https://dataexplorer.azure.com) and select the **Edit connection** icon next to your cluster name in the left pane. |
- | Database | `bakery_ops` |
- | Table | `edge_data` |
- | Authentication | Service principal |
- | Tenant ID | The tenant ID you made a note of when you created the service principal. |
- | Client ID | The app ID you made a note of when you created the service principal. |
- | Secret | `AIOFabricSecret` |
- | Batching > Batch time | `5s` |
- | Batching > Batch path | `.payload.payload` |
- | Column > Name | `AssetID` |
- | Column > Path | `.assetId` |
- | Column > Name | `Timestamp` |
- | Column > Path | `.sourceTimestamp` |
- | Column > Name | `Name` |
- | Column > Path | `.assetName` |
- | Column > Name | `SerialNumber` |
- | Column > Path | `.serialNumber` |
- | Column > Name | `Status` |
- | Column > Path | `.machineStatus` |
- | Column > Name | `Maintenance` |
- | Column > Path | `.maintenanceStatus` |
- | Column > Name | `Location` |
- | Column > Path | `.site` |
- | Column > Name | `OperatingTime` |
- | Column > Path | `.operatingTime` |
- | Column > Name | `Humidity` |
- | Column > Path | `.humidity` |
- | Column > Name | `HumidityAnomalyFactor` |
- | Column > Path | `.humidityAnomalyFactor` |
- | Column > Name | `HumidityAnomaly` |
- | Column > Path | `.humidityAnomaly` |
- | Column > Name | `Temperature` |
- | Column > Path | `.temperature` |
- | Column > Name | `TemperatureAnomalyFactor` |
- | Column > Path | `.temperatureAnomalyFactor` |
- | Column > Name | `TemperatureAnomaly` |
- | Column > Path | `.temperatureAnomaly` |
- | Column > Name | `Vibration` |
- | Column > Path | `.vibration` |
- | Column > Name | `VibrationAnomalyFactor` |
- | Column > Path | `.vibrationAnomalyFactor` |
- | Column > Name | `VibrationAnomaly` |
- | Column > Path | `.vibrationAnomaly` |
+1. Select **Add destination** and then select **Azure Data Explorer**. Select the **Advanced** tab and then paste in the following configuration:
+
+ ```json
+ {
+ "displayName": "Azure Data Explorer - bakery_ops",
+ "type": "output/dataexplorer@v1",
+ "viewOptions": {
+ "position": {
+ "x": 0,
+ "y": 432
+ }
+ },
+ "clusterUrl": "https://your-cluster.northeurope.kusto.windows.net/",
+ "database": "bakery_ops",
+ "table": "edge_data",
+ "authentication": {
+ "type": "servicePrincipal",
+ "tenantId": "your tenant ID",
+ "clientId": "your client ID",
+ "clientSecret": "AIOFabricSecret"
+ },
+ "batch": {
+ "time": "5s",
+ "path": ".payload.payload"
+ },
+ "columns": [
+ {
+ "name": "AssetID",
+ "path": ".assetId"
+ },
+ {
+ "name": "Timestamp",
+ "path": ".sourceTimestamp"
+ },
+ {
+ "name": "Name",
+ "path": ".assetName"
+ },
+ {
+ "name": "SerialNumber",
+ "path": ".serialNumber"
+ },
+ {
+ "name": "Status",
+ "path": ".machineStatus"
+ },
+ {
+ "name": "Maintenance",
+ "path": ".maintenanceStatus"
+ },
+ {
+ "name": "Location",
+ "path": ".site"
+ },
+ {
+ "name": "OperatingTime",
+ "path": ".operatingTime"
+ },
+ {
+ "name": "Humidity",
+ "path": ".humidity"
+ },
+ {
+ "name": "HumidityAnomalyFactor",
+ "path": ".humidityAnomalyFactor"
+ },
+ {
+ "name": "HumidityAnomaly",
+ "path": ".humidityAnomaly"
+ },
+ {
+ "name": "Temperature",
+ "path": ".temperature"
+ },
+ {
+ "name": "TemperatureAnomalyFactor",
+ "path": ".temperatureAnomalyFactor"
+ },
+ {
+ "name": "TemperatureAnomaly",
+ "path": ".temperatureAnomaly"
+ },
+ {
+ "name": "Vibration",
+ "path": ".vibration"
+ },
+ {
+ "name": "VibrationAnomalyFactor",
+ "path": ".vibrationAnomalyFactor"
+ },
+ {
+ "name": "VibrationAnomaly",
+ "path": ".vibrationAnomaly"
+ }
+ ]
+ }
+ ```
- Select **Apply**.
+1. Then navigate to the **Basic** tab and fill in the following fields by using the information you made a note of previously:
+
+ | Field | Value |
+ |--|-|
+ | Cluster URL | To find this value, navigate to your cluster at [Azure Data Explorer](https://dataexplorer.azure.com) and select the **Edit connection** icon next to your cluster name in the left pane. |
+ | Tenant ID | The tenant ID you made a note of when you created the service principal. |
+ | Client ID | The app ID you made a note of when you created the service principal. |
+ | Secret | `AIOFabricSecret` |
+
+ Select **Apply**.
1. Select **Save** to save the pipeline.
To visualize anomalies and process data, you can use Azure Managed Grafana. Use
1. On the **Add data source** page, search for and select **Azure Data Explorer Datasource**. 1. In the **Connection Details** section, add your Azure Data Explorer cluster URI.
-
+ 1. In the **Authentication** section, select **App Registration** and enter your service principal details. You made a note of these values when you created your service principal.
-
+ 1. To test the connection to the Azure Data Explorer database, select **Save & test**. You should see a **Success** indicator. Now that your Grafana instance is connected to your Azure Data Explorer database, you can build a dashboard:
iot-operations Tutorial Overall Equipment Effectiveness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/view-analyze-data/tutorial-overall-equipment-effectiveness.md
description: Learn how to calculate overall equipment and effectiveness and powe
Previously updated : 12/18/2023 Last updated : 02/01/2024 #CustomerIntent: As an OT, I want to configure my Azure IoT Operations deployment to calculate overall equipment effectiveness and power consumption for my manufacturing process.
To create the _production-data-reference_ pipeline that ingests the data from th
| Field | Value | |-|--|
- | Name | `HTTP Endpoint - prod` |
+ | Name | `HTTP Endpoint - production data` |
| Method | `GET` | | URL | `http://callout-svc-http:3333/productionData` | | Authentication | `None` |
To create the _production-data-reference_ pipeline that ingests the data from th
| API Request ΓÇô Request Body | `{}` | | Request Interval | `1m` |
- Select **Apply**.
+ Select **Apply**.
1. Select **Add stages** and then select **Delete** to delete the middle stage.
To create the _production-data-reference_ pipeline that ingests the data from th
1. Select **Add destination** and then select **Reference datasets**.
+1. Name the stage _Reference dataset - production-data_.
+ 1. Select **production-data** in the **Dataset** field, and select **Apply**. 1. Select **Save** to save the pipeline.
To create the _operations-data-reference_ pipeline that ingests the data from th
| Field | Value | |-|--|
- | Name | `HTTP Endpoint - operator` |
+ | Name | `HTTP Endpoint - operations data` |
| Method | `GET` | | URL | `http://callout-svc-http:3333/operatorData` | | Authentication | `None` |
To create the _operations-data-reference_ pipeline that ingests the data from th
| API Request ΓÇô Request Body | `{}` | | Request Interval | `1m` |
- Select **Apply**.
+ Select **Apply**.
1. Select **Add stages** and then select **Delete** to delete the middle stage.
To create the _operations-data-reference_ pipeline that ingests the data from th
1. Select **Add destination** and then select **Reference datasets**.
+1. Name the stage _Reference dataset - operations-data_.
+ 1. Select **operations-data** in the **Dataset** field, select **Apply**. 1. Select **Save** to save the pipeline.
To create the _oee-process-pipeline_ pipeline:
| Field | Value | |--||
+ | Name | `MQ - Contoso/#` |
| Broker | `tls://aio-mq-dmqtt-frontend:8883` | | Topic | `Contoso/#` | | Data format | `JSON` | Select **Apply**. The simulated production line assets send measurements to the MQ broker in the cluster. This input stage configuration subscribes to all the topics under the `Contoso` topic in the MQ broker.
-1. Use the **Stages** list on the left to add a **Transform** stage after the source stage with the following JQ expressions. This transform creates a flat, readable view of the message and extracts the `Line` and `Site` information from the topic:
+1. Use the **Stages** list on the left to add a **Transform** stage after the source stage. Name the stage _Transform - flatten message_ and add the following JQ expressions. This transform creates a flat, readable view of the message and extracts the `Line` and `Site` information from the topic:
```jq .payload[0].Payload |= with_entries(.value |= .Value) |
To create the _oee-process-pipeline_ pipeline:
Select **Apply**.
-1. Use the **Stages** list on the left to add an **Aggregate** stage after the transform stage and select it. In this pipeline, you use the aggregate stage to down sample the measurements from the production line assets. You configure the stage to aggregate data for 10 seconds. Then for the relevant data, calculate the average or pick the latest value. Select the **Advanced** tab in the aggregate stage and paste in the following configuration: <!-- TODO: Need to double check this - can we avoid error associated with "next"? -->
+1. Use the **Stages** list on the left to add an **Aggregate** stage after the transform stage and select it. Name the stage _Aggregate - down sample measurements_. In this pipeline, you use the aggregate stage to down sample the measurements from the production line assets. You configure the stage to aggregate data for 10 seconds. Then for the relevant data, calculate the average or pick the latest value. Select the **Advanced** tab in the aggregate stage and paste in the following configuration:
```json {
To create the _oee-process-pipeline_ pipeline:
1. Use the **Stages** list on the left to add a **Call out HTTP** stage after the aggregate stage and select it. This HTTP call out stage calls a custom module running in the Kubernetes cluster that exposes an HTTP API. The module calculates the shift based on the current time. To configure the stage, select **Add condition** and enter the information from the following table:
- | Field | Value |
- |--|-|
- | Name | Call out HTTP - Shift |
- | Method | POST |
- | URL | http://shift-svc-http:3333 |
- | Authentication | None |
- | API Request - Data format | JSON |
- | API Request - Path | .payload |
- | API Response - Data format | JSON |
- | API Response - Path | .payload |
+ | Field | Value |
+ |--||
+ | Name | `Call out HTTP - Fetch shift data` |
+ | Method | `POST` |
+ | URL | `http://shift-svc-http:3333` |
+ | Authentication | `None` |
+ | API Request - Data format | `JSON` |
+ | API Request - Path | `.payload` |
+ | API Response - Data format | `JSON` |
+ | API Response - Path | `.payload` |
Select **Apply**.
1. Use the **Stages** list on the left to add an **Enrich** stage after the HTTP call out stage and select it. This stage enriches the measurements from the simulated production line assets with reference data from the _operations-data_ dataset. This stage uses a condition to determine when to add the operations data. Open the **Add condition** options and add the following information:
- | Field | Value |
- |--|-|
- | Dataset | operations-data |
- | Output path | .payload.operatorData |
- | Input path | .payload.shift |
- | Property | Shift |
- | Operator | Key match |
+ | Field | Value |
+ |--||
+ | Name | `Enrich - Operations data` |
+ | Dataset | `operations-data` |
+ | Output path | `.payload.operatorData` |
+ | Input path | `.payload.shift` |
+ | Property | `Shift` |
+ | Operator | `Key match` |
Select **Apply**.
1. Use the **Stages** list on the left to add another **Enrich** stage after the first enrich stage and select it. This stage enriches the measurements from the simulated production line assets with reference data from the _production-data_ dataset. Open the **Add condition** options and add the following information:
- | Field | Value |
- |--|-|
- | Dataset | production-data |
- | Output path | .payload.productionData |
- | Input path | .payload.Line |
- | Property | Line |
- | Operator | Key match |
+ | Field | Value |
+ |--||
+ | Name | `Enrich - Production data` |
+ | Dataset | `production-data` |
+ | Output path | `.payload.productionData` |
+ | Input path | `.payload.Line` |
+ | Property | `Line` |
+ | Operator | `Key match` |
Select **Apply**.
-1. Use the **Stages** list on the left to add another **Transform** stage after the enrich stage and select it. Add the following JQ expressions:
+1. Use the **Stages** list on the left to add another **Transform** stage after the enrich stage and select it. Name the stage _Transform - flatten enrichment data_. Add the following JQ expressions:
```json .payload |= . + .operatorData |
To create the _oee-process-pipeline_ pipeline:
1. Use the **Destinations** tab on the left to select **MQ** for the output stage, and select the stage. Add the following configuration:
- | Field | Value |
- |-|-|
- | Broker | tls://aio-mq-dmqtt-frontend:8883 |
- | Topic | Oee-processed-output |
- | Data format | JSON |
- | Path | .payload |
+ | Field | Value |
+ |-||
+ | Name | `MQ - Oee-processed-output` |
+ | Broker | `tls://aio-mq-dmqtt-frontend:8883` |
+ | Topic | `Oee-processed-output` |
+ | Data format | `JSON` |
+ | Path | `.payload` |
Select **Apply**.
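After the pipeline is saved and deployed, one way to confirm that processed messages are arriving on the `Oee-processed-output` topic is to subscribe to it with an MQTT client running inside the cluster. This is only a sketch: the pod name and namespace are assumptions, and the TLS and credential flags that `mosquitto_sub` needs depend on how your MQ broker listener is configured.

```console
# Sketch only: exec into an MQTT client pod that already has access to the broker,
# then subscribe to the processed-output topic. Add the TLS/credential flags your listener requires.
kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh
mosquitto_sub -h aio-mq-dmqtt-frontend -p 8883 -t "Oee-processed-output"
```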
The next step is to create a Data Processor pipeline that sends the transformed
1. Back in the [Azure IoT Operations](https://iotoperations.azure.com) portal, navigate to **Data pipelines** and select **Create pipeline**.
+1. Select the title of the pipeline on the top left corner, rename it to _oee-fabric_, and **Apply** the change.
+ 1. In the pipeline diagram, select **Configure source** and then select **MQ**. Use the information from the following table to configure it:
- | Field | Value |
- |-|-|
- | Name | processed-oee-data |
- | Broker | tls://aio-mq-dmqtt-frontend:8883 |
- | Topic | Oee-processed-output |
- | Data Format | JSON |
+ | Field | Value |
+ |-||
+ | Name | `MQ - Oee-processed-output` |
+ | Broker | `tls://aio-mq-dmqtt-frontend:8883` |
+ | Topic | `Oee-processed-output` |
+ | Data Format | `JSON` |
Select **Apply**.
The next step is to create a Data Processor pipeline that sends the transformed
```json {
- "displayName": "Node - 26cdc2",
+ "displayName": "Fabric Lakehouse - OEE table",
"type": "output/fabric@v1", "viewOptions": { "position": {
The next step is to create a Data Processor pipeline that sends the transformed
Select **Apply**.
-1. Save the pipeline as **oee-fabric**.
+1. To save your pipeline, select **Save**. It may take a few minutes for the pipeline to deploy to your cluster, so make sure it's finished before you proceed.
## View your measurement data in Microsoft Fabric
iot Iot Overview Analyze Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-analyze-visualize.md
Last updated 04/11/2023 -
-# As a solution builder, I want a high-level overview of the options for analyzing and visualizing device data in an IoT solution.
+# Customer intent: As a solution builder, I want a high-level overview of the options for analyzing and visualizing device data in an IoT solution.
# Analyze and visualize your IoT data
iot Iot Overview Device Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-connectivity.md
Last updated 03/20/2023
- template-overview - ignite-2023
-# As a solution builder or device developer I want a high-level overview of the issues around device infrastructure and connectivity so that I can easily find relevant content.
+# Customer intent: As a solution builder or device developer I want a high-level overview of the issues around device infrastructure and connectivity so that I can easily find relevant content.
# Device infrastructure and connectivity
iot Iot Overview Device Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-development.md
Last updated 03/20/2023 -
-# As a solution builder or device developer I want a high-level overview of the issues around device development so that I can easily find relevant content.
+# Customer intent: As a solution builder or device developer I want a high-level overview of the issues around device development so that I can easily find relevant content.
# IoT device development
iot Iot Overview Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-management.md
Last updated 03/20/2023 -
-# As a solution builder or device developer I want a high-level overview of the issues around device management and control so that I can easily find relevant content.
+# Customer intent: As a solution builder or device developer I want a high-level overview of the issues around device management and control so that I can easily find relevant content.
# Device management and control
iot Iot Overview Message Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-message-processing.md
Last updated 04/03/2023 -
-# As a solution builder or device developer I want a high-level overview of the message processing in IoT solutions so that I can easily find relevant content for my scenario.
+# Customer intent: As a solution builder or device developer I want a high-level overview of the message processing in IoT solutions so that I can easily find relevant content for my scenario.
# Message processing in an IoT solution
iot Iot Overview Scalability High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-scalability-high-availability.md
Last updated 05/18/2023 -
-# As a solution builder, I want a high-level overview of the options for scalability, high availability, and disaster recovery in an IoT solution so that I can easily find relevant content for my scenario.
+# Customer intent: As a solution builder, I want a high-level overview of the options for scalability, high availability, and disaster recovery in an IoT solution so that I can easily find relevant content for my scenario.
# IoT solution scalability, high availability, and disaster recovery
iot Iot Overview Solution Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-extensibility.md
Last updated 04/03/2023 -
-# As a solution builder, I want a high-level overview of the options for extending an IoT solution so that I can easily find relevant content for my scenario.
+# Customer intent: As a solution builder, I want a high-level overview of the options for extending an IoT solution so that I can easily find relevant content for my scenario.
# Extend your IoT solution
iot Iot Overview Solution Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-management.md
Last updated 05/04/2023
-# As a solution builder, I want a high-level overview of the options for managing an IoT solution so that I can easily find relevant content for my scenario.
+# Customer intent: As a solution builder, I want a high-level overview of the options for managing an IoT solution so that I can easily find relevant content for my scenario.
# Manage your IoT solution
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md
To manage control plane permissions for the Managed HSM resource, you must use [
|Managed HSM Policy Administrator| Grants permissions to create and delete role assignments.|4bd23610-cdcf-4971-bdee-bdc562cc28e4|
|Managed HSM Crypto Auditor|Grants permissions to read (but not use) key attributes.|2c18b078-7c48-4d3a-af88-5a3a1b3f82b3|
|Managed HSM Crypto Service Encryption User| Grants permissions to use a key for service encryption. |33413926-3206-4cdd-b39a-83574fe37a17|
-|Managed HSM Backup| Grants permissions to perform single-key or whole-HSM backup.|7b127d3c-77bd-4e3e-bbe0-dbb8971fa7f8|
|Managed HSM Crypto Service Release User| Grants permissions to release a key to a trusted execution environment. |21dbd100-6940-42c2-9190-5d6cb909625c|
+|Managed HSM Backup| Grants permissions to perform single-key or whole-HSM backup.|7b127d3c-77bd-4e3e-bbe0-dbb8971fa7f8|
+|Managed HSM Restore| Grants permissions to perform single-key or whole-HSM restore. |6efe6056-5259-49d2-8b3d-d3d73544b20b|
## Permitted operations
To manage control plane permissions for the Managed HSM resource, you must use [
> - All the data action names have the prefix **Microsoft.KeyVault/managedHsm**, which is omitted in the table for brevity.
> - All role names have the prefix **Managed HSM**, which is omitted in the following table for brevity.
-|Data action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption User | Backup | Crypto Auditor| Crypto Service Released User|
-||::|::|::|::|::|::|::|::|
-|**Security domain management**|||||||||
-|/securitydomain/download/action|X||||||||
-|/securitydomain/upload/action|X||||||||
-|/securitydomain/upload/read|X||||||||
-|/securitydomain/transferkey/read|X||||||||
-|**Key management**|||||||||
-|/keys/read/action|||X||X||X||
-|/keys/write/action|||X||||||
-|/keys/rotate/action|||X||||||
-|/keys/create|||X||||||
-|/keys/delete|||X||||||
-|/keys/deletedKeys/read/action||X|||||||
-|/keys/deletedKeys/recover/action||X|||||||
-|/keys/deletedKeys/delete||X|||||X||
-|/keys/backup/action|||X|||X|||
-|/keys/restore/action|||X||||||
-|/keys/release/action|||X|||||X |
-|/keys/import/action|||X||||||
-|**Key cryptographic operations**|||||||||
-|/keys/encrypt/action|||X||||||
-|/keys/decrypt/action|||X||||||
-|/keys/wrap/action|||X||X||||
-|/keys/unwrap/action|||X||X||||
-|/keys/sign/action|||X||||||
-|/keys/verify/action|||X||||||
-|**Role management**|||||||||
-|/roleAssignments/read/action|X|X|X|X|||X||
-|/roleAssignments/write/action|X|X||X|||||
-|/roleAssignments/delete/action|X|X||X|||||
-|/roleDefinitions/read/action|X|X|X|X|||X||
-|/roleDefinitions/write/action|X|X||X|||||
-|/roleDefinitions/delete/action|X|X||X|||||
-|**Backup and restore management**|||||||||
-|/backup/start/action|X|||||X|||
-|/backup/status/action|X|||||X|||
-|/restore/start/action|X||||||||
-|/restore/status/action|X||||||||
+|Data action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption User | Backup | Crypto Auditor | Crypto Service Release User | Restore|
+||::|::|::|::|::|::|::|::|::|
+|**Security domain management**||||||||||
+|/securitydomain/download/action|X|||||||||
+|/securitydomain/upload/action|X|||||||||
+|/securitydomain/upload/read|X|||||||||
+|/securitydomain/transferkey/read|X|||||||||
+|**Key management**||||||||||
+|/keys/read/action|||X||X||X|||
+|/keys/write/action|||X|||||||
+|/keys/rotate/action|||X|||||||
+|/keys/create|||X|||||||
+|/keys/delete|||X|||||||
+|/keys/deletedKeys/read/action||X||||||||
+|/keys/deletedKeys/recover/action||X||||||||
+|/keys/deletedKeys/delete||X|||||X|||
+|/keys/backup/action|||X|||X||||
+|/keys/restore/action|||X||||||X|
+|/keys/release/action|||X|||||X||
+|/keys/import/action|||X|||||||
+|**Key cryptographic operations**||||||||||
+|/keys/encrypt/action|||X|||||||
+|/keys/decrypt/action|||X|||||||
+|/keys/wrap/action|||X||X|||||
+|/keys/unwrap/action|||X||X|||||
+|/keys/sign/action|||X|||||||
+|/keys/verify/action|||X|||||||
+|**Role management**||||||||||
+|/roleAssignments/read/action|X|X|X|X|||X|||
+|/roleAssignments/write/action|X|X||X||||||
+|/roleAssignments/delete/action|X|X||X||||||
+|/roleDefinitions/read/action|X|X|X|X|||X|||
+|/roleDefinitions/write/action|X|X||X||||||
+|/roleDefinitions/delete/action|X|X||X||||||
+|**Backup and restore management**||||||||||
+|/backup/start/action|X|||||X||||
+|/backup/status/action|X|||||X||||
+|/restore/start/action|X||||||||X|
+|/restore/status/action|X||||||||X|
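Assignments for these built-in roles are made on the HSM's local RBAC data plane. As an illustration (not taken from the article), the Azure CLI can grant the new Managed HSM Restore role at the `/` scope; the HSM name and assignee below are placeholders.

```console
# Placeholder values: replace the HSM name and assignee with your own.
az keyvault role assignment create \
  --hsm-name ContosoMHSM \
  --role "Managed HSM Restore" \
  --assignee user@contoso.com \
  --scope /
```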
## Next steps
load-balancer Load Balancer Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md
Last updated 09/27/2023
-#customer-intent: As an cloud engineer with basic Load Balancer services, I need guidance and direction on migrating my workloads off basic to standard SKUs
+# Customer intent: As a cloud engineer with basic Load Balancer services, I need guidance and direction on migrating my workloads from basic to standard SKUs.
# Upgrading from basic Load Balancer - Guidance
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
Last updated 10/10/2023
-#customer intent: As a network engineer, I want to understand how to configure health probes for Azure Load Balancer so that I can detect application failures, manage load, and plan for downtime.
+# Customer intent: As a network engineer, I want to understand how to configure health probes for Azure Load Balancer so that I can detect application failures, manage load, and plan for downtime.
# Azure Load Balancer health probes
load-balancer Quickstart Load Balancer Standard Internal Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-terraform.md
+
+ Title: "Quickstart: Create an internal load balancer - Terraform"
+
+description: This quickstart shows how to create an internal load balancer by using Terraform.
++++++ Last updated : 01/02/2024++
+#Customer intent: I want to create an internal load balancer by using Terraform so that I can load balance internal traffic to VMs.
++
+# Quickstart: Create an internal load balancer to load balance internal traffic to VMs using Terraform
+
+This quickstart shows you how to use Terraform to deploy a standard internal load balancer, two backend virtual machines, and a third virtual machine that you use to test the load balancer. Additional resources include Azure Bastion, a NAT gateway, a virtual network, and the required subnets.
++
+> [!div class="checklist"]
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create an Azure Virtual Network using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network)
+> * Create an Azure subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet)
+> * Create an Azure public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip)
+> * Create an Azure Load Balancer using [azurerm_lb](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/lb)
+> * Create an Azure network interface using [azurerm_network_interface](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface)
+> * Create an Azure network interface load balancer backend address pool association using [azurerm_network_interface_backend_address_pool_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_backend_address_pool_association)
+> * Create an Azure Linux Virtual Machine using [azurerm_linux_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine)
+> * Create an Azure Virtual Machine Extension using [azurerm_virtual_machine_extension](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_extension)
+> * Create an Azure NAT Gateway using [azurerm_nat_gateway](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/nat_gateway)
+> * Create an Azure Bastion using [azurerm_bastion_host](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/bastion_host)
+
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform).
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ ```
+ terraform {
+   required_version = ">=0.12"
+
+   required_providers {
+     azapi = {
+       source = "azure/azapi"
+       version = "~>1.5"
+     }
+     azurerm = {
+       source = "hashicorp/azurerm"
+       version = "~>2.0"
+     }
+     random = {
+       source = "hashicorp/random"
+       version = "~>3.0"
+     }
+   }
+ }
+
+ provider "azurerm" {
+   features {}
+ }
+ ```
+
+1. Create a file named `main.tf` and insert the following code:
+
+ ```
+ resource "random_string" "my_resource_group" {
+   length  = 8
+   upper   = false
+   special = false
+ }
+
+ # Create Resource Group
+ resource "azurerm_resource_group" "my_resource_group" {
+  name     = "${var.resource_group_name}-${random_string.my_resource_group.result}"
+  location = var.resource_group_location
+ }
+
+ # Create Virtual Network
+ resource "azurerm_virtual_network" "my_virtual_network" {
+   name = var.virtual_network_name
+   address_space = ["10.0.0.0/16"]
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+ }
+
+ # Create a subnet in the Virtual Network
+ resource "azurerm_subnet" "my_subnet" {
+   name = var.subnet_name
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   virtual_network_name = azurerm_virtual_network.my_virtual_network.name
+   address_prefixes = ["10.0.1.0/24"]
+ }
+
+ # Create a subnet named as "AzureBastionSubnet" in the Virtual Network for creating Azure Bastion
+ resource "azurerm_subnet" "my_bastion_subnet" {
+   name = "AzureBastionSubnet"
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   virtual_network_name = azurerm_virtual_network.my_virtual_network.name
+   address_prefixes = ["10.0.2.0/24"]
+ }
+
+ # Create Network Security Group and rules
+ resource "azurerm_network_security_group" "my_nsg" {
+   name = var.network_security_group_name
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+
+   security_rule {
+     name = "ssh"
+     priority = 1022
+     direction = "Inbound"
+     access = "Allow"
+     protocol = "Tcp"
+     source_port_range = "*"
+     destination_port_range = "22"
+     source_address_prefix = "*"
+     destination_address_prefix = "10.0.1.0/24"
+   }
+
+   security_rule {
+     name = "web"
+     priority = 1080
+     direction = "Inbound"
+     access = "Allow"
+     protocol = "Tcp"
+     source_port_range = "*"
+     destination_port_range = "80"
+     source_address_prefix = "*"
+     destination_address_prefix = "10.0.1.0/24"
+   }
+ }
+
+ # Associate the Network Security Group to the subnet
+ resource "azurerm_subnet_network_security_group_association" "my_nsg_association" {
+   subnet_id = azurerm_subnet.my_subnet.id
+   network_security_group_id = azurerm_network_security_group.my_nsg.id
+ }
+
+ # Create Public IPs
+ resource "azurerm_public_ip" "my_public_ip" {
+   count = 2
+   name = "${var.public_ip_name}-${count.index}"
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   allocation_method = "Static"
+   sku = "Standard"
+ }
+
+ # Create a NAT Gateway for outbound internet access of the Virtual Machines in the Backend Pool of the Load Balancer
+ resource "azurerm_nat_gateway" "my_nat_gateway" {
+   name = var.nat_gateway
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   sku_name = "Standard"
+ }
+
+ # Associate one of the Public IPs to the NAT Gateway
+ resource "azurerm_nat_gateway_public_ip_association" "my_nat_gateway_ip_association" {
+   nat_gateway_id = azurerm_nat_gateway.my_nat_gateway.id
+   public_ip_address_id = azurerm_public_ip.my_public_ip[0].id
+ }
+
+ # Associate the NAT Gateway to subnet
+ resource "azurerm_subnet_nat_gateway_association" "my_nat_gateway_subnet_association" {
+   subnet_id = azurerm_subnet.my_subnet.id
+   nat_gateway_id = azurerm_nat_gateway.my_nat_gateway.id
+ }
+
+ # Create Network Interfaces
+ resource "azurerm_network_interface" "my_nic" {
+   count = 3
+   name = "${var.network_interface_name}-${count.index}"
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+
+   ip_configuration {
+     name = "ipconfig-${count.index}"
+     subnet_id = azurerm_subnet.my_subnet.id
+     private_ip_address_allocation = "Dynamic"
+     primary = true
+   }
+ }
+
+ # Create Azure Bastion for accessing the Virtual Machines
+ resource "azurerm_bastion_host" "my_bastion" {
+   name = var.bastion_name
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   sku = "Standard"
+
+   ip_configuration {
+     name = "ipconfig"
+     subnet_id = azurerm_subnet.my_bastion_subnet.id
+     public_ip_address_id = azurerm_public_ip.my_public_ip[1].id
+   }
+ }
+
+ # Associate Network Interface to the Backend Pool of the Load Balancer
+ resource "azurerm_network_interface_backend_address_pool_association" "my_nic_lb_pool" {
+   count = 2
+   network_interface_id = azurerm_network_interface.my_nic[count.index].id
+   ip_configuration_name = "ipconfig-${count.index}"
+   backend_address_pool_id = azurerm_lb_backend_address_pool.my_lb_pool.id
+ }
+
+ # Create Virtual Machine
+ resource "azurerm_linux_virtual_machine" "my_vm" {
+   count = 3
+   name = "${var.virtual_machine_name}-${count.index}"
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   network_interface_ids = [azurerm_network_interface.my_nic[count.index].id]
+   size = var.virtual_machine_size
+
+   os_disk {
+     name = "${var.disk_name}-${count.index}"
+     caching = "ReadWrite"
+     storage_account_type = var.redundancy_type
+   }
+
+   source_image_reference {
+     publisher = "Canonical"
+     offer = "0001-com-ubuntu-server-jammy"
+     sku = "22_04-lts-gen2"
+     version = "latest"
+   }
+
+   admin_username = var.username
+   admin_password = var.password
+   disable_password_authentication = false
+
+ }
+
+ # Enable virtual machine extension and install Nginx
+ resource "azurerm_virtual_machine_extension" "my_vm_extension" {
+   count = 2
+   name = "Nginx"
+   virtual_machine_id = azurerm_linux_virtual_machine.my_vm[count.index].id
+   publisher = "Microsoft.Azure.Extensions"
+   type = "CustomScript"
+   type_handler_version = "2.0"
+
+   settings = <<SETTINGS
+  {
+   "commandToExecute": "sudo apt-get update && sudo apt-get install nginx -y && echo \"Hello World from $(hostname)\" > /var/www/html/https://docsupdatetracker.net/index.html && sudo systemctl restart nginx"
+  }
+ SETTINGS
+
+ }
+
+ # Create an Internal Load Balancer
+ resource "azurerm_lb" "my_lb" {
+   name = var.load_balancer_name
+   location = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   sku = "Standard"
+
+   frontend_ip_configuration {
+     name = "frontend-ip"
+     subnet_id = azurerm_subnet.my_subnet.id
+     private_ip_address_allocation = "Dynamic"
+   }
+ }
+
+ resource "azurerm_lb_backend_address_pool" "my_lb_pool" {
+   loadbalancer_id = azurerm_lb.my_lb.id
+   name = "test-pool"
+ }
+
+ resource "azurerm_lb_probe" "my_lb_probe" {
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   loadbalancer_id = azurerm_lb.my_lb.id
+   name = "test-probe"
+   port = 80
+ }
+
+ resource "azurerm_lb_rule" "my_lb_rule" {
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   loadbalancer_id = azurerm_lb.my_lb.id
+   name = "test-rule"
+   protocol = "Tcp"
+   frontend_port = 80
+   backend_port = 80
+   disable_outbound_snat = true
+   frontend_ip_configuration_name = "frontend-ip"
+   probe_id = azurerm_lb_probe.my_lb_probe.id
+   backend_address_pool_ids = [azurerm_lb_backend_address_pool.my_lb_pool.id]
+ }
+ ```
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ ```
+ variable "resource_group_location" {
+   type = string
+   default = "eastus"
+   description = "Location of the resource group."
+ }
+
+ variable "resource_group_name" {
+   type = string
+   default = "test-group"
+   description = "Name of the resource group."
+ }
+
+ variable "username" {
+   type = string
+   default = "microsoft"
+   description = "The username for the local account that will be created on the new VM."
+ }
+
+ variable "password" {
+   type = string
+   default = "Microsoft@123"
+   description = "The passoword for the local account that will be created on the new VM."
+ }
+
+ variable "virtual_network_name" {
+   type = string
+   default = "test-vnet"
+   description = "Name of the Virtual Network."
+ }
+
+ variable "subnet_name" {
+   type = string
+   default = "test-subnet"
+   description = "Name of the subnet."
+ }
+
+ variable public_ip_name {
+   type = string
+   default = "test-public-ip"
+   description = "Name of the Public IP for the NAT Gateway."
+ }
+
+ variable "nat_gateway" {
+   type = string
+   default = "test-nat"
+   description = "Name of the NAT gateway."
+ }
+
+ variable "bastion_name" {
+   type = string
+   default = "test-bastion"
+   description = "Name of the Bastion."
+ }
+
+ variable network_security_group_name {
+   type = string
+   default = "test-nsg"
+   description = "Name of the Network Security Group."
+ }
+
+ variable "network_interface_name" {
+   type = string
+   default = "test-nic"
+   description = "Name of the Network Interface."  
+ }
+
+ variable "virtual_machine_name" {
+   type = string
+   default = "test-vm"
+   description = "Name of the Virtual Machine."
+ }
+
+ variable "virtual_machine_size" {
+   type = string
+   default = "Standard_B2s"
+   description = "Size or SKU of the Virtual Machine."
+ }
+
+ variable "disk_name" {
+   type = string
+   default = "test-disk"
+   description = "Name of the OS disk of the Virtual Machine."
+ }
+
+ variable "redundancy_type" {
+   type = string
+   default = "Standard_LRS"
+   description = "Storage redundancy type of the OS disk."
+ }
+
+ variable "load_balancer_name" {
+   type = string
+   default = "test-lb"
+   description = "Name of the Load Balancer."
+ }
+ ```
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ ```
+ output "private_ip_address" {
+ value = "http://${azurerm_lb.my_lb.private_ip_address}"
+ }
+ ```
+
+## Initialize Terraform
++
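The include referenced above covers initializing Terraform. A minimal sketch of the usual command (the `-upgrade` flag, which updates provider plugins, is optional):

```console
terraform init -upgrade
```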
+## Create a Terraform execution plan
++
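The include above covers creating the execution plan. A minimal sketch, assuming you save the plan to a file named `main.tfplan`:

```console
terraform plan -out main.tfplan
```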
+## Apply a Terraform execution plan
++
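The include above covers applying the plan. A minimal sketch that applies the previously saved `main.tfplan` file:

```console
terraform apply main.tfplan
```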
+## Verify the results
+
+1. When you apply the execution plan, Terraform displays the frontend private IP address. If you've cleared the screen, you can retrieve that value with the following Terraform command:
+
+ ```console
+ echo $(terraform output -raw private_ip_address)
+ ```
+
+1. Use Bastion to sign in to the VM that isn't associated with the backend pool of the load balancer.
+
+1. Run the curl command to access the custom web page of the Nginx web server using the frontend private IP address of the load balancer.
+
+ ```
+ curl http://<Frontend IP address>
+ ```
+
+## Clean up resources
++
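The include above covers removing the resources created in this quickstart. A minimal sketch, assuming you want to review the destroy plan before applying it:

```console
terraform plan -destroy -out main.destroy.tfplan
terraform apply main.destroy.tfplan
```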
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+In this quickstart, you:
+
+- Created an internal Azure Load Balancer
+
+- Attached two VMs to the load balancer
+
+- Configured the load balancer traffic rule, health probe, and then tested the load balancer
+
+To learn more about Azure Load Balancer, continue to:
+> [!div class="nextstepaction"]
+> [What is Azure Load Balancer?](load-balancer-overview.md)
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
The subnet you use for deploying the load test can't be delegated to another Azu
Learn more about [adding or removing a subnet delegation](/azure/virtual-network/manage-subnet-delegation#remove-subnet-delegation-from-an-azure-service).
-### Starting the load test fails with `User doesn't have subnet/join/action permission on the virtual network (ALTVNET004)`
+### Updating or starting the load test fails with `User doesn't have subnet/join/action permission on the virtual network (ALTVNET004)`
-To start a load test, you must have sufficient permissions to deploy Azure Load Testing to the virtual network. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network.
+To update or start a load test, you must have sufficient permissions to deploy Azure Load Testing to the virtual network. You require the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network.
1. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions.
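If the check shows that the Network Contributor role is missing, someone with permission to manage access on the virtual network can assign it. A sketch with placeholder values:

```console
# Placeholder values: supply your principal's object ID and the virtual network's resource ID.
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Network Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
```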
logic-apps Biztalk Server Azure Integration Services Migration Approaches https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-azure-integration-services-migration-approaches.md
Last updated 01/04/2024
-# As a BizTalk Server customer, I want to learn about migration options, planning considerations, and best practices for moving from BizTalk Server to Azure Integration Services.
+# Customer intent: As a BizTalk Server customer, I want to learn about migration options, planning considerations, and best practices for moving from BizTalk Server to Azure Integration Services.
# Migration approaches for BizTalk Server to Azure Integration Services
logic-apps Biztalk Server To Azure Integration Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md
Last updated 01/04/2024
-# As a BizTalk Server customer, I want to better understand why I should migrate to Azure Integration Services in the cloud from on-premises BizTalk Server.
+# Customer intent: As a BizTalk Server customer, I want to better understand why I should migrate to Azure Integration Services in the cloud from on-premises BizTalk Server.
# Why migrate from BizTalk Server to Azure Integration Services?
logic-apps Create Maps Data Transformation Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-maps-data-transformation-visual-studio-code.md
ms.suite: integration
Last updated 11/15/2023
-# As a developer, I want to transform data in Azure Logic Apps by creating a map between schemas with Visual Studio Code.
+# Customer intent: As a developer, I want to transform data in Azure Logic Apps by creating a map between schemas with Visual Studio Code.
# Create maps to transform data in Azure Logic Apps with Visual Studio Code
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
ms.suite: integration
Last updated 01/04/2024
-# As a developer, I want learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows.
+# Customer intent: As a developer, I want learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows.
# Custom connectors in Azure Logic Apps
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
Last updated 10/09/2023
-# As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
+# Customer intent: As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
# Deploy single-tenant Standard logic apps to private storage accounts using private endpoints
logic-apps Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md
ms.suite: integration
Last updated 01/04/2024-
-# As a developer, I want to learn about DevOps deployment support for single-tenant Azure Logic Apps.
+# Customer intent: As a developer, I want to learn about DevOps deployment support for single-tenant Azure Logic Apps.
# DevOps deployment for single-tenant Azure Logic Apps
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
For Azure Logic Apps to receive incoming communication through your firewall, yo
| Norway East | 51.120.88.93, 51.13.66.86, 51.120.89.182, 51.120.88.77, 20.100.27.17, 20.100.36.102 |
| Norway West | 51.120.220.160, 51.120.220.161, 51.120.220.162, 51.120.220.163, 51.13.155.184, 51.13.151.90 |
| Poland Central | 20.215.144.231, 20.215.145.0 |
+| Qatar Central | 20.21.211.241, 20.21.211.242 |
| South Africa North | 102.133.228.4, 102.133.224.125, 102.133.226.199, 102.133.228.9, 20.87.92.64, 20.87.91.171 |
| South Africa West | 102.133.72.190, 102.133.72.145, 102.133.72.184, 102.133.72.173, 40.117.9.225, 102.133.98.91 |
| South Central US | 13.65.98.39, 13.84.41.46, 13.84.43.45, 40.84.138.132, 20.94.151.41, 20.88.209.113 |
| South India | 52.172.9.47, 52.172.49.43, 52.172.51.140, 104.211.225.152, 104.211.221.215, 104.211.205.148 |
| Southeast Asia | 52.163.93.214, 52.187.65.81, 52.187.65.155, 104.215.181.6, 20.195.49.246, 20.198.130.155, 23.98.121.180 |
+| Sweden Central | 20.91.178.13, 20.240.10.125 |
| Switzerland North | 51.103.128.52, 51.103.132.236, 51.103.134.138, 51.103.136.209, 20.203.230.170, 20.203.227.226 |
| Switzerland West | 51.107.225.180, 51.107.225.167, 51.107.225.163, 51.107.239.66, 51.107.235.139, 51.107.227.18 |
| UAE Central | 20.45.75.193, 20.45.64.29, 20.45.64.87, 20.45.71.213, 40.126.212.77, 40.126.209.97 |
This section lists the outbound IP addresses that Azure Logic Apps requires in y
| Norway East | 51.120.88.52, 51.120.88.51, 51.13.65.206, 51.13.66.248, 51.13.65.90, 51.13.65.63, 51.13.68.140, 51.120.91.248, 20.100.26.148, 20.100.26.52, 20.100.36.49, 20.100.36.10 |
| Norway West | 51.120.220.128, 51.120.220.129, 51.120.220.130, 51.120.220.131, 51.120.220.132, 51.120.220.133, 51.120.220.134, 51.120.220.135, 51.13.153.172, 51.13.148.178, 51.13.148.11, 51.13.149.162 |
| Poland Central | 20.215.144.229, 20.215.128.160, 20.215.144.235, 20.215.144.246 |
+| Qatar Central | 20.21.211.240, 20.21.209.216, 20.21.211.245, 20.21.210.251 |
| South Africa North | 102.133.231.188, 102.133.231.117, 102.133.230.4, 102.133.227.103, 102.133.228.6, 102.133.230.82, 102.133.231.9, 102.133.231.51, 20.87.92.40, 20.87.91.122, 20.87.91.169, 20.87.88.47 |
| South Africa West | 102.133.72.98, 102.133.72.113, 102.133.75.169, 102.133.72.179, 102.133.72.37, 102.133.72.183, 102.133.72.132, 102.133.75.191, 102.133.101.220, 40.117.9.125, 40.117.10.230, 40.117.9.229 |
| South Central US | 104.210.144.48, 13.65.82.17, 13.66.52.232, 23.100.124.84, 70.37.54.122, 70.37.50.6, 23.100.127.172, 23.101.183.225, 20.94.150.220, 20.94.149.199, 20.88.209.97, 20.88.209.88 |
| South India | 52.172.50.24, 52.172.55.231, 52.172.52.0, 104.211.229.115, 104.211.230.129, 104.211.230.126, 104.211.231.39, 104.211.227.229, 104.211.211.221, 104.211.210.192, 104.211.213.78, 104.211.218.202 |
| Southeast Asia | 13.76.133.155, 52.163.228.93, 52.163.230.166, 13.76.4.194, 13.67.110.109, 13.67.91.135, 13.76.5.96, 13.67.107.128, 20.195.49.240, 20.195.49.29, 20.198.130.152, 20.198.128.124, 23.98.121.179, 23.98.121.115 |
+| Sweden Central | 20.91.178.11, 20.91.177.115, 20.240.10.91, 20.240.10.89 |
| Switzerland North | 51.103.137.79, 51.103.135.51, 51.103.139.122, 51.103.134.69, 51.103.138.96, 51.103.138.28, 51.103.136.37, 51.103.136.210, 20.203.230.58, 20.203.229.127, 20.203.224.37, 20.203.225.242 |
| Switzerland West | 51.107.239.66, 51.107.231.86, 51.107.239.112, 51.107.239.123, 51.107.225.190, 51.107.225.179, 51.107.225.186, 51.107.225.151, 51.107.239.83, 51.107.232.61, 51.107.234.254, 51.107.226.253, 20.199.193.249 |
| UAE Central | 20.45.75.200, 20.45.72.72, 20.45.75.236, 20.45.79.239, 20.45.67.170, 20.45.72.54, 20.45.67.134, 20.45.67.135, 40.126.210.93, 40.126.209.151, 40.126.208.156, 40.126.214.92 |
logic-apps Logic Apps Perform Data Operations