Updates from: 01/20/2024 02:22:13
Service | Microsoft Docs article | Related commit history on GitHub | Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Title: Token enrichment - Azure Active Directory B2C description: Enrich tokens with claims from external identity data sources using APIs or outbound webhooks. Last updated 01/17/2023
active-directory-b2c Conditional Access Identity Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-identity-protection-overview.md
Title: Identity Protection and Conditional Access in Azure AD B2C description: Learn how Identity Protection gives you visibility into risky sign-ins and risk detections. Find out how Conditional Access lets you enforce organizational policies based on risk events in your Azure AD B2C tenants. Last updated 01/11/2024 #Customer intent: As an Azure AD B2C application owner, I want to enhance the security of my applications by using Identity Protection and Conditional Access, so that I can detect and respond to risky authentications and enforce organizational policies.
active-directory-b2c Configure Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-tokens.md
Last updated 01/11/2024- zone_pivot_groups: b2c-policy-type - #Customer intent: As a developer configuring token lifetimes in Azure Active Directory B2C, I want to understand the options and settings available for token lifetime and compatibility, so that I can customize them to fit the needs of my application and ensure secure access to resources.
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-user-input.md
Title: Add user attributes and customize user input description: Learn how to customize user input and add user attributes to the sign-up or sign-in journey in Azure Active Directory B2C. Last updated 12/13/2023 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-protection-investigate-risk.md
Title: Investigate risk with Azure Active Directory B2C Identity Protection description: Learn how to investigate risky users and detections in Azure AD B2C Identity Protection. Last updated 09/16/2021
active-directory-b2c Identity Provider Adfs Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs-saml.md
Title: Add AD FS as a SAML identity provider by using custom policies description: Set up AD FS 2016 using the SAML protocol and custom policies in Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md
Title: Add AD FS as an OpenID Connect identity provider by using custom policies description: Set up AD FS 2016 using the OpenID Connect protocol and custom policies in Azure Active Directory B2C. Last updated 06/08/2022 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-amazon.md
Title: Set up sign-up and sign-in with an Amazon account description: Provide sign-up and sign-in to customers with Amazon accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021
You need to store the client secret that you previously recorded in your Azure AD B2C tenant.
## Configure Amazon as an identity provider
-To enable users to sign in using a Amazon account, you need to define the account as a claims provider. that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using an Amazon account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define an Amazon account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-apple-id.md
Title: Set up sign-up and sign-in with an Apple ID description: Provide sign-up and sign-in to customers with Apple ID in your applications using Azure Active Directory B2C. Last updated 11/02/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
Title: Set up sign-up and sign-in with a Facebook account description: Provide sign-up and sign-in to customers with Facebook accounts in your applications using Azure Active Directory B2C. Last updated 03/10/2022 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Title: Set sign-in with SAML identity provider options description: Configure sign-in SAML identity provider (IdP) options in Azure Active Directory B2C. Last updated 03/20/2023 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml.md
Title: Set up sign-up and sign-in with SAML identity provider description: Set up sign-up and sign-in with any SAML identity provider (IdP) in Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md
Last updated 03/10/2022- zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
Last updated 12/13/2023- zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-linkedin.md
Title: Set up sign-up and sign-in with a LinkedIn account description: Provide sign-up and sign-in to customers with LinkedIn accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md
Title: Set up sign-up and sign-in with a Microsoft Account description: Provide sign-up and sign-in to customers with Microsoft Accounts in your applications using Azure Active Directory B2C. Last updated 05/01/2023 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Ping One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ping-one.md
Title: Set up sign-up and sign-in with a PingOne account description: Provide sign-up and sign-in to customers with PingOne accounts in your applications using Azure Active Directory B2C. Last updated 12/2/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Qq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-qq.md
Title: Set up sign-up and sign-in with a QQ account using Azure Active Directory B2C description: Provide sign-up and sign-in to customers with QQ accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Salesforce Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce-saml.md
Title: Set up sign-in with a Salesforce SAML provider by using SAML protocol description: Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce.md
Title: Set up sign-up and sign-in with a Salesforce account description: Provide sign-up and sign-in to customers with Salesforce accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
Title: Set up sign-up and sign-in with a Twitter account description: Provide sign-up and sign-in to customers with Twitter accounts in your applications using Azure Active Directory B2C. Last updated 07/20/2022 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Wechat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-wechat.md
Title: Set up sign-up and sign-in with a WeChat account description: Provide sign-up and sign-in to customers with WeChat accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Weibo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-weibo.md
Title: Set up sign-up and sign-in with a Weibo account description: Provide sign-up and sign-in to customers with Weibo accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type
active-directory-b2c Language Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md
Title: Language customization in Azure Active Directory B2C description: Learn about customizing the language experience in your user flows in Azure Active Directory B2C. Last updated 12/28/2022 zone_pivot_groups: b2c-policy-type
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/session-behavior.md
Title: Configure session behavior - Azure Active Directory B2C description: Learn how to configure session behavior in Azure Active Directory B2C. Last updated 01/11/2024 zone_pivot_groups: b2c-policy-type #Customer intent: As a developer configuring session behavior in Azure Active Directory B2C, I want to understand the different types of single sign-on sessions (Azure AD B2C session, federated identity provider session, application session) and how to configure their behavior, so that I can implement the most appropriate SSO method for my policy.
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot-with-application-insights.md
Title: Troubleshoot custom policies with Application Insights description: How to set up Application Insights to trace the execution of your custom policies. Last updated 01/11/2024 zone_pivot_groups: b2c-policy-type #Customer intent: As a developer working with Azure Active Directory B2C, I want to collect logs from my custom policies using Application Insights, so that I can diagnose and troubleshoot any problems that may occur.
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
Last updated 11/10/2023 - zone_pivot_groups: b2c-policy-type
active-directory-b2c Tutorial Register Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-applications.md
Last updated 11/13/2023-
ai-services Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/batch-inference.md
Previously updated : 11/01/2022 Last updated : 01/18/2024
ai-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/create-resource.md
Previously updated : 11/01/2022 Last updated : 01/18/2024
ai-services Deploy Anomaly Detection On Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-container-instances.md
Previously updated : 04/01/2020 Last updated : 01/18/2024
ai-services Deploy Anomaly Detection On Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-iot-edge.md
Previously updated : 12/03/2020 Last updated : 01/18/2024
ai-services Identify Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/identify-anomalies.md
Previously updated : 10/01/2019 Last updated : 01/18/2024
ai-services Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/postman.md
Previously updated : 12/20/2022 Last updated : 01/18/2024
ai-services Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/prepare-data.md
Previously updated : 11/01/2022 Last updated : 01/18/2024
ai-services Streaming Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/streaming-inference.md
Previously updated : 11/01/2022 Last updated : 01/18/2024
ai-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/train-model.md
Previously updated : 11/01/2022 Last updated : 01/18/2024
ai-services Anomaly Detector Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-configuration.md
Previously updated : 05/07/2020 Last updated : 01/18/2024
ai-services Anomaly Detector Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-howto.md
Previously updated : 01/27/2023 Last updated : 01/18/2024 keywords: on-premises, Docker, container, streaming, algorithms
ai-services Anomaly Detection Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/anomaly-detection-best-practices.md
Previously updated : 01/22/2021 Last updated : 01/18/2024
ai-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/best-practices-multivariate.md
Previously updated : 06/07/2022 Last updated : 01/18/2024 keywords: anomaly detection, machine learning, algorithms
ai-services Multivariate Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/multivariate-architecture.md
Previously updated : 12/15/2022 Last updated : 01/18/2024 keywords: anomaly detection, machine learning, algorithms
ai-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/troubleshoot.md
Previously updated : 04/01/2021 Last updated : 01/18/2024 keywords: anomaly detection, machine learning, algorithms
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/overview.md
Previously updated : 10/27/2022 Last updated : 01/18/2024 keywords: anomaly detection, machine learning, algorithms- # What is Anomaly Detector?
ai-services Client Libraries Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md
zone_pivot_groups: anomaly-detector-quickstart-multivariate Previously updated : 10/27/2022 Last updated : 01/18/2024 keywords: anomaly detection, algorithms ms.devlang: csharp
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries.md
zone_pivot_groups: anomaly-detector-quickstart Previously updated : 10/27/2022 Last updated : 01/18/2024 keywords: anomaly detection, algorithms ms.devlang: csharp
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/regions.md
Previously updated : 11/1/2022 Last updated : 01/18/2024
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/service-limits.md
Previously updated : 1/31/2023 Last updated : 01/18/2024 - # Anomaly Detector service quotas and limits
ai-services Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/azure-data-explorer.md
Previously updated : 12/19/2022 Last updated : 01/18/2024
ai-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
Previously updated : 09/10/2020 Last updated : 01/18/2024
ai-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
Previously updated : 08/03/2022 Last updated : 01/18/2024
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/whats-new.md
Previously updated : 12/15/2022 Last updated : 01/18/2024 # What's new in Anomaly Detector
ai-services App Schema Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/app-schema-definition.md
Previously updated : 08/22/2020 Last updated : 01/19/2024 # App schema definition
ai-services Choose Natural Language Processing Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/choose-natural-language-processing-service.md
Previously updated : 10/20/2020 Last updated : 01/19/2024 # Use Azure AI services with natural language processing (NLP) to enrich chat bot conversations
ai-services Client Libraries Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/client-libraries-rest-api.md
Title: "Quickstart: Language Understanding (LUIS) SDK client libraries and REST API" description: Create and query a LUIS app with the LUIS SDK client libraries and REST API. Previously updated : 03/07/2022 Last updated : 01/19/2024
ai-services Application Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/application-design.md
Previously updated : 01/10/2022 Last updated : 01/19/2024
ai-services Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/entities.md
Previously updated : 07/19/2022 Last updated : 01/19/2024
ai-services Intents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/intents.md
Previously updated : 07/19/2022 Last updated : 01/19/2024 # Intents
ai-services Patterns Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/patterns-features.md
Previously updated : 07/19/2022 Last updated : 01/19/2024
ai-services Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/utterances.md
ms. Previously updated : 07/19/2022 Last updated : 01/19/2024 # Utterances
ai-services Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/data-collection.md
Previously updated : 05/06/2020 Last updated : 01/19/2024 # Data collection for your app
ai-services Developer Reference Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/developer-reference-resource.md
Previously updated : 01/12/2021 Last updated : 01/19/2024 ms.devlang: csharp # ms.devlang: csharp, javascript
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/encrypt-data-at-rest.md
Previously updated : 08/28/2020 Last updated : 01/19/2024 #Customer intent: As a user of the Language Understanding (LUIS) service, I want to learn how encryption at rest works.
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/faq.md
ms. Previously updated : 07/19/2022 Last updated : 01/19/2024 # Language Understanding Frequently Asked Questions (FAQ)
ai-services Get Started Get Model Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/get-started-get-model-rest-apis.md
Previously updated : 11/30/2020 Last updated : 01/19/2024 zone_pivot_groups: programming-languages-set-one #Customer intent: As an API developer familiar with REST but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
ai-services How To Application Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to-application-settings-portal.md
Previously updated : 11/30/2020 Last updated : 01/19/2024 # Application and version settings
ai-services Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/entities.md
ms. Previously updated : 01/05/2022 Last updated : 01/19/2024 # Add entities to extract data
ai-services Improve Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/improve-application.md
ms. Previously updated : 01/07/2022 Last updated : 01/19/2024 # How to improve a LUIS app
ai-services Intents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/intents.md
ms. Previously updated : 01/07/2022 Last updated : 01/19/2024
ai-services Label Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/label-utterances.md
ms. Previously updated : 01/05/2022 Last updated : 01/19/2024 # How to label example utterances
ai-services Orchestration Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/orchestration-projects.md
ms. Previously updated : 05/23/2022 Last updated : 01/19/2024 # Combine LUIS and question answering capabilities
ai-services Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/publish.md
ms. Previously updated : 12/14/2021 Last updated : 01/19/2024 # Publish your active, trained app
ai-services Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/sign-in.md
ms. Previously updated : 07/19/2022 Last updated : 01/19/2024 # Sign in to the LUIS portal and create an app
ai-services Train Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/train-test.md
ms. Previously updated : 01/10/2022 Last updated : 01/19/2024 # Train and test your LUIS app
ai-services Howto Add Prebuilt Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/howto-add-prebuilt-models.md
Previously updated : 05/17/2020 Last updated : 01/19/2024
ai-services Luis Concept Data Alteration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-alteration.md
Previously updated : 05/06/2020 Last updated : 01/19/2024 ms.devlang: csharp
ai-services Luis Concept Data Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-conversion.md
Previously updated : 03/21/2022 Last updated : 01/19/2024
ai-services Luis Concept Data Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-extraction.md
Previously updated : 03/21/2022 Last updated : 01/19/2024 # Extract data from utterance text with intents and entities
ai-services Luis Concept Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-storage.md
Previously updated : 12/07/2020 Last updated : 01/19/2024 # Data storage and removal in Language Understanding (LUIS) Azure AI services
ai-services Luis Concept Devops Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-automation.md
description: How to implement CI/CD workflows for DevOps for Language Understand
Previously updated : 06/01/2021 Last updated : 01/19/2024
ai-services Luis Concept Devops Sourcecontrol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-sourcecontrol.md
description: How to maintain your Language Understanding (LUIS) app under source
Previously updated : 06/14/2022 Last updated : 01/19/2024
ai-services Luis Concept Devops Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-testing.md
Previously updated : 06/3/2020 Last updated : 01/19/2024 # Testing for LUIS DevOps
ai-services Luis Concept Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-model.md
ms. Previously updated : 01/07/2022 Last updated : 01/19/2024 # Design with intent and entity models
ai-services Luis Concept Prebuilt Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-prebuilt-model.md
Previously updated : 10/10/2019 Last updated : 01/19/2024
ai-services Luis Concept Prediction Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-prediction-score.md
Previously updated : 04/14/2020 Last updated : 01/19/2024 # Prediction scores indicate prediction accuracy for intent and entities
ai-services Luis Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-configuration.md
Previously updated : 04/01/2020 Last updated : 01/19/2024
ai-services Luis Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-howto.md
Previously updated : 03/02/2023 Last updated : 01/19/2024 keywords: on-premises, Docker, container
ai-services Luis Container Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-limitations.md
Previously updated : 10/28/2021 Last updated : 01/19/2024
ai-services Luis Get Started Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-create-app.md
ms. Previously updated : 07/19/2022 Last updated : 01/19/2024 #Customer intent: As a new user, I want to quickly get a LUIS app created so I can understand the model and actions to train, test, publish, and query.
ai-services Luis Get Started Get Intent From Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-get-intent-from-browser.md
Previously updated : 03/26/2021 Last updated : 01/19/2024 #Customer intent: As a developer familiar with how to use a browser but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
ai-services Luis Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-glossary.md
description: The glossary explains terms that you might encounter as you work wi
Previously updated : 03/21/2022 Last updated : 01/19/2024
ai-services Luis How To Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-azure-subscription.md
Previously updated : 07/19/2022 Last updated : 01/19/2024 ms.devlang: azurecli
ai-services Luis How To Batch Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-batch-test.md
Previously updated : 01/19/2022 Last updated : 01/19/2024
ai-services Luis How To Collaborate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-collaborate.md
Previously updated : 05/17/2021 Last updated : 01/19/2024
ai-services Luis How To Manage Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-manage-versions.md
Previously updated : 10/25/2021 Last updated : 01/19/2024 # Use versions to edit and test without impacting staging or production apps
ai-services Luis How To Model Intent Pattern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-model-intent-pattern.md
Previously updated : 01/07/2022 Last updated : 01/19/2024 # How to add patterns to improve prediction accuracy
ai-services Luis How To Use Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-use-dashboard.md
Previously updated : 01/07/2022 Last updated : 01/19/2024 # How to use the Dashboard to improve your app
ai-services Luis Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-language-support.md
Previously updated : 01/18/2022 Last updated : 01/19/2024 # Language and region support for LUIS
ai-services Luis Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-limits.md
Previously updated : 03/21/2022 Last updated : 01/19/2024 # Limits for your LUIS model and keys
ai-services Luis Migration Api V1 To V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-api-v1-to-v2.md
Previously updated : 04/02/2019 Last updated : 01/19/2024 # API v1 to v2 Migration guide for LUIS apps
ai-services Luis Migration Api V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-api-v3.md
Previously updated : 05/28/2021 Last updated : 01/19/2024
ai-services Luis Migration Authoring Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-authoring-entities.md
Previously updated : 05/28/2021 Last updated : 01/19/2024 # Migrate to V3 Authoring entity
ai-services Luis Migration Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-authoring.md
Previously updated : 03/21/2022 Last updated : 01/19/2024 # Migrate to an Azure resource authoring key
ai-services Luis Reference Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-application-settings.md
Previously updated : 05/04/2020 Last updated : 01/19/2024 # App and version settings
ai-services Luis Reference Prebuilt Age https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-age.md
Previously updated : 10/04/2019 Last updated : 01/19/2024 # Age prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Currency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-currency.md
Previously updated : 10/14/2019 Last updated : 01/19/2024 # Currency prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Datetimev2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-datetimev2.md
Previously updated : 04/13/2020 Last updated : 01/19/2024 # DatetimeV2 prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-deprecated.md
Previously updated : 07/29/2019 Last updated : 01/19/2024 # Deprecated prebuilt entities in a LUIS app
ai-services Luis Reference Prebuilt Dimension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-dimension.md
Previously updated : 10/14/2019 Last updated : 01/19/2024 # Dimension prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-domains.md
Previously updated : 04/18/2022 Last updated : 01/19/2024 # Prebuilt domain reference for your LUIS app
ai-services Luis Reference Prebuilt Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-email.md
Previously updated : 09/27/2019 Last updated : 01/19/2024 # Email prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-entities.md
Previously updated : 05/05/2021 Last updated : 01/19/2024 # Entities per culture in your LUIS model
ai-services Luis Reference Prebuilt Geographyv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-geographyV2.md
Previously updated : 10/04/2019 Last updated : 01/19/2024 # GeographyV2 prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Keyphrase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-keyphrase.md
Previously updated : 10/28/2021 Last updated : 01/19/2024 # keyPhrase prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-number.md
Previously updated : 09/27/2019 Last updated : 01/19/2024 # Number prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Ordinal V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-ordinal-v2.md
Previously updated : 09/27/2019 Last updated : 01/19/2024 # Ordinal V2 prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Ordinal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-ordinal.md
Previously updated : 10/14/2019 Last updated : 01/19/2024 # Ordinal prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Percentage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-percentage.md
Previously updated : 09/27/2019 Last updated : 01/19/2024 # Percentage prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-person.md
Previously updated : 05/07/2019 Last updated : 01/19/2024 # PersonName prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Phonenumber https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-phonenumber.md
Previously updated : 09/27/2019 Last updated : 01/19/2024 # Phone number prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-sentiment.md
Previously updated : 10/28/2021 Last updated : 01/19/2024 # Sentiment analysis
ai-services Luis Reference Prebuilt Temperature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-temperature.md
Previously updated : 10/14/2019 Last updated : 01/19/2024 # Temperature prebuilt entity for a LUIS app
ai-services Luis Reference Prebuilt Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-url.md
Previously updated : 10/04/2019 Last updated : 01/19/2024 # URL prebuilt entity for a LUIS app
ai-services Luis Reference Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-regions.md
Previously updated : 02/08/2022 Last updated : 01/19/2024
ai-services Luis Reference Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-response-codes.md
Previously updated : 06/14/2022 Last updated : 01/19/2024
ai-services Luis Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-traffic-manager.md
Previously updated : 08/20/2019 Last updated : 01/19/2024 #Customer intent: As an advanced user, I want to understand how to use multiple LUIS endpoint keys to increase the number of endpoint requests my application receives.
ai-services Luis Tutorial Bing Spellcheck https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-tutorial-bing-spellcheck.md
Previously updated : 01/05/2022 Last updated : 01/19/2024 # Correct misspelled words with Bing Resource
ai-services Luis Tutorial Node Import Utterances Csv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-tutorial-node-import-utterances-csv.md
Previously updated : 05/17/2021 Last updated : 01/19/2024 # Build a LUIS app programmatically using Node.js
ai-services Luis User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-user-privacy.md
Previously updated : 04/08/2020 Last updated : 01/19/2024 # Export and delete your customer data in Language Understanding (LUIS) in Azure AI services
ai-services Migrate From Composite Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/migrate-from-composite-entity.md
Previously updated : 05/04/2020 Last updated : 01/19/2024 # Upgrade composite entity to machine-learning entity
ai-services Reference Entity List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-list.md
Previously updated : 01/05/2022 Last updated : 01/19/2024 # List entity
ai-services Reference Entity Machine Learned Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-machine-learned-entity.md
Previously updated : 01/05/2022 Last updated : 01/19/2024 # Machine-learning entity
ai-services Reference Entity Pattern Any https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-pattern-any.md
Previously updated : 09/29/2019 Last updated : 01/19/2024 # Pattern.any entity
ai-services Reference Entity Regular Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-regular-expression.md
Previously updated : 05/05/2021 Last updated : 01/19/2024 # Regular expression entity
ai-services Reference Entity Simple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-simple.md
Previously updated : 01/07/2022 Last updated : 01/19/2024 # Simple entity
ai-services Reference Pattern Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-pattern-syntax.md
Previously updated : 04/18/2022 Last updated : 01/19/2024 # Pattern syntax
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md
Previously updated : 08/23/2022 Last updated : 01/19/2024
ai-services Schema Change Prediction Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/schema-change-prediction-runtime.md
Previously updated : 04/14/2020 Last updated : 01/19/2024 # Extend app at prediction runtime
ai-services Build Decomposable Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/tutorial/build-decomposable-application.md
ms. Previously updated : 01/10/2022 Last updated : 01/19/2024 # Build a decomposable LUIS application
ai-services What Is Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/what-is-luis.md
keywords: Azure, artificial intelligence, ai, natural language processing, nlp,
Previously updated : 07/19/2022 Last updated : 01/19/2024
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/whats-new.md
Previously updated : 02/24/2022 Last updated : 01/19/2024 # What's new in Language Understanding
ai-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/find-similar-faces.md
Last updated 11/07/2022 - # Find similar faces
ai-services Video Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/video-retrieval.md
Last updated 10/16/2023 - # Do video retrieval using vectorization (version 4.0 preview)
ai-services Reference Video Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/reference-video-search.md
Last updated 11/15/2023 -
ai-services Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/api-reference.md
Previously updated : 05/29/2019 Last updated : 01/18/2024
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/client-libraries.md
zone_pivot_groups: programming-languages-set-conmod Previously updated : 09/28/2021 Last updated : 01/18/2024 ms.devlang: csharp # ms.devlang: csharp, java, python
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/encrypt-data-at-rest.md
Previously updated : 03/13/2020 Last updated : 01/18/2024 #Customer intent: As a user of the Content Moderator service, I want to learn how encryption at rest works.
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/export-delete-data.md
Previously updated : 02/07/2019 Last updated : 01/18/2024
ai-services Image Lists Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/image-lists-quickstart-dotnet.md
Previously updated : 10/24/2019 Last updated : 01/18/2024 #Customer intent: As a C# developer of content-providing software, I want to check images against a custom list of inappropriate images so that I can handle them more efficiently.
ai-services Image Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/image-moderation-api.md
Previously updated : 10/27/2021 Last updated : 01/18/2024
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/language-support.md
Previously updated : 10/27/2021 Last updated : 01/18/2024
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/overview.md
Previously updated : 11/06/2021 Last updated : 01/18/2024 keywords: content moderator, Azure Content Moderator, online moderator, content filtering software, content moderation service, content moderation
ai-services Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/samples-dotnet.md
Previously updated : 10/27/2021 Last updated : 01/18/2024 ms.devlang: csharp
ai-services Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/samples-rest.md
Previously updated : 01/10/2019 Last updated : 01/18/2024
ai-services Term Lists Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/term-lists-quickstart-dotnet.md
Previously updated : 10/24/2019 Last updated : 01/18/2024 #Customer intent: As a C# developer of content-providing software, I want to analyze text content for terms that are particular to my product, so that I can categorize and handle it accordingly.
ai-services Text Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/text-moderation-api.md
Previously updated : 10/27/2021 Last updated : 01/18/2024
ai-services Try Image Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-image-api.md
Previously updated : 01/10/2019 Last updated : 01/18/2024
ai-services Try Image List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-image-list-api.md
Previously updated : 01/10/2019 Last updated : 01/18/2024
ai-services Try Terms List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-terms-list-api.md
Previously updated : 01/10/2019 Last updated : 01/18/2024
ai-services Try Text Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-text-api.md
Previously updated : 10/27/2021 Last updated : 01/18/2024 # Moderate text from the API console
ai-services Video Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/video-moderation-api.md
Previously updated : 10/27/2021 Last updated : 01/18/2024 ms.devlang: csharp
ai-services Harm Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/harm-categories.md
Last updated 04/06/2023
-keywords:
ai-services Jailbreak Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/jailbreak-detection.md
Last updated 11/07/2023
-keywords:
ai-services Use Blocklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/use-blocklist.md
Last updated 07/20/2023
-keywords:
ai-services Quickstart Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-image.md
Last updated 05/08/2023 zone_pivot_groups: programming-languages-content-safety
-keywords:
# QuickStart: Analyze image content
ai-services Quickstart Jailbreak https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-jailbreak.md
- Last updated 11/07/2023
-keywords:
# Quickstart: Detect jailbreak risk (preview)
ai-services Quickstart Protected Material https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-protected-material.md
- Last updated 10/30/2023
-keywords:
# Quickstart: Detect protected material (preview)
ai-services Quickstart Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-text.md
Last updated 07/18/2023 zone_pivot_groups: programming-languages-content-safety
-keywords:
# QuickStart: Analyze text content
ai-services Compare Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/concepts/compare-alternatives.md
Last updated 12/04/2023 - # Compare Custom Vision to alternative Azure services
ai-services Choose Model Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/choose-model-feature.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 01/19/2024
The following decision charts highlight the features of each **Document Intellig
|**Structured, semi-structured, and unstructured documents**.|&#9679; Structured &rightarrow; surveys</br>&#9679; Semi-structured &rightarrow; invoices</br>&#9679; Unstructured &rightarrow; letters| [**Custom neural model**](concept-custom-neural.md)|
|**A collection of several models each trained on similar-type documents.** |&#9679; Supply purchase orders</br>&#9679; Equipment purchase orders</br>&#9679; Furniture purchase orders</br> **All composed into a single model**.| [**Composed custom model**](concept-composed-models.md)|
+## Custom classification model
+
+| Training set | Example documents | Your best solution |
+| --|--|-|
+|**At least two different types of documents**. |Forms, letters, or documents | [**Custom classification model**](./concept-custom-classifier.md)|
+## Next steps
+* [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 01/19/2024 monikerRange: '>=doc-intel-3.1.0'
monikerRange: '>=doc-intel-3.1.0'
:::moniker range=">=doc-intel-3.1.0"
-Document Intelligence supports more sophisticated and modular analysis capabilities. Use the add-on features to extend the results to include more features extracted from your documents. Some add-on features incur an extra cost. These optional features can be enabled and disabled depending on the scenario of the document extraction. The following add-on capabilities are available for `2023-07-31 (GA)` and later releases:
+Document Intelligence supports more sophisticated and modular analysis capabilities. Use the add-on features to extend the results to include more features extracted from your documents. Some add-on features incur an extra cost. These optional features can be enabled and disabled depending on the scenario of the document extraction. To enable a feature, add the associated feature name to the `features` query string property. You can enable more than one add-on feature on a request by providing a comma-separated list of features. The following add-on capabilities are available for `2023-07-31 (GA)` and later releases.
* [`ocrHighResolution`](#high-resolution-extraction)
Document Intelligence supports more sophisticated and modular analysis capabilit
> [!NOTE] >
-> Not all add-on capabilities are supported by all models. For more information, *see* [model data extraction](concept-model-overview.md#model-data-extraction).
+> Not all add-on capabilities are supported by all models. For more information, *see* [model data extraction](concept-model-overview.md#analysis-features).
The following add-on capabilities are available for `2023-10-31-preview` and later releases:
* [`keyValuePairs`](#key-value-pairs)
+* [`queryFields`](#query-fields)
> [!NOTE]
The following add-on capability is available for `2023-10-31-preview` and later
::: moniker-end
+|Add-on Capability| Add-On/Free|[2023-10-31-preview](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
+|--|--|--|--|--|--|
+|Font property extraction|Add-On| ✔️| ✔️| n/a| n/a|
+|Formula extraction|Add-On| ✔️| ✔️| n/a| n/a|
+|High resolution extraction|Add-On| ✔️| ✔️| n/a| n/a|
+|Barcode extraction|Free| ✔️| ✔️| n/a| n/a|
+|Language detection|Free| ✔️| ✔️| n/a| n/a|
+|Key value pairs|Free| ✔️|n/a|n/a| n/a|
+|Query fields|Add-On*| ✔️|n/a|n/a| n/a|
++
+Add-On* - Query fields are priced differently than the other add-on features. See [pricing](https://azure.microsoft.com/pricing/details/ai-document-intelligence/) for details.
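The table above lists the available add-on features. As a minimal sketch of how a feature is enabled on a request, the Python snippet below submits a document to the prebuilt-layout model with two add-on features requested as a comma-separated `features` list, then polls the returned Operation-Location URL for the result; the resource name, key, document URL, and feature choice are placeholder assumptions, not values from the article. The per-feature REST URLs in the sections that follow use the same endpoint shape.

```python
# Rough sketch only: resource name, key, and document URL are placeholders.
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
analyze_url = (
    f"{endpoint}/documentintelligence/documentModels/prebuilt-layout:analyze"
    "?api-version=2023-10-31-preview&features=formulas,styleFont"  # comma-separated add-ons
)

# Submit the document; the service answers 202 Accepted with an Operation-Location header.
response = requests.post(
    analyze_url,
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"urlSource": "https://example.com/sample-drawing.pdf"},  # placeholder document URL
)
response.raise_for_status()
operation_url = response.headers["Operation-Location"]

# Poll the long-running operation until it completes, then read analyzeResult.
while True:
    poll = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if poll.get("status") in ("succeeded", "failed"):
        break
    time.sleep(2)

analyze_result = poll.get("analyzeResult", {})
print(analyze_result.get("content", "")[:200])
```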
+ ## High resolution extraction The task of recognizing small text from large-size documents, like engineering drawings, is a challenge. Often the text is mixed with other graphical elements and has varying fonts, sizes and orientations. Moreover, the text can be broken into separate parts or connected with other symbols. Document Intelligence now supports extracting content from these types of documents with the `ocr.highResolution` capability. You get improved quality of content extraction from A1/A2/A3 documents by enabling this add-on capability.
+### REST API
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-10-31-preview&features=ocrHighResolution
+```
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=ocrHighResolution
+```
+ ## Formula extraction The `ocr.formula` capability extracts all identified formulas, such as mathematical equations, in the `formulas` collection as a top level object under `content`. Inside `content`, detected formulas are represented as `:formula:`. Each entry in this collection represents a formula that includes the formula type as `inline` or `display`, and its LaTeX representation as `value` along with its `polygon` coordinates. Initially, formulas appear at the end of each page.
The `ocr.formula` capability extracts all identified formulas, such as mathemati
] ```
+ ### REST API
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-10-31-preview&features=formulas
+```
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=formulas
+```
+ ## Font property extraction The `ocr.font` capability extracts all font properties of text extracted in the `styles` collection as a top-level object under `content`. Each style object specifies a single font property, the text span it applies to, and its corresponding confidence score. The existing style property is extended with more font properties such as `similarFontFamily` for the font of the text, `fontStyle` for styles such as italic and normal, `fontWeight` for bold or normal, `color` for color of the text, and `backgroundColor` for color of the text bounding box.
The `ocr.font` capability extracts all font properties of text extracted in the
] ```
+### REST API
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-10-31-preview&features=styleFont
+```
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=styleFont
+```
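If the extracted styles land in a `styles` collection on `analyzeResult` (an assumption consistent with the description above), a short continuation of the earlier Python sketch could list them:

```python
# Continues the earlier sketch; assumes the request included features=styleFont.
for style in analyze_result.get("styles", []):
    # Each style object carries one font property (for example similarFontFamily, fontStyle,
    # fontWeight, color, or backgroundColor) plus the spans it applies to and a confidence.
    properties = {k: v for k, v in style.items() if k not in ("spans", "confidence")}
    print(properties, style.get("confidence"))
```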
+ ## Barcode property extraction The `ocr.barcode` capability extracts all identified barcodes in the `barcodes` collection as a top-level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. The `confidence` is hard-coded as 1.
The `ocr.barcode` capability extracts all identified barcodes in the `barcodes`
| `ITF` |:::image type="content" source="media/barcodes/interleaved-two-five.png" alt-text="Screenshot of the interleaved-two-of-five barcode (ITF).":::|
| `Data Matrix` |:::image type="content" source="media/barcodes/datamatrix.gif" alt-text="Screenshot of the Data Matrix.":::|
+### REST API
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-10-31-preview&features=barcodes
+```
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=barcodes
+```
+ ## Language detection
-It predicts the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
+Adding the `languages` feature to the `analyzeResult` request predicts the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
```json
"languages": [
    ...
]
```
+### REST API
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-10-31-preview&features=languages
+```
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=languages
+```
+ :::moniker range="doc-intel-4.0.0"

## Key-value pairs
+In earlier API versions, the prebuilt-document model extracted key-value pairs from forms and documents. With the addition of the `keyValuePairs` feature to prebuilt-layout, the layout model now produces the same results.
+ Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field. In an unstructured document, they could be the date a contract was executed on, based on the text in a paragraph.

The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures. Keys can also exist in isolation when the model detects that a key exists with no associated value, or when processing optional fields. For example, a middle name field can be left blank on a form in some instances.

Key-value pairs are spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key is either customer or user (based on context).
+### REST API
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-10-31-preview&features=keyValuePairs
+```
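
As a rough illustration of consuming the `keyValuePairs` collection, the helper below prints each pair from a completed `analyzeResult` payload; it reads the value defensively because a key can be returned without an associated value.

```python
# Minimal sketch: print key-value pairs from a completed analyzeResult payload
# produced with features=keyValuePairs. A pair can have a key and no value.
def print_key_value_pairs(analyze_result: dict) -> None:
    for pair in analyze_result.get("keyValuePairs", []):
        key_text = pair["key"]["content"]
        value_text = pair.get("value", {}).get("content", "")
        print(f"{key_text!r}: {value_text!r} (confidence {pair['confidence']})")
```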
+ ## Query Fields
+Query fields are an add-on capability to extend the schema extracted from any prebuilt model or define a specific key name when the key name is variable. To use query fields, set the features to `queryFields` and provide a comma-separated list of field names in the `queryFields` property.
+ * Document Intelligence now supports query field extractions. With query field extraction, you can add fields to the extraction process using a query request without the need for added training.
* Use query fields when you need to extend the schema of a prebuilt or custom model, or need to extract a few fields with the output of layout.
* In addition to the query fields, the response includes text, tables, selection marks, and other relevant data.
+### REST API
+
+```REST
+https://{your resource}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-10-31-preview&features=queryFields&queryFields=TERMS
+```
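
A short sketch of how the request and response fit together is shown below. The field names (`TERMS`, `PaymentDate`) are illustrative, the submit-and-poll steps are the same as in the earlier add-on sketch, and the helper assumes query field values are returned on the analyzed documents keyed by the requested names.

```python
# Minimal sketch: build an analyze URL that requests two query fields, and read
# the extracted values back from a completed analyzeResult payload.
def query_fields_url(endpoint: str) -> str:
    return (
        f"{endpoint}/documentintelligence/documentModels/prebuilt-layout:analyze"
        "?api-version=2023-10-31-preview&features=queryFields&queryFields=TERMS,PaymentDate"
    )

def print_query_fields(analyze_result: dict) -> None:
    # Query field values come back on the analyzed documents, keyed by the requested names.
    for document in analyze_result.get("documents", []):
        for name, field in document.get("fields", {}).items():
            print(name, field.get("content"))
```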
+ :::moniker-end

## Next steps
> [!div class="nextstepaction"]
> Learn more:
> [**Read model**](concept-read.md) [**Layout model**](concept-layout.md).
+> [!div class="nextstepaction"]
+> SDK samples:
+> [**python**](/python/api/overview/azure/ai-documentintelligence-readme).
+
ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-composed-models.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 01/19/2024
**Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. When a document is submitted for analysis using a composed model, the service performs a classification to decide which custom model best represents the submitted document.
-With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
+With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you train several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
* ```Custom form``` and ```Custom template``` models can be composed together into a single composed model.
With the introduction of [**custom classification models**](./concept-custom-cla
### Composed model compatibility
-|Custom model type|Models trained with v2.1 and v2.0 | Custom template models v3.0 |Custom neural models v3.0 (preview) |Custom neural models 3.0 (GA)|
+|Custom model type|Models trained with v2.1 and v2.0 | Custom template models v3.0 |Custom neural models v3.0|Custom neural models v3.1|
|--|--|--|--|--|
|**Models trained with version 2.1 and v2.0** |Supported|Supported|Not Supported|Not Supported|
-|**Custom template models v3.0** |Supported|Supported|Not Supported|NotSupported|
-|**Custom template models v3.0 (GA)** |Not Supported|Not Supported|Supported|Not Supported|
-|**Custom neural models v3.0 (preview)**|Not Supported|Not Supported|Supported|Not Supported|
-|**Custom Neural models v3.0 (GA)**|Not Supported|Not Supported|Not Supported|Supported|
+|**Custom template models v3.0** |Supported|Supported|Not Supported|Not Supported|
+|**Custom template models v3.1** |Not Supported|Not Supported|Not Supported|Not Supported|
+|**Custom Neural models v3.0**|Not Supported|Not Supported|Supported|Supported|
+|**Custom Neural models v3.1**|Not Supported|Not Supported|Supported|Supported|
+ * To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition ensures that the v2.1 model can be composed with other models.
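
As a rough sketch of how composition is requested over REST: the component model IDs below are placeholders for models you already trained, and the request body shape follows the v3.1 compose operation, so check the REST reference for your API version.

```python
# Minimal sketch: compose existing custom models into one composed model.
# The model IDs and description are placeholders; the compose operation is
# asynchronous, so poll the returned Operation-Location header to completion.
import os

import requests

endpoint = os.environ["DI_ENDPOINT"]
key = os.environ["DI_KEY"]

compose_url = f"{endpoint}/formrecognizer/documentModels:compose?api-version=2023-07-31"
body = {
    "modelId": "purchase-orders-composed",
    "description": "Routes purchase orders to the best matching component model",
    "componentModels": [
        {"modelId": "supply-po"},
        {"modelId": "equipment-po"},
        {"modelId": "furniture-po"},
    ],
}
response = requests.post(
    compose_url,
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print(response.headers.get("Operation-Location"))
```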
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
Previously updated : 11/21/2023 Last updated : 01/19/2024 - references_regions
A custom classification model can replace [a composed model](concept-composed-mo
## Language support

Classification models currently only support English language documents.
+Classification models can now be trained on documents of different languages. See [supported languages](language-support-custom.md) for a complete list.
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
+* For best results, provide five clear photos or high-quality scans per document type.
* Supported file formats:
Classification models currently only support English language documents.
|Prebuilt | ✔ | ✔ | |
|Custom | ✔ | ✔ | |
- &#x2731; Microsoft Office files are currently not supported for other models or versions.
+
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
Classification models currently only support English language documents.
* For custom classification model training, the total size of training data is `1GB` with a maximum of 10,000 pages.
+## Document splitting
+
+When you have more than one document in a file, the classifier can identify the different document types contained within the input file. The classifier response contains the page ranges for each of the identified document types contained within a file. This response can include multiple instances of the same document type.
+
+The analyze operation now includes a `splitMode` property that gives you granular control over the splitting behavior.
+
+* To treat the entire input file as a single document for classification, set the `splitMode` to `none`. When you do so, the service returns just one class for the entire input file.
* To classify each page of the input file, set the `splitMode` to `perPage`. The service attempts to classify each page as an individual document.
* Set the `splitMode` to `auto` and the service identifies the documents and associated page ranges (see the sketch after this list).
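
The sketch below shows one way to request splitting on a classifier analyze call. It assumes the split behavior is exposed as a query parameter on the request, so verify the exact parameter name and values against the REST reference for your API version; the classifier ID and document URL are placeholders.

```python
# Minimal sketch: classify a multi-document file and ask the service to split it.
# The "split" query parameter name is an assumption to verify against the REST
# reference for your API version; "your-classifier" and the URL are placeholders.
import os

import requests

endpoint = os.environ["DI_ENDPOINT"]
key = os.environ["DI_KEY"]

classify_url = (
    f"{endpoint}/documentintelligence/documentClassifiers/your-classifier:analyze"
    "?api-version=2023-10-31-preview&split=auto"
)
response = requests.post(
    classify_url,
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"urlSource": "https://example.com/mixed-documents.pdf"},
)
response.raise_for_status()
# Poll the Operation-Location header as usual; each entry in analyzeResult["documents"]
# then reports a docType and the page range it covers.
print(response.headers.get("Operation-Location"))
```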
+ ## Best practices

Custom classification models require a minimum of five samples per class to train. If the classes are similar, adding extra training samples improves model accuracy.
+The classifier attempts to assign each document to one of the classes. If you expect the model to see document types that aren't part of the training dataset, plan to set a threshold on the classification score, or add a few representative samples of those document types to an ```"other"``` class. Adding an ```"other"``` class ensures that unneeded documents don't impact your classifier quality.
+ ## Training a model
-Custom classification models are supported by **v4.0:2023-10-31-preview** and **v3.1:2023-07-31 (GA)** APIs. [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
+Custom classification models are supported by **v4.0:2023-10-31-preview** and **v3.1:2023-07-31 (GA)** APIs. [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier. Follow the [how to guide](how-to-guides/build-a-custom-classifier.md) to get started.
When using the REST API, if you organize your documents by folders, you can use the ```azureBlobSource``` property of the request to train a classification model.
https://{endpoint}/formrecognizer/documentClassifiers:build?api-version=2023-07-
"car-maint": { "azureBlobSource": { "containerUrl": "SAS URL to container",
- "prefix": "sample1/car-maint/"
+ "prefix": "{path to dataset root}/car-maint/"
} }, "cc-auth": { "azureBlobSource": { "containerUrl": "SAS URL to container",
- "prefix": "sample1/cc-auth/"
+ "prefix": "{path to dataset root}/cc-auth/"
} }, "deed-of-trust": { "azureBlobSource": { "containerUrl": "SAS URL to container",
- "prefix": "sample1/deed-of-trust/"
+ "prefix": "{path to dataset root}/deed-of-trust/"
} } }
Alternatively, if you have a flat list of files or only plan to use a few select
"car-maint": { "azureBlobFileListSource": { "containerUrl": "SAS URL to container",
- "fileList": "sample1/car-maint.jsonl"
+ "fileList": "{path to dataset root}/car-maint.jsonl"
} }, "cc-auth": { "azureBlobFileListSource": { "containerUrl": "SAS URL to container",
- "fileList": "sample1/cc-auth.jsonl"
+ "fileList": "{path to dataset root}/cc-auth.jsonl"
} }, "deed-of-trust": { "azureBlobFileListSource": { "containerUrl": "SAS URL to container",
- "fileList": "sample1/deed-of-trust.jsonl"
+ "fileList": "{path to dataset root}/deed-of-trust.jsonl"
} } }
Alternatively, if you have a flat list of files or only plan to use a few select
```
-File list `car-maint.jsonl` contains the following files.
+As an example, the file list `car-maint.jsonl` contains the following files.
```json
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Adatum.pdf"}
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Fincher.pdf"}
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Lamna.pdf"}
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Liberty.pdf"}
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Trey.pdf"}
+{"file":"classifier/car-maint/Commercial Motor Vehicle - Adatum.pdf"}
+{"file":"classifier/car-maint/Commercial Motor Vehicle - Fincher.pdf"}
+{"file":"classifier/car-maint/Commercial Motor Vehicle - Lamna.pdf"}
+{"file":"classifier/car-maint/Commercial Motor Vehicle - Liberty.pdf"}
+{"file":"classifier/car-maint/Commercial Motor Vehicle - Trey.pdf"}
```

## Model response
Analyze an input file with the document classification model
:::moniker range="doc-intel-4.0.0"

```rest
-https://{endpoint}/documentintelligence/documentClassifiers:build?api-version=2023-10-31-preview
+https://{endpoint}/documentintelligence/documentClassifiers/{classifier}:analyze?api-version=2023-10-31-preview
```

:::moniker-end
ai-services Concept Custom Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-label.md
Previously updated : 11/21/2023 Last updated : 01/19/2024 - references_regions
Tabular fields support **cross page tables** by default. To label a table that s
Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
+> [!NOTE]
+> Table fields, when labeled, are extracted as part of the `documents` section of the response. The response also contains a `tables` section with the tables extracted from the document by the layout model. If you labeled a field as a table, look for the field in the `documents` section of the response.
+ ## Labeling guidelines

* **Labeling values is required.** Don't include the surrounding text. For example, when labeling a checkbox, name the field to indicate the check box selection, for example ```selectionYes``` and ```selectionNo```, rather than labeling the yes or no text in the document.
ai-services Concept Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 01/19/2024 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1 (preview)**](?view=doc-intel-3.1.0&preserve-view=true) ::: moniker-end
-[Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service into your applications. Use the [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md) to get started analyzing documents with pretrained models. Build custom template models and reference the models in your applications using the [Python SDK v3.0](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and other quickstarts.
+[Document Intelligence Studio](https://documentintelligence.ai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service into your applications. Use the Document Intelligence Studio to:
+
+* Learn more about the different capabilities in Document Intelligence.
+* Use your Document Intelligence resource to test models on sample documents or upload your own documents.
+* Experiment with different add-on and preview features to adapt the output to your needs.
+* Train custom classification models to classify documents.
+* Train custom extraction models to extract fields from documents.
+* Get sample code for the language-specific SDKs to integrate into your applications.
+
+Use the [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md) to get started analyzing documents with document analysis or prebuilt models. Build custom models and reference the models in your applications using one of the [language specific SDKs](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and other quickstarts.
The following image shows the landing page for Document Intelligence Studio. :::image border="true" type="content" source="media/studio/welcome-to-studio.png" alt-text="Document Intelligence Studio Homepage":::
+## Getting started
+
+If you're visiting the Studio for the first time, follow the [getting started guide](studio-overview.md#get-started-using-document-intelligence-studio) to set up the Studio for use.
+ ## Analyze options

* Document Intelligence supports sophisticated analysis capabilities. The Studio allows one entry point (Analyze options button) for configuring the add-on capabilities with ease.
The following image shows the landing page for Document Intelligence Studio.
## Document Intelligence model support
-* **Read**: Try out Document Intelligence's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://formrecognizer.appliedai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true).
+* **Read**: Try out Document Intelligence's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://documentintelligence.ai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true).
-* **Layout**: Try out Document Intelligence's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://formrecognizer.appliedai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model).
+* **Layout**: Try out Document Intelligence's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://documentintelligence.ai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model).
-* **Prebuilt models**: Document Intelligence's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model).
+* **Prebuilt models**: Document Intelligence's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://documentintelligence.ai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model).
-* **Custom extraction models**: Document Intelligence's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects). Use the help wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more.
+* **Custom extraction models**: Document Intelligence's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://documentintelligence.ai.azure.com/studio/custommodel/projects). Use the help wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more.
* **Custom classification models**: Document classification is a new scenario supported by Document Intelligence. The document classifier API supports classification and splitting scenarios. Train a classification model to identify the different types of documents your application supports. The input file for the classification model can contain multiple documents, and the model classifies each document within an associated page range. See [custom classification models](concept-custom-classifier.md) to learn more.
The following image shows the landing page for Document Intelligence Studio.
## Next steps
-* Visit the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) to begin using the models and features.
+* Visit the [Document Intelligence Studio](https://documentintelligence.ai.azure.com/) to begin using the models and features.
* Get started with our [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md).
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 01/19/2024
The following table shows the available models for each current preview and stable API:
-|Model|[2023-10-31-preview](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)|[2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
-|-|--||--||
-|[Add-on capabilities](concept-add-on-capabilities.md) | ✔️| ✔️| n/a| n/a|
-|[Business card](concept-business-card.md) | deprecated|✔️|✔️|✔️ |
-|[Contract](concept-contract.md) | ✔️| ✔️| n/a| n/a|
-|[Custom classifier](concept-custom-classifier.md) | ✔️| ✔️| n/a| n/a|
-|[Custom composed](concept-composed-models.md) | ✔️| ✔️| ✔️| ✔️|
-|[Custom neural](concept-custom-neural.md) | ✔️| ✔️| ✔️| n/a|
-|[Custom template](concept-custom-template.md) | ✔️| ✔️| ✔️| ✔️|
-|[General document](concept-general-document.md) | deprecated| ✔️| ✔️| n/a|
-|[Health insurance card](concept-health-insurance-card.md)| ✔️| ✔️| ✔️| n/a|
-|[ID document](concept-id-document.md) | ✔️| ✔️| ✔️| ✔️|
-|[Invoice](concept-invoice.md) | ✔️| ✔️| ✔️| ✔️|
-|[Layout](concept-layout.md) | ✔️| ✔️| ✔️| ✔️|
-|[Read](concept-read.md) | ✔️| ✔️| ✔️| n/a|
-|[Receipt](concept-receipt.md) | ✔️| ✔️| ✔️| ✔️|
-|[US 1098 Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
-|[US 1098-E Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
-|[US 1098-T Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
-|[US 1099 Tax](concept-tax-document.md) | ✔️| n/a| n/a| n/a|
-|[US W2 Tax](concept-tax-document.md) | ✔️| ✔️| ✔️| n/a|
+|**Model Type**| **Model**|[2023-10-31-preview](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)|[2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
+|--|--|--|--|--|--|
+|Document analysis models|[Read](concept-read.md) | ✔️| ✔️| ✔️| n/a|
+|Document analysis models|[Layout](concept-layout.md) | ✔️| ✔️| ✔️| ✔️|
+|Document analysis models|[General document](concept-general-document.md) |moved to layout| ✔️| ✔️| n/a|
+|Prebuilt models|[Business card](concept-business-card.md) | deprecated|✔️|✔️|✔️ |
+|Prebuilt models|[Contract](concept-contract.md) | ✔️| ✔️| n/a| n/a|
+|Prebuilt models|[Health insurance card](concept-health-insurance-card.md)| ✔️| ✔️| ✔️| n/a|
+|Prebuilt models|[ID document](concept-id-document.md) | ✔️| ✔️| ✔️| ✔️|
+|Prebuilt models|[Invoice](concept-invoice.md) | ✔️| ✔️| ✔️| ✔️|
+|Prebuilt models|[Receipt](concept-receipt.md) | ✔️| ✔️| ✔️| ✔️|
+|Prebuilt models|[US 1098 Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
+|Prebuilt models|[US 1098-E Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
+|Prebuilt models|[US 1098-T Tax](concept-tax-document.md) | ✔️| ✔️| n/a| n/a|
+|Prebuilt models|[US 1099 Tax](concept-tax-document.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[US W2 Tax](concept-tax-document.md) | ✔️| ✔️| ✔️| n/a|
+|Prebuilt models|[Add-on capabilities](concept-add-on-capabilities.md) | ✔️| ✔️| n/a| n/a|
+|Custom models|[Custom classifier](concept-custom-classifier.md) | ✔️| ✔️| n/a| n/a|
+|Custom models|[Custom neural](concept-custom-neural.md) | ✔️| ✔️| ✔️| n/a|
+|Custom models|[Custom template](concept-custom-template.md) | ✔️| ✔️| ✔️| ✔️|
+|Custom models|[Custom composed](concept-composed-models.md) | ✔️| ✔️| ✔️| ✔️|
+
+|**Add-on Capability**| **Add-On/Free**|[2023-10-31-preview](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
+|--|--|--|--|--|--|
+|Font property extraction|Add-On| ✔️| ✔️| n/a| n/a|
+|Formula extraction|Add-On| ✔️| ✔️| n/a| n/a|
+|High resolution extraction|Add-On| ✔️| ✔️| n/a| n/a|
+|Barcode extraction|Free| ✔️| ✔️| n/a| n/a|
+|Language detection|Free| ✔️| ✔️| n/a| n/a|
+|Key value pairs|Free| ✔️|n/a|n/a| n/a|
+|Query fields|Add-On*| ✔️|n/a|n/a| n/a|
++
+Add-On* - Query fields are priced differently than the other add-on features. See [pricing](https://azure.microsoft.com/pricing/details/ai-document-intelligence/) for details.
::: moniker range=">=doc-intel-3.0.0"
For all models, except Business card model, Document Intelligence now supports a
* [`barcodes`](concept-add-on-capabilities.md#barcode-property-extraction) * [`languages`](concept-add-on-capabilities.md#language-detection) * [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs) (2023-10-31-preview)
-* [`queryFields`](concept-add-on-capabilities.md#query-fields) (2023-31-preview)
+* [`queryFields`](concept-add-on-capabilities.md#query-fields) (2023-10-31-preview). `Not available with the US.Tax models`
## Analysis features
-|Model ID|Content Extraction|Query fields|Paragraphs|Paragraph Roles|Selection Marks|Tables|Key-Value Pairs|Languages|Barcodes|Document Analysis|Formulas*|Style Font*|High Resolution*|
-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
-|prebuilt-read|✓| | | | | |O|O| |O|O|O|
-|prebuilt-layout|✓|✓|✓|✓|✓|✓| |O|O| |O|O|O|
-|prebuilt-document|✓|✓|✓|✓|✓|✓|✓|O|O| |O|O|O|
-|prebuilt-businessCard|✓|✓| | | | | | | |✓| | | |
-|prebuilt-idDocument|✓|✓|| | | | |O|O|✓|O|O|O|
-|prebuilt-invoice|✓|✓| | |✓|✓|O|O|O|✓|O|O|O|
-|prebuilt-receipt|✓|✓| | | | | |O|O|✓|O|O|O|
-|prebuilt-healthInsuranceCard.us|✓|✓| | | | | |O|O|✓|O|O|O|
-|prebuilt-tax.us.w2|✓|✓| | |✓| | |O|O|✓|O|O|O|
-|prebuilt-tax.us.1098|✓|✓| | |✓| | |O|O|✓|O|O|O|
-|prebuilt-tax.us.1098E|✓|✓| | |✓| | |O|O|✓|O|O|O|
-|prebuilt-tax.us.1098T|✓|✓| | |✓| | |O|O|✓|O|O|O|
-|prebuilt-tax.us.1099(variations)|✓|✓| | |✓| | |O|O|✓|O|O|O|
-|prebuilt-contract|✓|✓|✓|✓| | |O|O|✓|O|O|O|
-|{ customModelName }|✓|✓|✓|✓|✓|✓| |O|O|✓|O|O|O|
-
-✓ - Enabled</br>
-O - Optional</br>
-\* - Premium features incur extra costs
### Read OCR
ai-services Concept Query Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-query-fields.md
Title: Query field extraction - Document Intelligence (formerly Form Recognizer)
-description: Use Document Intelligence to extract query field data.
+description: Use Document Intelligence query fields to extend model schema.
- ignite-2023 Previously updated : 11/21/2023 Last updated : 01/19/2024 monikerRange: 'doc-intel-4.0.0'
monikerRange: 'doc-intel-4.0.0'
# Document Intelligence query field extraction
-**Document Intelligence now supports query field extractions using Azure OpenAI capabilities. With query field extraction, you can add fields to the extraction process using a query request without the need for added training.
-
+**Document Intelligence now supports query fields to extend the schema of any prebuilt model to extract the specific fields you need. Query fields can also be added to layout to extract fields in addition to structure from forms or documents.
> [!NOTE] >
-> Document Intelligence Studio query field extraction is currently available with the general document model starting with the `2023-07-31 (GA)` API and later releases.
+> Document Intelligence Studio query field extraction is currently available with layout and prebuilt models, excluding the US Tax prebuilt models.
+
+## Query fields or key value pairs
+
+Query fields and key-value pairs perform similar functions, but there are a few distinctions to be aware of when deciding which feature to choose.
+
+* Key value pairs are only available with layout and invoice models. If you're looking to extend the schema for a prebuilt model, use query fields.
+
+* If you don't know the specific fields to be extracted, or the number of fields is large (greater than 20), key-value pairs might be a better solution.
-## Select query fields
+* Key-value pairs extract the keys and values as they exist in the form or document, so you need to plan for any key variations, for example, `First Name` or `Given Name`. With query fields, you define the key, and the model only extracts the corresponding value.
+
+* Use query fields when the value you require can't be described as a key value pair in the document. For example, the agreement date of a contract.
For query field extraction, specify the fields you want to extract and Document Intelligence analyzes the document accordingly. Here's an example:
-* If you're processing a contract in the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/document), use the `2023-07-31` version:
+* If you're processing a contract in the [Document Intelligence Studio](https://documentintelligence.ai.azure.com/studio/layout), use the `2023-10-31-preview` or later API version:
:::image type="content" source="media/studio/query-fields.png" alt-text="Screenshot of the query fields button in Document Intelligence Studio.":::
-* You can pass a list of field labels like `Party1`, `Party2`, `TermsOfUse`, `PaymentTerms`, `PaymentDate`, and `TermEndDate`" as part of the analyze document request.
+* You can pass a list of field labels like `Party1`, `Party2`, `TermsOfUse`, `PaymentTerms`, `PaymentDate`, and `TermEndDate` as part of the `AnalyzeDocument` request.
:::image type="content" source="media/studio/query-field-select.png" alt-text="Screenshot of query fields selection window in Document Intelligence Studio.":::
-* Document Intelligence utilizes the capabilities of both [**Azure OpenAI Service**](../../ai-services/openai/overview.md) and extraction models to analyze and extract the field data and return the values in a structured JSON output.
-
-* In addition to the query fields, the response includes text, tables, selection marks, general document key-value pairs, and other relevant data.
+* In addition to the query fields, the response includes the model output. For a list of features or schema extracted by each model, see [model analysis features](concept-model-overview.md#analysis-features).
## Query fields REST API request

Use the query fields feature with the [general document model](concept-general-document.md) to add fields to the extraction process without having to train a custom model:

```http
-POST https://{endpoint}/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2023-07-31&queryFields=Party1, Party2, PaymentDate HTTP/1.1
+POST https://{endpoint}/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-10-31-preview&features=queryFields&queryFields=Terms,PaymentDate HTTP/1.1
Host: *.cognitiveservices.azure.com
Content-Type: application/json
Ocp-Apim-Subscription-Key:
Ocp-Apim-Subscription-Key:
> [!div class="nextstepaction"]
> [Try the Document Intelligence Studio quickstart](./quickstarts/try-document-intelligence-studio.md)
+> [!div class="nextstepaction"]
+> [Learn about other add-on capabilities](concept-add-on-capabilities.md)
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
Try extracting text from forms and documents using the Document Intelligence Stu
### Microsoft Office and HTML text extraction
-Use the parameter `api-version=2023-07-31` when using the REST API or the corresponding SDKs of that API version to extract text from Microsoft Word, Excel, PowerPoint, and HTML files. The following illustration shows extraction of the digital text and text in the Word document by running OCR on the images. Text from embedded images isn't included in the extraction.
-
-> [!NOTE]
->
-> * [Add-on capabilities](concept-add-on-capabilities.md) are not supported for Microsoft Word, Excel, PowerPoint, and HTML file formats.
--
-The page units in the model output are computed as shown:
+When analyzing Microsoft Office and HTML files, the page units in the model output are computed as shown:
|**File format** | **Computed page unit** | **Total pages** |
| -- | -- | -- |
ai-services Concept Retrieval Augumented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-retrieval-augumented-generation.md
Markdown is a structured and formatted markup language and a popular input for e
* **Scalability and AI quality**. The Layout model is highly scalable in Optical Character Recognition (OCR), table extraction, and [document structure analysis](concept-layout.md#document-layout-analysis). It supports [309 printed and 12 handwritten languages](language-support-ocr.md#model-id-prebuilt-layout), further ensuring high-quality results driven by AI capabilities.
-* **Large learning model (LLM) compatibility**. The Layout model Markdown formatted output is LLM friendly and facilitates seamless integration into your workflows. You can turn any table in a document into Markdown format and avoid extensive effort parsing the documents for greater LLM understanding.
+* **Large language model (LLM) compatibility**. The Layout model Markdown formatted output is LLM friendly and facilitates seamless integration into your workflows. You can turn any table in a document into Markdown format and avoid extensive effort parsing the documents for greater LLM understanding.
**Text image processed with Document Intelligence Studio and output to MarkDown using Layout model**
ai-services Language Support Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-ocr.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 01/19/2024 # Language support: document analysis
Azure AI Document Intelligence models provide multilingual document processing s
##### Model ID: **prebuilt-read**
-### [**Read: handwritten text**](#tab/read-hand)
--
-The following table lists read model language support for extracting and analyzing **handwritten** text.</br>
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`| Russian (preview) | `ru` |
-|Thai (preview) | `th` | Arabic (preview) | `ar` |
--
-The following table lists read model language support for extracting and analyzing **handwritten** text.</br>
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`|
-
-The following table lists read model language support for extracting and analyzing **handwritten** text.</br>
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`|
-- ### [**Read: printed text**](#tab/read-print) :::moniker range=">=doc-intel-3.1.0"
The following table lists read model language support for extracting and analyzi
:::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Abaza|abq|
- |Abkhazian|ab|
- |Achinese|ace|
- |Acoli|ach|
- |Adangme|ada|
- |Adyghe|ady|
- |Afar|aa|
- |Afrikaans|af|
- |Akan|ak|
- |Albanian|sq|
- |Algonquin|alq|
- |Angika (Devanagari)|anp|
- |Arabic|ar|
- |Asturian|ast|
- |Asu (Tanzania)|asa|
- |Avaric|av|
- |Awadhi-Hindi (Devanagari)|awa|
- |Aymara|ay|
- |Azerbaijani (Latin)|az|
- |Bafia|ksf|
- |Bagheli|bfy|
- |Bambara|bm|
- |Bashkir|ba|
- |Basque|eu|
- |Belarusian (Cyrillic)|be, be-cyrl|
- |Belarusian (Latin)|be, be-latn|
- |Bemba (Zambia)|bem|
- |Bena (Tanzania)|bez|
- |Bhojpuri-Hindi (Devanagari)|bho|
- |Bikol|bik|
- |Bini|bin|
- |Bislama|bi|
- |Bodo (Devanagari)|brx|
- |Bosnian (Latin)|bs|
- |Brajbha|bra|
- |Breton|br|
- |Bulgarian|bg|
- |Bundeli|bns|
- |Buryat (Cyrillic)|bua|
- |Catalan|ca|
- |Cebuano|ceb|
- |Chamling|rab|
- |Chamorro|ch|
- |Chechen|ce|
- |Chhattisgarhi (Devanagari)|hne|
- |Chiga|cgg|
- |Chinese Simplified|zh-Hans|
- |Chinese Traditional|zh-Hant|
- |Choctaw|cho|
- |Chukot|ckt|
- |Chuvash|cv|
- |Cornish|kw|
- |Corsican|co|
- |Cree|cr|
- |Creek|mus|
- |Crimean Tatar (Latin)|crh|
- |Croatian|hr|
- |Crow|cro|
- |Czech|cs|
- |Danish|da|
- |Dargwa|dar|
- |Dari|prs|
- |Dhimal (Devanagari)|dhi|
- |Dogri (Devanagari)|doi|
- |Duala|dua|
- |Dungan|dng|
- |Dutch|nl|
- |Efik|efi|
- |English|en|
- |Erzya (Cyrillic)|myv|
- |Estonian|et|
- |Faroese|fo|
- |Fijian|fj|
- |Filipino|fil|
- |Finnish|fi|
+ |Abaza|`abq`|
+ |Abkhazian|`ab`|
+ |Achinese|`ace`|
+ |Acoli|`ach`|
+ |Adangme|`ada`|
+ |Adyghe|`ady`|
+ |Afar|`aa`|
+ |Afrikaans|`af`|
+ |Akan|`ak`|
+ |Albanian|`sq`|
+ |Algonquin|`alq`|
+ |Angika (Devanagari)|`anp`|
+ |Arabic|`ar`|
+ |Asturian|`ast`|
+ |Asu (Tanzania)|`asa`|
+ |Avaric|`av`|
+ |Awadhi-Hindi (Devanagari)|`awa`|
+ |Aymara|`ay`|
+ |Azerbaijani (Latin)|`az`|
+ |Bafia|`ksf`|
+ |Bagheli|`bfy`|
+ |Bambara|`bm`|
+ |Bashkir|`ba`|
+ |Basque|`eu`|
+ |Belarusian (Cyrillic)|`be`, `be-cyrl`|
+ |Belarusian (Latin)|`be`, `be-latn`|
+ |Bemba (Zambia)|`bem`|
+ |Bena (Tanzania)|`bez`|
+ |Bhojpuri-Hindi (Devanagari)|`bho`|
+ |Bikol|`bik`|
+ |Bini|`bin`|
+ |Bislama|`bi`|
+ |Bodo (Devanagari)|`brx`|
+ |Bosnian (Latin)|`bs`|
+ |Brajbha|`bra`|
+ |Breton|`br`|
+ |Bulgarian|`bg`|
+ |Bundeli|`bns`|
+ |Buryat (Cyrillic)|`bua`|
+ |Catalan|`ca`|
+ |Cebuano|`ceb`|
+ |Chamling|`rab`|
+ |Chamorro|`ch`|
+ |Chechen|`ce`|
+ |Chhattisgarhi (Devanagari)|`hne`|
+ |Chiga|`cgg`|
+ |Chinese Simplified|`zh-Hans`|
+ |Chinese Traditional|`zh-Hant`|
+ |Choctaw|`cho`|
+ |Chukot|`ckt`|
+ |Chuvash|`cv`|
+ |Cornish|`kw`|
+ |Corsican|`co`|
+ |Cree|`cr`|
+ |Creek|`mus`|
+ |Crimean Tatar (Latin)|`crh`|
+ |Croatian|`hr`|
+ |Crow|`cro`|
+ |Czech|`cs`|
+ |Danish|`da`|
+ |Dargwa|`dar`|
+ |Dari|`prs`|
+ |Dhimal (Devanagari)|`dhi`|
+ |Dogri (Devanagari)|`doi`|
+ |Duala|`dua`|
+ |Dungan|`dng`|
+ |Dutch|`nl`|
+ |Efik|`efi`|
+ |English|`en`|
+ |Erzya (Cyrillic)|`myv`|
+ |Estonian|`et`|
+ |Faroese|`fo`|
+ |Fijian|`fj`|
+ |Filipino|`fil`|
+ |Finnish|`fi`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Fon|fon|
- |French|fr|
- |Friulian|fur|
- |Ga|gaa|
- |Gagauz (Latin)|gag|
- |Galician|gl|
- |Ganda|lg|
- |Gayo|gay|
- |German|de|
- |Gilbertese|gil|
- |Gondi (Devanagari)|gon|
- |Greek|el|
- |Greenlandic|kl|
- |Guarani|gn|
- |Gurung (Devanagari)|gvr|
- |Gusii|guz|
- |Haitian Creole|ht|
- |Halbi (Devanagari)|hlb|
- |Hani|hni|
- |Haryanvi|bgc|
- |Hawaiian|haw|
- |Hebrew|he|
- |Herero|hz|
- |Hiligaynon|hil|
- |Hindi|hi|
- |Hmong Daw (Latin)|mww|
- |Ho(Devanagiri)|hoc|
- |Hungarian|hu|
- |Iban|iba|
- |Icelandic|is|
- |Igbo|ig|
- |Iloko|ilo|
- |Inari Sami|smn|
- |Indonesian|id|
- |Ingush|inh|
- |Interlingua|ia|
- |Inuktitut (Latin)|iu|
- |Irish|ga|
- |Italian|it|
- |Japanese|ja|
- |Jaunsari (Devanagari)|Jns|
- |Javanese|jv|
- |Jola-Fonyi|dyo|
- |Kabardian|kbd|
- |Kabuverdianu|kea|
- |Kachin (Latin)|kac|
- |Kalenjin|kln|
- |Kalmyk|xal|
- |Kangri (Devanagari)|xnr|
- |Kanuri|kr|
- |Karachay-Balkar|krc|
- |Kara-Kalpak (Cyrillic)|kaa-cyrl|
- |Kara-Kalpak (Latin)|kaa|
- |Kashubian|csb|
- |Kazakh (Cyrillic)|kk-cyrl|
- |Kazakh (Latin)|kk-latn|
- |Khakas|kjh|
- |Khaling|klr|
- |Khasi|kha|
- |K'iche'|quc|
- |Kikuyu|ki|
- |Kildin Sami|sjd|
- |Kinyarwanda|rw|
- |Komi|kv|
- |Kongo|kg|
- |Korean|ko|
- |Korku|kfq|
- |Koryak|kpy|
- |Kosraean|kos|
- |Kpelle|kpe|
- |Kuanyama|kj|
- |Kumyk (Cyrillic)|kum|
- |Kurdish (Arabic)|ku-arab|
- |Kurdish (Latin)|ku-latn|
- |Kurukh (Devanagari)|kru|
- |Kyrgyz (Cyrillic)|ky|
- |Lak|lbe|
- |Lakota|lkt|
+ |`Fon`|`fon`|
+ |French|`fr`|
+ |Friulian|`fur`|
+ |`Ga`|`gaa`|
+ |Gagauz (Latin)|`gag`|
+ |Galician|`gl`|
+ |Ganda|`lg`|
+ |Gayo|`gay`|
+ |German|`de`|
+ |Gilbertese|`gil`|
+ |Gondi (Devanagari)|`gon`|
+ |Greek|`el`|
+ |Greenlandic|`kl`|
+ |Guarani|`gn`|
+ |Gurung (Devanagari)|`gvr`|
+ |Gusii|`guz`|
+ |Haitian Creole|`ht`|
+ |Halbi (Devanagari)|`hlb`|
+ |Hani|`hni`|
+ |Haryanvi|`bgc`|
+ |Hawaiian|`haw`|
+ |Hebrew|`he`|
+ |Herero|`hz`|
+ |Hiligaynon|`hil`|
+ |Hindi|`hi`|
+ |Hmong Daw (Latin)|`mww`|
+ |Ho(Devanagiri)|`hoc`|
+ |Hungarian|`hu`|
+ |Iban|`iba`|
+ |Icelandic|`is`|
+ |Igbo|`ig`|
+ |Iloko|`ilo`|
+ |Inari Sami|`smn`|
+ |Indonesian|`id`|
+ |Ingush|`inh`|
+ |Interlingua|`ia`|
+ |Inuktitut (Latin)|`iu`|
+ |Irish|`ga`|
+ |Italian|`it`|
+ |Japanese|`ja`|
+ |Jaunsari (Devanagari)|`Jns`|
+ |Javanese|`jv`|
+ |Jola-Fonyi|`dyo`|
+ |Kabardian|`kbd`|
+ |Kabuverdianu|`kea`|
+ |Kachin (Latin)|`kac`|
+ |Kalenjin|`kln`|
+ |Kalmyk|`xal`|
+ |Kangri (Devanagari)|`xnr`|
+ |Kanuri|`kr`|
+ |Karachay-Balkar|`krc`|
+ |Kara-Kalpak (Cyrillic)|`kaa-cyrl`|
+ |Kara-Kalpak (Latin)|`kaa`|
+ |Kashubian|`csb`|
+ |Kazakh (Cyrillic)|`kk-cyrl`|
+ |Kazakh (Latin)|`kk-latn`|
+ |Khakas|`kjh`|
+ |Khaling|`klr`|
+ |Khasi|`kha`|
+ |K'iche'|`quc`|
+ |Kikuyu|`ki`|
+ |Kildin Sami|`sjd`|
+ |Kinyarwanda|`rw`|
+ |Komi|`kv`|
+ |Kongo|`kg`|
+ |Korean|`ko`|
+ |Korku|`kfq`|
+ |Koryak|`kpy`|
+ |Kosraean|`kos`|
+ |Kpelle|`kpe`|
+ |Kuanyama|`kj`|
+ |Kumyk (Cyrillic)|`kum`|
+ |Kurdish (Arabic)|`ku-arab`|
+ |Kurdish (Latin)|`ku-latn`|
+ |Kurukh (Devanagari)|`kru`|
+ |Kyrgyz (Cyrillic)|`ky`|
+ |`Lak`|`lbe`|
+ |Lakota|`lkt`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Latin|la|
- |Latvian|lv|
- |Lezghian|lex|
- |Lingala|ln|
- |Lithuanian|lt|
- |Lower Sorbian|dsb|
- |Lozi|loz|
- |Lule Sami|smj|
- |Luo (Kenya and Tanzania)|luo|
- |Luxembourgish|lb|
- |Luyia|luy|
- |Macedonian|mk|
- |Machame|jmc|
- |Madurese|mad|
- |Mahasu Pahari (Devanagari)|bfz|
- |Makhuwa-Meetto|mgh|
- |Makonde|kde|
- |Malagasy|mg|
- |Malay (Latin)|ms|
- |Maltese|mt|
- |Malto (Devanagari)|kmj|
- |Mandinka|mnk|
- |Manx|gv|
- |Maori|mi|
- |Mapudungun|arn|
- |Marathi|mr|
- |Mari (Russia)|chm|
- |Masai|mas|
- |Mende (Sierra Leone)|men|
- |Meru|mer|
- |Meta'|mgo|
- |Minangkabau|min|
- |Mohawk|moh|
- |Mongolian (Cyrillic)|mn|
- |Mongondow|mog|
- |Montenegrin (Cyrillic)|cnr-cyrl|
- |Montenegrin (Latin)|cnr-latn|
- |Morisyen|mfe|
- |Mundang|mua|
- |Nahuatl|nah|
- |Navajo|nv|
- |Ndonga|ng|
- |Neapolitan|nap|
- |Nepali|ne|
- |Ngomba|jgo|
- |Niuean|niu|
- |Nogay|nog|
- |North Ndebele|nd|
- |Northern Sami (Latin)|sme|
- |Norwegian|no|
- |Nyanja|ny|
- |Nyankole|nyn|
- |Nzima|nzi|
- |Occitan|oc|
- |Ojibwa|oj|
- |Oromo|om|
- |Ossetic|os|
- |Pampanga|pam|
- |Pangasinan|pag|
- |Papiamento|pap|
- |Pashto|ps|
- |Pedi|nso|
- |Persian|fa|
- |Polish|pl|
- |Portuguese|pt|
- |Punjabi (Arabic)|pa|
- |Quechua|qu|
- |Ripuarian|ksh|
- |Romanian|ro|
- |Romansh|rm|
- |Rundi|rn|
- |Russian|ru|
- |Rwa|rwk|
- |Sadri (Devanagari)|sck|
- |Sakha|sah|
- |Samburu|saq|
- |Samoan (Latin)|sm|
- |Sango|sg|
+ |Latin|`la`|
+ |Latvian|`lv`|
+ |Lezghian|`lex`|
+ |Lingala|`ln`|
+ |Lithuanian|`lt`|
+ |Lower Sorbian|`dsb`|
+ |Lozi|`loz`|
+ |Lule Sami|`smj`|
+ |Luo (Kenya and Tanzania)|`luo`|
+ |Luxembourgish|`lb`|
+ |Luyia|`luy`|
+ |Macedonian|`mk`|
+ |Machame|`jmc`|
+ |Madurese|`mad`|
+ |Mahasu Pahari (Devanagari)|`bfz`|
+ |Makhuwa-Meetto|`mgh`|
+ |Makonde|`kde`|
+ |Malagasy|`mg`|
+ |Malay (Latin)|`ms`|
+ |Maltese|`mt`|
+ |Malto (Devanagari)|`kmj`|
+ |Mandinka|`mnk`|
+ |Manx|`gv`|
+ |Maori|`mi`|
+ |Mapudungun|`arn`|
+ |Marathi|`mr`|
+ |Mari (Russia)|`chm`|
+ |Masai|`mas`|
+ |Mende (Sierra Leone)|`men`|
+ |Meru|`mer`|
+ |Meta'|`mgo`|
+ |Minangkabau|`min`|
+ |Mohawk|`moh`|
+ |Mongolian (Cyrillic)|`mn`|
+ |Mongondow|`mog`|
+ |Montenegrin (Cyrillic)|`cnr-cyrl`|
+ |Montenegrin (Latin)|`cnr-latn`|
+ |Morisyen|`mfe`|
+ |Mundang|`mua`|
+ |Nahuatl|`nah`|
+ |Navajo|`nv`|
+ |Ndonga|`ng`|
+ |Neapolitan|`nap`|
+ |Nepali|`ne`|
+ |Ngomba|`jgo`|
+ |Niuean|`niu`|
+ |Nogay|`nog`|
+ |North Ndebele|`nd`|
+ |Northern Sami (Latin)|`sme`|
+ |Norwegian|`no`|
+ |Nyanja|`ny`|
+ |Nyankole|`nyn`|
+ |Nzima|`nzi`|
+ |Occitan|`oc`|
+ |Ojibwa|`oj`|
+ |Oromo|`om`|
+ |Ossetic|`os`|
+ |Pampanga|`pam`|
+ |Pangasinan|`pag`|
+ |Papiamento|`pap`|
+ |Pashto|`ps`|
+ |Pedi|`nso`|
+ |Persian|`fa`|
+ |Polish|`pl`|
+ |Portuguese|`pt`|
+ |Punjabi (Arabic)|`pa`|
+ |Quechua|`qu`|
+ |Ripuarian|`ksh`|
+ |Romanian|`ro`|
+ |Romansh|`rm`|
+ |Rundi|`rn`|
+ |Russian|`ru`|
+ |`Rwa`|`rwk`|
+ |Sadri (Devanagari)|`sck`|
+ |Sakha|`sah`|
+ |Samburu|`saq`|
+ |Samoan (Latin)|`sm`|
+ |Sango|`sg`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Sangu (Gabon)|snq|
- |Sanskrit (Devanagari)|sa|
- |Santali(Devanagiri)|sat|
- |Scots|sco|
- |Scottish Gaelic|gd|
- |Sena|seh|
- |Serbian (Cyrillic)|sr-cyrl|
- |Serbian (Latin)|sr, sr-latn|
- |Shambala|ksb|
- |Shona|sn|
- |Siksika|bla|
- |Sirmauri (Devanagari)|srx|
- |Skolt Sami|sms|
- |Slovak|sk|
- |Slovenian|sl|
- |Soga|xog|
- |Somali (Arabic)|so|
- |Somali (Latin)|so-latn|
- |Songhai|son|
- |South Ndebele|nr|
- |Southern Altai|alt|
- |Southern Sami|sma|
- |Southern Sotho|st|
- |Spanish|es|
- |Sundanese|su|
- |Swahili (Latin)|sw|
- |Swati|ss|
- |Swedish|sv|
- |Tabassaran|tab|
- |Tachelhit|shi|
- |Tahitian|ty|
- |Taita|dav|
- |Tajik (Cyrillic)|tg|
- |Tamil|ta|
- |Tatar (Cyrillic)|tt-cyrl|
- |Tatar (Latin)|tt|
- |Teso|teo|
- |Tetum|tet|
- |Thai|th|
- |Thangmi|thf|
- |Tok Pisin|tpi|
- |Tongan|to|
- |Tsonga|ts|
- |Tswana|tn|
- |Turkish|tr|
- |Turkmen (Latin)|tk|
- |Tuvan|tyv|
- |Udmurt|udm|
- |Uighur (Cyrillic)|ug-cyrl|
- |Ukrainian|uk|
- |Upper Sorbian|hsb|
- |Urdu|ur|
- |Uyghur (Arabic)|ug|
- |Uzbek (Arabic)|uz-arab|
- |Uzbek (Cyrillic)|uz-cyrl|
- |Uzbek (Latin)|uz|
- |Vietnamese|vi|
- |Volapük|vo|
- |Vunjo|vun|
- |Walser|wae|
- |Welsh|cy|
- |Western Frisian|fy|
- |Wolof|wo|
- |Xhosa|xh|
- |Yucatec Maya|yua|
- |Zapotec|zap|
- |Zarma|dje|
- |Zhuang|za|
- |Zulu|zu|
+ |Sangu (Gabon)|`snq`|
+ |Sanskrit (Devanagari)|`sa`|
+ |Santali(Devanagiri)|`sat`|
+ |Scots|`sco`|
+ |Scottish Gaelic|`gd`|
+ |Sena|`seh`|
+ |Serbian (Cyrillic)|`sr-cyrl`|
+ |Serbian (Latin)|`sr`, `sr-latn`|
+ |Shambala|`ksb`|
+ |Shona|`sn`|
+ |Siksika|`bla`|
+ |Sirmauri (Devanagari)|`srx`|
+ |Skolt Sami|`sms`|
+ |Slovak|`sk`|
+ |Slovenian|`sl`|
+ |Soga|`xog`|
+ |Somali (Arabic)|`so`|
+ |Somali (Latin)|`so-latn`|
+ |Songhai|`son`|
+ |South Ndebele|`nr`|
+ |Southern Altai|`alt`|
+ |Southern Sami|`sma`|
+ |Southern Sotho|`st`|
+ |Spanish|`es`|
+ |Sundanese|`su`|
+ |Swahili (Latin)|`sw`|
+ |Swati|`ss`|
+ |Swedish|`sv`|
+ |Tabassaran|`tab`|
+ |Tachelhit|`shi`|
+ |Tahitian|`ty`|
+ |Taita|`dav`|
+ |Tajik (Cyrillic)|`tg`|
+ |Tamil|`ta`|
+ |Tatar (Cyrillic)|`tt-cyrl`|
+ |Tatar (Latin)|`tt`|
+ |Teso|`teo`|
+ |Tetum|`tet`|
+ |Thai|`th`|
+ |Thangmi|`thf`|
+ |Tok Pisin|`tpi`|
+ |Tongan|`to`|
+ |Tsonga|`ts`|
+ |Tswana|`tn`|
+ |Turkish|`tr`|
+ |Turkmen (Latin)|`tk`|
+ |Tuvan|`tyv`|
+ |Udmurt|`udm`|
+ |Uighur (Cyrillic)|`ug-cyrl`|
+ |Ukrainian|`uk`|
+ |Upper Sorbian|`hsb`|
+ |Urdu|`ur`|
+ |Uyghur (Arabic)|`ug`|
+ |Uzbek (Arabic)|`uz-arab`|
+ |Uzbek (Cyrillic)|`uz-cyrl`|
+ |Uzbek (Latin)|`uz`|
+ |Vietnamese|`vi`|
+ |Volapük|`vo`|
+ |Vunjo|`vun`|
+ |Walser|`wae`|
+ |Welsh|`cy`|
+ |Western Frisian|`fy`|
+ |Wolof|`wo`|
+ |Xhosa|`xh`|
+ |Yucatec Maya|`yua`|
+ |Zapotec|`zap`|
+ |Zarma|`dje`|
+ |Zhuang|`za`|
+ |Zulu|`zu`|
:::column-end::: :::row-end:::
The following table lists read model language support for extracting and analyzi
:::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Afrikaans|af|
- |Angika|anp|
- |Arabic|ar|
- |Asturian|ast|
- |Awadhi|awa|
- |Azerbaijani|az|
- |Belarusian (Cyrillic)|be, be-cyrl|
- |Belarusian (Latin)|be-latn|
- |Bagheli|bfy|
- |Mahasu Pahari|bfz|
- |Bulgarian|bg|
- |Haryanvi|bgc|
- |Bhojpuri|bho|
- |Bislama|bi|
- |Bundeli|bns|
- |Breton|br|
- |Braj|bra|
- |Bodo|brx|
- |Bosnian|bs|
- |Buriat|bua|
- |Catalan|ca|
- |Cebuano|ceb|
- |Chamorro|ch|
- |Montenegrin (Latin)|cnr, cnr-latn|
- |Montenegrin (Cyrillic)|cnr-cyrl|
- |Corsican|co|
- |Crimean Tatar|crh|
- |Czech|cs|
- |Kashubian|csb|
- |Welsh|cy|
- |Danish|da|
- |German|de|
- |Dhimal|dhi|
- |Dogri|doi|
- |Lower Sorbian|dsb|
- |English|en|
- |Spanish|es|
- |Estonian|et|
- |Basque|eu|
- |Persian|fa|
- |Finnish|fi|
- |Filipino|fil|
+ |Afrikaans|`af`|
+ |Angika|`anp`|
+ |Arabic|`ar`|
+ |Asturian|`ast`|
+ |Awadhi|`awa`|
+ |Azerbaijani|`az`|
+ |Belarusian (Cyrillic)|`be`, `be-cyrl`|
+ |Belarusian (Latin)|`be-latn`|
+ |Bagheli|`bfy`|
+ |Mahasu Pahari|`bfz`|
+ |Bulgarian|`bg`|
+ |Haryanvi|`bgc`|
+ |Bhojpuri|`bho`|
+ |Bislama|`bi`|
+ |Bundeli|`bns`|
+ |Breton|`br`|
+ |Braj|`bra`|
+ |Bodo|`brx`|
+ |Bosnian|`bs`|
+ |Buriat|`bua`|
+ |Catalan|`ca`|
+ |Cebuano|`ceb`|
+ |Chamorro|`ch`|
+ |Montenegrin (Latin)|`cnr`, `cnr-latn`|
+ |Montenegrin (Cyrillic)|`cnr-cyrl`|
+ |Corsican|`co`|
+ |Crimean Tatar|`crh`|
+ |Czech|`cs`|
+ |Kashubian|`csb`|
+ |Welsh|`cy`|
+ |Danish|`da`|
+ |German|`de`|
+ |Dhimal|`dhi`|
+ |Dogri|`doi`|
+ |Lower Sorbian|`dsb`|
+ |English|`en`|
+ |Spanish|`es`|
+ |Estonian|`et`|
+ |Basque|`eu`|
+ |Persian|`fa`|
+ |Finnish|`fi`|
+ |Filipino|`fil`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Fijian|fj|
- |Faroese|fo|
- |French|fr|
- |Friulian|fur|
- |Western Frisian|fy|
- |Irish|ga|
- |Gagauz|gag|
- |Scottish Gaelic|gd|
- |Gilbertese|gil|
- |Galician|gl|
- |Gondi|gon|
- |Manx|gv|
- |Gurung|gvr|
- |Hawaiian|haw|
- |Hindi|hi|
- |Halbi|hlb|
- |Chhattisgarhi|hne|
- |Hani|hni|
- |Ho|hoc|
- |Croatian|hr|
- |Upper Sorbian|hsb|
- |Haitian|ht|
- |Hungarian|hu|
- |Interlingua|ia|
- |Indonesian|id|
- |Icelandic|is|
- |Italian|it|
- |Inuktitut|iu|
+ |Fijian|`fj`|
+ |Faroese|`fo`|
+ |French|`fr`|
+ |Friulian|`fur`|
+ |Western Frisian|`fy`|
+ |Irish|`ga`|
+ |Gagauz|`gag`|
+ |Scottish Gaelic|`gd`|
+ |Gilbertese|`gil`|
+ |Galician|`gl`|
+ |Gondi|`gon`|
+ |Manx|`gv`|
+ |Gurung|`gvr`|
+ |Hawaiian|`haw`|
+ |Hindi|`hi`|
+ |Halbi|`hlb`|
+ |Chhattisgarhi|`hne`|
+ |Hani|`hni`|
+ |`Ho`|`hoc`|
+ |Croatian|`hr`|
+ |Upper Sorbian|`hsb`|
+ |Haitian|`ht`|
+ |Hungarian|`hu`|
+ |Interlingua|`ia`|
+ |Indonesian|`id`|
+ |Icelandic|`is`|
+ |Italian|`it`|
+ |Inuktitut|`iu`|
|Japanese|
- |Jaunsari|jns|
- |Javanese|jv|
- |Kara-Kalpak (Latin)|kaa, kaa-latn|
- |Kara-Kalpak (Cyrillic)|kaa-cyrl|
- |Kachin|kac|
- |Kabuverdianu|kea|
- |Korku|kfq|
- |Khasi|kha|
- |Kazakh (Latin)|kk, kk-latn|
- |Kazakh (Cyrillic)|kk-cyrl|
- |Kalaallisut|kl|
- |Khaling|klr|
- |Malto|kmj|
+ |Jaunsari|`jns`|
+ |Javanese|`jv`|
+ |Kara-Kalpak (Latin)|`kaa`, `kaa-latn`|
+ |Kara-Kalpak (Cyrillic)|`kaa-cyrl`|
+ |Kachin|`kac`|
+ |Kabuverdianu|`kea`|
+ |Korku|`kfq`|
+ |Khasi|`kha`|
+ |Kazakh (Latin)|`kk`, `kk-latn`|
+ |Kazakh (Cyrillic)|`kk-cyrl`|
+ |Kalaallisut|`kl`|
+ |Khaling|`klr`|
+ |Malto|`kmj`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:| |Korean|
- |Kosraean|kos|
- |Koryak|kpy|
- |Karachay-Balkar|krc|
- |Kurukh|kru|
- |Kölsch|ksh|
- |Kurdish (Latin)|ku, ku-latn|
- |Kurdish (Arabic)|ku-arab|
- |Kumyk|kum|
- |Cornish|kw|
- |Kirghiz|ky|
- |Latin|la|
- |Luxembourgish|lb|
- |Lakota|lkt|
- |Lithuanian|lt|
- |Maori|mi|
- |Mongolian|mn|
- |Marathi|mr|
- |Malay|ms|
- |Maltese|mt|
- |Hmong Daw|mww|
- |Erzya|myv|
- |Neapolitan|nap|
- |Nepali|ne|
- |Niuean|niu|
- |Dutch|nl|
- |Norwegian|no|
- |Nogai|nog|
- |Occitan|oc|
- |Ossetian|os|
- |Panjabi|pa|
- |Polish|pl|
- |Dari|prs|
- |Pushto|ps|
- |Portuguese|pt|
- |K'iche'|quc|
- |Camling|rab|
- |Romansh|rm|
- |Romanian|ro|
- |Russian|ru|
- |Sanskrit|sa|
- |Santali|sat|
+ |Kosraean|`kos`|
+ |Koryak|`kpy`|
+ |Karachay-Balkar|`krc`|
+ |Kurukh|`kru`|
+ |Kölsch|`ksh`|
+ |Kurdish (Latin)|`ku`, `ku-latn`|
+ |Kurdish (Arabic)|`ku-arab`|
+ |Kumyk|`kum`|
+ |Cornish|`kw`|
+ |Kirghiz|`ky`|
+ |Latin|`la`|
+ |Luxembourgish|`lb`|
+ |Lakota|`lkt`|
+ |Lithuanian|`lt`|
+ |Maori|`mi`|
+ |Mongolian|`mn`|
+ |Marathi|`mr`|
+ |Malay|`ms`|
+ |Maltese|`mt`|
+ |Hmong `Daw`|`mww`|
+ |Erzya|`myv`|
+ |Neapolitan|`nap`|
+ |Nepali|`ne`|
+ |Niuean|`niu`|
+ |Dutch|`nl`|
+ |Norwegian|`no`|
+ |Nogai|`nog`|
+ |Occitan|`oc`|
+ |Ossetian|`os`|
+ |Panjabi|`pa`|
+ |Polish|`pl`|
+ |Dari|`prs`|
+ |Pushto|`ps`|
+ |Portuguese|`pt`|
+ |K'iche'|`quc`|
+ |Camling|`rab`|
+ |Romansh|`rm`|
+ |Romanian|`ro`|
+ |Russian|`ru`|
+ |Sanskrit|`sa`|
+ |Santali|`sat`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Sadri|sck|
- |Scots|sco|
- |Slovak|sk|
- |Slovenian|sl|
- |Samoan|sm|
- |Southern Sami|sma|
- |Northern Sami|sme|
- |Lule Sami|smj|
- |Inari Sami|smn|
- |Skolt Sami|sms|
- |Somali|so|
- |Albanian|sq|
- |Serbian (Latin)|sr, sr-latn|
- |Sirmauri|srx|
- |Swedish|sv|
- |Swahili|sw|
- |Tetum|tet|
- |Tajik|tg|
- |Thangmi|thf|
- |Turkmen|tk|
- |Tonga|to|
- |Turkish|tr|
- |Tatar|tt|
- |Tuvinian|tyv|
- |Uighur|ug|
- |Urdu|ur|
- |Uzbek (Latin)|uz, uz-latn|
- |Uzbek (Cyrillic)|uz-cyrl|
- |Uzbek (Arabic)|uz-arab|
- |Volapük|vo|
- |Walser|wae|
- |Kangri|xnr|
- |Yucateco|yua|
- |Zhuang|za|
- |Chinese (Han (Simplified variant))|zh, zh-hans|
- |Chinese (Han (Traditional variant))|zh-hant|
- |Zulu|zu|
+ |Sadri|`sck`|
+ |Scots|`sco`|
+ |Slovak|`sk`|
+ |Slovenian|`sl`|
+ |Samoan|`sm`|
+ |Southern Sami|`sma`|
+ |Northern Sami|`sme`|
+ |Lule Sami|`smj`|
+ |Inari Sami|`smn`|
+ |Skolt Sami|`sms`|
+ |Somali|`so`|
+ |Albanian|`sq`|
+ |Serbian (Latin)|`sr`, `sr-latn`|
+ |Sirmauri|`srx`|
+ |Swedish|`sv`|
+ |Swahili|`sw`|
+ |Tetum|`tet`|
+ |Tajik|`tg`|
+ |Thangmi|`thf`|
+ |Turkmen|`tk`|
+ |Tonga|`to`|
+ |Turkish|`tr`|
+ |Tatar|`tt`|
+ |Tuvinian|`tyv`|
+ |Uighur|`ug`|
+ |Urdu|`ur`|
+ |Uzbek (Latin)|`uz`, `uz-latn`|
+ |Uzbek (Cyrillic)|`uz-cyrl`|
+ |Uzbek (Arabic)|`uz-arab`|
+ |Volapük|`vo`|
+ |Walser|`wae`|
+ |Kangri|`xnr`|
+ |Yucateco|`yua`|
+ |Zhuang|`za`|
+ |Chinese (Han (Simplified variant))|`zh`, `zh-hans`|
+ |Chinese (Han (Traditional variant))|`zh-hant`|
+ |Zulu|`zu`|
:::column-end::: :::row-end::: :::moniker-end
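Where the **Code (optional)** column lists a value, that code can be passed as an optional locale hint when calling the analysis API; omit it and the service detects the language automatically. The following is a minimal sketch assuming the Python SDK (`azure-ai-formrecognizer`, v3.2 or later); the endpoint, key, and file name are placeholders.

```python
# Minimal sketch: pass an optional language code (locale hint) to the Read model.
# The endpoint, key, and file path below are placeholders, not real values.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("printed-sample.pdf", "rb") as f:
    # locale is optional; omit it to let the service detect the language.
    poller = client.begin_analyze_document("prebuilt-read", document=f, locale="it")

result = poller.result()
for page in result.pages:
    for line in page.lines:
        print(line.content)
```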
+### [**Read: handwritten text**](#tab/read-hand)
++
+The following table lists read model language support for extracting and analyzing **handwritten** text.</br>
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`| Russian (preview) | `ru` |
+|Thai (preview) | `th` | Arabic (preview) | `ar` |
++
+The following table lists read model language support for extracting and analyzing **handwritten** text.</br>
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
+
+The following table lists read model language support for extracting and analyzing **handwritten** text.</br>
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
++ ### [**Read: language detection**](#tab/read-detection) The [Read model API](concept-read.md) supports **language detection** for the following languages in your documents. This list can include languages not currently supported for text extraction.
The [Read model API](concept-read.md) supports **language detection** for the fo
##### Model ID: **prebuilt-layout**
-### [**Layout: handwritten text**](#tab/layout-hand)
--
-The following table lists layout model language support for extracting and analyzing **handwritten** text. </br>
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`| Russian (preview) | `ru` |
-|Thai (preview) | `th` | Arabic (preview) | `ar` |
--
-##### Model ID: **prebuilt-layout**
-
-The following table lists layout model language support for extracting and analyzing **handwritten** text. </br>
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`|
--
- > [!NOTE]
- > Document Intelligence v2.1 does not support handwritten text extraction.
---
-The following table lists layout model language support for extracting and analyzing **handwritten** text. </br>
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`| Russian (preview) | `ru` |
-|Thai (preview) | `th` | Arabic (preview) | `ar` |
- ### [**Layout: printed text**](#tab/layout-print) :::moniker range=">=doc-intel-3.1.0"
The following table lists the supported languages for printed text:
:::row::: :::column span="":::
- |Language| Code (optional) |
+ |**Language**| **Code (optional)** |
|:--|:-:|
- |Abaza|abq|
- |Abkhazian|ab|
- |Achinese|ace|
- |Acoli|ach|
- |Adangme|ada|
- |Adyghe|ady|
- |Afar|aa|
- |Afrikaans|af|
- |Akan|ak|
- |Albanian|sq|
- |Algonquin|alq|
- |Angika (Devanagari)|anp|
- |Arabic|ar|
- |Asturian|ast|
- |Asu (Tanzania)|asa|
- |Avaric|av|
- |Awadhi-Hindi (Devanagari)|awa|
- |Aymara|ay|
- |Azerbaijani (Latin)|az|
- |Bafia|ksf|
- |Bagheli|bfy|
- |Bambara|bm|
- |Bashkir|ba|
- |Basque|eu|
- |Belarusian (Cyrillic)|be, be-cyrl|
- |Belarusian (Latin)|be, be-latn|
- |Bemba (Zambia)|bem|
- |Bena (Tanzania)|bez|
- |Bhojpuri-Hindi (Devanagari)|bho|
- |Bikol|bik|
- |Bini|bin|
- |Bislama|bi|
- |Bodo (Devanagari)|brx|
- |Bosnian (Latin)|bs|
- |Brajbha|bra|
- |Breton|br|
- |Bulgarian|bg|
- |Bundeli|bns|
- |Buryat (Cyrillic)|bua|
- |Catalan|ca|
- |Cebuano|ceb|
- |Chamling|rab|
- |Chamorro|ch|
- |Chechen|ce|
- |Chhattisgarhi (Devanagari)|hne|
- |Chiga|cgg|
- |Chinese Simplified|zh-Hans|
- |Chinese Traditional|zh-Hant|
- |Choctaw|cho|
- |Chukot|ckt|
- |Chuvash|cv|
- |Cornish|kw|
- |Corsican|co|
- |Cree|cr|
- |Creek|mus|
- |Crimean Tatar (Latin)|crh|
- |Croatian|hr|
- |Crow|cro|
- |Czech|cs|
- |Danish|da|
- |Dargwa|dar|
- |Dari|prs|
- |Dhimal (Devanagari)|dhi|
- |Dogri (Devanagari)|doi|
- |Duala|dua|
- |Dungan|dng|
- |Dutch|nl|
- |Efik|efi|
- |English|en|
- |Erzya (Cyrillic)|myv|
- |Estonian|et|
- |Faroese|fo|
- |Fijian|fj|
- |Filipino|fil|
- |Finnish|fi|
+ |Abaza|`abq`|
+ |Abkhazian|`ab`|
+ |Achinese|`ace`|
+ |Acoli|`ach`|
+ |Adangme|`ada`|
+ |Adyghe|`ady`|
+ |Afar|`aa`|
+ |Afrikaans|`af`|
+ |Akan|`ak`|
+ |Albanian|`sq`|
+ |Algonquin|`alq`|
+ |Angika (Devanagari)|`anp`|
+ |Arabic|`ar`|
+ |Asturian|`ast`|
+ |Asu (Tanzania)|`asa`|
+ |Avaric|`av`|
+ |Awadhi-Hindi (Devanagari)|`awa`|
+ |Aymara|`ay`|
+ |Azerbaijani (Latin)|`az`|
+ |Bafia|`ksf`|
+ |Bagheli|`bfy`|
+ |Bambara|`bm`|
+ |Bashkir|`ba`|
+ |Basque|`eu`|
+ |Belarusian (Cyrillic)|`be`, `be-cyrl`|
+ |Belarusian (Latin)|`be`, `be-latn`|
+ |Bemba (Zambia)|`bem`|
+ |Bena (Tanzania)|`bez`|
+ |Bhojpuri-Hindi (Devanagari)|`bho`|
+ |Bikol|`bik`|
+ |Bini|`bin`|
+ |Bislama|`bi`|
+ |Bodo (Devanagari)|`brx`|
+ |Bosnian (Latin)|`bs`|
+ |Brajbha|`bra`|
+ |Breton|`br`|
+ |Bulgarian|`bg`|
+ |Bundeli|`bns`|
+ |Buryat (Cyrillic)|`bua`|
+ |Catalan|`ca`|
+ |Cebuano|`ceb`|
+ |Chamling|`rab`|
+ |Chamorro|`ch`|
+ |Chechen|`ce`|
+ |Chhattisgarhi (Devanagari)|`hne`|
+ |Chiga|`cgg`|
+ |Chinese Simplified|`zh-Hans`|
+ |Chinese Traditional|`zh-Hant`|
+ |Choctaw|`cho`|
+ |Chukot|`ckt`|
+ |Chuvash|`cv`|
+ |Cornish|`kw`|
+ |Corsican|`co`|
+ |Cree|`cr`|
+ |Creek|`mus`|
+ |Crimean Tatar (Latin)|`crh`|
+ |Croatian|`hr`|
+ |Crow|`cro`|
+ |Czech|`cs`|
+ |Danish|`da`|
+ |Dargwa|`dar`|
+ |Dari|`prs`|
+ |Dhimal (Devanagari)|`dhi`|
+ |Dogri (Devanagari)|`doi`|
+ |Duala|`dua`|
+ |Dungan|`dng`|
+ |Dutch|`nl`|
+ |Efik|`efi`|
+ |English|`en`|
+ |Erzya (Cyrillic)|`myv`|
+ |Estonian|`et`|
+ |Faroese|`fo`|
+ |Fijian|`fj`|
+ |Filipino|`fil`|
+ |Finnish|`fi`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Fon|fon|
- |French|fr|
- |Friulian|fur|
- |Ga|gaa|
- |Gagauz (Latin)|gag|
- |Galician|gl|
- |Ganda|lg|
- |Gayo|gay|
- |German|de|
- |Gilbertese|gil|
- |Gondi (Devanagari)|gon|
- |Greek|el|
- |Greenlandic|kl|
- |Guarani|gn|
- |Gurung (Devanagari)|gvr|
- |Gusii|guz|
- |Haitian Creole|ht|
- |Halbi (Devanagari)|hlb|
- |Hani|hni|
- |Haryanvi|bgc|
- |Hawaiian|haw|
- |Hebrew|he|
- |Herero|hz|
- |Hiligaynon|hil|
- |Hindi|hi|
- |Hmong Daw (Latin)|mww|
- |Ho(Devanagiri)|hoc|
- |Hungarian|hu|
- |Iban|iba|
- |Icelandic|is|
- |Igbo|ig|
- |Iloko|ilo|
- |Inari Sami|smn|
- |Indonesian|id|
- |Ingush|inh|
- |Interlingua|ia|
- |Inuktitut (Latin)|iu|
- |Irish|ga|
- |Italian|it|
- |Japanese|ja|
- |Jaunsari (Devanagari)|Jns|
- |Javanese|jv|
- |Jola-Fonyi|dyo|
- |Kabardian|kbd|
- |Kabuverdianu|kea|
- |Kachin (Latin)|kac|
- |Kalenjin|kln|
- |Kalmyk|xal|
- |Kangri (Devanagari)|xnr|
- |Kanuri|kr|
- |Karachay-Balkar|krc|
- |Kara-Kalpak (Cyrillic)|kaa-cyrl|
- |Kara-Kalpak (Latin)|kaa|
- |Kashubian|csb|
- |Kazakh (Cyrillic)|kk-cyrl|
- |Kazakh (Latin)|kk-latn|
- |Khakas|kjh|
- |Khaling|klr|
- |Khasi|kha|
- |K'iche'|quc|
- |Kikuyu|ki|
- |Kildin Sami|sjd|
- |Kinyarwanda|rw|
- |Komi|kv|
- |Kongo|kg|
- |Korean|ko|
- |Korku|kfq|
- |Koryak|kpy|
- |Kosraean|kos|
- |Kpelle|kpe|
- |Kuanyama|kj|
- |Kumyk (Cyrillic)|kum|
- |Kurdish (Arabic)|ku-arab|
- |Kurdish (Latin)|ku-latn|
+ |Fon|`fon`|
+ |French|`fr`|
+ |Friulian|`fur`|
+ |Ga|`gaa`|
+ |Gagauz (Latin)|`gag`|
+ |Galician|`gl`|
+ |Ganda|`lg`|
+ |Gayo|`gay`|
+ |German|`de`|
+ |Gilbertese|`gil`|
+ |Gondi (Devanagari)|`gon`|
+ |Greek|`el`|
+ |Greenlandic|`kl`|
+ |Guarani|`gn`|
+ |Gurung (Devanagari)|`gvr`|
+ |Gusii|`guz`|
+ |Haitian Creole|`ht`|
+ |Halbi (Devanagari)|`hlb`|
+ |Hani|`hni`|
+ |Haryanvi|`bgc`|
+ |Hawaiian|`haw`|
+ |Hebrew|`he`|
+ |Herero|`hz`|
+ |Hiligaynon|`hil`|
+ |Hindi|`hi`|
+ |Hmong Daw (Latin)|`mww`|
+ |Ho (Devanagari)|`hoc`|
+ |Hungarian|`hu`|
+ |Iban|`iba`|
+ |Icelandic|`is`|
+ |Igbo|`ig`|
+ |Iloko|`ilo`|
+ |Inari Sami|`smn`|
+ |Indonesian|`id`|
+ |Ingush|`inh`|
+ |Interlingua|`ia`|
+ |Inuktitut (Latin)|`iu`|
+ |Irish|`ga`|
+ |Italian|`it`|
+ |Japanese|`ja`|
+ |Jaunsari (Devanagari)|`jns`|
+ |Javanese|`jv`|
+ |Jola-Fonyi|`dyo`|
+ |Kabardian|`kbd`|
+ |Kabuverdianu|`kea`|
+ |Kachin (Latin)|`kac`|
+ |Kalenjin|`kln`|
+ |Kalmyk|`xal`|
+ |Kangri (Devanagari)|`xnr`|
+ |Kanuri|`kr`|
+ |Karachay-Balkar|`krc`|
+ |Kara-Kalpak (Cyrillic)|`kaa-cyrl`|
+ |Kara-Kalpak (Latin)|`kaa`|
+ |Kashubian|`csb`|
+ |Kazakh (Cyrillic)|`kk-cyrl`|
+ |Kazakh (Latin)|`kk-latn`|
+ |Khakas|`kjh`|
+ |Khaling|`klr`|
+ |Khasi|`kha`|
+ |K'iche'|`quc`|
+ |Kikuyu|`ki`|
+ |Kildin Sami|`sjd`|
+ |Kinyarwanda|`rw`|
+ |Komi|`kv`|
+ |Kongo|`kg`|
+ |Korean|`ko`|
+ |Korku|`kfq`|
+ |Koryak|`kpy`|
+ |Kosraean|`kos`|
+ |Kpelle|`kpe`|
+ |Kuanyama|`kj`|
+ |Kumyk (Cyrillic)|`kum`|
+ |Kurdish (Arabic)|`ku-arab`|
+ |Kurdish (Latin)|`ku-latn`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Kurukh (Devanagari)|kru|
- |Kyrgyz (Cyrillic)|ky|
- |Lak|lbe|
- |Lakota|lkt|
- |Latin|la|
- |Latvian|lv|
- |Lezghian|lex|
- |Lingala|ln|
- |Lithuanian|lt|
- |Lower Sorbian|dsb|
- |Lozi|loz|
- |Lule Sami|smj|
- |Luo (Kenya and Tanzania)|luo|
- |Luxembourgish|lb|
- |Luyia|luy|
- |Macedonian|mk|
- |Machame|jmc|
- |Madurese|mad|
- |Mahasu Pahari (Devanagari)|bfz|
- |Makhuwa-Meetto|mgh|
- |Makonde|kde|
- |Malagasy|mg|
- |Malay (Latin)|ms|
- |Maltese|mt|
- |Malto (Devanagari)|kmj|
- |Mandinka|mnk|
- |Manx|gv|
- |Maori|mi|
- |Mapudungun|arn|
- |Marathi|mr|
- |Mari (Russia)|chm|
- |Masai|mas|
- |Mende (Sierra Leone)|men|
- |Meru|mer|
- |Meta'|mgo|
- |Minangkabau|min|
- |Mohawk|moh|
- |Mongolian (Cyrillic)|mn|
- |Mongondow|mog|
- |Montenegrin (Cyrillic)|cnr-cyrl|
- |Montenegrin (Latin)|cnr-latn|
- |Morisyen|mfe|
- |Mundang|mua|
- |Nahuatl|nah|
- |Navajo|nv|
- |Ndonga|ng|
- |Neapolitan|nap|
- |Nepali|ne|
- |Ngomba|jgo|
- |Niuean|niu|
- |Nogay|nog|
- |North Ndebele|nd|
- |Northern Sami (Latin)|sme|
- |Norwegian|no|
- |Nyanja|ny|
- |Nyankole|nyn|
- |Nzima|nzi|
- |Occitan|oc|
- |Ojibwa|oj|
- |Oromo|om|
- |Ossetic|os|
- |Pampanga|pam|
- |Pangasinan|pag|
- |Papiamento|pap|
- |Pashto|ps|
- |Pedi|nso|
- |Persian|fa|
- |Polish|pl|
- |Portuguese|pt|
- |Punjabi (Arabic)|pa|
- |Quechua|qu|
- |Ripuarian|ksh|
- |Romanian|ro|
- |Romansh|rm|
- |Rundi|rn|
- |Russian|ru|
+ |Kurukh (Devanagari)|`kru`|
+ |Kyrgyz (Cyrillic)|`ky`|
+ |Lak|`lbe`|
+ |Lakota|`lkt`|
+ |Latin|`la`|
+ |Latvian|`lv`|
+ |Lezghian|`lex`|
+ |Lingala|`ln`|
+ |Lithuanian|`lt`|
+ |Lower Sorbian|`dsb`|
+ |Lozi|`loz`|
+ |Lule Sami|`smj`|
+ |Luo (Kenya and Tanzania)|`luo`|
+ |Luxembourgish|`lb`|
+ |Luyia|`luy`|
+ |Macedonian|`mk`|
+ |Machame|`jmc`|
+ |Madurese|`mad`|
+ |Mahasu Pahari (Devanagari)|`bfz`|
+ |Makhuwa-Meetto|`mgh`|
+ |Makonde|`kde`|
+ |Malagasy|`mg`|
+ |Malay (Latin)|`ms`|
+ |Maltese|`mt`|
+ |Malto (Devanagari)|`kmj`|
+ |Mandinka|`mnk`|
+ |Manx|`gv`|
+ |Maori|`mi`|
+ |Mapudungun|`arn`|
+ |Marathi|`mr`|
+ |Mari (Russia)|`chm`|
+ |Masai|`mas`|
+ |Mende (Sierra Leone)|`men`|
+ |Meru|`mer`|
+ |Meta'|`mgo`|
+ |Minangkabau|`min`|
+ |Mohawk|`moh`|
+ |Mongolian (Cyrillic)|`mn`|
+ |Mongondow|`mog`|
+ |Montenegrin (Cyrillic)|`cnr-cyrl`|
+ |Montenegrin (Latin)|`cnr-latn`|
+ |Morisyen|`mfe`|
+ |Mundang|`mua`|
+ |Nahuatl|`nah`|
+ |Navajo|`nv`|
+ |Ndonga|`ng`|
+ |Neapolitan|`nap`|
+ |Nepali|`ne`|
+ |Ngomba|`jgo`|
+ |Niuean|`niu`|
+ |Nogay|`nog`|
+ |North Ndebele|`nd`|
+ |Northern Sami (Latin)|`sme`|
+ |Norwegian|`no`|
+ |Nyanja|`ny`|
+ |Nyankole|`nyn`|
+ |Nzima|`nzi`|
+ |Occitan|`oc`|
+ |Ojibwa|`oj`|
+ |Oromo|`om`|
+ |Ossetic|`os`|
+ |Pampanga|`pam`|
+ |Pangasinan|`pag`|
+ |Papiamento|`pap`|
+ |Pashto|`ps`|
+ |Pedi|`nso`|
+ |Persian|`fa`|
+ |Polish|`pl`|
+ |Portuguese|`pt`|
+ |Punjabi (Arabic)|`pa`|
+ |Quechua|`qu`|
+ |Ripuarian|`ksh`|
+ |Romanian|`ro`|
+ |Romansh|`rm`|
+ |Rundi|`rn`|
+ |Russian|`ru`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Rwa|rwk|
- |Sadri (Devanagari)|sck|
- |Sakha|sah|
- |Samburu|saq|
- |Samoan (Latin)|sm|
- |Sango|sg|
- |Sangu (Gabon)|snq|
- |Sanskrit (Devanagari)|sa|
- |Santali(Devanagiri)|sat|
- |Scots|sco|
- |Scottish Gaelic|gd|
- |Sena|seh|
- |Serbian (Cyrillic)|sr-cyrl|
- |Serbian (Latin)|sr, sr-latn|
- |Shambala|ksb|
- |Shona|sn|
- |Siksika|bla|
- |Sirmauri (Devanagari)|srx|
- |Skolt Sami|sms|
- |Slovak|sk|
- |Slovenian|sl|
- |Soga|xog|
- |Somali (Arabic)|so|
- |Somali (Latin)|so-latn|
- |Songhai|son|
- |South Ndebele|nr|
- |Southern Altai|alt|
- |Southern Sami|sma|
- |Southern Sotho|st|
- |Spanish|es|
- |Sundanese|su|
- |Swahili (Latin)|sw|
- |Swati|ss|
- |Swedish|sv|
- |Tabassaran|tab|
- |Tachelhit|shi|
- |Tahitian|ty|
- |Taita|dav|
- |Tajik (Cyrillic)|tg|
- |Tamil|ta|
- |Tatar (Cyrillic)|tt-cyrl|
- |Tatar (Latin)|tt|
- |Teso|teo|
- |Tetum|tet|
- |Thai|th|
- |Thangmi|thf|
- |Tok Pisin|tpi|
- |Tongan|to|
- |Tsonga|ts|
- |Tswana|tn|
- |Turkish|tr|
- |Turkmen (Latin)|tk|
- |Tuvan|tyv|
- |Udmurt|udm|
- |Uighur (Cyrillic)|ug-cyrl|
- |Ukrainian|uk|
- |Upper Sorbian|hsb|
- |Urdu|ur|
- |Uyghur (Arabic)|ug|
- |Uzbek (Arabic)|uz-arab|
- |Uzbek (Cyrillic)|uz-cyrl|
- |Uzbek (Latin)|uz|
- |Vietnamese|vi|
- |Volapük|vo|
- |Vunjo|vun|
- |Walser|wae|
- |Welsh|cy|
- |Western Frisian|fy|
- |Wolof|wo|
- |Xhosa|xh|
- |Yucatec Maya|yua|
- |Zapotec|zap|
- |Zarma|dje|
- |Zhuang|za|
- |Zulu|zu|
+ |Rwa|`rwk`|
+ |Sadri (Devanagari)|`sck`|
+ |Sakha|`sah`|
+ |Samburu|`saq`|
+ |Samoan (Latin)|`sm`|
+ |Sango|`sg`|
+ |Sangu (Gabon)|`snq`|
+ |Sanskrit (Devanagari)|`sa`|
+ |Santali (Devanagari)|`sat`|
+ |Scots|`sco`|
+ |Scottish Gaelic|`gd`|
+ |Sena|`seh`|
+ |Serbian (Cyrillic)|`sr-cyrl`|
+ |Serbian (Latin)|`sr`, `sr-latn`|
+ |Shambala|`ksb`|
+ |Shona|`sn`|
+ |Siksika|`bla`|
+ |Sirmauri (Devanagari)|`srx`|
+ |Skolt Sami|`sms`|
+ |Slovak|`sk`|
+ |Slovenian|`sl`|
+ |Soga|`xog`|
+ |Somali (Arabic)|`so`|
+ |Somali (Latin)|`so-latn`|
+ |Songhai|`son`|
+ |South Ndebele|`nr`|
+ |Southern Altai|`alt`|
+ |Southern Sami|`sma`|
+ |Southern Sotho|`st`|
+ |Spanish|`es`|
+ |Sundanese|`su`|
+ |Swahili (Latin)|`sw`|
+ |Swati|`ss`|
+ |Swedish|`sv`|
+ |Tabassaran|`tab`|
+ |Tachelhit|`shi`|
+ |Tahitian|`ty`|
+ |Taita|`dav`|
+ |Tajik (Cyrillic)|`tg`|
+ |Tamil|`ta`|
+ |Tatar (Cyrillic)|`tt-cyrl`|
+ |Tatar (Latin)|`tt`|
+ |Teso|`teo`|
+ |Tetum|`tet`|
+ |Thai|`th`|
+ |Thangmi|`thf`|
+ |Tok Pisin|`tpi`|
+ |Tongan|`to`|
+ |Tsonga|`ts`|
+ |Tswana|`tn`|
+ |Turkish|`tr`|
+ |Turkmen (Latin)|`tk`|
+ |Tuvan|`tyv`|
+ |Udmurt|`udm`|
+ |Uighur (Cyrillic)|`ug-cyrl`|
+ |Ukrainian|`uk`|
+ |Upper Sorbian|`hsb`|
+ |Urdu|`ur`|
+ |Uyghur (Arabic)|`ug`|
+ |Uzbek (Arabic)|`uz-arab`|
+ |Uzbek (Cyrillic)|`uz-cyrl`|
+ |Uzbek (Latin)|`uz`|
+ |Vietnamese|`vi`|
+ |Volapük|`vo`|
+ |Vunjo|`vun`|
+ |Walser|`wae`|
+ |Welsh|`cy`|
+ |Western Frisian|`fy`|
+ |Wolof|`wo`|
+ |Xhosa|`xh`|
+ |Yucatec Maya|`yua`|
+ |Zapotec|`zap`|
+ |Zarma|`dje`|
+ |Zhuang|`za`|
+ |Zulu|`zu`|
:::column-end::: :::row-end:::
The following table lists layout model language support for extracting and analy
:::row::: :::column span="":::
- |Language| Code (optional) |
+ |**Language**| **Code (optional)**|
|:--|:-:|
- |Afrikaans|af|
- |Angika|anp|
- |Arabic|ar|
- |Asturian|ast|
- |Awadhi|awa|
- |Azerbaijani|az|
- |Belarusian (Cyrillic)|be, be-cyrl|
- |Belarusian (Latin)|be-latn|
- |Bagheli|bfy|
- |Mahasu Pahari|bfz|
- |Bulgarian|bg|
- |Haryanvi|bgc|
- |Bhojpuri|bho|
- |Bislama|bi|
- |Bundeli|bns|
- |Breton|br|
- |Braj|bra|
- |Bodo|brx|
- |Bosnian|bs|
- |Buriat|bua|
- |Catalan|ca|
- |Cebuano|ceb|
- |Chamorro|ch|
- |Montenegrin (Latin)|cnr, cnr-latn|
- |Montenegrin (Cyrillic)|cnr-cyrl|
- |Corsican|co|
- |Crimean Tatar|crh|
- |Czech|cs|
- |Kashubian|csb|
- |Welsh|cy|
- |Danish|da|
- |German|de|
- |Dhimal|dhi|
- |Dogri|doi|
- |Lower Sorbian|dsb|
- |English|en|
- |Spanish|es|
- |Estonian|et|
- |Basque|eu|
- |Persian|fa|
- |Finnish|fi|
- |Filipino|fil|
+ |Afrikaans|`af`|
+ |Angika|`anp`|
+ |Arabic|`ar`|
+ |Asturian|`ast`|
+ |Awadhi|`awa`|
+ |Azerbaijani|`az`|
+ |Belarusian (Cyrillic)|`be`, `be-cyrl`|
+ |Belarusian (Latin)|`be-latn`|
+ |Bagheli|`bfy`|
+ |Mahasu Pahari|`bfz`|
+ |Bulgarian|`bg`|
+ |Haryanvi|`bgc`|
+ |Bhojpuri|`bho`|
+ |Bislama|`bi`|
+ |Bundeli|`bns`|
+ |Breton|`br`|
+ |Braj|`bra`|
+ |Bodo|`brx`|
+ |Bosnian|`bs`|
+ |Buriat|`bua`|
+ |Catalan|`ca`|
+ |Cebuano|`ceb`|
+ |Chamorro|`ch`|
+ |Montenegrin (Latin)|`cnr`, `cnr-latn`|
+ |Montenegrin (Cyrillic)|`cnr-cyrl`|
+ |Corsican|`co`|
+ |Crimean Tatar|`crh`|
+ |Czech|`cs`|
+ |Kashubian|`csb`|
+ |Welsh|`cy`|
+ |Danish|`da`|
+ |German|`de`|
+ |Dhimal|`dhi`|
+ |Dogri|`doi`|
+ |Lower Sorbian|`dsb`|
+ |English|`en`|
+ |Spanish|`es`|
+ |Estonian|`et`|
+ |Basque|`eu`|
+ |Persian|`fa`|
+ |Finnish|`fi`|
+ |Filipino|`fil`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Fijian|fj|
- |Faroese|fo|
- |French|fr|
- |Friulian|fur|
- |Western Frisian|fy|
- |Irish|ga|
- |Gagauz|gag|
- |Scottish Gaelic|gd|
- |Gilbertese|gil|
- |Galician|gl|
- |Gondi|gon|
- |Manx|gv|
- |Gurung|gvr|
- |Hawaiian|haw|
- |Hindi|hi|
- |Halbi|hlb|
- |Chhattisgarhi|hne|
- |Hani|hni|
- |Ho|hoc|
- |Croatian|hr|
- |Upper Sorbian|hsb|
- |Haitian|ht|
- |Hungarian|hu|
- |Interlingua|ia|
- |Indonesian|id|
- |Icelandic|is|
- |Italian|it|
- |Inuktitut|iu|
+ |Fijian|`fj`|
+ |Faroese|`fo`|
+ |French|`fr`|
+ |Friulian|`fur`|
+ |Western Frisian|`fy`|
+ |Irish|`ga`|
+ |Gagauz|`gag`|
+ |Scottish Gaelic|`gd`|
+ |Gilbertese|`gil`|
+ |Galician|`gl`|
+ |Gondi|`gon`|
+ |Manx|`gv`|
+ |Gurung|`gvr`|
+ |Hawaiian|`haw`|
+ |Hindi|`hi`|
+ |Halbi|`hlb`|
+ |Chhattisgarhi|`hne`|
+ |Hani|`hni`|
+ |Ho|`hoc`|
+ |Croatian|`hr`|
+ |Upper Sorbian|`hsb`|
+ |Haitian|`ht`|
+ |Hungarian|`hu`|
+ |Interlingua|`ia`|
+ |Indonesian|`id`|
+ |Icelandic|`is`|
+ |Italian|`it`|
+ |Inuktitut|`iu`|
|Japanese|
- |Jaunsari|jns|
- |Javanese|jv|
- |Kara-Kalpak (Latin)|kaa, kaa-latn|
- |Kara-Kalpak (Cyrillic)|kaa-cyrl|
- |Kachin|kac|
- |Kabuverdianu|kea|
- |Korku|kfq|
- |Khasi|kha|
- |Kazakh (Latin)|kk, kk-latn|
- |Kazakh (Cyrillic)|kk-cyrl|
- |Kalaallisut|kl|
- |Khaling|klr|
- |Malto|kmj|
+ |Jaunsari|`jns`|
+ |Javanese|`jv`|
+ |Kara-Kalpak (Latin)|`kaa`, `kaa-latn`|
+ |Kara-Kalpak (Cyrillic)|`kaa-cyrl`|
+ |Kachin|`kac`|
+ |Kabuverdianu|`kea`|
+ |Korku|`kfq`|
+ |Khasi|`kha`|
+ |Kazakh (Latin)|`kk`, `kk-latn`|
+ |Kazakh (Cyrillic)|`kk-cyrl`|
+ |Kalaallisut|`kl`|
+ |Khaling|`klr`|
+ |Malto|`kmj`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:| |Korean|
- |Kosraean|kos|
- |Koryak|kpy|
- |Karachay-Balkar|krc|
- |Kurukh|kru|
- |Kölsch|ksh|
- |Kurdish (Latin)|ku, ku-latn|
- |Kurdish (Arabic)|ku-arab|
- |Kumyk|kum|
- |Cornish|kw|
- |Kirghiz|ky|
- |Latin|la|
- |Luxembourgish|lb|
- |Lakota|lkt|
- |Lithuanian|lt|
- |Maori|mi|
- |Mongolian|mn|
- |Marathi|mr|
- |Malay|ms|
- |Maltese|mt|
- |Hmong Daw|mww|
- |Erzya|myv|
- |Neapolitan|nap|
- |Nepali|ne|
- |Niuean|niu|
- |Dutch|nl|
- |Norwegian|no|
- |Nogai|nog|
- |Occitan|oc|
- |Ossetian|os|
- |Panjabi|pa|
- |Polish|pl|
- |Dari|prs|
- |Pushto|ps|
- |Portuguese|pt|
- |K'iche'|quc|
- |Camling|rab|
- |Romansh|rm|
- |Romanian|ro|
- |Russian|ru|
- |Sanskrit|sa|
- |Santali|sat|
+ |Kosraean|`kos`|
+ |Koryak|`kpy`|
+ |Karachay-Balkar|`krc`|
+ |Kurukh|`kru`|
+ |Kölsch|`ksh`|
+ |Kurdish (Latin)|`ku`, `ku-latn`|
+ |Kurdish (Arabic)|`ku-arab`|
+ |Kumyk|`kum`|
+ |Cornish|`kw`|
+ |Kirghiz|`ky`|
+ |Latin|`la`|
+ |Luxembourgish|`lb`|
+ |Lakota|`lkt`|
+ |Lithuanian|`lt`|
+ |Maori|`mi`|
+ |Mongolian|`mn`|
+ |Marathi|`mr`|
+ |Malay|`ms`|
+ |Maltese|`mt`|
+ |Hmong Daw|`mww`|
+ |Erzya|`myv`|
+ |Neapolitan|`nap`|
+ |Nepali|`ne`|
+ |Niuean|`niu`|
+ |Dutch|`nl`|
+ |Norwegian|`no`|
+ |Nogai|`nog`|
+ |Occitan|`oc`|
+ |Ossetian|`os`|
+ |Panjabi|`pa`|
+ |Polish|`pl`|
+ |Dari|`prs`|
+ |Pushto|`ps`|
+ |Portuguese|`pt`|
+ |K'iche'|`quc`|
+ |Camling|`rab`|
+ |Romansh|`rm`|
+ |Romanian|`ro`|
+ |Russian|`ru`|
+ |Sanskrit|`sa`|
+ |Santali|`sat`|
:::column-end::: :::column span=""::: |Language| Code (optional) | |:--|:-:|
- |Sadri|sck|
- |Scots|sco|
- |Slovak|sk|
- |Slovenian|sl|
- |Samoan|sm|
- |Southern Sami|sma|
- |Northern Sami|sme|
- |Lule Sami|smj|
- |Inari Sami|smn|
- |Skolt Sami|sms|
- |Somali|so|
- |Albanian|sq|
- |Serbian (Latin)|sr, sr-latn|
- |Sirmauri|srx|
- |Swedish|sv|
- |Swahili|sw|
- |Tetum|tet|
- |Tajik|tg|
- |Thangmi|thf|
- |Turkmen|tk|
- |Tonga|to|
- |Turkish|tr|
- |Tatar|tt|
- |Tuvinian|tyv|
- |Uighur|ug|
- |Urdu|ur|
- |Uzbek (Latin)|uz, uz-latn|
- |Uzbek (Cyrillic)|uz-cyrl|
- |Uzbek (Arabic)|uz-arab|
- |Volapük|vo|
- |Walser|wae|
- |Kangri|xnr|
- |Yucateco|yua|
- |Zhuang|za|
- |Chinese (Han (Simplified variant))|zh, zh-hans|
- |Chinese (Han (Traditional variant))|zh-hant|
- |Zulu|zu|
+ |Sadri|`sck`|
+ |Scots|`sco`|
+ |Slovak|`sk`|
+ |Slovenian|`sl`|
+ |Samoan|`sm`|
+ |Southern Sami|`sma`|
+ |Northern Sami|`sme`|
+ |Lule Sami|`smj`|
+ |Inari Sami|`smn`|
+ |Skolt Sami|`sms`|
+ |Somali|`so`|
+ |Albanian|`sq`|
+ |Serbian (Latin)|`sr`, `sr-latn`|
+ |Sirmauri|`srx`|
+ |Swedish|`sv`|
+ |Swahili|`sw`|
+ |Tetum|`tet`|
+ |Tajik|`tg`|
+ |Thangmi|`thf`|
+ |Turkmen|`tk`|
+ |Tonga|`to`|
+ |Turkish|`tr`|
+ |Tatar|`tt`|
+ |Tuvinian|`tyv`|
+ |Uighur|`ug`|
+ |Urdu|`ur`|
+ |Uzbek (Latin)|`uz`, `uz-latn`|
+ |Uzbek (Cyrillic)|`uz-cyrl`|
+ |Uzbek (Arabic)|`uz-arab`|
+ |Volapük|`vo`|
+ |Walser|`wae`|
+ |Kangri|`xnr`|
+ |Yucateco|`yua`|
+ |Zhuang|`za`|
+ |Chinese (Han (Simplified variant))|`zh`, `zh-hans`|
+ |Chinese (Han (Traditional variant))|`zh-hant`|
+ |Zulu|`zu`|
:::column-end::: :::row-end:::
The following table lists layout model language support for extracting and analy
:::row-end::: :::moniker-end
+### [**Layout: handwritten text**](#tab/layout-hand)
++
+The following table lists layout model language support for extracting and analyzing **handwritten** text. </br>
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`| Russian (preview) | `ru` |
+|Thai (preview) | `th` | Arabic (preview) | `ar` |
++
+##### Model ID: **prebuilt-layout**
+
+The following table lists layout model language support for extracting and analyzing **handwritten** text. </br>
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
++
+ > [!NOTE]
+ > Document Intelligence v2.1 does not support handwritten text extraction.
+++
+The following table lists layout model language support for extracting and analyzing **handwritten** text. </br>
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`| Russian (preview) | `ru` |
+|Thai (preview) | `th` | Arabic (preview) | `ar` |
+ ## General document
The following table lists layout model language support for extracting and analy
:::moniker range="doc-intel-4.0.0" > [!IMPORTANT]
-> Starting with Document Intelligence **v4.0:2023-10-31-preview** and going forward, the general document model (prebuilt-document) is deprecated. To extract key-value pairs, selection marks, text, tables, and structure from documents, use the following models:
+> Starting with Document Intelligence **v4.0:2023-10-31-preview** and going forward, the general document model (prebuilt-document) is being added to layout (prebuilt-layout). To extract key-value pairs, selection marks, text, tables, and structure from documents, use the following models:
-| Feature | version| Model ID |
+| Key-value pairs | Version | Model ID |
|- ||--|
-|Layout model with **`features=keyValuePairs`** specified.|&bullet; v4:2023-10-31-preview</br>&bullet; v3.1:2023-07-31 (GA) |**`prebuilt-layout`**|
-|General document model|&bullet; v3.1:2023-07-31 (GA)</br>&bullet; v3.0:2022-08-31 (GA)</br>&bullet; v2.1 (GA)|**`prebuilt-document`**|
+|**Layout model** with query string **`features=keyValuePairs`** specified.|&bullet; v4:2023-10-31-preview</br>&bullet; v3.1:2023-07-31 (GA) |**`prebuilt-layout`**|
+|General document model|&bullet; v3.1:2023-07-31 (GA)</br>&bullet; v3.0:2022-08-31 (GA)|**`prebuilt-document`**|
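As a rough sketch of that replacement path, the following call requests key-value pairs from the layout model through the Python SDK; it assumes `azure-ai-formrecognizer` 3.3 or later (which exposes the add-on `features` parameter), and the endpoint, key, and file name are placeholders.

```python
# Sketch: request key-value pairs from prebuilt-layout instead of prebuilt-document.
# Placeholder endpoint/key/file; requires an API version that supports add-on features.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("form-sample.pdf", "rb") as f:
    poller = client.begin_analyze_document(
        "prebuilt-layout",
        document=f,
        features=["keyValuePairs"],  # same effect as the features=keyValuePairs query string
    )

result = poller.result()
for pair in result.key_value_pairs or []:
    key = pair.key.content if pair.key else ""
    value = pair.value.content if pair.value else ""
    print(f"{key}: {value}")
```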
:::moniker-end :::moniker range="doc-intel-3.1.0 || doc-intel-3.0.0"
The following table lists layout model language support for extracting and analy
The following table lists general document model language support. </br>
-| Model ID| Language—Locale code | Default |
+| Model ID | Language Locale code | Default |
|--|:-|:|
-|**prebuilt-document**| English (United States)—en-US| English (United States)—en-US|
+|**prebuilt-document**| English (United States) `en-US`| English (United States) `en-US`|
:::moniker-end
ai-services Language Support Prebuilt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-prebuilt.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 01/19/2024 # Language support: prebuilt models
Azure AI Document Intelligence models provide multilingual document processing s
:::moniker range="doc-intel-4.0.0" > [!IMPORTANT]
-> Starting with Document Intelligence **v4.0 (preview)**, and going forward, the business card model (prebuilt-businessCard) is deprecated. To extract data from business card formats, use the following:
+> Starting with Document Intelligence **v4.0 (preview)**, and going forward, the business card model (prebuilt-businessCard) is deprecated. To extract data from business cards, use the following:
| Feature | version| Model ID | |- ||--|
Azure AI Document Intelligence models provide multilingual document processing s
***Model ID: prebuilt-businessCard***
-| Language—Locale code | Default |
+| Language Locale code | Default |
|:-|:|
-| &bullet; English (United States)—en-US</br>&bullet; English (Australia)—en-AU</br>&bullet; English (Canada)—en-CA</br>&bullet; English (United Kingdom)—en-GB</br>&bullet; English (India)—en-IN</br>&bullet; English (Japan)—en-JP</br>&bullet; Japanese (Japan)—ja-JP | Autodetected (en-US or ja-JP)
+| &bullet; English (United States) `en-US`</br>&bullet; English (Australia) `en-AU`</br>&bullet; English (Canada) `en-CA`</br>&bullet; English (United Kingdom) `en-GB`</br>&bullet; English (India) `en-IN`</br>&bullet; English (Japan) `en-JP`</br>&bullet; Japanese (Japan) `ja-JP` | Autodetected (en-US or ja-JP)
:::moniker-end :::moniker range="doc-intel-2.1.0"
-| Language—Locale code | Default |
+| Language Locale code | Default |
|:-|:|
-|&bullet; English (United States)—en-US</br>&bullet; English (Australia)—en-AU</br>&bullet; English (Canada)—en-CA</br>&bullet; English (United Kingdom)—en-GB</br>&bullet; English (India)—en-IN</li> | Autodetected |
+|&bullet; English (United States) `en-US`</br>&bullet; English (Australia) `en-AU`</br>&bullet; English (Canada) `en-CA`</br>&bullet; English (United Kingdom) `en-GB`</br>&bullet; English (India) `en-IN` | Autodetected |
:::moniker-end ## Contract ***Model ID: prebuilt-contract***
-| Language—Locale code | Default |
+| Language Locale code | Default |
|:-|:|
-| English (United States)—en-US| English (United States)—en-US|
+| English (United States) `en-US`| English (United States) `en-US`|
:::moniker-end
Azure AI Document Intelligence models provide multilingual document processing s
***Model ID: prebuilt-healthInsuranceCard.us***
-| Language—Locale code | Default |
+| Language Locale code | Default |
|:-|:|
-| English (United States)|English (United States)—en-US|
+| English (United States)|English (United States) `en-US`|
:::moniker-end
Azure AI Document Intelligence models provide multilingual document processing s
::: moniker-end | Region | Document types | |--|-|
Azure AI Document Intelligence models provide multilingual document processing s
| &bullet; Serbian (sr-Latn) | Serbia (latn-rs)| | &bullet; Albanian (`sq`) | Albania (`al`)| | &bullet; Swedish (`sv`) | Sweden (`se`)|
-| &bullet; Chinese (simplified (zh-hans)) | China (zh-hans-cn)|
-| &bullet; Chinese (traditional (zh-hant)) | Hong Kong SAR (zh-hant-hk), Taiwan (zh-hant-tw)|
+| &bullet; Chinese (simplified (`zh-hans`)) | China (`zh-hans-cn`)|
+| &bullet; Chinese (traditional (`zh-hant`)) | Hong Kong SAR (`zh-hant-hk`), Taiwan (`zh-hant-tw`)|
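When a receipt's locale is known, it can be passed as a hint so dates, currencies, and totals are interpreted against the right conventions; omit it to let the service autodetect. A minimal sketch with the Python SDK (`azure-ai-formrecognizer`) follows; the endpoint, key, and file name are placeholders.

```python
# Sketch: analyze a receipt with an explicit locale hint.
# Placeholder endpoint/key/file values.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("receipt-sample.jpg", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-receipt", document=f, locale="en-GB")

receipt = poller.result().documents[0]
total = receipt.fields.get("Total")
if total:
    print("Total:", total.value)
```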
### [Supported Currency Codes](#tab/currency)
Azure AI Document Intelligence models provide multilingual document processing s
|Latvian|``lv``|Xitsonga|`ts`| |Lingala|``ln``||| - ### [Hotel receipts](#tab/hotel) | Supported Languages|Language code | |:--|:|
Azure AI Document Intelligence models provide multilingual document processing s
::: moniker range="doc-intel-2.1.0"
-| Model | Language—Locale code | Default |
+| Model | Language Locale code | Default |
|--|:-|:|
-|Receipt| &bullet; English (United States)—en-US</br> &bullet; English (Australia)—en-AU</br> &bullet; English (Canada)—en-CA</br> &bullet; English (United Kingdom)—en-GB</br> &bullet; English (India)—en-IN | Autodetected |
+|Receipt| &bullet; English (United States) `en-US`</br> &bullet; English (Australia) `en-AU`</br> &bullet; English (Canada) `en-CA`</br> &bullet; English (United Kingdom) `en-GB`</br> &bullet; English (India) `en-IN`| Autodetected |
::: moniker-end ## Tax documents :::moniker range="doc-intel-4.0.0"
- | Model ID | Language—Locale code | Default |
+ | Model ID | Language Locale code | Default |
|--|:-|:|
- |**prebuilt-tax.us.w2**|English (United States)|English (United States)—en-US|
- |**prebuilt-tax.us.1098**|English (United States)|English (United States)—en-US|
- |**prebuilt-tax.us.1098E**|English (United States)|English (United States)—en-US|
- |**prebuilt-tax.us.1098T**|English (United States)|English (United States)—en-US|
- |**prebuilt-tax.us.1099**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.w2**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-tax.us.1098**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-tax.us.1098E**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-tax.us.1098T**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-tax.us.1099**|English (United States)|English (United States) `en-US`|
:::moniker-end :::moniker range="doc-intel-3.1.0"
- | Model ID | Language—Locale code | Default |
+ | Model ID | Language Locale code | Default |
|--|:-|:|
- |**prebuilt-tax.us.w2**|English (United States)|English (United States)—en-US|
- |**prebuilt-tax.us.1098**|English (United States)|English (United States)—en-US|
- |**prebuilt-tax.us.1098E**|English (United States)|English (United States)—en-US|
- |**prebuilt-tax.us.1098T**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.w2**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-tax.us.1098**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-tax.us.1098E**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-tax.us.1098T**|English (United States)|English (United States) `en-US`|
:::moniker-end :::moniker range="doc-intel-3.0.0"
- | Model ID | Language—Locale code | Default |
+ | Model ID | Language Locale code | Default |
|--|:-|:|
- |**prebuilt-tax.us.w2**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.w2**|English (United States)|English (United States) `en-US`|
:::moniker-end
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
Document analysis models enable text extraction from forms and documents and ret
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-layout.png" link="#layout":::</br>
- [**Layout**](#layout) | Extract text </br>and document structure.
+ [**Layout**](#layout) | Extract text, tables, </br>and document structure.
:::column-end::: :::row-end::: :::moniker-end
You can use Document Intelligence to automate document processing in application
|Model ID| Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-read**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data and field extraction](concept-read.md#read-model-data-extraction)| &#9679; Contract processing. </br>&#9679; Financial or medical report processing.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-javascript) |
+|[**prebuilt-read**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data extraction](concept-read.md#read-model-data-extraction)| &#9679; Digitizing any document. </br>&#9679; Compliance and auditing. &#9679; Processing handwritten notes before translation.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-javascript) |
> [!div class="nextstepaction"] > [Return to model types](#document-analysis-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-layout**](concept-layout.md) |&#9679; Extract **text and layout** information from documents.</br>&#9679; [Data and field extraction](concept-layout.md#data-extraction)</br>&#9679; Layout API is updated to a prebuilt model. |&#9679; Document indexing and retrieval by structure.</br>&#9679; Preprocessing prior to OCR analysis. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model)|
+|[**prebuilt-layout**](concept-layout.md) |&#9679; Extract **text and layout** information from documents.</br>&#9679; [Data extraction](concept-layout.md#data-extraction) |&#9679; Document indexing and retrieval by structure.</br>&#9679; Financial and medical report analysis. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)|
> [!div class="nextstepaction"] > [Return to model types](#document-analysis-models)
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/service-limits.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 01/19/2024 monikerRange: '<=doc-intel-4.0.0'
This article contains both a quick reference and detailed description of Azure A
## Model usage
+|Document types supported|Read|Layout|Prebuilt models|Custom models|
+|--|--|--|--|--|
+| PDF | ✔️ | ✔️ | ✔️ | ✔️ |
+| Images (JPEG/JPG), PNG, BMP, TIFF, HEIF | ✔️ | ✔️ | ✔️ | ✔️ |
+| Office file types DOCX, PPTX, XLS | ✔️ | ✖️ | ✖️ | ✖️ |
+ :::moniker range="doc-intel-4.0.0" |Document types supported|Read|Layout|Prebuilt models|Custom models| |--|--|--|--|--| | PDF | ✔️ | ✔️ | ✔️ | ✔️ | | Images (JPEG/JPG), PNG, BMP, TIFF, HEIF | ✔️ | ✔️ | ✔️ | ✔️ |
-| Office file types DOCX, PPT, XLS | ✔️ | ✔️ | ✖️ | ✖️ |
+| Office file types DOCX, PPTX, XLS | ✔️ | ✔️ | ✖️ | ✖️ |
:::moniker-end ::: moniker range=">=doc-intel-3.0.0"
ai-services Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/studio-overview.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 01/19/2024 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
Document Intelligence Studio is an online tool to visually explore, understand, train, and integrate features from the Document Intelligence service into your applications. The studio provides a platform for you to experiment with the different Document Intelligence models and sample returned data in an interactive manner without the need to write code.
-The studio supports Document Intelligence v3.0 models and v3.0 model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+[Document Intelligence Studio](https://documentintelligence.ai.azure.com/studio/) is an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service into your applications. Use the Document Intelligence Studio to:
+* Learn more about the different capabilities in Document Intelligence.
+* Use your Document Intelligence resource to test models on sample documents or upload your own documents.
+* Experiment with different add-on and preview features to adapt the output to your needs.
+* Train custom classification models to classify documents.
+* Train custom extraction models to extract fields from documents.
+* Get sample code for the language-specific SDKs to integrate into your applications.
+
+The studio supports Document Intelligence v3.0 and later API versions for model analysis and custom model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
## Get started using Document Intelligence Studio
The studio supports Document Intelligence v3.0 models and v3.0 model training. P
1. Navigate to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/). If it's your first time logging in, a popup window appears prompting you to configure your service resource. You have two options:
- **a. Access by Resource**.
+ **a. Access by Resource (recommended)**.
* Choose your existing subscription. * Select an existing resource group within your subscription or create a new one.
The studio supports Document Intelligence v3.0 models and v3.0 model training. P
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal.":::
-1. Once you've completed configuring your resource, you're able to try the different models offered by Document Intelligence Studio. From the front page, select any Document Intelligence model to try using with a no-code approach.
+1. Once the resource is configured, you're able to try the different models offered by Document Intelligence Studio. From the front page, select any Document Intelligence model to try using with a no-code approach.
:::image type="content" source="media/studio/welcome-to-studio.png" alt-text="Screenshot of Document Intelligence Studio front page.":::
-1. After you've tried Document Intelligence Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to get started incorporating Document Intelligence models into your own applications.
+1. To test any of the document analysis or prebuilt models, select the model and use one of the sample documents or upload your own document to analyze. The analysis result is displayed on the right in the content-result-code window.
+
+1. Custom models need to be trained on your documents. See [custom models overview](concept-custom.md) for an overview of custom models.
+
+1. After validating the scenario in the Document Intelligence Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to get started incorporating Document Intelligence models into your own applications.
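As one possible starting point for that integration, the following Python sketch builds a client from environment variables and analyzes a document by URL; the variable names and sample URL are placeholders, not fixed conventions.

```python
# Minimal integration sketch: analyze a publicly reachable document by URL.
# DI_ENDPOINT and DI_KEY are placeholder environment variable names.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint=os.environ["DI_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["DI_KEY"]),
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-read",
    "https://example.com/sample-document.pdf",  # placeholder URL
)
result = poller.result()
print(result.content)
```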
To learn more about each model, *see* concept pages.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
Previously updated : 11/21/2023 Last updated : 01/19/2024 - references_regions
Document Intelligence service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
+> [!IMPORTANT]
+> Preview API versions are retired once the GA API is released. The 2023-02-28-preview API version is being retired. If you're still using the preview API or the associated SDK versions, update your code to target the 2023-07-31 (GA) API version.
+ ## December 2023 The [Document Intelligence SDKs](sdk-overview-v4-0.md) targeting REST API **2023-10-31-preview** are now available for use!
ai-services Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/cost-management.md
Previously updated : 09/06/2022 Last updated : 01/19/2024
ai-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/data-feeds-from-different-sources.md
Previously updated : 05/26/2021 Last updated : 01/19/2024
ai-services Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/encryption.md
Previously updated : 07/02/2021 Last updated : 01/19/2024 #Customer intent: As a user of the Metrics Advisor service, I want to learn how encryption at rest works.
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/glossary.md
Previously updated : 09/14/2020 Last updated : 01/19/2024
ai-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/alerts.md
Previously updated : 09/14/2020 Last updated : 01/19/2024
ai-services Anomaly Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/anomaly-feedback.md
Previously updated : 11/24/2020 Last updated : 01/19/2024
ai-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/configure-metrics.md
Previously updated : 05/12/2022 Last updated : 01/19/2024
ai-services Credential Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/credential-entity.md
Previously updated : 06/22/2021 Last updated : 01/19/2024
ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
Previously updated : 04/15/2021 Last updated : 01/19/2024
ai-services Further Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/further-analysis.md
Previously updated : 04/15/2021 Last updated : 01/19/2024
ai-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/manage-data-feeds.md
Previously updated : 10/25/2022 Last updated : 01/19/2024
ai-services Metrics Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/metrics-graph.md
Previously updated : 09/08/2020 Last updated : 01/19/2024
ai-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/onboard-your-data.md
Previously updated : 04/20/2021 Last updated : 01/19/2024
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/overview.md
Previously updated : 07/06/2021 Last updated : 01/19/2024
ai-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
Previously updated : 11/07/2022 Last updated : 01/19/2024 zone_pivot_groups: programming-languages-metrics-monitor
ai-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/quickstarts/web-portal.md
description: Learn how to start using the Metrics Advisor web portal.
Previously updated : 11/07/2022 Last updated : 01/19/2024
ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
Previously updated : 05/20/2021 Last updated : 01/19/2024 # Tutorial: Enable anomaly notification in Metrics Advisor
ai-services Write A Valid Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/write-a-valid-query.md
Previously updated : 05/20/2021 Last updated : 01/19/2024 <!--
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/whats-new.md
Previously updated : 12/16/2022 Last updated : 01/19/2024
ai-services Abuse Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/abuse-monitoring.md
Last updated 06/16/2023
-keywords:
# Abuse Monitoring
ai-services Content Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-credentials.md
Last updated 11/08/2023-
-keywords:
# Content Credentials
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
Last updated 11/06/2023
-keywords:
# Content filtering
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-with-vision.md
Last updated 01/02/2024
-keywords:
# GPT-4 Turbo with Vision concepts
ai-services Legacy Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/legacy-models.md
recommendations: false
-keywords:
# Azure OpenAI Service legacy models
ai-services Model Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-versions.md
description: Learn about model versions in Azure OpenAI.
Last updated 10/30/2023- recommendations: false
-keywords:
# Azure OpenAI Service model versions
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
recommendations: false
-keywords:
# Azure OpenAI Service models
ai-services Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/prompt-engineering.md
recommendations: false
-keywords:
# Introduction to prompt engineering
ai-services Provisioned Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md
Title: Azure OpenAI Service provisioned throughput
description: Learn about provisioned throughput and Azure OpenAI. Previously updated : 1/16/2024- Last updated : 1/16/2024 recommendations: false
-keywords:
# What is provisioned throughput?
ai-services Red Teaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/red-teaming.md
description: Learn about how red teaming and adversarial testing are an essentia
Last updated 11/03/2023- recommendations: false
-keywords:
# Planning red teaming for large language models (LLMs) and their applications
ai-services System Message https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/system-message.md
recommendations: false
-keywords:
# System message framework and template recommendations for Large Language Models (LLMs)
ai-services Business Continuity Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/business-continuity-disaster-recovery.md
Last updated 8/17/2023
recommendations: false
-keywords:
ai-services Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/completions.md
Last updated 08/15/2023
recommendations: false
-keywords:
ai-services Content Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/content-filters.md
Last updated 6/5/2023
recommendations: false
-keywords:
# How to configure content filters with Azure OpenAI Service
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
Last updated 01/16/2024
recommendations: false
-keywords:
# Learn how to generate embeddings with Azure OpenAI
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
Last updated 10/12/2023
zone_pivot_groups: openai-fine-tuning
-keywords:
# Customize a model with fine-tuning
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
description: Learn about the options for using GPT-4 Turbo with Vision
- Last updated 11/06/2023
-keywords:
# Use GPT-4 Turbo with Vision
ai-services Json Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/json-mode.md
Last updated 12/05/2023
recommendations: false
-keywords:
ai-services Provisioned Throughput Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md
description: Learn about provisioned throughput units onboarding and Azure OpenA
Last updated 01/15/2024- recommendations: false
-keywords:
# Provisioned throughput units onboarding
ai-services Reproducible Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/reproducible-output.md
Last updated 11/17/2023
recommendations: false
-keywords:
ai-services Use Blocklists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-blocklists.md
Last updated 11/07/2023
-keywords:
# Use a blocklist in Azure OpenAI
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
Previously updated : 11/29/2023 Last updated : 01/19/2024 recommendations: false
To allow access to your Azure OpenAI service from your client machines, like usi
## Configure Azure AI Search
+You can use the Basic pricing tier or higher for the following configuration. You don't have to use the S2 pricing tier because the configuration doesn't require [private endpoint support for indexers with a skill set](/azure/search/search-limits-quotas-capacity#shared-private-link-resource-limits). See [step 8](#data-ingestion-architecture) of the data ingestion architecture diagram. The networking mode for the custom skill is *bypass trusted service*, not *private endpoint*.
+ ### Enable managed identity To allow your other resources to recognize the Azure AI Search using Microsoft Entra ID authentication, you need to assign a managed identity for your Azure AI Search. The easiest way is to toggle on the system assigned managed identity in the Azure portal.
ai-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/work-with-code.md
Last updated 06/24/2022
-keywords:
# Codex models and Azure OpenAI Service
ai-services Working With Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/working-with-models.md
recommendations: false
-keywords:
# Working with Azure OpenAI models
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
Last updated 10/16/2023 recommendations: false
-keywords:
# What is Azure OpenAI Service?
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Last updated 11/30/2023 recommendations: false
-keywords:
# What's new in Azure OpenAI Service
ai-services Concept Active Inactive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-active-inactive-events.md
ms. Previously updated : 02/20/2020 Last updated : 01/19/2024 # Defer event activation
ai-services Concept Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-active-learning.md
ms. Previously updated : 02/20/2020 Last updated : 01/19/2024 # Learning policy and settings
ai-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-apprentice-mode.md
ms. Previously updated : 07/26/2022 Last updated : 01/19/2024 # Use Apprentice mode to train Personalizer without affecting your existing application
ai-services Concept Auto Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-auto-optimization.md
ms. Previously updated : 03/08/2021 Last updated : 01/19/2024 # Personalizer Auto-Optimize (Preview)
ai-services Concept Multi Slot Personalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-multi-slot-personalization.md
Previously updated : 05/24/2021 Last updated : 01/19/2024
ai-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-rewards.md
ms. Previously updated : 02/20/2020 Last updated : 01/19/2024
ai-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-exploration.md
ms. Previously updated : 08/28/2022 Last updated : 01/19/2024 # Exploration
ai-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-features.md
ms. Previously updated : 12/28/2022 Last updated : 01/19/2024 # Context and actions
ai-services Concepts Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-offline-evaluation.md
ms. Previously updated : 02/20/2020 Last updated : 01/19/2024 # Offline evaluation
ai-services Concepts Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-reinforcement-learning.md
ms. Previously updated : 05/07/2019 Last updated : 01/19/2024 # What is Reinforcement Learning?
ai-services Concepts Scalability Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-scalability-performance.md
ms. Previously updated : 10/24/2019 Last updated : 01/19/2024 # Scalability and Performance
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/encrypt-data-at-rest.md
Previously updated : 06/02/2022 Last updated : 01/19/2024 #Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
ai-services How Personalizer Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-personalizer-works.md
ms. Previously updated : 02/18/2020 Last updated : 01/19/2024 # How Personalizer works
ai-services How To Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-create-resource.md
ms. Previously updated : 03/26/2020 Last updated : 01/19/2024
ai-services How To Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-feature-evaluation.md
ms. Previously updated : 09/22/2022 Last updated : 01/19/2024 # Evaluate feature importances
ai-services How To Inference Explainability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-inference-explainability.md
ms. Previously updated : 09/20/2022 Last updated : 01/19/2024 # Inference Explainability
ai-services How To Learning Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-learning-behavior.md
ms. Previously updated : 07/26/2022 Last updated : 01/19/2024 # Configure the Personalizer learning behavior
ai-services How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-manage-model.md
ms. Previously updated : 02/20/2020 Last updated : 01/19/2024 # How to manage model and learning settings
ai-services How To Multi Slot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-multi-slot.md
Previously updated : 05/24/2021 Last updated : 01/19/2024 zone_pivot_groups: programming-languages-set-six ms.devlang: csharp # ms.devlang: csharp, javascript, python
ai-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-offline-evaluation.md
ms. Previously updated : 02/20/2020 Last updated : 01/19/2024 # Analyze your learning loop with an offline evaluation
ai-services How To Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-settings.md
ms. Previously updated : 04/29/2020 Last updated : 01/19/2024 # Configure Personalizer learning loop
ai-services How To Thick Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-thick-client.md
Previously updated : 09/06/2022 Last updated : 01/19/2024 # Get started with the local inference SDK for Azure AI Personalizer
ai-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/quickstart-personalizer-sdk.md
ms. Previously updated : 02/02/2023 Last updated : 01/19/2024 ms.devlang: csharp # ms.devlang: csharp, javascript, python
ai-services Responsible Characteristics And Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-characteristics-and-limitations.md
Previously updated : 05/23/2022 Last updated : 01/19/2024
ai-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-data-and-privacy.md
Previously updated : 05/23/2022 Last updated : 01/19/2024
ai-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-guidance-integration.md
Previously updated : 05/23/2022 Last updated : 01/19/2024
ai-services Responsible Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-use-cases.md
Previously updated : 05/23/2022 Last updated : 01/19/2024
ai-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/terminology.md
ms. Previously updated : 09/16/2022 Last updated : 01/19/2024 # Personalizer terminology
ai-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
ms. Previously updated : 04/27/2020 Last updated : 01/19/2024 #Customer intent: As a Python developer, I want to use Personalizer in an Azure Notebook so that I can understand the end-to-end lifecycle of a Personalizer loop.
ai-services Tutorial Use Personalizer Chat Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-personalizer-chat-bot.md
ms. Previously updated : 05/17/2021 Last updated : 01/19/2024 ms.devlang: csharp
ai-services Tutorial Use Personalizer Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-personalizer-web-app.md
ms. Previously updated : 06/10/2020 Last updated : 01/19/2024 ms.devlang: csharp
ai-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/what-is-personalizer.md
ms. Previously updated : 11/17/2022 Last updated : 01/19/2024 keywords: personalizer, Azure AI Personalizer, machine learning
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/whats-new.md
ms. Previously updated : 05/28/2021 Last updated : 01/19/2024 # What's new in Personalizer
ai-services Where Can You Use Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/where-can-you-use-personalizer.md
ms. Previously updated : 02/18/2020 Last updated : 01/19/2024 # Where and how to use Personalizer
ai-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/azure-resources.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/best-practices.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Confidence Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/confidence-score.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Data Sources And Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/data-sources-and-content.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Importing from data sources
ai-services Development Lifecycle Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/development-lifecycle-knowledge-base.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Knowledge base lifecycle in QnA Maker
ai-services Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/plan.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Query Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/query-knowledge-base.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Question Answer Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/question-answer-set.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Question and answer pair concepts
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/role-based-access-control.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Collaborate with other authors and editors
ai-services Add Sharepoint Datasources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/add-sharepoint-datasources.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Add a secured SharePoint data source to your knowledge base
ai-services Change Default Answer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/change-default-answer.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Chit Chat Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/chit-chat-knowledge-base.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Configure Qna Maker Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/configure-qna-maker-resources.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Edit Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/edit-knowledge-base.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Get Analytics Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/get-analytics-knowledge-base.md
displayName: chat history, history, chat logs, logs
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Improve Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/improve-knowledge-base.md
description: Improve the quality of your knowledge base with active learning. Re
Previously updated : 12/19/2023 Last updated : 01/19/2024 ms.devlang: csharp
ai-services Manage Knowledge Bases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/manage-knowledge-bases.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Manage Qna Maker App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/manage-qna-maker-app.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Metadata Generateanswer Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/metadata-generateanswer-usage.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Migrate To Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/migrate-to-openai.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Migrate QnA Maker to Azure OpenAI on your data
ai-services Multi Turn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/multi-turn.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Use follow-up prompts to create multiple turns of a conversation
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Query Knowledge Base With Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/query-knowledge-base-with-metadata.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Set Up Qnamaker Service Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/set-up-qnamaker-service-azure.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Manage QnA Maker resources
ai-services Test Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/test-knowledge-base.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Test your knowledge base in QnA Maker
ai-services Use Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/use-active-learning.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Using Prebuilt Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/using-prebuilt-api.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Prebuilt question answering
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Overview/language-support.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Language support for a QnA Maker resource and knowledge bases
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Overview/overview.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 keywords: "qna maker, low code chat bots, multi-turn conversations"
ai-services Add Question Metadata Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/add-question-metadata-portal.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Create Publish Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/create-publish-knowledge-base.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Get Answer From Knowledge Base Using Url Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/get-answer-from-knowledge-base-using-url-tool.md
zone_pivot_groups: URL-test-interface Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Quickstart Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/quickstart-sdk.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, python
ai-services Create Faq Bot With Azure Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Tutorials/create-faq-bot-with-azure-bot-service.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Export Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Tutorials/export-knowledge-base.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Move a knowledge base using export-import
ai-services Integrate With Power Virtual Assistant Fallback Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Tutorials/integrate-with-power-virtual-assistant-fallback-topic.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Tutorial: Add your knowledge base to Power Virtual Agents
ai-services Choose Natural Language Processing Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/choose-natural-language-processing-service.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Use Azure AI services with natural language processing (NLP) to enrich bot conversations
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/encrypt-data-at-rest.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 #Customer intent: As a user of the QnA Maker service, I want to learn how encryption at rest works.
ai-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/limits.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Reference App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-app-service.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Reference Document Format Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-document-format-guidelines.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Reference Markdown Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-markdown-format.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Markdown format supported in QnA Maker answer text
ai-services Reference Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-private-endpoint.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Private Endpoints
ai-services Reference Tsv Format Batch Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-tsv-format-batch-testing.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Batch testing TSV format
ai-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/troubleshooting.md
Previously updated : 12/19/2023 Last updated : 01/19/2024 # Troubleshooting for QnA Maker
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/whats-new.md
Previously updated : 12/19/2023 Last updated : 01/19/2024
ai-services Custom Keyword Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-keyword-basics.md
Previously updated : 11/12/2021 Last updated : 1/18/2024 ms.devlang: csharp
-# ms.devlang: csharp, objective-c, python
zone_pivot_groups: programming-languages-speech-services
ai-services Custom Neural Voice Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-neural-voice-lite.md
Previously updated : 10/27/2022 Last updated : 1/18/2024
To create a custom neural voice lite project, follow these steps:
> To create a custom neural voice Pro project, see [Create a project for custom neural voice](professional-voice-create-project.md). 1. Follow the instructions provided by the wizard to create your project.
-1. Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Record and build**, **Review model**, and **Deploy model**.
+1. Select the new project by name or select **Go to project**. You see these menu items in the left panel: **Record and build**, **Review model**, and **Deploy model**.
:::image type="content" source="media/custom-voice/lite/lite-project-get-started.png" alt-text="Screenshot with an overview of the CNV lite record, train, test, and deploy workflow."::: The CNV lite project expires after 90 days unless the [verbal statement](#submit-verbal-statement) recorded by the voice talent is submitted. ## Record and build a CNV lite model
-Record at least 20 voice samples (up to 50) with provided scripts online. Voice samples recorded here will be used to create a synthetic version of your voice.
+Record at least 20 voice samples (up to 50) with provided scripts online. Voice samples recorded here are used to create a synthetic version of your voice.
Here are some tips to help you record your voice samples: - Use a good microphone. Increase the clarity of your samples by using a high-quality microphone. Speak about 8 inches away from the microphone to avoid mouth noises.
To record and build a CNV lite model, follow these steps:
1. Select **Get started**. 1. Read the Voice talent terms of use carefully. Select the checkbox to acknowledge the terms of use. 1. Select **Accept**
-1. Press the microphone icon to start the noise check. This noise check will take only a few seconds, and you won't need to speak during it.
+1. Press the microphone icon to start the noise check. This noise check takes only a few seconds, and you don't need to speak during it.
1. If noise was detected, you can select **Check again** to repeat the noise check. If no noise was detected, you can select **Done** to proceed to the next step. :::image type="content" source="media/custom-voice/lite/cnv-record-noise-check.png" alt-text="Screenshot of the noise check results when noise was detected."::: 1. Review the recording tips and select **Got it**. For the best results, go to a quiet area without background noise before recording your voice samples.
To submit the voice talent verbal statement, follow these steps:
## Deploy model
-To deploy your voice model and use it in your applications, you must get the full access to custom neural voice. Request access on the [intake form](https://aka.ms/customneural). Within approximately 10 business days, you'll receive an email with the approval status. A [verbal statement](#submit-verbal-statement) recorded by the voice talent is also required before you can deploy the model for your business use.
+To deploy your voice model and use it in your applications, you must get full access to custom neural voice. Request access on the [intake form](https://aka.ms/customneural). Within approximately 10 business days, you receive an email with the approval status. A [verbal statement](#submit-verbal-statement) recorded by the voice talent is also required before you can deploy the model for your business use.
To deploy a CNV lite model, follow these steps:
ai-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-neural-voice.md
Previously updated : 03/27/2023 Last updated : 1/18/2024
Custom neural voice (CNV) is a text to speech feature that lets you create a one
> > Access to [Custom neural voice (CNV) Lite](custom-neural-voice-lite.md) is available for anyone to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
-Out of the box, [text to speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work very well in most text to speech scenarios if a unique voice isn't required.
+Out of the box, [text to speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work well in most text to speech scenarios if a unique voice isn't required.
Custom neural voice is based on the neural text to speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of custom neural voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=tts) for custom neural voice.
Before you get started in Speech Studio, here are some considerations:
Here's an overview of the steps to create a custom neural voice in Speech Studio:
-1. [Create a project](professional-voice-create-project.md) to contain your data, voice models, tests, and endpoints. Each project is specific to a country/region and language. If you are going to create multiple voices, it's recommended that you create a project for each voice.
+1. [Create a project](professional-voice-create-project.md) to contain your data, voice models, tests, and endpoints. Each project is specific to a country/region and language. If you're going to create multiple voices, it's recommended that you create a project for each voice.
1. [Set up voice talent](professional-voice-create-project.md). Before you can train a neural voice, you must submit a recording of the voice talent's consent statement. The voice talent statement is a recording of the voice talent reading a statement that they consent to the usage of their speech data to train a custom voice model. 1. [Prepare training data](professional-voice-create-training-set.md) in the right [format](how-to-custom-voice-training-data.md). It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required. 1. [Train your voice model](professional-voice-train-voice.md). Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again. 1. [Test your voice](professional-voice-train-voice.md#test-your-voice-model). Prepare test scripts for your voice model that cover the different use cases for your apps. It's a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content. 1. [Deploy and use your voice model](professional-voice-deploy-endpoint.md) in your apps.
-You can tune, adjust, and use your custom voice, similarly as you would use a prebuilt neural voice. Convert text into speech in real-time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [Speech Studio](https://speech.microsoft.com/audiocontentcreation).
+You can tune, adjust, and use your custom voice similar to how you would use a prebuilt neural voice. Convert text into speech in real time, or generate audio content offline with text input. You can use the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [Speech Studio](https://speech.microsoft.com/audiocontentcreation).
The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the text to speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
If you're using the old version of custom voice (which is scheduled to be retire
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+An AI system includes not only the technology, but also the people who use it, the people who are affected by it, and the environment in which it's deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
* [Transparency note and use cases for custom neural voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/ai-services/speech-service/context/context) * [Characteristics and limitations for using custom neural voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
ai-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-speech-overview.md
Previously updated : 09/15/2023 Last updated : 1/18/2024
With Custom Speech, you can upload your own data, test and train a custom model,
Here's more information about the sequence of steps shown in the previous diagram:
-1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you'll train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.
+1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.
1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the speech to text offering for your applications, tools, and products. 1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data. 1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if more training is required.
ai-services Customize Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/customize-pronunciation.md
Previously updated : 05/08/2022 Last updated : 1/18/2024
You can specify the phonetic pronunciation of words using the Universal Phone Se
UPS pronunciations consist of a string of UPS phonemes, each separated by whitespace. UPS phoneme labels are all defined using ASCII character strings.
-For steps on implementing UPS, see [Structured text phonetic pronunciation](how-to-custom-speech-test-and-train.md#structured-text-data-for-training). Structured text phonetic pronunciation data is separate from [pronunciation data](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training), and they cannot be used together. The first one is "sounds-like" or spoken-form data, and is input as a separate file, and trains the model what the spoken form sounds like
+For steps on implementing UPS, see [Structured text phonetic pronunciation](how-to-custom-speech-test-and-train.md#structured-text-data-for-training). Structured text phonetic pronunciation data is separate from [pronunciation data](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training), and they can't be used together. The first is "sounds-like" or spoken-form data; it's input as a separate file and trains the model on what the spoken form sounds like.
[Structured text phonetic pronunciation data](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) is specified per syllable in a markdown file. Separately, [pronunciation data](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training) is input on its own, and trains the model on what the spoken form sounds like. You can either use a pronunciation data file on its own, or you can add pronunciation within a structured text data file. The Speech service doesn't support training a model with both of those datasets as input.
ai-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/devices-sdk-release-notes.md
Previously updated : 02/12/2022 Last updated : 1/18/2024
The following sections list changes in the most recent releases.
- Linux ARM is now supported. - Initial binaries for [Roobo v2 DDK](https://aka.ms/sdsdk-download-roobov2) are provided (Linux ARM64). - Windows users can use `AudioConfig.fromDefaultMicrophoneInput()` or `AudioConfig.fromMicrophoneInput(deviceName)` to specify the microphone to be used.-- The library size has been optimized.
+- The library size is optimized.
- Support for multi-turn recognition using the same speech/intent recognizer object. - Fix occasional issue where the process would stop responding while stopping recognition. - Sample apps now contain a sample participants.properties file to demonstrate the format of the file.
The following sections list changes in the most recent releases.
**Breaking changes** -- Due to the new keyword technology all keywords must be re-created at our improved keyword portal. To fully remove old keywords from the device uninstall the old app.
+- Due to the new keyword technology, all keywords must be re-created at our improved keyword portal. To completely remove old keywords from the device, uninstall the old app.
- adb uninstall com.microsoft.cognitiveservices.speech.samples.sdsdkstarterapp ## Speech Devices SDK 1.4.0: 2019-Apr release
The following sections list changes in the most recent releases.
**Breaking changes** -- [Installing a keyword](./custom-keyword-basics.md) has been simplified, it is now part of the app and does not need separate installation on the device.-- The keyword recognition has changed, and two events are supported.
+- [Installing a keyword](./custom-keyword-basics.md) has been simplified, it's now part of the app and doesn't need separate installation on the device.
+- The keyword recognition is changed, and two events are supported.
- `RecognizingKeyword,` indicates the speech result contains (unverified) keyword text. - `RecognizedKeyword`, indicates that keyword recognition completed recognizing the given keyword.
The following sections list changes in the most recent releases.
## Speech Devices SDK 1.0.1: 2018-Oct release - Updated the [Speech SDK](./speech-sdk.md) component to version 1.0.1. For more information, see its [release notes](./releasenotes.md).-- Speech recognition accuracy will be improved with our improved audio processing algorithm
+- Speech recognition accuracy is improved with our enhanced audio processing algorithm.
- One continuous recognition audio session bug is fixed. **Breaking changes** -- With this release a number of breaking changes are introduced. Please check [this page](https://aka.ms/csspeech/breakingchanges_1_0_0) for details relating to the APIs.-- The keyword recognition model files are not compatible with Speech Devices SDK 1.0.1. The existing keyword files will be deleted after the new keyword files are written to the device.
+- With this release many breaking changes are introduced. Check [this page](https://aka.ms/csspeech/breakingchanges_1_0_0) for details relating to the APIs.
+- The keyword recognition model files aren't compatible with Speech Devices SDK 1.0.1. The existing keyword files will be deleted after the new keyword files are written to the device.
## Speech Devices SDK 0.5.0: 2018-Aug release
ai-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/direct-line-speech.md
Previously updated : 03/11/2020 Last updated : 1/18/2024 # What is Direct Line Speech?
-Direct Line Speech is a robust, end-to-end solution for creating a flexible, extensible voice assistant. It is powered by the Bot Framework and its Direct Line Speech channel, that is optimized for voice-in, voice-out interaction with bots.
+Direct Line Speech is a robust, end-to-end solution for creating a flexible, extensible voice assistant. It's powered by the Bot Framework and its Direct Line Speech channel, which is optimized for voice-in, voice-out interaction with bots.
[Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md).
-Direct Line Speech offers the highest levels of customization and sophistication for voice assistants. It's designed for conversational scenarios that are open-ended, natural, or hybrids of the two with task completion or command-and-control use. This high degree of flexibility comes with a greater complexity, and scenarios that are scoped to well-defined tasks using natural language input may want to consider [Custom Commands](custom-commands.md) for a streamlined solution experience.
+Direct Line Speech offers the highest levels of customization and sophistication for voice assistants. It's designed for conversational scenarios that are open-ended, natural, or hybrids of the two with task completion or command-and-control use. This high degree of flexibility comes with a greater complexity. Consider [Custom Commands](custom-commands.md) for scenarios that are scoped to well-defined tasks using natural language input.
Direct Line Speech supports these locales: `ar-eg`, `ar-sa`, `ca-es`, `da-dk`, `de-de`, `en-au`, `en-ca`, `en-gb`, `en-in`, `en-nz`, `en-us`, `es-es`, `es-mx`, `fi-fi`, `fr-ca`, `fr-fr`, `gu-in`, `hi-in`, `hu-hu`, `it-it`, `ja-jp`, `ko-kr`, `mr-in`, `nb-no`, `nl-nl`, `pl-pl`, `pt-br`, `pt-pt`, `ru-ru`, `sv-se`, `ta-in`, `te-in`, `th-th`, `tr-tr`, `zh-cn`, `zh-hk`, and `zh-tw`.
Voice assistants built using Speech service can use the full range of customizat
> [!NOTE] > Customization options vary by language/locale (see [Supported languages](./language-support.md?tabs=stt)).
-Direct Line Speech and its associated functionality for voice assistants are an ideal supplement to the [Virtual Assistant Solution and Enterprise Template](/azure/bot-service/bot-builder-enterprise-template-overview). Though Direct Line Speech can work with any compatible bot, these resources provide a reusable baseline for high-quality conversational experiences as well as common supporting skills and models to get started quickly.
+Direct Line Speech and its associated functionality for voice assistants are an ideal supplement to the [Virtual Assistant Solution and Enterprise Template](/azure/bot-service/bot-builder-enterprise-template-overview). Though Direct Line Speech can work with any compatible bot, these resources provide a reusable baseline for high-quality conversational experiences and common supporting skills and models to get started quickly.
## Reference docs
ai-services Display Text Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/display-text-format.md
Previously updated : 09/19/2022 Last updated : 1/18/2024 zone_pivot_groups: programming-languages-speech-sdk-cli # Display text formatting with speech to text
-Speech to text offers an array of formatting features to ensure that the transcribed text is clear and legible. Below is an overview of these features and how each one is used to improve the overall clarity of the final text output.
+Speech to text offers an array of formatting features to ensure that the transcribed text is clear and legible. See the sections below for an overview of how each feature is used to improve the overall clarity of the final text output.
## ITN
-Inverse Text Normalization (ITN) is a process that converts spoken words into their written form. For example, the spoken word "four" is converted to the written form "4". This process is performed by the speech to text service and isn't configurable. Some of the supported text formats include dates, times, decimals, currencies, addresses, emails, and phone numbers. You can speak naturally, and the service formats text as expected. The following table shows the ITN rules that are applied to the text output.
+Inverse Text Normalization (ITN) is a process that converts spoken words into their written form. For example, the spoken word "four" is converted to the written form "4". The speech to text service completes this process and it's not configurable. Some of the supported text formats include dates, times, decimals, currencies, addresses, emails, and phone numbers. You can speak naturally, and the service formats text as expected. The following table shows the ITN rules that are applied to the text output.
|Recognized speech|Display text| |||
Inverse Text Normalization (ITN) is a process that converts spoken words into th
## Capitalization
-Speech to text models recognize words that should be capitalized to improve readability, accuracy, and grammar. For example, the Speech service will automatically capitalize proper nouns and words at the beginning of a sentence. Some examples are shown in this table.
+Speech to text models recognize words that should be capitalized to improve readability, accuracy, and grammar. For example, the Speech service automatically capitalizes proper nouns and words at the beginning of a sentence. Some examples are shown in this table.
|Recognized speech|Display text| |||
When you're using speech to text with continuous recognition, you can configure
|`the options are apple forward slash banana forward slash orange period`|`The options are apple/banana/orange.`| |`are you sure question mark`|`Are you sure?`|
-Use the Speech SDK to enable dictation mode when you're using speech to text with continuous recognition. This mode will cause the speech configuration instance to interpret word descriptions of sentence structures such as punctuation.
+Use the Speech SDK to enable dictation mode when you're using speech to text with continuous recognition. This mode causes the speech configuration instance to interpret word descriptions of sentence structures such as punctuation.
::: zone pivot="programming-language-csharp" ```csharp
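// Not from the article itself: a minimal sketch of enabling dictation mode with the
// Speech SDK so that spoken punctuation such as "question mark" is interpreted as
// sentence structure. The key, region, and recognizer wiring are placeholders.
var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourServiceRegion");
speechConfig.EnableDictation(); // interpret word descriptions of punctuation
using var recognizer = new SpeechRecognizer(speechConfig);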
ai-services Embedded Speech Performance Evaluations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech-performance-evaluations.md
Previously updated : 11/28/2023 Last updated : 1/18/2024
Embedded speech models run fully on your target devices. Understanding the perfo
**Real-time factor (RTF)** – The real-time factor (RTF) of a device measures how fast the embedded speech model can process audio input. It's the ratio of the processing time to the audio length. For example, if a device processes a 1-minute audio file in 30 seconds, the RTF is 0.5. This metric evaluates the computational power of the device for running embedded speech models. It can help identify devices that are too slow to support the models. Measurement of this metric should only be done using file-based input rather than real-time microphone input.
-To support real-time & interactive speech experiences, the device should have an RTF of `1` or lower. An RTF value higher than `1` means that the device can't keep up with the audio input and will cause poor user experiences.
+To support real-time & interactive speech experiences, the device should have an RTF of `1` or lower. An RTF value higher than `1` means that the device can't keep up with the audio input, which can result in a poor user experience.
When measuring the RTF of a device, it's important to measure multiple samples and analyze the distribution across percentiles. This allows you to capture the effect of variations in the device's behavior like different CPU clock speeds due to thermal throttling. The predefined measurement tests outlined in [Measuring the real-time factor on your device](#measuring-the-real-time-factor-on-your-device) automatically measure the RTF for each speech recognition result, yielding a sufficiently large sample size.
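As a rough illustration of the ratio described above (a sketch added here, not part of the original article; the measurements are hypothetical), the RTF for the 1-minute-in-30-seconds example can be computed like this:

```csharp
using System;

class RtfExample
{
    static void Main()
    {
        // Hypothetical measurements: 30 seconds of processing for 60 seconds of audio.
        TimeSpan processingTime = TimeSpan.FromSeconds(30);
        TimeSpan audioLength = TimeSpan.FromSeconds(60);

        // RTF = processing time / audio length; values at or below 1 keep up with real time.
        double rtf = processingTime.TotalSeconds / audioLength.TotalSeconds;
        Console.WriteLine($"RTF = {rtf:F2}"); // prints "RTF = 0.50"
    }
}
```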
When measuring the RTF of a device, it's important to measure multiple samples a
For example, if the device is under moderate to high CPU load from other applications running on the device, it's possible to encounter performance issues when running embedded speech alongside those applications, even with a powerful processor.
-**Memory load** – An embedded speech to text model consumes between 200-300 MB of memory at runtime. If your device has less memory available than that for the embedded speech process to use, frequent fallbacks to virtual memory and paging can introduce more latencies. This can affect both the real-time factor and user-perceived latency.
+**Memory load** – An embedded speech to text model consumes between 200-300 MB of memory at runtime. If your device has less memory available for the embedded speech process to use, frequent fallbacks to virtual memory and paging can introduce more latencies. This can affect both the real-time factor and user-perceived latency.
## Built-in performance optimizations
ai-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md
Previously updated : 11/15/2023 Last updated : 1/18/2024 zone_pivot_groups: programming-languages-set-thirteen
All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, P
## Embedded speech configuration
-For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you downloaded to your local device.
+For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you download to your local device.
Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech to text and text to speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices.
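As a minimal, hedged sketch of the configuration described above (assuming the `EmbeddedSpeechConfig.FromPath` factory and the model/voice setter methods shown here; the path, model and voice names, and license strings are placeholders):

```csharp
// Point embedded speech at models and voices already downloaded to the device.
// Path, names, and license keys below are placeholders.
var embeddedConfig = EmbeddedSpeechConfig.FromPath("/usr/local/models");

// The same config object can hold both a recognition model and a synthesis voice.
embeddedConfig.SetSpeechRecognitionModel(
    "Microsoft Speech Recognizer en-US FP Model V8", "YourModelLicense");
embeddedConfig.SetSpeechSynthesisVoice(
    "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)", "YourVoiceLicense");
```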
You can find ready to use embedded speech samples at [GitHub](https://aka.ms/emb
Hybrid speech with the `HybridSpeechConfig` object uses the cloud speech service by default and embedded speech as a fallback in case cloud connectivity is limited or slow.
-With hybrid speech configuration for [speech to text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition may continue using the cloud service again if the connection is later resumed.
+With hybrid speech configuration for [speech to text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition might continue using the cloud service again if the connection is later resumed.
With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the final result is selected based on response speed. The best result is evaluated again on each new synthesis request.
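A minimal sketch of the fallback arrangement described above (assuming a `HybridSpeechConfig.FromConfigs` factory that combines a cloud and an embedded configuration; keys, region, and path are placeholders):

```csharp
// Cloud service is used by default; embedded speech is the fallback when
// connectivity is limited or slow. Keys, region, and model path are placeholders.
var cloudConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourServiceRegion");
var embeddedConfig = EmbeddedSpeechConfig.FromPath("/usr/local/models");

// Combine both into a single hybrid configuration (assumed factory method).
var hybridConfig = HybridSpeechConfig.FromConfigs(cloudConfig, embeddedConfig);
```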
For cloud speech, you use the `SpeechConfig` object, as shown in the [speech to
## Embedded voices capabilities
-For embedded voices, it is essential to note that certain SSML tags may not be currently supported due to differences in the model structure. For detailed information regarding the unsupported SSML tags, refer to the following table.
+For embedded voices, it's essential to note that certain SSML tags might not be currently supported due to differences in the model structure. For detailed information regarding the unsupported SSML tags, refer to the following table.
| Level 1 | Level 2 | Sub values | Support in embedded NTTS | |--|--|-|--|
ai-services Gaming Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/gaming-concepts.md
Previously updated : 01/25/2023 Last updated : 1/18/2024
Azure AI services for Speech can be used to improve various gaming scenarios, bo
Here are a few Speech features to consider for flexible and interactive game experiences: - Bring everyone into the conversation by synthesizing audio from text. Or by displaying text from audio.-- Make the game more accessible for players who are unable to read text in a particular language, including young players who haven't learned to read and write. Players can listen to storylines and instructions in their preferred language.
+- Make the game more accessible for players who are unable to read text in a particular language, including young players who don't read or write. Players can listen to storylines and instructions in their preferred language.
- Create game avatars and non-playable characters (NPC) that can initiate or participate in a conversation in-game. - Prebuilt neural voice can provide highly natural out-of-box voices with leading voice variety in terms of a large portfolio of languages and voices. - Custom neural voice for creating a voice that stays on-brand with consistent quality and speaking style. You can add emotions, accents, nuances, laughter, and other para linguistic sounds and expressions.
When enabling this functionality in your game, keep in mind the following benefi
- Fine-tuning Text to speech output with Speech Synthesis Markup Language (SSML) - With SSML, you can customize Text to speech outputs, with richer voice tuning supports. For more information, see [Speech Synthesis Markup Language (SSML) overview](speech-synthesis-markup.md). - Audio outputs - Each prebuilt neural voice model is available at 24 kHz and high-fidelity 48 kHz. If you select 48-kHz output format, the high-fidelity voice model with 48 kHz will be invoked accordingly. The sample rates other than 24 kHz and 48 kHz can be obtained through upsampling or downsampling when synthesizing. For example, 44.1 kHz is downsampled from 48 kHz. Each audio format incorporates a bitrate and encoding type. For more information, see the [supported audio formats](rest-text-to-speech.md?tabs=streaming#audio-outputs). For more information on 48-kHz high-quality voices, see [this introduction blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-neural-tts-voices-upgraded-to-48khz-with-hifinet2-vocoder/ba-p/3665252).
-For an example, see the [Text to speech quickstart](get-started-text-to-speech.md).
+For an example, see the [text to speech quickstart](get-started-text-to-speech.md).
## Speech to text
With [language identification](language-identification.md), you can detect the l
## Speech translation
-It's not unusual that players in the same game session natively speak different languages and may appreciate receiving both the original message and its translation. You can use [speech translation](speech-translation.md) to translate text between languages so players across the world can communicate with each other in their native language.
+It's not unusual that players in the same game session natively speak different languages and might appreciate receiving both the original message and its translation. You can use [speech translation](speech-translation.md) to translate text between languages so players across the world can communicate with each other in their native language.
For an example, see the [Speech translation quickstart](get-started-speech-translation.md).
ai-services How To Async Meeting Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-async-meeting-transcription.md
description: Learn how to use asynchronous meeting transcription using the Speec
Previously updated : 11/04/2019 Last updated : 1/18/2024 ms.devlang: csharp
-# ms.devlang: csharp, java
zone_pivot_groups: programming-languages-set-twenty-one
In this article, asynchronous meeting transcription is demonstrated using the **
## Asynchronous vs. real-time + asynchronous
-With asynchronous transcription, you stream the meeting audio, but don't need a transcription returned in real-time. Instead, after the audio is sent, use the `meetingId` of `Meeting` to query for the status of the asynchronous transcription. When the asynchronous transcription is ready, you'll get a `RemoteMeetingTranscriptionResult`.
+With asynchronous transcription, you stream the meeting audio, but don't need a transcription returned in real-time. Instead, after the audio is sent, use the `meetingId` of `Meeting` to query for the status of the asynchronous transcription. When the asynchronous transcription is ready, you get a `RemoteMeetingTranscriptionResult`.
With real-time plus asynchronous, you get the transcription in real-time, but also get the transcription by querying with the `meetingId` (similar to asynchronous scenario).
ai-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-audio-content-creation.md
Previously updated : 09/25/2022 Last updated : 1/18/2024
You can use the [Audio Content Creation](https://speech.microsoft.com/portal/audiocontentcreation) tool in Speech Studio for Text to speech synthesis without writing any code. You can use the output audio as-is, or as a starting point for further customization.
-Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune Text to speech voices and design customized audio experiences.
+Build highly natural audio content for various scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune Text to speech voices and design customized audio experiences.
The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust Text to speech output attributes in real-time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody. - No-code approach: You can use the Audio Content Creation tool for Text to speech synthesis without writing any code. The output audio might be the final deliverable that you want. For example, you can use the output audio for a podcast or a video narration. - Developer-friendly: You can listen to the output audio and adjust the SSML to improve speech synthesis. Then you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-basics.md) to integrate the SSML into your applications. For example, you can use the SSML for building a chat bot.
-You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
+You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you built one.
To learn more, view the Audio Content Creation tutorial video [on YouTube](https://youtu.be/ygApYuOOG6w). ## Get started
-The Audio Content Creation tool in Speech Studio is free to access, but you'll pay for Speech service usage. To work with the tool, you need to sign in with an Azure account and create a Speech resource. For each Azure account, you have free monthly speech quotas, which include 0.5 million characters for prebuilt neural voices (referred to as *Neural* on the [pricing page](https://aka.ms/speech-pricing)). The monthly allotted amount is usually enough for a small content team of around 3-5 people.
+The Audio Content Creation tool in Speech Studio is free to access, but you pay for Speech service usage. To work with the tool, you need to sign in with an Azure account and create a Speech resource. For each Azure account, you have free monthly speech quotas, which include 0.5 million characters for prebuilt neural voices (referred to as *Neural* on the [pricing page](https://aka.ms/speech-pricing)). Usually, the monthly allotted amount is enough for a small content team of around 3-5 people.
The next sections cover how to create an Azure account and get a Speech resource.
You can get your content into the Audio Content Creation tool in either of two w
## Export tuned audio
-After you've reviewed your audio output and are satisfied with your tuning and adjustment, you can export the audio.
+After you review your audio output and are satisfied with your tuning and adjustment, you can export the audio.
1. Select **Export** to create an audio creation task.
After you've reviewed your audio output and are satisfied with your tuning and a
## Configure BYOS and anonymous public read access for blobs
-If you lose access permission to your Bring Your Own Storage (BYOS), you won't be able to view, create, edit, or delete files. To resume your access, you need to remove the current storage and reconfigure the BYOS in the [Azure portal](https://portal.azure.com/#allservices). To learn more about how to configure BYOS, see [Mount Azure Storage as a local share in App Service](/azure/app-service/configure-connect-to-azure-storage?pivots=container-linux&tabs=portal).
+If you lose access permission to your Bring Your Own Storage (BYOS), you can't view, create, edit, or delete files. To resume your access, you need to remove the current storage and reconfigure the BYOS in the [Azure portal](https://portal.azure.com/#allservices). To learn more about how to configure BYOS, see [Mount Azure Storage as a local share in App Service](/azure/app-service/configure-connect-to-azure-storage?pivots=container-linux&tabs=portal).
-After configuring the BYOS permission, you need to configure anonymous public read access for related containers and blobs. Otherwise, blob data isn't available for public access and your lexicon file in the blob will be inaccessible. By default, a container's public access setting is disabled. To grant anonymous users read access to a container and its blobs, first set **Allow Blob public access** to **Enabled** to allow public access for the storage account, then set the container's (named **acc-public-files**) public access level (**anonymous read access for blobs only**). To learn more about how to configure anonymous public read access, see [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure?tabs=portal).
+After configuring the BYOS permission, you need to configure anonymous public read access for related containers and blobs. Otherwise, blob data isn't available for public access and your lexicon file in the blob is inaccessible. By default, a container's public access setting is disabled. To grant anonymous users read access to a container and its blobs, first set **Allow Blob public access** to **Enabled** to allow public access for the storage account, then set the container's (named **acc-public-files**) public access level (**anonymous read access for blobs only**). To learn more about how to configure anonymous public read access, see [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure?tabs=portal).
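If you prefer to script these two settings instead of using the portal, a minimal sketch with the Azure SDK for Python might look like the following. The names, IDs, and use of `azure-mgmt-storage` and `azure-storage-blob` here are illustrative assumptions rather than part of the official walkthrough.

```python
# Illustrative only: mirror the two portal settings described above with the
# Azure SDK for Python. All names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
account_name = "<storage-account-name>"

# 1) Allow Blob public access at the storage-account level.
mgmt = StorageManagementClient(credential, subscription_id)
mgmt.storage_accounts.update(
    resource_group,
    account_name,
    StorageAccountUpdateParameters(allow_blob_public_access=True),
)

# 2) Set the container's public access level to "anonymous read access for blobs only".
blob_service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=credential,
)
container = blob_service.get_container_client("acc-public-files")
container.set_container_access_policy(signed_identifiers={}, public_access="blob")
```

The portal steps linked above remain the documented path; this sketch only mirrors the two settings they change.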
## Add or remove Audio Content Creation users
-If more than one user wants to use Audio Content Creation, you can grant them access to the Azure subscription and the Speech resource. If you add users to an Azure subscription, they can access all the resources under the Azure subscription. But if you add users to a Speech resource only, they'll have access only to the Speech resource and not to other resources under this Azure subscription. Users with access to the Speech resource can use the Audio Content Creation tool.
+If more than one user wants to use Audio Content Creation, you can grant them access to the Azure subscription and the Speech resource. If you add users to an Azure subscription, they can access all the resources under the Azure subscription. But if you add users to a Speech resource only, they only have access to the Speech resource and not to other resources under this Azure subscription. Users with access to the Speech resource can use the Audio Content Creation tool.
The users you grant access to need to set up a [Microsoft account](https://account.microsoft.com/account). If they don't have a Microsoft account, they can create one in just a few minutes. They can use their existing email and link it to a Microsoft account, or they can create and use an Outlook email address as a Microsoft account.
To add users to a Speech resource so that they can use Audio Content Creation, d
1. On the **Members** tab, enter a user's email address and select the user's name in the directory. The email address must be linked to a Microsoft account that's trusted by Microsoft Entra ID. Users can easily sign up for a [Microsoft account](https://account.microsoft.com/account) by using their personal email address. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-Here is what happens next:
+Here's what happens next:
An email invitation is automatically sent to users. They can accept it by selecting **Accept invitation** > **Accept to join Azure** in their email. They're then redirected to the Azure portal. They don't need to take further action in the Azure portal. After a few moments, users are assigned the role at the Speech resource scope, which gives them access to this Speech resource. If users don't receive the invitation email, you can search for their account under **Role assignments** and go into their profile. Look for **Identity** > **Invitation accepted**, and select **(manage)** to resend the email invitation. You can also copy and send the invitation link to them.
Users now visit or refresh the [Audio Content Creation](https://aka.ms/audiocont
If they can't find the available Speech resource, they can check to ensure that they're in the right directory. To do so, they select the account profile at the upper right and then select **Switch** next to **Current directory**. If there's more than one directory available, it means they have access to multiple directories. They can switch to different directories and go to **Settings** to see whether the right Speech resource is available.
-Users who are in the same Speech resource will see each other's work in the Audio Content Creation tool. If you want each individual user to have a unique and private workplace in Audio Content Creation, [create a new Speech resource](#step-2-create-a-speech-resource) for each user and give each user the unique access to the Speech resource.
+Users who are in the same Speech resource see each other's work in the Audio Content Creation tool. If you want each individual user to have a unique and private workplace in Audio Content Creation, [create a new Speech resource](#step-2-create-a-speech-resource) for each user and give each user unique access to the Speech resource.
### Remove users from a Speech resource
ai-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-azure-ad-auth.md
Previously updated : 06/18/2021 Last updated : 1/18/2024 zone_pivot_groups: programming-languages-set-two ms.devlang: cpp
-# ms.devlang: cpp, csharp, java, python
+ # Microsoft Entra authentication with the Speech SDK When using the Speech SDK to access the Speech service, there are three authentication methods available: service keys, a key-based token, and Microsoft Entra ID. This article describes how to configure a Speech resource and create a Speech SDK configuration object to use Microsoft Entra ID for authentication.
-This article shows how to use Microsoft Entra authentication with the Speech SDK. You'll learn how to:
+This article shows how to use Microsoft Entra authentication with the Speech SDK. You learn how to:
> [!div class="checklist"] >
$resourceId = resource.Id
With a Microsoft Entra access token, you can now create a Speech SDK configuration object.
-The method of providing the token, and the method to construct the corresponding Speech SDK ```Config``` object varies by the object you'll be using.
+The method of providing the token and the method of constructing the corresponding Speech SDK ```Config``` object vary by the object you're using.
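As an illustration, a hedged Python sketch of the overall pattern might look like the following. The resource ID and region are placeholders, and the `aad#<resourceId>#<token>` authorization-token format is assumed from the guidance in this article.

```python
# Hedged sketch: acquire a Microsoft Entra token and build a SpeechConfig from it.
# The resource ID, region, and token format are assumptions for illustration.
import azure.cognitiveservices.speech as speechsdk
from azure.identity import DefaultAzureCredential

resource_id = "<your-speech-resource-id>"  # full ARM resource ID of the Speech resource
region = "<your-region>"

credential = DefaultAzureCredential()
entra_token = credential.get_token("https://cognitiveservices.azure.com/.default").token

# Assumed authorization-token format: "aad#" + resource ID + "#" + Entra access token.
auth_token = "aad#" + resource_id + "#" + entra_token

speech_config = speechsdk.SpeechConfig(auth_token=auth_token, region=region)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
```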
### SpeechRecognizer, SpeechSynthesizer, IntentRecognizer, ConversationTranscriber
ai-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-openssl-linux.md
Previously updated : 06/22/2022 Last updated : 1/18/2024 zone_pivot_groups: programming-languages-set-three
export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
When the Speech SDK connects to the Speech service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and hasn't been revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](../../security/fundamentals/tls-certificate-changes.md).
-If a destination posing as the Speech service reports a certificate that's been revoked in a retrieved CRL, the SDK will terminate the connection and report an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK will also treat a failure to download a CRL from an Azure CA location as an error.
+If a destination posing as the Speech service reports a certificate that's been revoked in a retrieved CRL, the SDK terminates the connection and reports an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK also treats a failure to download a CRL from an Azure CA location as an error.
> [!WARNING] > If your solution uses proxy or firewall it should be configured to allow access to all certificate revocation list URLs used by Azure. Note that many of these URLs are outside of `microsoft.com` domain, so allowing access to `*.microsoft.com` is not enough. See [this document](../../security/fundamentals/tls-certificate-changes.md) for details. In exceptional cases you may ignore CRL failures (see [the correspondent section](#bypassing-or-ignoring-crl-failures)), but such configuration is strongly not recommended, especially for production scenarios.
speechConfig.properties.SetPropertyByString("CONFIG_MAX_CRL_SIZE_KB", "15000")
### Bypassing or ignoring CRL failures
-If an environment can't be configured to access an Azure CA location, the Speech SDK will never be able to retrieve an updated CRL. You can configure the SDK either to continue and log download failures or to bypass all CRL checks.
+If an environment can't be configured to access an Azure CA location, the Speech SDK can't retrieve an updated CRL. You can configure the SDK either to continue and log download failures or to bypass all CRL checks.
> [!WARNING] > CRL checks are a security measure and bypassing them increases susceptibility to attacks. They should not be bypassed without thorough consideration of the security implications and alternative mechanisms for protecting against the attack vectors that CRL checks mitigate.
-To continue with the connection when a CRL can't be retrieved, set the property `"OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE"` to `"true"`. An attempt will still be made to retrieve a CRL and failures will still be emitted in logs, but connection attempts will be allowed to continue.
+To continue with the connection when a CRL can't be retrieved, set the property `"OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE"` to `"true"`. An attempt is still made to retrieve a CRL and failures are still emitted in logs, but connection attempts are allowed to continue.
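For example, if you're using the Python SDK, a minimal sketch might look like this (key and region are placeholders):

```python
# Hedged Python equivalent of the property shown for other languages in this article.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.set_property_by_name("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")
```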
::: zone pivot="programming-language-csharp"
speechConfig.properties.SetPropertyByString("OPENSSL_DISABLE_CRL_CHECK", "true")
### CRL caching and performance
-By default, the Speech SDK will cache a successfully downloaded CRL on disk to improve the initial latency of future connections. When no cached CRL is present or when the cached CRL is expired, a new list will be downloaded.
+By default, the Speech SDK caches a successfully downloaded CRL on disk to improve the initial latency of future connections. When no cached CRL is present or when the cached CRL is expired, a new list is downloaded.
-Some Linux distributions don't have a `TMP` or `TMPDIR` environment variable defined, so the Speech SDK won't cache downloaded CRLs. Without `TMP` or `TMPDIR` environment variable defined, the Speech SDK will download a new CRL for each connection. To improve initial connection performance in this situation, you can [create a `TMPDIR` environment variable and set it to the accessible path of a temporary directory.](https://help.ubuntu.com/community/EnvironmentVariables).
+Some Linux distributions don't have a `TMP` or `TMPDIR` environment variable defined, so the Speech SDK doesn't cache downloaded CRLs. Without a `TMP` or `TMPDIR` environment variable defined, the Speech SDK downloads a new CRL for each connection. To improve initial connection performance in this situation, you can [create a `TMPDIR` environment variable and set it to the accessible path of a temporary directory](https://help.ubuntu.com/community/EnvironmentVariables).
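A minimal sketch, assuming a Python host application, of setting the cache location before the SDK is used (the path is only an example):

```python
# Illustrative only: point TMPDIR at a writable directory so downloaded CRLs are cached.
import os

os.environ.setdefault("TMPDIR", "/var/tmp/speech-sdk-crl-cache")  # example path

import azure.cognitiveservices.speech as speechsdk  # import after TMPDIR is set

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
```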
## Next steps
ai-services How To Configure Rhel Centos 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-rhel-centos-7.md
Previously updated : 04/01/2022 Last updated : 1/18/2024
make -j$(nproc)
sudo make install-strip ```
-If the updated compiler and libraries need to be deployed on several machines, you can simply copy them from under `/usr/local` to other machines. If only the runtime libraries are needed then the files in `/usr/local/lib64` will be enough.
+If the updated compiler and libraries need to be deployed on several machines, you can copy them from under `/usr/local` to other machines. If only the runtime libraries are needed, then the files in `/usr/local/lib64` are enough.
## Environment settings
ai-services How To Control Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-control-connections.md
Previously updated : 04/12/2021 Last updated : 1/18/2024 zone_pivot_groups: programming-languages-set-thirteen ms.devlang: cpp
-# ms.devlang: cpp, csharp, java
# How to monitor and control service connections with the Speech SDK
-`SpeechRecognizer` and other objects in the Speech SDK automatically connect to the Speech service when it's appropriate. Sometimes, you may either want extra control over when connections begin and end or want more information about when the Speech SDK establishes or loses its connection. The supporting `Connection` class provides this capability.
+The `SpeechRecognizer` and other objects in the Speech SDK automatically connect to the Speech service when it's appropriate. Sometimes, you either want extra control over when connections begin and end or you want more information about when the Speech SDK establishes or loses its connection. The supporting `Connection` class provides this capability.
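For instance, a hedged Python sketch of the overall pattern described in this article might look like this (key and region are placeholders):

```python
# Hedged sketch of the pattern described in this article, using the Python SDK.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Retrieve the Connection object and monitor connection state changes.
connection = speechsdk.Connection.from_recognizer(recognizer)
connection.connected.connect(lambda evt: print("Connected"))
connection.disconnected.connect(lambda evt: print("Disconnected"))

# Preconnect so the first interaction starts as quickly as possible; pass True
# if the recognizer will be used for continuous recognition.
connection.open(for_continuous_recognition=False)
```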
## Retrieve a Connection object
connection.disconnected.addEventListener((s, connectionEventArgs) -> {
## Connect and disconnect
-`Connection` has explicit methods to start or end a connection to the Speech service. Reasons you may want to control the connection include:
+`Connection` has explicit methods to start or end a connection to the Speech service. Reasons you might want to control the connection include:
- Preconnecting to the Speech service to allow the first interaction to start as quickly as possible - Establishing connection at a specific time in your application's logic to gracefully and predictably handle initial connection failures
connection.disconnected.addEventListener((s, connectionEventArgs) -> {
Some important notes on the behavior when manually modifying connection state: - Trying to connect when already connected will do nothing. It will not generate an error. Monitor the `Connected` and `Disconnected` events if you want to know the current state of the connection.-- A failure to connect that originates from a problem that has no involvement with the Speech service--such as attempting to do so from an invalid state--will throw or return an error as appropriate to the programming language. Failures that require network resolution--such as authentication failures--will not throw or return an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.
+- A failure to connect that originates from a problem that has no involvement with the Speech service--such as attempting to do so from an invalid state--will throw or return an error as appropriate to the programming language. Failures that require network resolution--such as authentication failures--won't throw or return an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.
- Manually disconnecting from the Speech service during an ongoing interaction results in a connection error and loss of data for that interaction. Connection errors are surfaced on the appropriate top-level object's `Canceled` event. ::: zone pivot="programming-language-csharp"
ai-services How To Custom Commands Debug Build Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-debug-build-time.md
Previously updated : 06/18/2020 Last updated : 1/18/2024
This article describes how to debug when you see errors while building Custom Co
## Errors when creating an application Custom Commands also creates an application in [LUIS](https://www.luis.ai/) when creating a Custom Commands application.
-[LUIS limits 500 applications per authoring resource](../luis/luis-limits.md). Creation of LUIS application could fail if you are using an authoring resource that already has 500 applications.
+[LUIS limits 500 applications per authoring resource](../luis/luis-limits.md). Creation of a LUIS application could fail if you're using an authoring resource that already has 500 applications.
-Make sure the selected LUIS authoring resource has less than 500 applications. If not, you can create new LUIS authoring resource, switch to another one, or try to clean up your LUIS applications.
+Make sure the selected LUIS authoring resource has fewer than 500 applications. If not, you can create a new LUIS authoring resource, switch to another one, or try to clean up your LUIS applications.
## Errors when deleting an application ### Can't delete LUIS application
-When deleting a Custom Commands application, Custom Commands may also try to delete the LUIS application associated with the Custom Commands application.
+When you delete a Custom Commands application, Custom Commands might also try to delete the LUIS application associated with the Custom Commands application.
If the deletion of the LUIS application fails, go to your [LUIS](https://www.luis.ai/) account to delete it manually.
Refresh your page and try to delete fewer applications.
## Errors when modifying an application ### Can't delete a parameter or a Web Endpoint
-You are not allowed to delete a parameter when it is being used.
+You aren't allowed to delete a parameter when it's being used.
Remove any reference of the parameter in any speech responses, sample sentences, conditions, actions, and try again. ### Can't delete a Web Endpoint
-You are not allowed to delete a Web Endpoint when it is being used.
+You aren't allowed to delete a Web Endpoint when it's being used.
Remove any **Call Web Endpoint** action that uses this Web Endpoint before removing a Web Endpoint. ## Errors when training an application
LUIS has built-in Yes/No intents. Having sample sentences with only "yes", "no"
| No | Nope, Not | ### Common sample sentences
-Custom Commands does not allow common sample sentences shared among different commands. The training of an application could fail if some sample sentences in one command are already defined in another command.
+Custom Commands doesn't allow common sample sentences shared among different commands. The training of an application could fail if some sample sentences in one command are already defined in another command.
Make sure you don't have common sample sentences shared among different commands.
For best practices for LUIS training, see [LUIS best practices](../luis/faq.md).
## Can't update LUIS key ### Reassign to E0 authoring resource
-LUIS does not support reassigning LUIS application to E0 authoring resource.
+LUIS doesn't support reassigning a LUIS application to an E0 authoring resource.
If you need to change your authoring resource from F0 to E0, or change to a different E0 resource, recreate the application.
ai-services How To Custom Commands Debug Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-debug-runtime.md
Previously updated : 06/18/2020 Last updated : 1/18/2024
This article describes how to debug when you see errors while running Custom Com
## Connection failed
-If your run Custom Commands application from [client application (with Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md) or [Windows Voice Assistant Client](./how-to-custom-commands-developer-flow-test.md), you may experience connection errors as listed below:
+If you run your Custom Commands application from a [client application (with Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md) or the [Windows Voice Assistant Client](./how-to-custom-commands-developer-flow-test.md), you might experience the following connection errors:
| Error code | Details | | - | -- |
The CancelledDialog event consists of cancellation code and description, as list
The dialog is canceled when a required slot isn't successfully updated after a certain number of turns. The built-in maximum number is 3. ### Recognizer usage quota exceeded
-Language Understanding (LUIS) has limits on resource usage. Usually "Recognizer usage quota exceeded error" can be caused by:
+Language Understanding (LUIS) has limits on resource usage. Usually, the "Recognizer usage quota exceeded" error is caused by:
- Your LUIS authoring exceeds the limit Add a prediction resource to your Custom Commands application:
Usually it means transient connection failure to Language Understanding (LUIS) r
Your subscription isn't authorized to access the LUIS application. ### Input exceeds the maximum supported length
-Your input has exceeded 500 characters. We only allow at most 500 characters for input utterance.
+Your input exceeded 500 characters. We allow at most 500 characters for an input utterance.
### Invalid query for the recognizer
-Your input has exceeded 500 characters. We only allow at most 500 characters for input utterance.
+Your input exceeded 500 characters. We allow at most 500 characters for an input utterance.
### Recognizer returned an error The LUIS recognizer returned an error when trying to recognize your input.
Can't find the recognizer type specified in your custom commands dialog model. C
## Other common errors ### Unexpected response
-Unexpected responses may be caused multiple things.
+Unexpected responses can result from many different causes.
A few checks to start with: - Yes/No Intents in example sentences
A few checks to start with:
- Similar intents and example sentences among commands
- The LUIS recognition accuracy may get affected when two commands share similar intents and examples sentences. You can try to make commands functionality and example sentences as distinct as possible.
+ The LUIS recognition accuracy might be affected when two commands share similar intents and example sentences. You can try to make command functionality and example sentences as distinct as possible.
For best practices for improving recognition accuracy, see [LUIS best practices](../luis/faq.md).
ai-services How To Custom Commands Deploy Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-deploy-cicd.md
Title: 'Continuous Deployment with Azure DevOps (Preview)'
+ Title: 'Continuous Deployment with Azure DevOps'
description: In this article, you learn how to set up continuous deployment for your Custom Commands applications. You create the scripts to support the continuous deployment workflows. Previously updated : 06/18/2020 Last updated : 1/18/2024
The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com
### Set up a pipeline
-1. Go to **Azure DevOps - Pipelines** and click "New Pipeline"
+1. Go to **Azure DevOps - Pipelines** and select "New Pipeline"
1. In **Connect** section, select the location of your repository where these scripts are located 1. In **Select** section, select your repository 1. In **Configure** section, select "Starter pipeline"
-1. Next you'll get an editor with a YAML file, replace the "steps" section with this script.
+1. Next, you get an editor with a YAML file. Replace the "steps" section with this script.
```yaml steps:
The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com
failOnStderr: true ```
-1. Note that these scripts assume that you are using the region `westus2`, if that's not the case update the arguments of the tasks accordingly
+1. These scripts assume that you're using the region `westus2`. If that's not the case, update the arguments of the tasks accordingly.
> [!div class="mx-imgBorder"] > ![Screenshot that highlights the region value in the arguments.](media/custom-commands/cicd-new-pipeline-yaml.png)
-1. In the "Save and run" button, open the dropdown and click "Save"
+1. In the "Save and run" button, open the dropdown and select "Save"
### Hook up the pipeline with your application
The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com
## Deploy from source code
-In case you want to keep the definition of your application in a repository, we provide the scripts for deployments from source code. Since the scripts are in bash, If you are using Windows you'll need to install the [Linux subsystem](/windows/wsl/install-win10).
+If you want to keep the definition of your application in a repository, we provide the scripts for deployments from source code. Because the scripts are written in bash, if you're using Windows you need to install the [Linux subsystem](/windows/wsl/install-win10).
The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path. ### Prepare your repository 1. Create a directory for your application, in our example create one called "apps".
-1. Update the arguments of the bash script below, and run. It will import the dialog model of your application to the file myapp.json
+1. Update the arguments of the bash script below, and then run it. It saves the dialog model of your application to the file myapp.json.
```BASH bash/export.sh -r <region> -s <subscriptionkey> -c en-us -a <appid> -f apps/myapp.json ```
The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com
### Set up a pipeline
-1. Go to **Azure DevOps - Pipelines** and click "New Pipeline"
+1. Go to **Azure DevOps - Pipelines** and select "New Pipeline"
1. In **Connect** section, select the location of your repository where these scripts are located 1. In **Select** section, select your repository 1. In **Configure** section, select "Starter pipeline"
-1. Next you'll get an editor with a YAML file, replace the "steps" section with this script.
+1. Next, you get an editor with a YAML file. Replace the "steps" section with this script.
```yaml steps:
The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com
> [!NOTE] > These scripts assume that you're using the region `westus2`. If that's not the case, update the arguments of the tasks accordingly.
-1. In the "Save and run" button, open the dropdown and click "Save"
+1. In the "Save and run" button, open the dropdown and select "Save"
### Hook up the pipeline with your target applications
ai-services How To Custom Commands Developer Flow Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-developer-flow-test.md
Previously updated : 06/18/2020 Last updated : 1/18/2024
In this article, you learn different approaches to testing a custom commands app
## Test in the portal
-Test in the portal is the simplest and quickest way to check if your custom command application work as expected. After the app is successfully trained, click `Test` button to start testing.
+Testing in the portal is the simplest and quickest way to check whether your Custom Commands application works as expected. After the app is successfully trained, select the `Test` button to start testing.
> [!div class="mx-imgBorder"] > ![Test in the portal](media/custom-commands/create-basic-test-chat-no-mic.png)
ai-services How To Custom Commands Send Activity To Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-send-activity-to-client.md
Previously updated : 06/18/2020 Last updated : 1/18/2024 ms.devlang: csharp
using Windows.UI.Core;
### Handle the received payload
-In `InitializeDialogServiceConnector`, replace the `ActivityReceived` event handler with following code. The modified `ActivityReceived` event handler will extract the payload from the activity and change the visual state of the tv or fan respectively.
+In `InitializeDialogServiceConnector`, replace the `ActivityReceived` event handler with the following code. The modified `ActivityReceived` event handler extracts the payload from the activity and changes the visual state of the tv or fan, respectively.
```C# connector.ActivityReceived += async (sender, activityReceivedEventArgs) =>
ai-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-setup-speech-sdk.md
Previously updated : 06/18/2020 Last updated : 1/18/2024 ms.devlang: csharp
In this article, you learn how to make requests to a published Custom Commands a
## Prerequisites
-A Custom Commands application is required to complete this article. If you haven't created a Custom Commands application, you can do so following the quickstarts:
+A Custom Commands application is required to complete this article. Try a quickstart to create a Custom Commands application:
> [!div class = "checklist"] > * [Create a Custom Commands application](quickstart-custom-commands-application.md)
-You'll also need:
+You also need:
> [!div class = "checklist"] > * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide is based on Visual Studio 2019. > * An Azure AI Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal).
You'll also need:
## Step 1: Publish Custom Commands application
-1. Open your previously created Custom Commands application
-1. Go to **Settings**, select **LUIS resource**
-1. If **Prediction resource** is not assigned, select a query prediction key or create a new one
+1. Open your previously created Custom Commands application.
+1. Go to **Settings**, select **LUIS resource**.
+1. If **Prediction resource** isn't assigned, select a query prediction key or create a new one.
A query prediction key is always required before publishing an application. For more information about LUIS resources, see [Create LUIS Resource](../luis/luis-how-to-azure-subscription.md).
-1. Go back to editing Commands, Select **Publish**
+1. Go back to editing Commands, Select **Publish**.
> [!div class="mx-imgBorder"] > ![Publish application](media/custom-commands/setup-speech-sdk-publish-application.png)
-1. Copy the App ID from the publish notification for later use
-1. Copy the Speech Resource Key for later use
+1. Copy the App ID from the "publish" notification for later use.
+1. Copy the Speech Resource Key for later use.
## Step 2: Create a Visual Studio project
The Design view is updated to show the application's user interface.
Add the code-behind source so that the application works as expected. The code-behind source includes: -- Required `using` statements for the `Speech` and `Speech.Dialog` namespaces-- A simple implementation to ensure microphone access, wired to a button handler-- Basic UI helpers to present messages and errors in the application-- A landing point for the initialization code path that will be populated later-- A helper to play back text to speech (without streaming support)-- An empty button handler to start listening that will be populated later
+- Required `using` statements for the `Speech` and `Speech.Dialog` namespaces.
+- A simple implementation to ensure microphone access, wired to a button handler.
+- Basic UI helpers to present messages and errors in the application.
+- A landing point for the initialization code path.
+- A helper to play back text to speech (without streaming support).
+- An empty button handler to start listening.
Add the code-behind source as follows:
ai-services How To Custom Commands Setup Web Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-setup-web-endpoints.md
Previously updated : 06/18/2020 Last updated : 1/18/2024 ms.devlang: csharp
[!INCLUDE [deprecation notice](./includes/custom-commands-retire.md)]
-In this article, you'll learn how to set up web endpoints in a Custom Commands application that allow you to make HTTP requests from a client application. You'll complete the following tasks:
+In this article, you learn how to set up web endpoints in a Custom Commands application that allow you to make HTTP requests from a client application. You complete the following tasks:
- Set up web endpoints in Custom Commands application - Call web endpoints in Custom Commands application
In this article, you'll learn how to set up web endpoints in a Custom Commands a
For this tutorial, you need an HTTP endpoint that maintains states for all the devices you set up in the **TurnOnOff** command of your Custom Commands application.
-If you already have a web endpoint you want to call, skip to the [next section](#setup-web-endpoints-in-custom-commands).
+If you already have a web endpoint you want to call, skip to the [next section](#set-up-web-endpoints-in-custom-commands).
Alternatively, the next section provides details about a default hosted web endpoint you can use if you want to skip this section. ### Input format of Azure function
-Next, you'll deploy an endpoint using [Azure Functions](../../azure-functions/index.yml).
+Next, you deploy an endpoint using [Azure Functions](../../azure-functions/index.yml).
The following is the format of a Custom Commands event that is passed to your Azure function. Use this information when you're writing your Azure Function app. ```json
The following table describes the key attributes of this input:
| **currentGlobalParameters** | A map like `parameters`, but used for global parameters. |
-For the **DeviceState** Azure Function, an example Custom Commands event will look like following. This will act as an **input** to the function app.
+For the **DeviceState** Azure Function, an example Custom Commands event looks like the following. This event acts as an **input** to the function app.
```json {
For our **DeviceState** endpoint, output of your Azure function is consumed by a
} ```
-This output should be written to an external storage, so that you can maintain the state of devices. The external storage state will be used in the [Integrate with client application](#integrate-with-client-application) section below.
+This output should be written to external storage, so that you can maintain the state of devices. The external storage state is used in the [Integrate with client application](#integrate-with-client-application) section below.
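As an illustration only, a hypothetical Python Azure Function that parses the event and upserts the device state into the **devicestate** table might look like the following sketch. The entity shape, parameter names, and connection-string handling are assumptions; the sample app linked in the next section is the supported implementation.

```python
# Hypothetical sketch only; the published sample is the supported implementation.
# Parses the Custom Commands event and upserts device state into the "devicestate" table.
import json
import os

import azure.functions as func
from azure.data.tables import TableServiceClient


def main(req: func.HttpRequest) -> func.HttpResponse:
    event = req.get_json()
    params = event.get("currentCommand", {}).get("parameters", {})
    device = params.get("SubjectDevice", "tv")   # parameter names are assumptions
    state = params.get("OnOff", "off")

    table = TableServiceClient.from_connection_string(
        os.environ["STORAGE_ACCOUNT_SECRET_CONNECTION_STRING"]
    ).get_table_client("devicestate")
    table.upsert_entity({"PartitionKey": "devices", "RowKey": device, "state": state})

    return func.HttpResponse(json.dumps({device: state}), mimetype="application/json")
```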
### Deploy Azure function
This output should be written to an external storage, so that you can maintain t
We provide a sample you can configure and deploy as an Azure Functions app. To create a storage account for our sample, follow these steps. 1. Create table storage to save device state. In the Azure portal, create a new resource of type **Storage account** by name **devicestate**.
-1. Copy the **Connection string** value from **devicestate -> Access keys**. You'll need to add this string secret to the downloaded sample Function App code.
+1. Copy the **Connection string** value from **devicestate -> Access keys**. You need to add this string secret to the downloaded sample Function App code.
1. Download sample [Function App code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/main/custom-commands/quick-start). 1. Open the downloaded solution in Visual Studio 2019. In **Connections.json**, replace **STORAGE_ACCOUNT_SECRET_CONNECTION_STRING** with the secret from Step 2. 1. Download the **DeviceStateAzureFunction** code.
To deploy the sample app to Azure Functions, follow these steps.
1. Select **Functions** in the left pane, and then select **DeviceState**. 1. In the new window, select **Code + Test** and then select **Get function URL**.
-## Setup web endpoints in Custom Commands
+## Set up web endpoints in Custom Commands
Let's hook up the Azure function with the existing Custom Commands application.
-In this section, you'll use an existing default **DeviceState** endpoint. If you created your own web endpoint using Azure Function or otherwise, use that instead of the default `https://webendpointexample.azurewebsites.net/api/DeviceState`.
+In this section, you use an existing default **DeviceState** endpoint. If you created your own web endpoint using Azure Function or otherwise, use that instead of the default `https://webendpointexample.azurewebsites.net/api/DeviceState`.
1. Open the Custom Commands application you previously created. 1. Go to **Web endpoints**, select **New web endpoint**.
In this section, you'll use an existing default **DeviceState** endpoint. If you
| Name | UpdateDeviceState | Name for the web endpoint. | | URL | ```https://webendpointexample.azurewebsites.net/api/DeviceState``` | The URL of the endpoint you wish your custom command app to talk to. | | Method | POST | The allowed interactions (such as GET, POST) with your endpoint.|
- | Headers | Key: app, Value: take the first 8 digits of your applicationId | The header parameters to include in the request header.|
+ | Headers | Key: app, Value: take the first eight digits of your applicationId | The header parameters to include in the request header.|
> [!NOTE] > - The example web endpoint created using [Azure Functions](../../azure-functions/index.yml), which hooks up with the database that saves the device state of the tv and fan.
ai-services How To Custom Commands Update Command From Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-update-command-from-client.md
Previously updated : 10/20/2020 Last updated : 1/18/2024
[!INCLUDE [deprecation notice](./includes/custom-commands-retire.md)]
-In this article, you'll learn how to update an ongoing command from a client application.
+In this article, you learn how to update an ongoing command from a client application.
## Prerequisites > [!div class = "checklist"]
Note how the value for the parameter `"OnOff"` was set to `"on"` through an acti
When you configure the list of valid options for a parameter, the values for the parameter are defined globally for the application.
-In our example, the `SubjectDevice` parameter will have a fixed list of supported values regardless of the conversation.
+In our example, the `SubjectDevice` parameter has a fixed list of supported values regardless of the conversation.
If you want to add new entries to the parameter's catalog per conversation, you can send the following activity:
Note a couple of things:
## Add more context from the client application
-You can set additional context from the client application per conversation that can later be used in your Custom Commands application.
+You can set more context from the client application per conversation that can later be used in your Custom Commands application.
For example, think about the scenario where you want to send the ID and name of the device connected to the Custom Commands application.
ai-services How To Custom Commands Update Command From Web Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-update-command-from-web-endpoint.md
Previously updated : 10/20/2020 Last updated : 1/18/2024
If your client application requires an update to the state of an ongoing command without voice input, you can use a call to a web endpoint to update the command.
-In this article, you'll learn how to update an ongoing command from a web endpoint.
+In this article, you learn how to update an ongoing command from a web endpoint.
## Prerequisites > [!div class = "checklist"]
In this article, you'll learn how to update an ongoing command from a web endpoi
## Create an Azure function
-For this example, you'll need an HTTP-triggered [Azure function](../../azure-functions/index.yml) that supports the following input (or a subset of this input):
+For this example, you need an HTTP-triggered [Azure function](../../azure-functions/index.yml) that supports the following input (or a subset of this input):
```JSON {
Let's review the key attributes of this input:
| Attribute | Explanation | | - | |
-| **conversationId** | The unique identifier of the conversation. Note that this ID can be generated from the client app. |
+| **conversationId** | The unique identifier of the conversation. This ID can be generated from the client app. |
| **currentCommand** | The command that's currently active in the conversation. | | **name** | The name of the command. The `parameters` attribute is a map with the current values of the parameters. | | **currentGlobalParameters** | A map like `parameters`, but used for global parameters. |
module.exports = async function (context, req) {
} ```
-When you call this Azure function from Custom Commands, you'll send the current values of the conversation. You'll return the parameters that you want to update or if you want to cancel the current command.
+When you call this Azure function from Custom Commands, you send the current values of the conversation. You return the parameters that you want to update, or indicate whether you want to cancel the current command.
## Update the existing Custom Commands app
ai-studio Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/index-lookup-tool.md
+
+ Title: Index lookup tool for flows in Azure AI Studio
+
+description: This article introduces the Index Lookup tool for flows in Azure AI Studio.
++++ Last updated : 01/18/2024+++
+# Index Lookup tool for Azure AI Studio
++
+The prompt flow *Index Lookup* tool enables the use of common vector indices (such as Azure AI Search, FAISS, and Pinecone) for retrieval-augmented generation (RAG) in prompt flow. The tool automatically detects the indices in the workspace and allows the selection of the index to be used in the flow.
+
+## Build with the Index Lookup tool
+
+1. Create or open a flow in Azure AI Studio. For more information, see [Create a flow](../flow-develop.md).
+1. Select **+ More tools** > **Index Lookup** to add the Index Lookup tool to your flow.
+
+ :::image type="content" source="../../media/prompt-flow/index-lookup-tool.png" alt-text="Screenshot of the Index Lookup tool added to a flow in Azure AI Studio." lightbox="../../media/prompt-flow/index-lookup-tool.png":::
+
+1. Enter values for the Index Lookup tool [input parameters](#inputs). The [LLM tool](llm-tool.md) can generate the vector input.
+1. Add more tools to your flow as needed, or select **Run** to run the flow.
+1. To learn more about the returned output, see [outputs](#outputs).
++
+## Inputs
+
+The following are available input parameters:
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| mlindex_content | string | Type of Index to be used. Input depends on Index type. Example of an Azure Cog Search Index JSON can be seen below the table* | Yes |
+| queries | string, Union[string, List[String]] | The text to be queried.| Yes |
| query_type | string | The type of query to be performed. Options include Keyword, Semantic, Hybrid, etc. | Yes |
+| top_k | integer | The count of top-scored entities to return. Default value is 3. | No |
+
+\**ACS JSON Example:*
+```json
+embeddings:
+ api_base: <api_base>
+ api_type: azure
+ api_version: 2023-07-01-preview
+ batch_size: '1'
+ connection:
+ id: /subscriptions/<subscription>/resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace> /connections/<AOAI_connection>
+ connection_type: workspace_connection
+ deployment: <embedding_deployment>
+ dimension: <embedding_model_dimension>
+ kind: open_ai
+ model: <embedding_model>
+ schema_version: <version>
+index:
+ api_version: 2023-07-01-Preview
+ connection:
+ id: /subscriptions/<subscription>/resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace> /connections/<cogsearch_connection>
+ connection_type: workspace_connection
+ endpoint: <cogsearch_endpoint>
+ engine: azure-sdk
+ field_mapping:
+ content: id
+ embedding: content_vector_open_ai
+ metadata: id
+ index: <index_name>
+ kind: acs
+ semantic_configuration_name: azureml-default
+++
+```
+
+## Outputs
+
+The following JSON response is an example of the output returned by the tool. It includes the top-k scored entities. The entity follows a generic vector search result schema provided by the promptflow-vectordb SDK. For the Vector Index Search, the following fields are populated:
+
+| Field Name | Type | Description |
+| - | - | -- |
+| metadata | dict | Customized key-value pairs provided by user when creating the index |
+| page_content | string | Content of the vector chunk being used in the lookup |
+| score | float | Depends on index type defined in Vector Index. If index type is Faiss, score is L2 distance. If index type is Azure AI Search, score is cosine similarity. |
++
+
+```json
+[
+ {
+ "metadata":{
+ "answers":{},
+ "captions":{
+ "highlights":"sample_highlight1",
+ "text":"sample_text1"
+ },
+ "page_number":44,
+ "source":{
+ "filename":"sample_file1.pdf",
+ "mtime":1686329994,
+ "stats":{
+ "chars":4385,
+ "lines":41,
+ "tiktokens":891
+ },
+ "url":"sample_url1.pdf"
+ },
+ "stats":{
+ "chars":4385,"lines":41,"tiktokens":891
+ }
+ },
+ "page_content":"vector chunk",
+ "score":0.021349556744098663
+ },
+
+ {
+ "metadata":{
+ "answers":{},
+ "captions":{
+ "highlights":"sample_highlight2",
+ "text":"sample_text2"
+ },
+ "page_number":44,
+ "source":{
+ "filename":"sample_file2.pdf",
+ "mtime":1686329994,
+ "stats":{
+ "chars":4385,
+ "lines":41,
+ "tiktokens":891
+ },
+ "url":"sample_url2.pdf"
+ },
+ "stats":{
+ "chars":4385,"lines":41,"tiktokens":891
+ }
+ },
+ "page_content":"vector chunk",
+ "score":0.021349556744098663
+ },
+
+]
+
+```
+
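+If you consume this output in a downstream Python node, a hypothetical tool function (the name and threshold parameter are illustrative, not part of the product) might flatten the results into a prompt-ready string:
+
+```python
+# Hypothetical downstream node; the function name and threshold are illustrative.
+from typing import List
+
+from promptflow import tool
+
+
+@tool
+def format_chunks(search_results: List[dict], min_score: float = 0.0) -> str:
+    """Join retrieved chunks into a single context string for an LLM node."""
+    parts = []
+    for item in search_results:
+        if item.get("score", 0.0) < min_score:
+            continue
+        source = item.get("metadata", {}).get("source", {}).get("filename", "unknown")
+        parts.append(f"[{source}]\n{item.get('page_content', '')}")
+    return "\n\n".join(parts)
+```
+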
+## Next steps
+
+- [Learn more about how to create a flow](../flow-develop.md)
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 11/28/2023 Last updated : 01/18/2024 # Create and use a volume with Azure Blob storage in Azure Kubernetes Service (AKS)
For more information on Kubernetes volumes, see [Storage options for application
## Before you begin -- If you don't have a storage account that supports the NFS v3 protocol, review [NFS v3 support with Azure Blob storage][azure-blob-storage-nfs-support].- - [Enable the Blob storage CSI driver][enable-blob-csi-driver] on your AKS cluster. - To support an [Azure DataLake Gen2 storage account][azure-datalake-storage-account] when using blobfuse mount, you'll need to do the following:
For more information on Kubernetes volumes, see [Storage options for application
- To create an ADLS account using the driver in dynamic provisioning, specify `isHnsEnabled: "true"` in the storage class parameters. - To enable blobfuse access to an ADLS account in static provisioning, specify the mount option `--use-adls=true` in the persistent volume. - If you are going to enable a storage account with Hierarchical Namespace, existing persistent volumes should be remounted with `--use-adls=true` mount option.
+
+- About blobfuse cache
+ - By default, the blobfuse cache is located in the `/mnt` directory. If the VM SKU provides a temporary disk, the `/mnt` directory is mounted on the temporary disk. However, if the VM SKU doesn't provide a temporary disk, the `/mnt` directory is mounted on the OS disk. You can set the `--tmp-path=` mount option to specify a different cache directory.
## Dynamically provision a volume
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
The following deployment uses the ARM template `azurelinuxaksarm.json`.
"count": "[parameters('agentCount')]", "vmSize": "[parameters('agentVMSize')]", "osType": "[parameters('osType')]",
- "osSKU": "[parameters('osSKU')]",
- "storageProfile": "ManagedDisks"
+ "osSKU": "[parameters('osSKU')]"
} ], "linuxProfile": {
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Title: Azure Kubernetes Services (AKS) Core Basic Concepts
description: Learn about the core components that make up workloads and clusters in Kubernetes and their counterparts on Azure Kubernetes Services (AKS). Previously updated : 12/13/2023 Last updated : 01/16/2024 # Core Kubernetes concepts for Azure Kubernetes Service
If you need advanced configuration and control on your Kubernetes node container
AKS uses node resources to help the node function as part of your cluster. This usage can create a discrepancy between your node's total resources and the allocatable resources in AKS. Remember this information when setting requests and limits for user deployed pods. To find a node's allocatable resources, run:+ ```kubectl kubectl describe node [NODE_NAME] ``` To maintain node performance and functionality, AKS reserves resources on each node. As a node grows larger in resources, the resource reservation grows due to a higher need for management of user-deployed pods.
->[!NOTE]
+> [!NOTE]
> Using AKS add-ons such as Container Insights (OMS) will consume additional node resources. Two types of resources are reserved:
Two types of resources are reserved:
Reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features.
-| CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32|64|
-|||||||||
-|Kube-reserved (millicores)|60|100|140|180|260|420|740|
+| CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32 | 64 |
+|-|-|--|--|--|--|--|--|
+| Kube-reserved (millicores) | 60 | 100 | 140 | 180 | 260 | 420 | 740 |
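As a quick illustrative calculation (not an official API), allocatable CPU is roughly the host's total millicores minus the kube-reserved value from the table, ignoring any additional features that reserve more CPU:

```python
# Illustrative calculation only (not an AKS API): approximate allocatable CPU.
KUBE_RESERVED_MILLICORES = {1: 60, 2: 100, 4: 140, 8: 180, 16: 260, 32: 420, 64: 740}

def allocatable_cpu_millicores(host_cores: int) -> int:
    return host_cores * 1000 - KUBE_RESERVED_MILLICORES[host_cores]

print(allocatable_cpu_millicores(4))  # 3860 millicores before any additional reservations
```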
#### Memory
A pod is a logical resource, but application workloads run on the containers. Po
A *deployment* represents identical pods managed by the Kubernetes Deployment Controller. A deployment defines the number of pod *replicas* to create. The Kubernetes Scheduler ensures that additional pods are scheduled on healthy nodes if pods or nodes encounter problems. You can update deployments to change the configuration of pods, container image used, or attached storage. The Deployment Controller:+ * Drains and terminates a given number of replicas. * Creates replicas from the new deployment definition. * Continues the process until all replicas in the deployment are updated.
aks Deploy Confidential Containers Default Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-confidential-containers-default-policy.md
Use the following command to enable Confidential Containers (preview) by creatin
* **--cluster-name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*. * **--name**: Enter a unique name for your clusters node pool, such as *nodepool2*. * **--workload-runtime**: Specify *KataCcIsolation* to enable the feature on the node pool. Along with the `--workload-runtime` parameter, these other parameters shall satisfy the following requirements. Otherwise, the command fails and reports an issue with the corresponding parameter(s).
- * **--os-sku**: **AzureLinux*. Only the Azure Linux os-sku supports this feature in this preview release.
+ * **--os-sku**: *AzureLinux*. Only the Azure Linux os-sku supports this feature in this preview release.
* **--node-vm-size**: Any Azure VM size that is a generation 2 VM and supports nested virtualization works. For example, [Standard_DC8as_cc_v5][DC8as-series] VMs. The following example adds a user node pool to *myAKSCluster* with two nodes in *nodepool2* in the *myResourceGroup*:
For this preview release, we recommend for test and evaluation purposes to eithe
1. Grant the managed identity you created earlier, and your account, access to the key vault. [Assign][assign-key-vault-access-cli] both identities the **Key Vault Crypto Officer** and **Key Vault Crypto User** Azure RBAC roles. >[!NOTE]
- >The managed identity is the value you assigned to the `USER_ASSIGNED_IDENTITY_NAME` variable.
+ >The managed identity is the value you assign to the `USER_ASSIGNED_IDENTITY_NAME` variable.
>[!NOTE] >To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac], or [Owner][owner-rbac].
For this preview release, we recommend for test and evaluation purposes to eithe
```
-1. Prepare the RSA Encryption/Decryption key by the [bash script](https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh) for the workload from GitHub. Save the file as `setup-key.sh`.
+1. Prepare the RSA Encryption/Decryption key using the [bash script](https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh) for the workload from GitHub. Save the file as `setup-key.sh`.
1. Set the `MAA_ENDPOINT` environmental variable to match the value for the `SkrClientMAAEndpoint` from the `consumer.yaml` manifest file by running the following command.
For this preview release, we recommend for test and evaluation purposes to eithe
1. Copy and paste the external IP address of the consumer service into your browser and observe the decrypted message.
- The following resembles the output of the command:
+ The following example resembles the output of the command:
```output Welcome to Confidential Containers on AKS!
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
The following snippet is a sample deployment that creates a cluster with KEDA en
"enableAutoScaling": false, "vmSize": "Standard_D2S_v5", "osType": "Linux",
- "storageProfile": "ManagedDisks",
"type": "VirtualMachineScaleSets", "mode": "System", "maxPods": 110,
To learn more, view the [upstream KEDA docs][keda].
[kubectl]: https://kubernetes.io/docs/reference/kubectl/ [keda-scalers]: https://keda.sh/docs/scalers/ [keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
-[keda]: https://keda.sh/docs/2.12/
+[keda]: https://keda.sh/docs/2.12/
aks Manage Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-node-pools.md
When you use an Azure Resource Manager template to create and manage resources,
"count": "[variables('agentPoolProfiles').agentCount]", "vmSize": "[variables('agentPoolProfiles').agentVmSize]", "osType": "[variables('agentPoolProfiles').osType]",
- "storageProfile": "ManagedDisks",
"type": "VirtualMachineScaleSets", "vnetSubnetID": "[variables('agentPoolProfiles').vnetSubnetId]", "orchestratorVersion": "1.15.7"
aks Node Autoprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-autoprovision.md
Title: Node autoprovisioning (Preview)
-description: Learn about Azure Kubernetes Service (AKS) Node autoprovisioning
+ Title: Node autoprovisioning (preview)
+description: Learn about Azure Kubernetes Service (AKS) node autoprovisioning (preview).
Previously updated : 10/19/2023 Last updated : 01/18/2024 #Customer intent: As a cluster operator or developer, how to scale my cluster based on workload requirements and right size my nodes automatically
-# Node autoprovision
-When deploying workloads onto AKS, you need to make a decision about the node pool configuration regarding the VM size needed. As your workloads become more complex, and require different CPU, Memory and capabilities to run, the overhead of having to design your VM configuration for numerous resource requests becomes difficult.
+# Node autoprovisioning (preview)
-Node autoprovision (NAP) decides based on pending pod resource requirements the optimal VM configuration to run those workloads in the most efficient and cost effective manner.
+When you deploy workloads onto AKS, you need to make a decision about the node pool configuration regarding the VM size needed. As your workloads become more complex and require different CPU, memory, and capabilities to run, designing your VM configuration for numerous resource requests becomes difficult.
+
+Node autoprovisioning (NAP) (preview) decides, based on pending pod resource requirements, the optimal VM configuration to run those workloads in the most efficient and cost-effective manner.
NAP is based on the open-source [Karpenter](https://karpenter.sh) project, and the [AKS provider](https://github.com/Azure/karpenter) is also open source. NAP automatically deploys, configures, and manages Karpenter on your AKS clusters.
+> [!IMPORTANT]
+> Node autoprovisioning (NAP) for AKS is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Before you begin
NAP is based on the Open Source [Karpenter](https://karpenter.sh) project, and t
``` ## Limitations
-* Windows and Azure Linux node pools aren't supported yet
-* Kubelet configuration through Node pool configuration is not supported
-* NAP can only be enabled on new clusters currently
+
+- Windows and Azure Linux node pools aren't supported yet
+- Kubelet configuration through Node pool configuration is not supported
+- NAP can only be enabled on new clusters currently
## Enable node autoprovisioning
-To enable node autoprovisioning, create a new cluster using the az aks create command and set --node-provisioning-mode to "Auto". You'll also need to use overlay networking and the cilium network policy.
+To enable node autoprovisioning, create a new cluster using the `az aks create` command and set `--node-provisioning-mode` to "Auto". You'll also need to use overlay networking and the Cilium network policy.
### [Azure CLI](#tab/azure-cli)
az aks create --name karpuktest --resource-group karpuk --node-provisioning-mode
``` ### [Azure ARM](#tab/azure-arm)+ ```azurecli-interactive az deployment group create --resource-group napcluster --template-file ./nap.json ```
az deployment group create --resource-group napcluster --template-file ./nap.jso
] } ```+ + ## Node pools
-Node autoprovision uses a list of VM SKUs as a starting point to decide which is best suited for the workloads that are in a pending state. Having control over what SKU you want in the initial pool allows you to specify specific SKU families, or VM types and the maximum amount of resources a provisioner uses.
+Node autoprovision uses a list of VM SKUs as a starting point to decide which is best suited for the workloads that are in a pending state. Having control over the SKUs in the initial pool lets you specify particular SKU families or VM types and the maximum amount of resources a provisioner uses.
If you have specific VM SKUs that are reserved instances, for example, you may wish to only use those VMs as the starting pool. You can have multiple node pool definitions in a cluster, but AKS deploys a default node pool definition that you can modify: - ```yaml apiVersion: karpenter.sh/v1beta1 kind: NodePool
spec:
- D ```
-### Supported node provisioner requirements
+### Supported node provisioner requirements
-#### SKU selectors with well known labels
+#### SKU selectors with well known labels
-| Selector | Description | Example |
-|||
-| karpenter.azure.com/sku-family | VM SKU Family | D, F, L etc. |
-| karpenter.azure.com/sku-name | Explicit SKU name | Standard_A1_v2 |
-| karpenter.azure.com/sku-version | SKU version (without "v", can use 1) | 1 , 2 |
+| Selector | Description | Example |
+|--|--|--|
+| karpenter.azure.com/sku-family | VM SKU Family | D, F, L etc. |
+| karpenter.azure.com/sku-name | Explicit SKU name | Standard_A1_v2 |
+| karpenter.azure.com/sku-version | SKU version (without "v", can use 1) | 1 , 2 |
| karpenter.sh/capacity-type | VM allocation type (Spot / On Demand) | spot or on-demand | | karpenter.azure.com/sku-cpu | Number of CPUs in VM | 16 |
-| karpenter.azure.com/sku-memory | Memory in VM in MiB | 131072 |
-| karpenter.azure.com/sku-gpu-name | GPU name | A100 |
-| karpenter.azure.com/sku-gpu-manufacturer | GPU manufacturer | nvidia |
+| karpenter.azure.com/sku-memory | Memory in VM in MiB | 131072 |
+| karpenter.azure.com/sku-gpu-name | GPU name | A100 |
+| karpenter.azure.com/sku-gpu-manufacturer | GPU manufacturer | nvidia |
| karpenter.azure.com/sku-gpu-count | GPU count per VM | 2 | | karpenter.azure.com/sku-networking-accelerated | Whether the VM has accelerated networking | [true, false] | | karpenter.azure.com/sku-storage-premium-capable | Whether the VM supports Premium IO storage | [true, false] | | karpenter.azure.com/sku-storage-ephemeralos-maxsize | Size limit for the Ephemeral OS disk in Gb | 92 |
-| topology.kubernetes.io/zone | The Availability Zone(s) | [uksouth-1,uksouth-2,uksouth-3] |
-| kubernetes.io/os | Operating System (Linux only during preview) | linux |
-| kubernetes.io/arch | CPU architecture (AMD64 or ARM64) | [amd64, arm64] |
-
-
+| topology.kubernetes.io/zone | The Availability Zone(s) | [uksouth-1,uksouth-2,uksouth-3] |
+| kubernetes.io/os | Operating System (Linux only during preview) | linux |
+| kubernetes.io/arch | CPU architecture (AMD64 or ARM64) | [amd64, arm64] |
To list the VM SKU capabilities and allowed values, use the `vm list-skus` command from the Azure CLI.
az vm list-skus --resource-type virtualMachines --location <location> --query '[
``` ## Node pool limits
-By default, NAP attempts to schedule your workloads within the Azure quota you have available. You can also specify the upper limit of resources that is used by a Nodepool, specifying limits within the Node pool spec.
+
+By default, NAP attempts to schedule your workloads within the Azure quota you have available. You can also specify the upper limit of resources that a node pool uses by setting limits within the node pool spec.
``` # Resource limits constrain the total size of the cluster.
By default, NAP attempts to schedule your workloads within the Azure quota you h
memory: 1000Gi ``` - ## Node pool weights
-When you have multiple Nodepools defined, it's possible to set a preference of where a workload should be scheduled. Define the relative weight on your Node pool definitions.
+
+When you have multiple node pools defined, it's possible to set a preference for where a workload should be scheduled. Define the relative weight on your node pool definitions.
``` # Priority given to the node pool when the scheduler considers which to select. Higher weights indicate higher priority when comparing node pools.
When you have multiple Nodepools defined, it's possible to set a preference of w
weight: 10 ```
-## Kubernetes and node image updates
+## Kubernetes and node image updates
+ AKS with NAP manages the Kubernetes version upgrades and VM OS disk updates for you by default. ### Kubernetes upgrades+ Kubernetes upgrades for NAP node pools follow the control plane Kubernetes version. If you perform a cluster upgrade, your NAP nodes are updated automatically to follow the same versioning. ### Node image updates+ By default, NAP node pool virtual machines are automatically updated when a new image is available. If you wish to pin a node pool at a certain node image version, you can set the imageVersion on the node class: ```kubectl
spec:
Removing the imageVersion spec would revert the node pool to be updated to the latest node image version. - ## Node disruption When the workloads on your nodes scale down, NAP uses disruption rules on the Node pool specification to decide when and how to remove those nodes and potentially reschedule your workloads to be more efficient. You can remove a node manually using `kubectl delete node`, but NAP can also control when it should optimize your nodes. - ```yaml disruption: # Describes which types of Nodes NAP should consider for consolidation
You can remove a node manually using `kubectl delete node`, but NAP can also con
consolidateAfter: 30s ```
-## Monitoring selection events
+## Monitoring selection events
+ Node autoprovision produces cluster events that can be used to monitor deployment and scheduling decisions being made. You can view events through the Kubernetes events stream. ```
kubectl get events -A --field-selector source=karpenter -w
[az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update
-[az-feature-register]: /cli/azure/feature#az-feature-register
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
Review the [AKS service quotas and limits][service-quotas] to ensure your cluste
"name": "nodepool1", "osDiskSizeGb": 30, "osType": "Linux",
- "storageProfile": "ManagedDisks",
"vmSize": "Standard_DS2_v2" } ]
Review the [AKS service quotas and limits][service-quotas] to ensure your cluste
"name": "nodepool1", "osDiskSizeGb": 30, "osType": "Linux",
- "storageProfile": "ManagedDisks",
"vmSize": "Standard_DS2_v2", "vnetSubnetId": null }
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
New Supported Version List
## Platform support policy
-Platform support policy is a reduced support plan for certain unsupported kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components aren't supported.
+Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components aren't supported.
-Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, kubernetes v1.25 is considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26. If you are a running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
+Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.25 is considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26. If you're running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
-AKS relies on the releases and patches from [kubernetes](https://kubernetes.io/releases/), which is an Open Source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there's no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't support anything from relying on kubernetes upstream.
+AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an open-source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since no more patches are produced upstream for older versions, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't cover anything that relies on Kubernetes upstream.
This table outlines support guidelines for Community Support compared to Platform support.
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md
The following example increases the number of nodes to three in the Kubernetes c
"osDiskSizeGb": null, "osType": "Linux", "ports": null,
- "storageProfile": "ManagedDisks",
"vmSize": "Standard_D2_v2", "vnetSubnetId": null }
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
You can either [manually upgrade your cluster](#manually-upgrade-cluster) or [co
"maxPods": 110, "name": "nodepool1", "osType": "Linux",
- "storageProfile": "ManagedDisks",
"vmSize": "Standard_DS1_v2", } ],
api-center Add Metadata Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/add-metadata-properties.md
Last updated 11/07/2023 - # Tutorial: Customize metadata properties
api-center Configure Environments Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/configure-environments-deployments.md
Last updated 11/07/2023 -
api-center Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/key-concepts.md
Title: Azure API Center (preview) - Key concepts description: Key concepts of Azure API Center. API Center enables tracking APIs in a centralized location for discovery, reuse, and governance.
-
Last updated 11/08/2023 - # Azure API Center - key concepts
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
Last updated 01/12/2024 - # Customer intent: As an API program manager, I want to automate processes to register and update APIs in my Azure API center.
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
Title: Azure API Center (preview) - Overview description: Introduction to key scenarios and capabilities of Azure API Center. API Center inventories an organization's APIs for discovery, reuse, and governance at scale.
api-center Register Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis.md
Last updated 11/07/2023 - # Tutorial: Register APIs in your API inventory
api-center Set Up Api Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center.md
Last updated 11/07/2023 - # Quickstart: Create your API center
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/add-api-manually.md
Title: Add an API manually using the Azure portal | Microsoft Docs description: This tutorial shows you how to use API Management to add an API manually.
api-management Api Management Api Import Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-import-restrictions.md
Title: Restrictions and details of API formats support
description: Details of known issues and restrictions on OpenAPI, WSDL, and WADL formats support in Azure API Management.
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
Title: Capacity of an Azure API Management instance | Microsoft Docs description: This article explains what the capacity metric is and how to make informed decisions whether to scale an Azure API Management instance.
api-management Api Management Debug Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-debug-policies.md
Title: Debug Azure API Management policies in Visual Studio Code | Microsoft Docs description: Learn how to debug Azure API Management Policies using the Azure API Management Visual Studio Code extension - -
-
Last updated 09/22/2020
api-management Api Management Error Handling Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-error-handling-policies.md
Title: Error handling in Azure API Management policies | Microsoft Docs description: Learn how to respond to error conditions that may occur during the processing of requests in Azure API Management.- - ms.assetid: 3c777964-02b2-4f55-8731-8c3bd3c0ae27 Last updated 01/10/2020
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Title: Feature-based comparison of the Azure API Management tiers | Microsoft Docs description: Compare API Management tiers based on the features they offer. See a table that summarizes the key features available in each pricing tier.
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
Title: API gateway overview | Azure API Management description: Learn more about the features of the API gateway component of Azure API Management. API Management offers both Azure-managed and self-hosted gateways.
api-management Api Management Get Started Revise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-revise-api.md
Title: Tutorial - Use revisions in API Management to make non-breaking API chang
description: Follow the steps of this tutorial to learn how to make non-breaking changes using revisions in API Management.
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-api-inspector.md
Title: Tutorial - Debug APIs in Azure API Management using request tracing description: Follow the steps of this tutorial to enable tracing and inspect request processing steps in Azure API Management. Last updated 08/08/2022
api-management Api Management Howto Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-autoscale.md
Title: Configure autoscale of an Azure API Management instance | Microsoft Docs description: This article describes how to set up autoscale behavior for an Azure API Management instance.
api-management Api Management Howto Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ca-certificates.md
Title: Add a custom CA certificate - Azure API Management | Microsoft Docs description: Learn how to add a custom CA certificate in Azure API Management. You can also see instructions to delete a certificate.
api-management Api Management Howto Cache External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache-external.md
Title: Use an external cache in Azure API Management | Microsoft Docs description: Learn how to configure and use an external Redis-compatible cache in Azure API Management. Using an external cache gives you more control and flexibility than the built-in cache.
api-management Api Management Howto Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache.md
Title: Add caching to improve performance in Azure API Management | Microsoft Docs description: Learn how to improve the latency, bandwidth consumption, and web service load for API Management service calls.- - ms.assetid: 740f6a27-8323-474d-ade2-828ae0c75e7a Last updated 11/13/2020
api-management Api Management Howto Configure Custom Domain Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-custom-domain-gateway.md
Title: Configure a custom domain name for your self-hosted Azure API Management gateway | Microsoft Docs description: This topic describes the steps for configuring a custom domain name for self-hosted Azure API Management gateway.
api-management Api Management Howto Create Or Invite Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-or-invite-developers.md
Title: How to manage user accounts in Azure API Management | Microsoft Docs description: Learn how to create or invite users in Azure API Management. View additional resources to use after creating a developer account. - Last updated 02/13/2018
api-management Api Management Howto Create Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-subscriptions.md
Title: Create subscriptions in Azure API Management | Microsoft Docs description: Learn how to create subscriptions in Azure API Management. A subscription is necessary to get subscription keys that allow access to APIs. Last updated 08/03/2022
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
Title: Use API Management in a virtual network with Azure Application Gateway
description: Set up and configure Azure API Management in an internal virtual network with Azure Application Gateway (Web Application Firewall) as a front end.
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
Title: IP addresses of Azure API Management service | Microsoft Docs description: Learn how to retrieve the IP addresses of an Azure API Management service and when they change.
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
Title: Secure APIs using client certificate authentication in API Management
description: Learn how to secure access to APIs by using client certificates. You can use policy expressions to validate incoming certificates.
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md
Title: Secure API Management backend using client certificate authentication
description: Learn how to manage client certificates and secure backend services using client certificate authentication in Azure API Management.
api-management Api Management Howto Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md
Title: Authorize test console of API Management developer portal using OAuth 2.0
description: Set up OAuth 2.0 user authorization for the test console in the Azure API Management developer portal. This example uses Microsoft Entra ID as an OAuth 2.0 provider.
api-management Api Management Howto Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md
Title: Policies in Azure API Management description: Introduction to API Management policies, which change API behavior through configuration. Policy statements run sequentially on an API request or response.
api-management Api Management Howto Provision Self Hosted Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-provision-self-hosted-gateway.md
Title: Provision a self-hosted gateway in Azure API Management | Microsoft Docs description: Learn how to provision a self-hosted gateway in Azure API Management. - Last updated 03/31/2020
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
Title: Use managed identities in Azure API Management | Microsoft Docs description: Learn how to create system-assigned and user-assigned identities in API Management by using the Azure portal, PowerShell, and a Resource Manager template. Learn about supported scenarios with managed identities.
api-management Api Management In Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-in-workspace.md
Last updated 03/10/2023- # Manage APIs and other resources in your API Management workspace
api-management Api Management Key Concepts Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts-experiment.md
Title: Azure API Management - Overview and key concepts | Microsoft Docs description: Introduction to key scenarios, capabilities, and concepts of the Azure API Management service. API Management supports the full API lifecycle.
api-management Api Management Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md
Title: Azure API Management - Overview and key concepts description: Introduction to key scenarios, capabilities, and concepts of the Azure API Management service. API Management supports the full API lifecycle.
api-management Api Management Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-kubernetes.md
Title: Use Azure API Management with Microservices Deployed in Azure Kubernetes Service | Microsoft Docs description: This article describes the options of deploying API Management with AKS Last updated 12/14/2019
api-management Api Management Log To Eventhub Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-log-to-eventhub-sample.md
Title: Monitor APIs with Azure API Management, Event Hubs, and Moesif
description: Sample application demonstrating the log-to-eventhub policy by connecting Azure API Management, Azure Event Hubs and Moesif for HTTP logging and monitoring - ms.assetid: c528cf6f-5f16-4a06-beea-fa1207541a47 ms.devlang: csharp
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
Title: Azure API Management policy expressions | Microsoft Docs description: Learn about policy expressions in Azure API Management. See examples and view other available resources.
api-management Api Management Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-revisions.md
Title: Revisions in Azure API Management | Microsoft Docs description: Learn about the concept of revisions in Azure API Management.
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-role-based-access-control.md
Title: How to use Role-Based Access Control in Azure API Management | Microsoft Docs description: Learn how to use the built-in roles and create custom roles in Azure API Management
api-management Api Management Sample Cache By Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-cache-by-key.md
Title: Custom caching in Azure API Management description: Learn how to cache items by key in Azure API Management. You can modify the key by using request headers. Last updated 05/19/2022
api-management Api Management Sample Flexible Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-flexible-throttling.md
Title: Advanced request throttling with Azure API Management description: Learn how to create and apply flexible quota and rate limiting policies with Azure API Management. - ms.assetid: fc813a65-7793-4c17-8bb9-e387838193ae - Last updated 02/03/2018
api-management Api Management Sample Send Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-send-request.md
Title: Using API Management service to generate HTTP requests description: Learn to use request and response policies in API Management to call external services from your API - ms.assetid: 4539c0fa-21ef-4b1c-a1d4-d89a38c242fa - Last updated 04/14/2022
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
Title: Subscriptions in Azure API Management | Microsoft Docs description: Learn about the concept of subscriptions in Azure API Management. Consumers commonly get access to APIs by using subscriptions in Azure API Management.
api-management Api Management Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-terminology.md
Title: Azure API Management terminology | Microsoft Docs description: This article gives definitions for the terms that are specific to API Management. -
api-management Api Management Troubleshoot Cannot Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-troubleshoot-cannot-add-custom-domain.md
Title: Cannot add custom domain by using Key Vault certificate
description: Learn how to troubleshoot the issue in which you can't add a custom domain in Azure API Management by using a key vault certificate. - Last updated 07/19/2019
api-management Api Management Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-versions.md
Title: Versions in Azure API Management | Microsoft Docs description: Learn about the concept of versions in Azure API Management.
api-management Automation Manage Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/automation-manage-api-management.md
Title: Manage Azure API Management using Azure Automation description: Learn about how the Azure Automation service can be used to manage Azure API Management. - ms.assetid: 2e53c9af-f738-47f8-b1b6-593050a7c51b Last updated 02/13/2018
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
Title: Azure API Management backends | Microsoft Docs description: Learn about custom backends in Azure API Management - Last updated 01/09/2024
resource symbolicname 'Microsoft.ApiManagement/service/backends@2023-05-01-previ
description: 'Load balancer for multiple backends' type: 'Pool' protocol: 'http'
- url: 'http://unused'
+ url: 'http://google.com'
pool: { {
Include a JSON snippet similar to the following in your ARM template for a backe
"description": "Load balancer for multiple backends", "type": "Pool", "protocol": "http",
- "url": "http://unused",
+ "url": "http://google.com",
"pool": { "services": [ {
api-management Api Version Retirement Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/api-version-retirement-sep-2023.md
Title: Azure API Management - API version retirements (September 2023) | Microsoft Docs description: The Azure API Management service is retiring all API versions prior to 2021-08-01. If you use one of these API versions, you must update your tools, scripts, or programs to use the latest versions.
api-management Captcha Endpoint Change Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/captcha-endpoint-change-sep-2025.md
Title: Azure API Management CAPTCHA endpoint change (September 2025) | Microsoft Docs description: Azure API Management is updating the CAPTCHA endpoint. If your service is hosted in an Azure virtual network, you may need to update network settings to continue using the developer portal.
api-management Identity Provider Adal Retirement Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/identity-provider-adal-retirement-sep-2025.md
Title: Azure API Management identity providers configuration change (September 2025) | Microsoft Docs description: Azure API Management is updating the library used for user authentication in the developer portal. If you use Microsoft Entra ID or Azure AD B2C identity providers, you need to update application settings and identity provider configuration to use the Microsoft Authentication Library (MSAL).
api-management Legacy Portal Retirement Oct 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/legacy-portal-retirement-oct-2023.md
Title: Azure API Management - Legacy developer portal retirement (October 2023) description: Azure API Management is retiring the legacy developer portal effective 31 October 2023. If you use the legacy portal, migrate to the new developer portal.
api-management Metrics Retirement Aug 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/metrics-retirement-aug-2023.md
Title: Azure API Management - Metrics retirement (August 2023) description: Azure API Management is retiring five legacy metrics as of August 2023. If you monitor your API Management instance using these metrics, you must update your monitoring settings and alert rules to use the Requests metric.
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
Title: Upcoming Breaking Changes in Azure API Management | Microsoft Docs description: A list of all the upcoming breaking changes for Azure API Management
api-management Rp Source Ip Address Change Mar 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-mar-2023.md
Title: Azure API Management IP address change (March 2023) | Microsoft Docs description: Azure API Management is updating the source IP address of the resource provider in certain regions. If your service is hosted in a Microsoft Azure Virtual Network, you may need to update network settings to continue managing your service.
api-management Rp Source Ip Address Change Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-sep-2023.md
Title: Azure API Management IP address change (September 2023) | Microsoft Docs description: Azure API Management is updating the source IP address of the resource provider in Switzerland North. If your service is hosted in a Microsoft Azure virtual network, you may need to update network settings to continue managing your service.
api-management Self Hosted Gateway V0 V1 Retirement Oct 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md
Title: Azure API Management - Self-hosted gateway v0/v1 retirement (October 2023) | Microsoft Docs description: Azure API Management is retiring the v0 and v1 versions of the self-hosted gateway container image, effective 1 October 2023. If you've deployed one of these versions, you must migrate to the v2 version of the self-hosted gateway.
api-management Stv1 Platform Retirement August 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-august-2024.md
Title: Azure API Management - stv1 platform retirement (August 2024) | Microsoft Docs description: Azure API Management will retire the stv1 compute platform effective 31 August 2024. Instances hosted on the stv1 platform must be migrated to the stv2 platform.
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
Title: Configure custom domain name for Azure API Management instance
description: How to configure a custom domain name and choose certificates for the endpoints of your Azure API Management instance.
api-management Credentials Process Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-process-flow.md
Last updated 11/14/2023 - # OAuth 2.0 connections in credential manager - process details and flows
api-management Edit Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/edit-api.md
Title: Edit an API with the Azure portal | Microsoft Docs description: Learn how to use API Management to edit an API. Add, delete, or rename operations in the APIM instance, or edit the API's swagger. - Last updated 01/19/2022
api-management How To Configure Cloud Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-cloud-metrics-logs.md
Title: Configure cloud metrics and logs for Azure API Management self-hosted gateway | Microsoft Docs description: Learn how to configure cloud metrics and logs for Azure API Management self-hosted gateway - Last updated 04/30/2020
api-management How To Configure Local Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md
Title: Configure local metrics and logs for Azure API Management self-hosted gateway | Microsoft Docs description: Learn how to configure local metrics and logs for Azure API Management self-hosted gateway on a Kubernetes cluster - Last updated 05/11/2021
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
Title: Protect APIs in Azure API Management with Active Directory B2C
description: Protect a serverless API with OAuth 2.0 by using Azure Active Directory B2C, Azure API Management, and Easy Auth to be called from a JavaScript SPA using the PKCE enabled SPA Auth Flow. - Last updated 02/18/2021
api-management Import Api From Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-odata.md
Last updated 01/03/2024 - # Import an OData API
api-management Import App Service As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-app-service-as-api.md
Title: Import Azure Web App to Azure API Management | Microsoft Docs description: This article shows you how to use Azure API Management to import a web API hosted in Azure App Service.
api-management Import Container App With Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-container-app-with-oas.md
Title: Import Azure Container App to Azure API Management | Microsoft Docs description: This article shows you how to use Azure API Management to import a web API hosted in Azure Container Apps.
api-management Import Function App As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-function-app-as-api.md
Title: Import an Azure Function App as an API in API Management
description: This article shows you how to import an Azure Function App into Azure API Management as an API. - Last updated 04/16/2021
api-management Import Logic App As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-logic-app-as-api.md
Title: Import a Logic App as an API with the Azure portal | Microsoft Docs description: This article shows you how to use API Management to import a Logic App (Consumption) resource as an API. - Last updated 04/16/2021
api-management Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/observability.md
Title: Observability in Azure API Management | Microsoft Docs description: Overview of all API observability and monitoring options in Azure API Management. Last updated 06/01/2020
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
Title: Reuse policy configurations in Azure API Management | Microsoft Docs description: Learn how to create and manage reusable policy fragments in Azure API Management. Policy fragments are XML elements containing policy configurations that can be included in any policy definition.
api-management Sap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sap-api.md
Last updated 07/21/2023- # Import SAP OData metadata as an API
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
Title: Self-hosted gateway migration guide - Azure API Management description: Learn how to migrate the Azure API Management self-hosted gateway to v2.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Title: Self-hosted gateway overview | Azure API Management description: Learn how self-hosted gateway feature of Azure API Management helps organizations manage APIs in hybrid and multicloud environments.
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
Title: How to set or edit Azure API Management policies | Microsoft Docs description: Configure policies at different scopes in an Azure API Management instance using the policy editor in the Azure portal.
api-management V2 Service Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md
Title: Azure API Management - v2 tiers (preview)
description: Introduction to key scenarios, capabilities, and concepts of the v2 tiers (SKUs) of the Azure API Management service. The v2 tiers are in preview.
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
Title: Azure API Management policy reference - validate-azure-ad-token | Microsoft Docs description: Reference for the validate-azure-ad-token policy available for use in Azure API Management. Provides policy usage, settings, and examples.
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
Title: Azure API Management policy reference - validate-jwt | Microsoft Docs description: Reference for the validate-jwt policy available for use in Azure API Management. Provides policy usage, settings, and examples.
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
Last updated 09/14/2023 - # Use a virtual network to secure inbound and outbound traffic for Azure API Management
api-management Visualize Using Managed Grafana Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/visualize-using-managed-grafana-dashboard.md
Last updated 10/17/2022 - # Visualize API Management monitoring data using a Managed Grafana dashboard
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
Title: Workspaces in Azure API Management | Microsoft Docs description: Learn about workspaces (preview) in Azure API Management. Workspaces allow decentralized API development teams to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure.
app-service Overview App Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-app-gateway-integration.md
Title: Application Gateway integration - Azure App Service | Microsoft Learn description: Learn how Application Gateway integrates with Azure App Service. - ms.assetid: 073eb49c-efa1-4760-9f0c-1fecd5c251cc Last updated 09/29/2023
application-gateway Create Gateway Internal Load Balancer App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-gateway-internal-load-balancer-app-service-environment.md
Title: Troubleshoot an Application Gateway in Azure – ILB ASE | Microsoft Docs description: Learn how to troubleshoot an application gateway by using an Internal Load Balancer with an App Service Environment in Azure
-tags: ''
- Last updated 06/10/2022
azure-app-configuration Concept Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md
description: Details of the geo-replication feature in Azure App Configuration.
- Last updated 08/01/2022
azure-app-configuration Enable Dynamic Configuration Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-azure-functions-csharp.md
Title: Tutorial for using Azure App Configuration dynamic configuration in an Azure Functions app description: In this tutorial, you learn how to dynamically update the configuration data for Azure Functions apps - - ms.devlang: csharp Last updated 09/14/2022
azure-app-configuration Enable Dynamic Configuration Dotnet Core Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh.md
Title: "Tutorial: Use dynamic configuration using push refresh in a .NET Core ap
description: In this tutorial, you learn how to dynamically update the configuration data for .NET Core apps using push refresh - - ms.devlang: csharp
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
Title: "Tutorial: Use dynamic configuration in a .NET app"
description: In this tutorial, you learn how to dynamically update the configuration data for .NET apps - - ms.devlang: csharp
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
Title: "Tutorial: Use dynamic configuration using push refresh in a single insta
description: In this tutorial, you learn how to dynamically update the configuration data for a Java Spring app using push refresh - - ms.devlang: java
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
Title: Use Event Grid for App Configuration data change notifications
description: Learn how to use Azure App Configuration event subscriptions to send key-value modification events to a web endpoint ms.devlang: csharp
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
Title: Azure App Configuration best practices | Microsoft Docs description: Learn best practices while using Azure App Configuration. Topics covered include key groupings, key-value compositions, App Configuration bootstrap, and more. - Last updated 12/21/2023
azure-app-configuration Howto Leverage Json Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-leverage-json-content-type.md
description: Learn how to use JSON content-type for key-values ms.devlang: azurecli
azure-app-configuration Manage Feature Flags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/manage-feature-flags.md
Title: "Tutorial: Use Azure App Configuration to manage feature flags"
description: In this tutorial, you learn how to manage feature flags separately from your application by using Azure App Configuration.
azure-app-configuration Quickstart Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-app.md
Title: Quickstart for Azure App Configuration with .NET Framework | Microsoft Docs description: In this article, create a .NET Framework app with Azure App Configuration to centralize storage and management of application settings separate from your code. ms.devlang: csharp
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
Title: Quickstart for adding feature flags to .NET Framework apps | Microsoft Docs description: A quickstart for adding feature flags to .NET Framework apps and managing them in Azure App Configuration ms.devlang: csharp .NET- Last updated 3/20/2023 #Customer intent: As a .NET Framework developer, I want to use feature flags to control feature availability quickly and confidently.
azure-app-configuration Reload Key Vault Secrets Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reload-key-vault-secrets-dotnet.md
description: Learn how to set up your application to automatically reload secrets and certificates from Key Vault. ms.devlang: csharp
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
Title: Tutorial for using feature flags in a .NET app | Microsoft Docs description: In this tutorial, you learn how to implement feature flags in .NET Core apps. - - ms.devlang: csharp Last updated 07/11/2023
azure-app-configuration Use Feature Flags Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-spring-boot.md
Title: Tutorial for using feature flags in a Spring Boot app - Azure App Configuration | Microsoft Docs description: In this tutorial, you learn how to implement feature flags in Spring Boot apps. - - ms.devlang: java Last updated 09/27/2023
azure-app-configuration Use Key Vault References Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
Title: Tutorial for using Azure App Configuration Key Vault references in an ASP.NET Core app | Microsoft Docs description: In this tutorial, you learn how to use Azure App Configuration's Key Vault references from an ASP.NET Core app - ms.devlang: csharp Last updated 07/11/2023
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
Title: Tutorial for using Azure App Configuration Key Vault references in a Java Spring Boot app | Microsoft Docs description: In this tutorial, you learn how to use Azure App Configuration's Key Vault references from a Java Spring Boot app - - ms.devlang: java Last updated 05/02/2022
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Download for [Windows](https://download.microsoft.com/download/f/6/4/f64c574f-d3
- Agent installation and upgrades apply more restrictive permissions to the agent's data directories on Windows - Improved reliability when detecting Azure Stack HCI as a cloud provider - Removed the log zipping feature introduced in version 1.37 for extension manager and machine configuration agent logs. Log files will still be rotated automatically.
+- Removed the scheduled tasks for automatic agent upgrades (introduced in agent version 1.30). We will reintroduce this functionality when the automatic upgrade mechanism is available.
- Resolved [Azure Connected Machine Agent Elevation of Privilege Vulnerability](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35624) ## Version 1.36 - November 2023
azure-arc Deliver Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md
You have 8 Windows Server 2012 R2 Standard instances, each with 8 physical cores
> You needed a regular production license to start with, and you'll be billed only for the production cores. You did not and should not provision non-production cores in your license. >
+## Upgrading from Windows Server 2012/2012 R2
+
+When upgrading a Windows Server 2012/2012 R2 machine to Windows Server 2016 or later, it's not necessary to remove the Connected Machine agent from the machine. The new operating system will be visible for the machine in Azure within a few minutes of upgrade completion. Upgraded machines no longer require ESUs and are no longer eligible for them. Any ESU license associated with the machine is not automatically unlinked from the machine. See [Unlink a license](api-extended-security-updates.md#unlink-a-license) for instructions on doing so manually.
<!--
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Actions of the [zypper](https://en.opensuse.org/Portal:Zypper) command, such as
### Automatic agent upgrades
-The Azure Connected Machine agent will support automatic and manual upgrades of the agent, initiated by Azure, in an upcoming release. To facilitate this capability, the agent enables a scheduled task on Windows or cron job on Linux that runs daily to see if the agent should be upgraded. The scheduler job will be installed when you install agent versions 1.30 or higher. While the scheduler job is currently enabled, the complete automatic upgrade experience is not yet available, so no changes will be made to your system even if a newer version of the Azure Connected Machine agent is available.
-
-To view these scheduler jobs in Windows through PowerShell, run the following command:
-
-```powershell
-schtasks /query /TN azcmagent
-```
-
-To view these scheduler jobs in Windows through Task Scheduler:
--
-To view these scheduler jobs in Linux, run the following command:
-
-```
-cat /etc/cron.d/azcmagent_autoupgrade
-```
-
-To opt-out of any future automatic upgrades or the scheduler jobs, execute the following Azure CLI commands:
-
-For Windows:
-
-```powershell
-az rest --method patch --url https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.HybridCompute/machines/<machineName>?api-version=2022-12-27-preview --resource https://management.azure.com/ --headers Content-Type=application/json --body '{\"properties\": {\"agentUpgrade\": {\"enableAutomaticUpgrade\": false}}}'
-```
-
-For Linux:
-
-```bash
-az rest --method patch --url https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.HybridCompute/machines/<machineName>?api-version=2022-12-27-preview --resource https://management.azure.com/ --headers Content-Type=application/json --body '{"properties": {"agentUpgrade": {"enableAutomaticUpgrade": false}}}'
-```
+The Azure Connected Machine agent doesn't automatically upgrade itself when a new version is released. You should include the latest version of the agent with your scheduled patch cycles.
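For example, a patch-cycle step might check the installed agent version and pull in the latest package (a sketch for Debian/Ubuntu-based servers; it assumes the Microsoft package repository configured during onboarding, and other distributions would use their own package manager):

```bash
# Show the currently installed Connected Machine agent version.
azcmagent version

# Upgrade the agent package to the latest available version (Debian/Ubuntu example).
sudo apt-get update && sudo apt-get install --only-upgrade -y azcmagent
```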
## Renaming an Azure Arc-enabled server resource
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
When Azure Arc-enabled servers is configured on the VM, you see two representati
If your Azure VM is running CentOS, Red Hat, or SUSE Linux Enterprise Server (SLES), perform the following steps to configure firewalld: ```bash
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -p tcp -d 169.254.169.254 -j DROP
+ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -p tcp -d 169.254.169.254 -j REJECT
sudo firewall-cmd --reload ``` For other distributions, consult your firewall docs or configure a generic iptables rule with the following command: ```bash
- sudo iptables -A OUTPUT -d 169.254.169.254 -j DROP
+ sudo iptables -A OUTPUT -d 169.254.169.254 -j REJECT
``` > [!NOTE]
azure-arc Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/run-command.md
Run Command on Azure Arc-enabled servers supports the following operations:
## Example scenarios
-Suppose you have an Azure Arc-enabled server called "2012DatacenterServer1" in resource group "ContosoRG" with Subscription ID "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa". Consider a scenario where you need to provide remote access to an endpoint for Windows Server 2012 / R2 servers. Access to Extended Security Updates enabled by Azure Arc requires access to the endpoint `microsoft.com/pkiops/certs`. You need to remotely configure a firewall rule that allows access to this endpoint. Use Run Command in order to allow connectivity to this endpoint.
+Suppose you have an Azure Arc-enabled server called "2012DatacenterServer1" in resource group "ContosoRG" with Subscription ID "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa". Consider a scenario where you need to provide remote access to an endpoint for Windows Server 2012 / R2 servers. Access to Extended Security Updates enabled by Azure Arc requires access to the endpoint `www.microsoft.com/pkiops/certs`. You need to remotely configure a firewall rule that allows access to this endpoint. Use Run Command in order to allow connectivity to this endpoint.
### Example 1: Endpoint access with Run Command
-Start off by creating a Run Command script to provide endpoint access to the `microsoft.com/pkiops/certs` endpoint on your target Arc-enabled server using the PUT operation.
+Start off by creating a Run Command script to provide endpoint access to the `www.microsoft.com/pkiops/certs` endpoint on your target Arc-enabled server using the PUT operation.
To directly provide the script in line, use the following operation:
PUT https://management.azure.com/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaa
"parameters": [ { "name": "ruleName",
- "value": " Allow access to microsoft.com/pkiops/certs"
+ "value": " Allow access to www.microsoft.com/pkiops/certs"
}, { "name": "endpoint",
- "value": ""microsoft.com/pkiops/certs"
+ "value": "www.microsoft.com/pkiops/certs"
}, { "name": "port",
PUT https://management.azure.com/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaa
"parameters": [ { "name": "ruleName",
- "value": " Allow access to microsoft.com/pkiops/certs"
+ "value": " Allow access to www.microsoft.com/pkiops/certs"
}, { "name": "endpoint",
- "value": ""microsoft.com/pkiops/certs"
+ "value": "www.microsoft.com/pkiops/certs"
}, { "name": "port",
azure-arc Troubleshoot Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md
Title: How to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 01/02/2024 Last updated : 01/19/2024
If you're unable to enable this service offering, review the resource providers
- **Microsoft.GuestConfiguration:** Enables Guest Configuration policies, which are used to assess and enforce configurations on your Arc-enabled servers for compliance and security. -- **Microsoft.UpdateManagement:** This resource provider is required for Azure Update Management, which is used to manage updates and patches on your on-premises servers, including ESU updates.
+- **Microsoft.Compute:** This resource provider is required for Azure Update Management, which is used to manage updates and patches on your on-premises servers, including ESU updates.
- **Microsoft.Security:** Enabling this resource provider is crucial for implementing security-related features and configurations for both Azure Arc and on-premises servers.
If you're unable to enable this service offering, review the resource providers
Ensure that both the licensing package and servicing stack update (SSU) are downloaded for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45). Ensure you are following all of the networking prerequisites as recorded at [Prepare to deliver Extended Security Updates for Windows Server 2012](prepare-extended-security-updates.md?tabs=azure-cloud#networking). -
-### Error: Trying to check IMDS again (HRESULT 12002)
+### Error: Trying to check IMDS again (HRESULT 12002 or 12029)
If installing the Extended Security Update enabled by Azure Arc fails with errors such as "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12029)" or "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12002)", you may need to update the intermediate certificate authorities trusted by your computer using one of the following two methods:
-1. Configure your network firewall and/or proxy server to allow access from the Windows Server 2012 (R2) machines to `https://microsoft.com/pkiops/certs`. This will allow the machine to automatically retrieve updated intermediate certificates as required and is Microsoft's preferred approach.
-1. Download all intermediate CAs from a machine with internet access, copy them to each Windows Server 2012 (R2) machine, and import them to the machine's intermediate certificate authority store:
- 1. Download the 4 intermediate CA certificates:
- 1. [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt)
- 1. [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002%20-%20xsign.crt)
- 1. [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005%20-%20xsign.crt)
- 1. [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006%20-%20xsign.crt)
- 1. Copy the certificate files to your Windows Server 2012 (R2) machine.
- 1. Run the following commands in an elevated command prompt or PowerShell session to add the certificates to the "Intermediate Certificate Authorities" store for the local computer. The command should be run from the same directory as the certificate files. The commands are idempotent and won't make any changes if you've already imported the certificate:
-
- ```powershell
- certstore -addstore CA "Microsoft Azure TLS Issuing CA 01 - xsign.crt"
- certstore -addstore CA "Microsoft Azure TLS Issuing CA 02 - xsign.crt"
- certstore -addstore CA "Microsoft Azure TLS Issuing CA 05 - xsign.crt"
- certstore -addstore CA "Microsoft Azure TLS Issuing CA 06 - xsign.crt"
- ```
-
-After allowing the servers to reach the PKI URL or manually importing the intermediate certificates, try installing the Extended Security Updates again using Windows Update or your preferred patch management software. You may need to reboot your computer for the changes to take effect.
+#### Option 1: Allow access to the PKI URL
+
+Configure your network firewall and/or proxy server to allow access from the Windows Server 2012 (R2) machines to `http://www.microsoft.com/pkiops/certs` and `https://www.microsoft.com/pkiops/certs` (both TCP 80 and 443). This will enable the machines to automatically retrieve any missing intermediate CA certificates from Microsoft.
+
+Once the network changes are made to allow access to the PKI URL, try installing the Windows updates again. You may need to reboot your computer for the automatic installation of certificates and validation of the license to take effect.
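To confirm a Windows Server 2012 (R2) machine can actually reach the endpoint after the firewall or proxy change, you can request one of the intermediate CA files listed under Option 2 directly. A minimal sketch; the specific certificate URL is just one of the four published files:

```powershell
# Request one of the published intermediate CA files over HTTPS.
# A 200 status code indicates the machine can reach www.microsoft.com/pkiops/certs.
$uri = "https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt"
(Invoke-WebRequest -Uri $uri -UseBasicParsing).StatusCode
```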
+
+#### Option 2: Manually download and install the intermediate CA certificates
+
+If you're unable to allow access to the PKI URL from your servers, you can manually download and install the certificates on each machine.
+
+1. On any computer with internet access, download these intermediate CA certificates:
+ 1. [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt)
+ 1. [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002%20-%20xsign.crt)
+ 1. [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005%20-%20xsign.crt)
+ 1. [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006%20-%20xsign.crt)
+1. Copy the certificate files to your Windows Server 2012 (R2) machines.
+1. Run the following commands in an elevated command prompt or PowerShell session to add the certificates to the "Intermediate Certificate Authorities" store for the local computer. The commands should be run from the same directory as the certificate files. They're idempotent and won't make any changes if you've already imported the certificates:
+
+ ```
+ certutil -addstore CA "Microsoft Azure TLS Issuing CA 01 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure TLS Issuing CA 02 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure TLS Issuing CA 05 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure TLS Issuing CA 06 - xsign.crt"
+ ```
+
+1. Try installing the Windows updates again. You may need to reboot your computer for the validation logic to recognize the newly imported intermediate CA certificates.
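After importing, you can optionally confirm the certificates landed in the intermediate store before retrying the updates. A minimal sketch using the certificate provider; the subject filter is an assumption based on the certificate names above:

```powershell
# List intermediate CA certificates whose subject matches the imported Azure TLS issuing CAs.
Get-ChildItem Cert:\LocalMachine\CA |
    Where-Object { $_.Subject -like "*Microsoft Azure TLS Issuing CA*" } |
    Select-Object Subject, Thumbprint, NotAfter
```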
### Error: Not eligible (HRESULT 1633) If you encounter the error "ESU: not eligible HRESULT_FROM_WIN32(1633)", follow these steps:
-`Remove-Item "$env:ProgramData\AzureConnectedMachineAgent\Certs\license.json" -Force`
-
-`Restart-Service himds`
+```powershell
+Remove-Item "$env:ProgramData\AzureConnectedMachineAgent\Certs\license.json" -Force
+Restart-Service himds
+```
If you have other issues receiving ESUs after successfully enrolling the server through Arc-enabled servers, or you need additional information related to issues affecting ESU deployment, see [Troubleshoot issues in ESU](/troubleshoot/windows-client/windows-7-eos-faq/troubleshoot-extended-security-updates-issues).
azure-arc Enable Virtual Hardware Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-virtual-hardware-scvmm.md
- # Enable virtual hardware and VM CRUD capabilities in an SCVMM machine with Arc agent installed
azure-arc Enable Virtual Hardware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-virtual-hardware.md
- # Enable virtual hardware and VM CRUD capabilities in a machine with Arc agent installed
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
To secure the storage for an existing function app:
| Setting name | Value | Comment | |-|-|-| | `AzureWebJobsStorage`| Storage connection string | This is the connection string for a secured storage account. |
- | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account. This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
- | `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside. This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
+ | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account. This setting is required for Consumption and Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
+ | `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside. This setting is required for Consumption and Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
| `WEBSITE_CONTENTOVERVNET` | 1 | A value of 1 enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. | 1. Select **Save** to save the application settings. Changing app settings causes the app to restart.
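If you prefer to script these changes instead of using the portal, the same settings can be applied with the Azure CLI. A minimal sketch; the app name, resource group, connection string, and file share name are placeholders rather than values from this article:

```powershell
# Apply the storage-related settings on the function app.
# <APP_NAME>, <RESOURCE_GROUP>, <storage-connection-string>, and <file-share-name>
# are placeholders to replace with your own values before running.
az functionapp config appsettings set `
    --name <APP_NAME> `
    --resource-group <RESOURCE_GROUP> `
    --settings "AzureWebJobsStorage=<storage-connection-string>" `
               "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=<storage-connection-string>" `
               "WEBSITE_CONTENTSHARE=<file-share-name>" `
               "WEBSITE_CONTENTOVERVNET=1"
```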
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
def hello_orchestrator(context):
# Activity @myApp.activity_trigger(input_name="city") def hello(city: str):
- return "Hello " + city
+ return f"Hello {city}"
``` Review the table below for an explanation of each function and its purpose in the sample.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Azure Files doesn't support using managed identity when accessing the file share
## WEBSITE\_CONTENTOVERVNET
-A value of `1` enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. To learn more, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network).
+A value of `1` enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. This setting is required only when using `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`. To learn more, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network).
|Key|Sample value| |||
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
Title: RabbitMQ output bindings for Azure Functions description: Learn to send RabbitMQ messages from Azure Functions. Last updated 01/21/2022
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
Title: RabbitMQ trigger for Azure Functions description: Learn how to run an Azure Function when a RabbitMQ message is created. Last updated 01/21/2022
azure-functions Functions Bindings Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq.md
Title: Azure RabbitMQ bindings for Azure Functions description: Learn to send Azure RabbitMQ triggers and bindings in Azure Functions. Last updated 11/15/2021
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md
In the preceding step, if you can't find a storage account connection string, it
* Required: * [`AzureWebJobsStorage`](./functions-app-settings.md#azurewebjobsstorage)
-* Required for Premium plan functions:
+* Required for Elastic Premium and Consumption plan functions:
* [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](./functions-app-settings.md) * [`WEBSITE_CONTENTSHARE`](./functions-app-settings.md)
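To quickly confirm which of these settings exist on an app, you can list its application settings and filter on the required keys. A minimal sketch with placeholder app and resource group names:

```powershell
# List only the required storage-related settings, if present, on the function app.
# <APP_NAME> and <RESOURCE_GROUP> are placeholders.
az functionapp config appsettings list `
    --name <APP_NAME> `
    --resource-group <RESOURCE_GROUP> `
    --query "[?name=='AzureWebJobsStorage' || name=='WEBSITE_CONTENTAZUREFILECONNECTIONSTRING' || name=='WEBSITE_CONTENTSHARE'].name" `
    --output table
```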
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
- devx-track-dotnet - ignite-2023 Previously updated : 08/2/2023 Last updated : 01/17/2024 # Migrate .NET apps from the in-process model to the isolated worker model
var host = new HostBuilder()
host.Run(); ```
+This example supports [ASP.NET Core integration] so that you can use normal .NET 8 types. To use the built-in Functions HTTP types instead, replace the call to `ConfigureFunctionsWebApplication` with a call to `ConfigureFunctionsWorkerDefaults`.
+ # [.NET Framework 4.8](#tab/netframework48) ```csharp
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-glossary-cloud-terminology.md
Title: Azure glossary - Azure dictionary
description: Use the Azure glossary to understand cloud terminology on the Azure platform. This short Azure dictionary provides definitions for common cloud terms for Azure. keywords: Azure dictionary, cloud terminology, Azure glossary, terminology definitions, cloud terms - ms.assetid: d7ac12f7-24b5-4bcd-9e4d-3d76fbd8d297
azure-government Documentation Accelerate Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/documentation-accelerate-compliance.md
description: Provides an overview of resources for Development, Automation, and
cloud: gov Last updated 05/30/2023
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md
Title: Azure AI services on Azure Government
description: Guidance for developing Azure AI services applications for Azure Government cloud: gov Last updated 08/30/2021
azure-government Documentation Government Connect Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-connect-vs.md
Title: Connect to Azure Government with Visual Studio
description: This quickstart shows how to connect to Azure Government with Visual Studio cloud: gov Last updated 03/09/2021
azure-government Documentation Government Csp Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-application.md
description: Provides an overview of the process to become an Azure Government C
cloud: gov Last updated 05/30/2023
azure-government Documentation Government Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-extension.md
cloud: gov
Last updated 08/31/2021
azure-government Documentation Government Get Started Connect With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-with-cli.md
Title: Connect to Azure Government with Azure CLI
description: This quickstart shows you how to connect to Azure Government and create a web app in Azure Government with Azure CLI cloud: gov Last updated 03/09/2021
azure-government Documentation Government How To Access Enterprise Agreement Billing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-how-to-access-enterprise-agreement-billing-account.md
Title: Access your EA billing account in the Azure Government portal | Microsoft
description: This article describes how to Access your EA billing account in the Azure Government portal. cloud: gov Last updated 11/08/2023
azure-government Documentation Government Howto Deploy Webandmobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-howto-deploy-webandmobile.md
Title: Deploy an Azure App Services app| Microsoft Docs
description: This tutorial describes how to deploy a Web App, API App, or Mobile App to Azure Government using Visual Studio and Azure SDK. cloud: gov Last updated 08/10/2018
azure-government Documentation Government Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-image-gallery.md
cloud: gov
Last updated 08/31/2021
azure-government Documentation Government Manage Marketplace Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-manage-marketplace-partners.md
Title: Publishing to Azure Government Marketplace
description: This article provides guidance on publishing solutions to Azure Government Marketplace. cloud: gov Last updated 08/31/2021
azure-government Documentation Government Manage Oms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-manage-oms.md
Title: Azure Government Azure Monitor logs | Microsoft Docs
description: This article describes how Azure Monitor logs is applicable to US Government agencies and solution providers cloud: gov Last updated 12/05/2016
azure-government Documentation Government Quickstarts Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-quickstarts-vm.md
Title: Create Virtual Machines in Azure Government | Microsoft Docs
description: This tutorial shows steps for creating Virtual Machines with Azure Government cloud: gov Last updated 08/10/2018
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Last updated 10/28/2021
- # Tutorial: Migrate web service from Bing Maps
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
Last updated 12/1/2021
- # Tutorial: Migrate from Bing Maps to Azure Maps
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Last updated 09/28/2023
- # Tutorial: Migrate web service from Google Maps
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
Last updated 09/23/2020
- # Tutorial: Migrate from Google Maps to Azure Maps
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
Last updated 12/07/2020
- # Azure Maps community - Open-source projects
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md
The following tables provide information about what kind of traffic information
| Country/Region | Incidents | Flow | ||::|:-:|
-| Belarus | ✓ | ✓ |
| Belgium | ✓ | ✓ | | Bosnia and Herzegovina | ✓ | ✓ | | Bulgaria | ✓ | ✓ |
The following tables provide information about what kind of traffic information
| Poland | ✓ | ✓ | | Portugal | ✓ | ✓ | | Romania | ✓ | ✓ |
-| Russian Federation | ✓ | ✓ |
| San Marino | ✓ | ✓ | | Serbia | ✓ | ✓ | | Slovakia | ✓ | ✓ |
The following tables provide information about what kind of traffic information
| Sweden | ✓ | ✓ | | Switzerland | ✓ | ✓ | | Türkiye | ✓ | ✓ |
-| Ukraine | ✓ | ✓ |
| United Kingdom | ✓ | ✓ | ## Middle East & Africa
The following tables provide information about what kind of traffic information
|-|::|:-:| | Bahrain | ✓ | ✓ | | Egypt | ✓ | ✓ |
-| Israel | ✓ | ✓ |
| Kenya | ✓ | ✓ | | Kuwait | ✓ | ✓ | | Lesotho | ✓ | ✓ |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend that you always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| December 2023 |**Windows** <ul><li>Support new settings that control agent disk size</li><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4|
+| December 2023 |**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4|
| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11| | September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | | August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ui>|1.19.0| None |
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
Title: Application Insights telemetry data model
description: This article describes the Application Insights telemetry data model including request, dependency, exception, trace, event, metric, PageView, and context. documentationcenter: .net-- ibiza Last updated 09/25/2023
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Telemetry emitted by Azure SDKS is automatically [collected](https://github.com/
> The Azure Monitor OpenTelemetry Distros include custom mapping and logic to automatically emit [Application Insights standard metrics](standard-metrics.md). > [!TIP]
-> The OpenTelemetry-based offerings currently emit all OpenTelemetry metrics as [Custom Metrics](opentelemetry-add-modify.md#add-custom-metrics) and [Performance Counters](standard-metrics.md#performance-counters) in Metrics Explorer. For .NET, Node.js, and Python, whatever you set as the meter name becomes the metrics namespace.
+> All OpenTelemetry metrics, whether automatically collected from instrumentation libraries or manually collected through custom code, are currently considered Application Insights "custom metrics" for billing purposes. [Learn More](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
### Add a community instrumentation library
The following table represents the currently supported custom telemetry types:
### Add custom metrics
-> [!NOTE]
-> Custom Metrics are under preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt-in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
+In this context, custom metrics refers to manually instrumenting your code to collect additional metrics beyond what the OpenTelemetry Instrumentation Libraries automatically collect.
The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
There are several [ways of sending custom metrics from the Application Insights
## Custom metrics dimensions and pre-aggregation
-All metrics that you send by using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. Although the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../cost-usage.md#usage-and-estimated-costs) tab by selecting the **Enable alerting on custom metric dimensions** checkbox.
+All metrics that you send using [OpenTelemetry](opentelemetry-add-modify.md), [trackMetric](./api-custom-events-metrics.md#trackmetric), or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both the logs and metrics stores. These metrics can be found in the customMetrics table in Application Insights and in Metrics Explorer under the custom metric namespace called "azure.applicationinsights". Although the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. Retaining dimensions of custom metrics is a preview feature that can be turned on from the [usage and estimated cost](../cost-usage.md#usage-and-estimated-costs) tab by selecting the **Enable alerting on custom metric dimensions** checkbox.
:::image type="content" source="./media/pre-aggregated-metrics-log-metrics/001-cost.png" lightbox="./media/pre-aggregated-metrics-log-metrics/001-cost.png" alt-text="Screenshot that shows usage and estimated costs.":::
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
Last updated 03/08/2023- # Overview of autoscale in Azure
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
Title: Azure Monitor data platform
description: Overview of the Azure Monitor data platform and collection of observability data. Last updated 08/09/2023
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
Title: Metrics in Azure Monitor | Microsoft Docs description: Learn about metrics in Azure Monitor, which are lightweight monitoring data capable of supporting near real-time scenarios. - Last updated 04/25/2023
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
Title: Azure Monitor Logs description: Learn the basics of Azure Monitor Logs, which is used for advanced analysis of monitoring data. Last updated 09/14/2023
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Title: Log Analytics workspace overview description: Overview of Log Analytics workspace, which stores data for Azure Monitor Logs. Last updated 10/24/2023
azure-monitor Tables Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tables-feature-support.md
description: Reference for tables that support ingestion-time transformations in
Last updated 07/10/2022
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Title: Azure Monitor overview description: Overview of Microsoft services and functionalities that contribute to a complete monitoring strategy for your Azure services and applications. - Last updated 12/07/2023
azure-monitor Workbook Templates Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbook-templates-move-region.md
Title: Move an Azure Workbook template to another region
description: How to move a workbook template to a different region - ibiza
azure-monitor Workbooks Access Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-access-troubleshooting-guide.md
Title: Access Troubleshooting Guides in Azure Workbooks description: Learn how to access the deprecated troubleshooting guides in Azure Workbooks. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Automate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-automate.md
Title: Azure Monitor workbooks and Azure Resource Manager templates description: Simplify complex reporting with prebuilt and custom parameterized Azure Monitor workbooks deployed via Azure Resource Manager templates. - ibiza
azure-monitor Workbooks Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-bring-your-own-storage.md
Title: Azure Monitor workbooks bring your own storage description: Learn how to secure your workbook by saving the workbook content to your storage. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Chart Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
Title: Azure Monitor workbook chart visualizations description: Learn about all the Azure Monitor workbook chart visualizations. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Composite Bar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-composite-bar.md
Title: Azure Workbooks composite bar renderer description: Learn about all the Azure Workbooks composite bar renderer visualizations. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Dropdowns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-dropdowns.md
Title: Azure Monitor workbook dropdown parameters description: Use dropdown parameters to simplify complex reporting with prebuilt and custom parameterized workbooks. -- ibiza Last updated 06/21/2023
azure-monitor Workbooks Graph Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-graph-visualizations.md
Title: Azure Workbooks graph visualizations description: Learn about all the Azure Workbooks graph visualizations. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Honey Comb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-honey-comb.md
Title: Azure Workbooks honeycomb visualizations description: Learn about Azure Workbooks honeycomb visualizations. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Jsonpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-jsonpath.md
Title: Azure Monitor workbooks - Transform JSON data with JSONPath description: Use JSONPath in Azure Monitor workbooks to transform the JSON data results to a different data format. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Map Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-map-visualizations.md
Title: Azure Workbooks map visualizations description: Learn about Azure Workbooks map visualizations. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-move-region.md
Title: Azure Monitor Workbooks - Move Regions description: How to move a workbook to a different region - ibiza
azure-monitor Workbooks Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-parameters.md
Title: Create workbook parameters description: Learn how to add parameters to your workbook to collect input from the consumers and reference it in other parts of the workbook. -- ibiza Last updated 06/21/2023
azure-monitor Workbooks Text Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text-visualizations.md
Title: Azure Monitor workbook text visualizations description: Learn about all the Azure Monitor workbook text visualizations. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text.md
Title: Azure Monitor workbook text parameters
description: Simplify complex reporting with prebuilt and custom parameterized workbooks. Learn more about workbook text parameters. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Tile Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-tile-visualizations.md
Title: Azure Monitor workbook tile visualizations description: Learn about all the Azure Monitor workbook tile visualizations. - ibiza Last updated 06/21/2023
azure-monitor Workbooks Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-time.md
Title: Azure Monitor workbook time parameters description: Learn how to set time parameters to allow users to set the time context of analysis. The time parameters are used by almost all reports. - - ibiza Last updated 06/21/2023
azure-monitor Workbooks Tree Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-tree-visualizations.md
Title: Azure Monitor workbook tree visualizations description: Learn about all the Azure Monitor workbook tree visualizations. - ibiza Last updated 06/21/2023
azure-netapp-files Application Volume Group Add Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-add-hosts.md
Title: Add hosts to a multiple-host SAP HANA system using Azure NetApp Files application volume group for SAP HANA | Microsoft Docs description: Describes how to add additional HANA hosts after you have created the first host in an SAP HANA multiple-host setup. -- Last updated 11/19/2021
azure-netapp-files Application Volume Group Add Volume Secondary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-add-volume-secondary.md
Title: Add volumes for an SAP HANA system as a secondary database in HSR using Azure NetApp Files | Microsoft Docs description: Describes using application volume group to add volumes for an SAP HANA system as a secondary database in HANA System Replication (HSR). -- Last updated 11/19/2021
azure-netapp-files Application Volume Group Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-considerations.md
Title: Requirements and considerations for Azure NetApp Files application volume group for SAP HANA | Microsoft Docs description: Describes the requirements and considerations you need to be aware of before using Azure NetApp Files application volume group for SAP HANA. -- Last updated 11/08/2023
azure-netapp-files Application Volume Group Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-delete.md
Title: Delete an application volume group in Azure NetApp Files | Microsoft Docs description: Describes how to delete an application volume group. -- Last updated 11/19/2021
azure-netapp-files Application Volume Group Deploy First Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-deploy-first-host.md
Title: Deploy the first SAP HANA host using Azure NetApp Files application volume group for SAP HANA | Microsoft Docs description: Describes how to deploy the first SAP HANA host using Azure NetApp Files application volume group for SAP HANA. -- Last updated 10/13/2022
azure-netapp-files Application Volume Group Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-disaster-recovery.md
Title: Add volumes for an SAP HANA system as a DR system using Azure NetApp Files cross-region replication | Microsoft Docs description: Describes using an application volume group to add volumes for an SAP HANA system as a disaster recovery (DR) system. -- Last updated 08/22/2022
azure-netapp-files Application Volume Group Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-introduction.md
Title: Understand Azure NetApp Files application volume group for SAP HANA | Microsoft Docs description: Describes the use cases and key features of Azure NetApp Files application volume group for SAP HANA. -- Last updated 02/24/2023
azure-netapp-files Application Volume Group Manage Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-manage-volumes.md
Title: Manage volumes in Azure NetApp Files application volume group | Microsoft Docs description: Describes how to manage a volume from its application volume group, including resizing, deleting, or changing throughput for the volume. -- Last updated 11/19/2021
azure-netapp-files Auxiliary Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/auxiliary-groups.md
Title: Understand auxiliary/supplemental groups with NFS in Azure NetApp Files description: Learn about auxiliary/supplemental groups with NFS in Azure NetApp Files. -- Last updated 11/13/2023
azure-netapp-files Azacsnap Cmd Ref Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-backup.md
Title: Back up using Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Provides a guide for running the backup command of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 07/29/2022
azure-netapp-files Azacsnap Cmd Ref Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-configure.md
Title: Configure the Azure Application Consistent Snapshot tool for Azure NetApp Files description: Learn how to run the configure command of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 08/21/2023
azure-netapp-files Azacsnap Cmd Ref Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-delete.md
Title: Delete using Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Provides a guide for running the delete command of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 01/18/2023
azure-netapp-files Azacsnap Cmd Ref Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-details.md
Title: Obtain details using Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Provides a guide for running the details command of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 04/21/2021
azure-netapp-files Azacsnap Cmd Ref Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-restore.md
Title: Restore using Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Provides a guide for running the restore command of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 05/04/2023
azure-netapp-files Azacsnap Cmd Ref Runbefore Runafter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-runbefore-runafter.md
Title: RunBefore and RunAfter using Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Provides a guide for using the runbefore and runafter options of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 07/29/2022
azure-netapp-files Azacsnap Cmd Ref Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-test.md
Title: Test Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Explains how to run the test command of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 08/04/2021
azure-netapp-files Azacsnap Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-disaster-recovery.md
Title: Disaster recovery using Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Explains how to perform disaster recovery when using the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 04/21/2021
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-get-started.md
Title: Get started with Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Provides a guide for installing the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 03/03/2022
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-installation.md
Title: Install the Azure Application Consistent Snapshot tool for Azure NetApp Files description: Learn how to install the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 08/21/2023
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md
Title: What is the Azure Application Consistent Snapshot tool for Azure NetApp Files description: Get basic information about the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 08/21/2023
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
- Last updated 08/21/2023
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
Title: Release Notes for Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Provides release notes for the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 08/21/2023
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-tips.md
Title: Tips and tricks for using Azure Application Consistent Snapshot tool for Azure NetApp Files | Microsoft Docs description: Provides tips and tricks for using the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files. -- Last updated 09/20/2023
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-troubleshoot.md
Title: Troubleshoot Azure Application Consistent Snapshot tool - Azure NetApp Files description: Troubleshoot communication issues, test failures, and other SAP HANA issues when using the Azure Application Consistent Snapshot (AzAcSnap) tool. -- Last updated 01/16/2023
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
Title: Azure NetApp Files for Azure Government | Microsoft Docs description: Learn how to connect to Azure Government to use Azure NetApp Files and the Azure NetApp Files feature availability in Azure Government. -- Last updated 11/02/2023
azure-netapp-files Azure Netapp Files Configure Nfsv41 Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-configure-nfsv41-domain.md
Title: Configure NFSv4.1 ID domain for Azure NetApp Files | Microsoft Docs description: Learn how to configure NFSv4.1 ID domain for using NFSv4.1 with Azure NetApp Files. -- Last updated 07/12/2023
azure-netapp-files Azure Netapp Files Cost Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-cost-model.md
Title: Cost model for Azure NetApp Files | Microsoft Docs description: Describes the cost model for Azure NetApp Files for managing expenses from the service. -- Last updated 11/08/2021
azure-netapp-files Azure Netapp Files Create Netapp Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-netapp-account.md
Title: Create a NetApp account to access Azure NetApp Files | Microsoft Docs description: Learn how to access Azure NetApp Files and create a NetApp account so that you can set up a capacity pool and create a volume. -- Last updated 10/04/2021
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
Title: Create an SMB volume for Azure NetApp Files | Microsoft Docs description: This article shows you how to create an SMB3 volume in Azure NetApp Files. Learn about requirements for Active Directory connections and Domain Services. -- Last updated 05/31/2023
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
Title: Create an NFS volume for Azure NetApp Files | Microsoft Docs description: This article shows you how to create an NFS volume in Azure NetApp Files. Learn about considerations, like which version to use, and best practices. -- Last updated 05/28/2023
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
Title: Delegate a subnet to Azure NetApp Files | Microsoft Docs description: Learn how to delegate a subnet to Azure NetApp Files. Specify the delegated subnet when you create a volume. -- Last updated 09/28/2023
azure-netapp-files Azure Netapp Files Develop With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-develop-with-rest-api.md
Title: Develop for Azure NetApp Files with REST API | Microsoft Docs description: The REST API for the Azure NetApp Files service defines HTTP operations for resources such as the NetApp account, the capacity pool, the volumes, and snapshots. -- Last updated 09/30/2022
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
Title: What is Azure NetApp Files | Microsoft Docs description: Learn about Azure NetApp Files, an Azure native, first-party, enterprise-class, high-performance file storage service. -- Last updated 01/11/2024
azure-netapp-files Azure Netapp Files Manage Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-manage-snapshots.md
Title: Create an on-demand snapshot using Azure NetApp Files | Microsoft Docs description: Describes how to create on-demand snapshots with Azure NetApp Files. -- Last updated 10/25/2021
azure-netapp-files Azure Netapp Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-metrics.md
Title: Metrics for Azure NetApp Files | Microsoft Docs description: Azure NetApp Files provides metrics on allocated storage, actual storage usage, volume IOPS, and latency. Use these metrics to understand usage and performance. -- Last updated 07/19/2023
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Title: Guidelines for Azure NetApp Files network planning | Microsoft Docs description: Describes guidelines that can help you design an effective network architecture by using Azure NetApp Files. -- Last updated 08/10/2023
azure-netapp-files Azure Netapp Files Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-considerations.md
Title: Performance considerations for Azure NetApp Files | Microsoft Docs description: Learn about performance for Azure NetApp Files, including the relationship of quota and throughput limit and how to dynamically increase/decrease volume quota. -- Last updated 08/31/2023
azure-netapp-files Azure Netapp Files Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-register.md
Title: Register for NetApp Resource Provider to use with Azure NetApp Files | Microsoft Docs description: Learn how to register the NetApp Resource Provider for Azure NetApp Files. -- Last updated 01/21/2022
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
Title: Resize the capacity pool or a volume for Azure NetApp Files | Microsoft Docs description: Learn how to change the size of a capacity pool or a volume. Resizing the capacity pool changes the purchased Azure NetApp Files capacity. -- Last updated 02/21/2023
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
Title: Resource limits for Azure NetApp Files | Microsoft Docs description: Describes limits for Azure NetApp Files resources and how to request resource limit increase. -- Last updated 09/29/2023
azure-netapp-files Azure Netapp Files Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-sdk-cli.md
Title: Azure NetApp Files SDKs and CLI tools | Microsoft Docs description: "Learn about supported SDKs for Azure NetApp Files and their published locations in GitHub, and about supported command-line tools: Azure CLI and PowerShell." -- Last updated 09/30/2022
azure-netapp-files Azure Netapp Files Service Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md
Title: Service levels for Azure NetApp Files | Microsoft Docs description: Describes throughput performance for the service levels of Azure NetApp Files. -- Last updated 08/02/2022
azure-netapp-files Azure Netapp Files Set Up Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md
Title: Create a capacity pool for Azure NetApp Files | Microsoft Docs description: Describes how to create a capacity pool so that you can create volumes within it. -- Last updated 10/23/2023
azure-netapp-files Azure Netapp Files Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-smb-performance.md
Title: SMB performance best practices for Azure NetApp Files | Microsoft Docs description: Helps you understand SMB performance and best practices for Azure NetApp Files. -- Last updated 02/07/2022
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
Title: Solution architectures using Azure NetApp Files | Microsoft Docs description: Provides references to best practices for solution architectures using Azure NetApp Files. -- Last updated 09/18/2023
azure-netapp-files Azure Netapp Files Troubleshoot Resource Provider Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-troubleshoot-resource-provider-errors.md
Title: Troubleshoot Azure NetApp Files Resource Provider errors | Microsoft Docs description: Describes causes, solutions, and workarounds for common Azure NetApp Files Resource Provider errors. -
-tags:
- Last updated 02/09/2022
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
Title: Storage hierarchy of Azure NetApp Files | Microsoft Docs description: Describes the storage hierarchy, including Azure NetApp Files accounts, capacity pools, and volumes. -- Last updated 07/27/2023
azure-netapp-files Azure Netapp Files Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-videos.md
Title: Azure NetApp Files videos | Microsoft Docs description: Provides references to videos that contain discussions about using Azure NetApp Files. -- Last updated 12/07/2023
azure-netapp-files Backup Configure Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-manual.md
Title: Configure manual backups for Azure NetApp Files | Microsoft Docs description: Describes how to configure manual backups for Azure NetApp Files volumes. -- Last updated 06/13/2023
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-policy-based.md
Title: Configure policy-based backups for Azure NetApp Files | Microsoft Docs description: Describes how to configure policy-based (scheduled) backups for Azure NetApp Files volumes. -- Last updated 10/25/2023
azure-netapp-files Backup Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-delete.md
Title: Delete backups of an Azure NetApp Files volume | Microsoft Docs description: Describes how to delete individual backups that you no longer need to keep for a volume. -- Last updated 10/27/2022
azure-netapp-files Backup Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-disable.md
Title: Disable backup functionality for an Azure NetApp Files volume | Microsoft Docs description: Describes how to disable the backup functionality for a volume that no longer needs backup protection. -- Last updated 10/27/2022
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
Title: Understand Azure NetApp Files backup | Microsoft Docs description: Describes what Azure NetApp Files backup does, supported regions, and the cost model. -- Last updated 09/29/2023
azure-netapp-files Backup Manage Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-manage-policies.md
Title: Manage backup policies for Azure NetApp Files | Microsoft Docs description: Describes how to modify or suspend a backup policy for Azure NetApp Files volumes. -- Last updated 07/31/2023
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
Title: Requirements and considerations for Azure NetApp Files backup | Microsoft Docs description: Describes the requirements and considerations you need to be aware of before using Azure NetApp Files backup. -- Last updated 08/15/2023
azure-netapp-files Backup Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md
Title: Restore a backup to a new Azure NetApp Files volume | Microsoft Docs description: Describes how to restore a backup to a new volume. -- Last updated 10/17/2023
azure-netapp-files Backup Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-search.md
Title: Search backups of Azure NetApp Files volumes | Microsoft Docs description: Describes how to display and search backups of Azure NetApp Files volumes at the volume level and the NetApp account level. -- Last updated 09/27/2021
azure-netapp-files Backup Vault Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-vault-manage.md
Title: Manage backup vaults for Azure NetApp Files | Microsoft Docs description: Describes how to use backup vaults to manage backups in Azure NetApp Files. -- Last updated 10/27/2022
azure-netapp-files Configure Application Volume Group Sap Hana Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md
Title: Configure application volume groups for SAP HANA using REST API description: Setting up your application volume groups for the SAP HANA API requires special configurations. -- Last updated 04/09/2023
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Title: Configure customer-managed keys for Azure NetApp Files volume encryption | Microsoft Docs description: Describes how to configure customer-managed keys for Azure NetApp Files volume encryption. -- Last updated 10/02/2023
azure-netapp-files Configure Kerberos Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-kerberos-encryption.md
Title: Configure NFSv4.1 Kerberos encryption for Azure NetApp Files | Microsoft Docs description: Describes how to configure NFSv4.1 Kerberos encryption for Azure NetApp Files and the performance impact. -- Last updated 01/10/2022
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
Title: Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes | Microsoft Docs description: Describes the considerations and steps for enabling LDAP with extended groups when you create an NFS volume by using Azure NetApp Files. -- Last updated 03/17/2023
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md
Title: Configure AD DS LDAP over TLS for Azure NetApp Files | Microsoft Docs description: Describes how to configure AD DS LDAP over TLS for Azure NetApp Files, including root CA certificate management. -- Last updated 02/23/2023
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
Title: Configure network features for an Azure NetApp Files volume | Microsoft Docs description: Describes the options for network features and how to configure the Network Features option for a volume. -- Last updated 11/07/2023
See [regions supported for this feature](azure-netapp-files-network-topologies.m
This feature currently doesn't support SDK. > [!NOTE]
-> The option to edit network features is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files standard networking features (edit volumes) Public Preview Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. This feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command:
+> The option to edit network features is currently in preview. You need to submit a waitlist request to access the feature through the **[Azure NetApp Files standard networking features (edit volumes) Public Preview Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The feature can take approximately one week to be enabled after you submit the waitlist request. You can check the status of feature registration by using the following command:
> > ```azurepowershell-interactive > Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBasicToStdNetworkFeaturesUpgrade
This feature currently doesn't support SDK.
> ``` > [!NOTE]
-> You can also revert the option from *Standard* back to *Basic* network features. However, before performing the revert operation, you need to submit a waitlist request through the **[Azure NetApp Files standard networking features (edit volumes) Public Preview Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The revert capability is expected to be enabled within a week after you submit the waitlist request. You can check the status of the registration by using the following command:
+> You can also revert the option from *Standard* back to *Basic* network features. However, before performing the revert operation, you need to submit a waitlist request through the **[Azure NetApp Files standard networking features (edit volumes) Public Preview Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The revert capability can take approximately one week to be enabled after you submit the waitlist request. You can check the status of the registration by using the following command:
> > ```azurepowershell-interactive > Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFStdToBasicNetworkFeaturesRevert
azure-netapp-files Configure Nfs Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-nfs-clients.md
Title: Configure an NFS client for Azure NetApp Files | Microsoft Docs description: Describes how to configure NFS clients to use with Azure NetApp Files. -- Last updated 05/27/2022
azure-netapp-files Configure Unix Permissions Change Ownership Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-unix-permissions-change-ownership-mode.md
Title: Configure Unix permissions and change ownership mode for Azure NetApp Files NFS and dual-protocol volumes | Microsoft Docs description: Describes how to set the Unix permissions and the change ownership mode options for Azure NetApp Files NFS and dual-protocol volumes. -- Last updated 02/28/2023
azure-netapp-files Convert Nfsv3 Nfsv41 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/convert-nfsv3-nfsv41.md
Title: Convert an NFS volume between NFSv3 and NFSv4.1 with Azure NetApp Files | Microsoft Docs description: Describes how to convert an NFS volume between NFSv3 and NFSv4.1. -- Last updated 11/08/2022
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Title: Standard storage with cool access in Azure NetApp Files description: Explains how to use standard storage with cool access to configure inactive data to move from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure storage account (the cool tier). -- Last updated 11/01/2023
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Last updated 11/07/2023
Several features of Azure NetApp Files require that you have an Active Directory
> [!IMPORTANT] > You must follow guidelines described in [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md) for Active Directory Domain Services (AD DS) or Microsoft Entra Domain Services used with Azure NetApp Files.
-> In addition, before creating the AD connection, review [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md) to understand the impact of making changes to the AD connection configuration options after the AD connection has been created. Changes to the AD connection configuration options are disruptive to client access and some options cannot be changed at all.
+>
+> Before creating the AD connection, review [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md) to understand the impact of making changes to the AD connection configuration options after the AD connection has been created. Changes to the AD connection configuration options are disruptive to client access and some options cannot be changed at all.
* An Azure NetApp Files account must be created in the region where the Azure NetApp Files volumes are deployed.
azure-netapp-files Create Cross Zone Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-cross-zone-replication.md
Title: Create cross-zone replication relationships for Azure NetApp Files | Microsoft Docs description: This article shows you how to create and manage cross-zone replication relationships for Azure NetApp Files. -- Last updated 01/04/2023
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
Title: Create a dual-protocol volume for Azure NetApp Files | Microsoft Docs description: Describes how to create a volume that uses the dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB) with support for LDAP user mapping. -- Last updated 06/22/2023
azure-netapp-files Cross Region Replication Create Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-create-peering.md
Title: Create volume replication for Azure NetApp Files | Microsoft Docs description: Describes how to create volume replication peering for Azure NetApp Files to set up cross-region replication. -- Last updated 02/23/2023
azure-netapp-files Cross Region Replication Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-delete.md
Title: Delete volume replications or volumes for Azure NetApp Files cross-region replication | Microsoft Docs description: Describes how to delete a replication connection that is no longer needed between the source and the destination volumes. -- Last updated 03/22/2023
azure-netapp-files Cross Region Replication Display Health Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-display-health-status.md
Title: Display health status of Azure NetApp Files replication relationship | Microsoft Docs description: Describes how to view replication status on the source volume or the destination volume of Azure NetApp Files. -- Last updated 05/16/2022
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
Title: Cross-region replication of Azure NetApp Files volumes | Microsoft Docs description: Describes what Azure NetApp Files cross-region replication does, supported region pairs, service-level objectives, data durability, and cost model. -- Last updated 05/08/2023
azure-netapp-files Cross Region Replication Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-manage-disaster-recovery.md
Title: Manage disaster recovery using Azure NetApp Files cross-region replication | Microsoft Docs description: Describes how to manage disaster recovery by using Azure NetApp Files cross-region replication. -- Last updated 11/09/2022
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
Title: Requirements and considerations for Azure NetApp Files cross-region replication | Microsoft Docs description: Describes the requirements and considerations for using the volume cross-region replication functionality of Azure NetApp Files. -- Last updated 02/28/2023
azure-netapp-files Cross Zone Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-introduction.md
Title: Cross-zone replication of Azure NetApp Files volumes | Microsoft Docs description: Describes what Azure NetApp Files cross-zone replication does. -- Last updated 02/17/2023
azure-netapp-files Cross Zone Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-requirements-considerations.md
Title: Requirements and considerations for Azure NetApp Files cross-zone replication | Microsoft Docs description: Describes the requirements and considerations for using the volume cross-zone replication functionality of Azure NetApp Files. -- Last updated 08/18/2023
azure-netapp-files Data Protection Disaster Recovery Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/data-protection-disaster-recovery-options.md
Title: Understand data protection and disaster recovery options in Azure NetApp Files description: Learn about data protection and disaster recovery options available in Azure NetApp Files, including snapshots, backups, cross-zone replication, and cross-region replication. -- Last updated 07/11/2023
azure-netapp-files Default Individual User Group Quotas Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/default-individual-user-group-quotas-introduction.md
Title: Understand default and individual user and group quotas for Azure NetApp Files volumes | Microsoft Docs description: Helps you understand the use cases of managing default and individual user and group quotas for Azure NetApp Files volumes. -- Last updated 02/23/2023
azure-netapp-files Develop Rest Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/develop-rest-api-powershell.md
Title: Develop for Azure NetApp Files with REST API using PowerShell | Microsoft Docs description: Describes how to get started with the Azure NetApp Files REST API using PowerShell. -- Last updated 09/30/2022
azure-netapp-files Disable Showmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/disable-showmount.md
Title: Disable showmount in Azure NetApp Files | Microsoft Docs description: Showmount on NFS clients has historically been how users can see exported file systems on an NFS server. You can disable the showmount if it presents a security concern for your needs. -- Last updated 03/16/2023
azure-netapp-files Double Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md
Title: Azure NetApp Files double encryption at rest | Microsoft Docs description: Explains Azure NetApp Files double encryption at rest to help you use this feature. -- Last updated 08/28/2023
azure-netapp-files Dual Protocol Permission Behaviors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dual-protocol-permission-behaviors.md
Title: Understand dual-protocol security style and permission behaviors in Azure NetApp Files | Microsoft Docs description: This article helps you understand dual-protocol security style and permission when you use Azure NetApp Files. -- Last updated 08/02/2023
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
Title: Dynamically change the service level of a volume for Azure NetApp Files | Microsoft Docs description: Describes how to dynamically change the service level of a volume. -- Last updated 05/11/2023
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
Title: Enable Continuous Availability on existing Azure NetApp Files SMB volumes | Microsoft Docs description: Describes how to enable SMB Continuous Availability on an existing Azure NetApp Files SMB volume. -- Last updated 05/31/2023
azure-netapp-files Join Active Directory Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/join-active-directory-domain.md
Title: Join a Linux VM to a Microsoft Entra Domain | Microsoft Docs description: Describes how to join a Linux VM to a Microsoft Entra Domain -- Last updated 12/20/2022
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
Title: Requirements and considerations for large volumes | Microsoft Docs description: Describes the requirements and considerations you need to be aware of before using large volumes. -- Last updated 11/02/2023
azure-netapp-files Lightweight Directory Access Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/lightweight-directory-access-protocol.md
Title: Understand the use of LDAP with Azure NetApp Files | Microsoft Learn description: This article helps you understand how Azure NetApp Files uses lightweight directory access protocol (LDAP). -- Last updated 08/05/2023
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
Title: Manage availability zone volume placement for Azure NetApp Files | Microsoft Docs description: Describes how to create a volume with an availability zone by using Azure NetApp Files. -- Last updated 01/13/2023
azure-netapp-files Manage Billing Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-billing-tags.md
Title: Manage Azure NetApp Files billing by using tags | Microsoft Docs description: Describes how to manage Azure NetApp Files billing by using tags. -- Last updated 05/06/2021
azure-netapp-files Manage Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md
Title: Manage Azure NetApp Files standard storage with cool access description: Learn how to free up storage by configuring inactive data to move from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure storage account (the cool tier). -- Last updated 10/20/2023
azure-netapp-files Manage Manual Qos Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-manual-qos-capacity-pool.md
Title: Manage a manual QoS capacity pool for Azure NetApp Files | Microsoft Docs description: Describes how to manage a capacity pool that uses the manual QoS type, including setting up a manual QoS capacity pool and changing a capacity pool to use manual QoS. -- Last updated 06/14/2021
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Once you've [created an Active Directory connection](create-active-directory-connections.md) in Azure NetApp Files, you can modify it. When you're modifying an Active Directory connection, not all configurations are modifiable.
+For more information, see [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md).
+ ## Modify Active Directory connections 1. Select **Active Directory connections**. Then, select **Edit** to edit an existing AD connection.
azure-netapp-files Monitor Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/monitor-azure-netapp-files.md
Title: Ways to monitor Azure NetApp Files | Microsoft Docs description: Describes ways to monitor Azure NetApp Files, including the Activity log, metrics, and capacity utilization monitoring. -- Last updated 01/24/2022
azure-netapp-files Monitor Volume Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/monitor-volume-capacity.md
Title: Monitor the capacity of an Azure NetApp Files volume | Microsoft Docs description: Describes ways to monitor the capacity utilization of an Azure NetApp Files volume. -- Last updated 09/30/2022
azure-netapp-files Network Attached File Permissions Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-nfs.md
Title: Understand NFS file permissions in Azure NetApp Files description: Learn about mode bits in NFS workloads on Azure NetApp Files. -- Last updated 11/13/2023
azure-netapp-files Network Attached File Permissions Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-smb.md
Title: Understand SMB file permissions in Azure NetApp Files description: Learn about SMB file permissions options in Azure NetApp Files. -- Last updated 11/13/2023
azure-netapp-files Network Attached File Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions.md
Title: Understand NAS file permissions in Azure NetApp Files description: Learn about NAS file permissions options in Azure NetApp Files. -- Last updated 11/13/2023
azure-netapp-files Network Attached Storage Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-concept.md
Title: Understand NAS concepts in Azure NetApp Files | Microsoft Docs description: This article covers important information about NAS volumes when using Azure NetApp Files. -- Last updated 06/26/2023
azure-netapp-files Network Attached Storage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-permissions.md
Title: Understand NAS share permissions in Azure NetApp Files description: Learn about NAS share permissions options in Azure NetApp Files. -- Last updated 11/13/2023
azure-netapp-files Network Attached Storage Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-protocols.md
Title: Understand NAS protocols in Azure NetApp Files | Microsoft Learn description: Learn how SMB, NFS, and dual protocols operate in Azure NetApp Files. -- Last updated 08/02/2023
azure-netapp-files Network File System Group Memberships https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-file-system-group-memberships.md
Title: Understand NFS group memberships and supplemental groups for Azure NetApp Files | Microsoft Learn description: This article helps you understand NFS group memberships and supplemental groups as they apply to Azure NetApp Files. -- Last updated 08/02/2023
azure-netapp-files Nfs Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/nfs-access-control-lists.md
Title: Understand NFSv4.x access control lists in Azure NetApp Files description: Learn about using NFSv4.x access control lists in Azure NetApp Files. -- Last updated 11/13/2023
azure-netapp-files Performance Azure Vmware Solution Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-azure-vmware-solution-datastore.md
Title: Azure VMware Solution datastore performance considerations for Azure NetApp Files | Microsoft Docs description: Describes considerations for Azure VMware Solution (AVS) datastore design and sizing when used with Azure NetApp Files. -- Last updated 11/12/2023
azure-netapp-files Performance Benchmarks Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-azure-vmware-solution.md
Title: Azure NetApp Files datastore performance benchmarks for Azure VMware Solution | Microsoft Docs description: Describes performance benchmarks that Azure NetApp Files datastores deliver for virtual machines on Azure VMware Solution. -- Last updated 03/15/2023
azure-netapp-files Performance Benchmarks Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-linux.md
Title: Azure NetApp Files performance benchmarks for Linux | Microsoft Docs description: Describes performance benchmarks Azure NetApp Files delivers for Linux. -- Last updated 09/29/2021
azure-netapp-files Performance Impact Kerberos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-impact-kerberos.md
Title: Performance impact of Kerberos on Azure NetApp Files NFSv4.1 volumes | Microsoft Docs description: Describes the available security options, the tested performance vectors, and the expected performance impact of Kerberos on Azure NetApp Files NFSv4.1 volumes. -- Last updated 08/22/2022
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
Title: Linux concurrency best practices for Azure NetApp Files - Session slots and slot table entries | Microsoft Docs description: Describes best practices about session slots and slot table entries for Azure NetApp Files NFS protocol. -- Last updated 08/02/2021
azure-netapp-files Performance Linux Direct Io https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-direct-io.md
Title: Linux direct I/O best practices for Azure NetApp Files | Microsoft Docs description: Describes Linux direct I/O and the best practices to follow for Azure NetApp Files. -- Last updated 07/02/2021
azure-netapp-files Performance Linux Filesystem Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-filesystem-cache.md
Title: Linux filesystem cache best practices for Azure NetApp Files | Microsoft Docs description: Describes Linux filesystem cache best practices to follow for Azure NetApp Files. -- Last updated 07/02/2021
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-mount-options.md
Title: Linux NFS mount options best practices for Azure NetApp Files | Microsoft Docs description: Describes mount options and the best practices about using them with Azure NetApp Files. -- Last updated 12/07/2022
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
Title: Linux NFS read-ahead best practices for Azure NetApp Files - Session slots and slot table entries | Microsoft Docs description: Describes filesystem cache and Linux NFS read-ahead best practices for Azure NetApp Files. -- Last updated 09/29/2022
azure-netapp-files Performance Oracle Multiple Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-multiple-volumes.md
Title: Oracle database performance on Azure NetApp Files multiple volumes | Microsoft Docs description: Migrating highly performant Exadata grade databases to the cloud is increasingly becoming an imperative for Microsoft customers. -- Last updated 05/04/2023
azure-netapp-files Performance Oracle Single Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-single-volumes.md
Title: Oracle database performance on Azure NetApp Files single volume | Microsoft Docs description: Describes performance test results of an Azure NetApp Files single volume on Oracle database. -- Last updated 08/04/2022
azure-netapp-files Performance Virtual Machine Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-virtual-machine-sku.md
Title: Azure virtual machine SKUs best practices for Azure NetApp Files | Microsoft Docs description: Describes Azure NetApp Files best practices about Azure virtual machine SKUs, including differences within and between SKUs. -- Last updated 07/02/2021
azure-netapp-files Regional Capacity Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/regional-capacity-quota.md
Title: Regional capacity quota for Azure NetApp Files | Microsoft Docs description: Explains regional capacity quota of Azure NetApp Files. -- Last updated 10/11/2021
azure-netapp-files Request Region Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/request-region-access.md
Title: Request region access for Azure NetApp Files | Microsoft Docs description: Describes how to request access to a region for using Azure NetApp Files. -- Last updated 11/15/2021
azure-netapp-files Snapshots Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-delete.md
Title: Delete snapshots using Azure NetApp Files | Microsoft Docs description: Describes how to delete snapshots by using Azure NetApp Files. -- Last updated 09/16/2021
azure-netapp-files Snapshots Edit Hide Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-edit-hide-path.md
Title: Edit the Hide Snapshot Path option of Azure NetApp Files | Microsoft Docs description: Describes how to control the visibility of a snapshot volume with Azure NetApp Files. -- Last updated 09/16/2021
azure-netapp-files Snapshots Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-introduction.md
Title: How Azure NetApp Files snapshots work | Microsoft Docs description: Explains how Azure NetApp Files snapshots work, including ways to create snapshots, ways to restore snapshots, how to use snapshots in cross-region replication settings. -- Last updated 11/22/2022
azure-netapp-files Snapshots Manage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-manage-policy.md
Title: Manage snapshot policies in Azure NetApp Files | Microsoft Docs description: Describes how to create, manage, modify, and delete snapshot policies by using Azure NetApp Files. -- Last updated 05/18/2023
azure-netapp-files Snapshots Restore File Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-file-client.md
Title: Restore a file from a snapshot using a client with Azure NetApp Files | Microsoft Docs description: Describes how to restore a file from a snapshot using a client with the volume mounted using Azure NetApp Files. -- Last updated 09/16/2021
azure-netapp-files Snapshots Restore File Single https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-file-single.md
Title: Restore individual files in Azure NetApp Files using single-file snapshot restore | Microsoft Docs description: Describes how to recover individual files directly within a volume from a snapshot. -- Last updated 05/04/2023
azure-netapp-files Snapshots Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-new-volume.md
Title: Restore a snapshot to a new volume using Azure NetApp Files | Microsoft Docs description: Describes how to create a new volume from a snapshot by using Azure NetApp Files. -- Last updated 02/22/2023
azure-netapp-files Snapshots Revert Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-revert-volume.md
Title: Revert a volume using snapshot revert with Azure NetApp Files | Microsoft Docs description: Describes how to revert a volume to an earlier state using Azure NetApp Files. -- Last updated 02/28/2023
azure-netapp-files Solutions Benefits Azure Netapp Files Electronic Design Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-electronic-design-automation.md
Title: Benefits of using Azure NetApp Files for electronic design automation | Microsoft Docs description: Explains the solution Azure NetApp Files provides for meeting the needs of the semiconductor and chip design industry. Presents test scenarios running a standard industry benchmark for electronic design automation (EDA) using Azure NetApp Files. -- Last updated 04/24/2020
azure-netapp-files Solutions Benefits Azure Netapp Files Oracle Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-oracle-database.md
Title: Benefits of using Azure NetApp Files with Oracle Database | Microsoft Docs description: Describes the technology and provides a performance comparison between Oracle Direct NFS (dNFS) and the traditional NFS client. Shows the advantages of using dNFS with Azure NetApp Files. -- Last updated 08/04/2022
azure-netapp-files Solutions Benefits Azure Netapp Files Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-sql-server.md
Title: Benefits of using Azure NetApp Files for SQL Server deployment | Microsoft Docs description: Shows a detailed cost analysis and performance benefits of using Azure NetApp Files for SQL Server deployment. -- Last updated 05/19/2021
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
Azure NetApp Files supports identity-based authentication over SMB through the f
* **Microsoft Entra Kerberos for hybrid identities**: Using Microsoft Entra ID for authenticating [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md) allows Microsoft Entra users to access Azure NetApp Files file shares using Kerberos authentication. This means your end users can access Azure NetApp Files file shares without requiring a line-of-sight to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined Windows or Linux virtual machines. *Cloud-only identities aren't currently supported.* * **AD Kerberos authentication for Linux clients**: Linux clients can use Kerberos authentication over SMB for Azure NetApp Files using AD DS. - ### <a name="network-requirements"></a>Network requirements Azure NetApp Files SMB, dual-protocol, and Kerberos NFSv4.1 volumes require reliable and low-latency network connectivity (less than 10 ms RTT) to AD DS domain controllers. Poor network connectivity or high network latency between Azure NetApp Files and AD DS domain controllers can cause client access interruptions or client timeouts.
azure-relay Relay Migrate Acs Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-migrate-acs-sas.md
- Title: Azure Relay - Migrate to Shared Access Signature authorization
-description: Describes how to migrate Azure Relay applications from using Microsoft Entra ID Access Control Service to Shared Access Signature authorization.
- Previously updated : 08/10/2023--
-# Azure Relay - Migrate from Microsoft Entra ID Access Control Service to Shared Access Signature authorization
-
-Azure Relay applications historically had a choice of using two different authorization models: the [Shared Access Signature (SAS)](../service-bus-messaging/service-bus-sas.md) token model provided directly by the Relay service, and a federated model where the management of authorization rules is managed inside by the [Microsoft Entra ID](../active-directory/index.yml) Access Control Service (ACS), and tokens obtained from ACS are passed to Relay for authorizing access to the desired features.
-
-The ACS authorization model has long been superseded by [SAS authorization](../service-bus-messaging/service-bus-authentication-and-authorization.md) as the preferred model, and all documentation, guidance, and samples exclusively use SAS today. Moreover, it's no longer possible to create new Relay namespaces that are paired with ACS.
-
-SAS has the advantage in that it isn't immediately dependent on another service, but can be used directly from a client without any intermediaries by giving the client access to the SAS rule name and rule key. SAS can also be easily integrated with an approach in which a client has to first pass an authorization check with another service and then is issued a token. The latter approach is similar to the ACS usage pattern, but enables issuing access tokens based on application-specific conditions that are difficult to express in ACS.
-
-For all existing applications that are dependent on ACS, we urge customers to migrate their applications to rely on SAS instead.
-
-## Migration scenarios
-
-ACS and Relay are integrated through the shared knowledge of a *signing key*. The signing key is used by an ACS namespace to sign authorization tokens, and it's used by Azure Relay to verify that the token has been issued by the paired ACS namespace. The ACS namespace holds service identities and authorization rules. The authorization rules define which service identity or which token issued by an external identity provider gets which type of access to a part of the Relay namespace graph, in the form of a longest-prefix match.
-
-For example, an ACS rule might grant the **Send** claim on the path prefix `/` to a service identity, which means that a token issued by ACS based on that rule grants the client rights to send to all entities in the namespace. If the path prefix is `/abc`, the identity is restricted to sending to entities named `abc` or organized beneath that prefix. It's assumed that readers of this migration guidance are already familiar with these concepts.
-
-The migration scenarios fall into three broad categories:
-
-1. **Unchanged defaults**. Some customers use a [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider) object, passing the automatically generated **owner** service identity and its secret key for the ACS namespace, paired with the Relay namespace, and don't add new rules.
-
-2. **Custom service identities with simple rules**. Some customers add new service identities and grant each new service identity **Send**, **Listen**, and **Manage** permissions for one specific entity.
-
-3. **Custom service identities with complex rules**. Very few customers have complex rule sets in which externally issued tokens are mapped to rights on Relay, or where a single service identity is assigned differentiated rights on several namespace paths through multiple rules.
-
-For assistance with the migration of complex rule sets, you can contact [Azure support](https://azure.microsoft.com/support/options/). The other two scenarios enable straightforward migration.
-
-### Unchanged defaults
-
-If your application hasn't changed ACS defaults, you can replace all [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider) usage with a [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) object, and use the namespace preconfigured **RootManageSharedAccessKey** instead of the ACS **owner** account. Note that even with the ACS **owner** account, this configuration was (and still is) not generally recommended, because this account/rule provides full management authority over the namespace, including permission to delete any entities.
-
-### Simple rules
-
-If the application uses custom service identities with simple rules, the migration is straightforward in the case where an ACS service identity was created to provide access control on a specific relay. This scenario is often the case in SaaS-style solutions where each relay is used as a bridge to a tenant site or branch office, and the service identity is created for that particular site. In this case, the respective service identity can be migrated to a Shared Access Signature rule, directly on the relay. The service identity name can become the SAS rule name and the service identity key can become the SAS rule key. The rights of the SAS rule are then configured equivalent to the respectively applicable ACS rule for the entity.
-
-You can make this new and additional configuration of SAS in-place on any existing namespace that is federated with ACS, and the migration away from ACS is subsequently performed by using [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) instead of [SharedSecretTokenProvider](/dotnet/api/microsoft.servicebus.sharedsecrettokenprovider). The namespace doesn't need to be unlinked from ACS.
-
-### Complex rules
-
-SAS rules aren't meant to be accounts, but are named signing keys associated with rights. As such, scenarios in which the application creates many service identities and grants them access rights for several entities or the whole namespace still require a token-issuing intermediary. You can obtain guidance for such an intermediary by [contacting support](https://azure.microsoft.com/support/options/).
-
-## Next steps
-
-To learn more about Azure Relay authentication, see the following articles:
-
-* [Azure Relay authentication and authorization](relay-authentication-and-authorization.md)
-* [Service Bus authentication with Shared Access Signatures](../service-bus-messaging/service-bus-sas.md)
azure-resource-manager Bicep Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-date.md
Title: Bicep functions - date
description: Describes the functions to use in a Bicep file to work with dates. Previously updated : 11/03/2023 Last updated : 01/17/2024 # Date functions for Bicep
The datetime value that results from adding the duration value to the base value
### Remarks
-The dateTimeAdd function takes into account leap years and the number of days in a month when performing date arithmetic. The following example adds one month to January 31:
+The `dateTimeAdd` function doesn't account for leap years; it interprets _P1Y_ as _P365D_ and _P1M_ as _P30D_. The following Bicep file shows some examples:
```bicep
-output add1MonthOutput string = dateTimeAdd('2023-01-31 00:00:00Z', 'P1M') //2023-03-02T00:00:00Z
-output add1MonthLeapOutput string = dateTimeAdd('2024-01-31 00:00:00Z', 'P1M') //2024-03-01T00:00:00Z
+output addOneYearNonLeap string = dateTimeAdd('2023-01-01 00:00:00Z', 'P1Y') //2024-01-01T00:00:00Z
+output addOneYearLeap string = dateTimeAdd('2024-01-01 00:00:00Z', 'P1Y') //2024-12-31T00:00:00Z
+
+output addOneMonthNonLeap string = dateTimeAdd('2023-02-01 00:00:00Z', 'P1M') //2023-03-03T00:00:00Z
+output addOneMonthLeap string = dateTimeAdd('2024-02-01 00:00:00Z', 'P1M') //2024-03-02T00:00:00Z
```
-In this example, `dateTimeAdd` returns `2023-03-02T00:00:00Z`, not `2023-02-28T00:00:00Z`. If the base is `2024-01-31 00:00:00Z`, it returns `2024-03-01T00:00:00Z` because 2024 is a leap year.
+In the preceding example, 2023 is a non-leap year, so adding one year (365 days) to its first day yields _2024-01-01T00:00:00Z_. Adding one year to the first day of 2024, a leap year of 366 days, yields _2024-12-31T00:00:00Z_ rather than _2025-01-01T00:00:00Z_. The same fixed-length behavior shows up when adding one month (30 days) to the first day of February: the result falls on a different day of the month depending on whether February has 28 or 29 days.
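A minimal sketch of how this fixed-length behavior plays out with `utcNow()` follows; the expiry scenario and output names are illustrative assumptions, not part of the article:

```bicep
// utcNow() can only be used as a parameter default value.
param baseTime string = utcNow('u')

// 'P1Y' always adds exactly 365 days, so this timestamp lands one day before
// the calendar anniversary whenever a leap day falls inside the interval.
output expiresInOneYear string = dateTimeAdd(baseTime, 'P1Y')

// To add an exact number of days instead, spell the duration out explicitly.
output expiresIn366Days string = dateTimeAdd(baseTime, 'P366D')
```

If the result must land exactly on a calendar anniversary, compute the target date outside the template (for example, in a deployment pipeline) and pass it in as a parameter.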
### Examples
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
description: Shows the rules and restrictions for naming Azure resources.
- Previously updated : 01/04/2024 Last updated : 01/19/2024 # Naming rules and restrictions for Azure resources
azure-resource-manager Template Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-date.md
Title: Template functions - date
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with dates. Previously updated : 10/12/2023 Last updated : 01/17/2024 # Date functions for ARM templates
The datetime value that results from adding the duration value to the base value
### Remarks
-The dateTimeAdd function takes into account leap years and the number of days in a month when performing date arithmetic. The following example adds one month to January 31:
+The `dateTimeAdd` function doesn't account for leap years; it interprets _P1Y_ as _P365D_ and _P1M_ as _P30D_. The following JSON shows some examples:
```json
-"outputs": {
- "add10YearsOutput": {
- "type": "string",
- "value": "[dateTimeAdd('2023-01-31 00:00:00Z', 'P1M')]" //2023-03-02T00:00:00Z
- },
- "add1MonthOutput": {
- "type": "string",
- "value": "[dateTimeAdd('2024-01-31 00:00:00Z', 'P1M')]" //2024-03-01T00:00:00Z
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [],
+ "outputs": {
+ "addOneYearNonLeap": {
+ "type": "string",
+ "value": "[dateTimeAdd('2023-01-01 00:00:00Z', 'P1Y')]" //2024-01-01T00:00:00Z
+ },
+ "addOneYearLeap": {
+ "type": "string",
+ "value": "[dateTimeAdd('2024-01-01 00:00:00Z', 'P1Y')]" //2024-12-31T00:00:00Z
+ },
+ "addOneMonthNonLeap": {
+ "type": "string",
+ "value": "[dateTimeAdd('2023-02-01 00:00:00Z', 'P1M')]" //2023-03-03T00:00:00Z
+ },
+ "addOneMonthLeap": {
+ "type": "string",
+ "value": "[dateTimeAdd('2024-02-01 00:00:00Z', 'P1M')]" //2024-03-02T00:00:00Z
+ }
} } ```
-In this example, `dateTimeAdd` returns `2023-03-02T00:00:00Z`, not `2023-02-28T00:00:00Z`. If the base is `2024-01-31 00:00:00Z`, it returns `2024-03-01T00:00:00Z` because 2024 is a leap year.
+In the preceding example, 2023 is a non-leap year, so adding one year (365 days) to its first day yields _2024-01-01T00:00:00Z_. Adding one year to the first day of 2024, a leap year of 366 days, yields _2024-12-31T00:00:00Z_ rather than _2025-01-01T00:00:00Z_. The same fixed-length behavior shows up when adding one month (30 days) to the first day of February: the result falls on a different day of the month depending on whether February has 28 or 29 days.
### Examples
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Currently, you can only enable certain resource types for Chaos Studio virtual n
To use Chaos Studio with virtual network injection, you must meet the following requirements. 1. The `Microsoft.ContainerInstance` and `Microsoft.Relay` resource providers must be registered with your subscription. 1. The virtual network where Chaos Studio resources will be injected must have two subnets: a container subnet and a relay subnet. A container subnet is used for the Chaos Studio containers that will be injected into your private network. A relay subnet is used to forward communication from Chaos Studio to the containers inside the private network.
- 1. Both subnets need at least `/27` in the address space. An example is an address prefix of `10.0.0.0/28` or `10.0.0.0/24`.
+ 1. Both subnets need an address space of at least `/28` (a smaller prefix length, such as `/27`, gives a larger address space and also works). An example is an address prefix of `10.0.0.0/28` or `10.0.0.0/24`.
1. The container subnet must be delegated to `Microsoft.ContainerInstance/containerGroups`. 1. The subnets can be arbitrarily named, but we recommend `ChaosStudioContainerSubnet` and `ChaosStudioRelaySubnet`. 1. When you enable the desired resource as a target so that you can use it in Chaos Studio experiments, the following properties must be set:
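To illustrate the two-subnet requirement described above, a minimal Bicep sketch of a qualifying virtual network might look like the following. The resource names, address ranges, and API version are assumptions chosen for the example, not values taken from the article:

```bicep
// A virtual network with the two subnets Chaos Studio virtual network
// injection expects: a delegated container subnet and a relay subnet.
resource chaosVnet 'Microsoft.Network/virtualNetworks@2023-04-01' = {
  name: 'chaos-studio-vnet'
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/24'
      ]
    }
    subnets: [
      {
        name: 'ChaosStudioContainerSubnet'
        properties: {
          addressPrefix: '10.0.0.0/28'
          delegations: [
            {
              name: 'chaos-container-delegation'
              properties: {
                serviceName: 'Microsoft.ContainerInstance/containerGroups'
              }
            }
          ]
        }
      }
      {
        name: 'ChaosStudioRelaySubnet'
        properties: {
          addressPrefix: '10.0.0.16/28'
        }
      }
    ]
  }
}
```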
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-php-create-web-role.md
documentationcenter: php
ms.assetid: 9f7ccda0-bd96-4f7b-a7af-fb279a9e975b - ms.devlang: php Last updated 04/11/2018
communication-services Classification Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/classification-concepts.md
The process of classifying a Job involves optionally setting the following prope
- Worker Selectors - Queue ID
+## Fallback queue
+
+The fallback queue is used when the queue selectors produced by queue selector attachments can't select a queue _and_ the job doesn't have a `queueId`. In that case, if a fallback queue is defined, the job is assigned to it.
+ ## Prioritization rule The priority of a Job can be resolved during classification using one of many rule engines.
container-apps Tutorial Ci Cd Runners Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md
You can now create a job that uses the container image. In this section,
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \ --replica-timeout 1800 \
- --replica-retry-limit 1 \
+ --replica-retry-limit 0 \
--replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
You can now create a job that uses the container image. In this section,
--polling-interval 30 \ --scale-rule-name "github-runner" \ --scale-rule-type "github-runner" \
- --scale-rule-metadata "github-runner=https://api.github.com" "owner=$REPO_OWNER" "runnerScope=repo" "repos=$REPO_NAME" "targetWorkflowQueueLength=1" \
+ --scale-rule-metadata "githubAPIURL=https://api.github.com" "owner=$REPO_OWNER" "runnerScope=repo" "repos=$REPO_NAME" "targetWorkflowQueueLength=1" \
--scale-rule-auth "personalAccessToken=personal-access-token" \ --cpu "2.0" \ --memory "4Gi" \
You can now create a job that uses the container image. In this section,
--polling-interval 30 ` --scale-rule-name "github-runner" ` --scale-rule-type "github-runner" `
- --scale-rule-metadata "github-runner=https://api.github.com" "owner=$REPO_OWNER" "runnerScope=repo" "repos=$REPO_NAME" "targetWorkflowQueueLength=1" `
+ --scale-rule-metadata "githubAPIURL=https://api.github.com" "owner=$REPO_OWNER" "runnerScope=repo" "repos=$REPO_NAME" "targetWorkflowQueueLength=1" `
--scale-rule-auth "personalAccessToken=personal-access-token" ` --cpu "2.0" ` --memory "4Gi" `
You can now create a job that uses the container image. In this section,
| `--polling-interval` | The polling interval at which to evaluate the scale rule. | | `--scale-rule-name` | The name of the scale rule. | | `--scale-rule-type` | The type of scale rule to use. To learn more about the GitHub runner scaler, see the KEDA [documentation](https://keda.sh/docs/latest/scalers/github-runner/). |
- | `--scale-rule-metadata` | The metadata for the scale rule. |
+ | `--scale-rule-metadata` | The metadata for the scale rule. If you're using GitHub Enterprise, update `githubAPIURL` with its API URL. |
| `--scale-rule-auth` | The authentication for the scale rule. | | `--secrets` | The secrets to use for the job. | | `--env-vars` | The environment variables to use for the job. |
You can run a manual job to register an offline placeholder agent. The job runs
az containerapp job create -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" ` --trigger-type Manual ` --replica-timeout 300 `
- --replica-retry-limit 1 `
+ --replica-retry-limit 0 `
--replica-completion-count 1 ` --parallelism 1 ` --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" `
Now that you have a placeholder agent, you can create a self-hosted agent. In th
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \ --replica-timeout 1800 \
- --replica-retry-limit 1 \
+ --replica-retry-limit 0 \
--replica-completion-count 1 \ --parallelism 1 \ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$E
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" ` --trigger-type Event ` --replica-timeout 1800 `
- --replica-retry-limit 1 `
+ --replica-retry-limit 0 `
--replica-completion-count 1 ` --parallelism 1 ` --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" `
cosmos-db Network Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/network-bandwidth.md
+
+ Title: Network bandwidth
+
+description: Review how Azure Cosmos DB ingress and egress operations influence Azure network bandwidth consumption.
+++++ Last updated : 01/19/2024++
+# Azure Cosmos DB network bandwidth
+
+Azure Cosmos DB is a globally distributed database system that allows you to read and write data from the local replicas of your database. Azure Cosmos DB transparently replicates the data to all the regions associated with your Azure Cosmos DB account.
+
+Moving your database requests and responses between the Cosmos DB account and the application that connects to it requires Azure network bandwidth. Cosmos DB also uses Azure network bandwidth to replicate data between Cosmos DB regions when you select multiple regions for your Cosmos DB account.
+
+Azure Cosmos DB bills for data that leaves the Azure cloud to any destination on the internet or transits the Azure WAN between Azure regions.
+
+| Data Transfer In (Ingress) | All GB / Month |
+| - | - |
+| Data transfer into any region | Not billed |
+| Data transfer within any region | Not billed |
+| Data transfer within an availability zone or between availability zones within any region | Not billed |
+
+| Data Transfer Out (Egress) | First 5 GB / Month | > 5 GB / Month |
+| - | - | - |
+| Data transfer from any region in Europe or North America to any other regions or any destination on the internet | Not billed | Billed |
+| Data transfer from any region in Asia or Oceania or Middle East and Africa to any other region or any destination on the internet (excluding China) | Not billed | Billed |
+| Data transfer from any region in South America to any other region within the same continent or across continents or any destination on the internet | Not billed | Billed |
+| Data transfer from any region in China to any other region within the same continent or across continents or any destination on the internet | Not billed | Billed |
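+
+To make the monthly 5-GB allowance concrete, here's a small Python sketch (illustrative only; it applies just the free allowance and doesn't include the regional per-GB rates):
+
+```python
+def billable_egress_gb(total_egress_gb, free_gb=5):
+    """Return the GB of cross-region or internet egress billed this month."""
+    return max(0, total_egress_gb - free_gb)
+
+print(billable_egress_gb(3))    # 0 -> fully covered by the free 5 GB
+print(billable_egress_gb(120))  # 115 GB billed at the applicable regional rate
+```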
+
+## Billing meter details
+
+The following table lists the recently introduced Azure Cosmos DB bandwidth billing meters, along with the previous meter identifiers that they replace.
+
+| New Meter Name | New Meter Type | New Meter ID | Previous Meter ID |
+| - | - | - | - |
+| Inter Region Data Transfer In | Data Transfer In (GB) | 415e7499-ea3b-5b45-8a9c-80a141aa262 | d8831a85-697a-4d43-acec-8e1599f58b5d |
+| Inter Region Data Transfer In | Data Transfer In (GB) | ffa692fd-9d68-5f7c-a4b3-2644fe8ddb4 | 3a9d164b-d3c1-4350-9945-fa8056700299 |
+| Inter Region Data Transfer In | Data Transfer In (GB) | adae3632-6f0c-5bc0-b864-b6a7b437438 | 32c3ebec-1646-49e3-8127-2cafbd3a04d8 |
+| Inter Region Data Transfer In | Data Transfer In (GB) | b25ed7ec-9731-59f1-a0fb-d3327646847 | 42bb05e1-6f42-4de1-a6ba-7ffb976cb56 |
+| Inter Region Data Transfer In | Data Transfer In (GB) | 8f44fdc5-8992-5838-b309-8a101b97576 | e315c24e-2f54-4668-95ae-5aef18f93125 |
+| Inter Region Data Transfer Out | Data Transfer Out (GB) | 475ec5dc-3ce0-526d-a38f-0868db1a8fb | 3730eb6d-75a1-4e4b-82a2-383264ebffd8 |
+| Inter Region Data Transfer Out | Data Transfer Out (GB) | 08339433-af53-57f7-87ee-22a39c0f35c | 6c5c3140-e894-4ecf-aacc-60137b0bc93b |
+| Inter Region Data Transfer Out | Data Transfer Out (GB) | dffc0580-fe39-515f-86af-7a5cf75b74d | 9995d93a-7d35-4d3f-9c69-7a7fea447ef4 |
+| Inter Region Data Transfer Out | Data Transfer Out (GB) | 722e5945-90e6-59b3-8ed2-412d73c3984 | fe167397-a38d-43c3-9bb3-8e2907e56a41 |
+| Inter Region Data Transfer Out | Data Transfer Out (GB) | dc9930dd-f096-5af1-8467-ba15cf7232d | c089a13a-9dd0-44b5-aa9e-44a77bbd6788 |
+
+## Frequently asked questions
+
+Here's a list of commonly asked questions for this service.
+
+- **Will the change to new Meters cost me more for Azure Cosmos DB bandwidth?**
+
+  - No. Although the new meters are no longer tiered, the new meter billing rate is equal to or lower than the least expensive tier of the previous meters. Most Cosmos DB accounts see a reduction in Azure Cosmos DB bandwidth costs.
+
+- **Do I get any network bandwidth every month at no cost?**
+
+ - Yes, Azure Cosmos DB includes 5 GB of network bandwidth per month.
+
+- **Will I still see the previous meters on my bill after Azure Cosmos DB moves to the new meters?**
+
+  - It depends. You might continue to see the previous meters on your Azure bill because some other Azure services still use them. You might also see Azure Cosmos DB bandwidth billing disabled for a short period during the transition. You aren't double charged during the transition.
+
+- **Will a region failover impact my egress data?**
+
+  - It can. When your Azure Cosmos DB account fails over to a new region, all traffic is automatically redirected to that region. If the new region is different from the region where your Azure Cosmos DB client is located, data egress between the regions results.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [If you have additional questions or feedback on Cosmos DB networking](mailto:CosmosDBNetworking@Microsoft.com)
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
When copying data from MySQL, the following mappings are used from MySQL data ty
| `int unsigned` |`Int64`| | `integer` |`Int32` | | `integer unsigned` |`Int64` |
+| `json` |`String` |
| `long varbinary` |`Byte[]` | | `long varchar` |`String` | | `longblob` |`Byte[]` |
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
To create a dev box in the Microsoft Dev Box developer portal:
||| | **Name** | Enter a name for your dev box. Dev box names must be unique within a project. | | **Project** | Select a project from the dropdown list. |
- | **Dev box pool** | Select a pool from the dropdown list, which includes all the dev box pools for that project. |
+ | **Dev box pool** | Select a pool from the dropdown list, which includes all the dev box pools for that project. Choose a dev box pool that's near your location for the lowest latency. |
:::image type="content" source="./media/quickstart-create-dev-box/create-dev-box.png" alt-text="Screenshot of the dialog for adding a dev box.":::
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
This article provides a list of known issues and troubleshooting steps associate
- **Recommendation**: Make sure to change the target Azure SQL Database collation to the same as the source SQL Server database. Azure SQL Database uses `SQL_Latin1_General_CP1_CI_AS` collation by default, in case your source SQL Server database uses a different collation you might need to re-create or select a different target database whose collation matches. For more information, see [Collation and Unicode support](/sql/relational-databases/collations/collation-and-unicode-support)
+- **Message**: `TableColumnCollationMismatch: Table <Tablename> with column <columnname> has collation <collationoptionsource> on source but has collation <collationoptiontarget> on target table.`
+
+- **Cause**: The source database table column's collation isn't the same as the target database table column's collation.
+
+- **Recommendation**:
+  1) Make sure to migrate the schema to the target Azure SQL Database by using Database Migration Service. For more information, see this [blog post](https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/public-preview-schema-migration-for-target-azure-sql-db/ba-p/3990463).
+  2) Follow this [article](https://learn.microsoft.com/sql/relational-databases/collations/set-or-change-the-column-collation?) to manually change the collation; a minimal sketch follows this list.
+ For more information, see [Collation and Unicode support](/sql/relational-databases/collations/collation-and-unicode-support)
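+
+ As a sketch of option 2 (the connection string, table, and column names are placeholders, and issuing the statement through `pyodbc` is just one way to run it):
+
+ ```python
+ import pyodbc
+
+ # Placeholder connection details; substitute your own server, database, and credentials.
+ conn = pyodbc.connect(
+     "Driver={ODBC Driver 18 for SQL Server};"
+     "Server=tcp:<your-server>.database.windows.net,1433;"
+     "Database=<your-database>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
+ )
+ cursor = conn.cursor()
+
+ # When changing a column's collation, ALTER COLUMN must restate the full column definition.
+ cursor.execute(
+     "ALTER TABLE dbo.MyTable "
+     "ALTER COLUMN MyColumn nvarchar(50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL;"
+ )
+ conn.commit()
+ ```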
+ - **Message**: `DatabaseSizeMoreThanMax: No tables were found in the target Azure SQL Database. Check if schema migration was completed beforehand.` - **Cause**: The selected tables for the migration don't exist in the target Azure SQL Database.
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
ErGwScale supports both zonal and zonal-redundant deployments in Azure availabil
ErGwScale is available in preview in the following regions: * Australia East
+* Canada Central
+* East US
+* East Asia
* France Central
+* Germany Central
+* Germany West
+* India Central
* Italy North * North Europe * Norway East * Sweden Central * UAE North
+* UK South
* West US 3 ### Autoscaling vs. fixed scale unit
governance First Query Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-go.md
- Title: "Quickstart: Your first Go query"
-description: In this quickstart, you follow the steps to enable the Resource Graph package for Go and run your first query.
Previously updated : 07/09/2021---
-# Quickstart: Run your first Resource Graph query using Go
-
-The first step to using Azure Resource Graph is to check that the required packages for Go are
-installed. This quickstart walks you through the process of adding the packages to your Go
-installation.
-
-At the end of this process, you'll have added the packages to your Go installation and run your
-first Resource Graph query.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
-
-## Add the Resource Graph package
-
-To enable Go to query Azure Resource Graph, the package must be added. This package works wherever
-Go can be used, including [bash on Windows 10](/windows/wsl/install-win10) or locally installed.
-
-1. Check that the latest Go is installed (at least **1.14**). If it isn't yet installed, download it
- at [Golang.org](https://go.dev/dl/).
-
-1. Check that the latest Azure CLI is installed (at least **2.5.1**). If it isn't yet installed, see
- [Install the Azure CLI](/cli/azure/install-azure-cli).
-
- > [!NOTE]
- > Azure CLI is required to enable Go to use the `auth.NewAuthorizerFromCLI()` method in the
- > following example. For information about other options, see
- > [Azure SDK for Go - More authentication details](https://github.com/Azure/azure-sdk-for-go#more-authentication-details).
-
-1. Authenticate through Azure CLI.
-
- ```azurecli
- az login
- ```
-
-1. In your Go environment of choice, install the required packages for Azure Resource Graph:
-
- ```bash
- # Add the Resource Graph package for Go
- go get -u github.com/Azure/azure-sdk-for-go/services/resourcegraph/mgmt/2021-03-01/resourcegraph
-
- # Add the Azure auth package for Go
- go get -u github.com/Azure/go-autorest/autorest/azure/auth
- ```
-
-## Run your first Resource Graph query
-
-With the Go packages added to your environment of choice, it's time to try out a simple Resource
-Graph query. The query returns the first five Azure resources with the **Name** and **Resource
-Type** of each resource.
-
-1. Create the Go application and save the following source as `argQuery.go`:
-
- ```go
- package main
-
- import (
- "fmt"
- "os"
- "context"
- "strconv"
- arg "github.com/Azure/azure-sdk-for-go/services/resourcegraph/mgmt/2021-03-01/resourcegraph"
- "github.com/Azure/go-autorest/autorest/azure/auth"
- )
-
- func main() {
- // Get variables from command line arguments
- var query = os.Args[1]
- var subList = os.Args[2:]
-
- // Create and authorize a ResourceGraph client
- argClient := arg.New()
- authorizer, err := auth.NewAuthorizerFromCLI()
- if err == nil {
- argClient.Authorizer = authorizer
- } else {
- fmt.Printf(err.Error())
- }
-
- // Set options
- RequestOptions := arg.QueryRequestOptions {
- ResultFormat: "objectArray",
- }
-
- // Create the query request
- Request := arg.QueryRequest {
- Subscriptions: &subList,
- Query: &query,
- Options: &RequestOptions,
- }
-
- // Run the query and get the results
- var results, queryErr = argClient.Resources(context.Background(), Request)
- if queryErr == nil {
- fmt.Printf("Resources found: " + strconv.FormatInt(*results.TotalRecords, 10) + "\n")
- fmt.Printf("Results: " + fmt.Sprint(results.Data) + "\n")
- } else {
- fmt.Printf(queryErr.Error())
- }
- }
- ```
-
-1. Build the Go application:
-
- ```bash
- go build argQuery.go
- ```
-
-1. Run your first Azure Resource Graph query using the compiled Go application. Replace `<SubID>`
- with your subscription ID:
-
- ```bash
- argQuery "Resources | project name, type | limit 5" "<SubID>"
- ```
-
- > [!NOTE]
- > As this query example does not provide a sort modifier such as `order by`, running this query
- > multiple times is likely to yield a different set of resources per request.
-
-1. Change the first parameter to `argQuery` and change the query to `order by` the **Name**
- property. Replace `<SubID>` with your subscription ID:
-
- ```bash
- argQuery "Resources | project name, type | limit 5 | order by name asc" "<SubID>"
- ```
-
- > [!NOTE]
- > Just as with the first query, running this query multiple times is likely to yield a different
- > set of resources per request. The order of the query commands is important. In this example,
- > the `order by` comes after the `limit`. This command order first limits the query results and
- > then orders them.
-
-1. Change the first parameter to `argQuery` and change the query to first `order by` the **Name**
- property and then `limit` to the top five results. Replace `<SubID>` with your subscription ID:
-
- ```bash
- argQuery "Resources | project name, type | order by name asc | limit 5" "<SubID>"
- ```
-
-When the final query is run several times, assuming that nothing in your environment is changing,
-the results returned are consistent and ordered by the **Name** property, but still limited to the
-top five results.
-
-## Clean up resources
-
-If you wish to remove the installed packages from your Go environment, you can do so by using
-the following command:
-
-```bash
-# Remove the installed packages from the Go environment
-go clean -i github.com/Azure/azure-sdk-for-go/services/resourcegraph/mgmt/2019-04-01/resourcegraph
-go clean -i github.com/Azure/go-autorest/autorest/azure/auth
-```
-
-## Next steps
-
-In this quickstart, you've added the Resource Graph packages to your Go environment and run your
-first query. To learn more about the Resource Graph language, continue to the query language details
-page.
-
-> [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
governance First Query Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-java.md
- Title: "Quickstart: Your first Java query"
-description: In this quickstart, you follow the steps to enable the Resource Graph Maven packages for Java and run your first query.
Previously updated : 07/09/2021---
-# Quickstart: Run your first Resource Graph query using Java
-
-The first step to using Azure Resource Graph is to check that the required Maven packages for Java
-are installed. This quickstart walks you through the process of adding the Maven packages to your
-Java installation.
-
-At the end of this process, you'll have added the Maven packages to your Java installation and run
-your first Resource Graph query.
-
-## Prerequisites
--- An Azure subscription. If you don't have an Azure subscription, create a
- [free](https://azure.microsoft.com/free/) account before you begin.
--- Check that the latest Azure CLI is installed (at least **2.21.0**). If it isn't yet installed, see
- [Install the Azure CLI](/cli/azure/install-azure-cli).
-
- > [!NOTE]
- > Azure CLI is required to enable Azure SDK for Java to use the **CLI-based authentication** in
- > the following examples. For information about other options, see
- > [Azure Identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity).
--- The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version
- 8.
--- [Apache Maven](https://maven.apache.org/), version 3.6 or above.-
-## Create the Resource Graph project
-
-To enable Java to query Azure Resource Graph, create and configure a new application with Maven and
-install the required Maven packages.
-
-1. Initialize a new Java application named "argQuery" with a
- [Maven archetype](https://maven.apache.org/guides/introduction/introduction-to-archetypes.html):
-
- ```cmd
- mvn -B archetype:generate -DarchetypeGroupId="org.apache.maven.archetypes" -DgroupId="com.Fabrikam" -DartifactId="argQuery"
- ```
-
-1. Change directories into the new project folder `argQuery` and open `pom.xml` in your favorite
- editor. Add the following `<dependency>` nodes under the existing `<dependencies>` node:
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.2.4</version>
- </dependency>
- <dependency>
- <groupId>com.azure.resourcemanager</groupId>
- <artifactId>azure-resourcemanager-resourcegraph</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
-
-1. In the `pom.xml` file, add the following `<properties>` node under the base `<project>` node to
- update the source and target versions:
-
- ```xml
- <properties>
- <maven.compiler.source>1.8</maven.compiler.source>
- <maven.compiler.target>1.8</maven.compiler.target>
- </properties>
- ```
-
-1. In the `pom.xml` file, add the following `<build>` node under the base `<project>` node to
- configure the goal and main class for the project to run.
-
- ```xml
- <build>
- <plugins>
- <plugin>
- <groupId>org.codehaus.mojo</groupId>
- <artifactId>exec-maven-plugin</artifactId>
- <version>1.2.1</version>
- <executions>
- <execution>
- <goals>
- <goal>java</goal>
- </goals>
- </execution>
- </executions>
- <configuration>
- <mainClass>com.Fabrikam.App</mainClass>
- </configuration>
- </plugin>
- </plugins>
- </build>
- ```
-
-1. Replace the default `App.java` in `\argQuery\src\main\java\com\Fabrikam` with the following code
- and save the updated file:
-
- ```java
- package com.Fabrikam;
-
- import java.util.Arrays;
- import java.util.List;
- import com.azure.core.management.AzureEnvironment;
- import com.azure.core.management.profile.AzureProfile;
- import com.azure.identity.DefaultAzureCredentialBuilder;
- import com.azure.resourcemanager.resourcegraph.ResourceGraphManager;
- import com.azure.resourcemanager.resourcegraph.models.QueryRequest;
- import com.azure.resourcemanager.resourcegraph.models.QueryRequestOptions;
- import com.azure.resourcemanager.resourcegraph.models.QueryResponse;
- import com.azure.resourcemanager.resourcegraph.models.ResultFormat;
-
- public class App
- {
- public static void main( String[] args )
- {
- List<String> listSubscriptionIds = Arrays.asList(args[0]);
- String strQuery = args[1];
-
- ResourceGraphManager manager = ResourceGraphManager.authenticate(new DefaultAzureCredentialBuilder().build(), new AzureProfile(AzureEnvironment.AZURE));
-
- QueryRequest queryRequest = new QueryRequest()
- .withSubscriptions(listSubscriptionIds)
- .withQuery(strQuery);
-
- QueryResponse response = manager.resourceProviders().resources(queryRequest);
-
- System.out.println("Records: " + response.totalRecords());
- System.out.println("Data:\n" + response.data());
- }
- }
- ```
-
-1. Build the `argQuery` console application:
-
- ```bash
- mvn package
- ```
-
-## Run your first Resource Graph query
-
-With the Java console application built, it's time to try out a simple Resource Graph query. The
-query returns the first five Azure resources with the **Name** and **Resource Type** of each
-resource.
-
-In each call to `argQuery`, there are variables that are used that you need to replace with your own
-values:
--- `{subscriptionId}` - Replace with your subscription ID-- `{query}` - Replace with your Azure Resource Graph query-
-1. Use the Azure CLI to authenticate with `az login`.
-
-1. Change directories to the `argQuery` project folder you created with the previous
- `mvn -B archetype:generate` command.
-
-1. Run your first Azure Resource Graph query using Maven to compile the console application and pass
- the arguments. The `exec.args` property identifies arguments by spaces. To identify the query as
- a single argument, we wrap it with single quotes (`'`).
-
- ```bash
- mvn compile exec:java -Dexec.args "{subscriptionId} 'Resources | project name, type | limit 5'"
- ```
-
- > [!NOTE]
- > As this query example doesn't provide a sort modifier such as `order by`, running this query
- > multiple times is likely to yield a different set of resources per request.
-
-1. Change the argument to `argQuery.exe` and change the query to `order by` the **Name** property:
-
- ```bash
- mvn compile exec:java -Dexec.args "{subscriptionId} 'Resources | project name, type | limit 5 | order by name asc'"
- ```
-
- > [!NOTE]
- > Just as with the first query, running this query multiple times is likely to yield a different
- > set of resources per request. The order of the query commands is important. In this example,
- > the `order by` comes after the `limit`. This command order first limits the query results and
- > then orders them.
-
-1. Change the final parameter to `argQuery.exe` and change the query to first `order by` the
- **Name** property and then `limit` to the top five results:
-
- ```bash
- mvn compile exec:java -Dexec.args "{subscriptionId} 'Resources | project name, type | order by name asc | limit 5'"
- ```
-
-When the final query is run several times, assuming that nothing in your environment is changing,
-the results returned are consistent and ordered by the **Name** property, but still limited to the
-top five results.
-
-## Clean up resources
-
-If you wish to remove the Java console application and installed packages, you can do so by deleting
-the `argQuery` project folder.
-
-## Next steps
-
-In this quickstart, you've created a Java console application with the required Resource Graph
-packages and run your first query. To learn more about the Resource Graph language, continue to the
-query language details page.
-
-> [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
governance First Query Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-javascript.md
- Title: 'Quickstart: Your first JavaScript query'
-description: In this quickstart, you follow the steps to enable the Resource Graph library for JavaScript and run your first query.
Previously updated : 07/09/2021---
-# Quickstart: Run your first Resource Graph query using JavaScript
-
-This quickstart walks you through the process of adding the libraries to your JavaScript
-installation. The first step to using Azure Resource Graph is to initialize a JavaScript application
-with the required libraries.
-
-At the end of this process, you'll have added the libraries to your JavaScript installation and run
-your first Resource Graph query.
-
-## Prerequisites
--- **Azure subscription**: If you don't have an Azure subscription, create a
- [free](https://azure.microsoft.com/free/) account before you begin.
--- **Node.js**: [Node.js](https://nodejs.org/) version 12 or higher is required.-
-## Application initialization
-
-To enable JavaScript to query Azure Resource Graph, the environment must be configured. This setup
-works wherever JavaScript can be used, including [bash on Windows 10](/windows/wsl/install-win10).
-
-1. Initialize a new Node.js project by running the following command.
-
- ```bash
- npm init -y
- ```
-
-1. Add a reference to the yargs module.
-
- ```bash
- npm install yargs
- ```
-
-1. Add a reference to the Azure Resource Graph module.
-
- ```bash
- npm install @azure/arm-resourcegraph
- ```
-
-1. Add a reference to the Azure authentication library.
-
- ```bash
- npm install @azure/identity
- ```
-
- > [!NOTE]
- > Verify in _package.json_ `@azure/arm-resourcegraph` is version **4.2.1** or higher and
- > `@azure/identity` is version **2.0.4** or higher.
-
-## Query the Resource Graph
-
-1. Create a new file named _index.js_ and enter the following code.
-
- ```javascript
- const argv = require("yargs").argv;
- const { DefaultAzureCredential } = require("@azure/identity");
- const { ResourceGraphClient } = require("@azure/arm-resourcegraph");
-
- if (argv.query) {
- const query = async () => {
- const credentials = new DefaultAzureCredential();
- const client = new ResourceGraphClient(credentials);
- const result = await client.resources(
- {
- query: argv.query
- },
- { resultFormat: "table" }
- );
- console.log("Records: " + result.totalRecords);
- console.log(result.data);
- };
-
- query();
- }
- ```
-
- > [!NOTE]
- > This code creates a tenant-based query. To limit the query to a
- > [management group](../management-groups/overview.md) or subscription, define and add a
- > [queryrequest](/javascript/api/@azure/arm-resourcegraph/queryrequest) to the `client.resources`
- > call and specify either `managementGroups` or `subscriptions`.
-
-1. Enter the following command in the terminal:
-
- ```bash
- node index.js --query "Resources | project name, type | limit 5"
- ```
-
- > [!NOTE]
- > As this query example doesn't provide a sort modifier such as `order by`, running this query
- > multiple times is likely to yield a different set of resources per request.
-
-1. Change the first parameter to `index.js` and change the query to `order by` the **Name**
- property.
-
- ```bash
- node index.js --query "Resources | project name, type | limit 5 | order by name asc"
- ```
-
- As the script attempts to authenticate, a message similar to the following message is displayed
- in the terminal:
-
- > To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FGB56WJUGK to authenticate.
-
- Once you authenticate in the browser, then the script continues to run.
-
- > [!NOTE]
- > Just as with the first query, running this query multiple times is likely to yield a different
- > set of resources per request. The order of the query commands is important. In this example,
- > the `order by` comes after the `limit`. This command order first limits the query results and
- > then orders them.
-
-1. Change the first parameter to `index.js` and change the query to first `order by` the **Name**
- property and then `limit` to the top five results.
-
- ```bash
- node index.js --query "Resources | project name, type | order by name asc | limit 5"
- ```
-
-When the final query is run several times, assuming that nothing in your environment is changing,
-the results returned are consistent and ordered by the **Name** property, but still limited to the
-top five results.
-
-## Clean up resources
-
-If you wish to remove the installed libraries from your application, run the following command.
-
-```bash
-npm uninstall @azure/arm-resourcegraph @azure/identity yargs
-```
-
-## Next steps
-
-In this quickstart, you've added the Resource Graph libraries to your JavaScript environment and run
-your first query. To learn more about the Resource Graph language, continue to the query language
-details page.
-
-> [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
governance First Query Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-python.md
- Title: 'Quickstart: Your first Python query'
-description: In this quickstart, you follow the steps to enable the Resource Graph library for Python and run your first query.
Previously updated : 10/01/2021---
-# Quickstart: Run your first Resource Graph query using Python
-
-The first step to using Azure Resource Graph is to check that the required libraries for Python are
-installed. This quickstart walks you through the process of adding the libraries to your Python
-installation.
-
-At the end of this process, you'll have added the libraries to your Python installation and run your
-first Resource Graph query.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
--
-## Add the Resource Graph library
-
-To enable Python to query Azure Resource Graph, the library must be added. This library works
-wherever Python can be used, including [bash on Windows 10](/windows/wsl/install-win10) or locally
-installed.
-
-1. Check that the latest Python is installed (at least **3.8**). If it isn't yet installed, download
- it at [Python.org](https://www.python.org/downloads/).
-
-1. Check that the latest Azure CLI is installed (at least **2.5.1**). If it isn't yet installed, see
- [Install the Azure CLI](/cli/azure/install-azure-cli).
-
- > [!NOTE]
- > Azure CLI is required to enable Python to use the **CLI-based authentication** in the following
- > examples. For information about other options, see
- > [Authenticate using the Azure management libraries for Python](/azure/developer/python/sdk/authentication-overview).
-
-1. Authenticate through Azure CLI.
-
- ```azurecli
- az login
- ```
-
-1. In your Python environment of choice, install the required libraries for Azure Resource Graph:
-
- ```bash
- # Add the Resource Graph library for Python
- pip install azure-mgmt-resourcegraph
-
- # Add the Resources library for Python
- pip install azure-mgmt-resource
-
- # Add the CLI Core library for Python for authentication (development only!)
- pip install azure-cli-core
-
- # Add the Azure identity library for Python
- pip install azure.identity
- ```
-
- > [!NOTE]
- > If Python is installed for all users, these commands must be run from an elevated console.
-
-1. Validate that the libraries have been installed. `azure-mgmt-resourcegraph` should be **2.0.0**
- or higher, `azure-mgmt-resource` should be **9.0.0** or higher, and `azure-cli-core` should be
- **2.5.0** or higher.
-
- ```bash
- # Check each installed library
- pip show azure-mgmt-resourcegraph azure-mgmt-resource azure-cli-core azure.identity
- ```
-
-## Run your first Resource Graph query
-
-With the Python libraries added to your environment of choice, it's time to try out a simple
-subscription-based Resource Graph query. The query returns the first five Azure resources with the
-**Name** and **Resource Type** of each resource. To query by
-[management group](../management-groups/overview.md), use the `management_groups` parameter with
-`QueryRequest`.
-
-1. Run your first Azure Resource Graph query using the installed libraries and the `resources`
- method:
-
- ```python
- # Import Azure Resource Graph library
- import azure.mgmt.resourcegraph as arg
-
- # Import specific methods and models from other libraries
- from azure.mgmt.resource import SubscriptionClient
- from azure.identity import AzureCliCredential
-
- # Wrap all the work in a function
- def getresources( strQuery ):
- # Get your credentials from Azure CLI (development only!) and get your subscription list
- credential = AzureCliCredential()
- subsClient = SubscriptionClient(credential)
- subsRaw = []
- for sub in subsClient.subscriptions.list():
- subsRaw.append(sub.as_dict())
- subsList = []
- for sub in subsRaw:
- subsList.append(sub.get('subscription_id'))
-
- # Create Azure Resource Graph client and set options
- argClient = arg.ResourceGraphClient(credential)
- argQueryOptions = arg.models.QueryRequestOptions(result_format="objectArray")
-
- # Create query
- argQuery = arg.models.QueryRequest(subscriptions=subsList, query=strQuery, options=argQueryOptions)
-
- # Run query
- argResults = argClient.resources(argQuery)
-
- # Show Python object
- print(argResults)
-
- getresources("Resources | project name, type | limit 5")
- ```
-
- > [!NOTE]
- > As this query example does not provide a sort modifier such as `order by`, running this query
- > multiple times is likely to yield a different set of resources per request.
-
-1. Update the call to `getresources` and change the query to `order by` the **Name** property:
-
- ```python
- getresources("Resources | project name, type | limit 5 | order by name asc")
- ```
-
- > [!NOTE]
- > Just as with the first query, running this query multiple times is likely to yield a different
- > set of resources per request. The order of the query commands is important. In this example,
- > the `order by` comes after the `limit`. This command order first limits the query results and
- > then orders them.
-
-1. Update the call to `getresources` and change the query to first `order by` the **Name** property
- and then `limit` to the top five results:
-
- ```python
- getresources("Resources | project name, type | order by name asc | limit 5")
- ```
-
-When the final query is run several times, assuming that nothing in your environment is changing,
-the results returned are consistent and ordered by the **Name** property, but still limited to the
-top five results.
-
-## Clean up resources
-
-If you wish to remove the installed libraries from your Python environment, you can do so by using
-the following command:
-
-```bash
-# Remove the installed libraries from the Python environment
-pip uninstall azure-mgmt-resourcegraph azure-mgmt-resource azure-cli-core azure.identity
-```
-
-## Next steps
-
-In this quickstart, you've added the Resource Graph libraries to your Python environment and run
-your first query. To learn more about the Resource Graph language, continue to the query language
-details page.
-
-> [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
governance First Query Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-ruby.md
- Title: "Quickstart: Your first Ruby query"
-description: In this quickstart, you follow the steps to enable the Resource Graph gem for Ruby and run your first query.
Previously updated : 07/09/2021--
-# Quickstart: Run your first Resource Graph query using Ruby
-
-The first step to using Azure Resource Graph is to check that the required gems for Ruby are
-installed. This quickstart walks you through the process of adding the gems to your Ruby
-installation.
-
-At the end of this process, you'll have added the gems to your Ruby installation and run your first
-Resource Graph query.
-
-## Prerequisites
--- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
- account before you begin.
-- An Azure service principal, including the _clientId_ and _clientSecret_.--
-## Create the Resource Graph project
-
-To enable Ruby to query Azure Resource Graph, the gem must be added to the `Gemfile`. This gem works
-wherever Ruby can be used, including with [Azure Cloud Shell](https://shell.azure.com),
-[bash on Windows 10](/windows/wsl/install-win10), or locally installed.
-
-1. Check that the latest Ruby is installed (at least **2.7.1**). If it isn't yet installed, download
- it at [Ruby-Lang.org](https://www.ruby-lang.org/en/downloads/).
-
-1. In your Ruby environment of choice, initialize a bundle in a new project folder:
-
- ```bash
- # Initialize a bundle to create a new Gemfile
- bundle init
- ```
-
-1. Update your `Gemfile` with the gems needed for Azure Resource Graph. The updated file should look
- similar to this example:
-
- ```file
- # frozen_string_literal: true
-
- source "https://rubygems.org"
-
- git_source(:github) {|repo_name| "https://github.com/#{repo_name}" }
-
- # gem "rails"
- gem 'azure_mgmt_resourcegraph', '~> 0.17.2'
- ```
-
-1. From the project folder, run `bundle install`. Confirm the gems were installed with
- `bundle list`.
-
-1. In the same project folder, create `argQuery.rb` with the following code and save the updated
- file:
-
- ```ruby
- #!/usr/bin/env ruby
-
- require 'azure_mgmt_resourcegraph'
- ARG = Azure::ResourceGraph::Profiles::Latest::Mgmt
-
- # Get arguments and set options
- options = {
- tenant_id: ARGV[0],
- client_id: ARGV[1],
- client_secret: ARGV[2],
- subscription_id: ARGV[3]
- }
-
- # Create Resource Graph client from options
- argClient = ARG::Client.new(options)
-
- # Create Resource Graph QueryRequest for subscription with query
- query_request = ARGModels::QueryRequest.new.tap do |qr|
- qr.subscriptions = [ARGV[3]]
- qr.query = ARGV[4]
- end
-
- # Get the resources from Resource Graph
- response = argClient.resources(query_request)
-
- # Convert data to JSON and output
- puts response.data.to_json
- ```
-
-## Run your first Resource Graph query
-
-With the Ruby script saved and ready to use, it's time to try out a simple Resource Graph query. The
-query returns the first five Azure resources with the **Name** and **Resource Type** of each
-resource.
-
-In each call to `argQuery`, there are variables that are used that you need to replace with your own
-values:
--- `{tenantId}` - Replace with your tenant ID-- `{clientId}` - Replace with the client ID of your service principal-- `{clientSecret}` - Replace with the client secret of your service principal-- `{subscriptionId}` - Replace with your subscription ID-
-1. Change directories to the project folder where you created the `Gemfile` and `argClient.rb`
- files.
-
-1. Run your first Azure Resource Graph query using the gems and the `resources` method:
-
- ```bash
- ruby argQuery.rb "{tenantId}" "{clientId}" "{clientSecret}" "{subscriptionId}" "Resources | project name, type | limit 5"
- ```
-
- > [!NOTE]
- > As this query example does not provide a sort modifier such as `order by`, running this query
- > multiple times is likely to yield a different set of resources per request.
-
-1. Change the final parameter to `argQuery.rb` and change the query to `order by` the **Name**
- property:
-
- ```bash
- ruby argQuery.rb "{tenantId}" "{clientId}" "{clientSecret}" "{subscriptionId}" "Resources | project name, type | limit 5 | order by name asc"
- ```
-
- > [!NOTE]
- > Just as with the first query, running this query multiple times is likely to yield a different
- > set of resources per request. The order of the query commands is important. In this example,
- > the `order by` comes after the `limit`. This command order first limits the query results and
- > then orders them.
-
-1. Change the final parameter to `argQuery.rb` and change the query to first `order by` the **Name**
- property and then `limit` to the top five results:
-
- ```bash
- ruby argQuery.rb "{tenantId}" "{clientId}" "{clientSecret}" "{subscriptionId}" "Resources | project name, type | order by name asc | limit 5"
- ```
-
-When the final query is run several times, assuming that nothing in your environment is changing,
-the results returned are consistent and ordered by the **Name** property, but still limited to the
-top five results.
-
-## Clean up resources
-
-If you wish to remove the installed gems from your Ruby environment, you can do so by using
-the following command:
-
-```bash
-# Remove the installed gems from the Ruby environment
-gem uninstall azure_mgmt_resourcegraph
-```
-
-> [!NOTE]
-> The gem `azure_mgmt_resourcegraph` has dependencies such as `ms_rest` and `ms_rest_azure` that may
-> have also been installed depending on your environment. You may uninstall these gems also if no
-> longer needed.
-
-## Next steps
-
-In this quickstart, you've added the Resource Graph gems to your Ruby environment and run your first
-query. To learn more about the Resource Graph language, continue to the query language details page.
-
-> [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
For workload specific versions, see
* HDInsight 5.1 is now supported with ESP cluster. * Upgraded version of Ranger 2.3.0 and Oozie 5.2.1 are now part of HDInsight 5.1 * The Spark 3.3.1 (HDInsight 5.1) cluster comes with Hive Warehouse Connector (HWC) 2.1, which works together with the Interactive Query (HDInsight 5.1) cluster.
+* Ubuntu 18.04 is supported under [ESM (Extended Security Maintenance)](https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/canonical-ubuntu-18-04-lts-reaching-end-of-standard-support/ba-p/3822623) by the Azure Linux team for the Azure HDInsight July 2023 release onwards.
+ > [!IMPORTANT] > This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on August 8, 2023. The action is to update to the latest image **2307201242**. Customers are advised to plan accordingly.
A minimum 4-core VM is required for Head Node to ensure the high availability an
#### Cluster worker node provisioning change When 80% of the worker nodes are ready, the cluster enters the **operational** stage. At this stage, customers can do all the data plane operations like running scripts and jobs. But customers can't do any control plane operation like scaling up/down. Only deletion is supported.
-After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% worker nodes. At the end of this 60 minute, the cluster moves to the **running** stage, even if all of worker nodes are still not available. Once a cluster enters the **running** stage, you can use it as normal. Both control plan operations like scaling up/down, and data plan operations like running scripts and jobs are accepted. If some of the requested worker nodes aren't available, the cluster will be marked as partial success. You are charged for the nodes that were deployed successfully.
+After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% of worker nodes. At the end of this 60-minute period, the cluster moves to the **running** stage, even if all of the worker nodes are still not available. Once a cluster enters the **running** stage, you can use it as normal. Both control plane operations, like scaling up/down, and data plane operations, like running scripts and jobs, are accepted. If some of the requested worker nodes aren't available, the cluster is marked as partial success. You're charged for the nodes that were deployed successfully.
#### Create new service principal through HDInsight Previously, with cluster creation, customers could create a new service principal to access the connected ADLS Gen 1 account in the Azure portal. Starting June 15, 2020, new service principal creation isn't possible in the HDInsight creation workflow; only existing service principals are supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 10/26/2023 Last updated : 01/18/2024 # Azure HDInsight release notes
This article provides information about the **most recent** Azure HDInsight rele
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. Subscribe to the [HDInsight Release Notes](./subscribe-to-hdi-release-notes-repo.md) for up-to-date information on HDInsight and all HDInsight versions.
-In this release HDI 5.1 version is moved to General Availability (GA) stage.
To subscribe, click the "watch" button in the banner and watch out for [HDInsight Releases](https://github.com/Azure/HDInsight/releases).
-## Release date: October 26, 2023
+## Release Information
-This release applies to HDInsight 4.x and 5.x HDInsight release will be available to all regions over several days. This release is applicable for image number **2310140056**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+### Release date: January 10, 2024
+
+This hotfix release applies to HDInsight 4.x and 5.x versions. This HDInsight release will be available to all regions over several days. This release is applicable for image number **2401030422**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
HDInsight uses safe deployment practices, which involve gradual region deploymen
* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 * HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+> [!NOTE]
+> Ubuntu 18.04 is supported under [Extended Security Maintenance (ESM)](https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/canonical-ubuntu-18-04-lts-reaching-end-of-standard-support/ba-p/3822623) by the Azure Linux team for the [Azure HDInsight July 2023](/azure/hdinsight/hdinsight-release-notes-archive#release-date-july-25-2023) release onwards.
+ For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md) * [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
-## What's new
-
-* HDInsight announces the General availability of HDInsight 5.1 starting November 1, 2023. This release brings in a full stack refresh to the [open source components](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x) and the integrations from Microsoft.
- * Latest Open Source Versions ΓÇô [HDInsight 5.1](./hdinsight-5x-component-versioning.md) comes with the latest stable [open-source version](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x) available. Customers can benefit from all latest open source features, Microsoft performance improvements, and Bug fixes.
- * Secure ΓÇô The latest versions come with the most recent security fixes, both open-source security fixes and security improvements by Microsoft.
- * Lower TCO ΓÇô With performance enhancements customers can lower the operating cost, along with [enhanced autoscale](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/enhanced-autoscale-capabilities-in-hdinsight-clusters/ba-p/3811271).
-
-* Cluster permissions for secure storage
- * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to connect the storage account.
-
-* HDInsight Cluster Creation with Custom VNets.
- * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs need to ensure that the user needs to have permission for `Microsoft Network/virtualNetworks/subnets/join/action` to perform create operations. Customer might face creation failures if this check is not enabled.
-
- * Non-ESP ABFS clusters [Cluster Permissions for Word Readable]
- * Non-ESP ABFS clusters restrict non-Hadoop group users from executing Hadoop commands for storage operations. This change improves cluster security posture.
-
-* In-line quota update.
- * Now you can request quota increase directly from the My Quota page, with the direct API call it is much faster. In case the API call fails, you can create a new support request for quota increase.
+## Fixed issues
-## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
+- Security fixes from Ambari and Oozie components
-* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. This change will be rolled out to all regions starting upcoming release.
+
+## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
* Basic and Standard A-series VMs Retirement.
- * On August 31, 2024, we will retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
+ * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
* To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024. If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answ
We are listening: You're welcome to add more ideas and other topics here and vote for them - [HDInsight Ideas](https://feedback.azure.com/d365community/search/?q=HDInsight) and follow us for more updates on [AzureHDInsight Community](https://www.linkedin.com/groups/14313521/)
-> [!NOTE]
-> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on September 12, 2023. The action is to update to the latest image 2308221128 or 2310140056. Customers are advised to plan accordingly.
-
-| CVE | Severity | CVE Title | Remark |
-| - | - | - | - |
-| [CVE-2023-38156](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38156) | Important | Azure HDInsight Apache Ambari Elevation of Privilege Vulnerability |Included on image 2308221128 or 2310140056 |
-| [CVE-2023-36419](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36419) | Important | Azure HDInsight Apache Oozie Workflow Scheduler Elevation of Privilege Vulnerability | Apply [Script action](https://hdiconfigactions2.blob.core.windows.net/msrc-script/script_action.sh) on your clusters, or update to 2310140056 image |
 > [!NOTE] > We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
iot-operations Howto Manage Assets Remotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/howto-manage-assets-remotely.md
This article describes how to use the Azure IoT Operations (preview) portal to:
These assets, tags, and events map inbound data from OPC UA servers to friendly names that you can use in the MQ broker and Azure IoT Data Processor (preview) pipelines.
-You can also use the Azure CLI to manage assets. To learn more, see [az iot ops asset](/cli/azure/iot/ops/asset). Currently, it's not possible to use the Azure CLI to manage asset endpoints.
+You can also use the Azure CLI to manage assets and asset endpoints. To learn more, see:
+
+- [az iot ops asset](/cli/azure/iot/ops/asset)
+- [az iot ops asset endpoint](/cli/azure/iot/ops/asset/endpoint).
## Prerequisites
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-reset.md
Previously updated : 12/19/2022 Last updated : 01/19/2024 # Load Balancer TCP Reset and Idle Timeout
-You can use [Standard Load Balancer](./load-balancer-overview.md) to create a more predictable application behavior for your scenarios by enabling TCP Reset on Idle for a given rule. Load Balancer's default behavior is to silently drop flows when the idle timeout of a flow is reached. Enabling TCP reset will cause Load Balancer to send bidirectional TCP Resets (TCP RST packet) on idle timeout. This will inform your application endpoints that the connection has timed out and is no longer usable. Endpoints can immediately establish a new connection if needed.
+You can use [Standard Load Balancer](./load-balancer-overview.md) to create a more predictable application behavior for your scenarios by enabling TCP Reset on Idle for a given rule. Load Balancer's default behavior is to silently drop flows when the idle timeout of a flow is reached. Enabling TCP reset causes Load Balancer to send bidirectional TCP Resets (TCP RST packets) on idle timeout to inform your application endpoints that the connection timed out and is no longer usable. Endpoints can immediately establish a new connection if needed.
:::image type="content" source="media/load-balancer-tcp-reset/load-balancer-tcp-reset.png" alt-text="Diagram shows default TCP reset behavior of network nodes."::: ## TCP reset
-You change this default behavior and enable sending TCP Resets on idle timeout on inbound NAT rules, load balancing rules, and [outbound rules](./load-balancer-outbound-connections.md#outboundrules). When enabled per rule, Load Balancer will send bidirectional TCP Reset (TCP RST packets) to both client and server endpoints at the time of idle timeout for all matching flows.
+You change this default behavior and enable sending TCP Resets on idle timeout on inbound NAT rules, load balancing rules, and [outbound rules](./load-balancer-outbound-connections.md#outboundrules). When enabled per rule, Load Balancer sends bidirectional TCP Resets (TCP RST packets) to both client and server endpoints at the time of idle timeout for all matching flows.
-Endpoints receiving TCP RST packets close the corresponding socket immediately. This provides an immediate notification to the endpoints that the release of the connection has occurred and any future communication on the same TCP connection will fail. Applications can purge connections when the socket closes and reestablish connections as needed without waiting for the TCP connection to eventually time-out.
+Endpoints receiving TCP RST packets close the corresponding socket immediately. This immediately notifies the endpoints that the connection was released, and any future communication on the same TCP connection fails. Applications can purge connections when the socket closes and reestablish connections as needed without waiting for the TCP connection to eventually time out.
-For many scenarios, TCP reset may reduce the need to send TCP (or application layer) keepalives to refresh the idle timeout of a flow.
+For many scenarios, TCP reset can reduce the need to send TCP (or application layer) keepalives to refresh the idle timeout of a flow.
-If your idle durations exceed configuration limits or your application shows an undesirable behavior with TCP Resets enabled, you may still need to use TCP keepalives, or application layer keepalives, to monitor the liveness of the TCP connections. Further, keepalives can also remain useful for when the connection is proxied somewhere in the path, particularly application layer keepalives.
+If your idle durations exceed configuration limits or your application shows an undesirable behavior with TCP Resets enabled, you might still need to use TCP keepalives, or application layer keepalives, to monitor the liveness of the TCP connections. Further, keepalives can also remain useful when the connection is proxied somewhere in the path, particularly application layer keepalives.
-By carefully examining the entire end to end scenario, you can determine the benefits from enabling TCP Resets and adjusting the idle timeout. Then you decide if more steps may be required to ensure the desired application behavior.
+By carefully examining the entire end-to-end scenario, you can determine the benefits from enabling TCP Resets and adjusting the idle timeout. Then you can decide whether more steps are required to ensure the desired application behavior.
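The following is a minimal sketch of how enabling TCP Reset and adjusting the idle timeout per rule might look with the Azure SDK for Python (`azure-mgmt-network`); the subscription ID, resource group, and load balancer name are placeholders, and the same change can also be made in the portal or with the Azure CLI.

```python
# A minimal sketch, assuming azure-identity and azure-mgmt-network are installed
# and that the names below are placeholders for existing resources.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"      # placeholder
resource_group = "myResourceGroup"         # placeholder
lb_name = "myLoadBalancer"                 # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Fetch the existing load balancer, enable TCP Reset and adjust the idle
# timeout on each load-balancing rule, then write the configuration back.
lb = client.load_balancers.get(resource_group, lb_name)
for rule in lb.load_balancing_rules:
    rule.enable_tcp_reset = True           # send bidirectional TCP RST on idle timeout
    rule.idle_timeout_in_minutes = 15      # allowed range is 4-100 minutes

client.load_balancers.begin_create_or_update(resource_group, lb_name, lb).result()
```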
## Configurable TCP idle timeout
-Azure Load Balancer has a 4 minutes to 100 minutes timeout range for Load Balancer rules, Outbound Rules, and Inbound NAT rules.
+Azure Load Balancer has an idle timeout range of 4 minutes to 100 minutes for Load Balancer rules, Outbound Rules, and Inbound NAT rules. The default is 4 minutes. If a period of inactivity is longer than the timeout value, there's no guarantee that the TCP or HTTP session is maintained between the client and your cloud service.
-By default, it's set to 4 minutes. If a period of inactivity is longer than the timeout value, there's no guarantee that the TCP or HTTP session is maintained between the client and your cloud service.
-
-When the connection is closed, your client application may receive the following error message: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."
+When the connection is closed, your client application can receive the following error message: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."
A common practice is to use a TCP keep-alive. This practice keeps the connection active for a longer period. For more information, see these [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). With keep-alive enabled, packets are sent during periods of inactivity on the connection. Keep-alive packets ensure the idle timeout value isn't reached and the connection is maintained for a long period.
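For comparison, here's a hedged sketch of the same keep-alive idea in Python using the standard `socket` module; the interval options shown (`TCP_KEEPIDLE`, `TCP_KEEPINTVL`, `TCP_KEEPCNT`) are Linux-specific, and the frontend IP and port are placeholders.

```python
# A minimal sketch of enabling TCP keep-alive on a client socket so idle
# connections through the load balancer aren't silently dropped.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)   # turn keep-alive on

# Send the first probe after 60 seconds of idle time, then every 60 seconds;
# keep these intervals below the configured idle timeout (4 minutes by default).
if hasattr(socket, "TCP_KEEPIDLE"):        # Linux-only socket options
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)

sock.connect(("203.0.113.10", 443))        # placeholder frontend IP and port
```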
-The setting works for inbound connections only. To avoid losing the connection, configure the TCP keep-alive with an interval less than the idle timeout setting or increase the idle timeout value. To support these scenarios, support for a configurable idle timeout has been added.
+The setting works for inbound connections only. To avoid losing the connection, configure the TCP keep-alive with an interval less than the idle timeout setting or increase the idle timeout value. To support these scenarios, support for a configurable idle timeout is available.
TCP keep-alive works for scenarios where battery life isn't a constraint. It isn't recommended for mobile applications. Using a TCP keep-alive in a mobile application can drain the device battery faster. ## Order of precedence
-It is important to take into account how the idle timeout values set for different IPs could potentially interact.
+It's important to take into account how the idle timeout values set for different IPs could potentially interact.
### Inbound -- If there is an (inbound) load balancer rule with an idle timeout value set differently than the idle timeout of the frontend IP it references, the load balancer frontend IP idle timeout will take precedence.-- If there is an inbound NAT rule with an idle timeout value set differently than the idle timeout of the frontend IP it references, the load balancer frontend IP idle timeout will take precedence.
+- If there's an (inbound) load balancer rule with an idle timeout value set differently than the idle timeout of the frontend IP it references, the load balancer frontend IP idle timeout takes precedence.
+- If there's an inbound NAT rule with an idle timeout value set differently than the idle timeout of the frontend IP it references, the load balancer frontend IP idle timeout takes precedence.
### Outbound -- If there is an outbound rule with an idle timeout value different than 4 minutes (which is what public IP outbound idle timeout is locked at), the outbound rule idle timeout will take precedence.-- Because a NAT gateway will always take precedence over load balancer outbound rules (and over public IP addresses assigned directly to VMs), the idle timeout value assigned to the NAT gateway will be used. (Along the same lines, the locked public IP outbound idle timeouts of 4 minutes of any IPs assigned to the NAT GW are not considered.)
+- If there's an outbound rule with an idle timeout value different than 4 minutes (which is what public IP outbound idle timeout is locked at), the outbound rule idle timeout takes precedence.
+- Because a NAT gateway always takes precedence over load balancer outbound rules (and over public IP addresses assigned directly to VMs), the idle timeout value assigned to the NAT gateway is used. (Along the same lines, the locked 4-minute public IP outbound idle timeouts of any IPs assigned to the NAT gateway aren't considered.)
## Limitations
load-balancer Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage.md
Previously updated : 12/06/2022 Last updated : 01/19/2024 # Azure Load Balancer portal settings
-As you create Azure Load Balancer, information in this article will help you learn more about the individual settings and what the right configuration is for you.
+As you create an Azure Load Balancer, the information in this article helps you learn more about the individual settings and the right configuration for you.
## Create load balancer
To create a load balancer in the portal, at the top of the page select the searc
### Basics
-In the **Basics** tab of the create load balancer portal page, you'll see the following information:
+In the **Basics** tab of the create load balancer portal page, you see the following information:
| Setting | Details | | - | - |
In the **Basics** tab of the create load balancer portal page, you'll see the fo
| Resource group | Select **Create new** and type in the name for your resource group in the text box. If you have an existing resource group created, select it. | | Name | This setting is the name for your Azure Load Balancer. | | Region | Select an Azure region you'd like to deploy your load balancer in. |
-| SKU | Select **Standard**. </br> Load balancer has three SKUs: </br> **Basic** </br>**Standard** </br> **Gateway**. </br> Basic has limited functionality. </br> Standard is recommended for production workloads. </br> Gateway caters to third-party network virtual appliances (NVAs) </br> Learn more about [SKUs](skus.md). |
+| SKU | Select **Standard**. </br> Load balancer has three SKUs: </br> **Basic** </br>**Standard** </br> **Gateway**. </br> Basic has limited functionality. </br> Standard is recommended for production workloads. </br> Gateway caters to non-Microsoft network virtual appliances (NVAs) </br> Learn more about [SKUs](skus.md). |
| Type | Load balancer has two types: </br> **Internal (Private)** </br> **Public (External)**.</br> An internal load balancer (ILB) routes traffic to backend pool members via a private IP address.</br> A public load balancer directs requests from clients over the internet to the backend pool.</br> Learn more about [load balancer types](components.md#frontend-ip-configuration-).| | Tier | Load balancer has two tiers: </br> **Regional** </br> **Global** </br> A regional load balancer is constrained to load balancing within a region. Global refers to a cross-region load balancer that load-balances across regions. </br> For more information on the **Global** tier, see [Cross-region load balancer (preview)](cross-region-overview.md)
In the **Frontend IP configuration** tab of the create load balancer portal page
#### **Add frontend IP configuration** ##### Public load balancer
-If you select **Public** as your load balancer type in the **Basics** tab, you'll see the following information:
+If you select **Public** as your load balancer type in the **Basics** tab, you see the following information:
| Setting | Details | | - | - |
-| Name | The name of the frontend that will be added to the load balancer. |
+| Name | The name of the frontend added to the load balancer. |
| IP version | **IPv4** </br> **IPv6** </br> Load balancer supports IPv4 and IPv6 frontends. </br> Learn more about [load Balancer and IPv6](load-balancer-ipv6-overview.md). | | IP type | **IP address** </br> **IP prefix** </br> Load balancer supports an IP address or an IP prefix for the frontend IP address. For more information, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md). | | Gateway Load Balancer | If you're using a Gateway Load Balancer, choose the **Azure Resource Manager ID** of the Gateway Load Balancer you want to chain to your frontend IP Configuration. |
If you select **Public** as your load balancer type in the **Basics** tab, you'l
###### IP address
-If you select **IP address** for **IP type**, you'll see the following information:
+If you select **IP address** for **IP type**, you see the following information:
| Setting | Details | | - | - | | Public IP address | Select **Create new** to create a public IP address for your public load balancer. </br> If you have an existing public IP, select it in the pull-down box. | | Name | The name of the public IP address resource. | | SKU | Public IP addresses have two SKUs: **Basic** and **Standard**. </br> Basic doesn't support zone-resiliency and zonal attributes. </br> **Standard** is recommended for production workloads. </br> Load balancer and public IP address SKUs **must match**. |
-| Tier | **Regional** </br> **Global** </br> Depending on type of load balancer tier will determine what is selected. Regional for traditional load balancer, global for cross-region. |
+| Tier | **Regional** </br> **Global** </br> The load balancer tier determines which option is selected: **Regional** for a traditional load balancer, **Global** for a cross-region load balancer. |
| Assignment | **Static** is auto selected for standard. </br> Basic public IPs have two types: **Dynamic** and **Static**. </br> Dynamic public IP addresses aren't assigned until creation. </br> IPs can be lost if the resource is deleted. </br> Static IP addresses are recommended. | | Availability zone | Select **Zone-redundant** to create a resilient load balancer. </br> To create a zonal load balancer, select a specific zone from **1**, **2**, or **3**. </br> Standard load balancer and public IPs support zones. </br> Learn more about [load balancer and availability zones](load-balancer-standard-availability-zones.md). </br> You won't see zone selection for basic. Basic load balancer doesn't support zones. | | Routing preference | Select **Microsoft Network**. </br> Microsoft Network means that traffic is routed via the Microsoft global network. </br> Internet means that traffic is routed through the internet service provider network. </br> Learn more about [Routing Preferences](../virtual-network/ip-services/routing-preference-overview.md)|
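As a rough illustration of the same settings outside the portal, the following sketch creates a Standard, static, zone-redundant public IP with the Azure SDK for Python (`azure-mgmt-network`); the subscription, resource group, name, and region are placeholders.

```python
# A minimal sketch, assuming azure-identity and azure-mgmt-network are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import PublicIPAddress, PublicIPAddressSku

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

public_ip = client.public_ip_addresses.begin_create_or_update(
    "myResourceGroup",                                # placeholder resource group
    "myPublicIP",                                     # placeholder name
    PublicIPAddress(
        location="eastus",                            # placeholder region
        sku=PublicIPAddressSku(name="Standard"),      # must match the load balancer SKU
        public_ip_allocation_method="Static",         # Standard public IPs are static
        public_ip_address_version="IPv4",
        zones=["1", "2", "3"],                        # zone-redundant
    ),
).result()
print(public_ip.ip_address)
```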
If you select **IP address** for **IP type**, you'll see the following informati
###### IP Prefix
-If you select **IP prefix** for **IP type**, you'll see the following information:
+If you select **IP prefix** for **IP type**, you see the following information:
| Setting | Details | | - | - | | Public IP prefix | Select **Create new** to create a public IP prefix for your public load balancer. </br> If you have an existing public prefix, select it in the pull-down box. | | Name | The name of the public IP prefix resource. | | SKU | Public IP prefixes have one SKU, **Standard**. |
-| IP version | **IPv4** or **IPv6**. </br> The version displayed will correspond to the version chosen above. |
+| IP version | **IPv4** or **IPv6**. </br> The version displayed corresponds to the version chosen. |
| Prefix size | IPv4 or IPv6 prefixes are displayed depending on the selection above. </br> **IPv4** </br> /24 (256 addresses) </br> /25 (128 addresses) </br> /26 (64 addresses) </br> /27 (32 addresses) </br> /28 (16 addresses) </br> /29 (8 addresses) </br> /30 (4 addresses) </br> /31 (2 addresses) </br> **IPv6** </br> /124 (16 addresses) </br> /125 (8 addresses) </br> /126 (4 addresses) </br> /127 (2 addresses) | | Availability zone | Select **Zone-redundant** to create a resilient load balancer. </br> To create a zonal load balancer, select a specific zone from **1**, **2**, or **3**. </br> Standard load balancer and public IP prefixes support zones. </br> Learn more about [load balancer and availability zones](load-balancer-standard-availability-zones.md).
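Along the same lines, here's a hedged sketch that creates a Standard /28 IPv4 public IP prefix (16 addresses) with `azure-mgmt-network`; all names are placeholders.

```python
# A minimal sketch, assuming azure-identity and azure-mgmt-network are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import PublicIPPrefix, PublicIPPrefixSku

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

prefix = client.public_ip_prefixes.begin_create_or_update(
    "myResourceGroup",                             # placeholder resource group
    "myPublicIPPrefix",                            # placeholder name
    PublicIPPrefix(
        location="eastus",                         # placeholder region
        sku=PublicIPPrefixSku(name="Standard"),    # prefixes have a single SKU
        public_ip_address_version="IPv4",
        prefix_length=28,                          # /28 = 16 addresses
        zones=["1", "2", "3"],                     # zone-redundant
    ),
).result()
print(prefix.ip_prefix)
```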
If you select **IP prefix** for **IP type**, you'll see the following informatio
##### Internal load balancer
-If you select **Internal** as your load balancer type in the **Basics** tab, you'll see the following information:
+If you select **Internal** as your load balancer type in the **Basics** tab, you see the following information:
| Setting | Details | | - | - |
-| Virtual network | The virtual network you want your internal load balancer to be part of. </br> The private frontend IP address you select for your internal load balancer will be from this virtual network. |
+| Virtual network | The virtual network that your internal load balancer connects to. </br> The private frontend IP address you select for your internal load balancer is from this virtual network. |
| Subnet | The subnets available for the IP address of the frontend IP are displayed here. | | Assignment | Your options are **Static** or **Dynamic**. </br> Static ensures the IP doesn't change. A dynamic IP could change. | | Availability zone | Your options are: </br> **Zone redundant** </br> **Zone 1** </br> **Zone 2** </br> **Zone 3** </br> To create a load balancer that is highly available and resilient to availability zone failures, select a **zone-redundant** IP. |
The following is displayed in the **Add load balancing rule** creation page:
| Protocol | Azure Load Balancer is a layer 4 network load balancer. </br> Your options are: **TCP** or **UDP**. | | Port | This setting is the port associated with the frontend IP that you want traffic to be distributed based on this load-balancing rule. | | Backend port | This setting is the port on the instances in the backend pool you would like the load balancer to send traffic to. This setting can be the same as the frontend port or different if you need the flexibility for your application. |
-| Health probe | Select **Create new**, to create a new probe. </br> Only healthy instances will receive new traffic. |
-| Session persistence | Your options are: </br> **None** </br> **Client IP** </br> **Client IP and protocol**</br> </br> Maintain traffic from a client to the same virtual machine in the backend pool. This traffic will be maintained during the session. </br> **None** specifies that successive requests from the same client may be handled by any virtual machine. </br> **Client IP** specifies that successive requests from the same client IP address will be handled by the same virtual machine. </br> **Client IP and protocol** ensure that successive requests from the same client IP address and protocol will be handled by the same virtual machine. </br> Learn more about [distribution modes](load-balancer-distribution-mode.md). |
+| Health probe | Select **Create new**, to create a new probe. </br> Only healthy instances receive new traffic. |
+| Session persistence | Your options are: </br> **None** </br> **Client IP** </br> **Client IP and protocol**</br> </br> Maintain traffic from a client to the same virtual machine in the backend pool. This traffic is maintained during the session. </br> **None** specifies that successive requests from the same client can be handled by any virtual machine. </br> **Client IP** specifies that successive requests from the same client IP address are handled by the same virtual machine. </br> **Client IP and protocol** ensure that successive requests from the same client IP address and protocol are handled by the same virtual machine. </br> Learn more about [distribution modes](load-balancer-distribution-mode.md). |
| Idle timeout (minutes) | Keep a **TCP** or **HTTP** connection open without relying on clients to send keep-alive messages | | TCP reset | Load balancer can send **TCP resets** to help create a more predictable application behavior on when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md)| | Floating IP | Floating IP is Azure's terminology for a portion of what is known as **Direct Server Return (DSR)**. </br> DSR consists of two parts: <br> 1. Flow topology </br> 2. An IP address-mapping scheme at a platform level. </br></br> Azure Load Balancer always operates in a DSR flow topology whether floating IP is enabled or not. </br> This operation means that the outbound part of a flow is always correctly rewritten to flow directly back to the origin. </br> Without floating IP, Azure exposes a traditional load-balancing IP address-mapping scheme, the VM instances' IP. </br> Enabling floating IP changes the IP address mapping to the frontend IP of the load Balancer to allow for more flexibility. </br> For more information, see [Multiple frontends for Azure Load Balancer](load-balancer-multivip-overview.md).|
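To show how these rule settings map to properties outside the portal, here's a hedged sketch of a load-balancing rule as an `azure-mgmt-network` model; the resource IDs are placeholders for an existing frontend, backend pool, and health probe, and `load_distribution="SourceIP"` corresponds to the **Client IP** session persistence option.

```python
# A minimal sketch of a load-balancing rule model; append it to
# lb.load_balancing_rules and call load_balancers.begin_create_or_update
# to apply it to an existing load balancer.
from azure.mgmt.network.models import LoadBalancingRule, SubResource

lb_id = ("/subscriptions/<sub-id>/resourceGroups/myResourceGroup"
         "/providers/Microsoft.Network/loadBalancers/myLoadBalancer")   # placeholder

rule = LoadBalancingRule(
    name="myHTTPRule",
    protocol="Tcp",                     # layer 4: TCP or UDP
    frontend_port=80,
    backend_port=80,
    frontend_ip_configuration=SubResource(id=f"{lb_id}/frontendIPConfigurations/myFrontend"),
    backend_address_pool=SubResource(id=f"{lb_id}/backendAddressPools/myBackendPool"),
    probe=SubResource(id=f"{lb_id}/probes/myHealthProbe"),
    load_distribution="SourceIP",       # session persistence: Client IP
    idle_timeout_in_minutes=4,
    enable_tcp_reset=True,
    enable_floating_ip=False,
)
```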
If you selected **Create new** in the health probe configuration of the load-bal
| Setting | Details | | - | - | | Name | The name of your health probe. |
-| Protocol | The protocol you select determines the type of check used to determine if the backend instance(s) are healthy. </br> Your options are: </br> **TCP** </br> **HTTPS** </br> **HTTP** </br> Ensure you're using the right protocol. This selection will depend on the nature of your application. </br> The configuration of the health probe and probe responses determines which backend pool instances will receive new flows. </br> You can use health probes to detect the failure of an application on a backend endpoint. </br> Learn more about [health probes](load-balancer-custom-probe-overview.md). |
-| Port | The destination port for the health probe. </br> This setting is the port on the backend instance the health probe will use to determine the instance's health. |
-| Interval | The number of seconds in between probe attempts. </br> The interval will determine how frequently the health probe will attempt to reach the backend instance. </br> If you select 5, the second probe attempt will be made after 5 seconds and so on. |
+| Protocol | The protocol you select determines the type of check used to determine if the backend instance(s) are healthy. </br> Your options are: </br> **TCP** </br> **HTTPS** </br> **HTTP** </br> Ensure you're using the right protocol. This selection depends on the nature of your application. </br> The configuration of the health probe and probe responses determines which backend pool instances receive new flows. </br> You can use health probes to detect the failure of an application on a backend endpoint. </br> Learn more about [health probes](load-balancer-custom-probe-overview.md). |
+| Port | The destination port for the health probe. </br> This setting is the port on the backend instance the health probe uses to determine the instance's health. |
+| Interval | The number of seconds in between probe attempts. </br> The interval determines how frequently the health probe attempts to reach the backend instance. </br> If you select 5, the second probe attempt is made after 5 seconds and so on. |
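A minimal sketch of how the probe settings above might look as an `azure-mgmt-network` model, with placeholder values; `number_of_probes` corresponds to the unhealthy threshold described later in this article.

```python
# A minimal sketch of an HTTP health probe model; add it to lb.probes and call
# load_balancers.begin_create_or_update to apply it.
from azure.mgmt.network.models import Probe

probe = Probe(
    name="myHealthProbe",
    protocol="Http",              # TCP, HTTP, or HTTPS
    port=80,                      # destination port on the backend instance
    request_path="/healthz",      # placeholder path; required for HTTP/HTTPS probes
    interval_in_seconds=5,        # seconds between probe attempts
    number_of_probes=2,           # consecutive failures before the instance is unhealthy
)
```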
:::image type="content" source="./media/manage/add-health-probe.png" alt-text="Screenshot of add health probe." border="true":::
The following is displayed in the **Add an inbound NAT rule** creation page for
| Frontend IP address | Select the frontend IP address. </br> The frontend IP address of your load balancer you want the inbound NAT rule associated to. | | Frontend Port | This setting is the port associated with the frontend IP that you want traffic to be distributed based on this inbound NAT rule. | | Service Tag | Enter a service tag to use for your rule. The frontend port value is populated based on Service Tag chosen. |
-| Backend port | Enter a port on the backend virtual machine that traffic will be sent to. |
+| Backend port | Enter a port for traffic sent to the backend virtual machine. |
| Protocol | Azure Load Balancer is a layer 4 network load balancer. </br> Your options are: TCP or UDP. | | Enable TCP Reset | Load Balancer can send TCP resets to help create a more predictable application behavior on when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md) | | Idle timeout (minutes) | Keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. |
The following is displayed in the **Add an inbound NAT rule** creation page for
|Target backend pool | Select the backend pool this rule applies to from the dropdown menu. | | Frontend IP address | Select the frontend IP address. </br> The frontend IP address of your load balancer you want the inbound NAT rule associated to. | | Frontend port range start | Enter the starting port of a range of frontend ports pre-allocated for the specific backend pool. |
-| Current number of machines in backend pool | The number of machines in the selected backend pool will be displayed. The displayed value is for information only; you can't modify this value. |
+| Current number of machines in backend pool | The number of machines in the selected backend pool. This value is for information only; you can't modify it. |
| Maximum number of machines in backend pool | Enter the maximum number of instances in the backend pool when scaling out. |
-| Backend port | Enter a port on the backend pool that traffic will be sent to. |
-| Protocol | Azure Load Balancer is a layer 4 network load balancer. </br> Your options are: TCP or UDP. |
+| Backend port | Enter a port for traffic sent to the backend pool. |
+| Protocol | Azure Load Balancer is a layer 4 network load balancer. </br> Your options are: TCP or UDP. |
| Enable TCP Reset | Load Balancer can send TCP resets to help create a more predictable application behavior on when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md) | | Idle timeout (minutes) | Keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. | | Enable Floating IP | Some application scenarios prefer or require the same port to be used by multiple application instances on a single VM in the backend pool. If you want to reuse the backend port across multiple rules, you must enable [Floating IP](load-balancer-floating-ip.md) in the rule definition.|
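As a rough sketch of how an inbound NAT rule for a single machine might be expressed with `azure-mgmt-network` (placeholder IDs and ports), this maps frontend port 50022 to SSH on one backend VM:

```python
# A minimal sketch of an inbound NAT rule model; append it to
# lb.inbound_nat_rules and call load_balancers.begin_create_or_update.
from azure.mgmt.network.models import InboundNatRule, SubResource

lb_id = ("/subscriptions/<sub-id>/resourceGroups/myResourceGroup"
         "/providers/Microsoft.Network/loadBalancers/myLoadBalancer")   # placeholder

nat_rule = InboundNatRule(
    name="mySSHRule",
    protocol="Tcp",
    frontend_port=50022,                # port exposed on the frontend IP
    backend_port=22,                    # port on the backend virtual machine
    frontend_ip_configuration=SubResource(id=f"{lb_id}/frontendIPConfigurations/myFrontend"),
    idle_timeout_in_minutes=4,
    enable_tcp_reset=True,
    enable_floating_ip=False,
)
```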
The following is displayed in the **Add outbound rule** creation page:
| TCP Reset | Load balancer can send **TCP resets** to help create a more predictable application behavior on when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md) | | Backend pool | The backend pool you would like this outbound rule to be applied on. | | **Port allocation** | |
-| Port allocation | Your choices are: </br> **Manually choose number of outbound ports** </br> **Use the default number of outbound ports** </br> The recommended selection is the default of **Manually choose number of outbound ports** to prevent SNAT port exhaustion. If choose **Use the default number of outbound ports**, the **Outbound ports** selection is disabled. |
+| Port allocation | Your choices are: </br> **Manually choose number of outbound ports** </br> **Use the default number of outbound ports** </br> The recommended selection is the default of **Manually choose number of outbound ports** to prevent SNAT port exhaustion. If **Use the default number of outbound ports** is chosen, the **Outbound ports** selection is disabled. |
| Outbound ports | Your choices are: </br> **Ports per instance** </br> **Maximum number of backend instances**. </br> The recommended selections are select **Ports per instance** and enter **10,000**. | :::image type="content" source="./media/manage/add-outbound-rule.png" alt-text="Screenshot of add outbound rule." border="true"::: ## Portal settings+ ### Frontend IP configuration The IP address of your Azure Load Balancer. It's the point of contact for clients.
-You can have one or many frontend IP configurations. If you went through the create section above, you would have already created a frontend for your load balancer.
+You can have one or many frontend IP configurations. If you went through the create section in this article, you created a frontend for your load balancer.
If you want to add a frontend IP configuration to your load balancer, go to your load balancer in the Azure portal, select **Frontend IP configuration**, and then select **+Add**.
If you want to add a health probe to your load balancer, go to your load balance
| Setting | Details | | - | - | | Name | The name of your health probe. |
-| Protocol | The protocol you select determines the type of check used to determine if the backend instance(s) are healthy. </br> Your options are: </br> **TCP** </br> **HTTPS** </br> **HTTP** </br> Ensure you're using the right protocol. This selection will depend on the nature of your application. </br> The configuration of the health probe and probe responses determines which backend pool instances will receive new flows. </br> You can use health probes to detect the failure of an application on a backend endpoint. </br> Learn more about [health probes](load-balancer-custom-probe-overview.md). |
-| Port | The destination port for the health probe. </br> This setting is the port on the backend instance the health probe will use to determine the instance's health. |
-| Interval | The number of seconds in between probe attempts. </br> The interval will determine how frequently the health probe will attempt to reach the backend instance. </br> If you select 5, the second probe attempt will be made after 5 seconds and so on. |
-| Unhealthy threshold | The number of consecutive probe failures that must occur before a VM is considered unhealthy.</br> If you select 2, no new flows will be set to this backend instance after two consecutive failures. |
+| Protocol | The protocol you select determines the type of check used to determine if the backend instance(s) are healthy. </br> Your options are: </br> **TCP** </br> **HTTPS** </br> **HTTP** </br> Ensure you're using the right protocol. This selection depends on the nature of your application. </br> The configuration of the health probe and probe responses determines which backend pool instances receive new flows. </br> You can use health probes to detect the failure of an application on a backend endpoint. </br> Learn more about [health probes](load-balancer-custom-probe-overview.md). |
+| Port | The destination port for the health probe. </br> This setting is the port on the backend instance the health probe uses to determine the instance's health. |
+| Interval | The number of seconds in between probe attempts. </br> The interval determines how frequently the health probe attempts to reach the backend instance. </br> If you select 5, the second probe attempt is made after 5 seconds and so on. |
+| Unhealthy threshold | The number of consecutive probe failures that must occur before a VM is considered unhealthy.</br> If you select 2, no new flows are sent to this backend instance after two consecutive failures. |
:::image type="content" source="./media/manage/health-probe.png" alt-text="Screenshot of create add health probe." border="true":::
If you want to add a load balancer rule to your load balancer, go to your load b
| Port | This setting is the port associated with the frontend IP that you want traffic to be distributed based on this load-balancing rule. | | Backend port | This setting is the port on the instances in the backend pool you would like the load balancer to send traffic to. This setting can be the same as the frontend port or different if you need the flexibility for your application. | | Backend pool | The backend pool you would like this load balancer rule to be applied on. |
-| Health probe | The health probe you created to check the status of the instances in the backend pool. </br> Only healthy instances will receive new traffic. |
-| Session persistence | Your options are: </br> **None** </br> **Client IP** </br> **Client IP and protocol**</br> </br> Maintain traffic from a client to the same virtual machine in the backend pool. This traffic will be maintained during the session. </br> **None** specifies that successive requests from the same client may be handled by any virtual machine. </br> **Client IP** specifies that successive requests from the same client IP address will be handled by the same virtual machine. </br> **Client IP and protocol** ensure that successive requests from the same client IP address and protocol will be handled by the same virtual machine. </br> Learn more about [distribution modes](load-balancer-distribution-mode.md). |
+| Health probe | The health probe you created to check the status of the instances in the backend pool. </br> Only healthy instances receive new traffic. |
+| Session persistence | Your options are: </br> **None** </br> **Client IP** </br> **Client IP and protocol**</br> </br> Maintain traffic from a client to the same virtual machine in the backend pool. This traffic is maintained during the session. </br> **None** specifies that successive requests from the same client can be handled by any virtual machine. </br> **Client IP** specifies that successive requests from the same client IP address are handled by the same virtual machine. </br> **Client IP and protocol** ensure that successive requests from the same client IP address and protocol are handled by the same virtual machine. </br> Learn more about [distribution modes](load-balancer-distribution-mode.md). |
| Idle timeout (minutes) | Keep a **TCP** or **HTTP** connection open without relying on clients to send keep-alive messages | | TCP reset | Load balancer can send **TCP resets** to help create a more predictable application behavior on when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md)| | Floating IP | Floating IP is Azure's terminology for a portion of what is known as **Direct Server Return (DSR)**. </br> DSR consists of two parts: <br> 1. Flow topology </br> 2. An IP address-mapping scheme at a platform level. </br></br> Azure Load Balancer always operates in a DSR flow topology whether floating IP is enabled or not. </br> This operation means that the outbound part of a flow is always correctly rewritten to flow directly back to the origin. </br> Without floating IP, Azure exposes a traditional load-balancing IP address-mapping scheme, the VM instances' IP. </br> Enabling floating IP changes the IP address mapping to the frontend IP of the load Balancer to allow for more flexibility. </br> For more information, see [Multiple frontends for Azure Load Balancer](load-balancer-multivip-overview.md).|
If you want to add an inbound nat rule to your load balancer, go to your load ba
| Name | The name of your inbound NAT rule | | Frontend IP address | Select the frontend IP address. </br> The frontend IP address of your load balancer you want the inbound NAT rule associated to. | | IP Version | Your options are **IPv4** and **IPv6**. |
-| Service | The type of service you'll be running on Azure Load Balancer. </br> A selection here will update the port information appropriately. |
+| Service | The type of service you're running on Azure Load Balancer. </br> A selection here updates the port information appropriately. |
| Protocol | Azure Load Balancer is a layer 4 network load balancer. </br> Your options are: TCP or UDP. | | Idle timeout (minutes) | Keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. | | TCP Reset | Load Balancer can send TCP resets to help create a more predictable application behavior on when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md) |
If you want to add an outbound rule to your load balancer, go to your load balan
| Idle timeout (minutes) | Keep a **TCP** or **HTTP** connection open without relying on clients to send keep-alive messages. | | TCP Reset | Load balancer can send **TCP resets** to create a more predictable application behavior when the connection is idle. </br> Learn more about [TCP reset](load-balancer-tcp-reset.md) | | Backend pool | The backend pool you would like this outbound rule to be applied on. |
-| Port allocation | Your options are **Manually choose number of outbound ports** or **Use the default number of outbound ports**. </br> When you use default port allocation, Azure may drop existing connections when you scale out. Manually allocate ports to avoid dropped connections. |
+| Port allocation | Your options are **Manually choose number of outbound ports** or **Use the default number of outbound ports**. </br> When you use default port allocation, Azure can drop existing connections when you scale out. Manually allocate ports to avoid dropped connections. |
| **Outbound Ports** | |
-| Choose by | Your options are **Ports per instance** or **Maximum number of backend instances**. </br> When you use default port allocation, Azure may drop existing connections when you scale out. Manually allocate ports to avoid dropped connections. |
+| Choose by | Your options are **Ports per instance** or **Maximum number of backend instances**. </br> When you use default port allocation, Azure can drop existing connections when you scale out. Manually allocate ports to avoid dropped connections. |
| Ports per instance | Enter number of ports to be used per instance. This entry is only available when choosing **Ports per instance** for outbound ports above. | | Available Frontend ports | Displayed value of total available frontend ports based on selected port allocation. | | Maximum number of backend instances | Enter the maximum number of back end instances. This entry is only available when choosing **Maximum number of backend instances** for outbound ports above. </br> You can't scale your backend pool above this number of instances. Increasing the number of instances decreases the number of ports per instance unless you also add more frontend IP addresses. |
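A hedged sketch of an outbound rule as an `azure-mgmt-network` model, manually allocating 10,000 SNAT ports per backend instance as the table recommends; the resource IDs are placeholders.

```python
# A minimal sketch of an outbound rule model; append it to lb.outbound_rules
# and call load_balancers.begin_create_or_update to apply it.
from azure.mgmt.network.models import OutboundRule, SubResource

lb_id = ("/subscriptions/<sub-id>/resourceGroups/myResourceGroup"
         "/providers/Microsoft.Network/loadBalancers/myLoadBalancer")   # placeholder

outbound_rule = OutboundRule(
    name="myOutboundRule",
    protocol="All",                          # TCP, UDP, or All
    frontend_ip_configurations=[SubResource(id=f"{lb_id}/frontendIPConfigurations/myFrontend")],
    backend_address_pool=SubResource(id=f"{lb_id}/backendAddressPools/myBackendPool"),
    allocated_outbound_ports=10000,          # manual allocation helps avoid SNAT exhaustion
    idle_timeout_in_minutes=4,
    enable_tcp_reset=True,
)
```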
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
You can define test criteria at two levels. A load test can combine criteria at
- At the load test level. For example, to ensure that the total error percentage doesn't exceed a threshold. - At the JMeter request level (JMeter sampler). For example, you could specify a response time threshold of the *getProducts* request, but disregard the response time of the *sign in* request.
-You can define a maximum of 10 test criteria for a load test. If there are multiple criteria for the same client metric, the criterion with the lowest threshold value is used.
+You can define a maximum of 50 test criteria for a load test. If there are multiple criteria for the same client metric, the criterion with the lowest threshold value is used.
### Fail criteria structure
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
The following limits apply on a per-region, per-subscription basis.
| Test runs per test | 5000 | | | File uploads per test | 1000 | | | App Components per test or test run | 100 | |
-| [Test criteria](./how-to-define-test-criteria.md#load-test-fail-criteria) per test | 10 | |
+| [Test criteria](./how-to-define-test-criteria.md#load-test-fail-criteria) per test | 50 | |
<sup>2</sup> If you aren't already at the maximum limit, you can request an increase. We aren't currently able to approve increase requests past our maximum limitations stated above. To request an increase for your default limit, contact Azure Support. Default limits vary by offer category type.
machine-learning Feature Set Materialization Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/feature-set-materialization-concepts.md
Last updated 12/06/2023-+ # Feature set materialization concepts
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
Previously updated : 08/31/2022 Last updated : 01/18/2024
Once Azure Machine Learning extension is deployed on AKS or Arc Kubernetes clust
## Prerequisites
-Attaching a Kubernetes cluster to Azure Machine Learning workspace can flexibly support many different scenarios, such as the shared scenarios with multiple attachments, model training scripts accessing Azure resources, and the authentication configuration of the workspace. But you need to pay attention to the following prerequisites.
+Attaching a Kubernetes cluster to an Azure Machine Learning workspace can flexibly support many different scenarios. For example, shared scenarios with multiple attachments, model training scripts that access Azure resources, and the authentication configuration of the workspace.
#### Multi-attach and workload isolation
Attaching a Kubernetes cluster makes it available to your workspace for training
1. Enter a compute name and select your Kubernetes cluster from the dropdown.
- * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster. Compute attach won't create the Kubernetes namespace automatically or validate whether the kubernetes namespace exists. You need to verify that the specified namespace exists in your cluster, otherwise, any Azure Machine Learning workloads submitted to this compute will fail.
+ * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads are sent to the specified Kubernetes namespace in the cluster. Compute attach doesn't create the Kubernetes namespace automatically or validate whether the kubernetes namespace exists. You need to verify that the specified namespace exists in your cluster, otherwise, any Azure Machine Learning workloads submitted to this compute would fail.
* **(Optional)** Assign system-assigned or user-assigned managed identity. Managed identities eliminate the need for developers to manage credentials. For more information, see the [Assign managed identity](#assign-managed-identity-to-the-compute-target) section of this article.
Azure offers a couple of ways to assign roles to a managed identity.
- [Use Azure CLI to assign roles](../role-based-access-control/role-assignments-cli.md) - [Use Azure PowerShell to assign roles](../role-based-access-control/role-assignments-powershell.md)
-If you are using the Azure portal to assign roles and have a **system-assigned managed identity**, **Select User**, **Group Principal** or **Service Principal**, you can search for the identity name by selecting **Select members**. The identity name needs to be formatted as: `<workspace name>/computes/<compute target name>`.
+If you're using the Azure portal to assign roles and have a **system-assigned managed identity**, **Select User**, **Group Principal** or **Service Principal**, you can search for the identity name by selecting **Select members**. The identity name needs to be formatted as: `<workspace name>/computes/<compute target name>`.
If you have user-assigned managed identity, select **Managed identity** to find the target identity.
You can use a managed identity to access Azure Blob:
- [Create and manage instance types](./how-to-manage-kubernetes-instance-types.md) - [Azure Machine Learning inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)-- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
+- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-resources-vscode.md
Previously updated : 05/25/2021 Last updated : 01/15/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
The extension currently supports datastores of the following types:
For more information, see [datastore](concept-data.md#datastore). :::moniker-end :::moniker range="azureml-api-1"
-For more information, see [datastore](./v1/concept-data.md#connect-to-storage-with-datastores).
+For more information, see [datastore](concept-data.md).
:::moniker-end ### Create a datastore
The extension currently supports the following dataset types:
- *Tabular*: Allows you to materialize data into a DataFrame. - *File*: A file or collection of files. Allows you to download or mount files to your compute.
-For more information, see [datasets](./v1/concept-data.md)
+For more information, see [datasets](./concept-data.md)
### Create dataset
Alternatively, use the `> Azure ML: View Environment` command in the command pal
:::moniker range="azureml-api-1" ## Experiments
-For more information, see [experiments](v1/concept-azure-machine-learning-architecture.md#experiments).
+For more information, see [experiments](concept-azure-machine-learning-architecture.md).
:::moniker-end ### Create job
Alternatively, use the `> Azure ML: View Compute Properties` and `> Azure ML: De
For more information, see [train machine learning models](concept-train-machine-learning-model.md). :::moniker-end :::moniker range="azureml-api-1"
-For more information, see [train machine learning models](./v1/concept-train-machine-learning-model.md).
+For more information, see [train machine learning models](./concept-train-machine-learning-model.md).
:::moniker-end ### Create model
Alternatively, use the `> Azure ML: Remove Model` command in the command palette
For more information, see [endpoints](concept-endpoints.md). :::moniker-end :::moniker range="azureml-api-1"
-For more information, see [endpoints](v1/concept-azure-machine-learning-architecture.md#endpoints).
+For more information, see [endpoints](concept-azure-machine-learning-architecture.md).
:::moniker-end ### Create endpoint
machine-learning How To Setup Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-vs-code.md
Previously updated : 10/21/2021 Last updated : 01/16/2024 monikerRange: 'azureml-api-1 || azureml-api-2'
To choose your default workspace, select the **Set Azure Machine Learning Worksp
Alternatively, use the `> Azure ML: Set Default Workspace` command in the command palette and follow the prompts to set your workspace.
+To open the command palette, press `Command+P` (macOS) or `Ctrl+P` (Windows and Linux). Then type `> Azure ML: Set Default Workspace`.
+ ## Next Steps - [Manage your Azure Machine Learning resources](how-to-manage-resources-vscode.md)
machine-learning How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md
- ignite-2022 - build-2023 - ignite-2023
+ - update-code
# Train models with Azure Machine Learning CLI, SDK, and REST API
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Previously updated : 09/16/2022 Last updated : 01/19/2024
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
First, go to the compute instance terminal and run `docker ps` to find the root
Use `docker images` to check if the image was pulled successfully. If your image was pulled successfully, check if the Docker container is running. If it's already running, locate this runtime. It attempts to restart the runtime and compute instance.
+If you're using a compute instance runtime in AI Studio, this scenario isn't currently supported. Try using the automatic runtime instead. For more information, see [Switch compute instance runtime to automatic runtime](../how-to-create-manage-runtime.md#switch-compute-instance-runtime-to-automatic-runtime-preview).
+ ### Run failed because of "No module named XXX" This type of error indicates that the runtime lacks required packages. If you're using a default environment, make sure the image of your runtime is using the latest version. For more information, see [Runtime update](../how-to-create-manage-runtime.md#update-a-runtime-on-the-ui). If you're using a custom image and you're using a conda environment, make sure you installed all the required packages in your conda environment. For more information, see [Customize a prompt flow environment](../how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
machine-learning Tutorial Enable Recurrent Materialization Run Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md
Last updated 11/28/2023 -+ #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Last updated 11/28/2023 -+ #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Network Isolation For Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-network-isolation-for-feature-store.md
Last updated 09/13/2023 -+ #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Algorithm Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/algorithm-cheat-sheet.md
Previously updated : 11/04/2022 Last updated : 01/18/2024 adobe-target: true # Machine Learning Algorithm Cheat Sheet for Azure Machine Learning designer
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
ms. Previously updated : 10/16/2023 Last updated : 1/19/2024
You can migrate VMware vSphere VMs in a couple of ways:
- **Using agentless migration**: Migrate VMs without needing to install anything on them. You deploy the [Azure Migrate appliance](migrate-appliance.md) for agentless migration. - **Using agent-based migration**: Install an agent on the VM for replication. For agent-based migration, you deploy a [replication appliance](migrate-replication-appliance.md).
+> [!NOTE]
+> Migrating VMs from Azure VMware Solution (AVS) is also supported.
+ Review [this article](server-migrate-overview.md) to figure out which method you want to use. ## Agentless migration
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
ms. Previously updated : 11/23/2023 Last updated : 01/19/2024
As you plan your migration of VMware servers to Azure, review the [migration sup
Requirement | Details | **Project limits** | You can create multiple Azure Migrate projects in an Azure subscription.<br /><br /> You can discover and assess up to 50,000 servers in a VMware environment in a single [project](migrate-support-matrix.md#project). A project can include physical servers and servers from a Hyper-V environment, up to the assessment limits.
-**Discovery** | The Azure Migrate appliance can discover up to 10,000 servers running across multiple vCenter Servers.<br /><br /> The appliance supports adding multiple vCenter Servers. You can add up to 10 vCenter Servers per appliance.
+**Discovery** | The Azure Migrate appliance can discover up to 10,000 servers running across multiple vCenter Servers.<br /><br /> The appliance supports adding multiple vCenter Servers. You can add up to 10 vCenter Servers per appliance.<br /><br /> These limits also apply to Azure VMware Solution (AVS).
**Assessment** | You can add up to 35,000 servers in a single group.<br /><br /> You can assess up to 35,000 servers in a single assessment. Learn more about [assessments](concepts-assessment-calculation.md).
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
ms. Previously updated : 01/09/2024 Last updated : 01/19/2024 #Customer intent: As an VMware admin, I want to discover my on-premises servers running in a VMware environment.
In this tutorial, you learn how to:
> * Start continuous discovery. > [!NOTE]
-> Tutorials show you the quickest path for trying out a scenario. They use default options where possible.
+> Tutorials show you the quickest path for trying out a scenario. They use default options where possible. This scenario also applies to discovery of servers running in Azure VMware Solution (AVS).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
description: Learn about private access networking option in Azure Database for
Previously updated : 11/21/2022 Last updated : 01/18/2024
-# Private Network Access using VNet Integration for Azure Database for MySQL - Flexible Server
+# Private Network Access using virtual network integration for Azure Database for MySQL - Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] This article describes the private connectivity option for Azure Database for MySQL flexible server. You learn in detail the virtual network concepts for Azure Database for MySQL flexible server to create a server securely in Azure.
-## Private access (VNet Integration)
+## Private access (Virtual Network integration)
-[Azure Virtual Network (VNet)](../../virtual-network/virtual-networks-overview.md) is the fundamental building block for your private network in Azure. Virtual Network (VNet) integration with Azure Database for MySQL flexible server brings Azure's benefits of network security and isolation.
+[Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) is the fundamental building block for your private network in Azure. Virtual network integration with Azure Database for MySQL flexible server brings Azure's benefits of network security and isolation.
-Virtual Network (VNet) integration for an Azure Database for MySQL flexible server instance enables you to lock down access to the server to only your virtual network infrastructure. Your virtual network (VNet) can include all your application and database resources in a single virtual network or may stretch across different VNets in the same region or a different region. Seamless connectivity between various virtual networks can be established by [peering](../../virtual-network/virtual-network-peering-overview.md), which uses Microsoft's low latency, high-bandwidth private backbone infrastructure. The virtual networks appear as one for connectivity purposes.
+Virtual network integration for an Azure Database for MySQL flexible server instance enables you to lock down access to the server to only your virtual network infrastructure. Your virtual network can include all your application and database resources in a single virtual network or can stretch across different Virtual Networks in the same region or a different region. Seamless connectivity between various virtual networks can be established by [peering](../../virtual-network/virtual-network-peering-overview.md), which uses Microsoft's low latency, high-bandwidth private backbone infrastructure. The virtual networks appear as one for connectivity purposes.
Azure Database for MySQL flexible server supports client connectivity from: -- Virtual networks within the same Azure region (locally peered VNets)-- Virtual networks across Azure regions (Global peered VNets)
+- Virtual networks within the same Azure region (locally peered virtual networks)
+- Virtual networks across Azure regions (Global peered virtual networks)
Subnets enable you to segment the virtual network into one or more subnetworks and allocate a portion of the virtual network's address space to which you can then deploy Azure resources. Azure Database for MySQL flexible server requires a [delegated subnet](../../virtual-network/subnet-delegation-overview.md). A delegated subnet is an explicit identifier that a subnet can host only Azure Database for MySQL flexible server instances. By delegating the subnet, the service gets direct permissions to create service-specific resources to manage your Azure Database for MySQL flexible server instance seamlessly. > [!NOTE]
-> The smallest CIDR range you can specify for the subnet to host Azure Database for MySQL flexible server is /29, which provides eight IP addresses. However, the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs to be utilized internally by Azure networking, which include two IPs that cannot be assigned to a host. This leaves you 3 available IP addresses for a /29 CIDR range. For Azure Database for MySQL flexible server, you would require one IP address per node to be allocated from the delegated subnet when private access is enabled. HA-enabled servers would need two, and Non-HA server would need one IP address. The recommendation is to reserve at least 2 IP addresses per Azure Database for MySQL flexible server instance, keeping in mind that we can enable high availability options later.
+> The smallest CIDR range you can specify for the subnet to host Azure Database for MySQL flexible server is /29, which provides eight IP addresses. However, the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IP addresses for internal use by Azure networking, including the two IP addresses that can't be assigned to a host. This leaves three available IP addresses for a /29 CIDR range. For Azure Database for MySQL flexible server, it's required to allocate one IP address per node from the delegated subnet when private access is enabled. HA-enabled servers require two IP addresses, and a non-HA server requires one IP address. We recommend that you reserve at least two IP addresses per Azure Database for MySQL flexible server instance, because high availability options can be enabled later.
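A quick check of the subnet math in this note, using Python's standard `ipaddress` module:

```python
# /29 provides 8 addresses; Azure reserves 5, leaving 3 usable addresses.
import ipaddress

subnet = ipaddress.ip_network("10.0.1.0/29")
print(subnet.num_addresses)        # 8
print(subnet.num_addresses - 5)    # 3 available for Azure Database for MySQL nodes
```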
Azure Database for MySQL flexible server integrates with Azure [Private DNS zones](../../dns/private-dns-privatednszone.md) to provide a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. A private DNS zone can be linked to one or more virtual networks by creating [virtual network links](../../dns/private-dns-virtual-network-links.md) :::image type="content" source="./media/concepts-networking/vnet-diagram.png" alt-text="Flexible server MySQL VNET"::: In the above diagram,
-1. Azure Database for MySQL flexible server instances are injected into a delegated subnet - 10.0.1.0/24 of VNET **VNet-1**.
-1. Applications deployed on different subnets within the same vnet can access the Azure Database for MySQL flexible server instances directly.
-1. Applications deployed on a different VNET **VNet-2** don't have direct access to Azure Database for MySQL flexible server instances. Before they can access an instance, you must perform a [private DNS zone VNET peering](#private-dns-zone-and-vnet-peering).
+1. Azure Database for MySQL flexible server instances are injected into a delegated subnet (10.0.1.0/24) of the virtual network **VNet-1**.
+2. Applications deployed on different subnets within the same virtual network can access the Azure Database for MySQL flexible server instances directly.
+3. Applications deployed on a different virtual network **VNet-2** don't have direct access to Azure Database for MySQL flexible server instances. Before they can access an instance, you must perform a [private DNS zone virtual network peering](#private-dns-zone-and-virtual-network-peering).
## Virtual network concepts
-Here are some concepts to be familiar with when using virtual networks with Azure Database for MySQL flexible server instances.
+Here are some concepts to be familiar with when using virtual networks with Azure Database for MySQL flexible server instances.
- **Virtual network** -
- An Azure Virtual Network (VNet) contains a private IP address space configured for your use. Visit the [Azure Virtual Network overview](../../virtual-network/virtual-networks-overview.md) to learn more about Azure virtual networking.
+ An Azure Virtual Network contains a private IP address space configured for your use. Visit the [Azure Virtual Network overview](../../virtual-network/virtual-networks-overview.md) to learn more about Azure virtual networking.
Your virtual network must be in the same Azure region as your Azure Database for MySQL flexible server instance.
Here are some concepts to be familiar with when using virtual networks with Azur
- **Private DNS zone integration**
- Azure private DNS zone integration allows you to resolve the private DNS within the current VNET or any in-region peered VNET where the private DNS Zone is linked.
+ Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked.
- **Virtual network peering**
- A virtual network peering enables you to connect two or more Virtual Networks in Azure seamlessly. The peered virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. The traffic between the client application and the Azure Database for MySQL flexible server instance in peered VNets is routed only through Microsoft's private network and is isolated to that network.
+ A virtual network peering enables you to connect two or more virtual networks in Azure seamlessly. The peered virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. The traffic between the client application and the Azure Database for MySQL flexible server instance in peered virtual networks is routed only through Microsoft's private network and is isolated to that network.
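
To make this concrete, here's a hedged Azure CLI sketch of peering two virtual networks so that a client in one can reach an Azure Database for MySQL flexible server instance in the other. The resource group and network names are placeholders.

```azurecli-interactive
# Peering must be created in both directions (names are examples only).
az network vnet peering create --resource-group myResourceGroup --name vnet1-to-vnet2 --vnet-name VNet-1 --remote-vnet VNet-2 --allow-vnet-access
az network vnet peering create --resource-group myResourceGroup --name vnet2-to-vnet1 --vnet-name VNet-2 --remote-vnet VNet-1 --allow-vnet-access
```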
## Use Private DNS Zone -- If you use the Azure portal or the Azure CLI to create Azure Database for MySQL flexible server instances with VNET, a new private DNS zone ending with `mysql.database.azure.com` is auto-provisioned per server in your subscription using the server name provided. Alternatively, if you want to set up your own private DNS zone with the Azure Database for MySQL flexible server instance, see the [private DNS overview](../../dns/private-dns-overview.md) documentation.
+- If you use the Azure portal or the Azure CLI to create Azure Database for MySQL flexible server instances with a virtual network, a new private DNS zone ending with `mysql.database.azure.com` is autoprovisioned per server in your subscription using the server name provided. Alternatively, if you want to set up your own private DNS zone with the Azure Database for MySQL flexible server instance, see the [private DNS overview](../../dns/private-dns-overview.md) documentation.
- If you use Azure API, an Azure Resource Manager template (ARM template), or Terraform, create private DNS zones that end with `mysql.database.azure.com` and use them while configuring Azure Database for MySQL flexible server instances with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md). > [!IMPORTANT] > Private DNS zone names must end with `mysql.database.azure.com`. If you are connecting to an Azure Database for MySQL flexible server instance with SSL and you're using an option to perform full verification (sslmode=VERIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
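
As a sketch of that approach, the following Azure CLI commands create a compliant private DNS zone and reference it when creating the server. The resource group, zone, server, and network names are placeholders, and the `--private-dns-zone` parameter is assumed to be available in your CLI version.

```azurecli-interactive
# Create a private DNS zone whose name ends with mysql.database.azure.com (names are examples only).
az network private-dns zone create --resource-group myResourceGroup --name myserver.private.mysql.database.azure.com

# Reference the zone when creating the server with private access.
az mysql flexible-server create --resource-group myResourceGroup --name myserver --vnet myVNet --subnet myDelegatedSubnet --private-dns-zone myserver.private.mysql.database.azure.com
```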
-Learn how to create an Azure Database for MySQL flexible server instance with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
+Learn how to create an Azure Database for MySQL flexible server instance with private access (virtual network integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
## Integration with a custom DNS server
-If you're using the custom DNS server, then you must **use a DNS forwarder to resolve the FQDN of the Azure Database for MySQL flexible server instance**. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the VNet or reachable via the VNET's DNS Server setting. Refer to [name resolution that uses your DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+If you're using a custom DNS server, you must **use a DNS forwarder to resolve the FQDN of the Azure Database for MySQL flexible server instance**. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the virtual network or reachable via the virtual network's DNS server setting. Refer to [name resolution that uses your DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
> [!IMPORTANT] > For successful provisioning of the Azure Database for MySQL flexible server instance, even if you are using a custom DNS server, **you must not block DNS traffic to [AzurePlatformDNS](../../virtual-network/service-tags-overview.md) using [NSG](../../virtual-network/network-security-groups-overview.md)**.
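
For example, here's a hedged Azure CLI sketch of pointing a virtual network at a custom DNS server through the virtual network's DNS settings. The resource group, network name, and IP address are placeholders; the DNS server at that address must still forward queries to 168.63.129.16.

```azurecli-interactive
# Configure the virtual network to use a custom DNS server (the IP address is an example only).
az network vnet update --resource-group myResourceGroup --name myVNet --dns-servers 10.0.2.4
```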
-## Private DNS zone and VNET peering
+## Private DNS zone and virtual network peering
-Private DNS zone settings and VNET peering are independent of each other. For more information on creating and using Private DNS zones, see the [Use Private DNS Zone](#use-private-dns-zone) section.
+Private DNS zone settings and virtual network peering are independent of each other. For more information on creating and using Private DNS zones, see the [Use Private DNS Zone](#use-private-dns-zone) section.
-If you want to connect to the Azure Database for MySQL flexible server instance from a client that is provisioned in another VNET from the same region or a different region, you have to link the private DNS zone with the VNET. See [how to link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) documentation.
+If you want to connect to the Azure Database for MySQL flexible server instance from a client that's provisioned in another virtual network in the same region or a different region, you have to link the private DNS zone with that virtual network. See the [how to link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) documentation.
> [!NOTE] > Only private DNS zone names that end with `mysql.database.azure.com` can be linked.
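
For instance, a minimal Azure CLI sketch of linking the zone to the client's virtual network follows; the resource group, zone, and network names are placeholders.

```azurecli-interactive
# Link the private DNS zone to the client's virtual network (names are examples only).
az network private-dns link vnet create --resource-group myResourceGroup --zone-name myserver.private.mysql.database.azure.com --name vnet2-dns-link --virtual-network VNet-2 --registration-enabled false
```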
-## Connect from an on-premises server to an Azure Database for MySQL flexible server instance in a Virtual Network using ExpressRoute or VPN
+## Connect from an on-premises server to an Azure Database for MySQL flexible server instance in a virtual network using ExpressRoute or VPN
For workloads requiring access to an Azure Database for MySQL flexible server instance in a virtual network from an on-premises network, you need an [ExpressRoute](/azure/architecture/reference-architectures/hybrid-networking/expressroute/) or [VPN](/azure/architecture/reference-architectures/hybrid-networking/vpn/) and virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/). With this setup in place, you need a DNS forwarder to resolve the Azure Database for MySQL flexible server servername if you want to connect from client applications (like MySQL Workbench) running on on-premises virtual networks. This DNS forwarder is responsible for resolving all the DNS queries via a server-level forwarder to the Azure-provided DNS service [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). To configure correctly, you need the following resources: -- On-premises network-- An Azure Database for MySQL flexible server instance provisioned with private access (VNet integration)-- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)-- Use DNS forwarder [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md) deployed in Azure
+- An on-premises network.
+- An Azure Database for MySQL flexible server instance provisioned with private access (virtual network integration).
+- A virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/).
+- A DNS forwarder deployed in Azure that forwards DNS queries to [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
You can then use the Azure Database for MySQL flexible server servername (FQDN) to connect from the client application in the peered virtual network or on-premises network to the Azure Database for MySQL flexible server instance.
You can then use the Azure Database for MySQL flexible server servername (FQDN)
- Public endpoint (or public IP or DNS) - An Azure Database for MySQL flexible server instance deployed to a virtual network can't have a public endpoint. - After the Azure Database for MySQL flexible server instance is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet. You can't move the virtual network into another resource group or subscription.-- Private DNS integration config cannot be changed once deployed.-- Subnet size (address spaces) can't be increased once resources exist in the subnet.
+- Private DNS integration config can't be changed after deployment.
+- Subnet size (address spaces) can't be increased after resources exist in the subnet.
## Next steps -- Learn how to enable private access (VNet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md)-- Learn how to [use TLS](how-to-connect-tls-ssl.md)
+- Learn how to enable private access (virtual network integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md).
+- Learn how to [use TLS](how-to-connect-tls-ssl.md).
mysql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-virtual-network-cli.md
Previously updated : 9/21/2020 Last updated : 1/18/2024 # Create and manage virtual networks for Azure Database for MySQL - Flexible Server using the Azure CLI
Last updated 9/21/2020
Azure Database for MySQL flexible server supports two types of mutually exclusive network connectivity methods to connect to your Azure Database for MySQL flexible server instance. The two options are: - Public access (allowed IP addresses)-- Private access (VNet Integration)
+- Private access (virtual network integration)
-In this article, we will focus on creation of MySQL server with **Private access (VNet Integration)** using Azure CLI. With *Private access (VNet Integration)*, you can deploy your Azure Database for MySQL flexible server instance into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. In Private access, the connections to the Azure Database for MySQL flexible server instance are restricted to only within your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking-vnet.md).
+This article focuses on creating an Azure Database for MySQL flexible server instance with **Private access (virtual network integration)** by using the Azure CLI. With *Private access (virtual network integration)*, you can deploy your Azure Database for MySQL flexible server instance into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the Azure Database for MySQL flexible server instance are restricted to within your virtual network. To learn more, see [Private access (virtual network integration)](./concepts-networking-vnet.md).
-In Azure Database for MySQL flexible server, you can only deploy the server to a virtual network and subnet during creation of the server. After the Azure Database for MySQL flexible server instance is deployed to a virtual network and subnet, you cannot move it to another virtual network, subnet or to *Public access (allowed IP addresses)*.
+In Azure Database for MySQL flexible server, you can only deploy the server to a virtual network and subnet during creation of the server. After the Azure Database for MySQL flexible server instance is deployed to a virtual network and subnet, you can't move it to another virtual network, subnet or to *Public access (allowed IP addresses)*.
## Launch Azure Cloud Shell
If you prefer to install and use the CLI locally, this quickstart requires Azure
## Prerequisites
-You'll need to sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **ID** property, which refers to **Subscription ID** for your Azure account.
+You need to sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **ID** property, which refers to **Subscription ID** for your Azure account.
```azurecli-interactive az login
az account set --subscription <subscription id>
``` ## Create an Azure Database for MySQL flexible server instance using CLI
-You can use the `az mysql flexible-server` command to create the Azure Database for MySQL flexible server instance with *Private access (VNet Integration)*. This command uses Private access (VNet Integration) as the default connectivity method. A virtual network and subnet will be created for you if none is provided. You can also provide the already existing virtual network and subnet using subnet ID. <!-- You can provide the **vnet**,**subnet**,**vnet-address-prefix** or**subnet-address-prefix** to customize the virtual network and subnet.--> There are various options to create a Azure Database for MySQL flexible server instance using CLI as shown in the following examples.
+
+You can use the `az mysql flexible-server` command to create the Azure Database for MySQL flexible server instance with *Private access (virtual network integration)*. This command uses Private access (virtual network integration) as the default connectivity method. A virtual network and subnet are created for you if none is provided. You can also provide an existing virtual network and subnet by using the subnet ID. <!-- You can provide the **vnet**, **subnet**, **vnet-address-prefix**, or **subnet-address-prefix** to customize the virtual network and subnet.--> There are various options for creating an Azure Database for MySQL flexible server instance by using the CLI, as shown in the following examples.
>[!Important] > Using this command will delegate the subnet to **Microsoft.DBforMySQL/flexibleServers**. This delegation means that only Azure Database for MySQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet.
You can use the `az mysql flexible-server` command to create the Azure Database
Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-server) for the complete list of configurable CLI parameters. For example, in the following commands you can optionally specify the resource group. - Create an Azure Database for MySQL flexible server instance using default virtual network, subnet with default address prefix.+ ```azurecli-interactive az mysql flexible-server create ```-- Create an Azure Database for MySQL flexible server instance using already existing virtual network and subnet. If provided virtual network and subnet does not exists then virtual network and subnet with default address prefix will be created.+
+- Create an Azure Database for MySQL flexible server instance using an already existing virtual network and subnet. If the provided virtual network and subnet don't exist, a virtual network and subnet with the default address prefix are created.
+ ```azurecli-interactive az mysql flexible-server create --vnet myVnet --subnet mySubnet ``` -- Create an Azure Database for MySQL flexible server instance using already existing virtual network, subnet, and using the subnet ID. The provided subnet should not have any other resource deployed in it and this subnet will be delegated to **Microsoft.DBforMySQL/flexibleServers**, if not already delegated.
+- Create an Azure Database for MySQL flexible server instance using an existing virtual network and subnet, referenced by the subnet ID. The provided subnet shouldn't have any other resources deployed in it, and it's delegated to **Microsoft.DBforMySQL/flexibleServers** if not already delegated.
+ ```azurecli-interactive az mysql flexible-server create --subnet /subscriptions/{SubID}/resourceGroups/{ResourceGroup}/providers/Microsoft.Network/virtualNetworks/{VNetName}/subnets/{SubnetName} ```+ > [!Note] > The virtual network and subnet should be in the same region and subscription as your Azure Database for MySQL flexible server instance. <-- Create an Azure Database for MySQL flexible server instance using a new virtual network, subnet with non-default address prefix.
+- Create an Azure Database for MySQL flexible server instance using a new virtual network, subnet with nondefault address prefix.
+ ```azurecli-interactive az mysql flexible-server create --vnet myVnet --address-prefixes 10.0.0.0/24 --subnet mySubnet --subnet-prefixes 10.0.0.0/24 ```
-Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-server) for the complete list of configurable CLI parameters.
+Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-server) for the complete list of configurable CLI parameters.
## Next steps+ - Learn more about [networking in Azure Database for MySQL flexible server](./concepts-networking.md). - [Create and manage Azure Database for MySQL flexible server virtual network using Azure portal](./how-to-manage-virtual-network-portal.md).-- Understand more about [Azure Database for MySQL flexible server virtual network](./concepts-networking-vnet.md#private-access-vnet-integration).
+- Understand more about [Azure Database for MySQL flexible server virtual network](./concepts-networking-vnet.md#private-access-virtual-network-integration).
mysql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-virtual-network-portal.md
Previously updated : 9/21/2020 Last updated : 1/18/2024 # Create and manage virtual networks for Azure Database for MySQL - Flexible Server using the Azure portal [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] - Azure Database for MySQL flexible server supports two types of mutually exclusive network connectivity methods to connect to your Azure Database for MySQL flexible server instance. The two options are: - Public access (allowed IP addresses)-- Private access (VNet Integration)
+- Private access (virtual network integration)
-In this article, we will focus on creation of MySQL server with **Private access (VNet Integration)** using Azure portal. With Private access (VNet Integration), you can deploy your Azure Database for MySQL flexible server instance into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the MySQL server are restricted to your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking-vnet.md#private-access-vnet-integration).
+This article focuses on creating a MySQL server with **Private access (virtual network integration)** by using the Azure portal. With Private access (virtual network integration), you can deploy your Azure Database for MySQL flexible server instance into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the MySQL server are restricted to your virtual network. To learn more, see [Private access (virtual network integration)](./concepts-networking-vnet.md#private-access-virtual-network-integration).
>[!Note] >You can deploy your Azure Database for MySQL flexible server instance into a virtual network and subnet during server creation. After the Azure Database for MySQL flexible server instance is deployed, you cannot move it into another virtual network, subnet or to *Public access (allowed IP addresses)*.
To create an Azure Database for MySQL flexible server instance in a virtual netw
## Next steps - [Create and manage Azure Database for MySQL flexible server virtual network using Azure CLI](./how-to-manage-virtual-network-cli.md).-- Learn more about [networking in Azure Database for MySQL flexible server](./concepts-networking.md)-- Understand more about [Azure Database for MySQL flexible server virtual network](./concepts-networking-vnet.md#private-access-vnet-integration).
+- Learn more about [networking in Azure Database for MySQL flexible server](./concepts-networking.md).
+- Understand more about [Azure Database for MySQL flexible server virtual network](./concepts-networking-vnet.md#private-access-virtual-network-integration).
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
Title: VNet flow logs (preview)
-description: Learn about Azure Network Watcher VNet flow logs feature and how to use them to record your virtual networks traffic.
+description: Learn about Azure Network Watcher VNet flow logs and how to use them to record your virtual network's traffic.
Last updated 01/16/2024
-#CustomerIntent: As an Azure administrator, I want to learn about VNet flow logs so that I can log my network traffic to analyze and optimize the network performance.
+#CustomerIntent: As an Azure administrator, I want to learn about VNet flow logs so that I can log my network traffic to analyze and optimize network performance.
# VNet flow logs (preview)
-> [!IMPORTANT]
-> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Virtual network (VNet) flow logs are a feature of Azure Network Watcher. You can use them to log information about IP traffic flowing through a virtual network.
+
+Flow data from VNet flow logs is sent to Azure Storage. From there, you can access the data and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS). VNet flow logs overcome some of the limitations of [NSG flow logs](network-watcher-nsg-flow-logging-overview.md).
-Virtual network (VNet) flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a virtual network. Flow data is sent to Azure Storage from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. Network Watcher VNet flow logs capability overcomes some of the existing limitations of [NSG flow logs](network-watcher-nsg-flow-logging-overview.md).
+> [!IMPORTANT]
+> The VNet flow logs feature is currently in preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Why use flow logs?
-It's vital to monitor, manage, and know your network so that you can protect and optimize it. You may need to know the current state of the network, who's connecting, and where users are connecting from. You may also need to know which ports are open to the internet, what network behavior is expected, what network behavior is irregular, and when sudden rises in traffic happen.
+It's vital to monitor, manage, and know your network so that you can protect and optimize it. You might need to know the current state of the network, who's connecting, and where users are connecting from. You might also need to know which ports are open to the internet, what network behavior is expected, what network behavior is irregular, and when sudden rises in traffic happen.
Flow logs are the source of truth for all network activity in your cloud environment. Whether you're in a startup that's trying to optimize resources or a large enterprise that's trying to detect intrusion, flow logs can help. You can use them for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more.
Flow logs are the source of truth for all network activity in your cloud environ
- Analyze network flows from compromised IPs and network interfaces. - Export flow logs to any SIEM or IDS tool of your choice.
-## VNet flow logs compared to NSG flow logs
+## VNet flow logs compared to NSG flow logs
+
+Both VNet flow logs and [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) record IP traffic, but they differ in their behavior and capabilities.
-Both VNet flow logs and [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) record IP traffic but they differ in their behavior and capabilities. VNet flow logs simplify the scope of traffic monitoring by allowing you to enable logging at [virtual networks](../virtual-network/virtual-networks-overview.md), ensuring that traffic through all supported workloads within a virtual network is recorded. VNet flow logs also avoids the need to enable multi-level flow logging such as in cases of [NSG flow logs](network-watcher-nsg-flow-logging-overview.md#best-practices) where network security groups are configured at both subnet and network interface (NIC).
+VNet flow logs simplify the scope of traffic monitoring because you can enable logging at the [virtual network](../virtual-network/virtual-networks-overview.md) level. Traffic through all supported workloads within a virtual network is recorded.
-In addition to existing support to identify allowed/denied traffic by [network security group rules](../virtual-network/network-security-groups-overview.md), VNet flow logs support identification of traffic allowed/denied by [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). VNet flow logs also support evaluating the encryption status of your network traffic in scenarios where [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md) is enabled.
+VNet flow logs also avoid the need to enable multiple-level flow logging, such as in [NSG flow logs](network-watcher-nsg-flow-logging-overview.md#best-practices). In NSG flow logs, network security groups are configured at both the subnet and the network interface (NIC).
+
+In addition to existing support to identify traffic that [network security group rules](../virtual-network/network-security-groups-overview.md) allow or deny, VNet flow logs support identification of traffic that [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md) allow or deny. VNet flow logs also support evaluating the encryption status of your network traffic in scenarios where you're using [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md).
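
To illustrate, here's a hedged Azure CLI sketch of enabling VNet flow logs on a virtual network. The resource names are placeholders, and the `--vnet` parameter is an assumption based on the Network Watcher flow log command; check the VNet flow logs CLI article linked later in this article for the exact preview syntax.

```azurecli-interactive
# Hypothetical sketch: enable VNet flow logs that target a virtual network (names and the --vnet parameter are assumptions).
az network watcher flow-log create --location eastus --resource-group myResourceGroup --name myVNetFlowLog --vnet myVNet --storage-account myStorageAccount
```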
## How logging works Key properties of VNet flow logs include: - Flow logs operate at Layer 4 of the Open Systems Interconnection (OSI) model and record all IP flows going through a virtual network.-- Logs are collected at 1-minute intervals through the Azure platform and don't affect your Azure resources or network traffic.-- Logs are written in the JSON (JavaScript Object Notation) format.-- Each log record contains the network interface (NIC) the flow applies to, 5-tuple information, traffic direction, flow state, encryption state and throughput information.-- All traffic flows in your network are evaluated through the rules in the applicable [network security group rules](../virtual-network/network-security-groups-overview.md) or [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). For more information, see [Log format](#log-format).
+- Logs are collected at one-minute intervals through the Azure platform. They don't affect your Azure resources or network traffic.
+- Logs are written in the JavaScript Object Notation (JSON) format.
+- Each log record contains the network interface that the flow applies to, 5-tuple information, traffic direction, flow state, encryption state, and throughput information.
+- All traffic flows in your network are evaluated through the applicable [network security group rules](../virtual-network/network-security-groups-overview.md) or [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md).
## Log format VNet flow logs have the following properties: -- `time`: Time in UTC when the event was logged. -- `flowLogVersion`: Version of flow log schema.-- `flowLogGUID`: The resource GUID of the FlowLog resource. -- `macAddress`: MAC address of the network interface where the event was captured. -- `category`: Category of the event. The category is always `FlowLogFlowEvent`. -- `flowLogResourceID`: Resource ID of the FlowLog resource. -- `targetResourceID`: Resource ID of target resource associated to the FlowLog resource. -- `operationName`: Always `FlowLogFlowEvent`.
+- `time`: Time in UTC when the event was logged.
+- `flowLogVersion`: Version of the flow log schema.
+- `flowLogGUID`: Resource GUID of the `FlowLog` resource.
+- `macAddress`: MAC address of the network interface where the event was captured.
+- `category`: Category of the event. The category is always `FlowLogFlowEvent`.
+- `flowLogResourceID`: Resource ID of the `FlowLog` resource.
+- `targetResourceID`: Resource ID of the target resource that's associated with the `FlowLog` resource.
+- `operationName`: Always `FlowLogFlowEvent`.
- `flowRecords`: Collection of flow records.
- - `flows`: Collection of flows. This property has multiple entries for different ACLs.
- - `aclID`: Identifier of the resource evaluating traffic, either a network security group or Virtual Network Manager. For cases like traffic denied by encryption, this value is `unspecified`.
- - `flowGroups`: Collection of flow records at a rule level.
- - `rule`: Name of the rule that allowed or denied the traffic. For traffic denied due to encryption, this value is `unspecified`.
- - `flowTuples`: string that contains multiple properties for the flow tuple in a comma-separated format:
- - `Time Stamp`: Time stamp of when the flow occurred in UNIX epoch format.
- - `Source IP`: Source IP address.
- - `Destination IP`: Destination IP address.
- - `Source port`: Source port.
- - `Destination port`: Destination Port.
- - `Protocol`: Layer 4 protocol of the flow expressed in IANA assigned values.
- - `Flow direction`: Direction of the traffic flow. Valid values are `I` for inbound and `O` for outbound.
- - `Flow state`: State of the flow. Possible states are:
- - `B`: Begin, when a flow is created. No statistics are provided.
- - `C`: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals.
- - `E`: End, when a flow is terminated. Statistics are provided.
- - `D`: Deny, when a flow is denied.
- - `Flow encryption`: Encryption state of the flow. Possible values are:
- - `X`: Encrypted.
- - `NX`: Unencrypted.
- - `NX_HW_NOT_SUPPORTED`: Unsupported hardware.
- - `NX_SW_NOT_READY`: Software not ready.
- - `NX_NOT_ACCEPTED`: Drop due to no encryption.
- - `NX_NOT_SUPPORTED`: Discovery not supported.
- - `NX_LOCAL_DST`: Destination on same host.
- - `NX_FALLBACK`: Fall back to no encryption.
- - `Packets sent`: Total number of packets sent from source to destination since the last update.
- - `Bytes sent`: Total number of packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload.
- - `Packets received`: Total number of packets sent from destination to source since the last update.
- - `Bytes received`: Total number of packet bytes sent from destination to source since the last update. Packet bytes include packet header and payload.
-
-Traffic in your virtual networks is Unencrypted (NX) by default. For encrypted traffic, enable [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md).
+ - `flows`: Collection of flows. This property has multiple entries for access control lists (ACLs):
+ - `aclID`: Identifier of the resource that's evaluating traffic, either a network security group or Virtual Network Manager. For traffic that's denied because of encryption, this value is `unspecified`.
+ - `flowGroups`: Collection of flow records at a rule level:
+ - `rule`: Name of the rule that allowed or denied the traffic. For traffic that's denied because of encryption, this value is `unspecified`.
+ - `flowTuples`: String that contains multiple properties for the flow tuple in a comma-separated format:
+ - `Time Stamp`: Time stamp of when the flow occurred, in UNIX epoch format.
+ - `Source IP`: Source IP address.
+ - `Destination IP`: Destination IP address.
+ - `Source port`: Source port.
+ - `Destination port`: Destination port.
+ - `Protocol`: Layer 4 protocol of the flow, expressed in IANA assigned values.
+ - `Flow direction`: Direction of the traffic flow. Valid values are `I` for inbound and `O` for outbound.
+ - `Flow state`: State of the flow. Possible states are:
+ - `B`: Begin, when a flow is created. No statistics are provided.
+ - `C`: Continuing for an ongoing flow. Statistics are provided at five-minute intervals.
+ - `E`: End, when a flow is terminated. Statistics are provided.
+ - `D`: Deny, when a flow is denied.
+ - `Flow encryption`: Encryption state of the flow. The table after this list describes the possible values.
+ - `Packets sent`: Total number of packets sent from the source to the destination since the last update.
+ - `Bytes sent`: Total number of packet bytes sent from the source to the destination since the last update. Packet bytes include the packet header and payload.
+ - `Packets received`: Total number of packets sent from the destination to the source since the last update.
+ - `Bytes received`: Total number of packet bytes sent from the destination to the source since the last update. Packet bytes include the packet header and payload.
`Flow encryption` has the following possible encryption statuses:
-| Encryption Status | Description |
+| Encryption status | Description |
| -- | -- |
-| `X` | **Connection is encrypted**. Encryption is configured and the platform has encrypted the connection. |
-| `NX` | **Connection is Unencrypted**. This event is logged in two scenarios: <br> - When encryption isn't configured. <br> - When an encrypted virtual machine communicates with an endpoint that lacks encryption (such as an internet endpoint). |
-| `NX_HW_NOT_SUPPORTED` | **Unsupported hardware**. Encryption is configured, but the virtual machine is running on a host that doesn't support encryption. This issue can usually be the case where the FPGA isn't attached to the host, or could be faulty. Report this issue to Microsoft for investigation. |
-| `NX_SW_NOT_READY` | **Software not ready**. Encryption is configured, but the software component (GFT) in the host networking stack isn't ready to process encrypted connections. This issue can happen when the virtual machine is booting for the first time / restarting / redeployed. It can also happen in the case where there's an update to the networking components on the host where virtual machine is running. In all these scenarios, the packet gets dropped. The issue should be temporary and encryption should start working once either the virtual machine is fully up and running or the software update on the host is complete. If the issue is seen for longer durations, report it to Microsoft for investigation. |
-| `NX_NOT_ACCEPTED` | **Drop due to no encryption**. Encryption is configured on both source and destination endpoints with drop on unencrypted policy. If there's a failure to encrypt traffic, packet is dropped. |
-| `NX_NOT_SUPPORTED` | **Discovery not supported**. Encryption is configured, but the encryption session wasn't established, as discovery isn't supported in the host networking stack. In this case, packet is dropped. If you encounter this issue, report it to Microsoft for investigation. |
-| `NX_LOCAL_DST` | **Destination on same host**. Encryption is configured, but the source and destination virtual machines are running on the same Azure host. In this case, the connection isn't encrypted by design. |
-| `NX_FALLBACK` | **Fall back to no encryption**. Encryption is configured with the allow unencrypted policy for both source and destination endpoints. Encryption was attempted, but ran into an issue. In this case, connection is allowed but it isn't encrypted. An example of this can be, the virtual machine initially landed on a node that supports encryption, but later, this support was disabled. |
+| `X` | **Connection is encrypted**. Encryption is configured, and the platform encrypted the connection. |
+| `NX` | **Connection is unencrypted**. This event is logged in two scenarios: <br> - When encryption isn't configured. <br> - When an encrypted virtual machine communicates with an endpoint that lacks encryption (such as an internet endpoint). |
+| `NX_HW_NOT_SUPPORTED` | **Hardware is unsupported**. Encryption is configured, but the virtual machine is running on a host that doesn't support encryption. This problem usually happens because the field-programmable gate array (FPGA) isn't attached to the host or is faulty. Report this problem to Microsoft for investigation. |
+| `NX_SW_NOT_READY` | **Software isn't ready**. Encryption is configured, but the software component (GFT) in the host networking stack isn't ready to process encrypted connections. This problem can happen when the virtual machine is starting for the first time, is restarting, or is redeployed. It can also happen when there's an update to the networking components on the host where the virtual machine is running. In all these scenarios, the packet is dropped. The problem should be temporary. Encryption should start working after either the virtual machine is fully up and running or the software update on the host is complete. If the problem persists, report it to Microsoft for investigation. |
+| `NX_NOT_ACCEPTED` | **Drop due to no encryption**. Encryption is configured on both source and destination endpoints, with a drop on unencrypted policies. If traffic encryption fails, the packet is dropped. |
+| `NX_NOT_SUPPORTED` | **Discovery is unsupported**. Encryption is configured, but the encryption session wasn't established because the host networking stack doesn't support discovery. In this case, the packet is dropped. If you encounter this problem, report it to Microsoft for investigation. |
+| `NX_LOCAL_DST` | **Destination is on the same host**. Encryption is configured, but the source and destination virtual machines are running on the same Azure host. In this case, the connection isn't encrypted by design. |
+| `NX_FALLBACK` | **Fall back to no encryption**. Encryption is configured with the **Allow unencrypted** policy for both source and destination endpoints. The system attempted encryption but had a problem. In this case, the connection is allowed but isn't encrypted. For example, a virtual machine initially landed on a node that supports encryption, but this support was removed later. |
+Traffic in your virtual networks is unencrypted (`NX`) by default. For encrypted traffic, see [Virtual network encryption](../virtual-network/virtual-network-encryption-overview.md).
## Sample log record
-In the following example of VNet flow logs, multiple records that follow the property list described earlier.
+In the following example of VNet flow logs, multiple records follow the property list described earlier.
```json {
In the following example of VNet flow logs, multiple records that follow the pro
} ```+ ## Log tuple and bandwidth calculation
-Here's an example bandwidth calculation for flow tuples from a TCP conversation between **185.170.185.105:35370** and **10.2.0.4:23**:
+Here's an example bandwidth calculation for flow tuples from a TCP conversation between `185.170.185.105:35370` and `10.2.0.4:23`:
`1493763938,185.170.185.105,10.2.0.4,35370,23,6,I,B,NX,,,,` `1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880` `1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072`
-For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000.
+For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1,021 + 52 + 8,005 + 47 = 9,125. The total number of bytes transferred is 588,096 + 29,952 + 4,610,880 + 27,072 = 5,256,000.
## Considerations for VNet flow logs ### Storage account - **Location**: The storage account must be in the same region as the virtual network.-- **Subscription**: The storage account must be in the same subscription of the virtual network or in a subscription associated with the same Microsoft Entra tenant of the virtual network's subscription.
+- **Subscription**: The storage account must be in either:
+
+ - The same subscription as the virtual network.
+ - A subscription that's associated with the same Microsoft Entra tenant as the virtual network's subscription.
- **Performance tier**: The storage account must be standard. Premium storage accounts aren't supported. - **Self-managed key rotation**: If you change or rotate the access keys to your storage account, VNet flow logs stop working. To fix this problem, you must disable and then re-enable VNet flow logs. ### Cost
-VNet flow logging is billed on the volume of logs produced. High traffic volume can result in large-flow log volume and the associated costs.
+VNet flow logs are billed on the volume of logs produced. High traffic volume can result in large-flow log volume and the associated costs.
Pricing of VNet flow logs doesn't include the underlying costs of storage. Using the retention policy feature with VNet flow logs means incurring separate storage costs for extended periods of time.
-If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
+If you want to retain data forever and don't want to apply any retention policy, set retention days to zero. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
## Pricing
-Currently, VNet flow logs aren't billed. In the future, VNet flow logs will be billed per gigabyte of *Network Logs Collected* and will come with a free tier of 5 GB/month per subscription. If VNet flow logs are configured with traffic analytics enabled, existing traffic analytics pricing applies. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+Currently, VNet flow logs aren't billed. In the future, VNet flow logs will be billed per gigabyte of *network logs collected* and will come with a free tier of 5 GB/month per subscription. If you enable traffic analytics for VNet flow logs, existing pricing for traffic analytics applies. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
## Availability
-VNet flow logs is available in the following regions during the preview:
+VNet flow logs are available in the following regions during the preview:
- East US 2 EUAP - Central US EUAP
VNet flow logs is available in the following regions during the preview:
- West US - West US 2
-To sign up to get access to the public preview, see [VNet flow logs - public preview sign up](https://aka.ms/VNetflowlogspreviewsignup).
+To get access to the preview, go to the [VNet flow logs preview sign-up page](https://aka.ms/VNetflowlogspreviewsignup).
## Related content - To learn how to manage VNet flow logs, see [Create, change, enable, disable, or delete VNet flow logs using Azure PowerShell](vnet-flow-logs-powershell.md) or [Create, change, enable, disable, or delete VNet flow logs using the Azure CLI](vnet-flow-logs-cli.md).-- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md) and [Traffic analytics schema](traffic-analytics-schema.md).
+- To learn about traffic analytics, see [Traffic analytics overview](traffic-analytics.md) and [Schema and data aggregation in Azure Network Watcher traffic analytics](traffic-analytics-schema.md).
- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
networking Nva Accelerated Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/nva-accelerated-connections.md
-# Accelerated connections and NVAs (Limited GA)
+# Accelerated connections on NVAs or other VMs (Limited GA)
-This article helps you understand the **Accelerated Connections** feature. When Accelerated Connections is enabled on the virtual network interface (vNIC) with Accelerated Networking, this feature significantly improves networking efficiency, resulting in enhanced overall performance. This high-performance feature offers industry leading performance in Connections Per Second (CPS) optimization, along with improvements to handling large amounts of simultaneous connections. The feature also improves the number of Total Active Connections for network intensive workloads. Accelerated Connections is configured at the network interface level to allow flexibility to size the performance at vNIC. This especially benefits smaller VM sizes. These benefits are available for Network Virtual Appliances (NVAs) with a large number of connections. To access this feature during limited General Availability (limited GA), use the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
+This article helps you understand the **Accelerated Connections** feature. When Accelerated Connections is enabled on a virtual network interface (vNIC) with Accelerated Networking, it significantly improves networking efficiency, resulting in enhanced overall performance. This high-performance feature offers industry-leading Connections Per Second (CPS) optimization, along with improvements to handling large numbers of simultaneous connections. The feature also improves the number of Total Active Connections for network-intensive workloads. Accelerated Connections is configured at the network interface level to allow flexibility to size performance at the vNIC, which especially benefits smaller VM sizes. These benefits are available for Network Virtual Appliances (NVAs) as well as Virtual Machines (VMs) with a large number of connections. To access this feature during limited General Availability (limited GA), use the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
> [!IMPORTANT] > This feature is currently in limited General Availability (GA) and customer sign-up is needed to use it.
Feature enablement is at the vNIC level and irrespective of the VM size, making
There are a total of four performance tiers at the vNIC level, which gives you the flexibility to control networking capability. Each tier has different networking capabilities. Instructions on how to select a performance tier based on VM size will be provided after a customer signs up for the feature.
-Accelerated Connections is implemented at the network interface level to allow maximum flexibility of network capacity. Multiple vNICs can be configured with this enhancement, the number depends on the supported VM family. Network Virtual Appliances (NVAs) on Azure Marketplace will be the first workloads to be offered this ground-breaking feature.
+Accelerated Connections is implemented at the network interface level to allow maximum flexibility of network capacity. Multiple vNICs can be configured with this enhancement; the number depends on the supported VM family. VM families up to v4 will be the first workloads to be offered this ground-breaking feature.
+
+Network Virtual Appliances (NVAs) with the largest-scale workloads requiring virtual firewalls, virtual switches, load balancers, and other critical network features, as well as Virtual Machines (VMs) with high-connection workloads, will experience dramatically improved CPS performance with Accelerated Connections.
-Network Virtual Appliances (NVAs) with the largest scale workloads requiring virtual firewalls, virtual switches, load balancers and other critical network features will experience dramatically improved CPS performance with Accelerated Connections.
-> [!NOTE]
-> During limited GA, this feature is only supported for NVAs available on the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=network%20virtual%20appliance&page=1&filters=virtual-machine-images%3Bpartners).
->
**Architecture diagram**
Network Virtual Appliances (NVAs) with the largest scale workloads requiring vir
### Considerations and limitations
-* This feature is available only for NVAs deployed from Azure Marketplace during limited GA.
+* This feature is available for NVAs deployed from Azure Marketplace and Azure Virtual Machines during limited GA.
* To enable this feature, you must sign up using the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706). * This feature can be enabled and is supported only on new deployments. * Feature support may vary as per the NVAs available on Marketplace.
openshift Howto Create A Storageclass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-storageclass.md
ARO_RESOURCE_GROUP=aro-rg
CLUSTER=cluster ARO_SERVICE_PRINCIPAL_ID=$(az aro show -g $ARO_RESOURCE_GROUP -n $CLUSTER --query servicePrincipalProfile.clientId -o tsv)
-az role assignment create --role Contributor --scope /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName --assignee $ARO_SERVICE_PRINCIPAL_ID -g $AZURE_FILES_RESOURCE_GROUP
+az role assignment create --role Contributor --scope /subscriptions/mySubscriptionID/resourceGroups/$AZURE_FILES_RESOURCE_GROUP --assignee $ARO_SERVICE_PRINCIPAL_ID
``` ### Set ARO cluster permissions
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
param aadClientSecret string
@description('The ObjectID of the Resource Provider Service Principal') param rpObjectId string
+@description('Specify if FIPS validated crypto modules are used')
+@allowed([
+ 'Enabled'
+ 'Disabled'
+])
+param fips string = 'Disabled'
+
+@description('Specify if master VMs are encrypted at host')
+@allowed([
+ 'Enabled'
+ 'Disabled'
+])
+param masterEncryptionAtHost string = 'Disabled'
+
+@description('Specify if worker VMs are encrypted at host')
+@allowed([
+ 'Enabled'
+ 'Disabled'
+])
+param workerEncryptionAtHost string = 'Disabled'
+ var contributorRoleDefinitionId = resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c') var resourceGroupId = '/subscriptions/${subscription().subscriptionId}/resourceGroups/aro-${domain}-${location}' var masterSubnetId=resourceId('Microsoft.Network/virtualNetworks/subnets', clusterVnetName, 'master')
resource clusterName_resource 'Microsoft.RedHatOpenShift/OpenShiftClusters@2023-
domain: domain resourceGroupId: resourceGroupId pullSecret: pullSecret
+ fipsValidatedModules: fips
} networkProfile: { podCidr: podCidr
resource clusterName_resource 'Microsoft.RedHatOpenShift/OpenShiftClusters@2023-
masterProfile: { vmSize: masterVmSize subnetId: masterSubnetId
+ encryptionAtHost: masterEncryptionAtHost
} workerProfiles: [ {
resource clusterName_resource 'Microsoft.RedHatOpenShift/OpenShiftClusters@2023-
diskSizeGB: workerVmDiskSize subnetId: workerSubnetId count: workerCount
+ encryptionAtHost: workerEncryptionAtHost
} ] apiserverProfile: {
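
To show how these parameters might be supplied, here's a hedged sketch of a deployment command. The template file name, resource group, and parameter values are placeholders, and the template's other required parameters (such as the domain, pull secret, and service principal values) still need to be passed.

```azurecli-interactive
# Deploy the template with FIPS and host encryption enabled (file name and values are examples only;
# the template's other required parameters must also be supplied).
az deployment group create --resource-group myResourceGroup --template-file azuredeploy.bicep --parameters fips=Enabled masterEncryptionAtHost=Enabled workerEncryptionAtHost=Enabled
```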
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
The following metrics are available for an Azure Database for PostgreSQL flexibl
|Display name |Metric ID |Unit |Description |Default enabled| |--|--|-|--||
-|**Active Connections** |`active_connections` |Count |Total number of connections to database Server, covering all states of connections, including active, idle, and others as seen in `pg_stat_activity` view. For a detailed view focusing solely on active connections, use "Sessions By State" metric. |Yes |
+|**Active Connections** |`active_connections` |Count |Total number of connections to the database server, including all connection states such as active, idle, and others, as seen in the `pg_stat_activity` view. This figure represents the overall sum of connections across all states, without distinguishing between specific states. For an in-depth analysis of a specific state, such as active connections, refer to the **Sessions By State** metric. |Yes |
|**Backup Storage Used** |`backup_storage_used` |Bytes |Amount of backup storage used. This metric represents the sum of storage that's consumed by all the full backups, differential backups, and log backups that are retained based on the backup retention period that's set for the server. The frequency of the backups is service managed. For geo-redundant storage, backup storage usage is twice the usage for locally redundant storage.|Yes | |**Failed Connections** |`connections_failed` |Count |Number of failed connections. |Yes | |**Succeeded Connections** |`connections_succeeded` |Count |Number of succeeded connections. |Yes |
You can choose from the following categories of enhanced metrics:
|Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||
-|**Sessions By State** |`sessions_by_state` |Count|Overall state of the backend. |State|No|
-|**Sessions By WaitEventType** |`sessions_by_wait_event_type` |Count|Sessions by the type of event for which the back end is waiting.|Wait Event Type|No|
-|**Oldest Backend** |`oldest_backend_time_sec` |Seconds|Age in seconds of the oldest back end (irrespective of the state).|Doesn't apply|No|
+|**Sessions By State** |`sessions_by_state` |Count|Sessions by state as shown in `pg_stat_activity` view. It categorizes client backends into various states, such as active or idle. |State|No|
+|**Sessions By WaitEventType** |`sessions_by_wait_event_type` |Count|Sessions by the type of event for which the client backend is waiting.|Wait Event Type|No|
+|**Oldest Backend** |`oldest_backend_time_sec` |Seconds|Age in seconds of the oldest backend (irrespective of the state).|Doesn't apply|No|
|**Oldest Query** |`longest_query_time_sec`|Seconds|Age in seconds of the longest query that's currently running. |Doesn't apply|No| |**Oldest Transaction** |`longest_transaction_time_sec`|Seconds|Age in seconds of the longest transaction (including idle transactions).|Doesn't apply|No| |**Oldest xmin** |`oldest_backend_xmin`|Count|The actual value of the oldest `xmin`. If `xmin` isn't increasing, it indicates that there are some long-running transactions that can potentially hold dead tuples from being removed. |Doesn't apply|No|
You can choose from the following categories of enhanced metrics:
- Enhanced metrics that use the DatabaseName dimension have a *50-database* limit. - On the *Burstable* SKU, the limit is 10 databases for metrics that use the DatabaseName dimension. - The DatabaseName dimension limit is applied on the database identifier (datid) column of the pg_stat_database system view, which reflects the order of creation for the database.-- The DatabaseName in the metrics dimension is *case insensitive*. That means that after querying pg_catalog.pg_stat_database, filtering out rows in which datname is either "template1" or "template0", ordering by datid, and limiting the returned rows to the first 50 (or 10 in the case of *Burstable* SKU), the metrics for database names in that result set, that are the same except for case (for example, *contoso_database* and *Contoso_database*) will be merged and might not show accurate data.
+- The `DatabaseName` in the metrics dimension is *case insensitive*. That means that after the service queries the `pg_stat_database` view, filters out rows in which `datname` is either `template1` or `template0`, orders by `datid`, and limits the returned rows to the first 50 (or 10 in the case of the *Burstable* SKU), metrics for database names in that result set that differ only by case (for example, `contoso_database` and `Contoso_database`) are merged and might not show accurate data.
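As a rough sketch of that filtering and ordering, the following query previews which databases would fall within the dimension limit; the service's internal logic may differ.

```sql
-- Exclude template databases, order by creation (datid), and keep the
-- first 50 databases (use LIMIT 10 on the Burstable SKU).
SELECT datid, datname
FROM pg_stat_database
WHERE datname NOT IN ('template0', 'template1')
ORDER BY datid
LIMIT 50;
```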
### Autovacuum metrics
-Autovacuum metrics can be used to monitor and tune autovacuum performance for Azure Database for PostgreSQL flexible server. Each metric is emitted at a *30-minute* interval and has up to *93 days* of retention. You can create alerts for specific metrics, and you can split and filter metrics data by using the DatabaseName dimension.
+Autovacuum metrics can be used to monitor and tune autovacuum performance for Azure Database for PostgreSQL - Flexible Server. Each metric is emitted at a *30-minute* interval and has up to *93 days* of retention. You can create alerts for specific metrics, and you can split and filter metrics data by using the `DatabaseName` dimension.
#### How to enable autovacuum metrics
You can use PgBouncer metrics to monitor the performance of the PgBouncer proces
|Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||
-|**Active client connections** |`client_connections_active` |Count|Connections from clients that are associated with an Azure Database for PostgreSQL flexible server connection. |DatabaseName|No |
-|**Waiting client connections** |`client_connections_waiting`|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL flexible server connection to service them.|DatabaseName|No |
-|**Active server connections** |`server_connections_active` |Count|Connections to Azure Database for PostgreSQL flexible server that are in use by a client connection. |DatabaseName|No |
-|**Idle server connections** |`server_connections_idle` |Count|Connections to Azure Database for PostgreSQL flexible server that are idle and ready to service a new client connection. |DatabaseName|No |
+|**Active client connections** |`client_connections_active` |Count|Connections from clients that are associated with an Azure Database for PostgreSQL - Flexible Server connection. |DatabaseName|No |
+|**Waiting client connections** |`client_connections_waiting`|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL - Flexible Server connection to service them.|DatabaseName|No |
+|**Active server connections** |`server_connections_active` |Count|Connections to Azure Database for PostgreSQL - Flexible Server that are in use by a client connection. |DatabaseName|No |
+|**Idle server connections** |`server_connections_idle` |Count|Connections to Azure Database for PostgreSQL - Flexible Server that are idle and ready to service a new client connection. |DatabaseName|No |
|**Total pooled connections** |`total_pooled_connections`|Count|Current number of pooled connections. |DatabaseName|No | |**Number of connection pools** |`num_pools` |Count|Total number of connection pools. |DatabaseName|No |
Is-db-alive is a database server availability metric for Azure Database for Post
|Display Name |Metric ID |Unit |Description |Dimension |Default enabled| |-|-|-|--|||
-|**Database Is Alive** |`is_db_alive` |Count |Indicates if the database is up or not |N/a |Yes |
+|**Database Is Alive** |`is_db_alive` |Count |Indicates if the database is up or not. |N/a |Yes |
#### Considerations when using the Database availability metrics
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
Last updated 2/10/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Multiple layers of security are available to help protect the data on your Azure Database for PostgreSQL flexible server instance. This article outlines those security options.
+Multiple layers of security are available to help protect the data on your Azure Database for PostgreSQL - Flexible Server instance. This article outlines those security options.
## Information protection and encryption
-Azure Database for PostgreSQL flexible server encrypts data in two ways:
+Azure Database for PostgreSQL - Flexible Server encrypts data in two ways:
-- **Data in transit**: Azure Database for PostgreSQL flexible server encrypts in-transit data with Secure Sockets Layer and Transport Layer Security (SSL/TLS). Encryption is enforced by default. See this [guide](how-to-connect-tls-ssl.md) for more details. For better security, you may choose to enable [SCRAM authentication](how-to-connect-scram.md).
+- **Data in transit**: Azure Database for PostgreSQL - Flexible Server encrypts in-transit data with Secure Sockets Layer and Transport Layer Security (SSL/TLS). Encryption is enforced by default. See this [guide](how-to-connect-tls-ssl.md) for more details. For better security, you may choose to enable [SCRAM authentication](how-to-connect-scram.md).
- Although it's not recommended, if needed, you have an option to disable TLS\SSL for connections to Azure Database for PostgreSQL flexible server by updating the `require_secure_transport` server parameter to OFF. You can also set TLS version by setting `ssl_max_protocol_version` server parameters.
+ Although it's not recommended, if needed, you have the option to disable TLS/SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the `require_secure_transport` server parameter to OFF. You can also set the TLS version by setting the `ssl_max_protocol_version` server parameter.
-- **Data at rest**: For storage encryption, Azure Database for PostgreSQL flexible server uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running.
+- **Data at rest**: For storage encryption, Azure Database for PostgreSQL - Flexible Server uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running.
The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled. ## Network security
-When you're running Azure Database for PostgreSQL flexible server, you have two main networking options:
+When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options:
- **Private access**: You can deploy your server into an Azure virtual network. Azure virtual networks help provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses. For more information, see the [networking overview for Azure Database for PostgreSQL - Flexible Server](concepts-networking.md).
To get alerts from the Microsoft Defender plan you'll first need to **enable it*
## Access management
-The best way to manage Azure Database for PostgreSQL flexible server database access permissions at scale is using the concept of [roles](https://www.postgresql.org/docs/current/user-manag.html). A role can be either a database user or a group of database users. Roles can own the database objects and assign privileges on those objects to other roles to control who has access to which objects. It is also possible to grant membership in a role to another role, thus allowing the member role to use privileges assigned to another role.
-Azure Database for PostgreSQL flexible server lets you grant permissions directly to the database users. **As a good security practice, it can be recommended that you create roles with specific sets of permissions based on minimum application and access requirements. You can then assign the appropriate roles to each user. Roles are used to enforce a *least privilege model* for accessing database objects.**
+The best way to manage Azure Database for PostgreSQL - Flexible Server database access permissions at scale is using the concept of [roles](https://www.postgresql.org/docs/current/user-manag.html). A role can be either a database user or a group of database users. Roles can own the database objects and assign privileges on those objects to other roles to control who has access to which objects. It is also possible to grant membership in a role to another role, thus allowing the member role to use privileges assigned to another role.
+Azure Database for PostgreSQL - Flexible Server lets you grant permissions directly to the database users. **As a good security practice, we recommend that you create roles with specific sets of permissions based on minimum application and access requirements. You can then assign the appropriate roles to each user. Roles are used to enforce a *least privilege model* for accessing database objects.**
-The Azure Database for PostgreSQL flexible server instance is created with the three default roles defined. You can see these roles by running the command:
+The Azure Database for PostgreSQL - Flexible Server instance is created with three default roles defined. You can see these roles by running the command:
```sql SELECT rolname FROM pg_roles; ```
-* azure_pg_admin.
-* azuresu.
-* administrator role.
+* `azure_pg_admin`
+* `azuresu`
+* administrator role
-While you're creating the Azure Database for PostgreSQL flexible server instance, you provide credentials for an **administrator role**. This administrator role can be used to create more [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
-For example, below we can create an example role called *demouser*,
+While you're creating the Azure Database for PostgreSQL - Flexible Server instance, you provide credentials for an **administrator role**. This administrator role can be used to create more [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
+For example, the following command creates an example user/role called `demouser`:
```SQL
-postgres=> create role demouser with password 'password123';
+postgres=> CREATE USER demouser PASSWORD 'password123';
``` The **administrator role** should never be used by the application.
-In cloud-based PaaS environments access to an Azure Database for PostgreSQL flexible server superuser account is restricted to control plane operations only by cloud operators. Therefore, the **azure_pg_admin** account exists as a pseudo-superuser account. Your administrator role is a member of the **azure_pg_admin** role.
-However, the server admin account is not part of the **azuresu** role, which has superuser privileges and is used to perform control pane operations. Since this service is a managed PaaS service, only Microsoft is part of the superuser role.
+In cloud-based PaaS environments, access to the Azure Database for PostgreSQL - Flexible Server superuser account is restricted to cloud operators for control plane operations only. Therefore, the `azure_pg_admin` account exists as a pseudo-superuser account, and your administrator role is a member of the `azure_pg_admin` role.
+However, the server admin account is not part of the `azuresu` role, which has superuser privileges and is used to perform control plane operations. Because this service is a managed PaaS service, only Microsoft is part of the superuser role.
> [!NOTE]
-> Number of superuser only permissions , such as creation of certain [implicit casts](https://www.postgresql.org/docs/current/sql-createcast.html), are not available with Azure Database for PostgreSQL flexible server, since azure_pg_admin role doesn't align to permissions of postgresql superuser role.
+> A number of superuser-only permissions, such as the creation of certain [implicit casts](https://www.postgresql.org/docs/current/sql-createcast.html), are not available with Azure Database for PostgreSQL - Flexible Server, because the `azure_pg_admin` role doesn't align with the permissions of the PostgreSQL superuser role.
You can periodically audit the list of roles in your server. For example, you can connect using the `psql` client and query the `pg_roles` view, which lists all the roles along with privileges such as the ability to create additional roles, create databases, and use replication.
rolvaliduntil |
rolbypassrls | f rolconfig | oid | 24827--- ```
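For example, an audit query along the following lines lists each role together with a few key attributes; which columns you inspect is up to you, and this is only a sketch.

```sql
-- List roles and a few attributes that are useful to review periodically.
SELECT rolname,
       rolcanlogin,
       rolcreaterole,
       rolcreatedb,
       rolreplication,
       rolbypassrls
FROM pg_roles
ORDER BY rolname;
```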
-[Audit logging](concepts-audit.md) is also available with Azure Database for PostgreSQL flexible server to track activity in your databases.
-
-> [!NOTE]
-> Azure Database for PostgreSQL flexible server currently doesn't support [Microsoft Defender for Cloud protection](../../security-center/azure-defender.md).
-
+[Audit logging](concepts-audit.md) is also available with Azure Database for PostgreSQL - Flexible Server to track activity in your databases.
### Controlling schema access
-Newly created databases in Azure Database for PostgreSQL flexible server have a default set of privileges in the database's public schema that allow all database users and roles to create objects. To better limit application user access to the databases that you create on your Azure Database for PostgreSQL flexible server instance, we recommend that you consider revoking these default public privileges. After doing so, you can then grant specific privileges for database users on a more granular basis. For example:
+Newly created databases in Azure Database for PostgreSQL - Flexible Server have a default set of privileges in the database's public schema that allow all database users and roles to create objects. To better limit application user access to the databases that you create on your Azure Database for PostgreSQL - Flexible Server instance, we recommend that you consider revoking these default public privileges. After doing so, you can then grant specific privileges for database users on a more granular basis. For example:
-* To prevent application database users from creating objects in the public schema, revoke create privileges to *public* schema
+* To prevent application database users from creating objects in the public schema, revoke the create privilege on the `public` schema from the `PUBLIC` role.
```sql REVOKE CREATE ON SCHEMA public FROM PUBLIC; ```
-* Next, create new database:
-```sql
-CREATE DATABASE Test_db;
-
-```
+* Next, create a new database.
+ ```sql
+ CREATE DATABASE Test_db;
+ ```
* Revoke all privileges on this new database from the PUBLIC role.
-```sql
-REVOKE ALL ON DATABASE Test_db FROM PUBLIC;
-
-```
+ ```sql
+ REVOKE ALL ON DATABASE Test_db FROM PUBLIC;
+ ```
* Create a custom role for application database users.
-```sql
-CREATE ROLE Test_db_user;
-```
+ ```sql
+ CREATE ROLE Test_db_user;
+ ```
* Give database users with this role the ability to connect to the database.
-```sql
-GRANT CONNECT ON DATABASE Test_db TO Test_db_user;
-GRANT ALL PRIVILEGES ON DATABASE Test_db TO Test_db_user;
--
-```
+ ```sql
+ GRANT CONNECT ON DATABASE Test_db TO Test_db_user;
+ GRANT ALL PRIVILEGES ON DATABASE Test_db TO Test_db_user;
+ ```
* Create a database user.
-```sql
-CREATE ROLE user1 LOGIN PASSWORD 'Password_to_change'
-```
+ ```sql
+ CREATE USER user1 PASSWORD 'Password_to_change';
+ ```
* Assign the role, with its connect and select privileges, to the user.
-```sql
-GRANT Test_db_user TO user1;
--
-```
+ ```sql
+ GRANT Test_db_user TO user1;
+ ```
In this example, user *user1* can connect to and has all privileges in our test database *Test_db*, but not in any other database on the server. Rather than giving this user/role *ALL PRIVILEGES* on that database and its objects, we recommend providing more selective permissions, such as *SELECT*, *INSERT*, and *EXECUTE*. For more information about privileges in PostgreSQL databases, see the [GRANT](https://www.postgresql.org/docs/current/sql-grant.html) and [REVOKE](https://www.postgresql.org/docs/current/sql-revoke.html) commands in the PostgreSQL docs. ## Row level security
-[Row level security (RLS)](https://www.postgresql.org/docs/current/ddl-rowsecurity.html) is an Azure Database for PostgreSQL flexible server security feature that allows database administrators to define policies to control how specific rows of data display and operate for one or more roles. Row level security is an additional filter you can apply to an Azure Database for PostgreSQL flexible server database table. When a user tries to perform an action on a table, this filter is applied before the query criteria or other filtering, and the data is narrowed or rejected according to your security policy. You can create row level security policies for specific commands like *SELECT*, *INSERT*, *UPDATE*, and *DELETE*, specify it for ALL commands. Use cases for row level security include PCI compliant implementations, classified environments, as well as shared hosting / multitenant applications.
+[Row level security (RLS)](https://www.postgresql.org/docs/current/ddl-rowsecurity.html) is an Azure Database for PostgreSQL - Flexible Server security feature that allows database administrators to define policies to control how specific rows of data display and operate for one or more roles. Row level security is an additional filter you can apply to an Azure Database for PostgreSQL - Flexible Server database table. When a user tries to perform an action on a table, this filter is applied before the query criteria or other filtering, and the data is narrowed or rejected according to your security policy. You can create row level security policies for specific commands like *SELECT*, *INSERT*, *UPDATE*, and *DELETE*, or specify them for ALL commands. Use cases for row level security include PCI-compliant implementations, classified environments, as well as shared hosting / multitenant applications.
-Only users with `SET ROW SECURITY` rights may apply row security rights to a table. The table owner may set row security on a table. Like `OVERRIDE ROW SECURITY` this is currently an implicit right. Row-level security does not override existing *GRANT* permissions, it adds a finer grained level of control. For example, setting `ROW SECURITY FOR SELECT` to allow a given user to give rows would only give that user access if the user also has *SELECT* privileges on the column or table in question.
+Only users with `SET ROW SECURITY` rights may apply row security rights to a table. The table owner may set row security on a table. Like `OVERRIDE ROW SECURITY` this is currently an implicit right. Row-level security does not override existing `GRANT` permissions; it adds a finer-grained level of control. For example, setting `ROW SECURITY FOR SELECT` to allow a given user to view rows would only give that user access if the user also has `SELECT` privileges on the column or table in question.
Here is an example showing how to create a policy that ensures members of the custom-created *"manager"* [role](#access-management) can access only the rows for a specific account. The code in the following example was shared in the [PostgreSQL documentation](https://www.postgresql.org/docs/current/ddl-rowsecurity.html).
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
CREATE POLICY account_managers ON accounts TO managers USING (manager = current_user); ```
-The USING clause implicitly adds a `WITH CHECK` clause, ensuring that members of the manager role cannot perform SELECT, DELETE, or UPDATE operations on rows that belong to other managers, and cannot INSERT new rows belonging to another manager.
+
+The USING clause implicitly adds a `WITH CHECK` clause, ensuring that members of the manager role cannot perform `SELECT`, `DELETE`, or `UPDATE` operations on rows that belong to other managers, and cannot `INSERT` new rows belonging to another manager.
+ > [!NOTE]
-> In [PostgreSQL it is possible for a user to be assigned the *BYPASSRLS* attribute by another superuser](https://www.postgresql.org/docs/current/ddl-rowsecurity.html). With this permission, a user can bypass RLS for all tables in Postgres, as is superuser. That permission cannot be assigned in Azure Database for PostgreSQL flexible server, since administrator role has no superuser privileges, as common in cloud based PaaS PostgreSQL service.
+> In [PostgreSQL it is possible for a user to be assigned the `BYPASSRLS` attribute by another superuser](https://www.postgresql.org/docs/current/ddl-rowsecurity.html). With this attribute, a user can bypass RLS for all tables in Postgres, as a superuser can. That attribute cannot be assigned in Azure Database for PostgreSQL - Flexible Server, because the administrator role has no superuser privileges, as is common in cloud-based PaaS PostgreSQL services.
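To make the policy example above concrete, the following sketch shows the surrounding statements end to end. The `accounts` table definition and the `GRANT` statement are illustrative assumptions, not part of the original example.

```sql
-- Illustrative sketch: the table definition and GRANT are assumptions.
CREATE ROLE managers;
CREATE TABLE accounts (manager text, company text, contact_email text);

ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY account_managers ON accounts TO managers
    USING (manager = current_user);

-- Members of the managers role can see and modify only rows where
-- the manager column matches their own user name.
GRANT SELECT, INSERT, UPDATE, DELETE ON accounts TO managers;
```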
## Updating passwords
-For better security, it is a good practice to periodically rotate your admin password and database user passwords. It is recommended to use strong passwords using upper and lower cases, numbers and special characters.
+For better security, it is a good practice to periodically rotate your admin password and database users' passwords. We recommend using strong passwords that contain uppercase and lowercase letters, numbers, and special characters.
## Using SCRAM The [Salted Challenge Response Authentication Mechanism (SCRAM)](https://datatracker.ietf.org/doc/html/rfc5802) greatly improves the security of password-based user authentication by adding several key security features that prevent rainbow-table attacks, man-in-the-middle attacks, and stored password attacks, while also adding support for multiple hashing algorithms and passwords that contain non-ASCII characters.
-If your [client driver supports SCRAM](https://wiki.postgresql.org/wiki/List_of_drivers) , you can **[setup access to Azure Database for PostgreSQL flexible server using SCRAM](./how-to-connect-scram.md)** as `scram-sha-256` vs. default `md5`.
+If your [client driver supports SCRAM](https://wiki.postgresql.org/wiki/List_of_drivers), you can **[set up access to Azure Database for PostgreSQL - Flexible Server using SCRAM](./how-to-connect-scram.md)** with `scram-sha-256` instead of the default `md5`.
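Assuming SCRAM is enabled on the server as described in the linked guide, a minimal sketch for storing a role's password with the SCRAM-SHA-256 method looks like the following; the role name and password are placeholders.

```sql
-- Hash passwords set in this session with SCRAM-SHA-256,
-- then reset the password so the stored hash uses the new method.
SET password_encryption = 'scram-sha-256';
ALTER ROLE demouser PASSWORD 'StrongPassword123!';

-- Confirm the session setting.
SHOW password_encryption;
```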
### Reset administrator password Follow the [how to guide](./how-to-manage-server-portal.md#reset-admin-password) to reset the admin password. ### Update database user password You can use client tools to update database user passwords. For example, ```SQL
-postgres=> alter role demouser with password 'Password123!';
+postgres=> ALTER ROLE demouser PASSWORD 'Password123!';
ALTER ROLE ``` ## Next steps
postgresql How To Integrate Azure Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-integrate-azure-ai.md
To retrieve the database connection details:
export PGPASSWORD="{your-password}" ```
- Add one extra environment variable to require an SSL connection to the database.
-
- ```bash
- export PGSSLMODE=require
- ```
-
Connect to your database using the [psql command-line utility](https://www.postgresguide.com/utilities/psql/) by entering the following at the prompt. ```bash
The meta-command output shows that the `azure_ai` extension creates three schema
| `azure_openai` | Contains the UDFs that enable calling an Azure OpenAI endpoint. | | `azure_cognitive` | Provides UDFs and composite types related to integrating the database with Azure Cognitive Services. |
-The functions and types are all associated with one of the schemas. To review the functions defined in the `azure_ai` schema, use the `\df` meta-command, specifying the schema whose functions should be displayed. The `\x` commands before and after the `\df` command toggle the expanded display on and off to make the output from the command easier to view in the Azure Cloud Shell.
+The functions and types are all associated with one of the schemas. To review the functions defined in the `azure_ai` schema, use the `\df` meta-command, specifying the schema whose functions should be displayed. The `\x auto` command before the `\df` command sets the expanded display to automatic mode, which makes the output from the command easier to view in the Azure Cloud Shell.
```sql
-\x
+\x auto
\df+ azure_ai.*
-\x
``` The `azure_ai.set_setting()` function lets you set the endpoint and critical values for Azure AI services. It accepts a **key** and the **value** to assign it. The `azure_ai.get_setting()` function provides a way to retrieve the values you set with the `set_setting()` function. It accepts the **key** of the setting you want to view. For both methods, the key must be one of the following:
The `bill_summaries` table is now ready to store embeddings. Using the `azure_op
Before using the `create_embeddings()` function, run the following command to inspect it and review the required arguments: ```sql
-\x
\df+ azure_openai.*
-\x
``` The `Argument data types` property in the output of the `\df+ azure_openai.*` command reveals the list of arguments the function expects.
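As a sketch of how the function is typically called, the following query generates an embedding for a single row. The deployment name `embeddings` and the `bill_text` column are assumptions for illustration.

```sql
-- 'embeddings' is assumed to be the name of your Azure OpenAI embedding
-- model deployment; bill_text is an assumed text column.
SELECT azure_openai.create_embeddings('embeddings', bill_text)
FROM bill_summaries
LIMIT 1;
```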
To demonstrate some of the capabilities of the `azure_cognitive` functions of th
To use the Azure AI Language service's ability to generate new, original content, you use the `summarize_abstractive` function to create a summary of text input. Use the `\df` meta-command from `psql` again, this time to look specifically at the `azure_cognitive.summarize_abstractive` function. ```sql
-\x
\df azure_cognitive.summarize_abstractive
-\x
``` The `Argument data types` property in the output of the `\df azure_cognitive.summarize_abstractive` command reveals the list of arguments the function expects.
The `Argument data types` property in the output of the `\df azure_cognitive.sum
| sentence_count | `integer` | 3 | The maximum number of sentences to include in the generated summary. | | disable_service_logs | `boolean` | false | The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur. For more information, see Cognitive Services Compliance and Privacy notes at <https://aka.ms/cs-compliance> and Microsoft Responsible AI principles at <https://www.microsoft.com/ai/responsible-ai>. |
-The `summarize_abstractive` function, including its required arguments, looks like the following:
-
-```sql
-azure_cognitive.summarize_abstractive(text TEXT, language TEXT)
-```
+The `summarize_abstractive` function requires the following arguments: `azure_cognitive.summarize_abstractive(text TEXT, language TEXT)`.
The following query against the `bill_summaries` table uses the `summarize_abstractive` function to generate a new one-sentence summary for the text of a bill, allowing you to incorporate the power of generative AI directly into your queries.
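A sketch of such a query, assuming `bill_id`, `title`, and `bill_text` columns, might look like the following.

```sql
-- Sketch only: bill_id, title, and bill_text are assumed column names.
SELECT bill_id,
       title,
       azure_cognitive.summarize_abstractive(bill_text, 'en', sentence_count => 1) AS one_sentence_summary
FROM bill_summaries
LIMIT 1;
```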
The function can also be used to write data into your database tables. Modify th
```sql ALTER TABLE bill_summaries
-ADD COLUMN one_sentence_summary text;
+ADD COLUMN one_sentence_summary TEXT;
``` Next, update the table with the summaries. The `summarize_abstractive` function returns an array of text (`text[]`). The `array_to_string` function converts the return value to its string representation. In the query below, the `throw_on_error` argument has been set to `false`. This setting allows the summarization process to continue if an error occurs.
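A sketch of such an update, assuming a `bill_text` source column and the `one_sentence_summary` column added above, might look like the following.

```sql
-- Sketch only: column names are assumptions. throw_on_error => false lets
-- the update continue past individual summarization errors, as noted above.
UPDATE bill_summaries
SET one_sentence_summary = array_to_string(
      azure_cognitive.summarize_abstractive(bill_text, 'en',
                                             sentence_count => 1,
                                             throw_on_error => false),
      ' ');
```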
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. Azure Database
| Region | Intel V3/V4/V5/AMD Compute | Zone-Redundant HA | Same-Zone HA | Geo-Redundant backup | | | | | | |
-| Australia Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Australia Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Australia Central 2* | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Australia Southeast | (v3/v4/v5 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Brazil South | :heavy_check_mark: (v3 only) | :x: $ | :heavy_check_mark: | :x: |
+| Brazil Southeast* | :heavy_check_mark: (v3 only) | :x: $ | :heavy_check_mark: | :x: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
One advantage of running your workload in Azure is global reach. Azure Database
| France Central | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | France South | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Germany West Central | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Germany North* | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Israel Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: | | Italy North | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: | | Japan East | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
One advantage of running your workload in Azure is global reach. Azure Database
| North Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | North Europe | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Norway East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Norway West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
+| Norway West* | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
| Qatar Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: | | South Africa North | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| South Africa West* | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| South Central US | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | South India | :heavy_check_mark: (v3/v4/v5 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Southeast Asia | :heavy_check_mark:(v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | Sweden Central | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Sweden South* | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Switzerland North | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Switzerland West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Switzerland West* | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| UAE Central* | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: | | UAE North | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :x: | | US Gov Arizona | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
$ New Zone-redundant high availability deployments are temporarily blocked in th
$$ New server deployments are temporarily blocked in these regions. Already provisioned servers are fully supported. ** Zone-redundant high availability can now be deployed when you provision new servers in these regions. Any existing servers deployed in AZ with *no preference* (which you can check on the Azure portal) before the region started to support AZ, even when you enable zone-redundant HA, the standby is provisioned in the same AZ (same-zone HA) as the primary server. To enable zone-redundant high availability, [follow the steps](how-to-manage-high-availability-portal.md#enabling-zone-redundant-ha-after-the-region-supports-az).+ (*) Certain regions are access-restricted to support specific customer scenarios, such as in-country/region disaster recovery. These regions are available only upon request by creating a new support request. <!-- We continue to add more regions for flexible servers. -->
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
The following table lists the different tools available for performing the migra
The next section of the document gives an overview of the Single to Flex Migration tool, its implementation, limitations, and the experience that makes it the recommended tool to perform migrations from single to flexible server. > [!NOTE]
-> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Note these important points for the command response:
- The migration moves to the `Succeeded` state as soon as the `Migrating Data` substate finishes successfully. If there's a problem at the `Migrating Data` substate, the migration moves into a `Failed` state. > [!NOTE]
-> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
The first tab is **Setup**. Just in case you missed it, allowlist necessary exte
It's always a good practice to choose **Validate** or **Validate and Migrate** option to perform pre-migration validations before running the migration. To learn more about the pre-migration validation refer to this [documentation](./concepts-single-to-flexible.md#pre-migration-validations).
-**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. Support for **Online** migrations is currently available in UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
sap Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/register-existing-system.md
# Register existing SAP system
-In this how-to guide, you'll learn how to register an existing SAP system with *Azure Center for SAP solutions*. After you register an SAP system with Azure Center for SAP solutions, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
+In this how-to guide, you learn how to register an existing SAP system with *Azure Center for SAP solutions*. After you register an SAP system with Azure Center for SAP solutions, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
- View and track the SAP system as an Azure resource, called the *Virtual Instance for SAP solutions (VIS)*. - Get recommendations for your SAP infrastructure, Operating System configurations etc. based on quality checks that evaluate best practices for SAP on Azure.
In this how-to guide, you'll learn how to register an existing SAP system with *
When you register a system with Azure Center for SAP solutions, the following resources are created in your Subscription: - Virtual Instance for SAP solutions, Central service instance for SAP solutions, App server instance for SAP solutions and Database for SAP solutions. These resource types are created to represent the SAP system on Azure. These resources do not have any billing or cost associated with them.-- A managed resource group which is used by Azure Center for SAP solutions service.-- A Storage account within the managed resource group which contains blobs that have scripts and logs necessary for the service to provide the various capabilities including discovering and registering all components of SAP system.
+- A managed resource group that is used by Azure Center for SAP solutions service.
+- A Storage account within the managed resource group that contains blobs. These blobs are the scripts and logs necessary for the service to provide various capabilities, including discovering and registering all components of the SAP system.
> [!NOTE]
-> You can customize the names of the Managed resource group and the Storage account which get deployed as part of the registration process by using Azure Portal, [Azure PowerShell](quickstart-register-system-powershell.md) or [Azure CLI](quickstart-register-system-cli.md) interfaces, when you register your systems.
+> You can customize the names of the **Managed resource group** and the **Storage account** which get deployed as part of the registration process by using **Azure Portal**, [Azure PowerShell](quickstart-register-system-powershell.md) or [Azure CLI](quickstart-register-system-cli.md) interfaces, when you register your systems.
> [!NOTE]
-> You can now enable secure access to the ACSS managed storage account from specific virtual networks using the [new option in the registration experience](#managed-storage-account-network-access-settings).
+> You can now enable **secure access** from specific virtual networks to the ACSS **managed storage account** using the [new option in the registration experience](#managed-storage-account-network-access-settings).
## Prerequisites
When you register a system with Azure Center for SAP solutions, the following re
- File path on Linux VMs: /usr/sap/hostctrl/exe - File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\ - Make sure the **sapstartsrv** process is running on all **SAP instances** and for **SAP hostctrl agent** on all the VMs in the SAP system.
- - To start hostctrl sapstartsrv use this command for Linux VMs: 'hostexecstart -start'
- - To start instance sapstartsrv use the command: 'sapcontrol -nr 'instanceNr' -function StartService S0S'
+ - To start hostctrl sapstartsrv, use this command for Linux VMs: 'hostexecstart -start'
+ - To start instance sapstartsrv, use the command: 'sapcontrol -nr 'instanceNr' -function StartService S0S'
- To check status of hostctrl sapstartsrv, use this command for Windows VMs: C:\Program Files\SAP\hostctrl\exe\saphostexec -status - For successful discovery and registration of the SAP system, ensure there is network connectivity between ASCS, App and DB VMs. 'ping' command for App instance hostname must be successful from ASCS VM. 'ping' for Database hostname must be successful from App server VM. - On App server profile, SAPDBHOST, DBTYPE, DBID parameters must have the right values configured for the discovery and registration of Database instance details.
The following SAP system configurations aren't supported in Azure Center for SAP
## Enable resource permissions
-When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Compute (VMs, Disks, Load balancers) resource group and **Reader** role access on the Virtual Network resource group of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
+When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** that has **Azure Center for SAP solutions service role** access on the Compute (VMs, Disks, Load balancers) resource group and **Reader** role access on the Virtual Network resource group of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
Azure Center for SAP solutions uses this user-assigned managed identity to install VM extensions on the ASCS, Application Server and DB VMs. This step allows Azure Center for SAP solutions to discover the SAP system components, and other SAP system metadata. User-assigned managed identity is required to enable SAP system monitoring and management capabilities.
Azure Center for SAP solutions uses this user-assigned managed identity to insta
To provide permissions to the SAP system resources to a user-assigned managed identity: 1. [Create a new user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) if needed or use an existing one.
-1. [Assign **Azure Center for SAP solutions service role**](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) role access to the user-assigned managed identity on the resource group(s) which have the Virtual Machines, Disks and Load Balancers of the SAP system and **Reader** role on the resource group(s) which have the Virtual Network components of the SAP system.
+1. [Assign **Azure Center for SAP solutions service role**](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) access to the user-assigned managed identity on the resource group(s) that have the Virtual Machines, Disks and Load Balancers of the SAP system, and **Reader** role access on the resource group(s) that have the Virtual Network components of the SAP system.
1. Once the permissions are assigned, this managed identity can be used in Azure Center for SAP solutions to register and manage SAP systems. ## Managed storage account network access settings
-ACSS deploys a **managed storage account** into your subscription, for each SAP system being registered. When you register your SAP system using Azure Portal, PowerShell or REST API, you have the option to choose **network access** setting for the storage account.
+ACSS deploys a **managed storage account** into your subscription for each SAP system being registered. When you register your SAP system using the Azure Portal, PowerShell or REST API, you have the option to choose the **network access** setting for the storage account. You can choose either public network access or access from specific virtual networks.
To secure the managed storage account and limit access to only the virtual network that has your SAP virtual machines, you can choose the network access setting as **Enable access from specific Virtual Networks**. You can learn more about storage account network security in [this documentation](../../storage/common/storage-network-security.md).
To register an existing SAP system in Azure Center for SAP solutions:
1. For **Environment**, select the environment type from the drop-down menu. For example, production or non-production environments. 1. For **Managed identity source**, select **Use existing user-assigned managed identity** option. 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Azure Center for SAP solutions service role** and **Reader** role access to the [respective resources of this SAP system.](#enable-resource-permissions)
+ 1. For **Managed resource group name**, optionally enter a resource group name as per your organization's naming policies. This resource group is managed by the ACSS service.
+ 1. For **Managed storage account name**, optionally enter a storage account name as per your organization's naming policies. This storage account is managed by the ACSS service.
+ 1. For **Storage account network access**, select **Enable access from specific virtual network** for enhanced network security for the managed storage account.
1. Select **Review + register** to discover the SAP system and begin the registration process. :::image type="content" source="media/register-existing-system/registration-page.png" alt-text="Screenshot of Azure Center for SAP solutions registration page, highlighting mandatory fields to identify the existing SAP system." lightbox="media/register-existing-system/registration-page.png":::
sap Dbms Guide Ha Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ha-ibm.md
Previously updated : 12/06/2022 Last updated : 01/18/2024 -- + # High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker
-IBM Db2 for Linux, UNIX, and Windows (LUW) in [high availability and disaster recovery (HADR) configuration](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) consists of one node that runs a primary database instance and at least one node that runs a secondary database instance. Changes to the primary database instance are replicated to a secondary database instance synchronously or asynchronously, depending on your configuration.
+IBM Db2 for Linux, UNIX, and Windows (LUW) in [high availability and disaster recovery (HADR) configuration](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) consists of one node that runs a primary database instance and at least one node that runs a secondary database instance. Changes to the primary database instance are replicated to a secondary database instance synchronously or asynchronously, depending on your configuration.
> [!NOTE] > This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
-
-This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
-The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP software installation. To help you accomplish these tasks, we provide references to SAP and IBM installation manuals. This article focuses on parts that are specific to the Azure environment.
+This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
+
+The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP software installation. To help you accomplish these tasks, we provide references to SAP and IBM installation manuals. This article focuses on parts that are specific to the Azure environment.
The supported IBM Db2 versions are 10.5 and later, as documented in SAP note [1928533].
Before you begin an installation, see the following SAP notes and documentation:
| [2233094] | DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows - additional information | | [1612105] | DB6: FAQ on Db2 with HADR | -
-| Documentation |
+| Documentation |
| | | [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes): Has all of the required SAP Notes for Linux | | [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide] guide |
Before you begin an installation, see the following SAP notes and documentation:
| [IBM Db2 HADR R 10.5][db2-hadr-10.5] | ## Overview
-To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are deployed in an [virtual machine scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with flexible orchestration across [availability zones](./high-availability-zones.md) or in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md).
+
+To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are deployed in a [virtual machine scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with flexible orchestration across [availability zones](./high-availability-zones.md) or in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md).
The following graphics display a setup of two database server Azure VMs. Both database server Azure VMs have their own storage attached and are up and running. In HADR, one database instance in one of the Azure VMs has the role of the primary instance. All clients are connected to this primary instance. All changes in database transactions are persisted locally in the Db2 transaction log. As the transaction log records are persisted locally, the records are transferred via TCP/IP to the database instance on the second database server, the standby server, or standby instance. The standby instance updates the local database by rolling forward the transferred transaction log records. In this way, the standby server is kept in sync with the primary server.
HADR is only a replication functionality. It has no failure detection and no aut
![IBM Db2 high availability overview](./media/dbms-guide-ha-ibm/ha-db2-hadr-lb.png)
-To have SAP application servers connect to primary database, you need a virtual host name and a virtual IP address. In the event of a failover, the SAP application servers will connect to new primary database instance. In an Azure environment, an [Azure load balancer](https://microsoft.sharepoint.com/teams/WAG/AzureNetworking/Wiki/Load%20Balancing.aspx) is required to use a virtual IP address in the way that's required for HADR of IBM Db2.
+To have SAP application servers connect to the primary database, you need a virtual host name and a virtual IP address. After a failover, the SAP application servers connect to the new primary database instance. In an Azure environment, an [Azure load balancer](https://microsoft.sharepoint.com/teams/WAG/AzureNetworking/Wiki/Load%20Balancing.aspx) is required to use a virtual IP address in the way that's required for HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a highly available SAP system setup, the following image presents an overview of a highly available setup of an SAP system based on IBM Db2 database. This article covers only IBM Db2, but it provides references to other articles about how to set up other components of an SAP system. ![IBM DB2 high availability full environment overview](.//media/dbms-guide-ha-ibm/end-2-end-ha.png) - ### High-level overview of the required steps
-To deploy an IBM Db2 configuration, you need to follow these steps:
-
- + Plan your environment.
- + Deploy the VMs.
- + Update SUSE Linux and configure file systems.
- + Install and configure Pacemaker.
- + Install [highly available NFS][nfs-ha].
- + Install [ASCS/ERS on a separate cluster][ascs-ha].
- + Install IBM Db2 database with Distributed/High Availability option (SWPM).
- + Install and create a secondary database node and instance, and configure HADR.
- + Confirm that HADR is working.
- + Apply the Pacemaker configuration to control IBM Db2.
- + Configure Azure Load Balancer.
- + Install primary and dialog application servers.
- + Check and adapt the configuration of SAP application servers.
- + Perform failover and takeover tests.
+To deploy an IBM Db2 configuration, you need to follow these steps:
+- Plan your environment.
+- Deploy the VMs.
+- Update SUSE Linux and configure file systems.
+- Install and configure Pacemaker.
+- Install [highly available NFS][nfs-ha].
+- Install [ASCS/ERS on a separate cluster][ascs-ha].
+- Install IBM Db2 database with Distributed/High Availability option (SWPM).
+- Install and create a secondary database node and instance, and configure HADR.
+- Confirm that HADR is working.
+- Apply the Pacemaker configuration to control IBM Db2.
+- Configure Azure Load Balancer.
+- Install primary and dialog application servers.
+- Check and adapt the configuration of SAP application servers.
+- Perform failover and takeover tests.
## Plan Azure infrastructure for hosting IBM Db2 LUW with HADR
Complete the planning process before you execute the deployment. Planning builds
| Topic | Short description | | | |
-| Define Azure resource groups | Resource groups where you deploy VM, VNet, Azure Load Balancer, and other resources. Can be existing or new. |
+| Define Azure resource groups | Resource groups where you deploy VM, virtual network, Azure Load Balancer, and other resources. Can be existing or new. |
| Virtual network / Subnet definition | Where VMs for IBM Db2 and Azure Load Balancer are being deployed. Can be existing or newly created. |
| Virtual machines hosting IBM Db2 LUW | VM size, storage, networking, IP address. |
| Virtual host name and virtual IP for IBM Db2 database | The virtual IP or host name that's used for connection of SAP application servers. **db-virt-hostname**, **db-virt-ip**. |
Complete the planning process before you execute the deployment. Planning builds
| SBD VM | SBD virtual machine size, storage, network. |
| Azure Load Balancer | Usage of Standard (recommended), probe port for Db2 database (our recommendation 62500) **probe-port**. |
| Name resolution | How name resolution works in the environment. DNS service is highly recommended. Local hosts file can be used. |
-
+ For more information about Linux Pacemaker in Azure, see [Set up Pacemaker on SUSE Linux Enterprise Server in Azure](./high-availability-guide-suse-pacemaker.md).
->[!IMPORTANT]
->For Db2 versions 11.5.6 and higher we highly recommend Integrated solution using Pacemaker from IBM. \
->* [Integrated solution using Pacemaker](https://www.ibm.com/docs/en/db2/11.5?topic=feature-integrated-solution-using-pacemaker) \
->* [Alternate or additional configurations available on Microsoft Azure](https://www.ibm.com/support/pages/alternate-or-additional-configurations-available-microsoft-azure)
+> [!IMPORTANT]
+> For Db2 versions 11.5.6 and higher, we highly recommend the integrated solution using Pacemaker from IBM.
+>
+> - [Integrated solution using Pacemaker](https://www.ibm.com/docs/en/db2/11.5?topic=feature-integrated-solution-using-pacemaker).
+> - [Alternate or additional configurations available on Microsoft Azure](https://www.ibm.com/support/pages/alternate-or-additional-configurations-available-microsoft-azure).
## Deployment on SUSE Linux
-The resource agent for IBM Db2 LUW is included in SUSE Linux Enterprise Server for SAP Applications. For the setup that's described in this document, you must use SUSE Linux Server for SAP Applications. The Azure Marketplace contains an image for SUSE Enterprise Server for SAP Applications 12 that you can use to deploy new Azure virtual machines. Be aware of the various support or service models that are offered by SUSE through the Azure Marketplace when you choose a VM image in the Azure VM Marketplace.
+The resource agent for IBM Db2 LUW is included in SUSE Linux Enterprise Server for SAP Applications. For the setup that's described in this document, you must use SUSE Linux Server for SAP Applications. The Azure Marketplace contains an image for SUSE Enterprise Server for SAP Applications 12 that you can use to deploy new Azure virtual machines. Be aware of the various support or service models that are offered by SUSE through the Azure Marketplace when you choose a VM image in the Azure VM Marketplace.
### Hosts: DNS updates
-Make a list of all host names, including virtual host names, and update your DNS servers to enable proper IP address to host-name resolution. If a DNS server doesn't exist or you can't update and create DNS entries, you need to use the local host files of the individual VMs that are participating in this scenario. If you're using host files entries, make sure that the entries are applied to all VMs in the SAP system environment. However, we recommend that you use your DNS that, ideally, extends into Azure
+Make a list of all host names, including virtual host names, and update your DNS servers to enable proper IP address to host-name resolution. If a DNS server doesn't exist or you can't update and create DNS entries, you need to use the local host files of the individual VMs that are participating in this scenario. If you're using host file entries, make sure that the entries are applied to all VMs in the SAP system environment. However, we recommend that you use a DNS service that, ideally, extends into Azure.
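If you do have to rely on local host file entries, a sketch like the following would need to be applied identically on every VM in the SAP system environment. The IP addresses of the two nodes are placeholders; only the node names *azibmdb01*/*azibmdb02* and the virtual IP 10.100.0.10 correspond to example values used later in this article.

```bash
# Example /etc/hosts entries (placeholder IP addresses) - keep them identical on all VMs
10.100.0.5    azibmdb01
10.100.0.6    azibmdb02
10.100.0.10   db-virt-hostname
```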
### Manual deployment

Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list of supported OS versions for Azure VMs and Db2 releases is available in SAP note [1928533]. The list of OS releases by individual Db2 release is available in the SAP Product Availability Matrix. We highly recommend a minimum of SLES 12 SP4 because of Azure-related performance improvements in this or later SUSE Linux versions.

1. Create or select a resource group.
-1. Create or select a virtual network and subnet.
-1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration.
-1. Create Virtual Machine 1.
- + Use SLES for SAP image in the Azure Marketplace.
- + Select the scale set, availability zone or availability set created in step 3.
-1. Create Virtual Machine 2.
- + Use SLES for SAP image in the Azure Marketplace.
- + Select the scale set, availability zone or availability set created in step 3 (not the same zone as in step 4).
-1. Add data disks to the VMs, and then check the recommendation of a file system setup in the article [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload][dbms-db2].
-
-## Create the Pacemaker cluster
-
-To create a basic Pacemaker cluster for this IBM Db2 server, see [Set up Pacemaker on SUSE Linux Enterprise Server in Azure][sles-pacemaker].
+2. Create or select a virtual network and subnet.
+3. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration.
+4. Create Virtual Machine 1.
+ 1. Use SLES for SAP image in the Azure Marketplace.
+ 2. Select the scale set, availability zone or availability set created in step 3.
+5. Create Virtual Machine 2.
+ 1. Use SLES for SAP image in the Azure Marketplace.
+ 2. Select the scale set, availability zone or availability set created in step 3 (not the same zone as in step 4).
+6. Add data disks to the VMs, and then check the recommendation of a file system setup in the article [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload][dbms-db2].
## Install the IBM Db2 LUW and SAP environment

Before you start the installation of an SAP environment based on IBM Db2 LUW, review the following documentation:
-+ Azure documentation
-+ SAP documentation
-+ IBM documentation
+- Azure documentation
+- SAP documentation
+- IBM documentation
Links to this documentation are provided in the introductory section of this article.
To set up the primary IBM Db2 LUW database instance:
- Install the SAP ASCS/ERS and Database instance.
- Take a backup of the newly installed database.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Write down the "Database Communication port" that's set during installation. It must be the same port number for both database instances.
->![SAP SWPM Port Definition](./media/high-availability-guide-rhel-ibm-db2-luw/hadr-swpm-db2-port.png)
+> ![SAP SWPM Port Definition](./media/high-availability-guide-rhel-ibm-db2-luw/hadr-swpm-db2-port.png)
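As a quick way to verify the configured port after the installation, you can query the database manager configuration. This is a hedged sketch that assumes the instance user is db2ptr, matching the example SID **PTR** used later in this article:

```bash
# Show the TCP/IP service name or port number that the Db2 instance listens on
su - db2ptr -c "db2 get dbm cfg" | grep -i SVCENAME
```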
To set up the Standby database server by using the SAP homogeneous system copy procedure, execute these steps:
To set up the Standby database server by using the SAP homogeneous system copy p
> [!NOTE]
> For installation and configuration that's specific to Azure and Pacemaker: During the installation procedure through SAP Software Provisioning Manager, there is an explicit question about high availability for IBM Db2 LUW:
- >+ Do not select **IBM Db2 pureScale**.
- >+ Do not select **Install IBM Tivoli System Automation for Multiplatforms**.
- >+ Do not select **Generate cluster configuration files**.
+ > - Do not select **IBM Db2 pureScale**.
+ > - Do not select **Install IBM Tivoli System Automation for Multiplatforms**.
+ > - Do not select **Generate cluster configuration files**.
+
+When you use an SBD device for Linux Pacemaker, set the following Db2 HADR parameters:
- When you use an SBD device for Linux Pacemaker, set the following Db2 HADR parameters:
- + HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 300
- + HADR timeout value (HADR_TIMEOUT) = 60
+- HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 300
+- HADR timeout value (HADR_TIMEOUT) = 60
- When you use an Azure Pacemaker fencing agent, set the following parameters:
- + HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 900
- + HADR timeout value (HADR_TIMEOUT) = 60
+When you use an Azure Pacemaker fencing agent, set the following parameters:
-We recommend the preceding parameters based on initial failover/takeover testing. It is mandatory that you test for proper functionality of failover and takeover with these parameter settings. Because individual configurations can vary, the parameters might require adjustment.
+- HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 900
+- HADR timeout value (HADR_TIMEOUT) = 60
+
+We recommend the preceding parameters based on initial failover/takeover testing. It's mandatory that you test for proper functionality of failover and takeover with these parameter settings. Because individual configurations can vary, the parameters might require adjustment.
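As an illustration only, the parameters could be set with the Db2 command line processor as the db2\<sid> instance user. The sketch below assumes the example database SID **PTR** and the SBD-based values; substitute 900 for HADR_PEER_WINDOW when you use the Azure fencing agent:

```bash
# Set the HADR parameters on the PTR database (run as the db2<sid> instance user)
db2 "UPDATE DB CFG FOR PTR USING HADR_PEER_WINDOW 300 HADR_TIMEOUT 60"

# Verify the resulting values
db2 "GET DB CFG FOR PTR" | grep -E "HADR_PEER_WINDOW|HADR_TIMEOUT"
```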
> [!IMPORTANT] > Specific to IBM Db2 with HADR configuration with normal startup: The secondary or standby database instance must be up and running before you can start the primary database instance.
We recommend the preceding parameters based on initial failover/takeover testing
For demonstration purposes and the procedures described in this article, the database SID is **PTR**.

#### IBM Db2 HADR check

After you've configured HADR and the status is PEER and CONNECTED on the primary and standby nodes, perform the following check:
-<pre><code>
-Execute command as db2&lt;sid&gt; db2pd -hadr -db &lt;SID&gt;
+```bash
+# Execute the following command as user db2<sid>
+db2pd -hadr -db <SID>
#Primary output: # Database Member 0 -- Database PTR -- Active -- Up 1 days 01:51:38 -- Date 2019-02-06-15.35.28.505451 #
-# <b>HADR_ROLE = PRIMARY
+# HADR_ROLE = PRIMARY
# REPLAY_TYPE = PHYSICAL # HADR_SYNCMODE = NEARSYNC # STANDBY_ID = 1
Execute command as db2<sid> db2pd -hadr -db <SID>
# STANDBY_MEMBER_HOST = azibmdb01 # STANDBY_INSTANCE = db2ptr # STANDBY_MEMBER = 0
-# HADR_CONNECT_STATUS = CONNECTED</b>
+# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.170561 (1549374707) # HEARTBEAT_INTERVAL(seconds) = 15 # HEARTBEAT_MISSED = 0
Execute command as db2<sid> db2pd -hadr -db <SID>
#Secondary output: # Database Member 0 -- Database PTR -- Standby -- Up 1 days 01:46:43 -- Date 2019-02-06-15.38.25.644168 #
-# <b>HADR_ROLE = STANDBY
+# HADR_ROLE = STANDBY
# REPLAY_TYPE = PHYSICAL # HADR_SYNCMODE = NEARSYNC # STANDBY_ID = 0
Execute command as db2<sid> db2pd -hadr -db <SID>
# STANDBY_MEMBER_HOST = azibmdb01 # STANDBY_INSTANCE = db2ptr # STANDBY_MEMBER = 0
-# HADR_CONNECT_STATUS = CONNECTED</b>
+# HADR_CONNECT_STATUS = CONNECTED
# HADR_CONNECT_STATUS_TIME = 02/05/2019 13:51:47.205067 (1549374707) # HEARTBEAT_INTERVAL(seconds) = 15 # HEARTBEAT_MISSED = 0
Execute command as db2<sid> db2pd -hadr -db <SID>
# PEER_WINDOW(seconds) = 300 # PEER_WINDOW_END = 02/06/2019 15:43:19.000000 (1549467799) # READS_ON_STANDBY_ENABLED = N
-</code></pre>
+```
+### Configure Azure Load Balancer
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to set up a standard load balancer for the high-availability setup of the DB2 database.
-## Db2 Pacemaker configuration
+#### [Azure portal](#tab/lb-portal)
-When you use Pacemaker for automatic failover in the event of a node failure, you need to configure your Db2 instances and Pacemaker accordingly. This section describes this type of configuration.
-The following items are prefixed with either:
+#### [Azure CLI](#tab/lb-azurecli)
-- **[A]**: Applicable to all nodes-- **[1]**: Applicable only to node 1 -- **[2]**: Applicable only to node 2
-**[A]** Prerequisites for Pacemaker configuration:
-1. Shut down both database servers with user db2\<sid> with db2stop.
-1. Change the shell environment for db2\<sid> user to */bin/ksh*. We recommend that you use the Yast tool.
+#### [PowerShell](#tab/lb-powershell)
-### Pacemaker configuration
+ > [!IMPORTANT]
-> Recent testing revealed situations, where netcat stops responding to requests due to backlog and its limitation of handling only one connection. The netcat resource stops listening to the Azure Load balancer requests and the floating IP becomes unavailable.
-> For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we recommend using azure-lb resource agent, which is part of package resource-agents, with the following package version requirements:
-> - For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
-> - For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
->
-> Note that the change will require brief downtime.
-> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediately to azure-lb resource agent.
-
-**[1]** IBM Db2 HADR-specific Pacemaker configuration:
-<pre><code># Put Pacemaker into maintenance mode
-sudo crm configure property maintenance-mode=true
-</code></pre>
-
-**[1]** Create IBM Db2 resources:
-<pre><code># Replace **bold strings** with your instance name db2sid, database SID, and virtual IP address/Azure Load Balancer.
-
-sudo crm configure primitive rsc_Db2_db2ptr_<b>PTR</b> db2 \
- params instance="<b>db2ptr</b>" dblist="<b>PTR</b>" \
- op start interval="0" timeout="130" \
- op stop interval="0" timeout="120" \
- op promote interval="0" timeout="120" \
- op demote interval="0" timeout="120" \
- op monitor interval="30" timeout="60" \
- op monitor interval="31" role="Master" timeout="60"
-
-# Configure virtual IP - same as Azure Load Balancer IP
-sudo crm configure primitive rsc_ip_db2ptr_<b>PTR</b> IPaddr2 \
- op monitor interval="10s" timeout="20s" \
- params ip="<b>10.100.0.10</b>"
-
-# Configure probe port for Azure load Balancer
-sudo crm configure primitive rsc_nc_db2ptr_<b>PTR</b> azure-lb port=<b>62500</b> \
- op monitor timeout=20s interval=10
-
-sudo crm configure group g_ip_db2ptr_<b>PTR</b> rsc_ip_db2ptr_<b>PTR</b> rsc_nc_db2ptr_<b>PTR</b>
+> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
-sudo crm configure ms msl_Db2_db2ptr_<b>PTR</b> rsc_Db2_db2ptr_<b>PTR</b> \
- meta target-role="Started" notify="true"
-
-sudo crm configure colocation col_db2_db2ptr_<b>PTR</b> inf: g_ip_db2ptr_<b>PTR</b>:Started msl_Db2_db2ptr_<b>PTR</b>:Master
-
-sudo crm configure order ord_db2_ip_db2ptr_<b>PTR</b> inf: msl_Db2_db2ptr_<b>PTR</b>:promote g_ip_db2ptr_<b>PTR</b>:start
-
-sudo crm configure rsc_defaults resource-stickiness=1000
-sudo crm configure rsc_defaults migration-threshold=5000
-</code></pre>
-
-**[1]** Start IBM Db2 resources:
-* Put Pacemaker out of maintenance mode.
-<pre><code># Put Pacemaker out of maintenance-mode - that start IBM Db2
-sudo crm configure property maintenance-mode=false</pre></code>
-
-**[1]** Make sure that the cluster status is OK and that all of the resources are started. It's not important which node the resources are running on.
-<pre><code>sudo crm status</code>
-
-# 2 nodes configured
-# 5 resources configured
-
-# Online: [ azibmdb01 azibmdb02 ]
-
-# Full list of resources:
-
-# stonith-sbd (stonith:external/sbd): Started azibmdb02
-# Resource Group: g_ip_db2ptr_PTR
-# rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
-# rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
-# Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
-# Masters: [ azibmdb02 ]
-# Slaves: [ azibmdb01 ]
-</pre>
+> [!NOTE]
+> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) instance of Standard Azure Load Balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT]
-> You must manage the Pacemaker clustered Db2 instance by using Pacemaker tools. If you use db2 commands such as db2stop, Pacemaker detects the action as a failure of resource. If you're performing maintenance, you can put the nodes or resources in maintenance mode. Pacemaker suspends monitoring resources, and you can then use normal db2 administration commands.
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps could cause the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
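For the Azure CLI option, a configuration along the lines of the following sketch is typically used. All resource and group names (MyResourceGroup, db2-ilb, and so on) are placeholders; only the probe port 62500 and the front-end IP 10.100.0.10 correspond to example values used elsewhere in this article.

```bash
# Internal Standard load balancer with a static front-end IP (placeholder names)
az network lb create --resource-group MyResourceGroup --name db2-ilb --sku Standard \
  --vnet-name MyVnet --subnet MySubnet \
  --frontend-ip-name Db2-connection --private-ip-address 10.100.0.10 \
  --backend-pool-name Db2-backend

# Health probe on the port that the Pacemaker azure-lb resource listens on
az network lb probe create --resource-group MyResourceGroup --lb-name db2-ilb \
  --name Db2-hp --protocol tcp --port 62500

# HA-ports load-balancing rule with floating IP and a 30-minute idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name db2-ilb \
  --name Db2-SID --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name Db2-connection --backend-pool-name Db2-backend \
  --probe-name Db2-hp --floating-ip true --idle-timeout 30
```

The NICs of both cluster VMs still have to be added to the back-end pool, for example with `az network nic ip-config address-pool add`, before traffic is distributed.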
+## Create the Pacemaker cluster
-### Configure Azure Load Balancer
-To configure Azure Load Balancer, we recommend that you use the [Azure Standard Load Balancer SKU](../../load-balancer/load-balancer-overview.md) and then do the following;
+To create a basic Pacemaker cluster for this IBM Db2 server, see [Set up Pacemaker on SUSE Linux Enterprise Server in Azure][sles-pacemaker].
-> [!NOTE]
-> The Standard Load Balancer SKU has restrictions accessing public IP addresses from the nodes underneath the Load Balancer. The article [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md) is describing ways on how to enable those nodes to access public IP addresses
-
-> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+## Db2 Pacemaker configuration
-1. Create a front-end IP pool:
+When you use Pacemaker for automatic failover in the event of a node failure, you need to configure your Db2 instances and Pacemaker accordingly. This section describes this type of configuration.
- a. In the Azure portal, open the Azure Load Balancer, select **frontend IP pool**, and then select **Add**.
+The following items are prefixed with either:
- b. Enter the name of the new front-end IP pool (for example, **Db2-connection**).
+- **[A]**: Applicable to all nodes
+- **[1]**: Applicable only to node 1
+- **[2]**: Applicable only to node 2
- c. Set the **Assignment** to **Static**, and enter the IP address **Virtual-IP** defined at the beginning.
+**[A]** Prerequisites for Pacemaker configuration:
- d. Select **OK**.
+- Shut down both database servers with user db2\<sid> with db2stop.
+- Change the shell environment for db2\<sid> user to */bin/ksh*. We recommend that you use the Yast tool.
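A minimal sketch of these two prerequisites, run as root on both nodes; the instance user name db2ptr matches the example SID **PTR**, and `chsh` is shown only as a command-line alternative to the YaST tool:

```bash
# Stop Db2 as the instance user on both database servers
su - db2ptr -c "db2stop"

# Change the login shell of the db2<sid> user to ksh
chsh -s /bin/ksh db2ptr
```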
- e. After the new front-end IP pool is created, note the pool IP address.
+### Pacemaker configuration
-1. Create a single back-end pool:
+> [!IMPORTANT]
+> Recent testing revealed situations where netcat stops responding to requests due to a backlog and its limitation of handling only one connection. The netcat resource stops listening to the Azure Load Balancer requests and the floating IP becomes unavailable.
+> For existing Pacemaker clusters, we recommended in the past replacing netcat with socat. Currently we recommend using the azure-lb resource agent, which is part of the package resource-agents, with the following package version requirements:
+>
+> - For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
+> - For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
+>
+> Note that the change will require brief downtime.
+> For existing Pacemaker clusters, if the configuration was already changed to use socat as described in [Azure Load-Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), there is no requirement to switch immediately to azure-lb resource agent.
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
+1. **[1]** IBM Db2 HADR-specific Pacemaker configuration:
+
+ ```bash
+ # Put Pacemaker into maintenance mode
+ sudo crm configure property maintenance-mode=true
+ ```
+
+2. **[1]** Create IBM Db2 resources:
+
+ ```bash
+    # Replace the placeholder values db2ptr (instance name), PTR (database SID), 10.100.0.10 (virtual IP of the Azure Load Balancer), and 62500 (probe port) with your own values.
+ sudo crm configure primitive rsc_Db2_db2ptr_PTR db2 \
+ params instance="db2ptr" dblist="PTR" \
+ op start interval="0" timeout="130" \
+ op stop interval="0" timeout="120" \
+ op promote interval="0" timeout="120" \
+ op demote interval="0" timeout="120" \
+ op monitor interval="30" timeout="60" \
+ op monitor interval="31" role="Master" timeout="60"
- 1. Enter the name of the new back-end pool (for example, **Db2-backend**).
+ # Configure virtual IP - same as Azure Load Balancer IP
+ sudo crm configure primitive rsc_ip_db2ptr_PTR IPaddr2 \
+ op monitor interval="10s" timeout="20s" \
+ params ip="10.100.0.10"
- 2. Select **NIC** for Backend Pool Configuration.
-
- 1. Select **Add a virtual machine**.
+ # Configure probe port for Azure load Balancer
+ sudo crm configure primitive rsc_nc_db2ptr_PTR azure-lb port=62500 \
+ op monitor timeout=20s interval=10
- 1. Select the virtual machines of the cluster.
+ sudo crm configure group g_ip_db2ptr_PTR rsc_ip_db2ptr_PTR rsc_nc_db2ptr_PTR
- 1. Select **Add**.
-
- 2. Select **Save**.
-
-1. Create a health probe:
-
- a. In the Azure portal, open the Azure Load Balancer, select **health probes**, and select **Add**.
-
- b. Enter the name of the new health probe (for example, **Db2-hp**).
-
- c. Select **TCP** as the protocol and port **62500**. Keep the **Interval** value set to **5**.
-
- d. Select **OK**.
-
-1. Create the load-balancing rules:
-
- a. In the Azure portal, open the Azure Load Balancer, select **Load balancing rules**, and then select **Add**.
-
- b. Enter the name of the new Load Balancer rule (for example, **Db2-SID**).
+ sudo crm configure ms msl_Db2_db2ptr_PTR rsc_Db2_db2ptr_PTR \
+ meta target-role="Started" notify="true"
+
+ sudo crm configure colocation col_db2_db2ptr_PTR inf: g_ip_db2ptr_PTR:Started msl_Db2_db2ptr_PTR:Master
+
+ sudo crm configure order ord_db2_ip_db2ptr_PTR inf: msl_Db2_db2ptr_PTR:promote g_ip_db2ptr_PTR:start
+
+ sudo crm configure rsc_defaults resource-stickiness=1000
+ sudo crm configure rsc_defaults migration-threshold=5000
+ ```
- c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **Db2-frontend**).
+3. **[1]** Start IBM Db2 resources:
- d. Keep the **Protocol** set to **TCP**, and enter port *Database Communication port*.
+ Put Pacemaker out of maintenance mode.
- e. Increase the **idle timeout** to 30 minutes.
+ ```bash
+ # Put Pacemaker out of maintenance-mode - that start IBM Db2
+ sudo crm configure property maintenance-mode=false
+ ```
- f. Make sure to **enable Floating IP**.
+4. **[1]** Make sure that the cluster status is OK and that all of the resources are started. It's not important which node the resources are running on.
- g. Select **OK**.
+ ```bash
+ sudo crm status
+
+ # 2 nodes configured
+ # 5 resources configured
+
+ # Online: [ azibmdb01 azibmdb02 ]
+
+ # Full list of resources:
+
+ # stonith-sbd (stonith:external/sbd): Started azibmdb02
+ # Resource Group: g_ip_db2ptr_PTR
+ # rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02
+ # rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
+ # Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+ # Masters: [ azibmdb02 ]
+ # Slaves: [ azibmdb01 ]
+ ```
+> [!IMPORTANT]
+> You must manage the Pacemaker clustered Db2 instance by using Pacemaker tools. If you use db2 commands such as db2stop, Pacemaker detects the action as a failure of resource. If you're performing maintenance, you can put the nodes or resources in maintenance mode. Pacemaker suspends monitoring resources, and you can then use normal db2 administration commands.
### Make changes to SAP profiles to use virtual IP for connection

To connect to the primary instance of the HADR configuration, the SAP application layer needs to use the virtual IP address that you defined and configured for the Azure Load Balancer. The following changes are required:

/sapmnt/\<SID>/profile/DEFAULT.PFL
-<pre><code>SAPDBHOST = db-virt-hostname
+
+```bash
+SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname
-</code></pre>
+```
/sapmnt/\<SID>/global/db6/db2cli.ini
-<pre><code>Hostname=db-virt-hostname
-</code></pre>
-
+```bash
+Hostname=db-virt-hostname
+```
## Install primary and dialog application servers
-When you install primary and dialog application servers against an Db2 HADR configuration, use the virtual host name that you picked for the configuration.
+When installing primary and dialog application servers against a Db2 HADR configuration, use the virtual host name that you picked for the configuration.
If you performed the installation before you created the Db2 HADR configuration, make the changes as described in the preceding section and as follows for SAP Java stacks.

### ABAP+Java or Java stack systems JDBC URL check

Use the J2EE Config tool to check or update the JDBC URL. Because the J2EE Config tool is a graphical tool, you need to have X server installed:
-
+ 1. Sign in to the primary application server of the J2EE instance and execute:
- `sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh`
-1. In the left frame, choose **security store**.
-1. In the right frame, choose the key jdbc/pool/\<SAPSID>/url.
-1. Change the host name in the JDBC URL to the virtual host name.
- `jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0`
-1. Select **Add**.
-1. To save your changes, select the disk icon at the upper left.
-1. Close the configuration tool.
-1. Restart the Java instance.
+
+ ```bash
+ sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh
+ ```
+
+2. In the left frame, choose **security store**.
+3. In the right frame, choose the key jdbc/pool/\<SAPSID>/url.
+4. Change the host name in the JDBC URL to the virtual host name.
+
+ ```TEXT
+ jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0
+ ```
+
+5. Select **Add**.
+6. To save your changes, select the disk icon at the upper left.
+7. Close the configuration tool.
+8. Restart the Java instance.
## Configure log archiving for HADR setup
-To configure the Db2 log archiving for HADR setup, we recommend that you configure both the primary and the standby database to have automatic log retrieval capability from all log archive locations. Both the primary and standby database must be able to retrieve log archive files from all the log archive locations to which either one of the database instances might archive log files.
+
+To configure the Db2 log archiving for HADR setup, we recommend that you configure both the primary and the standby database to have automatic log retrieval capability from all log archive locations. Both the primary and standby database must be able to retrieve log archive files from all the log archive locations to which either one of the database instances might archive log files.
The log archiving is performed only by the primary database. If you change the HADR roles of the database servers or if a failure occurs, the new primary database is responsible for log archiving. If you've set up multiple log archive locations, your logs might be archived twice. In the event of a local or remote catch-up, you might also have to manually copy the archived logs from the old primary server to the active log location of the new primary server.
-We recommend configuring a common NFS share where logs are written from both nodes. The NFS share has to be highly available.
+We recommend configuring a common NFS share where logs are written from both nodes. The NFS share has to be highly available.
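As a hedged illustration of such a setup, both database instances could archive to, and automatically retrieve from, the same highly available path. The directory /mnt/db2-log-archive is a placeholder, and the SID **PTR** is the example SID used in this article:

```bash
# Run as the db2<sid> instance user on both nodes, so that either node can archive and retrieve logs
db2 "UPDATE DB CFG FOR PTR USING LOGARCHMETH1 DISK:/mnt/db2-log-archive/"

# Verify the log archiving settings
db2 "GET DB CFG FOR PTR" | grep -i LOGARCHMETH
```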
You can use existing highly available NFS shares for transports or a profile directory. For more information, see: -- [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server][nfs-ha] -- [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP Applications](./high-availability-guide-suse-netapp-files.md)-- [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) (to create NFS shares)-
+- [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server][nfs-ha].
+- [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP Applications](./high-availability-guide-suse-netapp-files.md).
+- [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) (to create NFS shares).
## Test the cluster setup
-This section describes how you can test your Db2 HADR setup. *Every test assumes that you are logged in as user root* and the IBM Db2 primary is running on the *azibmdb01* virtual machine.
+This section describes how you can test your Db2 HADR setup. *Every test assumes that you're logged in as user root* and the IBM Db2 primary is running on the *azibmdb01* virtual machine.
The initial status for all test cases is explained here: (crm_mon -r or crm status) -- **crm status** is a snapshot of Pacemaker status at execution time -- **crm_mon -r** is continuous output of Pacemaker status
+- **crm status** is a snapshot of Pacemaker status at execution time.
+- **crm_mon -r** is continuous output of Pacemaker status.
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ]
Online: [ azibmdb01 azibmdb02 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb02
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): Promoting azibmdb01 Slaves: [ azibmdb02 ]
-</code></pre>
+```
The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as shown in the following image:

![DBACockpit - Pre Migration](./media/dbms-guide-ha-ibm/hadr-sap-mgr-org.png)

### Test takeover of IBM Db2
-> [!IMPORTANT]
+> [!IMPORTANT]
> Before you start the test, make sure that:
-> * Pacemaker doesn't have any failed actions (crm status).
-> * There are no location constraints (leftovers of migration test)
-> * The IBM Db2 HADR synchronization is working. Check with user db2\<sid> <pre><code>db2pd -hadr -db \<DBSID></code></pre>
-
+>
+> - Pacemaker doesn't have any failed actions (crm status).
+> - There are no location constraints (leftovers of migration test).
+> - The IBM Db2 HADR synchronization is working. Check with user db2\<sid>:
+>
+> ```bash
+> db2pd -hadr -db <DBSID>
+> ```
Migrate the node that's running the primary Db2 database by executing the following command:
-<pre><code>crm resource migrate msl_<b>Db2_db2ptr_PTR</b> azibmdb02</code></pre>
+
+```bash
+crm resource migrate msl_Db2_db2ptr_PTR azibmdb02
+```
After the migration is done, the crm status output looks like:
-<pre><code>2 nodes configured
+
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ]
Online: [ azibmdb01 azibmdb02 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb02
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ] Slaves: [ azibmdb01 ]
-</code></pre>
+```
The new status after the migration is documented in the SAP system in Transaction DBACOCKPIT > Configuration > Overview, as shown in the following image:

![DBACockpit - Post Migration](./media/dbms-guide-ha-ibm/hadr-sap-mgr-post.png)
-Resource migration with "crm resource migrate" creates location constraints. Location constraints should be deleted. If location constraints are not deleted, the resource cannot fail back or you can experience unwanted takeovers.
+Resource migration with "crm resource migrate" creates location constraints. Location constraints should be deleted. If location constraints aren't deleted, the resource can't fail back or you can experience unwanted takeovers.
Migrate the resource back to *azibmdb01* and clear the location constraints:
-<pre><code>crm resource migrate msl_<b>Db2_db2ptr_PTR</b> azibmdb01
-crm resource clear msl_<b>Db2_db2ptr_PTR</b>
-</code></pre>
+
+```bash
+crm resource migrate msl_Db2_db2ptr_PTR azibmdb01
+crm resource clear msl_Db2_db2ptr_PTR
+```
- **crm resource migrate \<res_name> \<host>**: Creates location constraints and can cause issues with takeover.
- **crm resource clear \<res_name>**: Clears location constraints.
- **crm resource cleanup \<res_name>**: Clears all errors of the resource.
-### Test the fencing agent
+### Test SBD fencing
In this case, we test SBD fencing, which we recommend that you do when you use SUSE Linux.
-<pre><code>
+```bash
azibmdb01:~ # ps -ef|grep sbd
root  2374     1  0 Feb05 ?  00:00:17 sbd: inquisitor
root  2378  2374  0 Feb05 ?  00:00:40 sbd: watcher: /dev/disk/by-id/scsi-36001405fbbaab35ee77412dacb77ae36 - slot: 0 - uuid: 27cad13a-0bce-4115-891f-43b22cfabe65
root  2379  2374  0 Feb05 ?  00:01:51 sbd: watcher: Pacemaker
root  2380  2374  0 Feb05 ?  00:00:18 sbd: watcher: Cluster

azibmdb01:~ # kill -9 2374
-</code></pre>
+```
-Cluster node *azibmdb01* should be rebooted. The IBM Db2 primary HADR role is going to be moved to *azibmdb02*. When *azibmdb01* is back online, the Db2 instance is going to move in the role of a secondary database instance.
+Cluster node *azibmdb01* should be rebooted. The IBM Db2 primary HADR role moves to *azibmdb02*. When *azibmdb01* is back online, the Db2 instance takes on the role of a secondary database instance.
If the Pacemaker service doesn't start automatically on the rebooted former primary, be sure to start it manually with:
-<pre><code>sudo service pacemaker start</code></pre>
+```bash
+sudo service pacemaker start
+```
### Test a manual takeover

You can test a manual takeover by stopping the Pacemaker service on the *azibmdb01* node:
-<pre><code>service pacemaker stop</code></pre>
+
+```bash
+service pacemaker stop
+```
Check the status on *azibmdb02*:
-<pre><code>
+
+```bash
2 nodes configured 5 resources configured
OFFLINE: [ azibmdb01 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb02
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ] Stopped: [ azibmdb01 ]
-</code></pre>
+```
After the failover, you can start the service again on *azibmdb01*.
-<pre><code>service pacemaker start</code></pre>
+```bash
+service pacemaker start
+```
### Kill the Db2 process on the node that runs the HADR primary database
-<pre><code>#Kill main db2 process - db2sysc
+```bash
+#Kill main db2 process - db2sysc
azibmdb01:~ # ps -ef|grep db2s
db2ptr   34598  34596  8 14:21 ?   00:00:07 db2sysc 0

azibmdb01:~ # kill -9 34598
-</code></pre>
+```
The Db2 instance fails, and Pacemaker reports the following status:
-<pre><code>
+```bash
2 nodes configured 5 resources configured
Online: [ azibmdb01 azibmdb02 ]
Full list of resources:
- stonith-sbd (stonith:external/sbd): Started azibmdb01
- Resource Group: g_ip_db2ptr_PTR
+stonith-sbd (stonith:external/sbd): Started azibmdb01
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Slaves: [ azibmdb02 ] Stopped: [ azibmdb01 ] Failed Actions: * rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157, status=complete, exitreason='', last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms
+```
-</code></pre>
-
-Pacemaker will restart the Db2 primary database instance on the same node, or it will fail over to the node that's running the secondary database instance and an error is reported.
+Pacemaker restarts the Db2 primary database instance on the same node, or it fails over to the node that's running the secondary database instance and an error is reported.
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ] Full list of resources:
- stonith-sbd (stonith:external/sbd): Started azibmdb01
- Resource Group: g_ip_db2ptr_PTR
+stonith-sbd (stonith:external/sbd): Started azibmdb01
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ] Slaves: [ azibmdb02 ] Failed Actions: * rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=157, status=complete, exitreason='', last-rc-change='Tue Feb 12 14:28:19 2019', queued=40ms, exec=223ms
-</code></pre>
-
+```
### Kill the Db2 process on the node that runs the secondary database instance
-<pre><code>azibmdb02:~ # ps -ef|grep db2s
+```bash
+azibmdb02:~ # ps -ef|grep db2s
db2ptr 65250 65248 0 Feb11 ? 00:09:27 db2sysc 0
-azibmdb02:~ # kill -9</code></pre>
+azibmdb02:~ # kill -9
+```
The node goes into a failed state, and an error is reported:
-<pre><code>2 nodes configured
+
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ] Full list of resources:
- stonith-sbd (stonith:external/sbd): Started azibmdb01
- Resource Group: g_ip_db2ptr_PTR
+stonith-sbd (stonith:external/sbd): Started azibmdb01
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): FAILED azibmdb02 Masters: [ azibmdb01 ] Failed Actions: * rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144, status=complete, exitreason='',
-last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms</code></pre>
+last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms
+```
The Db2 instance is restarted in the secondary role that it was assigned before.
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ]
Online: [ azibmdb01 azibmdb02 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb01
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ] Slaves: [ azibmdb02 ] Failed Actions: * rsc_Db2_db2ptr_PTR_monitor_30000 on azibmdb02 'not running' (7): call=144, status=complete, exitreason='',
- last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms</code></pre>
--
+ last-rc-change='Tue Feb 12 14:36:59 2019', queued=0ms, exec=0ms
+```
### Stop DB via db2stop force on the node that runs the HADR primary database instance
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ]
Online: [ azibmdb01 azibmdb02 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb01
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
- Slaves: [ azibmdb02 ]</code></pre>
+ Slaves: [ azibmdb02 ]
+```
As user db2\<sid>, execute the command db2stop force:
-<pre><code>azibmdb01:~ # su - db2ptr
-azibmdb01:db2ptr> db2stop force</code></pre>
+
+```bash
+azibmdb01:~ # su - db2ptr
+azibmdb01:db2ptr> db2stop force
+```
Failure detected
-<pre><code>2 nodes configured
+
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ] Full list of resources:
- stonith-sbd (stonith:external/sbd): Started azibmdb01
- Resource Group: g_ip_db2ptr_PTR
+stonith-sbd (stonith:external/sbd): Started azibmdb01
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Stopped rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Stopped
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
rsc_Db2_db2ptr_PTR (ocf::heartbeat:db2): FAILED azibmdb01 Slaves: [ azibmdb02 ] Failed Actions: * rsc_Db2_db2ptr_PTR_demote_0 on azibmdb01 'unknown error' (1): call=201, status=complete, exitreason='',
- last-rc-change='Tue Feb 12 14:45:25 2019', queued=1ms, exec=150ms</code></pre>
+ last-rc-change='Tue Feb 12 14:45:25 2019', queued=1ms, exec=150ms
+```
+
+The Db2 HADR secondary database instance got promoted into the primary role.
-The Db2 HADR secondary database instance got promoted into the primary role
-<pre><code> nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ]
Online: [ azibmdb01 azibmdb02 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb01
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ] Stopped: [ azibmdb01 ] Failed Actions: * rsc_Db2_db2ptr_PTR_start_0 on azibmdb01 'unknown error' (1): call=205, stat us=complete, exitreason='',
- last-rc-change='Tue Feb 12 14:45:27 2019', queued=0ms, exec=865ms</pre></code>
-
+ last-rc-change='Tue Feb 12 14:45:27 2019', queued=0ms, exec=865ms
+```
### Crash VM with restart on the node that runs the HADR primary database instance
-<pre><code>#Linux kernel panic - with OS restart
-azibmdb01:~ # echo b > /proc/sysrq-trigger</code></pre>
+```bash
+#Linux kernel panic - with OS restart
+azibmdb01:~ # echo b > /proc/sysrq-trigger
+```
-Pacemaker will promote the secondary instance to the primary instance role. The old primary instance will move into the secondary role after the VM and all services are fully restored after the VM reboot:
+Pacemaker promotes the secondary instance to the primary instance role. The old primary instance will move into the secondary role after the VM and all services are fully restored after the VM reboot.
-<pre><code> nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ]
Online: [ azibmdb01 azibmdb02 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb02
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
- Slaves: [ azibmdb02 ]</code></pre>
--
+ Slaves: [ azibmdb02 ]
+```
### Crash the VM that runs the HADR primary database instance with "halt"
-<pre><code>#Linux kernel panic - halts OS
-azibmdb01:~ # echo b > /proc/sysrq-trigger</code></pre>
+```bash
+#Linux kernel panic - halts OS
+azibmdb01:~ # echo b > /proc/sysrq-trigger
+```
-In such a case, Pacemaker will detect that the node that's running the primary database instance isn't responding.
+In such a case, Pacemaker detects that the node that's running the primary database instance isn't responding.
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Node azibmdb01: UNCLEAN (online)
Online: [ azibmdb02 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb02
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb01 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb01
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb01 ]
- Slaves: [ azibmdb02 ]</code></pre>
+ Slaves: [ azibmdb02 ]
+```
The next step is to check for a *Split brain* situation. After the surviving node has determined that the node that last ran the primary database instance is down, a failover of resources is executed.
-<pre><code>2 nodes configured
+
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb02 ]
OFFLINE: [ azibmdb01 ]
Full list of resources: stonith-sbd (stonith:external/sbd): Started azibmdb02
- Resource Group: g_ip_db2ptr_PTR
+Resource Group: g_ip_db2ptr_PTR
rsc_ip_db2ptr_PTR (ocf::heartbeat:IPaddr2): Started azibmdb02 rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02
- Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
+Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR]
Masters: [ azibmdb02 ]
- Stopped: [ azibmdb01 ] </code></pre>
-
+ Stopped: [ azibmdb01 ]
+```
If the node is halted, the failed node has to be restarted via Azure management tools (the Azure portal, PowerShell, or the Azure CLI). After the failed node is back online, it starts the Db2 instance in the secondary role.
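For example, with the Azure CLI the halted node can be started again as follows; the resource group name is a placeholder:

```bash
# Start the halted cluster node (placeholder resource group name)
az vm start --resource-group MyResourceGroup --name azibmdb01
```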
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ azibmdb01 azibmdb02 ]
stonith-sbd (stonith:external/sbd): Started azibmdb02
rsc_nc_db2ptr_PTR (ocf::heartbeat:azure-lb): Started azibmdb02 Master/Slave Set: msl_Db2_db2ptr_PTR [rsc_Db2_db2ptr_PTR] Masters: [ azibmdb02 ]
- Slaves: [ azibmdb01 ]</code></pre>
+ Slaves: [ azibmdb01 ]
+```
## Next steps

- [High-availability architecture and scenarios for SAP NetWeaver](./sap-high-availability-architecture-scenarios.md)
- [Set up Pacemaker on SUSE Linux Enterprise Server in Azure](./high-availability-guide-suse-pacemaker.md)
sap High Availability Guide Rhel Ibm Db2 Luw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-ibm-db2-luw.md
vm-linux Previously updated : 12/06/2022 Last updated : 01/18/2024 - # High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server
-IBM Db2 for Linux, UNIX, and Windows (LUW) in [high availability and disaster recovery (HADR) configuration](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) consists of one node that runs a primary database instance and at least one node that runs a secondary database instance. Changes to the primary database instance are replicated to a secondary database instance synchronously or asynchronously, depending on your configuration.
+IBM Db2 for Linux, UNIX, and Windows (LUW) in [high availability and disaster recovery (HADR) configuration](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) consists of one node that runs a primary database instance and at least one node that runs a secondary database instance. Changes to the primary database instance are replicated to a secondary database instance synchronously or asynchronously, depending on your configuration.
> [!NOTE] > This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
-This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
+This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
-The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP software installation. To help you accomplish these tasks, we provide references to SAP and IBM installation manuals. This article focuses on parts that are specific to the Azure environment.
+The article doesn't cover how to install and configure IBM Db2 LUW with HADR or SAP software installation. To help you accomplish these tasks, we provide references to SAP and IBM installation manuals. This article focuses on parts that are specific to the Azure environment.
The supported IBM Db2 versions are 10.5 and later, as documented in SAP note [1928533].
Before you begin an installation, see the following SAP notes and documentation:
| [2233094] | DB6: SAP applications on Azure that use IBM Db2 for Linux, UNIX, and Windows - additional information |
| [1612105] | DB6: FAQ on Db2 with HADR |
-| Documentation |
+| Documentation |
| | | [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes): Has all of the required SAP Notes for Linux | | [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide] guide |
Before you begin an installation, see the following SAP notes and documentation:
| [IBM Db2 HADR 10.5][db2-hadr-10.5] |
| [Support Policy for RHEL High Availability Clusters - Management of IBM Db2 for Linux, Unix, and Windows in a Cluster][rhel-db2-supp] |

## Overview
-To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are deployed in an [virtual machine scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with flexible orchestration across [availability zones](./high-availability-zones.md) or in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md).
+
+To achieve high availability, IBM Db2 LUW with HADR is installed on at least two Azure virtual machines, which are deployed in a [virtual machine scale set](./virtual-machine-scale-set-sap-deployment-guide.md) with flexible orchestration across [availability zones](./high-availability-zones.md) or in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md).
The following graphics display a setup of two database server Azure VMs. Both database server Azure VMs have their own storage attached and are up and running. In HADR, one database instance in one of the Azure VMs has the role of the primary instance. All clients are connected to the primary instance. All changes in database transactions are persisted locally in the Db2 transaction log. As the transaction log records are persisted locally, the records are transferred via TCP/IP to the database instance on the second database server, the standby server, or standby instance. The standby instance updates the local database by rolling forward the transferred transaction log records. In this way, the standby server is kept in sync with the primary server.
HADR is only a replication functionality. It has no failure detection and no aut
![IBM Db2 high availability overview](./media/high-availability-guide-rhel-ibm-db2-luw/ha-db2-hadr-lb-rhel.png)
-To have SAP application servers connect to primary database, you need a virtual host name and a virtual IP address. In the event of a failover, the SAP application servers will connect to new primary database instance. In an Azure environment, an [Azure load balancer](https://microsoft.sharepoint.com/teams/WAG/AzureNetworking/Wiki/Load%20Balancing.aspx) is required to use a virtual IP address in the way that's required for HADR of IBM Db2.
+To have SAP application servers connect to the primary database, you need a virtual host name and a virtual IP address. After a failover, the SAP application servers connect to the new primary database instance. In an Azure environment, an [Azure load balancer](https://microsoft.sharepoint.com/teams/WAG/AzureNetworking/Wiki/Load%20Balancing.aspx) is required to use a virtual IP address in the way that's required for HADR of IBM Db2.
To help you fully understand how IBM Db2 LUW with HADR and Pacemaker fits into a highly available SAP system setup, the following image presents an overview of a highly available setup of an SAP system based on IBM Db2 database. This article covers only IBM Db2, but it provides references to other articles about how to set up other components of an SAP system.

![IBM DB2 high availability full environment overview](./media/high-availability-guide-rhel-ibm-db2-luw/end-2-end-ha-rhel.png)

### High-level overview of the required steps
-To deploy an IBM Db2 configuration, you need to follow these steps:
-
- + Plan your environment.
- + Deploy the VMs.
- + Update RHEL Linux and configure file systems.
- + Install and configure Pacemaker.
- + Setup [glusterfs cluster][glusterfs] or [Azure NetApp Files][anf-rhel]
- + Install [ASCS/ERS on a separate cluster][ascs-ha-rhel].
- + Install IBM Db2 database with Distributed/High Availability option (SWPM).
- + Install and create a secondary database node and instance, and configure HADR.
- + Confirm that HADR is working.
- + Apply the Pacemaker configuration to control IBM Db2.
- + Configure Azure Load Balancer.
- + Install primary and dialog application servers.
- + Check and adapt the configuration of SAP application servers.
- + Perform failover and takeover tests.
+To deploy an IBM Db2 configuration, you need to follow these steps:
+* Plan your environment.
+* Deploy the VMs.
+* Update RHEL Linux and configure file systems.
+* Install and configure Pacemaker.
+* Set up a [glusterfs cluster][glusterfs] or [Azure NetApp Files][anf-rhel].
+* Install [ASCS/ERS on a separate cluster][ascs-ha-rhel].
+* Install IBM Db2 database with Distributed/High Availability option (SWPM).
+* Install and create a secondary database node and instance, and configure HADR.
+* Confirm that HADR is working.
+* Apply the Pacemaker configuration to control IBM Db2.
+* Configure Azure Load Balancer.
+* Install primary and dialog application servers.
+* Check and adapt the configuration of SAP application servers.
+* Perform failover and takeover tests.
## Plan Azure infrastructure for hosting IBM Db2 LUW with HADR
Complete the planning process before you execute the deployment. Planning builds
| Topic | Short description |
| --- | --- |
-| Define Azure resource groups | Resource groups where you deploy VM, VNet, Azure Load Balancer, and other resources. Can be existing or new. |
+| Define Azure resource groups | Resource groups where you deploy the VMs, virtual network, Azure Load Balancer, and other resources. Can be existing or new. |
| Virtual network / Subnet definition | Where VMs for IBM Db2 and Azure Load Balancer are being deployed. Can be existing or newly created. |
| Virtual machines hosting IBM Db2 LUW | VM size, storage, networking, IP address. |
-| Virtual host name and virtual IP for IBM Db2 database| The virtual IP or host name that's used for connection of SAP application servers. **db-virt-hostname**, **db-virt-ip**. |
+| Virtual host name and virtual IP for IBM Db2 database | The virtual IP address or host name that SAP application servers use to connect. **db-virt-hostname**, **db-virt-ip**. |
| Azure fencing | Method to prevent split-brain situations. |
| Azure Load Balancer | Usage of Standard load balancer (recommended), probe port for the Db2 database (our recommendation: 62500), **probe-port**. |
| Name resolution | How name resolution works in the environment. DNS service is highly recommended. A local hosts file can be used. |
-
+ For more information about Linux Pacemaker in Azure, see [Setting up Pacemaker on Red Hat Enterprise Linux in Azure][rhel-pcs-azr].
->[!IMPORTANT]
->For Db2 versions 11.5.6 and higher we highly recommend Integrated solution using Pacemaker from IBM.
->* [Integrated solution using Pacemaker](https://www.ibm.com/docs/en/db2/11.5?topic=feature-integrated-solution-using-pacemaker)
->* [Alternate or additional configurations available on Microsoft Azure](https://www.ibm.com/support/pages/alternate-or-additional-configurations-available-microsoft-azure)
+> [!IMPORTANT]
+> For Db2 versions 11.5.6 and higher, we highly recommend the integrated solution that uses Pacemaker from IBM:
+>
+> * [Integrated solution using Pacemaker](https://www.ibm.com/docs/en/db2/11.5?topic=feature-integrated-solution-using-pacemaker)
+> * [Alternate or additional configurations available on Microsoft Azure](https://www.ibm.com/support/pages/alternate-or-additional-configurations-available-microsoft-azure)
## Deployment on Red Hat Enterprise Linux

The resource agent for IBM Db2 LUW is included in Red Hat Enterprise Linux Server HA Addon. For the setup that's described in this document, you should use Red Hat Enterprise Linux for SAP. The Azure Marketplace contains an image for Red Hat Enterprise Linux 7.4 for SAP or higher that you can use to deploy new Azure virtual machines. Be aware of the various support or service models that are offered by Red Hat through the Azure Marketplace when you choose a VM image in the Azure VM Marketplace.

### Hosts: DNS updates
-Make a list of all host names, including virtual host names, and update your DNS servers to enable proper IP address to host-name resolution. If a DNS server doesn't exist or you can't update and create DNS entries, you need to use the local host files of the individual VMs that are participating in this scenario. If you're using host files entries, make sure that the entries are applied to all VMs in the SAP system environment. However, we recommend that you use your DNS that, ideally, extends into Azure
+Make a list of all host names, including virtual host names, and update your DNS servers to enable proper IP address to host-name resolution. If a DNS server doesn't exist or you can't update and create DNS entries, you need to use the local host files of the individual VMs that are participating in this scenario. If you're using host file entries, make sure that the entries are applied to all VMs in the SAP system environment. However, we recommend that you use DNS that, ideally, extends into Azure.
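If you rely on local host files, the entries could look like the following example. All IP addresses and host names are placeholders; replace them with the values of your environment.

```bash
# Example /etc/hosts entries (placeholder addresses and names)
10.100.0.10   az-idb01
10.100.0.11   az-idb02
10.100.0.40   db-virt-hostname
```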
### Manual deployment

Make sure that the selected OS is supported by IBM/SAP for IBM Db2 LUW. The list of supported OS versions for Azure VMs and Db2 releases is available in SAP note [1928533]. The list of OS releases by individual Db2 release is available in the SAP Product Availability Matrix. We highly recommend a minimum of Red Hat Enterprise Linux 7.4 for SAP because of Azure-related performance improvements in this or later Red Hat Enterprise Linux versions.

1. Create or select a resource group.
-1. Create or select a virtual network and subnet.
-1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration.
-1. Create Virtual Machine 1.
- + Use Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
- + Select the scale set, availability zone or availability set created in step 3.
-1. Create Virtual Machine 2.
- + Use Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
- + Select the scale set, availability zone or availability set created in step 3 (not the same zone as in step 4).
-1. Add data disks to the VMs, and then check the recommendation of a file system setup in the article [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload][dbms-db2].
+2. Create or select a virtual network and subnet.
+3. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration.
+4. Create Virtual Machine 1.
+   1. Use a Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
+   2. Select the scale set, availability zone, or availability set created in step 3.
+5. Create Virtual Machine 2.
+   1. Use a Red Hat Enterprise Linux for SAP image in the Azure Marketplace.
+   2. Select the scale set, availability zone, or availability set created in step 3 (not the same zone as in step 4).
+6. Add data disks to the VMs, and then check the recommendation of a file system setup in the article [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload][dbms-db2].
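If you prefer to script the VM deployment, the following Azure CLI sketch outlines one possible way to create a flexible orchestration scale set and two zonal VMs in it. All resource names, the image URN, and the VM size are placeholders, and the exact parameters depend on your environment; verify the currently supported Red Hat Enterprise Linux for SAP images and VM sizes before you use anything like this.

```bash
# Placeholder names, image URN, and size - adjust to your environment.
az vmss create --resource-group MyResourceGroup --name db2-flex-vmss \
  --orchestration-mode Flexible --platform-fault-domain-count 1 --zones 1 2

az vm create --resource-group MyResourceGroup --name az-idb01 --vmss db2-flex-vmss \
  --zone 1 --image <RHEL-for-SAP-image-URN> --size Standard_E8s_v5 \
  --vnet-name MyVnet --subnet db2-subnet --admin-username azureuser --generate-ssh-keys

az vm create --resource-group MyResourceGroup --name az-idb02 --vmss db2-flex-vmss \
  --zone 2 --image <RHEL-for-SAP-image-URN> --size Standard_E8s_v5 \
  --vnet-name MyVnet --subnet db2-subnet --admin-username azureuser --generate-ssh-keys
```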
## Install the IBM Db2 LUW and SAP environment

Before you start the installation of an SAP environment based on IBM Db2 LUW, review the following documentation:
-+ Azure documentation
-+ SAP documentation
-+ IBM documentation
+* Azure documentation.
+* SAP documentation.
+* IBM documentation.
Links to this documentation are provided in the introductory section of this article.
Check the SAP installation manuals about installing NetWeaver-based applications
You can find the guides on the SAP Help portal by using the [SAP Installation Guide Finder][sap-instfind]. You can reduce the number of guides displayed in the portal by setting the following filters:
-#### Red Hat firewall rules
-Red Hat Enterprise Linux has firewall enabled by default.
+* I want to: Install a new system.
+* My Database: IBM Db2 for Linux, Unix, and Windows.
+* Additional filters for SAP NetWeaver versions, stack configuration, or operating system.
+
+### Red Hat firewall rules
+
+Red Hat Enterprise Linux has the firewall enabled by default.
-<pre><code>#Allow access to SWPM tool. Rule is not permanent.
-sudo firewall-cmd --add-port=4237/tcp</code></pre>
+```bash
+#Allow access to SWPM tool. Rule is not permanent.
+sudo firewall-cmd --add-port=4237/tcp
+```
### Installation hints for setting up IBM Db2 LUW with HADR

To set up the primary IBM Db2 LUW database instance:
+* Use the high availability or distributed option.
+* Install the SAP ASCS/ERS and Database instance.
+* Take a backup of the newly installed database.
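For the backup step, a minimal sketch as user db2\<sid> could look like the following; the database name ID2 matches the example SID used later in this article, and the target path is an assumption.

```bash
# Run as user db2<sid>; /db2/ID2/backup is a placeholder target directory.
db2 backup database ID2 to /db2/ID2/backup compress
```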
-> [!IMPORTANT]
+> [!IMPORTANT]
> Write down the "Database Communication port" that's set during installation. It must be the same port number for both database instances.
->![SAP SWPM Port Definition](./media/high-availability-guide-rhel-ibm-db2-luw/hadr-swpm-db2-port.png)
+> ![SAP SWPM Port Definition](./media/high-availability-guide-rhel-ibm-db2-luw/hadr-swpm-db2-port.png)
### IBM Db2 HADR settings for Azure
- When you use an Azure Pacemaker fencing agent, set the following parameters:
+When you use an Azure Pacemaker fencing agent, set the following parameters:
- - HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 240
- - HADR timeout value (HADR_TIMEOUT) = 45
+* HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 240
+* HADR timeout value (HADR_TIMEOUT) = 45
-We recommend the preceding parameters based on initial failover/takeover testing. It is mandatory that you test for proper functionality of failover and takeover with these parameter settings. Because individual configurations can vary, the parameters might require adjustment.
+We recommend the preceding parameters based on initial failover/takeover testing. It's mandatory that you test for proper functionality of failover and takeover with these parameter settings. Because individual configurations can vary, the parameters might require adjustment.
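As an illustration, the two parameters could be set on both database instances as user db2\<sid>; this is a minimal sketch that uses the example database ID2 from this article.

```bash
# Run as user db2<sid> on the primary and the standby instance.
db2 update db cfg for ID2 using HADR_PEER_WINDOW 240 HADR_TIMEOUT 45
```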
> [!NOTE]
> Specific to IBM Db2 with HADR configuration with normal startup: The secondary or standby database instance must be up and running before you can start the primary database instance.
-
> [!NOTE]
> For installation and configuration that's specific to Azure and Pacemaker: During the installation procedure through SAP Software Provisioning Manager, there is an explicit question about high availability for IBM Db2 LUW:
->+ Do not select **IBM Db2 pureScale**.
->+ Do not select **Install IBM Tivoli System Automation for Multiplatforms**.
->+ Do not select **Generate cluster configuration files**.
->![SAP SWPM - DB2 HA options](./media/high-availability-guide-rhel-ibm-db2-luw/swpm-db2ha-opt.png)
-
+>
+> * Do not select **IBM Db2 pureScale**.
+> * Do not select **Install IBM Tivoli System Automation for Multiplatforms**.
+> * Do not select **Generate cluster configuration files**.
+> ![SAP SWPM - DB2 HA options](./media/high-availability-guide-rhel-ibm-db2-luw/swpm-db2ha-opt.png)
To set up the standby database server by using the SAP homogeneous system copy procedure, execute these steps:

1. Select the **System copy** option > **Target systems** > **Distributed** > **Database instance**.
-1. As a copy method, select **Homogeneous System** so that you can use backup to restore a backup on the standby server instance.
-1. When you reach the exit step to restore the database for homogeneous system copy, exit the installer. Restore the database from a backup of the primary host. All subsequent installation phases have already been executed on the primary database server.
+2. As a copy method, select **Homogeneous System** so that you can use backup to restore a backup on the standby server instance.
+3. When you reach the exit step to restore the database for homogeneous system copy, exit the installer. Restore the database from a backup of the primary host. All subsequent installation phases have already been executed on the primary database server.
#### Red Hat firewall rules for DB2 HADR

Add firewall rules to allow traffic to DB2 and between the DB2 nodes for HADR to work:
-+ Database communication port. If using partitions, add those ports too.
-+ HADR port (value of DB2 parameter HADR_LOCAL_SVC)
-+ Azure probe port
-<pre><code>sudo firewall-cmd --add-port=&lt;port&gt;/tcp --permanent
-sudo firewall-cmd --reload</code></pre>
+
+* Database communication port. If using partitions, add those ports too.
+* HADR port (value of DB2 parameter HADR_LOCAL_SVC).
+* Azure probe port.
+
+```bash
+sudo firewall-cmd --add-port=<port>/tcp --permanent
+sudo firewall-cmd --reload
+```
#### IBM Db2 HADR check

For demonstration purposes and the procedures described in this article, the database SID is **ID2**. After you've configured HADR and the status is PEER and CONNECTED on the primary and standby nodes, perform the following check:
-<pre><code>
-Execute command as db2&lt;sid&gt; db2pd -hadr -db &lt;SID&gt;
+```bash
+# Execute the following command as user db2<sid>
+db2pd -hadr -db <SID>
#Primary output: Database Member 0 -- Database ID2 -- Active -- Up 1 days 15:45:23 -- Date 2019-06-25-10.55.25.349375
- <b>HADR_ROLE = PRIMARY
+ HADR_ROLE = PRIMARY
REPLAY_TYPE = PHYSICAL HADR_SYNCMODE = NEARSYNC STANDBY_ID = 1
Database Member 0 -- Database ID2 -- Active -- Up 1 days 15:45:23 -- Date 2019-0
STANDBY_MEMBER_HOST = az-idb02 STANDBY_INSTANCE = db2id2 STANDBY_MEMBER = 0
- HADR_CONNECT_STATUS = CONNECTED</b>
+ HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.076494 (1561460105) HEARTBEAT_INTERVAL(seconds) = 7 HEARTBEAT_MISSED = 5
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 369280
#Secondary output: Database Member 0 -- Database ID2 -- Standby -- Up 1 days 15:45:18 -- Date 2019-06-25-10.56.19.820474
- <b>HADR_ROLE = STANDBY
+ HADR_ROLE = STANDBY
REPLAY_TYPE = PHYSICAL HADR_SYNCMODE = NEARSYNC STANDBY_ID = 0
Database Member 0 -- Database ID2 -- Standby -- Up 1 days 15:45:18 -- Date 2019-
STANDBY_MEMBER_HOST = az-idb02 STANDBY_INSTANCE = db2id2 STANDBY_MEMBER = 0
- HADR_CONNECT_STATUS = CONNECTED</b>
+ HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/25/2019 10:55:05.078116 (1561460105) HEARTBEAT_INTERVAL(seconds) = 7 HEARTBEAT_MISSED = 0
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 367360
PEER_WINDOW(seconds) = 1000 PEER_WINDOW_END = 06/25/2019 11:12:59.000000 (1561461179) READS_ON_STANDBY_ENABLED = N
-</code></pre>
+```
### Configure Azure Load Balancer
-To configure Azure Load Balancer, we recommend that you use the [Azure Standard Load Balancer SKU](../../load-balancer/load-balancer-overview.md) and then do the following;
-
-> [!NOTE]
-> The Standard Load Balancer SKU has restrictions accessing public IP addresses from the nodes underneath the Load Balancer. The article [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md) is describing ways on how to enable those nodes to access public IP addresses
-
-> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
-
-1. Create a front-end IP pool:
-
- a. In the Azure portal, open the Azure Load Balancer, select **frontend IP pool**, and then select **Add**.
-
- b. Enter the name of the new front-end IP pool (for example, **Db2-connection**).
-
- c. Set the **Assignment** to **Static**, and enter the IP address **Virtual-IP** defined at the beginning.
-
- d. Select **OK**.
-
- e. After the new front-end IP pool is created, note the pool IP address.
-
-1. Create a single back-end pool:
-
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
-
- 1. Enter the name of the new back-end pool (for example, **Db2-backend**).
-
- 2. Select **NIC** for Backend Pool Configuration.
-
- 1. Select **Add a virtual machine**.
-
- 1. Select the virtual machines of the cluster.
-
- 1. Select **Add**.
-
- 2. Select **Save**.
-
+During VM configuration, you have the option to create or select an existing load balancer in the networking section. Follow the steps below to set up a Standard load balancer for the high-availability setup of the DB2 database.
-1. Create a health probe:
+#### [Azure portal](#tab/lb-portal)
- a. In the Azure portal, open the Azure Load Balancer, select **health probes**, and select **Add**.
- b. Enter the name of the new health probe (for example, **Db2-hp**).
+#### [Azure CLI](#tab/lb-azurecli)
- c. Select **TCP** as the protocol and port **62500**. Keep the **Interval** value set to **5**.
- d. Select **OK**.
+#### [PowerShell](#tab/lb-powershell)
-1. Create the load-balancing rules:
- a. In the Azure portal, open the Azure Load Balancer, select **Load balancing rules**, and then select **Add**.
-
- b. Enter the name of the new Load Balancer rule (for example, **Db2-SID**).
-
- c. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **Db2-frontend**).
-
- d. Keep the **Protocol** set to **TCP**, and enter port *Database Communication port*.
+
- e. Increase the **idle timeout** to 30 minutes.
+> [!IMPORTANT]
+> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
- f. Make sure to **enable Floating IP**.
+> [!NOTE]
+> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) instance of Standard Azure Load Balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
- g. Select **OK**.
+> [!IMPORTANT]
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps could cause the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
**[A]** Add a firewall rule for the probe port:
-<pre><code>sudo firewall-cmd --add-port=<b>&lt;probe-port&gt;</b>/tcp --permanent
-sudo firewall-cmd --reload</code></pre>
+```bash
+sudo firewall-cmd --add-port=<probe-port>/tcp --permanent
+sudo firewall-cmd --reload
+```
## Create the Pacemaker cluster
-
-To create a basic Pacemaker cluster for this IBM Db2 server, see [Setting up Pacemaker on Red Hat Enterprise Linux in Azure][rhel-pcs-azr].
+
+To create a basic Pacemaker cluster for this IBM Db2 server, see [Setting up Pacemaker on Red Hat Enterprise Linux in Azure][rhel-pcs-azr].
## Db2 Pacemaker configuration
When you use Pacemaker for automatic failover in the event of a node failure, yo
The following items are prefixed with either:
+* **[A]**: Applicable to all nodes
+* **[1]**: Applicable only to node 1
+* **[2]**: Applicable only to node 2
**[A]** Prerequisite for Pacemaker configuration:
-1. Shut down both database servers with user db2\<sid> with db2stop.
-1. Change the shell environment for db2\<sid> user to */bin/ksh*:
-<pre><code># Install korn shell:
-sudo yum install ksh
-# Change users shell:
-sudo usermod -s /bin/ksh db2&lt;sid&gt;</code></pre>
-
-### Pacemaker configuration
-
-**[1]** IBM Db2 HADR-specific Pacemaker configuration:
-<pre><code># Put Pacemaker into maintenance mode
-sudo pcs property set maintenance-mode=true
-</code></pre>
-
-**[1]** Create IBM Db2 resources:
-
-If building a cluster on **RHEL 7.x**, make sure to update package **resource-agents** to version `resource-agents-4.1.1-61.el7_9.15` or higher. Use the following commands to create the cluster resources:
-
-<pre><code># Replace <b>bold strings</b> with your instance name db2sid, database SID, and virtual IP address/Azure Load Balancer.
-sudo pcs resource create Db2_HADR_<b>ID2</b> db2 instance='<b>db2id2</b>' dblist='<b>ID2</b>' master meta notify=true resource-stickiness=5000
-
-#Configure resource stickiness and correct cluster notifications for master resoruce
-sudo pcs resource update Db2_HADR_<b>ID2</b>-master meta notify=true resource-stickiness=5000
-
-# Configure virtual IP - same as Azure Load Balancer IP
-sudo pcs resource create vip_<b>db2id2</b>_<b>ID2</b> IPaddr2 ip='<b>10.100.0.40</b>'
-
-# Configure probe port for Azure load Balancer
-sudo pcs resource create nc_<b>db2id2</b>_<b>ID2</b> azure-lb port=<b>62500</b>
-
-#Create a group for ip and Azure loadbalancer probe port
-sudo pcs resource group add g_ipnc_<b>db2id2</b>_<b>ID2</b> vip_<b>db2id2</b>_<b>ID2</b> nc_<b>db2id2</b>_<b>ID2</b>
-
-#Create colocation constrain - keep Db2 HADR Master and Group on same node
-sudo pcs constraint colocation add g_ipnc_<b>db2id2</b>_<b>ID2</b> with master Db2_HADR_<b>ID2</b>-master
-
-#Create start order constrain
-sudo pcs constraint order promote Db2_HADR_<b>ID2</b>-master then g_ipnc_<b>db2id2</b>_<b>ID2</b>
-</code></pre>
+* Shut down both database servers with user db2\<sid> with db2stop.
+* Change the shell environment for db2\<sid> user to */bin/ksh*:
+ ```bash
+ # Install korn shell:
+ sudo yum install ksh
+ # Change users shell:
+ sudo usermod -s /bin/ksh db2<sid>
+ ```
-If building a cluster on **RHEL 8.x**, make sure to update package **resource-agents** to version `resource-agents-4.1.1-93.el8` or higher. For details see Red Hat KB [A `db2` resource with HADR fails promote with state `PRIMARY/REMOTE_CATCHUP_PENDING/CONNECTED`](https://access.redhat.com/solutions/6516791). Use the following commands to create the cluster resources:
-
-<pre><code># Replace <b>bold strings</b> with your instance name db2sid, database SID, and virtual IP address/Azure Load Balancer.
-sudo pcs resource create Db2_HADR_<b>ID2</b> db2 instance='<b>db2id2</b>' dblist='<b>ID2</b>' promotable meta notify=true resource-stickiness=5000
+### Pacemaker configuration
-#Configure resource stickiness and correct cluster notifications for master resoruce
-sudo pcs resource update Db2_HADR_<b>ID2</b>-clone meta notify=true resource-stickiness=5000
+1. **[1]** IBM Db2 HADR-specific Pacemaker configuration:
-# Configure virtual IP - same as Azure Load Balancer IP
-sudo pcs resource create vip_<b>db2id2</b>_<b>ID2</b> IPaddr2 ip='<b>10.100.0.40</b>'
+ ```bash
+ # Put Pacemaker into maintenance mode
+ sudo pcs property set maintenance-mode=true
+ ```
-# Configure probe port for Azure load Balancer
-sudo pcs resource create nc_<b>db2id2</b>_<b>ID2</b> azure-lb port=<b>62500</b>
+2. **[1]** Create IBM Db2 resources:
-#Create a group for ip and Azure loadbalancer probe port
-sudo pcs resource group add g_ipnc_<b>db2id2</b>_<b>ID2</b> vip_<b>db2id2</b>_<b>ID2</b> nc_<b>db2id2</b>_<b>ID2</b>
+ If building a cluster on **RHEL 7.x**, make sure to update package **resource-agents** to version `resource-agents-4.1.1-61.el7_9.15` or higher. Use the following commands to create the cluster resources:
-#Create colocation constrain - keep Db2 HADR Master and Group on same node
-sudo pcs constraint colocation add g_ipnc_<b>db2id2</b>_<b>ID2</b> with master Db2_HADR_<b>ID2</b>-clone
+ ```bash
+   # Replace the example values (instance name db2id2, database SID ID2, and the Azure Load Balancer virtual IP address) with your own values.
+ sudo pcs resource create Db2_HADR_ID2 db2 instance='db2id2' dblist='ID2' master meta notify=true resource-stickiness=5000
+
+   # Configure resource stickiness and correct cluster notifications for the master resource
+ sudo pcs resource update Db2_HADR_ID2-master meta notify=true resource-stickiness=5000
+
+ # Configure virtual IP - same as Azure Load Balancer IP
+ sudo pcs resource create vip_db2id2_ID2 IPaddr2 ip='10.100.0.40'
+
+ # Configure probe port for Azure load Balancer
+ sudo pcs resource create nc_db2id2_ID2 azure-lb port=62500
+
+ #Create a group for ip and Azure loadbalancer probe port
+ sudo pcs resource group add g_ipnc_db2id2_ID2 vip_db2id2_ID2 nc_db2id2_ID2
+
+   # Create colocation constraint - keep Db2 HADR master and group on the same node
+ sudo pcs constraint colocation add g_ipnc_db2id2_ID2 with master Db2_HADR_ID2-master
+
+   # Create start order constraint
+ sudo pcs constraint order promote Db2_HADR_ID2-master then g_ipnc_db2id2_ID2
+ ```
-#Create start order constrain
-sudo pcs constraint order promote Db2_HADR_<b>ID2</b>-clone then g_ipnc_<b>db2id2</b>_<b>ID2</b>
-</code></pre>
+   If building a cluster on **RHEL 8.x**, make sure to update package **resource-agents** to version `resource-agents-4.1.1-93.el8` or higher. For details, see the Red Hat KBA [`db2` resource with HADR fails promote with state `PRIMARY/REMOTE_CATCHUP_PENDING/CONNECTED`](https://access.redhat.com/solutions/6516791). Use the following commands to create the cluster resources:
-**[1]** Start IBM Db2 resources:
-* Put Pacemaker out of maintenance mode.
-<pre><code># Put Pacemaker out of maintenance-mode - that start IBM Db2
-sudo pcs property set maintenance-mode=false</pre></code>
+ ```bash
+   # Replace the example values (instance name db2id2, database SID ID2, and the Azure Load Balancer virtual IP address) with your own values.
+ sudo pcs resource create Db2_HADR_ID2 db2 instance='db2id2' dblist='ID2' promotable meta notify=true resource-stickiness=5000
+
+   # Configure resource stickiness and correct cluster notifications for the master resource
+ sudo pcs resource update Db2_HADR_ID2-clone meta notify=true resource-stickiness=5000
+
+ # Configure virtual IP - same as Azure Load Balancer IP
+ sudo pcs resource create vip_db2id2_ID2 IPaddr2 ip='10.100.0.40'
+
+ # Configure probe port for Azure load Balancer
+ sudo pcs resource create nc_db2id2_ID2 azure-lb port=62500
+
+ #Create a group for ip and Azure loadbalancer probe port
+ sudo pcs resource group add g_ipnc_db2id2_ID2 vip_db2id2_ID2 nc_db2id2_ID2
+
+   # Create colocation constraint - keep Db2 HADR master and group on the same node
+ sudo pcs constraint colocation add g_ipnc_db2id2_ID2 with master Db2_HADR_ID2-clone
+
+   # Create start order constraint
+ sudo pcs constraint order promote Db2_HADR_ID2-clone then g_ipnc_db2id2_ID2
+ ```
-**[1]** Make sure that the cluster status is OK and that all of the resources are started. It's not important which node the resources are running on.
-<pre><code>sudo pcs status
-2 nodes configured
-5 resources configured
+3. **[1]** Start IBM Db2 resources:
-Online: [ az-idb01 az-idb02 ]
+ Put Pacemaker out of maintenance mode.
-Full list of resources:
+ ```bash
+ # Put Pacemaker out of maintenance-mode - that start IBM Db2
+ sudo pcs property set maintenance-mode=false
+ ```
- rsc_st_azure (stonith:fence_azure_arm): Started az-idb01
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
- Masters: [ az-idb01 ]
- Slaves: [ az-idb02 ]
- Resource Group: g_ipnc_db2id2_ID2
- vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
- nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01
+4. **[1]** Make sure that the cluster status is OK and that all of the resources are started. It's not important which node the resources are running on.
-Daemon Status:
- corosync: active/disabled
- pacemaker: active/disabled
- pcsd: active/enabled
-</code></pre>
+ ```bash
+ sudo pcs status
+ 2 nodes configured
+ 5 resources configured
+
+ Online: [ az-idb01 az-idb02 ]
+
+ Full list of resources:
+
+ rsc_st_azure (stonith:fence_azure_arm): Started az-idb01
+ Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+ Masters: [ az-idb01 ]
+ Slaves: [ az-idb02 ]
+ Resource Group: g_ipnc_db2id2_ID2
+ vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
+ nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01
+
+ Daemon Status:
+ corosync: active/disabled
+ pacemaker: active/disabled
+ pcsd: active/enabled
+ ```
> [!IMPORTANT]
> You must manage the Pacemaker clustered Db2 instance by using Pacemaker tools. If you use db2 commands such as db2stop, Pacemaker detects the action as a failure of the resource. If you're performing maintenance, you can put the nodes or resources in maintenance mode. Pacemaker suspends monitoring resources, and you can then use normal db2 administration commands.

### Make changes to SAP profiles to use virtual IP for connection

To connect to the primary instance of the HADR configuration, the SAP application layer needs to use the virtual IP address that you defined and configured for the Azure Load Balancer. The following changes are required:

/sapmnt/\<SID>/profile/DEFAULT.PFL
-<pre><code>SAPDBHOST = db-virt-hostname
+
+```bash
+SAPDBHOST = db-virt-hostname
j2ee/dbhost = db-virt-hostname
-</code></pre>
+```
+ /sapmnt/\<SID>/global/db6/db2cli.ini
-<pre><code>Hostname=db-virt-hostname
-</code></pre>
+
+```bash
+Hostname=db-virt-hostname
+```
## Install primary and dialog application servers
-When you install primary and dialog application servers against an Db2 HADR configuration, use the virtual host name that you picked for the configuration.
+When you install primary and dialog application servers against a Db2 HADR configuration, use the virtual host name that you picked for the configuration.
If you performed the installation before you created the Db2 HADR configuration, make the changes as described in the preceding section and as follows for SAP Java stacks.

### ABAP+Java or Java stack systems JDBC URL check

Use the J2EE Config tool to check or update the JDBC URL. Because the J2EE Config tool is a graphical tool, you need to have X server installed:
-
+ 1. Sign in to the primary application server of the J2EE instance and execute:
-
- <pre><code>sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh</code></pre>
-
-1. In the left frame, choose **security store**.
-1. In the right frame, choose the key `jdbc/pool/\<SAPSID>/url`.
-1. Change the host name in the JDBC URL to the virtual host name.
-
- <pre><code>jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0</code></pre>
-
-1. Select **Add**.
-1. To save your changes, select the disk icon at the upper left.
-1. Close the configuration tool.
-1. Restart the Java instance.
+
+ ```bash
+ sudo /usr/sap/*SID*/*Instance*/j2ee/configtool/configtool.sh
+ ```
+
+2. In the left frame, choose **security store**.
+3. In the right frame, choose the key `jdbc/pool/\<SAPSID>/url`.
+4. Change the host name in the JDBC URL to the virtual host name.
+
+ ```bash
+ jdbc:db2://db-virt-hostname:5912/TSP:deferPrepares=0
+ ```
+
+5. Select **Add**.
+6. To save your changes, select the disk icon at the upper left.
+7. Close the configuration tool.
+8. Restart the Java instance.
## Configure log archiving for HADR setup
-To configure the Db2 log archiving for HADR setup, we recommend that you configure both the primary and the standby database to have automatic log retrieval capability from all log archive locations. Both the primary and standby database must be able to retrieve log archive files from all the log archive locations to which either one of the database instances might archive log files.
+To configure the Db2 log archiving for HADR setup, we recommend that you configure both the primary and the standby database to have automatic log retrieval capability from all log archive locations. Both the primary and standby database must be able to retrieve log archive files from all the log archive locations to which either one of the database instances might archive log files.
The log archiving is performed only by the primary database. If you change the HADR roles of the database servers or if a failure occurs, the new primary database is responsible for log archiving. If you've set up multiple log archive locations, your logs might be archived twice. In the event of a local or remote catch-up, you might also have to manually copy the archived logs from the old primary server to the active log location of the new primary server.
-We recommend configuring a common NFS share or GlusterFS, where logs are written from both nodes. The NFS share or GlusterFS has to be highly available.
+We recommend configuring a common NFS share or GlusterFS, where logs are written from both nodes. The NFS share or GlusterFS has to be highly available.
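As an illustration, pointing the first log archiving method of both databases to a shared, highly available path could look like the following sketch. The archive path is an assumption; use the NFS or GlusterFS mount of your environment.

```bash
# Run as user db2<sid> on both the primary and the standby node.
# /db2/ID2/log_archive is a placeholder for the shared, highly available mount point.
db2 update db cfg for ID2 using LOGARCHMETH1 "DISK:/db2/ID2/log_archive/"
```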
You can use existing highly available NFS shares or GlusterFS for transports or a profile directory. For more information, see:
+* [GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver][glusterfs].
+* [High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux with Azure NetApp Files for SAP Applications][anf-rhel].
+* [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) (to create NFS shares).
## Test the cluster setup
This section describes how you can test your Db2 HADR setup. Every test assumes
The initial status for all test cases is explained here (crm_mon -r or pcs status):
+* **pcs status** is a snapshot of Pacemaker status at execution time.
+* **crm_mon -r** is continuous output of Pacemaker status.
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb01 az-idb02 ] Full list of resources:
- rsc_st_azure (stonith:fence_azure_arm): Started az-idb01
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+rsc_st_azure (stonith:fence_azure_arm): Started az-idb01
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ] Slaves: [ az-idb02 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01 nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01
Daemon Status:
corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
-</code></pre>
+```
The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as shown in the following image:
The original status in an SAP system is documented in Transaction DBACOCKPIT > C
### Test takeover of IBM Db2
-> [!IMPORTANT]
+> [!IMPORTANT]
> Before you start the test, make sure that:
+>
> * Pacemaker doesn't have any failed actions (pcs status).
-> * There are no location constraints (leftovers of migration test)
-> * The IBM Db2 HADR synchronization is working. Check with user db2\<sid> <pre><code>db2pd -hadr -db \<DBSID></code></pre>
+> * There are no location constraints (leftovers of migration test).
+> * The IBM Db2 HADR synchronization is working. Check with user db2\<sid>.
+>
+> ```bash
+> db2pd -hadr -db <DBSID>
+> ```
Migrate the node that's running the primary Db2 database by executing the following command:
-<pre><code># On RHEL 7.x
-sudo pcs resource move Db2_HADR_<b>ID2</b>-master
+
+```bash
+# On RHEL 7.x
+sudo pcs resource move Db2_HADR_ID2-master
# On RHEL 8.x
-sudo pcs resource move Db2_HADR_<b>ID2</b>-clone --master
-</code></pre>
+sudo pcs resource move Db2_HADR_ID2-clone --master
+```
After the migration is done, the crm status output looks like:
-<pre><code>2 nodes configured
+
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb01 az-idb02 ] Full list of resources:
- rsc_st_azure (stonith:fence_azure_arm): Started az-idb01
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+rsc_st_azure (stonith:fence_azure_arm): Started az-idb01
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ] Stopped: [ az-idb01 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
     vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
     nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02
-</code></pre>
+```
The original status in an SAP system is documented in Transaction DBACOCKPIT > Configuration > Overview, as shown in the following image:

![DBACockpit - Post Migration](./media/high-availability-guide-rhel-ibm-db2-luw/hadr-sap-mgr-post-rhel.png)
-Resource migration with "pcs resource move" creates location constraints. Location constraints in this case are preventing running IBM Db2 instance on az-idb01. If location constraints are not deleted, the resource cannot fail back.
+Resource migration with "pcs resource move" creates location constraints. Location constraints in this case are preventing running IBM Db2 instance on az-idb01. If location constraints aren't deleted, the resource can't fail back.
+
+Remove the location constraint, and the standby node will be started on az-idb01.
-Remove the location constrain and standby node will be started on az-idb01.
-<pre><code># On RHEL 7.x
-sudo pcs resource clear Db2_HADR_<b>ID2</b>-master
+```bash
+# On RHEL 7.x
+sudo pcs resource clear Db2_HADR_ID2-master
# On RHEL 8.x
-sudo pcs resource clear Db2_HADR_<b>ID2</b>-clone</code></pre>
+sudo pcs resource clear Db2_HADR_ID2-clone
+```
The cluster status changes to:
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb01 az-idb02 ]
Full list of resources:
Slaves: [ az-idb01 ] Resource Group: g_ipnc_db2id2_ID2 vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
- nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02</code></pre>
+ nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02
+```
![DBACockpit - Removed location constraint](./media/high-availability-guide-rhel-ibm-db2-luw/hadr-sap-mgr-clear-rhel.png)

Migrate the resource back to *az-idb01* and clear the location constraints:
-<pre><code># On RHEL 7.x
-sudo pcs resource move Db2_HADR_<b>ID2</b>-master az-idb01
-sudo pcs resource clear Db2_HADR_<b>ID2</b>-master
+
+```bash
+# On RHEL 7.x
+sudo pcs resource move Db2_HADR_ID2-master az-idb01
+sudo pcs resource clear Db2_HADR_ID2-master
# On RHEL 8.x
-sudo pcs resource move Db2_HADR_<b>ID2</b>-clone --master
-sudo pcs resource clear Db2_HADR_<b>ID2</b>-clone</code></pre>
+sudo pcs resource move Db2_HADR_ID2-clone --master
+sudo pcs resource clear Db2_HADR_ID2-clone
+```
-- **On RHEL 7.x - pcs resource move \<res_name\> \<host\>:** Creates location constraints and can cause issues with takeover-- **On RHEL 8.x - pcs resource move \<res_name\> --master:** Creates location constraints and can cause issues with takeover-- **pcs resource clear \<res_name\>**: Clears location constraints-- **pcs resource cleanup \<res_name\>**: Clears all errors of the resource
+* On RHEL 7.x - `pcs resource move <resource_name> <host>`: Creates location constraints and can cause issues with takeover
+* On RHEL 8.x - `pcs resource move <resource_name> --master`: Creates location constraints and can cause issues with takeover
+* `pcs resource clear <resource_name>`: Clears location constraints
+* `pcs resource cleanup <resource_name>`: Clears all errors of the resource
### Test a manual takeover

You can test a manual takeover by stopping the Pacemaker service on the *az-idb01* node:
-<pre><code>systemctl stop pacemaker</code></pre>
+
+```bash
+systemctl stop pacemaker
+```
Status on *az-idb02*:
-<pre><code>2 nodes configured
+
+```bash
+2 nodes configured
5 resources configured Node az-idb01: pending
Online: [ az-idb02 ]
Full list of resources:
- rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ] Stopped: [ az-idb01 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
     vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
     nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
- pcsd: active/enabled</code></pre>
+ pcsd: active/enabled
+```
After the failover, you can start the service again on *az-idb01*.
-<pre><code>systemctl start pacemaker</code></pre>
+```bash
+systemctl start pacemaker
+```
### Kill the Db2 process on the node that runs the HADR primary database
-<pre><code>#Kill main db2 process - db2sysc
+```bash
+#Kill main db2 process - db2sysc
[sapadmin@az-idb02 ~]$ sudo ps -ef|grep db2sysc
db2ptr 34598 34596 8 14:21 ? 00:00:07 db2sysc 0
[sapadmin@az-idb02 ~]$ sudo kill -9 34598
-</code></pre>
+```
The Db2 instance is going to fail, and Pacemaker moves the master role and reports the following status:
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb01 az-idb02 ] Full list of resources:
- rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ] Stopped: [ az-idb01 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
     vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
     nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=49, status=complete, exitreason='none',
- last-rc-change='Wed Jun 26 09:57:35 2019', queued=0ms, exec=362ms</code></pre>
+ last-rc-change='Wed Jun 26 09:57:35 2019', queued=0ms, exec=362ms
+```
-Pacemaker will restart the Db2 primary database instance on the same node, or it will fail over to the node that's running the secondary database instance and an error is reported.
+Pacemaker restarts the Db2 primary database instance on the same node, or it fails over to the node that's running the secondary database instance and an error is reported.
### Kill the Db2 process on the node that runs the secondary database instance
-<pre><code>[sapadmin@az-idb02 ~]$ sudo ps -ef|grep db2sysc
+```bash
+[sapadmin@az-idb02 ~]$ sudo ps -ef|grep db2sysc
db2id2 23144 23142 2 09:53 ? 00:00:13 db2sysc 0
[sapadmin@az-idb02 ~]$ sudo kill -9 23144
-</code></pre>
+```
+
+The node gets into failed state, and an error is reported.
-The node gets into failed stated and error reported
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb01 az-idb02 ] Full list of resources:
- rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ] Slaves: [ az-idb02 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
     vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
     nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01

Failed Actions:
* Db2_HADR_ID2_monitor_20000 on az-idb02 'not running' (7): call=144, status=complete, exitreason='none',
- last-rc-change='Wed Jun 26 10:02:09 2019', queued=0ms, exec=0ms</code></pre>
+ last-rc-change='Wed Jun 26 10:02:09 2019', queued=0ms, exec=0ms
+```
The Db2 instance gets restarted in the secondary role that it had before.

### Stop DB via db2stop force on the node that runs the HADR primary database instance

As user db2\<sid>, execute the command db2stop force:
-<pre><code>az-idb01:db2ptr> db2stop force</code></pre>
+
+```bash
+az-idb01:db2ptr> db2stop force
+```
Failure detected:
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb01 az-idb02 ] Full list of resources:
- rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Slaves: [ az-idb02 ] Stopped: [ az-idb01 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
     vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Stopped
     nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Stopped

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110, status=complete, exitreason='none',
- last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms</code></pre>
+ last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms
+```
The Db2 HADR secondary database instance got promoted into the primary role.
-<pre><code>2 nodes configured
+
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb01 az-idb02 ] Full list of resources:
- rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ] Slaves: [ az-idb01 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
     vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
     nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02

Failed Actions:
* Db2_HADR_ID2_demote_0 on az-idb01 'unknown error' (1): call=110, status=complete, exitreason='none',
- last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms</pre></code>
-
+ last-rc-change='Wed Jun 26 14:03:12 2019', queued=0ms, exec=355ms
+```
### Crash the VM that runs the HADR primary database instance with "halt"
-<pre><code>#Linux kernel panic.
-sudo echo b > /proc/sysrq-trigger</code></pre>
+```bash
+#Linux kernel panic.
+sudo echo b > /proc/sysrq-trigger
+```
-In such a case, Pacemaker will detect that the node that's running the primary database instance isn't responding.
+In such a case, Pacemaker detects that the node that's running the primary database instance isn't responding.
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Node az-idb01: UNCLEAN (online)
Online: [ az-idb02 ]
Full list of resources: rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb01 ] Slaves: [ az-idb02 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb01
- nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01</code></pre>
+ nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb01
+```
The next step is to check for a *Split brain* situation. After the surviving node has determined that the node that last ran the primary database instance is down, a failover of resources is executed.
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb02 ]
OFFLINE: [ az-idb01 ]
Full list of resources: rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ] Stopped: [ az-idb01 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
- nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02 </code></pre>
+ nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02
+```
+
+In the event of a kernel panic, the failed node is restarted by the fencing agent. After the failed node is back online, you must start the Pacemaker cluster with the following command:
+```bash
+sudo pcs cluster start
+```
-In the event of a kernel panic, the failed node will be restared by fencing agent. After the failed node is back online, you must start pacemaker cluster by
-<pre><code>sudo pcs cluster start</code></pre> it starts the Db2 instance into the secondary role.
+Starting the cluster starts the Db2 instance in the secondary role.
-<pre><code>2 nodes configured
+```bash
+2 nodes configured
5 resources configured Online: [ az-idb01 az-idb02 ]
Online: [ az-idb01 az-idb02 ]
Full list of resources: rsc_st_azure (stonith:fence_azure_arm): Started az-idb02
- Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
+Master/Slave Set: Db2_HADR_ID2-master [Db2_HADR_ID2]
Masters: [ az-idb02 ] Slaves: [ az-idb01 ]
- Resource Group: g_ipnc_db2id2_ID2
+Resource Group: g_ipnc_db2id2_ID2
vip_db2id2_ID2 (ocf::heartbeat:IPaddr2): Started az-idb02
- nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02</code></pre>
+ nc_db2id2_ID2 (ocf::heartbeat:azure-lb): Started az-idb02
+```
## Next steps
+* [High-availability architecture and scenarios for SAP NetWeaver](./sap-high-availability-architecture-scenarios.md)
+* [Setting up Pacemaker on Red Hat Enterprise Linux in Azure][rhel-pcs-azr]
[1928533]:https://launchpad.support.sap.com/#/notes/1928533
[2015553]:https://launchpad.support.sap.com/#/notes/2015553
[1612105]:https://launchpad.support.sap.com/#/notes/1612105
[2694118]:https://launchpad.support.sap.com/#/notes/2694118
[db2-hadr-11.1]:https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html
[db2-hadr-10.5]:https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html
[dbms-db2]:dbms-guide-ibm.md
sap High Availability Guide Rhel Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-multi-sid.md
Previously updated : 11/20/2023 Last updated : 01/18/2024
This article assumes that:
### Prepare for SAP NetWeaver Installation
-1. Add configuration for the newly deployed system (that is, `NW2` and `NW3`) to the existing Azure Load Balancer, following the instructions [Deploy Azure Load Balancer manually via Azure portal](./high-availability-guide-rhel-netapp-files.md#deploy-azure-load-balancer-via-the-azure-portal). Adjust the IP addresses, health probe ports, and load-balancing rules for your configuration.
+1. Add configuration for the newly deployed system (that is, `NW2` and `NW3`) to the existing Azure Load Balancer, following the instructions [Deploy Azure Load Balancer manually via Azure portal](./high-availability-guide-rhel-netapp-files.md#configure-azure-load-balancer). Adjust the IP addresses, health probe ports, and load-balancing rules for your configuration.
2. **[A]** Set up name resolution for the additional SAP systems. You can either use a DNS server or modify */etc/hosts* on all nodes. This example shows how to use the */etc/hosts* file. Adapt the IP addresses and the host names to your environment.
This article assumes that:
sudo pcs resource create vip_NW2_ASCS IPaddr2 \ ip=10.3.1.52 \ --group g-NW2_ASCS
-
+
sudo pcs resource create nc_NW2_ASCS azure-lb port=62010 \ --group g-NW2_ASCS
This article assumes that:
```cmd sudo vi /sapmnt/NW2/profile/NW2_ERS12_msnw2ers
-
+
# Change the restart command to a start command #Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID) Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
-
+
# remove Autostart from ERS profile # Autostart = 1 ```
This article assumes that:
> > When utilizing Red Hat HA services for SAP (cluster configuration) to manage SAP application server instances such as SAP ASCS and SAP ERS, additional modifications will be necessary to ensure compatibility between the SAPInstance resource agent and the new systemd-based SAP startup framework. So once the SAP application server instances has been installed or switched to a systemd enabled SAP Kernel as per SAP Note [3115048](https://me.sap.com/notes/3115048), the steps mentioned in [Red Hat KBA 6884531](https://access.redhat.com/articles/6884531) must be completed successfully on all cluster nodes.
-7. **[1]** Create the SAP cluster resources for the newly installed SAP system.
+7. **[1]** Create the SAP cluster resources for the newly installed SAP system.
- If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems `NW2` and `NW3` as follows:
+   Depending on whether you are running an ENSA1 or ENSA2 system, select the respective tab to define the resources for SAP systems `NW2` and `NW3` as follows. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
- ```cmd
- sudo pcs property set maintenance-mode=true
-
- sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
- InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
- AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
- op monitor interval=20 on-fail=restart timeout=60 \
- op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW2_ASCS
-
- sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000
-
- sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
- InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
- AUTOMATIC_RECOVER=false IS_ERS=true \
- op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW2_AERS
-
- sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
- sudo pcs constraint location rsc_sap_NW2_ASCS10 rule score=2000 runs_ers_NW2 eq 1
- sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false
-
- sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
- InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
- AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
- op monitor interval=20 on-fail=restart timeout=60 \
- op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW3_ASCS
-
- sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000
-
- sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
- InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW2_ERS22_msnw3ers" \
- AUTOMATIC_RECOVER=false IS_ERS=true \
- op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW3_AERS
-
- sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
- sudo pcs constraint location rsc_sap_NW3_ASCS20 rule score=2000 runs_ers_NW3 eq 1
- sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false
-
- sudo pcs property set maintenance-mode=false
- ```
+ If you use enqueue server 2 architecture ([ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources for SAP systems `NW2` and `NW3` as follows:
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Beginning with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources for SAP systems `NW2` and `NW3` as follows:
+ #### [ENSA1](#tab/ensa1)
- ```cmd
- sudo pcs property set maintenance-mode=true
+ ```bash
+ sudo pcs property set maintenance-mode=true
+
+ sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
+ InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
+ op monitor interval=20 on-fail=restart timeout=60 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW2_ASCS
+
+ sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000
- sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
- InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
- AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000 \
- op monitor interval=20 on-fail=restart timeout=60 \
- op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW2_ASCS
+ sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
+ InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW2_AERS
- sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000
+ sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
+ sudo pcs constraint location rsc_sap_NW2_ASCS10 rule score=2000 runs_ers_NW2 eq 1
+ sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false
- sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
- InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
- AUTOMATIC_RECOVER=false IS_ERS=true \
- op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW2_AERS
+ sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
+ InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
+ op monitor interval=20 on-fail=restart timeout=60 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW3_ASCS
- sudo pcs resource meta rsc_sap_NW2_ERS12 resource-stickiness=3000
+ sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000
- sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
- sudo pcs constraint order start g-NW2_ASCS then start g-NW2_AERS kind=Optional symmetrical=false
- sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false
+ sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
+   InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW3_AERS
- sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
- InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
- AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000 \
- op monitor interval=20 on-fail=restart timeout=60 \
- op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW3_ASCS
+ sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
+ sudo pcs constraint location rsc_sap_NW3_ASCS20 rule score=2000 runs_ers_NW3 eq 1
+ sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false
- sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000
+ sudo pcs property set maintenance-mode=false
+ ```
- sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
- InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW2_ERS22_msnw3ers" \
- AUTOMATIC_RECOVER=false IS_ERS=true \
- op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
- --group g-NW3_AERS
+ #### [ENSA2](#tab/ensa2)
- sudo pcs resource meta rsc_sap_NW3_ERS22 resource-stickiness=3000
+ ```bash
+ sudo pcs property set maintenance-mode=true
- sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
- sudo pcs constraint order start g-NW3_ASCS then start g-NW3_AERS kind=Optional symmetrical=false
- sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false
+ sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
+ InstanceName=NW2_ASCS10_msnw2ascs START_PROFILE="/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 \
+ op monitor interval=20 on-fail=restart timeout=60 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW2_ASCS
- sudo pcs property set maintenance-mode=false
- ```
+ sudo pcs resource meta g-NW2_ASCS resource-stickiness=3000
- If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+ sudo pcs resource create rsc_sap_NW2_ERS12 SAPInstance \
+ InstanceName=NW2_ERS12_msnw2ers START_PROFILE="/sapmnt/NW2/profile/NW2_ERS12_msnw2ers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW2_AERS
- > [!NOTE]
- > The timeouts in the above configuration are just examples and might need to be adapted to the specific SAP setup.
+ sudo pcs resource meta rsc_sap_NW2_ERS12 resource-stickiness=3000
- Make sure that the cluster status is ok and that all resources are started. It's not important on which node the resources are running.
- The following example shows the cluster resources status, after SAP systems `NW2` and `NW3` were added to the cluster.
+ sudo pcs constraint colocation add g-NW2_AERS with g-NW2_ASCS -5000
+ sudo pcs constraint order start g-NW2_ASCS then start g-NW2_AERS kind=Optional symmetrical=false
+ sudo pcs constraint order start g-NW2_ASCS then stop g-NW2_AERS kind=Optional symmetrical=false
- ```cmd
+ sudo pcs resource create rsc_sap_NW3_ASCS20 SAPInstance \
+ InstanceName=NW3_ASCS20_msnw3ascs START_PROFILE="/sapmnt/NW3/profile/NW3_ASCS20_msnw3ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000 \
+ op monitor interval=20 on-fail=restart timeout=60 \
+ op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW3_ASCS
+
+ sudo pcs resource meta g-NW3_ASCS resource-stickiness=3000
+
+ sudo pcs resource create rsc_sap_NW3_ERS22 SAPInstance \
+   InstanceName=NW3_ERS22_msnw3ers START_PROFILE="/sapmnt/NW3/profile/NW3_ERS22_msnw3ers" \
+ AUTOMATIC_RECOVER=false IS_ERS=true \
+ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \
+ --group g-NW3_AERS
+
+ sudo pcs resource meta rsc_sap_NW3_ERS22 resource-stickiness=3000
+
+ sudo pcs constraint colocation add g-NW3_AERS with g-NW3_ASCS -5000
+ sudo pcs constraint order start g-NW3_ASCS then start g-NW3_AERS kind=Optional symmetrical=false
+ sudo pcs constraint order start g-NW3_ASCS then stop g-NW3_AERS kind=Optional symmetrical=false
+
+ sudo pcs property set maintenance-mode=false
+ ```
+
+
+
+ If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+
+ > [!NOTE]
+ > The timeouts in the above configuration are just examples and might need to be adapted to the specific SAP setup.
+
+   Make sure that the cluster status is OK and that all resources are started. It doesn't matter which node the resources are running on.
+   The following example shows the cluster resource status after SAP systems `NW2` and `NW3` were added to the cluster.
+
+ ```bash
sudo pcs status
- Online: [ rhelmsscl1 rhelmsscl2 ]
-
- Full list of resources:
-
- rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1
- Resource Group: g-NW1_ASCS
- fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW1_AERS
- fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
- vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
- nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
- rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
- Resource Group: g-NW2_ASCS
- fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW2_AERS
- fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW3_ASCS
- fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
- Resource Group: g-NW3_AERS
- fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
- vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
- nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
- rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ # Online: [ rhelmsscl1 rhelmsscl2 ]
+
+ # Full list of resources:
+
+ # rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1
+ # Resource Group: g-NW1_ASCS
+ # fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ # vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ # nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ # rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ # Resource Group: g-NW1_AERS
+ # fs_NW1_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl2
+ # vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl2
+ # nc_NW1_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl2
+ # rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started rhelmsscl2
+ # Resource Group: g-NW2_ASCS
+ # fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ # vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ # nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ # rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ # Resource Group: g-NW2_AERS
+ # fs_NW2_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ # vip_NW2_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ # nc_NW2_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ # rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ # Resource Group: g-NW3_ASCS
+ # fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ # vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ # nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ # rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
+ # Resource Group: g-NW3_AERS
+ # fs_NW3_AERS (ocf::heartbeat:Filesystem): Started rhelmsscl1
+ # vip_NW3_AERS (ocf::heartbeat:IPaddr2): Started rhelmsscl1
+ # nc_NW3_AERS (ocf::heartbeat:azure-lb): Started rhelmsscl1
+ # rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started rhelmsscl1
``` 8. **[A]** Add firewall rules for ASCS and ERS on both nodes. The example below shows the firewall rules for both SAP systems `NW2` and `NW3`.
- ```cmd
- # NW2 - ASCS
- sudo firewall-cmd --zone=public --add-port=62010/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62010/tcp
- sudo firewall-cmd --zone=public --add-port=3210/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3210/tcp
- sudo firewall-cmd --zone=public --add-port=3610/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3610/tcp
- sudo firewall-cmd --zone=public --add-port=3910/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3910/tcp
- sudo firewall-cmd --zone=public --add-port=8110/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=8110/tcp
- sudo firewall-cmd --zone=public --add-port=51013/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51013/tcp
- sudo firewall-cmd --zone=public --add-port=51014/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51014/tcp
- sudo firewall-cmd --zone=public --add-port=51016/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51016/tcp
+ ```bash
+   # NW2 - ASCS
+ sudo firewall-cmd --zone=public --add-port={62010,3210,3610,3910,8110,51013,51014,51016}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62010,3210,3610,3910,8110,51013,51014,51016}/tcp
# NW2 - ERS
- sudo firewall-cmd --zone=public --add-port=62112/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62112/tcp
- sudo firewall-cmd --zone=public --add-port=3212/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3212/tcp
- sudo firewall-cmd --zone=public --add-port=3312/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3312/tcp
- sudo firewall-cmd --zone=public --add-port=51213/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51213/tcp
- sudo firewall-cmd --zone=public --add-port=51214/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51214/tcp
- sudo firewall-cmd --zone=public --add-port=51216/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=51216/tcp
+ sudo firewall-cmd --zone=public --add-port={62112,3212,3312,51213,51214,51216}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62112,3212,3312,51213,51214,51216}/tcp
# NW3 - ASCS
- sudo firewall-cmd --zone=public --add-port=62020/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62020/tcp
- sudo firewall-cmd --zone=public --add-port=3220/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3220/tcp
- sudo firewall-cmd --zone=public --add-port=3620/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3620/tcp
- sudo firewall-cmd --zone=public --add-port=3920/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3920/tcp
- sudo firewall-cmd --zone=public --add-port=8120/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=8120/tcp
- sudo firewall-cmd --zone=public --add-port=52013/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52013/tcp
- sudo firewall-cmd --zone=public --add-port=52014/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52014/tcp
- sudo firewall-cmd --zone=public --add-port=52016/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52016/tcp
+ sudo firewall-cmd --zone=public --add-port={62020,3220,3620,3920,8120,52013,52014,52016}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62020,3220,3620,3920,8120,52013,52014,52016}/tcp
# NW3 - ERS
- sudo firewall-cmd --zone=public --add-port=62122/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62122/tcp
- sudo firewall-cmd --zone=public --add-port=3222/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3222/tcp
- sudo firewall-cmd --zone=public --add-port=3322/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3322/tcp
- sudo firewall-cmd --zone=public --add-port=52213/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52213/tcp
- sudo firewall-cmd --zone=public --add-port=52214/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52214/tcp
- sudo firewall-cmd --zone=public --add-port=52216/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=52216/tcp
+ sudo firewall-cmd --zone=public --add-port={62122,3222,3322,52213,52214,52216}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62122,3222,3322,52213,52214,52216}/tcp
``` ### Proceed with the SAP installation
Always read the Red Hat best practices guides and perform all other tests that m
```cmd Online: [ rhelmsscl1 rhelmsscl2 ]-
+
Full list of resources:-
+
rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl1 Resource Group: g-NW1_ASCS fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl1
Always read the Red Hat best practices guides and perform all other tests that m
```cmd Full list of resources:-
+
rsc_st_azure (stonith:fence_azure_arm): Started rhelmsscl2 Resource Group: g-NW1_ASCS fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started rhelmsscl2
sap High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-netapp-files.md
Previously updated : 11/20/2023 Last updated : 01/18/2024
When you consider Azure NetApp Files for the SAP NetWeaver on RHEL HA architectu
* The Azure NetApp Files feature isn't zone aware yet. Currently, the Azure NetApp Files feature isn't deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions. * You can deploy Azure NetApp Files volumes as NFSv3 or NFSv4.1 volumes. Both protocols are supported for the SAP application layer (ASCS/ERS, SAP application servers).
-## Set up (A)SCS
+## Prepare the infrastructure
+
+Azure Marketplace contains images qualified for SAP with the High Availability add-on, which you can use to deploy new VMs by using various versions of Red Hat.
+
+## Deploy Linux VMs manually via the Azure portal
+
+This document assumes that you already deployed an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), subnet, and resource group.
+
+Deploy VMs for SAP ASCS, ERS, and the application servers. Choose a suitable RHEL image that's supported for the SAP system. You can deploy the VMs in any of the availability options: virtual machine scale set, availability zone, or availability set.
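As a hedged illustration only, the following Azure CLI sketch deploys one such VM without a public IP address. The resource group, VM, network, and availability set names, the image URN, and the VM size are placeholders, not values from this guide; list the currently available RHEL for SAP with HA images and pick one that's supported for your SAP release.

```bash
# List available RHEL for SAP with HA images (pick a current, supported SKU from the output).
az vm image list --publisher RedHat --offer RHEL-SAP-HA --all --output table

# Deploy one ASCS/ERS cluster node without a public IP address (all names below are placeholders).
az vm create \
  --resource-group MyResourceGroup \
  --name sap-ascs-vm1 \
  --image RedHat:RHEL-SAP-HA:84sapha-gen2:latest \
  --size Standard_D4s_v5 \
  --vnet-name MyVnet --subnet MySubnet \
  --public-ip-address "" \
  --availability-set MyAvSet \
  --admin-username azureuser --generate-ssh-keys
```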
+
+## Configure Azure load balancer
+
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to configure a standard load balancer for the high-availability setup of SAP ASCS and SAP ERS.
+
+#### [Azure portal](#tab/lb-portal)
-In this example, the resources were deployed manually via the [Azure portal](https://portal.azure.com/#home).
-
-### Deploy Azure Load Balancer via the Azure portal
-
-After you deploy the VMs for your SAP system, create a load balancer. Use VMs created for SAP ASCS/ERS instances in the back-end pool.
-
-1. Create the load balancer (internal, standard).
- 1. Create the front-end IP addresses.
- 1. IP address 192.168.14.9 for the ASCS:
- 1. Open the load balancer, select the front-end IP pool, and select **Add**.
- 1. Enter the name of the new front-end IP pool (for example, **frontend.QAS.ASCS**).
- 1. Set **Assignment** to **Static** and enter the IP address (for example, **192.168.14.9**).
- 1. Select **OK**.
- 1. IP address 192.168.14.10 for the ASCS ERS
- * Repeat the preceding steps under "a" to create an IP address for the ERS (for example, **192.168.14.10** and **frontend.QAS.ERS**).
- 1. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 1. Enter the name of the new back-end pool (for example, **backend.QAS**).
- 1. Select **NIC** for **Backend Pool Configuration**.
- 1. Select **Add a virtual machine**.
- 1. Select the VMs of the ASCS cluster.
- 1. Select **Add**.
- 1. Select **Save**.
- 1. Create the health probes.
- 1. Port 620**00** for ASCS:
- 1. Open the load balancer, select health probes, and select **Add**.
- 1. Enter the name of the new health probe (for example, **health.QAS.ASCS**).
- 1. Select **TCP** as the protocol, port 620**00**, and keep **Interval 5**.
- 1. Select **OK**.
- 1. Port 621**01** for ASCS ERS:
- * Repeat the preceding steps under "c" to create a health probe for the ERS (for example, 621**01** and **health.QAS.ERS**).
- 1. Create load-balancing rules.
- 1. Load-balancing rules for ASCS:
- 1. Open the load balancer, select load-balancing rules, and select **Add**.
- 1. Enter the name of the new load balancer rule (for example, **lb.QAS.ASCS**).
- 1. Select the front-end IP address for ASCS, the back-end pool, and the health probe you created earlier (for example, **frontend.QAS.ASCS**, **backend.QAS**, and **health.QAS.ASCS**).
- 1. Increase the idle timeout to **30 minutes**.
- 1. Select **HA ports**.
- 1. Make sure to enable **Floating IP**.
- 1. Select **OK**.
- * Repeat the preceding steps to create load balancing rules for ERS (for example, **lb.QAS.ERS**).
+
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
+++ > [!IMPORTANT] > Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need more IP addresses for the VM, deploy a second NIC.
-When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) standard load balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs by using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+> [!NOTE]
+> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) standard load balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs by using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT] > Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps could cause the health probes to fail. Set the parameter **net.ipv4.tcp_timestamps** to **0**. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
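If you script the setup instead of using the portal tab, the following Azure CLI sketch creates an equivalent internal Standard load balancer for this example (ASCS front end 192.168.14.9, ERS front end 192.168.14.10, probe ports 62000 and 62101). The resource group, load balancer, virtual network, and subnet names are placeholders, not values from this guide.

```bash
# Create an internal Standard load balancer with the ASCS front-end IP (names are placeholders).
az network lb create --resource-group MyResourceGroup --name QAS-lb --sku Standard \
  --vnet-name MyVnet --subnet MySubnet \
  --frontend-ip-name frontend.QAS.ASCS --private-ip-address 192.168.14.9 \
  --backend-pool-name backend.QAS

# Add a second front-end IP for ERS.
az network lb frontend-ip create --resource-group MyResourceGroup --lb-name QAS-lb \
  --name frontend.QAS.ERS --vnet-name MyVnet --subnet MySubnet --private-ip-address 192.168.14.10

# Health probes for ASCS (62000) and ERS (62101).
az network lb probe create --resource-group MyResourceGroup --lb-name QAS-lb \
  --name health.QAS.ASCS --protocol tcp --port 62000 --interval 5
az network lb probe create --resource-group MyResourceGroup --lb-name QAS-lb \
  --name health.QAS.ERS --protocol tcp --port 62101 --interval 5

# HA-ports load-balancing rules with floating IP and a 30-minute idle timeout.
az network lb rule create --resource-group MyResourceGroup --lb-name QAS-lb --name lb.QAS.ASCS \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name frontend.QAS.ASCS --backend-pool-name backend.QAS \
  --probe-name health.QAS.ASCS --floating-ip true --idle-timeout 30
az network lb rule create --resource-group MyResourceGroup --lb-name QAS-lb --name lb.QAS.ERS \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name frontend.QAS.ERS --backend-pool-name backend.QAS \
  --probe-name health.QAS.ERS --floating-ip true --idle-timeout 30
```

After you create the rules, add the NICs of both ASCS/ERS cluster VMs to the back-end pool, for example with `az network nic ip-config address-pool add`.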
The following **[A]** prefix applies to both PAS and AAS.
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf ```
+## Set up (A)SCS
+
+Next, you'll prepare and install the SAP ASCS and ERS instances.
+ ### Create a Pacemaker cluster Follow the steps in [Set up Pacemaker on Red Hat Enterprise Linux in Azure](high-availability-guide-rhel-pacemaker.md) to create a basic Pacemaker cluster for this (A)SCS server.
The following items are prefixed with either:
192.168.14.10 anftstsapers ```
-1. **[1]** Create SAP directories in the Azure NetApp Files volume.
+2. **[1]** Create SAP directories in the Azure NetApp Files volume.
Mount the Azure NetApp Files volume temporarily on one of the VMs and create the SAP directories (file paths). ```bash
The following items are prefixed with either:
sudo rmdir /saptmp ```
-1. **[A]** Create the shared directories.
+3. **[A]** Create the shared directories.
```bash sudo mkdir -p /sapmnt/QAS
The following items are prefixed with either:
sudo chattr +i /usr/sap/QAS/ERS01 ```
-1. **[A]** Install the NFS client and other requirements.
+4. **[A]** Install the NFS client and other requirements.
```bash sudo yum -y install nfs-utils resource-agents resource-agents-sap ```
-1. **[A]** Check the version of `resource-agents-sap`.
+5. **[A]** Check the version of `resource-agents-sap`.
Make sure that the version of the installed `resource-agents-sap` package is at least `3.9.5-124.el7`.
The following items are prefixed with either:
# : environment. ```
-1. **[A]** Add mount entries.
+6. **[A]** Add mount entries.
If you use NFSv3:
The following items are prefixed with either:
sudo mount -a ```
-1. **[A]** Configure the SWAP file.
+7. **[A]** Configure the SWAP file.
```bash sudo vi /etc/waagent.conf
The following items are prefixed with either:
sudo service waagent restart ```
-1. **[A]** Perform RHEL OS configuration.
+8. **[A]** Perform RHEL OS configuration.
   Based on the RHEL version, perform the configuration mentioned in SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167), [2772999](https://launchpad.support.sap.com/#/notes/2772999), or [3108316](https://launchpad.support.sap.com/#/notes/3108316).
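As one illustration of the kind of setting those SAP Notes describe, RHEL provides SAP tuned profiles. Treat the following as a sketch and confirm the exact packages and steps against the note for your RHEL release.

```bash
# Install the SAP tuned profiles and activate the NetWeaver profile (verify against the SAP Note for your release).
sudo yum -y install tuned-profiles-sap
sudo tuned-adm profile sap-netweaver

# Confirm which profile is active.
sudo tuned-adm active
```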
The following items are prefixed with either:
pcs resource defaults migration-threshold=3 ```
-1. **[1]** Create a virtual IP resource and health probe for the ASCS instance.
+2. **[1]** Create a virtual IP resource and health probe for the ASCS instance.
```bash sudo pcs node standby anftstsapcl2
The following items are prefixed with either:
# vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 ```
-1. **[1]** Install SAP NetWeaver ASCS.
+3. **[1]** Install SAP NetWeaver ASCS.
Install SAP NetWeaver ASCS as the root on the first node by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the ASCS, for example, **anftstsapvh**, **192.168.14.9**, and the instance number that you used for the probe of the load balancer, for example, **00**.
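A minimal sketch of how such an installation is typically started with the virtual hostname. The SWPM extraction path is an assumption, not a value from this guide; use the location where you extracted your Software Provisioning Manager media.

```bash
# Start SWPM with the ASCS virtual hostname so the instance binds to the load balancer front end.
# /usr/sap/install/SWPM is an assumed extraction path.
cd /usr/sap/install/SWPM
sudo ./sapinst SAPINST_USE_HOSTNAME=anftstsapvh
```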
The following items are prefixed with either:
sudo chgrp sapsys /usr/sap/QAS/ASCS00 ```
-1. **[1]** Create a virtual IP resource and health probe for the ERS instance.
+4. **[1]** Create a virtual IP resource and health probe for the ERS instance.
```bash sudo pcs node unstandby anftstsapcl2
The following items are prefixed with either:
# vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 ```
-1. **[2]** Install SAP NetWeaver ERS.
+5. **[2]** Install SAP NetWeaver ERS.
Install SAP NetWeaver ERS as the root on the second node by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the ERS, for example, **anftstsapers**, **192.168.14.10**, and the instance number that you used for the probe of the load balancer, for example, **01**.
The following items are prefixed with either:
sudo chgrp sapsys /usr/sap/QAS/ERS01 ```
-1. **[1]** Adapt the ASCS/SCS and ERS instance profiles.
+6. **[1]** Adapt the ASCS/SCS and ERS instance profiles.
* ASCS/SCS profile
The following items are prefixed with either:
# Autostart = 1 ```
-1. **[A]** Configure Keep Alive.
+7. **[A]** Configure Keep Alive.
The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you use ENSA1, and change the Linux system `keepalive` settings on all SAP servers for both ENSA1/ENSA2. For more information, see [SAP Note 1410736][1410736].
The following items are prefixed with either:
sudo sysctl net.ipv4.tcp_keepalive_time=300 ```
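The `sysctl` call above changes the setting only for the running system. A minimal sketch to persist it across reboots, assuming a drop-in file name of your choice:

```bash
# Persist the keepalive setting so it survives a reboot (the file name is an assumption).
echo 'net.ipv4.tcp_keepalive_time = 300' | sudo tee /etc/sysctl.d/99-sap-keepalive.conf

# Reload all sysctl configuration files.
sudo sysctl --system
```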
-1. **[A]** Update the `/usr/sap/sapservices` file.
+8. **[A]** Update the `/usr/sap/sapservices` file.
To prevent the start of the instances by the `sapinit` startup script, all instances managed by Pacemaker must be commented out from the `/usr/sap/sapservices` file.
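A sketch of one way to comment out the entries for this guide's example instances (`ASCS00` and `ERS01` of `QAS`); adjust the patterns to your SID and instance numbers.

```bash
# Comment out the ASCS00 and ERS01 entries so that sapinit no longer starts them; Pacemaker manages them instead.
sudo sed -i -e 's|^\([^#].*/usr/sap/QAS/ASCS00/.*\)$|# \1|' \
            -e 's|^\([^#].*/usr/sap/QAS/ERS01/.*\)$|# \1|' /usr/sap/sapservices

# Verify that both entries are now commented out.
grep -E 'ASCS00|ERS01' /usr/sap/sapservices
```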
The following items are prefixed with either:
> With the systemd based SAP Startup Framework, SAP instances can now be managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL) version is RHEL 8 for SAP. As described in SAP Note [3115048](https://me.sap.com/notes/3115048), a fresh installation of a SAP kernel with integrated systemd based SAP Startup Framework support will always result in a systemd controlled SAP instance. After an SAP kernel upgrade of an existing SAP installation to a kernel which has systemd based SAP Startup Framework support, however, some manual steps have to be performed as documented in SAP Note [3115048](https://me.sap.com/notes/3115048) to convert the existing SAP startup environment to one which is systemd controlled. > > When utilizing Red Hat HA services for SAP (cluster configuration) to manage SAP application server instances such as SAP ASCS and SAP ERS, additional modifications will be necessary to ensure compatibility between the SAPInstance resource agent and the new systemd-based SAP startup framework. So once the SAP application server instances has been installed or switched to a systemd enabled SAP Kernel as per SAP Note [3115048](https://me.sap.com/notes/3115048), the steps mentioned in [Red Hat KBA 6884531](https://access.redhat.com/articles/6884531) must be completed successfully on all cluster nodes.
-
-2. **[1]** Create the SAP cluster resources.
- If you use enqueue server 1 architecture (ENSA1), define the resources as shown here:
+9. **[1]** Create the SAP cluster resources.
+
+   Depending on whether you're running an ENSA1 or ENSA2 system, select the respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+   If you use enqueue server 2 architecture ([ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html)), install the resource agent `resource-agents-sap-4.1.1-12.el7.x86_64` or newer, and define the resources as shown here:
+
+ #### [ENSA1](#tab/ensa1)
```bash sudo pcs property set maintenance-mode=true
The following items are prefixed with either:
sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000 sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1 sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false
-
+
sudo pcs node unstandby anftstsapcl1 sudo pcs property set maintenance-mode=false ```
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. For enqueue server 2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
-
- If you use enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install the resource agent as `resource-agents-sap-4.1.1-12.el7.x86_64` or newer and define the resources as shown here:
+ #### [ENSA2](#tab/ensa2)
```bash sudo pcs property set maintenance-mode=true
- # If using NFSv3
+ # If using NFSv3
sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \
The following items are prefixed with either:
sudo pcs property set maintenance-mode=false ```
+
+ If you're upgrading from an older version and switching to enqueue server 2, see SAP Note [2641322](https://launchpad.support.sap.com/#/notes/2641322). > [!NOTE]
The following items are prefixed with either:
# rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 ```
-3. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only as of pacemaker-2.0.4-6.el8 or higher).
+10. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only as of pacemaker-2.0.4-6.el8 or higher).
> [!NOTE] > If you have a two-node cluster, you have the option to configure the `priority-fencing-delay` cluster property. This property introduces more delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521).
The following items are prefixed with either:
sudo pcs property set priority-fencing-delay=15s ```
-4. **[A]** Add firewall rules for ASCS and ERS on both nodes.
+11. **[A]** Add firewall rules for ASCS and ERS on both nodes.
```bash # Probe Port of ASCS
- sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62000/tcp
- sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3200/tcp
- sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3600/tcp
- sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3900/tcp
- sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=8100/tcp
- sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50013/tcp
- sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50014/tcp
- sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50016/tcp
-
+ sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp
# Probe Port of ERS
- sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62101/tcp
- sudo firewall-cmd --zone=public --add-port=3201/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3201/tcp
- sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3301/tcp
- sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50113/tcp
- sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50114/tcp
- sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50116/tcp
+ sudo firewall-cmd --zone=public --add-port={62101,3201,3301,50113,50114,50116}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62101,3201,3301,50113,50114,50116}/tcp
``` ## SAP NetWeaver application server preparation
The following steps assume that you install the application server on a server d
The following items are prefixed with either: -- **[A]**: Applicable to both PAS and AAS-- **[P]**: Only applicable to PAS-- **[S]**: Only applicable to AAS
+* **[A]**: Applicable to both PAS and AAS
+* **[P]**: Only applicable to PAS
+* **[S]**: Only applicable to AAS
1. **[A]** Set up hostname resolution.+ You can either use a DNS server or modify the `/etc/hosts` file on all nodes. This example shows how to use the `/etc/hosts` file. Replace the IP address and the hostname in the following commands: ```bash
The following items are prefixed with either:
192.168.14.8 anftstsapa02 ```
-1. **[A]** Create the `sapmnt` directory.
+2. **[A]** Create the `sapmnt` directory.
```bash sudo mkdir -p /sapmnt/QAS
The following items are prefixed with either:
sudo chattr +i /usr/sap/trans ```
-1. **[A]** Install the NFS client and other requirements.
+3. **[A]** Install the NFS client and other requirements.
```bash sudo yum -y install nfs-utils uuidd ```
-1. **[A]** Add mount entries.
+4. **[A]** Add mount entries.
If you use NFSv3:
The following items are prefixed with either:
sudo mount -a ```
-1. **[P]** Create and mount the PAS directory.
+5. **[P]** Create and mount the PAS directory.
If you use NFSv3:
The following items are prefixed with either:
sudo mount -a ```
-1. **[S]** Create and mount the AAS directory.
+6. **[S]** Create and mount the AAS directory.
If you use NFSv3:
The following items are prefixed with either:
sudo mount -a ```
-1. **[A]** Configure the SWAP file.
+7. **[A]** Configure the SWAP file.
```bash sudo vi /etc/waagent.conf
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
Previously updated : 11/20/2023 Last updated : 01/18/2024
The example configurations and installation commands use the following instance
This diagram shows a typical SAP NetWeaver HA architecture. The `sapmnt` and `saptrans` file systems are deployed on NFS shares on Azure Files. The SAP central services are protected by a Pacemaker cluster. The clustered VMs are behind an instance of Azure Load Balancer. The NFS shares are mounted through private endpoints. :::image-end:::
-## Prepare infrastructure
+## Prepare the infrastructure
+
+Azure Marketplace contains images qualified for SAP with the High Availability add-on, which you can use to deploy new VMs by using various versions of Red Hat.
+
+## Deploy Linux VMs manually via the Azure portal
This document assumes that you already deployed an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), subnet, and resource group.
-1. Deploy your VMs. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload). You can deploy VMs in availability zones, if the Azure region supports zones, or in availability sets. If you need more IP addresses for your VMs, deploy and attach a second NIC. Don't add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../load-balancer/load-balancer-multivip-overview.md#limitations).
+Deploy VMs for SAP ASCS, ERS, and the application servers. Choose a suitable RHEL image that's supported for the SAP system. You can deploy the VMs in any of the availability options: virtual machine scale set, availability zone, or availability set.
+
+## Configure Azure load balancer
+
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to configure a standard load balancer for the high-availability setup of SAP ASCS and SAP ERS.
+
+#### [Azure portal](#tab/lb-portal)
-1. For your virtual IPs, deploy and configure an instance of [Load Balancer](../../load-balancer/load-balancer-overview.md). We recommend that you use a [Standard load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
- 1. Configure two front-end IPs. One is for ASCS (`10.90.90.10`) and one is for ERS (`10.90.90.9`).
- 1. Create a back-end pool and add both VMs, which will be part of the cluster.
- 1. Create the health probe for ASCS. The probe port is `62000`. Create the probe port for ERS. The ERS probe port is `62101`. When you configure the Pacemaker resources later on, you must use matching probe ports.
- 1. Configure the load-balancing rules for ASCS and ERS. Select the corresponding front IPs, health probes, and the back-end pool. Select HA ports, increase the idle timeout to 30 minutes, and enable floating IP.
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
++++
+> [!IMPORTANT]
+> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
+
+> [!NOTE]
+> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard instance of Load Balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+
+> [!IMPORTANT]
+> Don't enable TCP timestamps on Azure VMs placed behind Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
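If you script the load balancer with the Azure CLI instead of the portal, remember to put both cluster VMs into the back-end pool. A minimal sketch follows; the resource group, NIC names (`sap-cl1-nic`, `sap-cl2-nic`), ip-config name (`ipconfig1`), load balancer name, and pool name are placeholders, not values from this guide.

```bash
# Add the primary IP configuration of each cluster VM's NIC to the load balancer back-end pool.
for nic in sap-cl1-nic sap-cl2-nic; do
  az network nic ip-config address-pool add \
    --resource-group MyResourceGroup \
    --nic-name "$nic" \
    --ip-config-name ipconfig1 \
    --lb-name NW1-lb \
    --address-pool backend.NW1
done
```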
### Deploy Azure Files storage account and NFS shares
When you plan your deployment with NFS on Azure Files, consider the following im
## Set up (A)SCS
-In this example, you deploy the resources manually through the [Azure portal](https://portal.azure.com/#home).
-
-### Deploy Azure Load Balancer via the Azure portal
-
-After you deploy the VMs for your SAP system, create a load balancer. Then, use the VMs in the back-end pool.
-
-1. Create an internal, Standard instance of Load Balancer.
- 1. Create the front-end IP addresses.
- 1. IP address 10.90.90.10 for the ASCS:
- 1. Open the load balancer, select the front-end IP pool, and select **Add**.
- 1. Enter the name of the new front-end IP pool (for example, **frontend.NW1.ASCS**).
- 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.90.90.10**).
- 1. Select **OK**.
- 1. IP address 10.90.90.9 for the ASCS ERS:
- * Repeat the preceding steps under "a" to create an IP address for the ERS (for example, **10.90.90.9** and **frontend.NW1.ERS**).
- 1. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 1. Enter the name of the new back-end pool (for example, **backend.NW1**).
- 1. Select **NIC** for **Backend Pool Configuration**.
- 1. Select **Add a virtual machine**.
- 1. Select the VMs of the ASCS cluster.
- 1. Select **Add**.
- 1. Select **Save**.
- 1. Create the health probes.
- 1. Port 620**00** for ASCS:
- 1. Open the load balancer, select **Health probes**, and select **Add**.
- 1. Enter the name of the new health probe (for example, **health.NW1.ASCS**).
- 1. Select **TCP** as the protocol and the port 620**00** and keep **Interval 5**.
- 1. Select **OK**.
- 1. Port 621**01** for ASCS ERS:
- * Repeat the preceding steps under "c" to create a health probe for the ERS (for example, 621**01** and **health.NW1.ERS**).
- 1. Create load-balancing rules.
- 1. Create a back-end pool for the ASCS:
- 1. Open the load balancer, select **Load-balancing rules**, and select **Add**.
- 1. Enter the name of the new load balancer rule (for example, **lb.NW1.ASCS**).
- 1. Select the front-end IP address for ASCS, the back-end pool, and the health probe you created earlier (for example, **frontend.NW1.ASCS**, **backend.NW1**, and **health.NW1.ASCS**).
- 1. Select **HA ports**.
- 1. Increase the idle timeout to **30 minutes**.
- 1. Make sure to enable **Floating IP**.
- 1. Select **OK**.
- * Repeat the preceding steps to create load-balancing rules for ERS (for example, **lb.NW1.ERS**).
-
-> [!IMPORTANT]
-> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
-
-> [!NOTE]
-> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard instance of Load Balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
-
-> [!IMPORTANT]
-> Don't enable TCP timestamps on Azure VMs placed behind Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter **net.ipv4.tcp_timestamps** to **0**. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+Next, you'll prepare and install the SAP ASCS and ERS instances.
### Create a Pacemaker cluster
Follow the steps in [Set up Pacemaker on Red Hat Enterprise Linux in Azure](high
The following items are prefixed with: -- **[A]**: Applicable to all nodes-- **[1]**: Only applicable to node 1-- **[2]**: Only applicable to node 2
+* **[A]**: Applicable to all nodes
+* **[1]**: Only applicable to node 1
+* **[2]**: Only applicable to node 2
1. **[A]** Set up hostname resolution.
The following items are prefixed with:
10.90.90.9 sapers ```
-1. **[A]** Install the NFS client and other requirements.
+2. **[A]** Install the NFS client and other requirements.
```bash sudo yum -y install nfs-utils resource-agents resource-agents-sap ```
-1. **[1]** Create the SAP directories on the NFS share.
+3. **[1]** Create the SAP directories on the NFS share.
Mount the NFS share **sapnw1** temporarily on one of the VMs, and create the SAP directories that will be used as nested mount points. ```bash
The following items are prefixed with:
sudo rmdir /saptmp ```
-1. **[A]** Create the shared directories.
+4. **[A]** Create the shared directories.
```bash sudo mkdir -p /sapmnt/NW1
The following items are prefixed with:
sudo chattr +i /usr/sap/NW1/ERS01 ```
-1. **[A]** Check the version of `resource-agents-sap`.
+5. **[A]** Check the version of `resource-agents-sap`.
Make sure that the version of the installed `resource-agents-sap` package is at least `3.9.5-124.el7`.
The following items are prefixed with:
sudo yum info resource-agents-sap ```
-1. **[A]** Add mount entries.
+6. **[A]** Add mount entries.
```bash vi /etc/fstab
The following items are prefixed with:
mount -a ```
-1. **[A]** Configure the SWAP file.
+7. **[A]** Configure the SWAP file.
```bash sudo vi /etc/waagent.conf
The following items are prefixed with:
sudo service waagent restart ```
-1. **[A]** Configure RHEL.
+8. **[A]** Configure RHEL.
Configure RHEL as described in SAP Note [2002167] for RHEL 7.x, SAP Note [2772999] for RHEL 8.x, or SAP Note [3108316] for RHEL 9.x.
The following items are prefixed with:
pcs resource defaults update migration-threshold=3 ```
-1. **[1]** Create a virtual IP resource and health probe for the ASCS instance.
+2. **[1]** Create a virtual IP resource and health probe for the ASCS instance.
```bash sudo pcs node standby sap-cl2
The following items are prefixed with:
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1 ```
-1. **[1]** Install SAP NetWeaver ASCS.
+3. **[1]** Install SAP NetWeaver ASCS.
Install SAP NetWeaver ASCS as the root on the first node by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the ASCS, for example, **sapascs** and **10.90.90.10**, and the instance number that you used for the probe of the load balancer, for example, **00**.
The following items are prefixed with:
sudo chgrp sapsys /usr/sap/NW1/ASCS00 ```
-1. **[1]** Create a virtual IP resource and health probe for the ERS instance.
+4. **[1]** Create a virtual IP resource and health probe for the ERS instance.
```bash sudo pcs node unstandby sap-cl2
The following items are prefixed with:
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-cl2 ```
-1. **[2]** Install SAP NetWeaver ERS.
+5. **[2]** Install SAP NetWeaver ERS.
Install SAP NetWeaver ERS as the root on the second node by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the ERS, for example, **sapers** and **10.90.90.9**, and the instance number that you used for the probe of the load balancer, for example, **01**.
The following items are prefixed with:
sudo chgrp sapsys /usr/sap/NW1/ERS01 ```
-1. **[1]** Adapt the ASCS/SCS and ERS instance profiles.
+6. **[1]** Adapt the ASCS/SCS and ERS instance profiles.
* ASCS/SCS profile:
The following items are prefixed with:
# Autostart = 1 ```
-1. **[A]** Configure Keep Alive.
+7. **[A]** Configure Keep Alive.
The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you're using ENSA1. Change the Linux system `keepalive` settings on all SAP servers for both ENSA1 and ENSA2. For more information, see SAP Note [1410736][1410736].
The following items are prefixed with:
sudo sysctl net.ipv4.tcp_keepalive_time=300 ```
-1. **[A]** Update the `/usr/sap/sapservices` file.
+8. **[A]** Update the `/usr/sap/sapservices` file.
To prevent the start of the instances by the `sapinit` startup script, all instances managed by Pacemaker must be commented out from the `/usr/sap/sapservices` file.
The following items are prefixed with:
> > When utilizing Red Hat HA services for SAP (cluster configuration) to manage SAP application server instances such as SAP ASCS and SAP ERS, additional modifications will be necessary to ensure compatibility between the SAPInstance resource agent and the new systemd-based SAP startup framework. So once the SAP application server instances has been installed or switched to a systemd enabled SAP Kernel as per SAP Note [3115048](https://me.sap.com/notes/3115048), the steps mentioned in [Red Hat KBA 6884531](https://access.redhat.com/articles/6884531) must be completed successfully on all cluster nodes.
-2. **[1]** Create the SAP cluster resources.
+9. **[1]** Create the SAP cluster resources.
- If you use enqueue server 1 architecture (ENSA1), define the resources as shown here:
+   Depending on whether you're running an ENSA1 or ENSA2 system, select the respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+   If you use enqueue server 2 architecture ([ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html)), install the resource agent `resource-agents-sap-4.1.1-12.el7.x86_64` or newer, and define the resources as shown here:
+
+ #### [ENSA1](#tab/ensa1)
```bash sudo pcs property set maintenance-mode=true
The following items are prefixed with:
sudo pcs property set maintenance-mode=false ```
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
- If you use enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as shown here:
+ #### [ENSA2](#tab/ensa2)
```bash sudo pcs property set maintenance-mode=true
The following items are prefixed with:
sudo pcs property set maintenance-mode=false ```
+
+ If you're upgrading from an older version and switching to enqueue server 2, see SAP Note [2641322](https://launchpad.support.sap.com/#/notes/2641322). > [!NOTE]
The following items are prefixed with:
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1 ```
-3. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only as of pacemaker-2.0.4-6.el8 or higher).
+10. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only as of pacemaker-2.0.4-6.el8 or higher).
> [!NOTE] > If you have a two-node cluster, you have the option to configure the `priority-fencing-delay` cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521).
The following items are prefixed with:
sudo pcs property set priority-fencing-delay=15s ```
-4. **[A]** Add firewall rules for ASCS and ERS on both nodes.
+11. **[A]** Add firewall rules for ASCS and ERS on both nodes.
```bash # Probe Port of ASCS
- sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62000/tcp
- sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3200/tcp
- sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3600/tcp
- sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3900/tcp
- sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=8100/tcp
- sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50013/tcp
- sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50014/tcp
- sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50016/tcp
+ sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp
# Probe Port of ERS
- sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62101/tcp
- sudo firewall-cmd --zone=public --add-port=3201/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3201/tcp
- sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3301/tcp
- sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50113/tcp
- sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50114/tcp
- sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50116/tcp
+ sudo firewall-cmd --zone=public --add-port={62101,3201,3301,50113,50114,50116}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62101,3201,3301,50113,50114,50116}/tcp
``` ## SAP NetWeaver application server preparation
- Some databases require that the database instance installation runs on an application server. Prepare the application server VMs to be able to use them in these cases.
+Some databases require that the database instance installation runs on an application server. Prepare the application server VMs to be able to use them in these cases.
- The following steps assume that you install the application server on a server different from the ASCS/SCS and HANA servers. Otherwise, some of the steps (like configuring hostname resolution) aren't needed.
+The following steps assume that you install the application server on a server different from the ASCS/SCS and HANA servers. Otherwise, some of the steps (like configuring hostname resolution) aren't needed.
- The following items are prefixed with:
+The following items are prefixed with:
- - **[A]**: Applicable to both PAS and AAS
- - **[P]**: Only applicable to PAS
- - **[S]**: Only applicable to AAS
+* **[A]**: Applicable to both PAS and AAS
+* **[P]**: Only applicable to PAS
+* **[S]**: Only applicable to AAS
1. **[A]** Set up hostname resolution. You can either use a DNS server or modify the `/etc/hosts` file on all nodes. This example shows how to use the `/etc/hosts` file. Replace the IP address and the hostname in the following commands:
sap High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md
Previously updated : 06/21/2023 Last updated : 01/18/2024
SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA datab
SAP NetWeaver requires shared storage for the transport and profile directory. To see how to set up GlusterFS for SAP NetWeaver, see [GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver][glusterfs-ha].
-## Set up (A)SCS
+## Prepare the infrastructure
+
+Azure Marketplace contains images qualified for SAP with the High Availability add-on, which you can use to deploy new VMs by using various versions of Red Hat.
+
+### Deploy Linux VMs manually via the Azure portal
+
+This document assumes that you already deployed an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), subnet, and resource group.
+
+Deploy VMs for SAP ASCS, ERS, and the application servers. Choose a suitable RHEL image that's supported for the SAP system. You can deploy the VMs in any of the availability options: virtual machine scale set, availability zone, or availability set.
+
+### Configure Azure load balancer
+
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to configure a standard load balancer for the high-availability setup of SAP ASCS and SAP ERS.
+
+#### [Azure portal](#tab/lb-portal)
++
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
-In this example, the resources were deployed manually via the [Azure portal](https://portal.azure.com/#home).
-
-### Deploy Linux manually via the Azure portal
-
-This document assumes that you already deployed a resource group, an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), and a subnet.
-
-Deploy VMs for SAP ASCS, ERS, and application server instances. Choose a suitable RHEL image that's supported with your SAP system. You can deploy VM in any one of the availability options: scale set, availability zone, or availability set.
-
-### Deploy Azure Load Balancer via the Azure portal
-
-After you deploy the VMs for your SAP system, create a load balancer. Use VMs created for SAP ASCS/ERS instances in the back-end pool.
-
-1. Create a load balancer (internal, standard).
- 1. Create the front-end IP addresses.
- 1. IP address 10.0.0.7 for the ASCS:
- 1. Open the load balancer, select the front-end IP pool, and select **Add**.
- 1. Enter the name of the new front-end IP pool (for example, **nw1-ascs-frontend**).
- 1. Set **Assignment** to **Static** and enter the IP address (for example, **10.0.0.7**).
- 1. Select **OK**.
- 1. IP address 10.0.0.8 for the ASCS ERS:
- * Repeat the preceding steps to create an IP address for the ERS (for example, **10.0.0.8** and **nw1-aers-frontend**).
- 1. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 1. Enter the name of the new back-end pool (for example, **nw1-backend**).
- 1. Select **NIC** for **Backend Pool Configuration**.
- 1. Select **Add a virtual machine**.
- 1. Select the VMs of the ASCS cluster.
- 1. Select **Add**.
- 1. Select **Save**.
- 1. Create the health probes.
- 1. Port 620**00** for ASCS:
- 1. Open the load balancer, select health probes, and select **Add**.
- 1. Enter the name of the new health probe (for example, **nw1-ascs-hp**).
- 1. Select **TCP** as the protocol, port 620**00**, and keep **Interval 5**.
- 1. Select **OK**.
- 1. Port 621**02** for ASCS ERS:
- * Repeat the preceding steps to create a health probe for the ERS (for example, 621**02** and **nw1-aers-hp**).
- 1. Create load-balancing rules for ASCS:
- 1. Open the load balancer, select load-balancing rules, and select **Add**.
- 1. Enter the name of the new load balancer rule (for example, **nw1-lb-ascs**).
- 1. Select the front-end IP address, back-end pool, and health probe you created earlier (for example, **nw1-ascs-frontend**, **nw1-backend**, and **nw1-ascs-hp**).
- 1. Increase the idle timeout to **30 minutes**.
- 1. Select **HA ports**.
- 1. Make sure to enable **Floating IP**.
- 1. Select **OK**.
- * Repeat the preceding steps to create load-balancing rules for ERS (for example, **nw1-lb-ers**).
++ > [!IMPORTANT] > Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
After you deploy the VMs for your SAP system, create a load balancer. Use VMs cr
> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT]
-> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter **net.ipv4.tcp_timestamps** to **0**. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
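For example, a minimal sketch of disabling TCP timestamps immediately and across reboots (the drop-in file name is only an example):

```bash
# Sketch: disable TCP timestamps now and persistently (file name is an example)
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/91-sap-loadbalancer.conf
sudo sysctl -p /etc/sysctl.d/91-sap-loadbalancer.conf
```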
+
+## Set up (A)SCS
+
+Next, you'll prepare and install the SAP ASCS and ERS instances.
### Create a Pacemaker cluster
Follow the steps in [Set up Pacemaker on Red Hat Enterprise Linux in Azure](high
The following items are prefixed with: -- **[A]**: Applicable to all nodes-- **[1]**: Only applicable to node 1-- **[2]**: Only applicable to node 2
+* **[A]**: Applicable to all nodes
+* **[1]**: Only applicable to node 1
+* **[2]**: Only applicable to node 2
1. **[A]** Set up hostname resolution.
The following items are prefixed with:
10.0.0.8 nw1-aers ```
-1. **[A]** Create the shared directories.
+2. **[A]** Create the shared directories.
```bash sudo mkdir -p /sapmnt/NW1
The following items are prefixed with:
sudo chattr +i /usr/sap/NW1/ERS02 ```
-1. **[A]** Install the GlusterFS client and other required packages.
+3. **[A]** Install the GlusterFS client and other required packages.
```bash sudo yum -y install glusterfs-fuse resource-agents resource-agents-sap ```
-1. **[A]** Check the version of `resource-agents-sap`.
+4. **[A]** Check the version of `resource-agents-sap`.
Make sure that the version of the installed `resource-agents-sap` package is at least 3.9.5-124.el7.
The following items are prefixed with:
# : environment. ```
-1. **[A]** Add mount entries.
+5. **[A]** Add mount entries.
```bash sudo vi /etc/fstab
The following items are prefixed with:
sudo mount -a ```
-1. **[A]** Configure the SWAP file.
+6. **[A]** Configure the SWAP file.
```bash sudo vi /etc/waagent.conf
The following items are prefixed with:
sudo service waagent restart ```
-1. **[A]** Configure RHEL.
+7. **[A]** Configure RHEL.
   Based on the RHEL version, perform the configuration mentioned in SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167), SAP Note [2772999](https://launchpad.support.sap.com/#/notes/2772999), or SAP Note [3108316](https://launchpad.support.sap.com/#/notes/3108316).
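   For example, one step that these notes typically call for on RHEL is activating the SAP tuned profile. A hedged sketch follows; verify the exact steps against the SAP Note for your release:

   ```bash
   # Sketch only: install and activate the tuned profile recommended for SAP NetWeaver on RHEL
   sudo yum -y install tuned-profiles-sap
   sudo tuned-adm profile sap-netweaver
   sudo tuned-adm active
   ```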
The following items are prefixed with:
pcs resource defaults migration-threshold=3 ```
-1. **[1]** Create a virtual IP resource and health probe for the ASCS instance.
+2. **[1]** Create a virtual IP resource and health probe for the ASCS instance.
```bash sudo pcs node standby nw1-cl-1
The following items are prefixed with:
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0 ```
-1. **[1]** Install SAP NetWeaver ASCS.
+3. **[1]** Install SAP NetWeaver ASCS.
Install SAP NetWeaver ASCS as the root on the first node by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the ASCS, for example, **nw1-ascs** and **10.0.0.7**, and the instance number that you used for the probe of the load balancer, for example, **00**.
The following items are prefixed with:
sudo chgrp sapsys /usr/sap/NW1/ASCS00 ```
-1. **[1]** Create a virtual IP resource and health probe for the ERS instance.
+4. **[1]** Create a virtual IP resource and health probe for the ERS instance.
```bash sudo pcs node unstandby nw1-cl-1
The following items are prefixed with:
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1 ```
-1. **[2]** Install SAP NetWeaver ERS.
+5. **[2]** Install SAP NetWeaver ERS.
Install SAP NetWeaver ERS as the root on the second node by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the ERS, for example, **nw1-aers** and **10.0.0.8**, and the instance number that you used for the probe of the load balancer, for example, **02**.
The following items are prefixed with:
sudo chgrp sapsys /usr/sap/NW1/ERS02 ```
-1. **[1]** Adapt the ASCS/SCS and ERS instance profiles.
+6. **[1]** Adapt the ASCS/SCS and ERS instance profiles.
* ASCS/SCS profile:
The following items are prefixed with:
# Autostart = 1 ```
-1. **[A]** Configure Keep Alive.
+7. **[A]** Configure Keep Alive.
The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you're using ENSA1. Change the Linux system `keepalive` settings on all SAP servers for both ENSA1 and ENSA2. For more information, see SAP Note [1410736][1410736].
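   For ENSA1, the ASCS/SCS profile parameter in question is typically the following. This is a sketch only; confirm the parameter name and value against SAP Note 1410736 for your release:

   ```
   # ENSA1 only: in the SAP NetWeaver ASCS/SCS instance profile
   enque/encni/set_so_keepalive = true
   ```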
The following items are prefixed with:
sudo sysctl net.ipv4.tcp_keepalive_time=300 ```
-1. **[A]** Update the `/usr/sap/sapservices` file.
+8. **[A]** Update the `/usr/sap/sapservices` file.
To prevent the start of the instances by the `sapinit` startup script, all instances managed by Pacemaker must be commented out from the `/usr/sap/sapservices` file.
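   For example, a hedged sketch of commenting out both entries with `sed` (the instance paths follow the NW1/ASCS00 and NW1/ERS02 example used in this article):

   ```bash
   # Sketch: comment out the ASCS and ERS sapstartsrv entries that Pacemaker now manages
   sudo sed -i -e 's|^LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00|# &|' \
               -e 's|^LD_LIBRARY_PATH=/usr/sap/NW1/ERS02|# &|' /usr/sap/sapservices
   ```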
The following items are prefixed with:
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS02/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ERS02/exe/sapstartsrv pf=/usr/sap/NW1/ERS02/profile/NW1_ERS02_nw1-aers -D -u nw1adm ```
-1. **[1]** Create the SAP cluster resources.
+9. **[1]** Create the SAP cluster resources.
- If you use enqueue server 1 architecture (ENSA1), define the resources as shown here:
+   Depending on whether you are running an ENSA1 or ENSA2 system, select the respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
- ```bash
- sudo pcs property set maintenance-mode=true
+ If you use enqueue server 2 architecture ([ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as shown here:
+
+ #### [ENSA1](#tab/ensa1)
+
+ ```bash
+ sudo pcs property set maintenance-mode=true
- sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
+ sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
The following items are prefixed with:
op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-NW1_ASCS
- sudo pcs resource meta g-NW1_ASCS resource-stickiness=3000
+ sudo pcs resource meta g-NW1_ASCS resource-stickiness=3000
- sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \
+ sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \
InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \ AUTOMATIC_RECOVER=false IS_ERS=true \ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-NW1_AERS
- sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000
- sudo pcs constraint location rsc_sap_NW1_ASCS00 rule score=2000 runs_ers_NW1 eq 1
- sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS kind=Optional symmetrical=false
+ sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000
+ sudo pcs constraint location rsc_sap_NW1_ASCS00 rule score=2000 runs_ers_NW1 eq 1
+ sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS kind=Optional symmetrical=false
- sudo pcs node unstandby nw1-cl-0
- sudo pcs property set maintenance-mode=false
- ```
+ sudo pcs node unstandby nw1-cl-0
+ sudo pcs property set maintenance-mode=false
+ ```
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support. If you use enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as shown here:
+ #### [ENSA2](#tab/ensa2)
- ```bash
- sudo pcs property set maintenance-mode=true
+ ```bash
+ sudo pcs property set maintenance-mode=true
- sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
+ sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 \
The following items are prefixed with:
op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-NW1_ASCS
- sudo pcs resource meta g-NW1_ASCS resource-stickiness=3000
+ sudo pcs resource meta g-NW1_ASCS resource-stickiness=3000
- sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \
+ sudo pcs resource create rsc_sap_NW1_ERS02 SAPInstance \
InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" \ AUTOMATIC_RECOVER=false IS_ERS=true \ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-NW1_AERS
- sudo pcs resource meta rsc_sap_NW1_ERS02 resource-stickiness=3000
-
- sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000
- sudo pcs constraint order start g-NW1_ASCS then start g-NW1_AERS kind=Optional symmetrical=false
- sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS kind=Optional symmetrical=false
+ sudo pcs resource meta rsc_sap_NW1_ERS02 resource-stickiness=3000
- sudo pcs node unstandby nw1-cl-0
- sudo pcs property set maintenance-mode=false
- ```
+ sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000
+ sudo pcs constraint order start g-NW1_ASCS then start g-NW1_AERS kind=Optional symmetrical=false
+ sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS kind=Optional symmetrical=false
- > [!NOTE]
- > If you're upgrading from an older version and switching to enqueue server 2, see SAP Note [2641322](https://launchpad.support.sap.com/#/notes/2641322).
+ sudo pcs node unstandby nw1-cl-0
+ sudo pcs property set maintenance-mode=false
+ ```
- > [!NOTE]
- > The timeouts in the preceding configuration are only examples and might need to be adapted to the specific SAP setup.
+
- Make sure that the cluster status is okay and that all resources are started. Which node the resources are running on isn't important.
+ > [!NOTE]
+ > If you're upgrading from an older version and switching to enqueue server 2, see SAP Note [2641322](https://launchpad.support.sap.com/#/notes/2641322).
- ```bash
- sudo pcs status
+ > [!NOTE]
+ > The timeouts in the preceding configuration are only examples and might need to be adapted to the specific SAP setup.
+
+ Make sure that the cluster status is okay and that all resources are started. Which node the resources are running on isn't important.
+
+ ```bash
+ sudo pcs status
- # Online: [ nw1-cl-0 nw1-cl-1 ]
- #
- # Full list of resources:
- #
- # rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
- # Resource Group: g-NW1_ASCS
- # fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
- # nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
- # vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
- # rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
- # Resource Group: g-NW1_AERS
- # fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
- # nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
- # vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
- # rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
- ```
+ # Online: [ nw1-cl-0 nw1-cl-1 ]
+ #
+ # Full list of resources:
+ #
+ # rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0
+ # Resource Group: g-NW1_ASCS
+ # fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
+ # nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
+ # vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
+ # rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
+ # Resource Group: g-NW1_AERS
+ # fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
+ # nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
+ # vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
+ # rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
+ ```
-1. **[A]** Add firewall rules for ASCS and ERS on both nodes.
+10. **[A]** Add firewall rules for ASCS and ERS on both nodes.
```bash # Probe Port of ASCS
- sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62000/tcp
- sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3200/tcp
- sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3600/tcp
- sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3900/tcp
- sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=8100/tcp
- sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50013/tcp
- sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50014/tcp
- sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50016/tcp
-
+ sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62000,3200,3600,3900,8100,50013,50014,50016}/tcp
# Probe Port of ERS
- sudo firewall-cmd --zone=public --add-port=62102/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=62102/tcp
- sudo firewall-cmd --zone=public --add-port=3202/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3202/tcp
- sudo firewall-cmd --zone=public --add-port=3302/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=3302/tcp
- sudo firewall-cmd --zone=public --add-port=50213/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50213/tcp
- sudo firewall-cmd --zone=public --add-port=50214/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50214/tcp
- sudo firewall-cmd --zone=public --add-port=50216/tcp --permanent
- sudo firewall-cmd --zone=public --add-port=50216/tcp
+ sudo firewall-cmd --zone=public --add-port={62102,3202,3302,50213,50214,50216}/tcp --permanent
+ sudo firewall-cmd --zone=public --add-port={62102,3202,3302,50213,50214,50216}/tcp
``` ## SAP NetWeaver application server preparation
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
The [Defender product family](../../defender-for-cloud/defender-for-cloud-introd
- Defender for Servers (SAP hosts) - [Protect your SAP hosts with Defender](../../defender-for-cloud/defender-for-servers-introduction.md) including OS specific Endpoint protection with Microsoft Defender for Endpoint (MDE)
- - [Microsoft Defender for Endpoint on Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux)
+ - [Microsoft Defender for Endpoint on Linux](/microsoft-365/security/defender-endpoint/mde-linux-deployment-on-sap)
- [Microsoft Defender for Endpoint on Windows](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) - [Enable Defender for Servers](../../defender-for-cloud/tutorial-enable-servers-plan.md#enable-the-defender-for-servers-plan) - Defender for Storage (SAP SMB file shares on Azure)
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-output-field-mapping.md
- ignite-2023 Previously updated : 09/14/2022 Last updated : 01/18/2024 # Map enriched output to fields in a search index in Azure AI Search ![Indexer Stages](./media/cognitive-search-output-field-mapping/indexer-stages-output-field-mapping.png "indexer stages")
-This article explains how to set up *output field mappings* that determine a data path between in-memory data structures created during skill processing, and target fields in a search index. An output field mapping is defined in an [indexer](search-indexer-overview.md) and has the following elements:
+This article explains how to set up *output field mappings*, defining a data path between in-memory data structures created during [skillset processing](cognitive-search-concept-intro.md), and target fields in a search index. An output field mapping is defined in an [indexer](search-indexer-overview.md) and has the following elements:
```json "outputFieldMappings": [
This article explains how to set up *output field mappings* that determine a dat
], ```
-In contrast with a [`fieldMappings`](search-indexer-field-mappings.md) definition that maps a path between two physical data structures, an `outputFieldMappings` definition maps in-memory data to fields in a search index.
+In contrast with a [`fieldMappings`](search-indexer-field-mappings.md) definition that maps a path between two physical data structures, an `outputFieldMappings` definition maps in-memory enrichments to fields in a search index.
Output field mappings are required if your indexer has an attached [skillset](cognitive-search-working-with-skillsets.md) that creates new information, such as text translation or key phrase extraction. During indexer execution, AI-generated information exists in memory only. To persist this information in a search index, you'll need to tell the indexer where to send the data.
Output field mappings apply to:
+ In-memory content that's created by skills or extracted by an indexer. The source field is a node in an enriched document tree.
-+ Search indexes. If you're populating a [knowledge store](knowledge-store-concept-intro.md), use [projections](knowledge-store-projections-examples.md) for data path configuration.
++ Search indexes. If you're populating a [knowledge store](knowledge-store-concept-intro.md), use [projections](knowledge-store-projections-examples.md) for data path configuration. If you're populating a vector store, output field mappings aren't used. Output field mappings are applied after [skillset execution](cognitive-search-working-with-skillsets.md) or after document cracking if there's no associated skillset.
Output field mappings are added to the `outputFieldMappings` array in an indexer
|-|-| | sourceFieldName | Required. Specifies a path to enriched content. An example might be `/document/content`. See [Reference enrichments in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md) for path syntax and examples. | | targetFieldName | Optional. Specifies the search field that receives the enriched content. Target fields must be top-level simple fields or collections. It can't be a path to a subfield in a complex type. If you want to retrieve specific nodes in a complex structure, you can [flatten individual nodes](#flattening-information-from-complex-types) in memory, and then send the output to a string collection in your index. |
-| mappingFunction | Optional. Adds extra processing provided by [mapping functions](search-indexer-field-mappings.md#mappingFunctions) supported by indexers. In the case of enrichment nodes, encoding and decoding are the most commonly used functions. |
+| mappingFunction | Optional. Adds extra processing provided by [mapping functions](search-indexer-field-mappings.md#mappingFunctions) supported by indexers. For enrichment nodes, encoding and decoding are the most commonly used functions. |
You can use the REST API or an Azure SDK to define output field mappings.
api-key: [admin key]
} ```
-For each output field mapping, set the location of the data in the enriched document tree (sourceFieldName), and the name of the field as referenced in the index (targetFieldName). Assign any [mapping functions](search-indexer-field-mappings.md#mappingFunctions) that you require to transform the content of a field before it's stored in the index.
+For each output field mapping, set the location of the data in the enriched document tree (sourceFieldName), and the name of the field as referenced in the index (targetFieldName). Assign any [mapping functions](search-indexer-field-mappings.md#mappingFunctions) needed to transform the content of a field before it's stored in the index.
### [**.NET SDK (C#)**](#tab/csharp)
SearchIndexer indexer = new SearchIndexer(
await indexerClient.CreateIndexerAsync(indexer); ```
- -->
search Search Get Started Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rest.md
Title: 'Quickstart: Create a search index using REST APIs'
+ Title: 'Quickstart: search index (REST)'
-description: In this REST API quickstart, learn how to call the Azure AI Search REST APIs using Postman.
+description: In this quickstart, use Postman to call the Azure AI Search REST APIs to create, load, and query a search index.
zone_pivot_groups: URL-test-interface-rest-apis
ms.devlang: rest-api Previously updated : 01/27/2023 Last updated : 01/19/2024 - mode-api - ignite-2023
-# Quickstart: Create an Azure AI Search index using REST APIs
+# Quickstart: Create a search index in Azure AI Search using REST
-This article explains how to formulate requests interactively using the [Azure AI Search REST APIs](/rest/api/searchservice) and a REST client for sending and receiving requests.
+Learn how to use the [Search REST APIs](/rest/api/searchservice) to create, load, and query a search index in Azure AI Search.
-The article uses the Postman app. You can [download and import a Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart) if you prefer to use predefined requests.
+The article uses the Postman app. [Download and import a Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart) or create requests manually using the instructions in this article.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-+ [Postman app](https://www.postman.com/downloads/), used for sending requests to Azure AI Search.
++ [Postman app](https://www.postman.com/downloads/), used for sending REST requests to Azure AI Search.
-+ [Create an Azure AI Search service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
++ [Create](search-create-service-portal.md) or [find an existing Azure AI Search resource](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart. ## Copy a key and URL
-REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure AI Search to your subscription, follow these steps to get the necessary information:
+REST calls require the service endpoint and an API key on every request. You can get these values from the Azure portal.
-1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), navigate to the **Overview** page, and copy the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
-1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
+1. Under **Settings** > **Keys**, copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one.
- ![Get an HTTP endpoint and access key](media/search-get-started-rest/get-url-key.png "Get an HTTP endpoint and access key")
+ :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of the URL and API keys in the Azure portal.":::
-All requests require an api-key on every request sent to your service. Having a valid key establishes trust, on a per request basis, between the application sending the request and the service that handles it.
+A valid API key establishes trust, on a per request basis, between the application sending the request and the search service handling it.
-## Connect to Azure AI Search
+## Set collection variables
-Connection information is specified in the URI endpoint. Collection variables are used to represent the search service name and API keys. A typical URI in this quickstart looks like this:
+Postman collection variables, enclosed in double curly braces, let you reuse the same string in every request. This quickstart uses collection variables for customer-specific values, such as `{{service-name}}` in the URI or `{{admin-key}}` in the request header.
+
+A URI with multiple variables looks like this:
```http
-https://{{service-name}}.search.windows.net/indexes/hotels-quickstart?api-version=2020-06-30
+https://{{service-name}}.search.windows.net/indexes/{{index-name}}?api-version={{api-version}}
```
-Notice the HTTPS prefix, the name of the service (variable, the name of an object (in this case, the name of an index in the indexes collection), and the [api-version](search-api-versions.md). The api-version is a required.
-
-Request header composition includes two elements: `Content-Type` and the `api-key` used to authenticate to Azure AI Search. The `api-key` is specified as variable, and it's also required.
-
-For the requests to succeed, you'll need to provide the service name and api-key as collection variables.
-
-1. Open the Postman app and import the collection.
+A request header for Azure AI Search calls must have `Content-Type` set to `application/json` and an `api-key` set to an API key for your search service. In this quickstart, the `api-key` in the request header is specified as a variable.
-1. Select the collection's access menu, select **Edit**, and provide the service name and key of your search service.
+1. Open the Postman app and import the [sample collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart) or create a new one.
- :::image type="content" source="media/search-get-started-rest/postman-collection-variables.png" lightbox="media/search-get-started-rest/postman-collection-variables.png"alt-text="Screenshot of the Postman collection variable page." border="true":::
+1. Select the collection's access menu, select **Edit**, and provide the search service name and an admin API key.
-## 1 - Create an index
+ :::image type="content" source="media/search-get-started-rest/postman-collection-variables.png" lightbox="media/search-get-started-rest/postman-collection-variables.png" alt-text="Screenshot of the Postman collection variable page." border="true":::
-In Azure AI Search, you usually create the index before loading it with data. The [Create Index REST API](/rest/api/searchservice/create-index) is used for this task.
+## Create an index
-The URL is extended to include the `hotels-quickstart` index name.
+Use the [Create Index (REST)](/rest/api/searchservice/create-index) to specify a schema. The endpoint includes the `/indexes` collection and `hotels-quickstart` for the index name.
1. Set the verb to **PUT**.
-2. Copy in this URL `https://{{service-name}}.search.windows.net/indexes/hotels-quickstart?api-version=2020-06-30`.
+1. Copy in this URL `https://{{service-name}}.search.windows.net/indexes/hotels-quickstart?api-version=2023-11-01`.
-3. Provide the index definition (copy-ready code is provided next) in the body of the request.
+1. Under **Headers**, set `Content-Type` to `application/json` and set `api-key` to `{{admin-key}}`.
-4. Select **Send**.
+1. Under **Body**, paste in the index definition (copyable JSON is provided in the next section). Make sure the request body selection is **raw** and the type is set to **JSON**.
-![Index JSON document in request body](media/search-get-started-rest/postman-request.png "Index JSON document in request body")
+1. Select **Send**.
+
+ :::image type="content" source="media/search-get-started-rest/postman-request.png" lightbox="media/search-get-started-rest/postman-request.png" alt-text="Screenshot of the PUT create index request.":::
### Index definition
-The fields collection defines document structure. Each document must have these fields, and each field must have a data type. String fields are used in full text search. If you need numeric data to be searchable, you'll need to cast numeric data as strings.
+The fields collection defines document structure. Each document must have these fields, and each field must have an [EDM data type](/rest/api/searchservice/supported-data-types). String fields are used in full text search. If you want numeric data to be searchable, make sure the data type is `Edm.String`. Other data types such as `Edm.Int32` are filterable, sortable, facetable, and retrievable but not full-text searchable.
-Attributes on the field determine allowed action. The REST APIs allow many actions by default. For example, all strings are searchable, retrievable, filterable, and facetable by default. Often, you only have to set attributes when you need to turn off a behavior.
+Attributes on the field determine allowed actions. The REST APIs allow [many actions by default](/rest/api/searchservice/create-index#request-body). For example, all strings are searchable and retrievable by default. For REST APIs, you might only have to set attributes if you need to turn off a behavior.
```json {
Attributes on the field determine allowed action. The REST APIs allow many actio
} ```
-When you submit this request, you should get an HTTP 201 response, indicating the index was created successfully. You can verify this action in the portal, but note that the portal page has refresh intervals so it could take a minute or two to catch up.
+When you submit this request, you should get an HTTP 201 response, indicating the index was created successfully. You can verify the index exists in the portal.
> [!TIP]
-> If you get HTTP 504, verify the URL specifies HTTPS. If you see HTTP 400 or 404, check the request body to verify there were no copy-paste errors. An HTTP 403 typically indicates a problem with the api-key (either an invalid key or a syntax problem with how the api-key is specified).
+> If you get HTTP 504, verify the URL specifies HTTPS. If you see HTTP 400 or 404, check the request body to verify there were no copy-paste errors. An HTTP 403 typically indicates a problem with the API key (either an invalid key or a syntax problem with how the API key is specified).
-## 2 - Load documents
+## Load documents
-Creating the index and populating the index are separate steps. In Azure AI Search, the index contains all searchable data. In this scenario, the data is provided as JSON documents. The [Add, Update, or Delete Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) is used for this task.
+Creating and loading the index are separate steps. In Azure AI Search, the index contains all searchable data and queries execute on the search service. For REST calls, the data is provided as JSON documents. Use the [Documents - Index REST API](/rest/api/searchservice/addupdate-or-delete-documents) for this task.
The URL is extended to include the `docs` collections and `index` operation. 1. Set the verb to **POST**.
-2. Copy in this URL `https://{{service-name}}.search.windows.net/indexes/hotels-quickstart/docs/index?api-version=2020-06-30`.
+1. Copy in this URL `https://{{service-name}}.search.windows.net/indexes/hotels-quickstart/docs/index?api-version=2023-11-01`.
+
+1. Set up the request headers as you did in the previous step.
-3. Provide the JSON documents (copy-ready code is next) in the body of the request.
+1. Provide the JSON documents (copyable JSON is provided in the next section) in the body of the request.
-4. Select **Send**.
+1. Select **Send**.
-![JSON documents in request body](media/search-get-started-rest/postman-docs.png "JSON documents in request body")
+ :::image type="content" source="media/search-get-started-rest/postman-docs.png" lightbox="media/search-get-started-rest/postman-docs.png" alt-text="Screenshot of a POST load documents request.":::
### JSON documents to load into the index
In a few seconds, you should see an HTTP 201 response in the session list. This
If you get a 207, at least one document failed to upload. If you get a 404, you have a syntax error in either the header or body of the request: verify you changed the endpoint to include `/docs/index`.
-> [!TIP]
-> For selected data sources, you can [create an indexer](/rest/api/searchservice/create-indexer), which simplifies and reduces the amount of code required for indexing.
-
-## 3 - Search an index
+## Search an index
Now that an index and document set are loaded, you can issue queries against them using [Search Documents REST API](/rest/api/searchservice/search-documents).
-The URL is extended to include a query expression, specified using the search operator.
+Use GET or POST to query an index. On a GET call, specify query parameters on the URI. On POST, specify query parameters in JSON. POST is preferred for setting multiple query parameters.
+
+The URL is extended to include a query expression, specified using the `/docs/search` operator.
1. Set the verb to **GET**.
-2. Copy in this URL `https://{{service-name}}.search.windows.net/indexes/hotels-quickstart/docs?search=*&$count=true&api-version=2020-06-30`.
+1. Copy in this URL `https://{{service-name}}.search.windows.net/indexes/hotels-quickstart/docs?search=*&$count=true&api-version=2023-11-01`. There's no JSON body for this request. All parameters are on the URI. On a GET request, the API version is preceded by an `&` character.
-3. Select **Send**.
+1. Select **Send**.
-This query is an empty and returns a count of the documents in the search results. The request and response should look similar to the following screenshot for Postman after you select **Send**. The status code should be 200.
+   This is an empty query that returns a count of the documents in the search results. The request and response should look similar to the following Postman screenshot after you select **Send**. The status code should be 200.
- ![GET with search string on the URL](media/search-get-started-rest/postman-query.png "GET with search string on the URL")
+ :::image type="content" source="media/search-get-started-rest/postman-query.png" lightbox="media/search-get-started-rest/postman-query.png" alt-text="Screenshot of a GET query request.":::
-Try a few other query examples to get a feel for the syntax. You can do a string search, verbatim $filter queries, limit the results set, scope the search to specific fields, and more.
+1. Set the verb to **POST**.
+1. Copy in this URL `https://{{service-name}}.search.windows.net/indexes/hotels-quickstart/docs/search?api-version=2023-11-01`. On a POST request, the API version is preceded by a `?` character.
-```
-# Query example 1 - Search on restaurant and wifi
-# Return only the HotelName, Description, and Tags fields
-https://{{service-name}}.search.windows.net/indexes/hotels-quickstart/docs?search=restaurant wifi&$count=true&$select=HotelName,Description,Tags&api-version=2020-06-30
+1. Copy in this JSON query and then select **Send**.
-# Query example 2 - Apply a filter to the index to find hotels rated 4 or highter
-# Returns the HotelName and Rating. Two documents match
-https://{service-name}}.search.windows.net/indexes/hotels-quickstart/docs?search=*&$filter=Rating gt 4&$select=HotelName,Rating&api-version=2020-06-30
+ ```json
+ {
+ "search": "lake view",
+ "select": "HotelId, HotelName, Tags, Description",
+ "searchFields": "Description, Tags",
+ "count": true
+ }
+ ```
-# Query example 3 - Take the top two results, and show only HotelName and Category in the results
-https://{service-name}}.search.windows.net/indexes/hotels-quickstart/docs?search=boutique&$top=2&$select=HotelName,Category&api-version=2020-06-30
+ The request and response should look similar to the following screenshot. For more query examples, including filters and sorting, see [Query examples](search-query-simple-examples.md).
-# Query example 4 - Sort by a specific field (Address/City) in ascending order
-https://{service-name}}.search.windows.net/indexes/hotels-quickstart/docs?search=pool&$orderby=Address/City asc&$select=HotelName, Address/City, Tags, Rating&api-version=2020-06-30
-```
+ :::image type="content" source="media/search-get-started-rest/postman-query-post.png" lightbox="media/search-get-started-rest/postman-query-post.png" alt-text="Screenshot of a POST request and response in Postman.":::
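The response body is a JSON object with an `@odata.count` value and a `value` array of matching documents, roughly in the following shape. The values shown here are placeholders, not actual results from the sample data:

```json
{
  "@odata.count": 2,
  "value": [
    {
      "@search.score": 1.6,
      "HotelId": "1",
      "HotelName": "<hotel name>",
      "Tags": [ "<tag>", "<tag>" ],
      "Description": "<description text>"
    }
  ]
}
```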
## Get index properties You can also use [Get Statistics](/rest/api/searchservice/get-index-statistics) to query for document counts and index size: ```http
-https://{{service-name}}.search.windows.net/indexes/hotels-quickstart/stats?api-version=2020-06-30
+https://{{service-name}}.search.windows.net/indexes/hotels-quickstart/stats?api-version=2023-11-01
``` Adding `/stats` to your URL returns index information. In Postman, your request should look similar to the following, and the response includes a document count and space used in bytes.
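The statistics response is a small JSON object, roughly in this shape (the numbers are placeholders; depending on the API version, additional properties such as vector index size might also be returned):

```json
{
  "documentCount": 4,
  "storageSize": 16821
}
```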
- ![Get index information](media/search-get-started-rest/postman-system-query.png "Get index information")
-
-Notice that the api-version syntax differs. For this request, use `?` to append the api-version. The `?` separates the URL path from the query string, while & separates each 'name=value' pair in the query string. For this query, api-version is the first and only item in the query string.
## Clean up resources
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+On a free service, you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
-Now that you know how to perform core tasks, you can move forward with more REST API calls for advanced features, such as indexers or [setting up an enrichment pipeline](cognitive-search-tutorial-blob.md) that adds content transformations to indexing. For your next step, we recommend the following link:
+Now that you know how to perform basic tasks, try advanced features, such as indexers or [enrichment pipelines](cognitive-search-tutorial-blob.md) that add content transformations to indexing. We recommend the following article:
> [!div class="nextstepaction"] > [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md)
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
Title: Quickstart vector search
-description: Use the generally available REST APIs to call vector search.
+description: Use the generally available REST APIs to create and query a vector store.
- ignite-2023 Previously updated : 11/02/2023 Last updated : 01/19/2024 # Quickstart: Vector search using REST APIs Get started with vector search in Azure AI Search using the **2023-11-01** REST APIs that create, load, and query a search index.
-Search indexes can have vector fields in the fields collection. When querying the search index, you can build vector-only queries, or create hybrid queries that target vector fields *and* textual fields configured for filters, sorts, facets, and semantic ranking.
+Search indexes can have vector and non-vector fields. You can create pure vector queries, or hybrid queries targeting both vector *and* textual fields configured for filters, sorts, facets, and semantic reranking.
> [!NOTE]
-> Looking for [built-in data chunking and vectorization](vector-search-integrated-vectorization.md)? Try the [**Import and vectorize data** wizard](search-get-started-portal-import-vectors.md) instead.
+> Looking for [built-in data chunking and vectorization (public preview)](vector-search-integrated-vectorization.md)? Try the [**Import and vectorize data** wizard](search-get-started-portal-import-vectors.md) instead.
## Prerequisites
Search indexes can have vector fields in the fields collection. When querying th
+ [Sample Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart-vectors), with requests targeting the **2023-11-01** API version of Azure AI Search.
-+ Optional. The Postman collection includes a **Generate Embedding** request that can generate vectors from text. The collection provides a ready-to-use vector, but if you want to replace it, provide an [Azure OpenAI](https://aka.ms/oai/access) endpoint with a deployment of **text-embedding-ada-002**. The step for generating a custom embedding is the only step that requires an Azure OpenAI endpoint, Azure OpenAI key, model deployment name, and API version in the collection variables.
++ Optionally, an [Azure OpenAI](https://aka.ms/oai/access) resource with a deployment of **text-embedding-ada-002**. The quickstart includes an optional step for generating new text embeddings, but we provide existing embeddings so that you can skip this step. > [!NOTE] > This quickstart is for the generally available version of [vector search](vector-search-overview.md). If you want to try integrated vectorization, currently in public preview, try [this quickstart](search-get-started-portal-import-vectors.md) instead.
Sample data consists of text and vector descriptions for seven fictitious hotels
+ Textual data is used for keyword search, semantic ranking, and capabilities that depend on text (filters, facets, and sorting).
-+ Vector data (text embeddings) is used for vector search. Currently, Azure AI Search doesn't generate vectors for you. For this quickstart, vector data was generated separately and copied into the "Upload Documents" request and into the query requests.
++ Vector data (text embeddings) is used for vector queries. Currently, Azure AI Search doesn't generate vectors for you in the generally available REST APIs and SDKs. For this quickstart, vector data was generated previously and copied into the "Upload Documents" request and into the query requests.
-For vector queries, we used the **Generate Embedding** request that calls Azure OpenAI and outputs embeddings for a search string. If you want to formulate your own vector queries against the sample data, provide your Azure OpenAI connection information in the Postman collection variables. Your Azure OpenAI service must have a deployment of an embedding model that's identical to the one used to generate embeddings in your search corpus.
+To create vector query strings, we used the **Generate Embedding** request to an [Azure OpenAI](https://aka.ms/oai/access) resource. If you want to formulate your own vector queries against the sample data, provide your Azure OpenAI connection information in the Postman collection variables. Your Azure OpenAI service must have a deployment of an embedding model that's identical to the one used to generate embeddings in your search corpus.
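As a rough sketch, an embeddings request to Azure OpenAI generally looks like the following. The variable names and the `api-version` value here are assumptions for illustration, not values taken from the collection:

```http
POST https://{{openai-service-name}}.openai.azure.com/openai/deployments/{{openai-deployment-name}}/embeddings?api-version=2023-05-15
Content-Type: application/json
api-key: {{openai-api-key}}

{
    "input": "<text to embed>"
}
```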
For this quickstart, the following parameters were used:
## Set up your project
-If you're unfamiliar with Postman, see [this quickstart](search-get-started-rest.md) for instructions on how to import collections and set variables.
+If you're unfamiliar with Postman, see [this quickstart](search-get-started-rest.md) for instructions on how to set collection variables and formulate REST calls.
1. [Fork or clone the azure-search-postman-samples repository](https://github.com/Azure-Samples/azure-search-postman-samples).
If you're unfamiliar with Postman, see [this quickstart](search-get-started-rest
1. Save your changes.
-You're now ready to send the requests to your search service. For each request, select the blue **Send** button. When you see a success message, move on to the next request.
- ## Create an index Use the [Create or Update Index](/rest/api/searchservice/indexes/create-or-update) REST API for this request. The index schema is organized around hotels content. Sample data consists of the names, descriptions, and locations of seven fictitious hotels. This schema includes fields for vector and traditional keyword search, with configurations for vector and semantic ranking.
-The following example is a subset of the full index. We trimmed the definition so that you can focus on field definitions, vector configuration, and optional semantic configuration.
+The following example is a subset of the full index. We trimmed the definition so that you can focus on field definitions, vector configuration, and optional semantic configuration. Compare `HotelName` and `HotelNameVector` for differences in field definitions. For more information, see [Add vector fields to a search index](vector-search-how-to-create-index.md).
```http PUT https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}?api-version=2023-11-01
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
- ignite-2023 Previously updated : 04/04/2023 Last updated : 01/18/2024 # Create an index alias in Azure AI Search
You can create an alias using the preview REST API, the preview SDKs, or through
You can use the [Create or Update Alias (REST preview)](/rest/api/searchservice/preview-api/create-or-update-alias) to create an index alias. ```http
-POST /aliases?api-version=2021-04-30-preview
+POST /aliases?api-version=2023-10-01-preview
{ "name": "my-alias", "indexes": ["hotel-samples-index"]
Follow the steps below to create an index alias in the Azure portal.
### [**.NET SDK**](#tab/sdk)
-In the preview [.NET SDK](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.5) for Azure AI Search, you can use the following syntax to create an index alias.
+With one of the beta packages from the [Azure SDK for .NET](https://www.nuget.org/packages/Azure.Search.Documents/), you can use the following syntax to create an index alias.
```csharp // Create a SearchIndexClient
SearchAlias myAlias = new SearchAlias("my-alias", "hotel-quickstart-index");
adminClient.CreateAlias(myAlias); ```
-Index aliases are also supported in the latest preview SDKs for [Java](https://search.maven.org/artifact/com.azure/azure-search-documents/11.6.0-beta.1/jar), [Python](https://pypi.org/project/azure-search-documents/11.4.0b1/), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/11.3.0-beta.8).
+Index aliases are also supported in the latest preview SDKs for [Java](https://central.sonatype.com/artifact/com.azure/azure-search-documents/versions), [Python](https://pypi.org/project/azure-search-documents/#history), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents?activeTab=versions).
Once you've created your alias, you're ready to start using it. Aliases can be u
In the query below, instead of sending the request to `hotel-samples-index`, you can instead send the request to `my-alias` and it will be routed accordingly. ```http
-POST /indexes/my-alias/docs/search?api-version=2021-04-30-preview
+POST /indexes/my-alias/docs/search?api-version=2023-10-01-preview
{ "search": "pool spa +airport", "searchMode": any,
If you expect to make updates to a production index, specify an alias rather tha
Now, whenever you need to update your application to point to a new index, all you need to do is update the mapping in your alias. PUT is required for updates as described in [Create or Update Alias (REST preview)](/rest/api/searchservice/preview-api/create-or-update-alias). ```http
-PUT /aliases/my-alias?api-version=2021-04-30-preview
+PUT /aliases/my-alias?api-version=2023-10-01-preview
{ "name": "my-alias", "indexes": ["hotel-samples-index2"] } ```+ After you make the update to the alias, requests will automatically start to be routed to the new index. > [!NOTE]
search Search Howto Complex Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-complex-data-types.md
- ignite-2023 Previously updated : 01/30/2023 Last updated : 01/18/2024 # Model complex data types in Azure AI Search
-External datasets used to populate an Azure AI Search index can come in many shapes. Sometimes they include hierarchical or nested substructures. Examples might include multiple addresses for a single customer, multiple colors and sizes for a single SKU, multiple authors of a single book, and so on. In modeling terms, you might see these structures referred to as *complex*, *compound*, *composite*, or *aggregate* data types. The term Azure AI Search uses for this concept is **complex type**. In Azure AI Search, complex types are modeled using **complex fields**. A complex field is a field that contains children (sub-fields) which can be of any data type, including other complex types. This works in a similar way as structured data types in a programming language.
+External datasets used to populate an Azure AI Search index can come in many shapes. Sometimes they include hierarchical or nested substructures. Examples might include multiple addresses for a single customer, multiple colors and sizes for a single SKU, multiple authors of a single book, and so on. In modeling terms, you might see these structures referred to as *complex*, *compound*, *composite*, or *aggregate* data types. The term Azure AI Search uses for this concept is **complex type**. In Azure AI Search, complex types are modeled using **complex fields**. A complex field is a field that contains children (subfields) which can be of any data type, including other complex types. This works in a similar way as structured data types in a programming language.
Complex fields represent either a single object in the document, or an array of objects, depending on the data type. Fields of type `Edm.ComplexType` represent single objects, while fields of type `Collection(Edm.ComplexType)` represent arrays of objects.
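For example, the two flavors look roughly like this in a fields collection. The subfields and attributes shown here are illustrative, not the complete hotels schema:

```json
{
  "fields": [
    {
      "name": "Address", "type": "Edm.ComplexType",
      "fields": [
        { "name": "City", "type": "Edm.String", "searchable": true, "filterable": true },
        { "name": "StateProvince", "type": "Edm.String", "searchable": true, "filterable": true }
      ]
    },
    {
      "name": "Rooms", "type": "Collection(Edm.ComplexType)",
      "fields": [
        { "name": "Type", "type": "Edm.String", "searchable": true },
        { "name": "BaseRate", "type": "Edm.Double", "filterable": true }
      ]
    }
  ]
}
```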
To get started, we recommend the [Hotels data set](https://github.com/Azure-Samp
## Example of a complex structure
-The following JSON document is composed of simple fields and complex fields. Complex fields, such as `Address` and `Rooms`, have sub-fields. `Address` has a single set of values for those sub-fields, since it's a single object in the document. In contrast, `Rooms` has multiple sets of values for its sub-fields, one for each object in the collection.
+The following JSON document is composed of simple fields and complex fields. Complex fields, such as `Address` and `Rooms`, have subfields. `Address` has a single set of values for those subfields, since it's a single object in the document. In contrast, `Rooms` has multiple sets of values for its subfields, one for each object in the collection.
```json
Other Azure SDKs provide samples in [Python](https://github.com/Azure/azure-sdk-
Use [Create Index (REST API)](/rest/api/searchservice/create-index) to define a schema.
-The following example shows a JSON index schema with simple fields, collections, and complex types. Notice that within a complex type, each sub-field has a type and may have attributes, just as top-level fields do. The schema corresponds to the example data above. `Address` is a complex field that isn't a collection (a hotel has one address). `Rooms` is a complex collection field (a hotel has many rooms).
+The following example shows a JSON index schema with simple fields, collections, and complex types. Notice that within a complex type, each subfield has a type and can have attributes, just as top-level fields do. The schema corresponds to the example data above. `Address` is a complex field that isn't a collection (a hotel has one address). `Rooms` is a complex collection field (a hotel has many rooms).
```json {
All of the [reindexing rules](search-howto-reindex.md) that apply to fields in g
### Structural updates to the definition
-You can add new sub-fields to a complex field at any time without the need for an index rebuild. For example, adding "ZipCode" to `Address` or "Amenities" to `Rooms` is allowed, just like adding a top-level field to an index. Existing documents have a null value for new fields until you explicitly populate those fields by updating your data.
+You can add new subfields to a complex field at any time without the need for an index rebuild. For example, adding "ZipCode" to `Address` or "Amenities" to `Rooms` is allowed, just like adding a top-level field to an index. Existing documents have a null value for new fields until you explicitly populate those fields by updating your data.
-Notice that within a complex type, each sub-field has a type and may have attributes, just as top-level fields do
+Notice that within a complex type, each subfield has a type and can have attributes, just as top-level fields do
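For illustration, a hedged sketch of the `Address` definition after adding a "ZipCode" subfield (the other subfields and attributes are assumptions, not the complete hotels schema):

```json
{
  "name": "Address",
  "type": "Edm.ComplexType",
  "fields": [
    { "name": "StreetAddress", "type": "Edm.String", "searchable": true },
    { "name": "City", "type": "Edm.String", "searchable": true, "filterable": true, "facetable": true },
    { "name": "ZipCode", "type": "Edm.String", "filterable": true, "facetable": true }
  ]
}
```

Submitting this change in an Update Index request adds the subfield without triggering a rebuild; existing documents return null for `ZipCode` until they're reindexed.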
### Data updates
-Updating existing documents in an index with the `upload` action works the same way for complex and simple fields -- all fields are replaced. However, `merge` (or `mergeOrUpload` when applied to an existing document) doesn't work the same across all fields. Specifically, `merge` doesn't support merging elements within a collection. This limitation exists for collections of primitive types and complex collections. To update a collection, you'll need to retrieve the full collection value, make changes, and then include the new collection in the Index API request.
+Updating existing documents in an index with the `upload` action works the same way for complex and simple fields: all fields are replaced. However, `merge` (or `mergeOrUpload` when applied to an existing document) doesn't work the same across all fields. Specifically, `merge` doesn't support merging elements within a collection. This limitation exists for collections of primitive types and complex collections. To update a collection, you need to retrieve the full collection value, make changes, and then include the new collection in the Index API request.
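For example, here's a hedged sketch of an Index Documents request body that replaces the entire `Rooms` collection for one hotel (the key and room values are illustrative):

```json
{
  "value": [
    {
      "@search.action": "mergeOrUpload",
      "HotelId": "1",
      "Rooms": [
        { "Type": "Deluxe Room", "BaseRate": 250.0, "SmokingAllowed": false },
        { "Type": "Standard Room", "BaseRate": 120.0, "SmokingAllowed": false }
      ]
    }
  ]
}
```

Because `merge` replaces the whole collection, the array must contain every room you want to keep, not just the changed ones.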
## Search complex fields
-Free-form search expressions work as expected with complex types. If any searchable field or sub-field anywhere in a document matches, then the document itself is a match.
+Free-form search expressions work as expected with complex types. If any searchable field or subfield anywhere in a document matches, then the document itself is a match.
-Queries get more nuanced when you have multiple terms and operators, and some terms have field names specified, as is possible with the [Lucene syntax](query-lucene-syntax.md). For example, this query attempts to match two terms, "Portland" and "OR", against two sub-fields of the Address field:
+Queries get more nuanced when you have multiple terms and operators, and some terms have field names specified, as is possible with the [Lucene syntax](query-lucene-syntax.md). For example, this query attempts to match two terms, "Portland" and "OR", against two subfields of the Address field:
> `search=Address/City:Portland AND Address/State:OR`
-Queries like this are *uncorrelated* for full-text search, unlike filters. In filters, queries over sub-fields of a complex collection are correlated using range variables in [`any` or `all`](search-query-odata-collection-operators.md). The Lucene query above returns documents containing both "Portland, Maine" and "Portland, Oregon", along with other cities in Oregon. This happens because each clause applies to all values of its field in the entire document, so there's no concept of a "current sub-document". For more information on this, see [Understanding OData collection filters in Azure AI Search](search-query-understand-collection-filters.md).
+Queries like this are *uncorrelated* for full-text search, unlike filters. In filters, queries over subfields of a complex collection are correlated using range variables in [`any` or `all`](search-query-odata-collection-operators.md). The Lucene query above returns documents containing both "Portland, Maine" and "Portland, Oregon", along with other cities in Oregon. This happens because each clause applies to all values of its field in the entire document, so there's no concept of a "current subdocument". For more information on this, see [Understanding OData collection filters in Azure AI Search](search-query-understand-collection-filters.md).
## Select complex fields
-The `$select` parameter is used to choose which fields are returned in search results. To use this parameter to select specific sub-fields of a complex field, include the parent field and sub-field separated by a slash (`/`).
+The `$select` parameter is used to choose which fields are returned in search results. To use this parameter to select specific subfields of a complex field, include the parent field and subfield separated by a slash (`/`).
> `$select=HotelName, Address/City, Rooms/BaseRate`
Fields must be marked as Retrievable in the index if you want them in search res
## Filter, facet, and sort complex fields
-The same [OData path syntax](query-odata-filter-orderby-syntax.md) used for filtering and fielded searches can also be used for faceting, sorting, and selecting fields in a search request. For complex types, rules apply that govern which sub-fields can be marked as sortable or facetable. For more information on these rules, see the [Create Index API reference](/rest/api/searchservice/create-index).
+The same [OData path syntax](query-odata-filter-orderby-syntax.md) used for filtering and fielded searches can also be used for faceting, sorting, and selecting fields in a search request. For complex types, rules apply that govern which subfields can be marked as sortable or facetable. For more information on these rules, see the [Create Index API reference](/rest/api/searchservice/create-index).
-### Faceting sub-fields
+### Faceting subfields
-Any sub-field can be marked as facetable unless it is of type `Edm.GeographyPoint` or `Collection(Edm.GeographyPoint)`.
+Any subfield can be marked as facetable unless it is of type `Edm.GeographyPoint` or `Collection(Edm.GeographyPoint)`.
-The document counts returned in the facet results are calculated for the parent document (a hotel), not the sub-documents in a complex collection (rooms). For example, suppose a hotel has 20 rooms of type "suite". Given this facet parameter `facet=Rooms/Type`, the facet count will be one for the hotel, not 20 for the rooms.
+The document counts returned in the facet results are calculated for the parent document (a hotel), not the subdocuments in a complex collection (rooms). For example, suppose a hotel has 20 rooms of type "suite". Given this facet parameter `facet=Rooms/Type`, the facet count is one for the hotel, not 20 for the rooms.
### Sorting complex fields
-Sort operations apply to documents (Hotels) and not sub-documents (Rooms). When you have a complex type collection, such as Rooms, it's important to realize that you can't sort on Rooms at all. In fact, you can't sort on any collection.
+Sort operations apply to documents (Hotels) and not subdocuments (Rooms). When you have a complex type collection, such as Rooms, it's important to realize that you can't sort on Rooms at all. In fact, you can't sort on any collection.
-Sort operations work when fields have a single value per document, whether the field is a simple field, or a sub-field in a complex type. For example, `Address/City` is allowed to be sortable because there's only one address per hotel, so `$orderby=Address/City` will sort hotels by city.
+Sort operations work when fields have a single value per document, whether the field is a simple field, or a subfield in a complex type. For example, `Address/City` is allowed to be sortable because there's only one address per hotel, so `$orderby=Address/City` sorts hotels by city.
### Filtering on complex fields
-You can refer to sub-fields of a complex field in a filter expression. Just use the same [OData path syntax](query-odata-filter-orderby-syntax.md) that's used for faceting, sorting, and selecting fields. For example, the following filter will return all hotels in Canada:
+You can refer to subfields of a complex field in a filter expression. Just use the same [OData path syntax](query-odata-filter-orderby-syntax.md) that's used for faceting, sorting, and selecting fields. For example, the following filter returns all hotels in Canada:
> `$filter=Address/Country eq 'Canada'`
-To filter on a complex collection field, you can use a **lambda expression** with the [`any` and `all` operators](search-query-odata-collection-operators.md). In that case, the **range variable** of the lambda expression is an object with sub-fields. You can refer to those sub-fields with the standard OData path syntax. For example, the following filter will return all hotels with at least one deluxe room and all non-smoking rooms:
+To filter on a complex collection field, you can use a **lambda expression** with the [`any` and `all` operators](search-query-odata-collection-operators.md). In that case, the **range variable** of the lambda expression is an object with subfields. You can refer to those subfields with the standard OData path syntax. For example, the following filter returns all hotels with at least one deluxe room and all nonsmoking rooms:
> `$filter=Rooms/any(room: room/Type eq 'Deluxe Room') and Rooms/all(room: not room/SmokingAllowed)`
-As with top-level simple fields, simple sub-fields of complex fields can only be included in filters if they have the **filterable** attribute set to `true` in the index definition. For more information, see the [Create Index API reference](/rest/api/searchservice/create-index).
+As with top-level simple fields, simple subfields of complex fields can only be included in filters if they have the **filterable** attribute set to `true` in the index definition. For more information, see the [Create Index API reference](/rest/api/searchservice/create-index).
## Next steps
-Try the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels) in the **Import data** wizard. You'll need the Azure Cosmos DB connection information provided in the readme to access the data.
+Try the [Hotels data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels) in the **Import data** wizard. You need the Azure Cosmos DB connection information provided in the readme to access the data.
-With that information in hand, your first step in the wizard is to create a new Azure Cosmos DB data source. Further on in the wizard, when you get to the target index page, you'll see an index with complex types. Create and load this index, and then execute queries to understand the new structure.
+With that information in hand, your first step in the wizard is to create a new Azure Cosmos DB data source. Further on in the wizard, when you get to the target index page, you see an index with complex types. Create and load this index, and then execute queries to understand the new structure.
> [!div class="nextstepaction"] > [Quickstart: portal wizard for import, indexing, and queries](search-get-started-portal.md)
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
- devx-track-dotnet - ignite-2023 Previously updated : 01/18/2023 Last updated : 01/18/2024 # Import data from Azure Cosmos DB for NoSQL for queries in Azure AI Search In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from [Azure Cosmos DB for NoSQL](../cosmos-db/nosql/index.yml) and makes it searchable in Azure AI Search. - This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request. Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure AI Search indexing](search-what-is-an-index.md) are different operations. Indexing in Azure AI Search creates and loads a search index on your search service.
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB in
+ An [Azure Cosmos DB account, database, container and items](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Azure AI Search and Azure Cosmos DB for lower latency and to avoid bandwidth charges.
-+ An [automatic indexing policy](../cosmos-db/index-policy.md) on the Azure Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and may result in missing data.
++ An [automatic indexing policy](../cosmos-db/index-policy.md) on the Azure Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and can result in missing data.
-+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure RBAC (Entra ID), make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) is assigned both **Cosmos DB Account Reader Role** and [**Cosmos DB Built-in Data Reader Role**](../cosmos-db/how-to-setup-rbac.md#built-in-role-definitions).
++ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure RBAC (Microsoft Entra ID), make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) is assigned both **Cosmos DB Account Reader Role** and [**Cosmos DB Built-in Data Reader Role**](../cosmos-db/how-to-setup-rbac.md#built-in-role-definitions). + A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer. ## Define the data source
-The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
+The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is an independent resource that can be used by multiple indexers.
1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition: ```http
- POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
+ POST https://[service name].search.windows.net/datasources?api-version=2023-11-01
Content-Type: application/json api-key: [Search service admin key] {
Avoid port numbers in the endpoint URL. If you include the port number, the conn
| Managed identity connection string | || |`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)/(IdentityAuthType=[identity-auth-type])" }`|
-|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md). For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit `ApiKind` from the connection string. For more information about `ApiKind`, `IdentityAuthType` see [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md).|
+|This connection string doesn't require an account key, but you must have a search service that can [connect using a managed identity](search-howto-managed-identities-data-sources.md). For connections targeting the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit `ApiKind` from the connection string. For more information about `ApiKind` and `IdentityAuthType`, see [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md).|
<a name="flatten-structures"></a>
SELECT DISTINCT VALUE c.name FROM c ORDER BY c.name
SELECT TOP 4 COUNT(1) AS foodGroupCount, f.foodGroup FROM Food f GROUP BY f.foodGroup ```
-Although Azure Cosmos DB has a workaround to support [SQL query pagination with the DISTINCT keyword by using the ORDER BY clause](../cosmos-db/sql-query-pagination.md#continuation-tokens), it isn't compatible with Azure AI Search. The query will return a single JSON value, whereas Azure AI Search expects a JSON object.
+Although Azure Cosmos DB has a workaround to support [SQL query pagination with the DISTINCT keyword by using the ORDER BY clause](../cosmos-db/sql-query-pagination.md#continuation-tokens), it isn't compatible with Azure AI Search. The query returns a single JSON value, whereas Azure AI Search expects a JSON object.
```sql -- The following query returns a single JSON value and isn't supported by Azure AI Search
SELECT DISTINCT VALUE c.name FROM c ORDER BY c.name
In a [search index](search-what-is-an-index.md), add fields to accept the source JSON documents or the output of your custom query projection. Ensure that the search index schema is compatible with source data. For content in Azure Cosmos DB, your search index schema should correspond to the [Azure Cosmos DB items](../cosmos-db/resource-model.md#azure-cosmos-db-items) in your data source.
-1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store data:
+1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that store data:
```http
- POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
+ POST https://[service name].search.windows.net/indexes?api-version=2023-11-01
Content-Type: application/json api-key: [Search service admin key] {
In a [search index](search-what-is-an-index.md), add fields to accept the source
1. Create a document key field ("key": true). For partitioned collections, the default document key is the Azure Cosmos DB `_rid` property, which Azure AI Search automatically renames to `rid` because field names can't start with an underscore character. Also, Azure Cosmos DB `_rid` values contain characters that are invalid in Azure AI Search keys. For this reason, the `_rid` values are Base64 encoded.
-1. Create additional fields for more searchable content. See [Create an index](search-how-to-create-search-index.md) for details.
+1. Create more fields for more searchable content. See [Create an index](search-how-to-create-search-index.md) for details.
### Mapping data types
Once the index and data source have been created, you're ready to create the ind
1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index: ```http
- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+ POST https://[service name].search.windows.net/indexers?api-version=2023-11-01
Content-Type: application/json api-key: [search service admin key] {
An indexer runs automatically when it's created. You can prevent this by setting
To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request: ```http
-GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2023-11-01
Content-Type: application/json api-key: [admin key] ```
The following example shows a [data source definition](#define-the-data-source)
If you're using a [custom query to retrieve documents](#flatten-structures), make sure the query orders the results by the `_ts` column. This enables periodic check-pointing that Azure AI Search uses to provide incremental progress in the presence of failures.
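A hedged sketch of a data source container that carries such a query (the names and projection are placeholders, and `@HighWaterMark` is assumed to be the parameter the indexer substitutes during change tracking):

```json
{
  "name": "cosmosdb-datasource",
  "type": "cosmosdb",
  "credentials": { "connectionString": "<cosmos-db-connection-string>" },
  "container": {
    "name": "myCollection",
    "query": "SELECT c.id, c.name, c._ts FROM c WHERE c._ts >= @HighWaterMark ORDER BY c._ts"
  }
}
```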
-In some cases, even if your query contains an `ORDER BY [collection alias]._ts` clause, Azure AI Search may not infer that the query is ordered by the `_ts`. You can tell Azure AI Search that results are ordered by setting the `assumeOrderByHighWaterMarkColumn` configuration property.
+In some cases, even if your query contains an `ORDER BY [collection alias]._ts` clause, Azure AI Search might not infer that the query is ordered by the `_ts`. You can tell Azure AI Search that results are ordered by setting the `assumeOrderByHighWaterMarkColumn` configuration property.
To specify this hint, [create or update your indexer definition](#configure-and-run-the-azure-cosmos-db-indexer) as follows:
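For illustration, a minimal sketch of where the hint goes in an indexer definition (object names are placeholders):

```json
{
  "name": "cosmosdb-indexer",
  "dataSourceName": "cosmosdb-datasource",
  "targetIndexName": "cosmosdb-index",
  "parameters": {
    "configuration": {
      "assumeOrderByHighWaterMarkColumn": true
    }
  }
}
```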
If you're using a custom query, make sure that the property referenced by `softD
The following example creates a data source with a soft-deletion policy: ```http
-POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
+POST https://[service name].search.windows.net/datasources?api-version=2023-11-01
Content-Type: application/json api-key: [Search service admin key]
search Search Howto Index One To Many Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-one-to-many-blobs.md
- ignite-2023 Previously updated : 01/31/2023 Last updated : 01/18/2024 # Indexing blobs and files to produce multiple search documents **Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
-By default, an indexer will treat the contents of a blob or file as a single search document. If you want a more granular representation in a search index, you can set **parsingMode** values to create multiple search documents from one blob or file. The **parsingMode** values that result in many search documents include `delimitedText` (for [CSV](search-howto-index-csv-blobs.md)), and `jsonArray` or `jsonLines` (for [JSON](search-howto-index-json-blobs.md)).
+By default, an indexer treats the contents of a blob or file as a single search document. If you want a more granular representation in a search index, you can set **parsingMode** values to create multiple search documents from one blob or file. The **parsingMode** values that result in many search documents include `delimitedText` (for [CSV](search-howto-index-csv-blobs.md)), and `jsonArray` or `jsonLines` (for [JSON](search-howto-index-json-blobs.md)).
When you use any of these parsing modes, the new search documents that emerge must have unique document keys, and a problem arises in determining where that value comes from. The parent blob has at least one unique value in the form of the `metadata_storage_path` property, but if it contributes that value to more than one search document, the key is no longer unique in the index.
To address this problem, the blob indexer generates an `AzureSearch_DocumentKey`
## One-to-many document key
-Each document that shows up in an Azure AI Search index is uniquely identified by a document key.
+Each document in an index is uniquely identified by a document key. When no parsing mode is specified, and if there's no [explicit field mapping](search-indexer-field-mappings.md) in the indexer definition for the search document key, the blob indexer automatically maps the `metadata_storage_path` property as the document key. This default mapping ensures that each blob appears as a distinct search document, and it saves you the step of having to create this field mapping yourself (normally, only fields having identical names and types are automatically mapped).
-When no parsing mode is specified, and if there's no [explicit field mapping](search-indexer-field-mappings.md) in the indexer definition for the search document key, the blob indexer automatically maps the `metadata_storage_path property` as the document key. This mapping ensures that each blob appears as a distinct search document, and it saves you the step of having to create this field mapping yourself (normally, only fields having identical names and types are automatically mapped).
-
-When using any of the parsing modes, one blob maps to "many" search documents, making a document key solely based on blob metadata unsuitable. To overcome this constraint, Azure AI Search is capable of generating a "one-to-many" document key for each individual entity extracted from a blob. This property is named AzureSearch_DocumentKey and is added to each individual entity extracted from the blob. The value of this property is guaranteed to be unique for each individual entity across blobs and the entities will show up as separate search documents.
+In a one-to-many search document scenario, an implicit document key based on the `metadata_storage_path` property isn't possible. As a workaround, Azure AI Search can generate a document key for each individual entity extracted from a blob. The generated key is named `AzureSearch_DocumentKey` and it's added to each search document. The indexer keeps track of the "many documents" created from each blob, and can target updates to the search index when source data changes over time.
By default, when no explicit field mappings for the key index field are specified, the `AzureSearch_DocumentKey` is mapped to it, using the `base64Encode` field-mapping function. ## Example
-Assume you've an index definition with the following fields:
+Assume an index definition with the following fields:
+ `id` + `temperature`
And your blob container has blobs with the following structure:
_Blob1.json_ ```json
-{ "temperature": 100, "pressure": 100, "timestamp": "2020-02-13T00:00:00Z" }
-{ "temperature" : 33, "pressure" : 30, "timestamp": "2020-02-14T00:00:00Z" }
+{ "temperature": 100, "pressure": 100, "timestamp": "2024-02-13T00:00:00Z" }
+{ "temperature" : 33, "pressure" : 30, "timestamp": "2024-02-14T00:00:00Z" }
``` _Blob2.json_ ```json
-{ "temperature": 1, "pressure": 1, "timestamp": "2019-01-12T00:00:00Z" }
-{ "temperature" : 120, "pressure" : 3, "timestamp": "2017-05-11T00:00:00Z" }
+{ "temperature": 1, "pressure": 1, "timestamp": "2023-01-12T00:00:00Z" }
+{ "temperature" : 120, "pressure" : 3, "timestamp": "2022-05-11T00:00:00Z" }
```
-When you create an indexer and set the **parsingMode** to `jsonLines` - without specifying any explicit field mappings for the key field, the following mapping will be applied implicitly.
+When you create an indexer and set the **parsingMode** to `jsonLines`, without specifying any explicit field mappings for the key field, the following mapping is applied implicitly.
```http {
When you create an indexer and set the **parsingMode** to `jsonLines` - without
} ```
-This setup will result in disambiguated document keys, similar to the following illustration (base64-encoded ID shortened for brevity).
+This setup results in disambiguated document keys, similar to the following illustration (base64-encoded ID shortened for brevity).
| ID | temperature | pressure | timestamp | |-|-|-|--|
-| aHR0 ... YjEuanNvbjsx | 100 | 100 | 2020-02-13T00:00:00Z |
-| aHR0 ... YjEuanNvbjsy | 33 | 30 | 2020-02-14T00:00:00Z |
-| aHR0 ... YjIuanNvbjsx | 1 | 1 | 2019-01-12T00:00:00Z |
-| aHR0 ... YjIuanNvbjsy | 120 | 3 | 2017-05-11T00:00:00Z |
+| aHR0 ... YjEuanNvbjsx | 100 | 100 | 2024-02-13T00:00:00Z |
+| aHR0 ... YjEuanNvbjsy | 33 | 30 | 2024-02-14T00:00:00Z |
+| aHR0 ... YjIuanNvbjsx | 1 | 1 | 2023-01-12T00:00:00Z |
+| aHR0 ... YjIuanNvbjsy | 120 | 3 | 2022-05-11T00:00:00Z |
## Custom field mapping for index key field
_Blob1.json_
```json recordid, temperature, pressure, timestamp
-1, 100, 100,"2019-02-13T00:00:00Z"
-2, 33, 30,"2019-02-14T00:00:00Z"
+1, 100, 100,"2024-02-13T00:00:00Z"
+2, 33, 30,"2024-02-14T00:00:00Z"
``` _Blob2.json_ ```json recordid, temperature, pressure, timestamp
-1, 1, 1,"2018-01-12T00:00:00Z"
-2, 120, 3,"2013-05-11T00:00:00Z"
+1, 1, 1,"20123-01-12T00:00:00Z"
+2, 120, 3,"2022-05-11T00:00:00Z"
``` When you create an indexer with `delimitedText` **parsingMode**, it might feel natural to set up a field-mapping function to the key field as follows:
_Blob1.json_
```json id, temperature, pressure, timestamp
-1, 100, 100,"2019-02-13T00:00:00Z"
-2, 33, 30,"2019-02-14T00:00:00Z"
+1, 100, 100,"2024-02-13T00:00:00Z"
+2, 33, 30,"2024-02-14T00:00:00Z"
``` _Blob2.json_ ```json id, temperature, pressure, timestamp
-1, 1, 1,"2018-01-12T00:00:00Z"
-2, 120, 3,"2013-05-11T00:00:00Z"
+1, 1, 1,"2023-01-12T00:00:00Z"
+2, 120, 3,"2022-05-11T00:00:00Z"
``` Notice that each document contains the `id` field, which is defined as the `key` field in the index. In such a case, even though a document-unique `AzureSearch_DocumentKey` will be generated, it won't be used as the "key" for the document. Rather, the value of the `id` field will be mapped to the `key` field
search Search Howto Large Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-large-index.md
- ignite-2023 Previously updated : 01/17/2023 Last updated : 01/17/2024 # Index large data sets in Azure AI Search
-If your search solution requirements include indexing big data or complex data, this article describes the strategies for accommodating long running processes on Azure AI Search.
+If your search solution requirements include indexing big data or complex data, this article describes strategies for accommodating long-running processes on Azure AI Search.
-This article assumes familiarity with the [two basic approaches for importing data](search-what-is-data-import.md): pushing data into an index, or pulling in data from a supported data source using a [search indexer](search-indexer-overview.md). The strategy you choose will be determined by the indexing approach you're already using. If your scenario involves computationally intensive [AI enrichment](cognitive-search-concept-intro.md), then your strategy must include indexers, given the skillset dependency on indexers.
+This article assumes familiarity with the [two basic approaches for importing data](search-what-is-data-import.md): pushing data into an index, or pulling in data from a supported data source using a [search indexer](search-indexer-overview.md). If your scenario involves computationally intensive [AI enrichment](cognitive-search-concept-intro.md), then indexers are required, given the skillset dependency on indexers.
This article complements [Tips for better performance](search-performance-tips.md), which offers best practices on index and query design. A well-designed index that includes only the fields and attributes you need is an important prerequisite for large-scale indexing.
This article complements [Tips for better performance](search-performance-tips.m
## Index large data using the push APIs
-"Push" APIs, such as [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocuments), are the most prevalent form of indexing in Azure AI Search. For solutions that use a push API, the strategy for long-running indexing will have one or both of the following components:
+"Push" APIs, such as [Documents Index REST API](/rest/api/searchservice/documents) or the [IndexDocuments method (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocuments), are the most prevalent form of indexing in Azure AI Search. For solutions that use a push API, the strategy for long-running indexing will have one or both of the following components:
+ Batching documents + Managing threads ### Batch multiple documents per request
-A simple mechanism for indexing a large quantity of data is to submit multiple documents or records in a single request. As long as the entire payload is under 16 MB, a request can handle up to 1000 documents in a bulk upload operation. These limits apply whether you're using the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method](/dotnet/api/azure.search.documents.searchclient.indexdocuments) in the .NET SDK. For either API, you would package 1000 documents in the body of each request.
+A simple mechanism for indexing a large quantity of data is to submit multiple documents or records in a single request. As long as the entire payload is under 16 MB, a request can handle up to 1000 documents in a bulk upload operation. These limits apply whether you're using the [Documents Index REST API](/rest/api/searchservice/documents) or the [IndexDocuments method](/dotnet/api/azure.search.documents.searchclient.indexdocuments) in the .NET SDK. For either API, you would package 1000 documents in the body of each request.
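A hedged sketch of one such batched request body, posted to the index's `docs/index` endpoint (the document fields and values are illustrative):

```json
{
  "value": [
    { "@search.action": "upload", "HotelId": "1", "HotelName": "Example Hotel One" },
    { "@search.action": "upload", "HotelId": "2", "HotelName": "Example Hotel Two" },
    { "@search.action": "mergeOrUpload", "HotelId": "3", "HotelName": "Example Hotel Three" }
  ]
}
```

A production batch would carry up to 1,000 of these actions, provided the overall payload stays under 16 MB.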
Batching documents will significantly shorten the amount of time it takes to work through a large data volume. Determining the optimal batch size for your data is a key component of optimizing indexing speeds. The two primary factors influencing the optimal batch size are:
The Azure .NET SDK automatically retries 503s and other failed requests, but you
Indexer schedules can resume processing at the last known stopping point. If data isn't fully indexed within the processing window, the indexer picks up wherever it left off on the next run, assuming you're using a data source that provides change detection.
-Partitioning data into smaller individual data sources enables parallel processing. You can break up source data, such as into multiple containers in Azure Blob Storage, create a [data source](/rest/api/searchservice/create-data-source) for each partition, and then [run the indexers in parallel](search-howto-run-reset-indexers.md), subject to the number of search units of your search service.
+Partitioning data into smaller individual data sources enables parallel processing. You can break up source data, such as into multiple containers in Azure Blob Storage, [create a data source](/rest/api/searchservice/data-sources/create) for each partition, and then [run the indexers in parallel](search-howto-run-reset-indexers.md), subject to the number of search units of your search service.
### Check indexer batch size
-As with the push API, indexers allow you to configure the number of items per batch. For indexers based on the [Create Indexer REST API](/rest/api/searchservice/Create-Indexer), you can set the `batchSize` argument to customize this setting to better match the characteristics of your data.
+As with the push API, indexers allow you to configure the number of items per batch. For indexers based on the [Create Indexer REST API](/rest/api/searchservice/indexers/create), you can set the `batchSize` argument to customize this setting to better match the characteristics of your data.
Default batch sizes are data source specific. Azure SQL Database and Azure Cosmos DB have a default batch size of 1000. In contrast, Azure Blob indexing sets batch size at 10 documents in recognition of the larger average document size.
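For illustration, a minimal sketch of an indexer definition that lowers the batch size to suit larger documents (object names are placeholders):

```json
{
  "name": "blob-indexer",
  "dataSourceName": "blob-datasource",
  "targetIndexName": "blob-index",
  "parameters": {
    "batchSize": 5
  }
}
```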
Typically, indexer processing runs within a 2-hour window. If the indexing workl
{ "dataSourceName" : "hotels-ds", "targetIndexName" : "hotels-idx",
- "schedule" : { "interval" : "PT2H", "startTime" : "2022-01-01T00:00:00Z" }
+ "schedule" : { "interval" : "PT2H", "startTime" : "2024-01-01T00:00:00Z" }
} ``` When there are no longer any new or updated documents in the data source, indexer execution history will report `0/0` documents processed, and no processing occurs.
-For more information about setting schedules, see [Create Indexer REST API](/rest/api/searchservice/Create-Indexer) or see [How to schedule indexers for Azure AI Search](search-howto-schedule-indexers.md).
+For more information about setting schedules, see [Create Indexer REST API](/rest/api/searchservice/indexers/create) or see [How to schedule indexers for Azure AI Search](search-howto-schedule-indexers.md).
> [!NOTE] > Some indexers that run on an older runtime architecture have a 24-hour rather than 2-hour maximum processing window. The 2-hour limit is for newer content processors that run in an [internally managed multi-tenant environment](search-indexer-securing-resources.md#indexer-execution-environment). Whenever possible, Azure AI Search tries to offload indexer and skillset processing to the multi-tenant environment. If the indexer can't be migrated, it will run in the private environment and it can run for as long as 24 hours. If you're scheduling an indexer that exhibits these characteristics, assume a 24 hour processing window.
If your data source is an [Azure Blob Storage container](../storage/blobs/storag
1. Partition source data among multiple containers or multiple virtual folders inside the same container.
-1. Create multiple [data sources](/rest/api/searchservice/create-data-source), one for each partition, paired to its own [indexer](/rest/api/searchservice/create-indexer).
+1. Create multiple [data sources](/rest/api/searchservice/data-sources/create), one for each partition, paired to its own [indexer](/rest/api/searchservice/indexers/create).
1. Specify the same target search index in each indexer.
If you have a big data architecture and your data is on a Spark cluster, we reco
## See also ++ [Tutorial: Optimize indexing workloads](tutorial-optimize-indexing-push-api.md)++ [Tutorial: Index at scale using SynapseML and Apache Spark](search-synapseml-cognitive-services.md) + [Tips for improving performance](search-performance-tips.md) + [Performance analysis](search-performance-analysis.md) + [Indexer overview](search-indexer-overview.md) + [Monitor indexer status](search-howto-monitor-indexers.md)--
-<!-- Azure AI Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index. You can *push* your data into the index programmatically, or point an [Azure AI Search indexer](search-indexer-overview.md) at a supported data source to *pull* in the data.
-
-As data volumes grow or processing needs change, you might find that simple indexing strategies are no longer practical. For Azure AI Search, there are several approaches for accommodating larger data sets, ranging from how you structure a data upload request, to using a source-specific indexer for scheduled and distributed workloads.
-
-The same techniques used for long-running processes. In particular, the steps outlined in [parallel indexing](#run-indexers-in-parallel) are helpful for computationally intensive indexing, such as image analysis or natural language processing in an [AI enrichment pipeline](cognitive-search-concept-intro.md).
-
-The following sections explain techniques for indexing large amounts of data for both push and pull approaches. You should also review [Tips for improving performance](search-performance-tips.md) for more best practices.
-
-For C# tutorials, code samples, and alternative strategies, see:
-
-+ [Tutorial: Optimize indexing workloads](tutorial-optimize-indexing-push-api.md)
-+ [Tutorial: Index at scale using SynapseML and Apache Spark](search-synapseml-cognitive-services.md) -->
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
A script example showing API key usage for various operations can be found at [Q
### [**REST API**](#tab/rest-use)
-Set an admin key in the request header using the syntax `api-key` equal to your key. Admin keys are used for most operations, including create, delete, and update. Admin keys are also used on requests issued to the search service itself, such as listing objects or requesting service statistics. see [Connect to Azure AI Search using REST APIs](search-get-started-rest.md#connect-to-azure-ai-search) for a more detailed example.
+Set an admin key in the request header using the syntax `api-key` equal to your key. Admin keys are used for most operations, including create, delete, and update. Admin keys are also used on requests issued to the search service itself, such as listing objects or requesting service statistics. See [Quickstart: Create a search index using REST](search-get-started-rest.md) for a more detailed example.
:::image type="content" source="media/search-security-api-keys/rest-headers.png" alt-text="Screenshot of the Headers section of a request in Postman." border="true":::
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-manage-encryption-keys.md
Previously updated : 01/20/2023 Last updated : 01/20/2024 - references_regions - ignite-2023
This article walks you through the steps of setting up customer-managed key (CMK
+ CMK encryption becomes operational when an object is created. You can't encrypt objects that already exist. CMK encryption occurs whenever an object is saved to disk, either data at rest for long-term storage or temporary data for short-term storage. With CMK, the disk never sees unencrypted data. > [!NOTE]
-> If an index is CMK encrypted, it is only accessible if the search service has access the key. If access to the key is revoked, the index is unusable and the service cannot be scaled until the index is deleted or access to the key is restored.
+> If an index is CMK encrypted, it is only accessible if the search service has access to the key. If access is revoked, the index is unusable and the service cannot be scaled until the index is deleted or access to the key is restored.
## CMK encrypted objects
Encryption is performed over the following content:
When you introduce CMK encryption, you're encrypting content twice. For the objects and fields noted in the previous section, content is first encrypted with your CMK, and secondly with the Microsoft-managed key. Content is doubly encrypted on data disks for long-term storage, and on temporary disks used for short-term storage.
-Enabling CMK encryption will increase index size and degrade query performance. Based on observations to date, you can expect to see an increase of 30-60 percent in query times, although actual performance will vary depending on the index definition and types of queries. Because of the negative performance impact, we recommend that you only enable this feature on indexes that really require it.
+Enabling CMK encryption increases index size and degrades query performance. Based on observations to date, you can expect to see an increase of 30-60 percent in query times, although actual performance varies depending on the index definition and types of queries. Because of this performance impact, we recommend that you enable this feature only on indexes that really require it.
Although double encryption is now available in all regions, support was rolled out in two phases:
Although double encryption is now available in all regions, support was rolled o
+ The second rollout on May 13, 2021 added encryption for temporary disks and extended CMK encryption to [all supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=search#select-product).
- If you're using CMK from a service created during the first rollout and you also want CMK encryption over temporary disks, you'll need to create a new search service in your region of choice and redeploy your content.
+ If you're using CMK from a service created during the first rollout and you also want CMK encryption over temporary disks, you need to create a new search service in your region of choice and redeploy your content.
## Prerequisites
The following tools and services are used in this scenario.
+ [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md). If you don't have one, [set up a new tenant](../active-directory/develop/quickstart-create-new-tenant.md).
-You should have a search client that can create the encrypted object. Into this code, you'll reference a key vault key and Active Directory registration information. This code could be a working app, or prototype code such as the [C# code sample DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK).
+You should have a search client that can create the encrypted object. Into this code, you reference a key vault key and Active Directory registration information. This code could be a working app, or prototype code such as the [C# code sample DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK).
> [!TIP] > You can use [Postman](search-get-started-rest.md) or [Azure PowerShell](search-get-started-powershell.md) to call REST APIs that create indexes and synonym maps that include an encryption key parameter. You can also use Azure SDKs. Portal support for adding a key to indexes or synonym maps isn't supported.
You can set both properties using the portal, PowerShell, or Azure CLI commands.
## 2 - Create a key in Key Vault
-Skip key generation if you already have a key in Azure Key Vault that you want to use, but collect the key identifier. You'll need this information when creating an encrypted object.
+Skip key generation if you already have a key in Azure Key Vault that you want to use, but collect the key identifier. You need this information when creating an encrypted object.
1. Sign in to the [Azure portal](https://portal.azure.com) and open your key vault overview page.
Skip key generation if you already have a key in Azure Key Vault that you want t
1. Select **Create** to start the deployment.
-1. Select the key, select the current version, and then make a note of the key identifier. It's composed of the **key value Uri**, the **key name**, and the **key version**. You'll need the identifier to define an encrypted index in Azure AI Search.
+1. Select the key, select the current version, and then make a note of the key identifier. It's composed of the **key vault URI**, the **key name**, and the **key version**. You need the identifier to define an encrypted index in Azure AI Search.
:::image type="content" source="media/search-manage-encryption-keys/cmk-key-identifier.png" alt-text="Create a new key vault key" border="true":::
Skip key generation if you already have a key in Azure Key Vault that you want t
You have several options for accessing the encryption key at run time. The simplest approach is to retrieve the key using the managed identity and permissions of your search service. You can use either a system or user-managed identity. Doing so allows you to omit the steps for application registration and application secrets, and simplifies the encryption key definition.
-Alternatively, you can create and register a Microsoft Entra application. The search service will provide the application ID on requests.
+Alternatively, you can create and register a Microsoft Entra application. The search service provides the application ID on requests.
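When you take the registered-application route, the object's encryption key definition carries the application's credentials. A hedged sketch of that property (all values are placeholders):

```json
"encryptionKey": {
  "keyVaultUri": "https://<your-key-vault>.vault.azure.net",
  "keyVaultKeyName": "<your-key-name>",
  "keyVaultKeyVersion": "<your-key-version>",
  "accessCredentials": {
    "applicationId": "<registered-application-id>",
    "applicationSecret": "<registered-application-secret>"
  }
}
```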
A managed identity enables your search service to authenticate to Azure Key Vault without storing credentials (ApplicationID or ApplicationSecret) in code. The lifecycle of this type of managed identity is tied to the lifecycle of your search service, which can only have one managed identity. For more information about how managed identities work, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
A managed identity enables your search service to authenticate to Azure Key Vaul
![Turn on system assigned managed identity](./media/search-managed-identities/turn-on-system-assigned-identity.png "Turn on system assigned managed identity")
-Conditions that will prevent you from adopting this approach include:
+Conditions that prevent you from adopting this approach include:
+ You can't directly grant your search service access permissions to the key vault (for example, if the search service is in a different Active Directory tenant than the Azure Key Vault).
-+ A single search service is required to host multiple encrypted indexes or synonym maps, each using a different key from a different key vault, where each key vault must use **a different identity** for authentication. Because a search service can only have one managed identity, a requirement for multiple identities will disqualify the simplified approach for your scenario.
++ A single search service is required to host multiple encrypted indexes or synonym maps, each using a different key from a different key vault, where each key vault must use **a different identity** for authentication. Because a search service can only have one managed identity, a requirement for multiple identities rules out the simplified approach for your scenario. ### [**User-managed identity (preview)**](#tab/managed-id-user)
Conditions that will prevent you from adopting this approach include:
The identity property takes a type and one or more fully qualified user-assigned identities:
- * **type** is the type of identity used for the resource. The type 'SystemAssigned, UserAssigned' includes both an identity created by the system and a set of user assigned identities. The type 'None' will remove all identities from the service.
+ * **type** is the type of identity used for the resource. The type 'SystemAssigned, UserAssigned' includes both an identity created by the system and a set of user assigned identities. The type 'None' removes all identities from the service.
* **userAssignedIdentities** includes the details of the user-managed identity. * User-managed identity format: * /subscriptions/**subscription ID**/resourcegroups/**resource group name**/providers/Microsoft.ManagedIdentity/userAssignedIdentities/**managed identity name**
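Put together, a hedged sketch of the identity block in a service definition (the subscription, resource group, and identity names are placeholders):

```json
"identity": {
  "type": "SystemAssigned, UserAssigned",
  "userAssignedIdentities": {
    "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>": {}
  }
}
```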
Conditions that will prevent you from adopting this approach include:
1. Give the registration a name, perhaps a name that is similar to the search application name. Select **Register**.
-1. Once the app registration is created, copy the Application ID. You'll need to provide this string to your application.
+1. Once the app registration is created, copy the Application ID. You need to provide this string to your application.
If you're stepping through the [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK), paste this value into the **appsettings.json** file.
Conditions that will prevent you from adopting this approach include:
## 4 - Grant permissions
-In this step, you'll create an access policy in Key Vault. This policy gives the application you registered with Active Directory permission to use your customer-managed key.
+In this step, you create an access policy in Key Vault. This policy gives the application you registered with Active Directory permission to use your customer-managed key.
-Access permissions could be revoked at any given time. Once revoked, any search service index or synonym map that uses that key vault will become unusable. Restoring key vault access permissions at a later time will restore index\synonym map access. For more information, see [Secure access to a key vault](../key-vault/general/security-features.md).
+Access permissions can be revoked at any time. Once revoked, any search service index or synonym map that uses that key vault becomes unusable. Restoring key vault access permissions at a later time restores index and synonym map access. For more information, see [Secure access to a key vault](../key-vault/general/security-features.md).
1. Still in the Azure portal, open your key vault **Overview** page.
Once you create the encrypted object on the search service, you can use it as yo
Azure policies help to enforce organizational standards and to assess compliance at-scale. Azure AI Search has an optional [built-in policy for service-wide CMK enforcement](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F76a56461-9dc0-40f0-82f5-2453283afa2f).
-In this section, you'll set the policy that defines a CMK standard for your search service. Then, you'll set up your search service to enforce this policy.
+In this section, you set the policy that defines a CMK standard for your search service. Then, you set up your search service to enforce this policy.
1. Navigate to the [built-in policy](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F76a56461-9dc0-40f0-82f5-2453283afa2f) in your web browser. Select **Assign**
In this section, you'll set the policy that defines a CMK standard for your sear
1. Set up the [policy scope](../governance/policy/concepts/scope.md). In the **Parameters** section, uncheck **Only show parameters...** and set **Effect** to [**Deny**](../governance/policy/concepts/effects.md#deny).
- During evaluation of the request, a request that matches a deny policy definition is marked as non-compliant. Assuming the standard for your service is CMK encryption, "deny" means that requests that *don't* specify CMK encryption are non-compliant.
+ During evaluation of the request, a request that matches a deny policy definition is marked as noncompliant. Assuming the standard for your service is CMK encryption, "deny" means that requests that *don't* specify CMK encryption are noncompliant.
:::image type="content" source="media/search-security-manage-encryption-keys/effect-deny.png" alt-text="Screenshot of changing built-in CMK policy effect to deny." border="true":::
You can now send the indexer creation request, and then start using it normally.
## Work with encrypted content
-With customer-managed key encryption, you'll notice latency for both indexing and queries due to the extra encrypt/decrypt work. Azure AI Search doesn't log encryption activity, but you can monitor key access through key vault logging. We recommend that you [enable logging](../key-vault/general/logging.md) as part of key vault configuration.
+With customer-managed key encryption, you might notice latency for both indexing and queries due to the extra encrypt/decrypt work. Azure AI Search doesn't log encryption activity, but you can monitor key access through key vault logging. We recommend that you [enable logging](../key-vault/general/logging.md) as part of key vault configuration.
Key rotation is expected to occur over time. Whenever you rotate keys, it's important to follow this sequence:
Key rotation is expected to occur over time. Whenever you rotate keys, it's impo
1. [Update the encryptionKey properties](/rest/api/searchservice/update-index) on an index or synonym map to use the new values. Only objects that were originally created with this property can be updated to use a different value. 1. Disable or delete the previous key in the key vault. Monitor key access to verify the new key is being used.
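For the update step, a hedged sketch of the `encryptionKey` property pointing at the new key version (vault and key names are placeholders; `accessCredentials` is omitted on the assumption that a managed identity retrieves the key):

```json
"encryptionKey": {
  "keyVaultUri": "https://<your-key-vault>.vault.azure.net",
  "keyVaultKeyName": "<your-key-name>",
  "keyVaultKeyVersion": "<new-key-version>"
}
```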
-For performance reasons, the search service caches the key for up to several hours. If you disable or delete the key without providing a new one, queries will continue to work on a temporary basis until the cache expires. However, once the search service can no longer decrypt content, you'll get this message: "Access forbidden. The query key used might have been revoked - please retry."
+For performance reasons, the search service caches the key for up to several hours. If you disable or delete the key without providing a new one, queries continue to work on a temporary basis until the cache expires. However, once the search service can no longer decrypt content, you get this message: "Access forbidden. The query key used might have been revoked - please retry."
## Next steps
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-an-index.md
- ignite-2023 Previously updated : 06/29/2023 Last updated : 01/19/2024 # Indexes in Azure AI Search
-In Azure AI Search, a *search index* is your searchable content, available to the search engine for indexing, full text search, and filtered queries. An index is defined by a schema and saved to the search service, with data import following as a second step. This content exists within your search service, apart from your primary data stores, which is necessary for the millisecond response times expected in modern applications. Except for specific indexing scenarios, the search service will never connect to or query your local data.
+In Azure AI Search, a *search index* is your searchable content, available to the search engine for indexing, full text search, vector search, hybrid search, and filtered queries. An index is defined by a schema and saved to the search service, with data import following as a second step. This content exists within your search service, apart from your primary data stores, which is necessary for the millisecond response times expected in modern search applications. Except for indexer-driven indexing scenarios, the search service never connects to or queries your source data.
-If you're creating and managing a search index, this article helps you understand the following:
+If you want to create and manage a search index, this article helps you understand the following points:
+ Content (documents and schema)
-+ Physical representation
++ Physical data structure
+ Basic operations

Prefer to be hands-on right away? See [Create a search index](search-how-to-create-search-index.md) instead.
In Azure AI Search, indexes contain *search documents*. Conceptually, a document is a single unit of searchable data in your index. For example, a retailer might have a document for each product, a news organization might have a document for each article, a travel site might have a document for each hotel and destination, and so forth. Mapping these concepts to more familiar database equivalents: a *search index* equates to a *table*, and *documents* are roughly equivalent to *rows* in a table.
-The structure of a document is determined by the index schema, as illustrated below. The "fields" collection is typically the largest part of an index, where each field is named, assigned a [data type](/rest/api/searchservice/Supported-data-types), and attributed with allowable behaviors that determine how it's used.
+The structure of a document is determined by the index schema, as illustrated in the following example. The "fields" collection is typically the largest part of an index, where each field is named, assigned a [data type](/rest/api/searchservice/Supported-data-types), and attributed with allowable behaviors that determine how it's used.
```json {
"fields": [ { "name": "name_of_field",
- "type": "Edm.String | Collection(Edm.String) | Edm.Int32 | Edm.Int64 | Edm.Double | Edm.Boolean | Edm.DateTimeOffset | Edm.GeographyPoint",
+ "type": "Edm.String | Collection(Edm.String) | Collection(Edm.Single) | Edm.Int32 | Edm.Int64 | Edm.Double | Edm.Boolean | Edm.DateTimeOffset | Edm.GeographyPoint",
"searchable": true (default where applicable) | false (only Edm.String and Collection(Edm.String) fields can be searchable), "filterable": true (default) | false, "sortable": true (default where applicable) | false (Collection(Edm.String) fields cannot be sortable),
"analyzer": "name_of_analyzer_for_search_and_indexing", (only if 'searchAnalyzer' and 'indexAnalyzer' are not set) "searchAnalyzer": "name_of_search_analyzer", (only if 'indexAnalyzer' is set and 'analyzer' is not set) "indexAnalyzer": "name_of_indexing_analyzer", (only if 'searchAnalyzer' is set and 'analyzer' is not set)
- "synonymMaps": [ "name_of_synonym_map" ] (optional, only one synonym map per field is currently supported)
+ "normalizer": "name_of_normalizer", (applies to fields that are filterable)
+ "synonymMaps": "name_of_synonym_map", (optional, only one synonym map per field is currently supported)
+ "dimensions": "number of dimensions used by an embedding model", (applies to vector fields only, of type Collection(Edm.Single))
+ "vectorSearchProfile": "name_of_vector_profile" (indexes can have many configurations, a field can use just one)
} ], "suggesters": [ ],
"tokenFilters":(optional)[ ... ], "defaultScoringProfile": (optional) "...", "corsOptions": (optional) { },
- "encryptionKey":(optional){ }
- }
+ "encryptionKey":(optional){ },
+ "semantic":(optional){ },
+ "vectorSearch":(optional){ }
} ```
-Other elements are collapsed for brevity, but the following links can provide the detail:
+Other elements are collapsed for brevity, but the following links provide details:
-+ [Suggesters](index-add-suggesters.md) support type-ahead queries like autocomplete
-+ [Scoring Profiles](index-add-scoring-profiles.md) are used for relevance tuning
-+ [Analyzers](search-analyzers.md) are used to process strings into tokens according to linguistic rules or other characteristics supported by the analyzer
-+ [Cross-origin remote scripting (CORS)](search-how-to-create-search-index.md#corsoptions) is used for apps that issues requests from different domains
-+ [Encryption key](search-security-manage-encryption-keys.md) is used for double-encryption of sensitive content in the index.
++ [suggesters](index-add-suggesters.md) support type-ahead queries like autocomplete.
++ [scoringProfiles](index-add-scoring-profiles.md) are used for relevance tuning.
++ [analyzers](search-analyzers.md) are used to process strings into tokens according to linguistic rules or other characteristics supported by the analyzer.
++ [corsOptions](search-how-to-create-search-index.md#corsoptions), or Cross-origin remote scripting (CORS), is used for apps that issue requests from different domains.
++ [encryptionKey](search-security-manage-encryption-keys.md) configures double-encryption of sensitive content in the index.
++ [semantic](semantic-how-to-query-request.md) configures semantic reranking in full text and hybrid search.
++ [vectorSearch](vector-search-how-to-create-index.md) configures vector fields and queries.

### Field definitions
-A search document is defined by the "fields" collection in the body of [Create Index request](/rest/api/searchservice/create-index). You need fields for document identification (keys), storing searchable text, and fields for supporting filters, facets, and sorts. You might also need fields for data that a user never sees. For example, you might want fields for profit margins or marketing promotions that you can use to modify search rank.
+A search document is defined by the "fields" collection in the body of [Create Index request](/rest/api/searchservice/create-index). You need fields for document identification (keys), storing searchable text, and fields for supporting filters, facets, and sorting. You might also need fields for data that a user never sees. For example, you might want fields for profit margins or marketing promotions that you can use in a scoring profile to boost a search score.
-If incoming data is hierarchical in nature, you can represent it within an index as a [complex type](search-howto-complex-data-types.md), used to represent nested structures. The built-in sample data set, Hotels, illustrates complex types using an Address (contains multiple subfields) that has a one-to-one relationship with each hotel, and a Rooms complex collection, where multiple rooms are associated with each hotel.
+If incoming data is hierarchical in nature, you can represent it within an index as a [complex type](search-howto-complex-data-types.md), used for nested structures. The built-in sample data set, Hotels, illustrates complex types using an Address (contains multiple subfields) that has a one-to-one relationship with each hotel, and a Rooms complex collection, where multiple rooms are associated with each hotel.
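For example, a hedged sketch of those two shapes with the .NET SDK (subfield names drawn from the hotels sample, but not an exact copy of its schema) looks like this:

```csharp
using Azure.Search.Documents.Indexes.Models;

// A complex "Address" field with nested subfields (one address per hotel).
var address = new ComplexField("Address")
{
    Fields =
    {
        new SearchableField("StreetAddress"),
        new SearchableField("City") { IsFilterable = true, IsFacetable = true }
    }
};

// A complex collection: many rooms per hotel.
var rooms = new ComplexField("Rooms", collection: true)
{
    Fields =
    {
        new SearchableField("Description"),
        new SimpleField("BaseRate", SearchFieldDataType.Double) { IsFilterable = true }
    }
};
```

Both definitions go into the index's "fields" collection alongside the simple top-level fields.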
<a name="index-attributes"></a>
String fields are often marked as "searchable" and "retrievable". Fields used to
|Attribute|Description|
|---|---|
-|"searchable" |Full-text searchable, subject to lexical analysis such as word-breaking during indexing. If you set a searchable field to a value like "sunny day", internally it's split into the individual tokens "sunny" and "day". For details, see [How full text search works](search-lucene-query-architecture.md).|
+|"searchable" |Full-text or vector searchable. Text fields are subject to lexical analysis such as word-breaking during indexing. If you set a searchable field to a value like "sunny day", internally it's split into the individual tokens "sunny" and "day". For details, see [How full text search works](search-lucene-query-architecture.md).|
|"filterable" |Referenced in $filter queries. Filterable fields of type `Edm.String` or `Collection(Edm.String)` don't undergo word-breaking, so comparisons are for exact matches only. For example, if you set such a field f to "sunny day", `$filter=f eq 'sunny'` finds no matches, but `$filter=f eq 'sunny day'` will. |
|"sortable" |By default the system sorts results by score, but you can configure sort based on fields in the documents. Fields of type `Collection(Edm.String)` can't be "sortable". |
|"facetable" |Typically used in a presentation of search results that includes a hit count by category (for example, hotels in a specific city). This option can't be used with fields of type `Edm.GeographyPoint`. Fields of type `Edm.String` that are filterable, "sortable", or "facetable" can be at most 32 kilobytes in length. For details, see [Create Index (REST API)](/rest/api/searchservice/create-index).|
The size of an index is determined by:
Document composition and quantity are determined by what you choose to import. Remember that a search index should only contain searchable content. If source data includes binary fields, omit those fields unless you're using AI enrichment to crack and analyze the content to create text searchable information.
-Field attributes determine behaviors. To support those behaviors, the indexing process creates the necessary data structures. For example, "searchable" invokes [full text search](search-lucene-query-architecture.md), which scans inverted indices for the tokenized term. In contrast, a "filterable" or "sortable" attribute supports iteration over unmodified strings. The example in the next section shows variations in index size based on the selected attributes.
+Field attributes determine behaviors. To support those behaviors, the indexing process creates the necessary data structures. For example, for a field of type `Edm.String`, "searchable" invokes [full text search](search-lucene-query-architecture.md), which scans inverted indices for the tokenized term. In contrast, a "filterable" or "sortable" attribute supports iteration over unmodified strings. The example in the next section shows variations in index size based on the selected attributes.
[**Suggesters**](index-add-suggesters.md) are constructs that support type-ahead or autocomplete queries. As such, when you include a suggester, the indexing process creates the data structures necessary for verbatim character matches. Suggesters are implemented at the field level, so choose only those fields that are reasonable for type-ahead.
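As a small illustration with the .NET SDK (index and field names assumed from the hotels sample, not a prescribed configuration), a suggester scoped to a single field looks like this:

```csharp
using System.Collections.Generic;
using Azure.Search.Documents.Indexes.Models;

// Minimal field collection for the example.
var fields = new List<SearchField>
{
    new SimpleField("HotelId", SearchFieldDataType.String) { IsKey = true },
    new SearchableField("HotelName") { IsSortable = true }
};

var index = new SearchIndex("hotels-sample-index", fields);

// Source type-ahead matches only from HotelName, which keeps the extra
// suggester data structures small.
index.Suggesters.Add(new SearchSuggester("sg", "HotelName"));
```

Limiting the suggester to a few short fields is the main lever for keeping its storage overhead down.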
Although these index variants are somewhat artificial, we can refer to them for
+ "filterable", "sortable", "facetable" consume more storage.
+ **suggester** has a large potential for increasing index size, but not as much as the screenshot would indicate (all fields that could be made suggester-aware were selected, which isn't a likely scenario in most indexes).
-Also not reflected in the above table is the effect of [analyzers](search-analyzers.md). If you're using the edgeNgram tokenizer to store verbatim sequences of characters (`a, ab, abc, abcd`), the size of the index will be larger than if you used a standard analyzer.
+Also not reflected in the previous table is the effect of [analyzers](search-analyzers.md). If you use the edgeNgram tokenizer to store verbatim sequences of characters (`a, ab, abc, abcd`), the index is larger than if you use the standard analyzer.
## Basic operations and interaction
Now that you have a better idea of what an index is, this section introduces ind
### Index isolation
-In Azure AI Search, you'll work with one index at a time, where all index-related operations target a single index. There's no concept of related indexes or the joining of independent indexes for either indexing or querying.
+In Azure AI Search, you work with one index at a time, where all index-related operations target a single index. There's no concept of related indexes or the joining of independent indexes for either indexing or querying.
### Continuously available
But you'll also want to become familiar with methodologies for loading an index
+ [Create a search index](search-how-to-create-search-index.md)
++ [Create a vector index](vector-search-how-to-create-index.md)
+ [Create an index alias](search-how-to-alias.md)
+ [Data import overview](search-what-is-data-import.md)
-+ [Add, Update or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents)
++ [Load an index](search-how-to-load-search-index.md)
search Tutorial Optimize Indexing Push Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-optimize-indexing-push-api.md
Azure AI Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index: *push* your data into the index programmatically, or *point* an [Azure AI Search indexer](search-indexer-overview.md) at a supported data source to *pull* in the data.
-This tutorial describes how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can [download and run the sample application](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing). This article explains the key aspects of the application and factors to consider when indexing data.
+This tutorial explains how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can [download and run the sample application](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing). This article explains the key aspects of the application and what factors to consider when indexing data.
-This tutorial uses C# and the [Azure.Search.Documents library from the Azure SDK for .NET](/dotnet/api/overview/azure/search) to perform the following tasks:
+This tutorial uses C# and the [Azure.Search.Documents library](/dotnet/api/overview/azure/search) from the Azure SDK for .NET to perform the following tasks:
> [!div class="checklist"] > * Create an index
Source code for this tutorial is in the [optimize-data-indexing/v11](https://git
Factors affecting indexing speeds are listed next. You can learn more in [Index large data sets](search-howto-large-index.md).
-+ **Service tier and number of partitions/replicas** - Adding partitions and increasing your tier will both increase indexing speeds.
-+ **Index Schema** - Adding fields and adding additional properties to fields (such as *searchable*, *facetable*, or *filterable*) both reduce indexing speeds.
++ **Service tier and number of partitions/replicas** - Adding partitions or upgrading your tier increases indexing speeds.
++ **Index schema complexity** - Adding fields and field properties lowers indexing speeds. Smaller indexes are faster to index.
+ **Batch size** - The optimal batch size varies based on your index schema and dataset.
-+ **Number of threads/workers** - a single thread won't take full advantage of indexing speeds
-+ **Retry strategy** - An exponential backoff retry strategy should be used to optimize indexing.
++ **Number of threads/workers** - A single thread won't take full advantage of indexing speeds.
++ **Retry strategy** - An exponential backoff retry strategy is a best practice for optimum indexing.
+ **Network data transfer speeds** - Data transfer speeds can be a limiting factor. Index data from within your Azure environment to increase data transfer speeds.

## 1 - Create Azure AI Search service
-To complete this tutorial, you'll need an Azure AI Search service, which you can [create in the portal](search-create-service-portal.md). We recommend using the same tier you plan to use in production so that you can accurately test and optimize indexing speeds.
+To complete this tutorial, you need an Azure AI Search service, which you can [create in the portal](search-create-service-portal.md). We recommend using the same tier you plan to use in production so that you can accurately test and optimize indexing speeds.
### Get an admin api-key and URL for Azure AI Search
-API calls require the service URL and an access key. A search service is created with both, so if you added Azure AI Search to your subscription, follow these steps to get the necessary information:
+This tutorial uses key-based authentication. Copy an admin API key to paste into the **appsettings.json** file.
1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
This simple C#/.NET console app performs the following tasks:
+ **Hotel.cs** and **Address.cs** contain the schema that defines the index
+ **DataGenerator.cs** contains a simple class to make it easy to create large amounts of hotel data
- + **ExponentialBackoff.cs** contains code to optimize the indexing process as described below
+ + **ExponentialBackoff.cs** contains code to optimize the indexing process as described in this article
+ **Program.cs** contains functions that create and delete the Azure AI Search index, indexes batches of data, and tests different batch sizes

### Creating the index
-This sample program uses the .NET SDK to define and create an Azure AI Search index. It takes advantage of the `FieldBuilder` class to generate an index structure from a C# data model class.
+This sample program uses the Azure SDK for .NET to define and create an Azure AI Search index. It takes advantage of the `FieldBuilder` class to generate an index structure from a C# data model class.
The data model is defined by the Hotel class, which also contains references to the Address class. The FieldBuilder drills down through multiple class definitions to generate a complex data structure for the index. Metadata tags are used to define the attributes of each field, such as whether it's searchable or sortable.
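As a rough sketch of how those pieces fit together (illustrative class and property names, not the sample's actual **Hotel.cs** and **Address.cs**), attributes on the model drive what `FieldBuilder` generates:

```csharp
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// FieldBuilder converts the annotated classes below into index field definitions.
var fields = new FieldBuilder().Build(typeof(Hotel));
var index = new SearchIndex("optimize-indexing", fields);

public class Hotel
{
    // The key field uniquely identifies each search document.
    [SimpleField(IsKey = true)]
    public string HotelId { get; set; }

    // Searchable fields participate in full text search; IsSortable enables ordering.
    [SearchableField(IsSortable = true)]
    public string HotelName { get; set; }

    // FieldBuilder drills into this nested class to produce a complex field.
    public Address Address { get; set; }
}

public class Address
{
    [SearchableField(IsFilterable = true)]
    public string City { get; set; }

    [SimpleField(IsFilterable = true, IsFacetable = true)]
    public string PostalCode { get; set; }
}
```

Keeping the attributes on the model means the index definition and the documents you upload stay in sync, which is why the sample favors `FieldBuilder` over hand-writing the field collection.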
private static async Task CreateIndexAsync(string indexName, SearchIndexClient i
A simple class is implemented in the **DataGenerator.cs** file to generate data for testing. The sole purpose of this class is to make it easy to generate a large number of documents with a unique ID for indexing.
-To get a list of 100,000 hotels with unique IDs, you'd run the following lines of code:
+To get a list of 100,000 hotels with unique IDs, run the following lines of code:
```csharp
long numDocuments = 100000;
The schema of your index has an effect on indexing speeds. For this reason, it m
Azure AI Search supports the following APIs to load single or multiple documents into an index:
-+ [Add, Update, or Delete Documents (REST API)](/rest/api/searchservice/AddUpdate-or-Delete-Documents)
++ [Documents - Index (REST API)](/rest/api/searchservice/documents)
+ [IndexDocumentsAction class](/dotnet/api/azure.search.documents.models.indexdocumentsaction) or [IndexDocumentsBatch class](/dotnet/api/azure.search.documents.models.indexdocumentsbatch)

Indexing documents in batches will significantly improve indexing performance. These batches can be up to 1000 documents, or up to about 16 MB per batch.
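As a hedged illustration of a single batch upload with the .NET SDK (the endpoint, admin key, and generated documents are placeholders, and `Hotel` refers to the sample's data model), the call looks roughly like this:

```csharp
using System;
using System.Linq;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Client for the target index (endpoint, key, and index name are placeholders).
var searchClient = new SearchClient(
    new Uri("https://<your-search-service>.search.windows.net"),
    "optimize-indexing",
    new AzureKeyCredential("<admin-api-key>"));

// Build one batch of 1,000 documents (staying under the ~16 MB request limit).
var hotels = Enumerable.Range(1, 1000)
    .Select(i => new Hotel { HotelId = i.ToString(), HotelName = $"Hotel {i}" })
    .ToList();

// A single round trip indexes the whole batch.
IndexDocumentsBatch<Hotel> batch = IndexDocumentsBatch.Upload(hotels);
IndexDocumentsResult result = await searchClient.IndexDocumentsAsync(batch);
```

The returned `IndexDocumentsResult` reports success or failure per document, which matters once you add retries for partial (207) failures later in the tutorial.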
Now that we've identified the batch size we intend to use, the next step is to b
+ Uses multiple threads/workers.
+ Implements an exponential backoff retry strategy.
+Uncomment lines 41 through 49 and rerun the program. On this run, the sample generates and sends batches of documents, up to 100,000 if you run the code without changing the parameters.
+ ### Use multiple threads/workers
-To take full advantage of Azure AI Search's indexing speeds, you'll likely need to use multiple threads to send batch indexing requests concurrently to the service.
+To take full advantage of Azure AI Search's indexing speeds, use multiple threads to send batch indexing requests concurrently to the service.
Several of the key considerations previously mentioned can affect the optimal number of threads. You can modify this sample and test with different thread counts to determine the optimal thread count for your scenario. However, as long as you have several threads running concurrently, you should be able to take advantage of most of the efficiency gains.
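A minimal sketch of the pattern, reusing the `searchClient` and `hotels` from the previous snippet rather than the sample's actual **ExponentialBackoff.cs** logic, keeps a bounded set of in-flight batch uploads:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Azure.Search.Documents.Models;

// Keep at most 8 uploads in flight at a time; 8 is an arbitrary starting point.
var pending = new List<Task>();
foreach (Hotel[] chunk in hotels.Chunk(1000))   // Enumerable.Chunk requires .NET 6 or later
{
    pending.Add(searchClient.IndexDocumentsAsync(IndexDocumentsBatch.Upload(chunk)));
    if (pending.Count >= 8)
    {
        // Wait for any one upload to finish before starting the next batch.
        Task finished = await Task.WhenAny(pending);
        pending.Remove(finished);
    }
}
await Task.WhenAll(pending);
```

Test different worker counts against your own tier and schema; past a certain point, more concurrency only increases throttling.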
As you ramp up the requests hitting the search service, you might encounter [HTT
If a failure happens, requests should be retried using an [exponential backoff retry strategy](/dotnet/architecture/microservices/implement-resilient-applications/implement-retries-exponential-backoff).
-Azure AI Search's .NET SDK automatically retries 503s and other failed requests but you'll need to implement your own logic to retry 207s. Open-source tools such as [Polly](https://github.com/App-vNext/Polly) can also be used to implement a retry strategy.
+Azure AI Search's .NET SDK automatically retries 503s and other failed requests but you should implement your own logic to retry 207s. Open-source tools such as [Polly](https://github.com/App-vNext/Polly) can be useful in a retry strategy.
-In this sample, we implement our own exponential backoff retry strategy. To implement this strategy, we start by defining some variables including the `maxRetryAttempts` and the initial `delay` for a failed request:
+In this sample, we implement our own exponential backoff retry strategy. We start by defining some variables including the `maxRetryAttempts` and the initial `delay` for a failed request:
```csharp
// Create batch of documents for indexing
After the function is finished running, you can verify that all of the documents
## 6 - Explore index
-You can explore the populated search index after the program has run programatically or using the [**Search explorer**](search-explorer.md) in the portal.
+After the program has run, you can explore the populated search index programmatically or by using the [**Search explorer**](search-explorer.md) in the portal.
### Programmatically
var indexStats = await indexClient.GetIndexStatisticsAsync(indexName);
### Azure portal
-In Azure portal, open the search service **Overview** page, and find the **optimize-indexing** index in the **Indexes** list.
+In the Azure portal, from the left navigation pane, find the **optimize-indexing** index in the **Indexes** list.
![List of Azure AI Search indexes](media/tutorial-optimize-data-indexing/portal-output.png "List of Azure AI Search indexes")
You can find and manage resources in the portal, using the **All resources** or
## Next steps
-Now that you're familiar with the concept of ingesting data efficiently, let's take a closer look at Lucene query architecture and how full text search works in Azure AI Search.
+To learn more about indexing large amounts of data, try the following tutorial.
> [!div class="nextstepaction"]
-> [How full text search works in Azure AI Search](search-lucene-query-architecture.md)
+> [Tutorial: Index large data from Apache Spark using SynapseML and Azure AI Search](search-synapseml-cognitive-services.md)
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Microsoft Sentinel solution for SAP® applications is certified for SAP S/4HANA
> - Microsoft Sentinel identifies a production system by looking at the configuration on the SAP system. To do this, Microsoft Sentinel searches for a production entry in the T000 table.
> - [View the roles of your connected production systems](../monitor-sap-system-health.md).
-The Microsoft Sentinel for SAP data connector is an agent, installed on a VM or a physical server that collects application logs from across the entire SAP system landscape. It then sends those logs to your Log Analytics workspace in Microsoft Sentinel. You can then use the other content in the Threat Monitoring for SAP solution ΓÇô the analytics rules, workbooks, and watchlists ΓÇô to gain insight into your organization's SAP environment and to detect and respond to security threats.
+The Microsoft Sentinel for SAP data connector is an agent, installed on a VM, a physical server, or a Kubernetes cluster, that collects application logs from across the entire SAP system landscape for all of your SAP SIDs. It then sends those logs to your Log Analytics workspace in Microsoft Sentinel. You can then use the other content in the Threat Monitoring for SAP solution (the analytics rules, workbooks, and watchlists) to gain insight into your organization's SAP environment and to detect and respond to security threats.
+
+ This diagram shows a multi-SID SAP landscape with a split between productive and non-productive systems including the SAP Business Technology Platform. All of the systems and services are being onboarded to the Sentinel for SAP solution.
## Deployment milestones
sentinel Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/solution-overview.md
To help close this gap, Microsoft Sentinel offers the Microsoft Sentinel solutio
- It also allows you to build automated response processes to interact with your SAP systems to stop active security threats.
+- It also offers threat monitoring and detection for the SAP Business Technology Platform (BTP).
+ ## Solution details
+ This diagram shows a multi-SID SAP landscape with a split between productive and non-productive systems including the SAP Business Technology Platform. All of the systems and services are being onboarded to the Sentinel for SAP solution.
+ ### Log sources

The solution's data connector retrieves a wide variety of SAP Log Sources:
Learn more about the Microsoft Sentinel solution for SAP® applications:
- [Deploy the Microsoft Sentinel for SAP data connector with SNC](configure-snc.md) - [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+- [Deploy Microsoft Sentinel Solution for SAP® BTP](deploy-sap-btp-solution.md)
Troubleshooting:
site-recovery Unregister Vmm Server Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/unregister-vmm-server-script.md
+
+ Title: Unregister a VMM server script
+description: This article describes the cleanup script on the VMM server
+++ Last updated : 03/25/2021++++
+# Cleanup script on a VMM server
+If your VMM server was in a Disconnected state, download and run the cleanup script on the VMM server.
++
+```
+pushd .
+try
+{
+ $error.Clear()
+ "This script will remove the old Hyper-V Recovery Manager related properties for this VMM. This can be run in below scenarios :"
+ "1. Complete VMM site clean up."
+ "2. VMM site clean up in case the associated VMM has become unresponsive. Input in this case will be the VMM ID of the unresponsive server."
+
+ $choice = Read-Host "Enter your choice "
+
+ if($choice -eq 1)
+ {
+ $vmmid = get-itemproperty 'hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup' -Name VMMID
+ $vmmid = $vmmid.VmmID
+
+ # $fullCleanup = 1 indicates that clean up all hyper-V recovery manager settings from this VMM.
+ $fullCleanup = 1
+
+ }
+ else
+ {
+ try
+ {
+ [GUID]$vmmid = Read-Host "Enter the VMMId for the unresponsive VMM server "
+ }
+ catch
+ {
+ Write-Host "Error occurred" -ForegroundColor "Red"
+ $error[0]
+ return
+ }
+
+ # $fullCleanup = 0 indicates that clean up only those clouds/VMs which are protecting/protected by the objects on the given VMMId.
+ $fullCleanup = 0
+ }
+
+ if($vmmid -ne "")
+ {
+
+ Write-Host "Proceeding to remove Hyper-V Recovery Manager related properties for this VMM with ID: " $vmmid
+ Write-Host "Before running the script ensure that the VMM service is running."
+ Write-Host "In a VMM cluster ensure that the Windows Cluster service is running and run the script on each node."
+ Write-Host "The VMM service (or the Cluster role) will be stopped when the script runs. After the script completes, restart the VMM or Cluster service."
+
+ $choice = Read-Host "Do you want to continue (Y/N) ?"
+ ""
+ if($choice.ToLower() -eq "y" -or $choice.ToLower() -eq "yes" )
+ {
+ $isCluster = $false
+ $path = 'HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup'
+ $key = Get-Item -LiteralPath $path -ErrorAction SilentlyContinue
+ $name = 'HAVMMName'
+ if ($key)
+ {
+ $clusterName = $key.GetValue($name, $null)
+ if($clusterName -eq $null)
+ {
+ $serviceName = "SCVMMService"
+ $service = Get-Service -Name $serviceName
+ if ($service.Status -eq "Running")
+ {
+ "Stopping the VMM service..."
+ net stop $serviceName
+ }
+ else
+ {
+ if($service.Status -eq "Stopped")
+ {
+ "VMM service is not running."
+ }
+ else
+ {
+ "Could not stop the VMM service as it is starting or stopping. Please try again later"
+ return
+ }
+
+ }
+ }
+ else
+ {
+ $isCluster = $True
+ $isPrimaryNode = $false
+ $clusterName = $key.GetValue($name, $null)
+
+ Write-Host "Clustered VMM detected"
+
+ $clusService = Get-Service -Name ClusSvc
+ Add-Type -AssemblyName System.ServiceProcess
+ if ($clusService.Status -ne [System.ServiceProcess.ServiceControllerStatus]::Running)
+ {
+ Write-Host "Windows Cluster service is not running on this machine. Please start Windows cluster service before running this script"
+ return
+ }
+
+ $clusterResources = Get-ClusterResource -Cluster $clusterName
+ Write-Host "Searching for VMM cluster resource....."
+
+ foreach ($clusterResource in $clusterResources)
+ {
+ if ($clusterResource.Name -match 'VMM Service')
+ {
+ Write-Host "Found SCVMM Cluster Resource" $clusterResource
+ Write-Host "Cluster owner node is " $clusterResource.OwnerNode
+ $currentHostName = [System.Net.Dns]::GetHostName()
+ $clusterCheckpointList = get-clustercheckpoint -ResourceName $clusterResource.Name
+ Write-Host "Current node is " $currentHostName
+
+ if ([string]::Compare($clusterResource.OwnerNode, $currentHostName, $True) -eq 0)
+ {
+ $isPrimaryNode = $True
+ Write-Host "Current node owns VMM cluster resource"
+ Write-Host "Shutting VMM Cluster Resource down"
+ Stop-ClusterResource $clusterResource
+ }
+ else
+ {
+ Write-Error "Current node does not own VMM cluster resource. Please run on this script on $clusterResource.OwnerNode"
+ Exit
+ }
+
+ break
+ }
+ }
+ }
+ }
+ else
+ {
+        Write-Error "Failed to find registry keys associated with VMM"
+ return
+ }
+
+ ""
+ "Connect to SCVMM database using"
+ "1. Windows Authentication"
+ "2. SQL Server Authentication"
+
+ $mode = Read-Host "Enter your choice "
+ ""
+
+ cd 'hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings\Sql'
+ $connectionString = get-itemproperty . -Name ConnectionString
+ $conn = New-Object System.Data.SqlClient.SqlConnection
+
+ if($mode -eq 1)
+ {
+ "Connecting to SQL via Windows Authentication..."
+ $conn.ConnectionString = $connectionString.ConnectionString
+ }
+ else
+ {
+ "Connecting to SQL via SQL Server Authentication..."
+
+ $credential = Get-Credential
+ $loginName = $credential.UserName
+ $password = $credential.password
+ $password.MakeReadOnly();
+ $conn.ConnectionString = $connectionString.ConnectionString.ToString().split(";",2)[1]
+ $sqlcred = New-Object System.Data.SqlClient.SqlCredential($loginName, $password)
+ $conn.Credential = $sqlcred
+ }
+
+ Write-Host "Connection string: " $conn.ConnectionString
+ $conn.Open()
+ $transaction = $conn.BeginTransaction("CleanupTransaction");
+
+ try
+ {
+ $sql = "SELECT TOP 1 [Id]
+ FROM [sysobjects]
+ WHERE [Name] = 'tbl_DR_ProtectionUnit'
+ AND [xType] = 'U'"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $rdr = $cmd.ExecuteReader()
+ $PUTableExists = $rdr.HasRows
+ $rdr.Close()
+ $SCVMM2012R2Detected = $false
+ if($PUTableExists)
+ {
+ $sql = "SELECT [Id]
+ FROM [tbl_DR_ProtectionUnit]"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $rdr = $cmd.ExecuteReader()
+ $SCVMM2012R2Detected = $rdr.HasRows
+ $rdr.Close()
+ }
+
+ ""
+ "Getting all clouds configured for protection..."
+
+ $sql = "SELECT [PrimaryCloudID],
+ [RecoveryCloudID],
+ [PrimaryCloudName],
+ [RecoveryCloudName]
+ FROM [tbl_Cloud_CloudDRPairing]
+ WHERE [PrimaryVMMID] = @VMMId
+ OR [RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.Transaction = $transaction
+ $da = New-Object System.Data.SqlClient.SqlDataAdapter
+ $da.SelectCommand = $cmd
+ $ds = New-Object System.Data.DataSet
+ $da.Fill($ds, "Clouds") | Out-Null
+
+ if($ds.Tables["Clouds"].Rows.Count -eq 0 )
+ {
+ "No clouds were found in protected or protecting status."
+ }
+ else
+ {
+ "Cloud pairing list populated."
+
+ ""
+ "Listing the clouds and their VMs..."
+
+ $vmIds = @()
+
+ foreach ($row in $ds.tables["Clouds"].rows)
+ {
+ ""
+ "'{0}' protected by '{1}'" -f $row.PrimaryCloudName.ToString(), $row.RecoveryCloudName.ToString()
+
+ $sql = "SELECT [ObjectId],
+ [Name]
+ FROM [tbl_WLC_VObject]
+ WHERE [CloudId] IN (@PrimaryCloudId,@RecoveryCloudId)"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@PrimaryCloudId",$row.PrimaryCloudId.ToString()) | Out-Null
+ $cmd.Parameters.AddWithValue("@RecoveryCloudId",$row.RecoveryCloudId.ToString()) | Out-Null
+ $rdr = $cmd.ExecuteReader()
+ if($rdr.HasRows)
+ {
+ "VM list:"
+ }
+ else
+ {
+ "No VMs found."
+ }
+ while($rdr.Read())
+ {
+ Write-Host $rdr["Name"].ToString()
+ $vmIds = $vmIds + $rdr["ObjectId"].ToString();
+ }
+
+ $rdr.Close()
+ }
++
+ if($vmIds.Count -eq 0)
+ {
+ "No protected VMs are present."
+ }
+ else
+ {
+ ""
+ "Removing recovery settings from all protected VMs..."
+
+ if($SCVMM2012R2Detected)
+ {
+ $sql = "UPDATE vm
+ SET [DRState] = 0,
+ [DRErrors] = NULL,
+ [ProtectionUnitId] = NULL
+ FROM
+ [tbl_WLC_VMInstance] vm
+ INNER JOIN [tbl_WLC_VObject] vObj
+ ON vm.[VMInstanceId] = vObj.[ObjectId]
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON vObj.[CloudId] = cpair.[PrimaryCloudID]
+ OR vObj.[CloudId] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
+ else
+ {
+ $sql = "UPDATE vm
+ SET [DRState] = 0,
+ [DRErrors] = NULL
+ FROM
+ [tbl_WLC_VMInstance] vm
+ INNER JOIN [tbl_WLC_VObject] vObj
+ ON vm.[VMInstanceId] = vObj.[ObjectId]
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON vObj.[CloudId] = cpair.[PrimaryCloudID]
+ OR vObj.[CloudId] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
++
+ $sql = "UPDATE hwp
+ SET [IsDRProtectionRequired] = 0
+ FROM
+ [tbl_WLC_HWProfile] hwp
+ INNER JOIN [tbl_WLC_VObject] vObj
+ ON hwp.[HWProfileId] = vObj.[HWProfileId]
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON vObj.[CloudId] = cpair.[PrimaryCloudID]
+ OR vObj.[CloudId] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "Recovery settings removed successfully for {0} VMs" -f $vmIds.Count
+ }
++
+ ""
+ "Removing recovery settings from all clouds..."
+ if($SCVMM2012R2Detected)
+ {
+ if($fullCleanup -eq 1)
+ {
+ $sql = "DELETE phost
+ FROM [tbl_DR_ProtectionUnit_HostRelation] phost
+ INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
+ ON phost.[ProtectionUnitId] = csr.[ScopeId]
+ WHERE csr.[ScopeType] = 214"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
++
+ $sql = "UPDATE [tbl_Cloud_Cloud]
+ SET [IsDRProtected] = 0,
+ [IsDRProvider] = 0,
+ [DisasterRecoverySupported] = 0"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ }
+ else
+ {
+ $sql = "DELETE phost
+ FROM [tbl_DR_ProtectionUnit_HostRelation] phost
+ INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
+ ON phost.[ProtectionUnitId] = csr.[ScopeId]
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON csr.[CloudId] = cpair.[primaryCloudId]
+ OR csr.[CloudId] = cpair.[recoveryCloudId]
+ WHERE csr.ScopeType = 214
+ AND cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ $sql = "UPDATE cloud
+ SET [IsDRProtected] = 0,
+ [IsDRProvider] = 0
+ FROM
+ [tbl_Cloud_Cloud] cloud
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON cloud.[ID] = cpair.[PrimaryCloudID]
+ OR cloud.[ID] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ }
+
+ }
+
+ # VMM 2012 SP1 detected.
+ else
+ {
+ $sql = "UPDATE cloud
+ SET [IsDRProtected] = 0,
+ [IsDRProvider] = 0
+ FROM
+ [tbl_Cloud_Cloud] cloud
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON cloud.[ID] = cpair.[PrimaryCloudID]
+ OR cloud.[ID] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
+
+ "Recovery settings removed successfully."
+
+ ""
+ "Deleting cloud pairing entities..."
+
+ $sql = "DELETE FROM [tbl_Cloud_CloudDRPairing]
+ WHERE [PrimaryVMMID] = @VMMId
+ OR [RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "Cloud pairing entities deleted successfully."
+ }
++
+ if ($SCVMM2012R2Detected)
+ {
+ "Removing SAN related entries"
+
+ $sql = "DELETE sanMap
+ FROM [tbl_DR_ProtectionUnit_StorageArray] sanMap
+ INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
+ ON sanMap.[ProtectionUnitId] = csr.[ScopeId]
+ WHERE csr.[ScopeType] = 214"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "SAN related entities deleted successfully"
+ }
++
+ if($fullCleanup -eq 1)
+ {
+ # In case of full cleanup reset all VMs protection data.
+ ""
+ "Removing stale entries for VMs..."
+ if($SCVMM2012R2Detected)
+ {
+ $sql = "UPDATE [tbl_WLC_VMInstance]
+ SET [DRState] = 0,
+ [DRErrors] = NULL,
+ [ProtectionUnitId] = NULL"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
+ else
+ {
+ $sql = "UPDATE [tbl_WLC_VMInstance]
+ SET [DRState] = 0,
+ [DRErrors] = NULL"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
++
+ $sql = "UPDATE [tbl_WLC_HWProfile]
+ SET [IsDRProtectionRequired] = 0"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+    # Done removing stale entries
+
+ # Cloud publish settings and registration details are cleaned up even if there are no paired clouds.
+ if($SCVMM2012R2Detected)
+ {
+ ""
+ "Removing cloud publish settings..."
+
+ # Currently 214 scopeType points to only ProtectionProvider = 1,2 (HVR1 and HVR2).
+ # Once new providers are introduced appropriate filtering should be done before delete
+ # in below two queries.
+ $sql = "DELETE punit
+ FROM [tbl_DR_ProtectionUnit] punit
+ INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
+ ON punit.[ID] = csr.[ScopeId]
+ WHERE csr.[ScopeType] = 214"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
++
+ $sql = "DELETE FROM [tbl_Cloud_CloudScopeRelation]
+ WHERE [ScopeType] = 214"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+ "Cloud publish settings removed successfully."
+ }
+
+ ""
+ "Un-registering VMM..."
+
+ $currentTime = Get-Date
+ $sql = "UPDATE [tbl_DR_VMMRegistrationDetails]
+ SET [DRSubscriptionId] = '',
+ [VMMFriendlyName] = '',
+ [DRAdapterInstalledVersion] = '',
+ [LastModifiedDate] = @LastModifiedTime,
+ [DRAuthCertBlob] = NULL,
+ [DRAuthCertThumbprint] = NULL,
+ [HostSigningCertBlob] = NULL,
+ [HostSigningCertThumbprint] = NULL,
+ [DRAdapterUpdateVersion] = '',
+ [OrgIdUserName] = ''
+ WHERE [VMMId] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $param1 = $cmd.Parameters.AddWithValue("@LastModifiedTime", [System.Data.SqlDbType]::DateTime)
+ $param1.Value = Get-Date
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "Un-registration completed successfully."
+
+ ""
+ "Removing KEK..."
+
+ $kekid = "06cda9f3-2e3d-49ee-8e18-2d9bd1d74034"
+ $rolloverKekId = "fe0adfd7-309a-429a-b420-e8ed067338e6"
+ $sql = "DELETE FROM [tbl_VMM_CertificateStore]
+ WHERE [CertificateID] IN (@KEKId,@RolloverKekId)"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@KEKId",$kekid) | Out-Null
+ $cmd.Parameters.AddWithValue("@RolloverKekId",$rolloverKekId) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "Removing KEK completed successfully."
+
+ if($error.Count -eq 0)
+ {
+ $transaction.Commit()
+
+ ""
+ "Removing registration related registry keys."
+
+ $path = "software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\Registration"
+ if((Test-Path "hklm:\$path" ))
+ {
+ if($isCluster -and $isPrimaryNode)
+ {
+ foreach($checkpoint in $clusterCheckpointList)
+ {
+ $compareResult = [string]::Compare($path, $checkpoint.Name, $True)
+
+ if($compareResult -eq 0)
+ {
+ Write-Host "Removing Checkpointing for $path"
+ Remove-ClusterCheckpoint -CheckpointName $path
+ }
+ }
+ }
+
+ Remove-Item -Path "hklm:\$path"
+
+ $proxyPath = "software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\ProxySettings"
+ if((Test-Path "hklm:\$proxyPath"))
+ {
+ if($isCluster -and $isPrimaryNode)
+ {
+ foreach($checkpoint in $clusterCheckpointList)
+ {
+ $compareResult = [string]::Compare($proxyPath, $checkpoint.Name, $True)
+
+ if($compareResult -eq 0)
+ {
+ Write-Host "Removing Checkpointing for $proxyPath"
+ Remove-ClusterCheckpoint -CheckpointName $proxyPath
+ }
+ }
+ }
+
+ Remove-Item -Path "hklm:\$proxyPath"
+ }
+
+ $backupPath = "software\Microsoft\Hyper-V Recovery Manager"
+ if((Test-Path "hklm:\$backupPath"))
+ {
+ if($isCluster -and $isPrimaryNode)
+ {
+ foreach($checkpoint in $clusterCheckpointList)
+ {
+ $compareResult = [string]::Compare($backupPath, $checkpoint.Name, $True)
+
+ if($compareResult -eq 0)
+ {
+ Write-Host "Removing Checkpointing for $backupPath"
+ Remove-ClusterCheckpoint -CheckpointName $backupPath
+ }
+ }
+ }
+ Remove-Item "hklm:\$backupPath" -recurse
+ }
+ "Registry keys removed successfully."
+ ""
+ }
+ else
+ {
+ "Could not delete registration key as hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\Registration doesn't exist."
+ }
+
+ Write-Host "SUCCESS!!" -ForegroundColor "Green"
+ }
+ else
+ {
+ $transaction.Rollback()
+ Write-Error "Error occured"
+ $error[0]
+ ""
+ Write-Error "FAILED"
+ "All updates to the VMM database have been rolled back."
+ }
+ }
+ else
+ {
+ if($error.Count -eq 0)
+ {
+ $transaction.Commit()
+ Write-Host "SUCCESS!!" -ForegroundColor "Green"
+ }
+ else
+ {
+ $transaction.Rollback()
+ Write-Error "FAILED"
+ }
+ }
+
+ $conn.Close()
+ }
+ catch
+ {
+ $transaction.Rollback()
+ Write-Host "Error occurred" -ForegroundColor "Red"
+ $error[0]
+ Write-Error "FAILED"
+ "All updates to the VMM database have been rolled back."
+ }
+ }
+ }
+ else
+ {
+ Write-Error "VMM Id is missing from hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup or VMMId is not provided."
+        Write-Error "FAILED"
+ }
+}
+
+catch
+{
+ Write-Error "Error occurred"
+ $error[0]
+ Write-Error "FAILED"
+}
+
+if($isCluster)
+{
+ if($clusterResource.State -eq [Microsoft.FailoverClusters.PowerShell.ClusterResourceState]::Offline)
+ {
+ Write-Host "Cluster role is in stopped state."
+ }
+ else
+ {
+ Write-Host "Operation completed. Cluster role was not stopped."
+ }
+}
+else
+{
+ Write-Host "The VMM service is in stopped state."
+}
+
+popd
+# SIG # Begin signature block
+# MIId0wYJKoZIhvcNAQcCoIIdxDCCHcACAQExCzAJBgUrDgMCGgUAMGkGCisGAQQB
+# gjcCAQSgWzBZMDQGCisGAQQBgjcCAR4wJgIDAQAABBAfzDtgWUsITrck0sYpfvNR
+# AgEAAgEAAgEAAgEAAgEAMCEwCQYFKw4DAhoFAAQU3rRWHH5OCASnIAZsmgmowP/T
+# p6egghhkMIIEwzCCA6ugAwIBAgITMwAAAIgVUlHPFzd7VQAAAAAAiDANBgkqhkiG
+# 9w0BAQUFADB3MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4G
+# A1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSEw
+# HwYDVQQDExhNaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EwHhcNMTUxMDA3MTgxNDAx
+# WhcNMTcwMTA3MTgxNDAxWjCBszELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hp
+# bmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jw
+# b3JhdGlvbjENMAsGA1UECxMETU9QUjEnMCUGA1UECxMebkNpcGhlciBEU0UgRVNO
+# OjdBRkEtRTQxQy1FMTQyMSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBT
+# ZXJ2aWNlMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyBEjpkOcrwAm
+# 9WRMNBv90OUqsqL7/17OvrhGMWgwAsx3sZD0cMoNxrlfHwNfCNopwH0z7EI3s5gQ
+# Z4Pkrdl9GjQ9/FZ5uzV24xfhdq/u5T2zrCXC7rob9FfhBtyTI84B67SDynCN0G0W
+# hJaBW2AFx0Dn2XhgYzpvvzk4NKZl1NYi0mHlHSjWfaqbeaKmVzp9JSfmeaW9lC6s
+# IgqKo0FFZb49DYUVdfbJI9ECTyFEtUaLWGchkBwj9oz62u9Kg6sh3+UslWTY4XW+
+# 7bBsN3zC430p0X7qLMwQf+0oX7liUDuszCp828HsDb4pu/RRyv+KOehVKx91UNcr
+# Dc9Z7isNeQIDAQABo4IBCTCCAQUwHQYDVR0OBBYEFJQRxg5HoMTIdSZj1v3l1GjM
+# 6KEMMB8GA1UdIwQYMBaAFCM0+NlSRnAK7UD7dvuzK7DDNbMPMFQGA1UdHwRNMEsw
+# SaBHoEWGQ2h0dHA6Ly9jcmwubWljcm9zb2Z0LmNvbS9wa2kvY3JsL3Byb2R1Y3Rz
+# L01pY3Jvc29mdFRpbWVTdGFtcFBDQS5jcmwwWAYIKwYBBQUHAQEETDBKMEgGCCsG
+# AQUFBzAChjxodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpL2NlcnRzL01pY3Jv
+# c29mdFRpbWVTdGFtcFBDQS5jcnQwEwYDVR0lBAwwCgYIKwYBBQUHAwgwDQYJKoZI
+# hvcNAQEFBQADggEBAHoudDDxFsg2z0Y+GhQ91SQW1rdmWBxJOI5OpoPzI7P7X2dU
+# ouvkmQnysdipDYER0xxkCf5VAz+dDnSkUQeTn4woryjzXBe3g30lWh8IGMmGPWhq
+# L1+dpjkxKbIk9spZRdVH0qGXbi8tqemmEYJUW07wn76C+wCZlbJnZF7W2+5g9MZs
+# RT4MAxpQRw+8s1cflfmLC5a+upyNO3zBEY2gaBs1til9O7UaUD4OWE4zPuz79AJH
+# 9cGBQo8GnD2uNFYqLZRx3T2X+AVt/sgIHoUSK06fqVMXn1RFSZT3jRL2w/tD5uef
+# 4ta/wRmAStRMbrMWYnXAeCJTIbWuE2lboA3IEHIwggYHMIID76ADAgECAgphFmg0
+# AAAAAAAcMA0GCSqGSIb3DQEBBQUAMF8xEzARBgoJkiaJk/IsZAEZFgNjb20xGTAX
+# BgoJkiaJk/IsZAEZFgltaWNyb3NvZnQxLTArBgNVBAMTJE1pY3Jvc29mdCBSb290
+# IENlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0wNzA0MDMxMjUzMDlaFw0yMTA0MDMx
+# MzAzMDlaMHcxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYD
+# VQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xITAf
+# BgNVBAMTGE1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQTCCASIwDQYJKoZIhvcNAQEB
+# BQADggEPADCCAQoCggEBAJ+hbLHf20iSKnxrLhnhveLjxZlRI1Ctzt0YTiQP7tGn
+# 0UytdDAgEesH1VSVFUmUG0KSrphcMCbaAGvoe73siQcP9w4EmPCJzB/LMySHnfL0
+# Zxws/HvniB3q506jocEjU8qN+kXPCdBer9CwQgSi+aZsk2fXKNxGU7CG0OUoRi4n
+# rIZPVVIM5AMs+2qQkDBuh/NZMJ36ftaXs+ghl3740hPzCLdTbVK0RZCfSABKR2YR
+# JylmqJfk0waBSqL5hKcRRxQJgp+E7VV4/gGaHVAIhQAQMEbtt94jRrvELVSfrx54
+# QTF3zJvfO4OToWECtR0Nsfz3m7IBziJLVP/5BcPCIAsCAwEAAaOCAaswggGnMA8G
+# A1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFCM0+NlSRnAK7UD7dvuzK7DDNbMPMAsG
+# A1UdDwQEAwIBhjAQBgkrBgEEAYI3FQEEAwIBADCBmAYDVR0jBIGQMIGNgBQOrIJg
+# QFYnl+UlE/wq4QpTlVnkpKFjpGEwXzETMBEGCgmSJomT8ixkARkWA2NvbTEZMBcG
+# CgmSJomT8ixkARkWCW1pY3Jvc29mdDEtMCsGA1UEAxMkTWljcm9zb2Z0IFJvb3Qg
+# Q2VydGlmaWNhdGUgQXV0aG9yaXR5ghB5rRahSqClrUxzWPQHEy5lMFAGA1UdHwRJ
+# MEcwRaBDoEGGP2h0dHA6Ly9jcmwubWljcm9zb2Z0LmNvbS9wa2kvY3JsL3Byb2R1
+# Y3RzL21pY3Jvc29mdHJvb3RjZXJ0LmNybDBUBggrBgEFBQcBAQRIMEYwRAYIKwYB
+# BQUHMAKGOGh0dHA6Ly93d3cubWljcm9zb2Z0LmNvbS9wa2kvY2VydHMvTWljcm9z
+# b2Z0Um9vdENlcnQuY3J0MBMGA1UdJQQMMAoGCCsGAQUFBwMIMA0GCSqGSIb3DQEB
+# BQUAA4ICAQAQl4rDXANENt3ptK132855UU0BsS50cVttDBOrzr57j7gu1BKijG1i
+# uFcCy04gE1CZ3XpA4le7r1iaHOEdAYasu3jyi9DsOwHu4r6PCgXIjUji8FMV3U+r
+# kuTnjWrVgMHmlPIGL4UD6ZEqJCJw+/b85HiZLg33B+JwvBhOnY5rCnKVuKE5nGct
+# xVEO6mJcPxaYiyA/4gcaMvnMMUp2MT0rcgvI6nA9/4UKE9/CCmGO8Ne4F+tOi3/F
+# NSteo7/rvH0LQnvUU3Ih7jDKu3hlXFsBFwoUDtLaFJj1PLlmWLMtL+f5hYbMUVbo
+# nXCUbKw5TNT2eb+qGHpiKe+imyk0BncaYsk9Hm0fgvALxyy7z0Oz5fnsfbXjpKh0
+# NbhOxXEjEiZ2CzxSjHFaRkMUvLOzsE1nyJ9C/4B5IYCeFTBm6EISXhrIniIh0EPp
+# K+m79EjMLNTYMoBMJipIJF9a6lbvpt6Znco6b72BJ3QGEe52Ib+bgsEnVLaxaj2J
+# oXZhtG6hE6a/qkfwEm/9ijJssv7fUciMI8lmvZ0dhxJkAj0tr1mPuOQh5bWwymO0
+# eFQF1EEuUKyUsKV4q7OglnUa2ZKHE3UiLzKoCG6gW4wlv6DvhMoh1useT8ma7kng
+# 9wFlb4kLfchpyOZu6qeXzjEp/w7FW1zYTRuh2Povnj8uVRZryROj/TCCBhAwggP4
+# oAMCAQICEzMAAABkR4SUhttBGTgAAAAAAGQwDQYJKoZIhvcNAQELBQAwfjELMAkG
+# A1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQx
+# HjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEoMCYGA1UEAxMfTWljcm9z
+# b2Z0IENvZGUgU2lnbmluZyBQQ0EgMjAxMTAeFw0xNTEwMjgyMDMxNDZaFw0xNzAx
+# MjgyMDMxNDZaMIGDMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQ
+# MA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9u
+# MQ0wCwYDVQQLEwRNT1BSMR4wHAYDVQQDExVNaWNyb3NvZnQgQ29ycG9yYXRpb24w
+# ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCTLtrY5j6Y2RsPZF9NqFhN
+# FDv3eoT8PBExOu+JwkotQaVIXd0Snu+rZig01X0qVXtMTYrywPGy01IVi7azCLiL
+# UAvdf/tqCaDcZwTE8d+8dRggQL54LJlW3e71Lt0+QvlaHzCuARSKsIK1UaDibWX+
+# 9xgKjTBtTTqnxfM2Le5fLKCSALEcTOLL9/8kJX/Xj8Ddl27Oshe2xxxEpyTKfoHm
+# 5jG5FtldPtFo7r7NSNCGLK7cDiHBwIrD7huTWRP2xjuAchiIU/urvzA+oHe9Uoi/
+# etjosJOtoRuM1H6mEFAQvuHIHGT6hy77xEdmFsCEezavX7qFRGwCDy3gsA4boj4l
+# AgMBAAGjggF/MIIBezAfBgNVHSUEGDAWBggrBgEFBQcDAwYKKwYBBAGCN0wIATAd
+# BgNVHQ4EFgQUWFZxBPC9uzP1g2jM54BG91ev0iIwUQYDVR0RBEowSKRGMEQxDTAL
+# BgNVBAsTBE1PUFIxMzAxBgNVBAUTKjMxNjQyKzQ5ZThjM2YzLTIzNTktNDdmNi1h
+# M2JlLTZjOGM0NzUxYzRiNjAfBgNVHSMEGDAWgBRIbmTlUAXTgqoXNzcitW2oynUC
+# lTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtp
+# b3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3JsMGEGCCsGAQUF
+# BwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3Br
+# aW9wcy9jZXJ0cy9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3J0MAwGA1Ud
+# EwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIBAIjiDGRDHd1crow7hSS1nUDWvWas
+# W1c12fToOsBFmRBN27SQ5Mt2UYEJ8LOTTfT1EuS9SCcUqm8t12uD1ManefzTJRtG
+# ynYCiDKuUFT6j
+# vDW+vCT5wN3nxO8DIlAUBbXMn7TJKAH2W7a/CDQ0p607Ivt3F7cqhEtrO1Rypehh
+# bkKQj4y/ebwc56qWHJ8VNjE8HlhfJAk8pAliHzML1v3QlctPutozuZD3jKAO4WaV
+# qJn5BJRHddW6l0SeCuZmBQHmNfXcz4+XZW/s88VTfGWjdSGPXC26k0LzV6mjEaEn
+# S1G4t0RqMP90JnTEieJ6xFcIpILgcIvcEydLBVe0iiP9AXKYVjAPn6wBm69FKCQr
+# IPWsMDsw9wQjaL8GHk4wCj0CmnixHQanTj2hKRc2G9GL9q7tAbo0kFNIFs0EYkbx
+# Cn7lBOEqhBSTyaPS6CvjJZGwD0lNuapXDu72y4Hk4pgExQ3iEv/Ij5oVWwT8okie
+# +fFLNcnVgeRrjkANgwoAyX58t0iqbefHqsg3RGSgMBu9MABcZ6FQKwih3Tj0DVPc
+# gnJQle3c6xN3dZpuEgFcgJh/EyDXSdppZzJR4+Bbf5XA/Rcsq7g7X7xl4bJoNKLf
+# cafOabJhpxfcFOowMIIHejCCBWKgAwIBAgIKYQ6Q0gAAAAAAAzANBgkqhkiG9w0B
+# AQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNV
+# BAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAG
+# A1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEw
+# HhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEwOTA5WjB+MQswCQYDVQQGEwJVUzET
+# MBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMV
+# TWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYDVQQDEx9NaWNyb3NvZnQgQ29kZSBT
+# aWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA
+# q/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+laUKq4BjgaBEm6f8MMHt03a8YS2Avw
+# OMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc6Whe0t+bU7IKLMOv2akrrnoJr9eW
+# WcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4Ddato88tt8zpcoRb0RrrgOGSsbmQ1
+# eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+lD3v++MrWhAfTVYoonpy4BI6t0le
+# 2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nkkDstrjNYxbc+/jLTswM9sbKvkjh+
+# 0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6A4aN91/w0FK/jJSHvMAhdCVfGCi2
+# zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmdX4jiJV3TIUs+UsS1Vz8kA/DRelsv
+# 1SPjcF0PUUZ3s/gA4bysAoJf28AVs70b1FVL5zmhD+kjSbwYuER8ReTBw3J64HLn
+# JN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zdsGbiwZeBe+3W7UvnSSmnEyimp31n
+# gOaKYnhfsi+E11ecXL93KCjx7W3DKI8sj0A3T8HhhUSJxAlMxdSlQy90lfdu+Hgg
+# WCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS4NaIjAsCAwEAAaOCAe0wggHpMBAG
+# CSsGAQQBgjcVAQQDAgEAMB0GA1UdDgQWBBRIbmTlUAXTgqoXNzcitW2oynUClTAZ
+# BgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/
+# BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBDuRQFTuHqp8cx0SOJNDBaBgNVHR8E
+# UzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9k
+# dWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3JsMF4GCCsGAQUFBwEB
+# BFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9j
+# ZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3J0MIGfBgNVHSAEgZcw
+# gZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNy
+# b3NvZnQuY29tL3BraW9wcy9kb2NzL3ByaW1hcnljcHMuaHRtMEAGCCsGAQUFBwIC
+# MDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkAYwB5AF8AcwB0AGEAdABlAG0AZQBu
+# AHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn8oalmOBUeRou09h0ZyKbC5YR4WOS
+# mUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7v0epo/Np22O/IjWll11lhJB9i0ZQ
+# VdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0bpdS1HXeUOeLpZMlEPXh6I/MTfaaQ
+# dION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/KmtYSWMfCWluWpiW5IP0wI/zRive
+# /DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvyCInWH8MyGOLwxS3OW560STkKxgrC
+# xq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBpmLJZiWhub6e3dMNABQamASooPoI/
+# E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJihsMdYzaXht/a8/jyFqGaJ+HNpZfQ
+# 7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYbBL7fQccOKO7eZS/sl/ahXJbYANah
+# Rr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbSoqKfenoi+kiVH6v7RyOA9Z74v2u3
+# S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sLgOppO6/8MO0ETI7f33VtY5E90Z1W
+# Tk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtXcVZOSEXAQsmbdlsKgEhr/Xmfwb1t
+# bWrJUnMTDXpQzTGCBNkwggTVAgEBMIGVMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQI
+# EwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3Nv
+# ZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcg
+# UENBIDIwMTECEzMAAABkR4SUhttBGTgAAAAAAGQwCQYFKw4DAhoFAKCB7TAZBgkq
+# hkiG9w0BCQMxDAYKKwYBBAGCNwIBBDAcBgorBgEEAYI3AgELMQ4wDAYKKwYBBAGC
+# NwIBFTAjBgkqhkiG9w0BCQQxFgQUBdBqDyVXnqZzMp1OJYf3joRoaTAwgYwGCisG
+# AQQBgjcCAQwxfjB8oE6ATABNAGkAYwByAG8AcwBvAGYAdAAgAEEAegB1AHIAZQAg
+# AFMAaQB0AGUAIABSAGUAYwBvAHYAZQByAHkAIABQAHIAbwB2AGkAZABlAHKhKoAo
+# aHR0cDovL2dvLm1pY3Jvc29mdC5jb20vP2xpbmtpZD05ODI3Mzk1IDANBgkqhkiG
+# 9w0BAQEFAASCAQBTkB941lb+sBGlUfrKY0rio8iWs3zcjnJUshSKfimD2pJLYdHx
+# hiBkoWXz/nM5ruhKh9Iu62xvqNNTDLt5H2PxvjCrH0v3TpSaRp6QnxIzIKSgtUnT
+# /nxqpvT8QMbecpHXKARw+WcDlZBZWv5PZBoJBytoT+hRuYFOlUsVH7emimic9BlI
+# lW+yX8Ip9txXOOoQluBgkIJ59fpNGS+p3t/hxwaYWSiOD5J+Ug7IELRmg1PfiCMW
+# bg5hXYbvl18qaWFZIf3AXlY+22rYZvx0/hHwqLr/ULNDXF/ylMct2mxzzspN1u9P
+# cJGLbFcxDNaxxzxEEY6ZVup1ycgI59W+16USoYICKDCCAiQGCSqGSIb3DQEJBjGC
+# AhUwggIRAgEBMIGOMHcxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9u
+# MRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRp
+# b24xITAfBgNVBAMTGE1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQQITMwAAAIgVUlHP
+# Fzd7VQAAAAAAiDAJBgUrDgMCGgUAoF0wGAYJKoZIhvcNAQkDMQsGCSqGSIb3DQEH
+# ATAcBgkqhkiG9w0BCQUxDxcNMTYwMzIyMTg0OTUwWjAjBgkqhkiG9w0BCQQxFgQU
+# Urmh+SC+zZzOARYhxu4k2PZFcIIwDQYJKoZIhvcNAQEFBQAEggEAW1kLw6IKNCm6
+# 1nvELi0fHxB898JSoh+eRpVzm+ffOmTEiRqT3S0VZB24U6/FUkMwbNsRcRXeQ4aP
+# RXHHlz2OtrHw/SCdNxFZQ6/4Kq/2a0VQRUtZKe4gZ+rQb7TX3axUf1A0FXTmZg0m
+# 9wX8uiww0tsdrfEVQiluLrLdypGhFppZbf3T1/OlC11udPPfzfRN3HrKBuuYpCKx
+# 8BzNYjCNRbGtsRjYTKQABuGtnTc+XrsLR6qPStI2sjS8qKVN155xu048VBK6FXLt
+# RnrqKUMM6fsMKnWQwjoBauyFe54/p22HKQskWNwmHOg1CSOC31z9XaPkL3FHT+U4
+# EUkEgDZz3A==
+# SIG # End signature block
+
+```
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Previously updated : 01/17/2024 Last updated : 01/19/2024 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
Here are the settings to add this condition using the Azure portal visual editor
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks for the blob index tag.
-> [!NOTE]
-> If you try to perform an action in the assigned role that is *not* the action restricted by the condition, `!(ActionMatches)` evaluates to true and the overall condition evaluates to true. This result allows the action to be performed.
->
-> If you try to perform the action restricted by the condition, `!(ActionMatches)` evaluates to false, so the expression is evaluated. If the expression evaluates to true, the overall condition evaluates to true and allows the action to be performed. Otherwise, the action is not allowed to be performed.
->
-> In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks for the blob index tag.
->
->To learn more about how conditions are formatted and evaluated, see [Conditions format](../../role-based-access-control/conditions-format.md).
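For reference, a complete condition of the kind these code-editor samples truncate generally pairs an `ActionMatches` guard (with the `Blob.List` suboperation exception described above) with an attribute expression. The following sketch is illustrative only; the blob index tag name `Project` and value `Cascade` are placeholders, not the article's actual sample.
```
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
  AND NOT
  SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
 )
)
```
Read this as: allow List Blobs outright, and allow any other blob read only when the blob carries the matching index tag.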
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
**Storage Blob Data Owner**
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
**Storage Blob Data Owner**
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks for the container name and path.
+ # [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
**Storage Blob Data Owner**
AND
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
**Storage Blob Data Owner**
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
AND
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks for the blob index tag and path.
+ # [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
**Storage Blob Data Owner**
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks the version.
+ # [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks version information.
+ # [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
**Storage Blob Data Owner**
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks version and snapshot information.
+ # [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
**Storage Blob Data Owner**
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks for hierarchical namespace.
+ # [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks encryption scopes.
+ # [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
AND
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the condition code sample and p
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks blob index tags and custom security attributes.
+ # [PowerShell](#tab/azure-powershell)
The following image shows the condition after the settings are entered into the
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the following condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the following condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the following condition code sa
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
The following image shows the condition after the settings are entered into the
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the following condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the following condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the following condition code sa
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
The following image shows the condition after the settings are entered into the
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the following condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the following condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the following condition code sa
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression.
+ # [PowerShell](#tab/azure-powershell)
The following image shows the condition after the settings are entered into the
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, choose one of the following condition code samples, depending on the role associated with the assignment.
+To add the condition using the code editor, choose one of the following condition code samples, depending on the role associated with the assignment. After entering your code, switch back to the visual editor to validate it.
**Storage Blob Data Owner:**
To add the condition using the code editor, choose one of the following conditio
) ```
-After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
The following image shows the condition after the settings are entered into the
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the following condition code sample and paste it into the code editor.
+To add the condition using the code editor, copy the following condition code sample and paste it into the code editor. After entering your code, switch back to the visual editor to validate it.
``` (
To add the condition using the code editor, copy the following condition code sa
) ```
-After entering your code, switch back to the visual editor to validate it.
+In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression.
+ # [PowerShell](#tab/azure-powershell)
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
Previously updated : 07/12/2023 Last updated : 01/19/2024
Blob client library information:
## Authorize access and connect to Blob Storage
-To connect an application to Blob Storage, create an instance of the [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) class. This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with.
+To connect an application to Blob Storage, create an instance of the [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) class. You can also use the [BlobServiceAsyncClient](/java/api/com.azure.storage.blob.blobserviceasyncclient) class for [asynchronous programming](/azure/developer/java/sdk/async-programming). This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with.
To learn more about creating and managing client objects, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
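To make that starting point concrete, here's a minimal sketch of building a `BlobServiceClient` in Java. It assumes the `azure-storage-blob` and `azure-identity` packages and uses a placeholder account endpoint, so treat it as illustrative rather than the article's exact sample; the async variant follows the same builder pattern with `buildAsyncClient()`.
```
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class BlobQuickstart {
    public static void main(String[] args) {
        // Placeholder endpoint; replace <storage-account-name> with your account.
        String endpoint = "https://<storage-account-name>.blob.core.windows.net/";

        // Authorize with Microsoft Entra ID via DefaultAzureCredential.
        BlobServiceClient serviceClient = new BlobServiceClientBuilder()
                .endpoint(endpoint)
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        // The service client is the entry point for container and blob operations.
        serviceClient.listBlobContainers()
                .forEach(container -> System.out.println(container.getName()));
    }
}
```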
storage Elastic San Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-performance.md
- ignite-2023-elastic-SAN- Previously updated : 11/06/2023+ Last updated : 01/19/2024
The performance of an individual volume is determined by its capacity. The maxim
Each of the example scenarios in this article uses the following configuration for the VMs and the Elastic SAN:
-### VM limits
+### VM SKUs
-|VM |VM IOPS limit |
-|||
-|Standard_DS2_v2 (AKS) |5,000 |
-|Standard_L48s_v2 (workload 1) |48,000 |
-|Standard_L32s_v3 (workload 2) |51,200 |
-|Standard_L48_v3 (workload 3) |76,800 |
+- Standard_D2_v5 (AKS)
+- Standard_D4s_v5 (workload 1)
+- Standard_D32_v5 (workload 2)
+- Standard_D48_v5 (workload 3)
### Elastic SAN limits
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
Previously updated : 11/30/2023 Last updated : 01/18/2024 # Kafka output from Azure Stream Analytics (Preview)
You can use four types of security protocols to connect to your Kafka clusters:
|mTLS |encryption and authentication |
|SASL_SSL |It combines two different security mechanisms - SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) - to ensure both authentication and encryption are in place for data transmission. The mechanism supported is PLAIN. The SASL_SSL protocol doesn't support SCRAM. |
|SASL_PLAINTEXT |standard authentication with username and password without encryption |
-|None |The serialization format (JSON, CSV, Avro, Parquet) of the incoming data stream. |
+|None | No authentication or encryption. |
> [!IMPORTANT]
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
Previously updated : 11/30/2023 Last updated : 01/18/2024 # Stream data from Kafka into Azure Stream Analytics (Preview)
You can use four types of security protocols to connect to your Kafka clusters:
|mTLS |encryption and authentication |
|SASL_SSL |It combines two different security mechanisms - SASL (Simple Authentication and Security Layer) and SSL (Secure Sockets Layer) - to ensure both authentication and encryption are in place for data transmission. The mechanism supported is PLAIN. The SASL_SSL protocol doesn't support SCRAM |
|SASL_PLAINTEXT |standard authentication with username and password without encryption |
-|None |The serialization format (JSON, CSV, Avro, Parquet) of the incoming data stream. |
+|None | No authentication or encryption. |
> [!IMPORTANT]
stream-analytics Stream Analytics Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-introduction.md
Last updated 07/08/2022
# Welcome to Azure Stream Analytics
-Azure Stream Analytics is a fully managed stream processing engine that is designed to analyze and process large volumes of streaming data with sub-millisecond latencies. Patterns and relationships can be identified in data that originates from a variety of input sources including applications, devices, sensors, clickstreams, and social media feeds. These patterns can be used to trigger actions and initiate workflows such as creating alerts, feeding information to a reporting tool, or storing transformed data for later use. Stream Analytics is also available on the Azure IoT Edge runtime, enabling to process data directly on IoT devices.
+Azure Stream Analytics is a fully managed stream processing engine that is designed to analyze and process large volumes of streaming data with sub-millisecond latencies. You can build a streaming data pipeline using Stream Analytics to identify patterns and relationships in data that originates from various input sources including applications, devices, sensors, clickstreams, and social media feeds. Then, you can use these patterns to trigger actions and initiate workflows such as raising alerts, feeding information to a reporting tool, or storing transformed data for later use. Stream Analytics is also available on the Azure IoT Edge runtime, which enables you to process data directly on IoT devices.
-The following scenarios are examples of when you can use Azure Stream Analytics:
+Here are a few example scenarios where you can use Azure Stream Analytics:
-* Streaming ETL pipeline to Azure Storage in Parquet format
-* Event driven applications with Azure SQL Database and Azure Cosmos DB
-* Analyze real-time telemetry streams and logs from applications and IoT devices
-* Real-time dashboarding with Power BI
-* Anomaly detection to detect spikes, dips, and slow positive and negative changes in sensor values
-* Geospatial analytics for fleet management and driverless vehicles
+* Anomaly detection in sensor data to detect spikes, dips, and slow positive and negative changes
+* Geo-spatial analytics for fleet management and driverless vehicles
* Remote monitoring and predictive maintenance of high value assets
* Clickstream analytics to determine customer behavior
+* Analyze real-time telemetry streams and logs from applications and IoT devices
-You can try Azure Stream Analytics with a free Azure subscription.
-> [!div class="nextstepaction"]
-> [Try Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/)
+## Key capabilities and benefits
-![Stream Analytics intro pipeline](./media/stream-analytics-introduction/stream-analytics-e2e-pipeline.png)
+## Fully managed service
-## Key capabilities and benefits
+Azure Stream Analytics is a fully managed (PaaS) offering on Azure. You don't have to provision any hardware or infrastructure, or update the OS or software. Azure Stream Analytics fully manages your job, so you can focus on your business logic and not on the infrastructure.
-## Ease of use
+## Ease of use
-Azure Stream Analytics is easy to start. It only takes a few clicks to connect to multiple sources and sinks, creating an end-to-end pipeline. Stream Analytics can connect to Azure Event Hubs and Azure IoT Hub for streaming data ingestion, as well as Azure Blob storage to ingest historical data. Job input can also include static or slow-changing reference data from Azure Blob storage or SQL Database that you can join to streaming data to perform lookup operations.
+Azure Stream Analytics is easy to start. It takes only a few clicks to create an end-to-end streaming data pipeline that connects to multiple sources and sinks.
-Stream Analytics can route job output to many storage systems such as Azure Blob storage, Azure SQL Database, Azure Data Lake Store, and Azure Cosmos DB. You can also run batch analytics on stream outputs with Azure Synapse Analytics or HDInsight, or you can send the output to another service, like Event Hubs for consumption or Power BI for real-time visualization. For the entire list of Stream Analytics outputs, see [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md).
+You can create a Stream Analytics job that can connect to Azure Event Hubs and Azure IoT Hub for streaming data ingestion, and Azure Blob storage or Azure Data Lake Storage Gen2 to ingest historical data. The input for the Stream Analytics job can also include static or slow-changing reference data from Azure Blob storage or SQL Database that you can join with streaming data to perform lookup operations. For more information on Stream Analytics **inputs**, see [Stream data as input into Stream Analytics](stream-analytics-define-inputs.md).
-The Azure Stream Analytics no-code editor offers a no-code experience that enables you to develop Stream Analytics jobs effortlessly, using drag-and-drop functionality, without having to write any code. It further simplifies Stream Analytics job development experience. To learn more about the no-code editor, see [No-code stream processing in Azure Stream Analytics](./no-code-stream-processing.md)
+You can route output from a Stream Analytics job to many storage systems such as Azure Blob storage, Azure SQL Database, Azure Data Lake Store, and Azure Cosmos DB. You can also run batch analytics on stream outputs with Azure Synapse Analytics or HDInsight, or you can send the output to another service, like Event Hubs for consumption or Power BI for real-time visualization. For the entire list of Stream Analytics **outputs**, see [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md).
+
+The Azure Stream Analytics no-code editor enables you to develop Stream Analytics jobs effortlessly by using drag-and-drop functionality, without having to write any code. It further simplifies the Stream Analytics job development experience. To learn more about the no-code editor, see [No-code stream processing in Azure Stream Analytics](./no-code-stream-processing.md).
## Programmer productivity
-Azure Stream Analytics uses a SQL query language that has been augmented with powerful temporal constraints to analyze data in motion. You can also create jobs by using developer tools like Azure PowerShell, Azure CLI, Stream Analytics Visual Studio tools, the [Stream Analytics Visual Studio Code extension](quick-create-visual-studio-code.md), or Azure Resource Manager templates. Using developer tools allows you to develop transformation queries offline and use the CI/CD pipeline to submit jobs to Azure.
+Azure Stream Analytics uses a SQL query language that's augmented with powerful temporal constraints to analyze data in motion. You can [create a Stream Analytics job using the Azure portal](stream-analytics-quick-create-portal.md). You can also create jobs by using developer tools such as the following ones:
-The Stream Analytics query language allows you to perform CEP (Complex Event Processing) by offering a wide array of functions for analyzing streaming data. This query language supports simple data manipulation, aggregation and analytics functions, geospatial functions, pattern matching and anomaly detection. You can edit queries in the portal or using our development tools, and test them using sample data that is extracted from a live stream.
+- [Visual Studio Code](quick-create-visual-studio-code.md)
+- [Visual Studio](stream-analytics-quick-create-vs.md)
+- [Azure CLI](quick-create-azure-cli.md)
+- [Azure PowerShell](stream-analytics-quick-create-powershell.md)
+- [Bicep](quick-create-bicep.md)
+- [Azure Resource Manager templates](quick-create-azure-resource-manager.md)
+- [Terraform](quick-create-terraform.md)
-You can extend the capabilities of the query language by defining and invoking additional functions. You can define function calls in the Azure Machine Learning to take advantage of Azure Machine Learning solutions, and integrate JavaScript or C# user-defined functions (UDFs) or user-defined aggregates to perform complex calculations as part a Stream Analytics query.
+Developer tools allow you to develop transformation queries offline and use a CI/CD pipeline to submit jobs to Azure.
-## Fully managed
+The Stream Analytics query language allows you to perform Complex Event Processing (CEP) by offering a wide array of functions for analyzing streaming data. This query language supports simple data manipulation, aggregation and analytics functions, geospatial functions, pattern matching, and anomaly detection. You can edit queries in the portal or using development tools, and test them using sample data that is extracted from a live stream.
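As a small illustration of that query language, the following sketch aggregates sensor readings over a 30-second tumbling window; the input and output names and the `DeviceId`/`Temperature` fields are hypothetical, not taken from the article.
```
-- Hypothetical input and output aliases defined on the Stream Analytics job.
SELECT
    DeviceId,
    AVG(Temperature) AS AvgTemperature,
    System.Timestamp() AS WindowEnd
INTO [powerbi-output]
FROM [iothub-input] TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY DeviceId, TumblingWindow(second, 30)
```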
+
+You can extend the capabilities of the query language by defining and invoking additional functions. You can define function calls in Azure Machine Learning to take advantage of Azure Machine Learning solutions, and integrate JavaScript or C# user-defined functions (UDFs) or user-defined aggregates to perform complex calculations as part of a Stream Analytics query.
-Azure Stream Analytics is a fully managed (PaaS) offering on Azure. You don't have to provision any hardware or infrastructure, update OS or software. Azure Stream Analytics fully manages your job, so you can focus on your business logic and not on the infrastructure.
## Run in the cloud or on the intelligent edge
Azure Stream Analytics can run in the cloud, for large-scale analytics, or run o
## Low total cost of ownership
-As a cloud service, Stream Analytics is optimized for cost. There are no upfront costs involved - you only pay for the [streaming units you consume](stream-analytics-streaming-unit-consumption.md). There is no commitment or cluster provisioning required, and you can scale the job up or down based on your business needs.
+As a cloud service, Stream Analytics is optimized for cost. There are no upfront costs involved - you only pay for the [streaming units you consume](stream-analytics-streaming-unit-consumption.md). There's no commitment or cluster provisioning required, and you can scale the job up or down based on your business needs.
## Mission-critical ready
Azure Stream Analytics is available across multiple regions worldwide and is des
### Reliability
-Azure Stream Analytics guarantees exactly once event processing and at-least-once delivery of events, so events are never lost. Exactly once processing is guaranteed with selected output as described in Event Delivery Guarantees.
+Azure Stream Analytics guarantees exactly once event processing and at-least-once delivery of events, so events are never lost. Exactly once processing is guaranteed with selected output as described in [Event Delivery Guarantees](/stream-analytics-query/event-delivery-guarantees-azure-stream-analytics).
Azure Stream Analytics has built-in recovery capabilities in case the delivery of an event fails. Stream Analytics also provides built-in checkpoints to maintain the state of your job and provides repeatable results.
As a managed service, Stream Analytics guarantees event processing with a 99.9%
### Security
-In terms of security, Azure Stream Analytics encrypts all incoming and outgoing communications and supports TLS 1.2. Built-in checkpoints are also encrypted. Stream Analytics doesn't store the incoming data since all processing is done in-memory.
-Stream Analytics also supports Azure Virtual Networks (VNET) when running a job in a [Stream Analytics Cluster](./cluster-overview.md).
+In terms of security, Azure Stream Analytics encrypts all incoming and outgoing communications and supports Transport Layer Security (TLS) 1.2. Built-in checkpoints are also encrypted. Stream Analytics doesn't store the incoming data since all processing is done in-memory. Stream Analytics also supports Azure Virtual Networks when running a job in a [Stream Analytics Cluster](./cluster-overview.md).
## Performance

Stream Analytics can process millions of events every second and it can deliver results with ultra low latencies. It allows you to [scale out](stream-analytics-autoscale.md) to adjust to your workloads. Stream Analytics supports higher performance by partitioning, allowing complex queries to be parallelized and executed on multiple streaming nodes. Azure Stream Analytics is built on [Trill](https://github.com/Microsoft/Trill), a high-performance in-memory streaming analytics engine developed in collaboration with Microsoft Research.
+
## Next steps
-You now have an overview of Azure Stream Analytics. Next, you can dive deep and create your first Stream Analytics job:
+You can try Azure Stream Analytics with a free Azure subscription.
+
+> [!div class="nextstepaction"]
+> [Try Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/)
+
+Now that you have an overview of Azure Stream Analytics, you can dive deep and create your first Stream Analytics job:
* [Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md) * [Create a Stream Analytics job by using Azure PowerShell](stream-analytics-quick-create-powershell.md)
update-manager Configure Wu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/configure-wu-agent.md
Title: Configure Windows Update settings in Azure Update Manager description: This article tells how to configure Windows update settings to work with Azure Update Manager. Previously updated : 09/18/2023 Last updated : 01/19/2024 + # Configure Windows update settings for Azure Update Manager
The Windows update client on Windows servers can get their patches from either o
If scheduled patching is configured on your machine using the Azure Update Manager, the Auto update on the client is disabled. To edit the registry and configure the setting, see [First party updates on Windows](support-matrix.md#first-party-updates-on-windows).
-### Patching using group policy on Azure Update management
+### Patching using group policy on Azure Update Manager
-If your machine is patched using Automation Update management, and has Automatic updates enabled on the client, you can use the group policy to have complete control. To patch using group policy, follow these steps:
+If your machine is patched by using Azure Update Manager and has Automatic Updates enabled on the client, you can use group policy to have complete control. To patch using group policy, follow these steps:
1. Go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Windows Update** > **Manage end user experience**. 1. Select **Configure Automatic Updates**.
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a gen
- At least one Windows OS image available on the cluster. For more information, see how to [create VM images using Azure Marketplace images](/azure-stack/hci/manage/virtual-machine-image-azure-marketplace), [use images in Azure Storage account](/azure-stack/hci/manage/virtual-machine-image-storage-account), and [use images in local share](/azure-stack/hci/manage/virtual-machine-image-local-share).
+ - If you create VMs on Azure Stack HCI outside of the Azure Virtual Desktop service, such as with an automated pipeline, and then add them as session hosts to a host pool, you need to install the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) on the virtual machines so they can communicate with [Azure Instance Metadata Service](../virtual-machines/instance-metadata-service.md), which is a [required endpoint for Azure Virtual Desktop](../virtual-desktop/required-fqdn-endpoint.md).
+ - If you want to use Azure CLI or Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension or the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md). > [!IMPORTANT]
virtual-desktop Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/automatic-migration.md
The migration module tool lets you migrate your organization from Azure Virtual Desktop (classic) to Azure Virtual Desktop automatically. This article will show you how to use the tool.
-## Requirements
+## Prerequisites
Before you use the migration module, make sure you have the following things ready:
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
With Azure Virtual Desktop for Azure Stack HCI, you can:
- Deploy the latest fully patched images quickly and easily using [Azure Marketplace images](/azure-stack/hci/manage/virtual-machine-image-azure-marketplace).
-## Supported platforms
+## Supported deployment configurations
-Your Azure Stack HCI clusters need to be running a minimum of version 23H2. For more information, see [Azure Stack HCI release information](/azure-stack/hci/release-information) and [Updates and upgrades](/azure-stack/hci/concepts/updates).
+Your Azure Stack HCI clusters need to be running a minimum of [version 23H2](/azure-stack/hci/release-information) and be [registered with Azure](/azure-stack/hci/deploy/register-with-azure).
-Azure Virtual Desktop for Azure Stack HCI supports the same [Remote Desktop clients](user-documentation/index.yml) as Azure Virtual Desktop, and you can use the following 64-bit operating system images that are in support:
+Once your cluster is ready, you can use the following 64-bit operating system images that are in support for your session hosts:
- Windows 11 Enterprise multi-session - Windows 11 Enterprise
Azure Virtual Desktop for Azure Stack HCI supports the same [Remote Desktop clie
- Windows Server 2022 - Windows Server 2019
-You must license and activate the virtual machines you use for your session hosts on Azure Stack HCI before you use them with Azure Virtual Desktop. For activating Windows 10 and Windows 11 Enterprise multi-session, and Windows Server 2022 Datacenter: Azure Edition, use [Azure verification for VMs](/azure-stack/hci/deploy/azure-verification). For all other OS images (such as Windows 10 and Windows 11 Enterprise, and other editions of Windows Server), you should continue to use existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate).
+To use session hosts on Azure Stack HCI with Azure Virtual Desktop, you also need to:
+
+- License and activate the virtual machines. For activating Windows 10 and Windows 11 Enterprise multi-session, and Windows Server 2022 Datacenter: Azure Edition, use [Azure verification for VMs](/azure-stack/hci/deploy/azure-verification). For all other OS images (such as Windows 10 and Windows 11 Enterprise, and other editions of Windows Server), you should continue to use existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate).
+
+- Install the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) on the virtual machines so they can communicate with [Azure Instance Metadata Service](../virtual-machines/instance-metadata-service.md), which is a [required endpoint for Azure Virtual Desktop](../virtual-desktop/required-fqdn-endpoint.md). The Azure Connected Machine agent is automatically installed when you add session hosts using the Azure portal as part of the process to [Deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md) or [Add session hosts to a host pool](add-session-hosts-host-pool.md).
+
+Finally, users can connect using the same [Remote Desktop clients](users/remote-desktop-clients-overview.md) as Azure Virtual Desktop.
## Licensing and pricing
virtual-desktop Configure Adfs Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-adfs-sso.md
This article will walk you through the process of configuring Active Directory F
> [!NOTE] > Azure Virtual Desktop (Classic) doesn't support this feature.
-## Requirements
+## Prerequisites
Before configuring AD FS single sign-on, you must have the following setup running in your environment:
virtual-desktop Deploy Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md
Here's how to create a host pool using the Azure portal.
1. Once the host pool has been created, select **Go to resource** to go to the overview of your new host pool, then select **Properties** to view its properties.
-### Optional: Post deployment
+### Post deployment
-If you also added session hosts to your host pool, there's some extra configuration you might need to do, which is covered in the following sections.
+If you also added session hosts to your host pool, there's some extra configuration you need to do, which is covered in the following sections.
[!INCLUDE [include-session-hosts-post-deployment](includes/include-session-hosts-post-deployment.md)]
Here's how to assign users or user groups to an application group to a workspace
## Next steps
+# [Portal](#tab/portal)
+ Once you've deployed Azure Virtual Desktop, your users can connect. There are several platforms you can connect from, including from a web browser. For more information, see [Remote Desktop clients for Azure Virtual Desktop](users/remote-desktop-clients-overview.md) and [Connect to Azure Virtual Desktop with the Remote Desktop Web client](users/connect-web.md). Here are some extra tasks you might want to do:
Here are some extra tasks you might want to do:
- [Add session hosts to a host pool](add-session-hosts-host-pool.md). - [Enable diagnostics settings](diagnostics-log-analytics.md).+
+# [Azure PowerShell](#tab/powershell)
+
+Once you've deployed a host pool, workspace, and application group, you'll need to create session hosts before your users can connect. You can do this by following the steps in [Add session hosts to a host pool](add-session-hosts-host-pool.md).
+
+Here are some extra tasks you might want to do:
+
+- Configure profile management with FSLogix. To learn more, see [FSLogix profile containers](fslogix-containers-azure-files.md).
+
+- [Enable diagnostics settings](diagnostics-log-analytics.md).
+
+# [Azure CLI](#tab/cli)
+
+Once you've deployed a host pool, workspace, and application group, you'll need to create session hosts before your users can connect. You can do this by following the steps in [Add session hosts to a host pool](add-session-hosts-host-pool.md).
+
+Here are some extra tasks you might want to do:
+
+- Configure profile management with FSLogix. To learn more, see [FSLogix profile containers](fslogix-containers-azure-files.md).
+
+- [Enable diagnostics settings](diagnostics-log-analytics.md).
virtual-desktop Enroll Per User Access Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/enroll-per-user-access-pricing.md
Last updated 01/08/2024
Per-user access pricing lets you pay for Azure Virtual Desktop access rights on behalf of external users. External users aren't members of your organization, such as customers of a business. To learn more about licensing options, see [Licensing Azure Virtual Desktop](licensing.md).
-Before external users can connect to your deployment, you need to enroll your Azure subscriptions that you use for Azure Virtual Desktop in per-user access pricing. Your enrolled subscription is charged each month based on the number of distinct users that connect to Azure Virtual Desktop resources.
+Before external users can connect to your deployment, you need to enroll your Azure subscriptions that you use for Azure Virtual Desktop in per-user access pricing. Your enrolled subscription is charged each month based on the number of distinct users that connect to Azure Virtual Desktop resources. All Azure subscriptions are applicable, such as those from an [Enterprise Agreement (EA)](/azure/cloud-adoption-framework/ready/landing-zone/design-area/azure-billing-enterprise-agreement), [Cloud Solution Provider (CSP)](/azure/cloud-adoption-framework/ready/landing-zone/design-area/azure-billing-cloud-solution-provider), or [Microsoft Customer Agreement](/azure/cloud-adoption-framework/ready/landing-zone/design-area/azure-billing-microsoft-customer-agreement).
> [!IMPORTANT] > Per-user access pricing with Azure Virtual Desktop doesn't currently support Citrix DaaS and VMware Horizon Cloud.
virtual-desktop Key Distribution Center Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/key-distribution-center-proxy.md
There are two components to the Azure Virtual Desktop service that need to be au
This article will show you how to configure the feed in the Azure Virtual Desktop client in the Azure portal. If you want to learn how to configure the RD Gateway role, see [Deploy the RD Gateway role](/windows-server/remote/remote-desktop-services/remote-desktop-gateway-role).
-## Requirements
+## Prerequisites
To configure an Azure Virtual Desktop session host with a KDC proxy, you'll need the following things:
virtual-desktop Set Up Scaling Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-scaling-script.md
Now that you have an Azure Automation account, you'll also need to set up a [man
To set up a managed identity, follow the directions in [Using a system-assigned managed identity for an Azure Automation account](../automation/enable-managed-identity-for-automation.md). Once you've created a managed identity, assign it with appropriate contributor permissions to Azure Virtual Desktop resources such as host pools, VMs, etc. Once you're done, return to this article and [Create the Azure Logic App and execution schedule](#create-the-azure-logic-app-and-execution-schedule) to finish the initial setup process.
-> [!IMPORTANT]
-> As of April 1, 2023, Run As accounts no longer work. We recommend you use [managed identities](../automation/automation-security-overview.md#managed-identities) instead. If you need help switching from your Run As account to a managed identity, see [Migrate from an existing Run As account to a managed identity](../automation/migrate-run-as-accounts-managed-identity.md).
->
-> Autoscale is an alternative way to scale session host VMs and is a native feature of Azure Virtual Desktop. We recommend you use Autoscale instead. For more information, see [Autoscale scaling plans](autoscale-scenarios.md).
- ## Create the Azure Logic App and execution schedule Finally, you'll need to create the Azure Logic App and set up an execution schedule for your new scaling tool. First, download and import the [Desktop Virtualization PowerShell module](powershell-module.md) to use in your PowerShell session if you haven't already.
virtual-desktop Manual Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manual-delete.md
This article describes how to delete Azure Virtual Desktop (classic).
> When you delete resources as described in this article, these actions are irreversible.
-## Requirements
+## Prerequisites
Before you begin, make sure you have the following things ready:
virtual-desktop Watermarking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/watermarking.md
Title: Watermarking in Azure Virtual Desktop
description: Learn how to enable watermarking in Azure Virtual Desktop to help prevent sensitive information from being captured on client endpoints. Previously updated : 11/16/2023 Last updated : 01/19/2024 # Watermarking in Azure Virtual Desktop
You'll need the following things before you can use watermarking:
- [Windows Desktop client](users/connect-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json), version 1.2.3317 or later, on Windows 10 and later. - [Web client](users/connect-web.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json).
- - [macOS client](users/connect-macos.md).
+ - [macOS client](users/connect-macos.md), version 10.9.5 or later.
+ - [iOS client](users/connect-ios-ipados.md), version 10.5.4 or later.
- Note: iOS and Android clients don't support watermarking.
+ >[!NOTE]
+ >The Android client doesn't support watermarking.
- [Azure Virtual Desktop Insights](azure-monitor.md) configured for your environment.
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
A rollout may take several weeks before the agent is available in all environmen
| Production | 1.0.7909.2600 | | Validation | 1.0.7909.1200 |
-The agent is automatically installed when adding session hosts in most scenarios. If you need to download the agent, you find it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it.
+> [!TIP]
+> The Azure Virtual Desktop Agent is automatically installed when adding session hosts in most scenarios. If you need to install the agent manually, you can download it from [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), which also includes the steps to install it.
## Version 1.0.7909.2600
virtual-desktop Whats New Client Ios Ipados https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-ios-ipados.md
description: Learn about recent changes to the Remote Desktop client for iOS and
Previously updated : 11/08/2023 Last updated : 01/19/2024 # What's new in the Remote Desktop client for iOS and iPadOS
virtual-desktop Whats New Client Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-macos.md
description: Learn about recent changes to the Remote Desktop client for macOS
Previously updated : 10/02/2023 Last updated : 01/19/2024 # What's new in the Remote Desktop client for macOS
virtual-desktop Windows 11 Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/windows-11-language-packs.md
When your organization includes users with multiple different languages, you hav
The second option is more efficient in terms of resources and cost, but requires a few extra steps. Fortunately, this article will help walk you through how to build an image that can accommodate users of all languages and localization needs.
-## Requirements
+## Prerequisites
Before you can add languages to a Windows 11 Enterprise VM, you'll need to have the following things ready:
To create the content repository you'll use to add languages and features to you
1. Open the VM you want to add languages to in Azure.
-2. Open and mount the ISO file you downloaded in the [Requirements](#requirements) section above on the VM.
+2. Open and mount the ISO file you downloaded in the [Prerequisites](#prerequisites) section above on the VM.
3. Create a folder on the file share.
virtual-machine-scale-sets Virtual Machine Scale Sets Attach Detach Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-attach-detach-vm.md
New-AzVm `
### Attach an existing VM to a Virtual Machine Scale Set (Preview)
-Attach an existing virtual machine to a Virtual Machine Scale Set after the time of VM creation by specifying the `virtualMachineScaleSet` property. Attaching an existing VM to a scale set with a fault domain count of 1 does not require downtime.
+You can attach an existing virtual machine to a Virtual Machine Scale Set after the VM is created by specifying the `virtualMachineScaleSet` property. Attaching an existing VM to a scale set with a fault domain count of 1 doesn't require downtime.
#### Enroll in the Preview
Update-AzVMΓÇ»-ResourceGroupNameΓÇ»$resourceGroupNameΓÇ»-VMΓÇ»$vm -VirtualMachin
- The scale set must use Flexible orchestration mode. - The scale set must have a `platformFaultDomainCount` of **1**. - VMs created by the scale set must be `Stopped` prior to being detached.
+- Detaching VMs created by the scale set is currently not supported in West Central US, East Asia, UK South, and North Europe.
## Moving VMs between scale sets (Preview)
virtual-machine-scale-sets Virtual Machine Scale Sets Attach Detach Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vs-azure-tools-storage-explorer-blobs.md
ms.assetid: 2f09e545-ec94-4d89-b96c-14783cc9d7a9 - Last updated 05/21/2019
virtual-machine-scale-sets Virtual Machine Scale Sets Attach Detach Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vs-azure-tools-storage-explorer-files.md
description: Learn how to use Storage Explorer to work with file share
Last updated 03/09/2017