Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
applied-ai-services | Choose Model Feature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/choose-model-feature.md | The following decision charts highlight the features of each **Form Recognizer v ## Next steps -* [Learn how to process your own forms and documents](quickstarts/try-v3-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) +* [Learn how to process your own forms and documents](quickstarts/try-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) |
applied-ai-services | Concept Accuracy Confidence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md | Variances in the visual structure of your documents affect the accuracy of your ## Next step > [!div class="nextstepaction"]-> [Learn to create custom models ](quickstarts/try-v3-form-recognizer-studio.md#custom-models) +> [Learn to create custom models ](quickstarts/try-form-recognizer-studio.md#custom-models) |
applied-ai-services | Concept Form Recognizer Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-form-recognizer-studio.md | monikerRange: 'form-recog-3.0.0' **This article applies to:**  **Form Recognizer v3.0**. -[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications. Use the [Form Recognizer Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) to get started analyzing documents with pretrained models. Build custom template models and reference the models in your applications using the [Python SDK v3.0](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and other quickstarts. +[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications. Use the [Form Recognizer Studio quickstart](quickstarts/try-form-recognizer-studio.md) to get started analyzing documents with pretrained models. Build custom template models and reference the models in your applications using the [Python SDK v3.0](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and other quickstarts. The following image shows the Invoice prebuilt model feature at work. The following Form Recognizer service features are available in the Studio. * Refer to our [**v3.0 REST API quickstarts**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API. > [!div class="nextstepaction"]-> [Form Recognizer Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) +> [Form Recognizer Studio quickstart](quickstarts/try-form-recognizer-studio.md) |
applied-ai-services | Concept Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md | See here for a [sample document file](https://github.com/Azure-Samples/cognitive The JSON output has two parts: * `readResults` node contains all of the recognized text and selection mark. The text presentation hierarchy is page, then line, then individual words.-* `pageResults` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in "readResults". +* `pageResults` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in "readResults" field. ## Example Output Layout API also extracts selection marks from documents. Extracted selection mar ::: moniker range="form-recog-3.0.0" -* [Learn how to process your own forms and documents](quickstarts/try-v3-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) +* [Learn how to process your own forms and documents](quickstarts/try-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) * Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice. |
applied-ai-services | Build A Custom Classifier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md | Once you've put together the set of forms or documents for training, you need to The Form Recognizer Studio provides and orchestrates all the API calls required to complete your dataset and train your model. -1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you need to [initialize your subscription, resource group, and resource](../quickstarts/try-v3-form-recognizer-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset. +1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you need to [initialize your subscription, resource group, and resource](../quickstarts/try-form-recognizer-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-form-recognizer-studio.md#added-prerequisites-for-custom-projects) to configure the Studio to access your training dataset. 1. In the Studio, select the **Custom classification model** tile, on the custom models section of the page and select the **Create a project** button. |
applied-ai-services | Build A Custom Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-model.md | Follow these tips to further optimize your data set for training: ## Upload your training data -Once you've put together the set of forms or documents for training, you'll need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, following the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. +Once you've put together the set of forms or documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, following the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. ## Video: Train your custom model -* Once you've gathered and uploaded your training dataset, you're ready to train your custom model. In the following video, we'll create a project and explore some of the fundamentals for successfully labeling and training a model.</br></br> +* Once you've gathered and uploaded your training dataset, you're ready to train your custom model. In the following video, we create a project and explore some of the fundamentals for successfully labeling and training a model.</br></br> > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fX1c] Once you've put together the set of forms or documents for training, you'll need The Form Recognizer Studio provides and orchestrates all the API calls required to complete your dataset and train your model. -1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you'll need to [initialize your subscription, resource group, and resource](../quickstarts/try-v3-form-recognizer-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset. +1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you need to [initialize your subscription, resource group, and resource](../quickstarts/try-form-recognizer-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-form-recognizer-studio.md#added-prerequisites-for-custom-projects) to configure the Studio to access your training dataset. 1. In the Studio, select the **Custom models** tile, on the custom models page and select the **Create a project** button. The Form Recognizer Studio provides and orchestrates all the API calls required ## Label your data -In your project, your first task is to label your dataset with the fields you wish to extract. +In your project, your first task is to label your dataset with the fields you wish to extract. -You'll see the files you uploaded to storage on the left of your screen, with the first file ready to be labeled. +The files you uploaded to storage are listed on the left of your screen, with the first file ready to be labeled. 1. 
To start labeling your dataset, create your first field by selecting the plus (➕) button on the top-right of the screen to select a field type. You'll see the files you uploaded to storage on the left of your screen, with th 1. Enter a name for the field. -1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. You'll see the labeled value below the field name in the list of fields. +1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. The labeled value is below the field name in the list of fields. 1. Repeat the process for all the fields you wish to label for your dataset. 1. Label the remaining documents in your dataset by selecting each document and selecting the text to be labeled. -You now have all the documents in your dataset labeled. If you look at the storage account, you'll find a *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset and a new fields.json file. This training dataset will be submitted to train the model. +You now have all the documents in your dataset labeled. The *.labels.json* and *.ocr.json* files correspond to each document in your training dataset and a new fields.json file. This training dataset is submitted to train the model. ## Train your model Follow these tips to further optimize your data set for training. ## Upload your training data -When you've put together the set of form documents that you'll use for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier. +When you've put together the set of form documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier. -If you want to use manually labeled data, you'll also have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](../label-tool.md) (or your own UI) to generate these files. +If you want to use manually labeled data, upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](../label-tool.md) (or your own UI) to generate these files. ### Organize your data in subfolders (optional) -By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) API will only use documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. 
Normally, the body of the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container: +By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) API only uses documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container: ```json { By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/ } ``` -If you add the following content to the request body, the API will train with documents located in subfolders. The `"prefix"` field is optional and will limit the training data set to files whose paths begin with the given string. So a value of `"Test"`, for example, will cause the API to look at only the files or folders that begin with the word "Test". +If you add the following content to the request body, the API trains with documents located in subfolders. The `"prefix"` field is optional and limits the training data set to files whose paths begin with the given string. So a value of `"Test"`, for example, causes the API to look at only the files or folders that begin with the word *Test*. ```json { |
applied-ai-services | Compose Custom Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/compose-custom-models.md | While creating your custom models, you may need to extract data collections from * Specific collection of values for a given set of fields (columns and/or rows) -See [Form Recognizer Studio: labeling as tables](../quickstarts/try-v3-form-recognizer-studio.md#labeling-as-tables) +See [Form Recognizer Studio: labeling as tables](../quickstarts/try-form-recognizer-studio.md#labeling-as-tables) ### [REST API](#tab/rest) Once you have your label files, you can include them with by calling the trainin ### [Client libraries](#tab/sdks) -Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you've them, you can call the training method with the *useTrainingLabels* parameter set to `true`. +Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you have them, you can call the training method with the *useTrainingLabels* parameter set to `true`. |Language |Method| |--|--| Great! You've learned the steps to create custom and composed models and use the Try one of our Form Recognizer quickstarts: > [!div class="nextstepaction"]-> [Form Recognizer Studio](../quickstarts/try-v3-form-recognizer-studio.md) +> [Form Recognizer Studio](../quickstarts/try-form-recognizer-studio.md) > [!div class="nextstepaction"] > [REST API](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) |
applied-ai-services | Label Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md | You need an Azure subscription ([create one for free](https://azure.microsoft.co > [!NOTE] >-> If your storage data is behind a VNet or firewall, you must deploy the **Form Recognizer Sample Labeling tool** behind your VNet or firewall and grant access by creating a [system-assigned managed identity](managed-identity-byos.md "Azure managed identity is a service principal that creates an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources"). +> If your storage data is behind a VNet or firewall, you must deploy the **Form Recognizer Sample Labeling tool** behind your VNet or firewall and grant access by creating a [system-assigned managed identity](managed-identities.md "Azure managed identity is a service principal that creates an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources"). You use the Docker engine to run the Sample Labeling tool. Follow these steps to set up the Docker container. For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/). With Model Compose, you can compose up to 200 models to a single model ID. When ## Analyze a form -Select the Analyze icon from the navigation bar to test your model. Select source 'Local file'. Browse for a file and select a file from the sample dataset that you unzipped in the test folder. Then choose the **Run analysis** button to get key/value pairs, text and tables predictions for the form. The tool applies tags in bounding boxes and reports the confidence of each tag. +Select the Analyze icon from the navigation bar to test your model. Select source *Local file*. Browse for a file and select a file from the sample dataset that you unzipped in the test folder. Then choose the **Run analysis** button to get key/value pairs, text and tables predictions for the form. The tool applies tags in bounding boxes and reports the confidence of each tag. :::image type="content" source="media/analyze.png" alt-text="Screenshot: analyze-a-custom-form window"::: |
applied-ai-services | Managed Identities Secured Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md | That's it! You can now configure secure access for your Form Recognizer resource :::image type="content" source="media/managed-identities/cors-error.png" alt-text="Screenshot of error message when CORS config is required"::: - **Resolution**: [Configure CORS](quickstarts/try-v3-form-recognizer-studio.md#prerequisites-for-new-users). + **Resolution**: [Configure CORS](quickstarts/try-form-recognizer-studio.md#prerequisites-for-new-users). * **AuthorizationFailure**: |
applied-ai-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md | This article contains both a quick reference and detailed description of Azure F > > * [**Form Recognizer SDKs**](quickstarts/get-started-sdks-rest-api.md) > * [**Form Recognizer REST API**](quickstarts/get-started-sdks-rest-api.md)-> * [**Form Recognizer Studio v3.0**](quickstarts/try-v3-form-recognizer-studio.md) +> * [**Form Recognizer Studio v3.0**](quickstarts/try-form-recognizer-studio.md) ::: moniker-end ::: moniker range="form-recog-2.1.0" Initiate the increase of transactions per second(TPS) limit for your resource by * Go to [Azure portal](https://portal.azure.com/) * Select the Form Recognizer Resource for which you would like to increase the TPS limit * Select *New support request* (*Support + troubleshooting* group)-* A new window appears with auto-populated information about your Azure Subscription and Azure Resource +* A new window appears with autopopulated information about your Azure Subscription and Azure Resource * Enter *Summary* (like "Increase Form Recognizer TPS limit") * In Problem type,* select "Quota or usage validation" * Select *Next: Solutions* Initiate the increase of transactions per second(TPS) limit for your resource by ## Example of a workload pattern best practice -This example presents the approach we recommend following to mitigate possible request throttling due to [Autoscaling being in progress](#detailed-description-quota-adjustment-and-best-practices). It isn't an "exact recipe", but merely a template we invite to follow and adjust as necessary. +This example presents the approach we recommend following to mitigate possible request throttling due to [Autoscaling being in progress](#detailed-description-quota-adjustment-and-best-practices). It isn't an *exact recipe*, but merely a template we invite to follow and adjust as necessary. Let us suppose that a Form Recognizer resource has the default limit set. Start the workload to submit your analyze requests. If you find that you're seeing frequent throttling with response code 429, start by implementing an exponential backoff on the GET analyze response request. By using a progressively longer wait time between retries for consecutive error responses, for example a 2-5-13-34 pattern of delays between requests. In general, it's recommended to not call the get analyze response more than once every 2 seconds for a corresponding POST request. |
applied-ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md | Form Recognizer service is updated on an ongoing basis. Bookmark this page to st ## July 2021 -* System-assigned managed identity support: You can now enable a system-assigned managed identity to grant Form Recognizer limited access to private storage accounts including accounts protected by a Virtual Network (VNet) or firewall or have enabled bring-your-own-storage (BYOS). *See* [Create and use managed identity for your Form Recognizer resource](managed-identity-byos.md) to learn more. +* System-assigned managed identity support: You can now enable a system-assigned managed identity to grant Form Recognizer limited access to private storage accounts including accounts protected by a Virtual Network (VNet) or firewall or have enabled bring-your-own-storage (BYOS). _See_ [Create and use managed identity for your Form Recognizer resource](managed-identities.md) to learn more. |
azure-cache-for-redis | Cache Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md | You can restrict public access to the private endpoint of your cache by disablin >[!Important] > Private endpoint is supported on cache tiers Basic, Standard, Premium, and Enterprise. We recommend using private endpoint instead of VNets. Private endpoints are easy to set up or remove, are supported on all tiers, and can connect your cache to multiple different VNets at once. >-> +> When using the Basic tier, you might experience data loss when you delete and recreate a private endpoint. ## Prerequisites |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure NetApp Files](../../azure-netapp-files/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Azure Policy](../../governance/policy/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | ✅ | ✅ | ✅ | ✅ | |-| [Azure Red Hat OpenShift](../../openshift/index.yml) | ✅ | ✅ | | | | +| [Azure Red Hat OpenShift](../../openshift/index.yml) | ✅ | ✅ | ✅ | | | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Sign-up portal](https://signup.azure.com/) | ✅ | ✅ | ✅ | ✅ | | | [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) ***** | ✅ | ✅ | ✅ | ✅ | ✅ |-| [Azure Stack HCI](/azure-stack/hci/) | ✅ | ✅ | | | | -| [Azure Video Indexer](../../azure-video-indexer/index.yml) | ✅ | ✅ | | | | +| [Azure Stack HCI](/azure-stack/hci/) | ✅ | ✅ | ✅ | | | +| [Azure Video Indexer](../../azure-video-indexer/index.yml) | ✅ | ✅ | ✅ | | | | [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Backup](../../backup/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Bastion](../../bastion/index.yml) | ✅ | ✅ | ✅ | ✅ | | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Virtual Network](../../virtual-network/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Virtual WAN](../../virtual-wan/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ |-| [VM Image Builder](../../virtual-machines/image-builder-overview.md) | ✅ | ✅ | | | | +| [VM Image Builder](../../virtual-machines/image-builder-overview.md) | ✅ | ✅ | ✅ | | | | [VPN Gateway](../../vpn-gateway/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Web Application Firewall](../../web-application-firewall/index.yml) | ✅ | ✅ | ✅ | ✅ | | |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 06/23/2023 Last updated : 01/24/2023 ms.devlang: csharp, java, javascript, vb |
azure-monitor | Api Filtering Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md | What's the difference between telemetry processors and telemetry initializers? * Confirm that the fully qualified type name and assembly name are correct. * Confirm that the applicationinsights.config file is in your output directory and contains any recent changes. +## Azure Monitor Telemetry Data Types Reference ++ * [ASP.NET Core SDK](https://learn.microsoft.com/dotnet/api/microsoft.applicationinsights.datacontracts?view=azure-dotnet) + * [ASP.NET SDK](https://learn.microsoft.com/dotnet/api/microsoft.applicationinsights.datacontracts?view=azure-dotnet) + * [Node.js SDK](https://github.com/Microsoft/ApplicationInsights-node.js/tree/develop/Declarations/Contracts/TelemetryTypes) + * [Java SDK (via config)](https://learn.microsoft.com/azure/azure-monitor/app/java-in-process-agent#modify-telemetry) + * [Python SDK](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/common/protocol.py) + * [JavaScript SDK](https://github.com/microsoft/ApplicationInsights-JS/tree/master/shared/AppInsightsCommon/src/Telemetry) + ## Reference docs * [API overview](./api-custom-events-metrics.md) |
azure-monitor | App Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md | Title: Application Map in Azure Application Insights | Microsoft Docs description: Monitor complex application topologies with Application Map and Intelligent view. Previously updated : 06/23/2023 Last updated : 11/15/2022 ms.devlang: csharp, java, javascript, python |
azure-monitor | Availability Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md | Title: Review TrackAvailability() test results description: This article explains how to review data logged by TrackAvailability() tests Previously updated : 04/06/2023 Last updated : 06/23/2023 # Review TrackAvailability() test results |
azure-monitor | Azure Web Apps Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md | Title: Monitor Azure app services performance Java | Microsoft Docs description: Application performance monitoring for Azure app services using Java. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 03/22/2023 Last updated : 06/23/2023 ms.devlang: java |
azure-monitor | Custom Operations Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md | description: Learn how to track custom operations with the Application Insights ms.devlang: csharp Previously updated : 06/23/2023 Last updated : 11/26/2019 |
azure-monitor | Kubernetes Codeless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md | Title: Monitor applications on AKS with Application Insights - Azure Monitor | M description: Azure Monitor integrates seamlessly with your application running on Azure Kubernetes Service and allows you to spot the problems with your apps quickly. Previously updated : 06/23/2023 Last updated : 11/15/2022 |
azure-monitor | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md | Title: Monitor Node.js services with Application Insights | Microsoft Docs description: Monitor performance and diagnose problems in Node.js services with Application Insights. Previously updated : 11/15/2022 Last updated : 06/23/2023 ms.devlang: javascript |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | Title: Azure Monitor OpenTelemetry configuration for .NET, Java, Node.js, and Python applications description: This article provides configuration guidance for .NET, Java, Node.js, and Python applications. Previously updated : 05/10/2023 Last updated : 06/23/2023 ms.devlang: csharp, javascript, typescript, python |
azure-monitor | Transaction Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md | Title: Application Insights transaction diagnostics | Microsoft Docs description: This article explains Application Insights end-to-end transaction diagnostics. Previously updated : 06/23/2023 Last updated : 11/15/2022 |
azure-resource-manager | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/best-practices.md | For more information about Bicep variables, see [Variables in Bicep](variables.m * Avoid using `name` in a symbolic name. The symbolic name represents the resource, not the resource's name. For example, instead of the following syntax: ```bicep- resource cosmosDBAccountName 'Microsoft.DocumentDB/databaseAccounts@2021-04-15' = { + resource cosmosDBAccountName 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = { ``` Use: ```bicep- resource cosmosDBAccount 'Microsoft.DocumentDB/databaseAccounts@2021-04-15' = { + resource cosmosDBAccount 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = { ``` * Avoid distinguishing variables and parameters by the use of suffixes. |
azure-resource-manager | Bicep Functions Any | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-any.md | Title: Bicep functions - any description: Describes the any function that is available in Bicep to convert types. Previously updated : 09/09/2021 Last updated : 06/23/2023 # Any function for Bicep The value in a form that is compatible with any data type. The following example shows how to use the `any()` function to provide numeric values as strings. ```bicep-resource wpAci 'microsoft.containerInstance/containerGroups@2019-12-01' = { +resource wpAci 'Microsoft.ContainerInstance/containerGroups@2023-05-01' = { name: 'wordpress-containerinstance' location: location properties: { |
azure-resource-manager | Bicep Functions Date | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-date.md | Title: Bicep functions - date description: Describes the functions to use in a Bicep file to work with dates.-- Previously updated : 05/03/2022 Last updated : 06/23/2023 # Date functions for Bicep var startTime = dateTimeAdd(baseTime, 'PT1H') ... -resource scheduler 'Microsoft.Automation/automationAccounts/schedules@2015-10-31' = { +resource scheduler 'Microsoft.Automation/automationAccounts/schedules@2022-08-08' = { name: concat(omsAutomationAccountName, '/', scheduleName) properties: { description: 'Demo Scheduler' The next example shows how to use a value from the function when setting a tag v param utcShort string = utcNow('d') param rgName string -resource myRg 'Microsoft.Resources/resourceGroups@2020-10-01' = { +resource myRg 'Microsoft.Resources/resourceGroups@2022-09-01' = { name: rgName location: 'westeurope' tags: { |
azure-resource-manager | Bicep Functions Numeric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-numeric.md | Title: Bicep functions - numeric description: Describes the functions to use in a Bicep file to work with numbers.-- Previously updated : 09/30/2021 Last updated : 06/23/2023 # Numeric functions for Bicep |
azure-resource-manager | Child Resource Name Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/child-resource-name-type.md | Title: Child resources in Bicep description: Describes how to set the name and type for child resources in Bicep.-- Previously updated : 09/13/2021 Last updated : 06/23/2023 # Set name and type for child resources in Bicep output childAddressPrefix string = VNet1::VNet1_Subnet1.properties.addressPrefix ## Outside parent resource -The following example shows the child resource outside of the parent resource. You might use this approach if the parent resource isn't deployed in the same template, or if want to use [a loop](loops.md) to create more than one child resource. Specify the parent property on the child with the value set to the symbolic name of the parent. With this syntax you still need to declare the full resource type, but the name of the child resource is only the name of the child. +The following example shows the child resource outside of the parent resource. You might use this approach if the parent resource isn't deployed in the same template, or if you want to use [a loop](loops.md) to create more than one child resource. Specify the parent property on the child with the value set to the symbolic name of the parent. With this syntax you still need to declare the full resource type, but the name of the child resource is only the name of the child. ```bicep resource <parent-resource-symbolic-name> '<resource-type>@<api-version>' = { |
azure-resource-manager | Compare Template Syntax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/compare-template-syntax.md | Title: Compare syntax for Azure Resource Manager templates in JSON and Bicep description: Compares Azure Resource Manager templates developed with JSON and Bicep, and shows how to convert between the languages.-- Previously updated : 04/26/2022 Last updated : 06/23/2023 + # Comparing JSON and Bicep for templates This article compares Bicep syntax with JSON syntax for Azure Resource Manager templates (ARM templates). In most cases, Bicep provides syntax that is less verbose than the equivalent in JSON. targetScope = 'subscription' To declare a resource: ```bicep-resource virtualMachine 'Microsoft.Compute/virtualMachines@2020-06-01' = { +resource virtualMachine 'Microsoft.Compute/virtualMachines@2023-03-01' = { ... } ``` resource virtualMachine 'Microsoft.Compute/virtualMachines@2020-06-01' = { To conditionally deploy a resource: ```bicep-resource virtualMachine 'Microsoft.Compute/virtualMachines@2020-06-01' = if(deployVM) { +resource virtualMachine 'Microsoft.Compute/virtualMachines@2023-03-01' = if(deployVM) { ... } ``` resource virtualMachine 'Microsoft.Compute/virtualMachines@2020-06-01' = if(depl { "condition": "[parameters('deployVM')]", "type": "Microsoft.Compute/virtualMachines",- "apiVersion": "2020-06-01", + "apiVersion": "2023-03-01", ... } ] For Bicep, you can set an explicit dependency but this approach isn't recommende The following shows a network interface with an implicit dependency on a network security group. It references the network security group with `netSecurityGroup.id`. ```bicep-resource netSecurityGroup 'Microsoft.Network/networkSecurityGroups@2020-06-01' = { +resource netSecurityGroup 'Microsoft.Network/networkSecurityGroups@2022-11-01' = { ... } -resource nic1 'Microsoft.Network/networkInterfaces@2020-06-01' = { +resource nic1 'Microsoft.Network/networkInterfaces@2022-11-01' = { name: nic1Name location: location properties: { storageAccount.properties.primaryEndpoints.blob To get a property from an existing resource that isn't deployed in the template: ```bicep-resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' existing = { +resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = { name: storageAccountName } |
azure-resource-manager | Contribute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/contribute.md | |
azure-resource-manager | Deploy Cloud Shell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cloud-shell.md | Title: Deploy Bicep files with Cloud Shell description: Use Azure Resource Manager and Azure Cloud Shell to deploy resources to Azure. The resources are defined in a Bicep file.-- Previously updated : 06/01/2021 Last updated : 06/23/2023 # Deploy Bicep files from Azure Cloud Shell |
azure-resource-manager | Deploy To Management Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-management-group.md | Title: Use Bicep to deploy resources to management group description: Describes how to create a Bicep file that deploys resources at the management group scope. Previously updated : 11/22/2021 Last updated : 06/23/2023 # Management group deployments with Bicep files To deploy resources to the target management group, add those resources with the targetScope = 'managementGroup' // policy definition created in the management group-resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2019-09-01' = { +resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01' = { ... } ``` targetScope = 'managementGroup' param mgName string = 'mg-${uniqueString(newGuid())}' -resource newMG 'Microsoft.Management/managementGroups@2020-05-01' = { +resource newMG 'Microsoft.Management/managementGroups@2021-04-01' = { scope: tenant() name: mgName properties: {} targetScope = 'managementGroup' param mgName string = 'mg-${uniqueString(newGuid())}' -resource newMG 'Microsoft.Management/managementGroups@2020-05-01' = { +resource newMG 'Microsoft.Management/managementGroups@2021-04-01' = { scope: tenant() name: mgName properties: { param allowedLocations array = [ 'australiacentral' ] -resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2020-09-01' = { +resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01' = { name: 'locationRestriction' properties: { policyType: 'Custom' resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2020-09-01' } } -resource policyAssignment 'Microsoft.Authorization/policyAssignments@2020-09-01' = { +resource policyAssignment 'Microsoft.Authorization/policyAssignments@2022-06-01' = { name: 'locationAssignment' properties: { policyDefinitionId: policyDefinition.id |
azure-resource-manager | Deploy To Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md | Title: Use Bicep to deploy resources to subscription description: Describes how to create a Bicep file that deploys resources to the Azure subscription scope. It shows how to create a resource group. Previously updated : 11/22/2021 Last updated : 06/23/2023 # Subscription deployments with Bicep files To deploy resources to the target subscription, add those resources with the `re targetScope = 'subscription' // resource group created in target subscription-resource exampleResource 'Microsoft.Resources/resourceGroups@2020-10-01' = { +resource exampleResource 'Microsoft.Resources/resourceGroups@2022-09-01' = { ... } ``` targetScope = 'subscription' param mgName string = 'mg-${uniqueString(newGuid())}' // management group created at tenant-resource managementGroup 'Microsoft.Management/managementGroups@2020-05-01' = { +resource managementGroup 'Microsoft.Management/managementGroups@2021-04-01' = { scope: tenant() name: mgName properties: {} targetScope='subscription' param resourceGroupName string param resourceGroupLocation string -resource newRG 'Microsoft.Resources/resourceGroups@2021-01-01' = { +resource newRG 'Microsoft.Resources/resourceGroups@2022-09-01' = { name: resourceGroupName location: resourceGroupLocation } param resourceGroupLocation string param storageName string param storageLocation string -resource newRG 'Microsoft.Resources/resourceGroups@2021-01-01' = { +resource newRG 'Microsoft.Resources/resourceGroups@2022-09-01' = { name: resourceGroupName location: resourceGroupLocation } The module uses a Bicep file named **storage.bicep** with the following contents param storageLocation string param storageName string -resource storageAcct 'Microsoft.Storage/storageAccounts@2019-06-01' = { +resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: storageName location: storageLocation sku: { param policyDefinitionID string param policyName string param policyParameters object = {} -resource policyAssign 'Microsoft.Authorization/policyAssignments@2020-09-01' = { +resource policyAssign 'Microsoft.Authorization/policyAssignments@2022-06-01' = { name: policyName properties: { policyDefinitionId: policyDefinitionID You can [define](../../governance/policy/concepts/definition-structure.md) and a ```bicep targetScope = 'subscription' -resource locationPolicy 'Microsoft.Authorization/policyDefinitions@2020-09-01' = { +resource locationPolicy 'Microsoft.Authorization/policyDefinitions@2021-06-01' = { name: 'locationpolicy' properties: { policyType: 'Custom' resource locationPolicy 'Microsoft.Authorization/policyDefinitions@2020-09-01' = } } -resource locationRestrict 'Microsoft.Authorization/policyAssignments@2020-09-01' = { +resource locationRestrict 'Microsoft.Authorization/policyAssignments@2022-06-01' = { name: 'allowedLocation' properties: { policyDefinitionId: locationPolicy.id param roleAssignmentName string = guid(principalId, roleDefinitionId, resourceGr var roleID = '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${roleDefinitionId}' -resource newResourceGroup 'Microsoft.Resources/resourceGroups@2019-10-01' = { +resource newResourceGroup 'Microsoft.Resources/resourceGroups@2022-09-01' = { name: resourceGroupName location: resourceGroupLocation properties: {} module assignRole 'role.bicep' = { The following example shows the module to 
apply the lock: ```bicep-resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' = { +resource createRgLock 'Microsoft.Authorization/locks@2020-05-01' = { name: 'rgLock' properties: { level: 'CanNotDelete' param roleNameGuid string = newGuid() param roleDefinitionId string -resource roleNameGuid_resource 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = { +resource roleNameGuid_resource 'Microsoft.Authorization/roleAssignments@2022-04-01' = { name: roleNameGuid properties: { roleDefinitionId: roleDefinitionId |
azure-resource-manager | Deploy To Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-tenant.md | Title: Use Bicep to deploy resources to tenant description: Describes how to deploy resources at the tenant scope in a Bicep file. Previously updated : 11/22/2021 Last updated : 06/23/2023 # Tenant deployments with Bicep file Resources defined within the Bicep file are applied to the tenant. targetScope = 'tenant' // create resource at tenant-resource mgName_resource 'Microsoft.Management/managementGroups@2020-02-01' = { +resource mgName_resource 'Microsoft.Management/managementGroups@2021-04-01' = { ... } ``` The following template creates a management group. targetScope = 'tenant' param mgName string = 'mg-${uniqueString(newGuid())}' -resource mgName_resource 'Microsoft.Management/managementGroups@2020-02-01' = { +resource mgName_resource 'Microsoft.Management/managementGroups@2021-04-01' = { name: mgName properties: {} } param roleDefinitionId string = '8e3af657-a8ff-443c-a75c-2fe8c4bcb635' var roleAssignmentName = guid(principalId, roleDefinitionId) -resource roleAssignment 'Microsoft.Authorization/roleAssignments@2020-03-01-preview' = { +resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = { name: roleAssignmentName properties: { roleDefinitionId: tenantResourceId('Microsoft.Authorization/roleDefinitions', roleDefinitionId) |
azure-resource-manager | Existing Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/existing-resource.md | Title: Reference existing resource in Bicep description: Describes how to reference a resource that already exists. Previously updated : 02/04/2022 Last updated : 06/23/2023 # Existing resources in Bicep The resource isn't redeployed when referenced with the `existing` keyword. The following example gets an existing storage account in the same resource group as the current deployment. Notice that you provide only the name of the existing resource. The properties are available through the symbolic name. ```bicep-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = { +resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' existing = { name: 'examplestorage' } output blobEndpoint string = stg.properties.primaryEndpoints.blob Set the `scope` property to access a resource in a different scope. The following example references an existing storage account in a different resource group. ```bicep-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = { +resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' existing = { name: 'examplestorage' scope: resourceGroup(exampleRG) } |
azure-resource-manager | Key Vault Parameter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md | Title: Key Vault secret with Bicep description: Shows how to pass a secret from a key vault as a parameter during Bicep deployment. Previously updated : 06/15/2023 Last updated : 06/23/2023 # Use Azure Key Vault to pass secure parameter value during Bicep deployment param subscriptionId string param kvResourceGroup string param kvName string -resource kv 'Microsoft.KeyVault/vaults@2019-09-01' existing = { +resource kv 'Microsoft.KeyVault/vaults@2023-02-01' existing = { name: kvName scope: resourceGroup(subscriptionId, kvResourceGroup ) } If you don't want to use a module, you can reference the key vault directly in t The following Bicep file deploys a SQL server that includes an administrator password. The password parameter is set to a secure string. But the Bicep doesn't specify where that value comes from. ```bicep+param location string = resourceGroup().location param adminLogin string @secure() param adminPassword string param sqlServerName string -resource sqlServer 'Microsoft.Sql/servers@2020-11-01-preview' = { +resource sqlServer 'Microsoft.Sql/servers@2022-11-01-preview' = { name: sqlServerName- location: resourceGroup().location + location: location properties: { administratorLogin: adminLogin administratorLoginPassword: adminPassword |
azure-resource-manager | Linter Rule Admin Username Should Not Be Literal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-admin-username-should-not-be-literal.md | Title: Linter rule - admin user name should not be literal description: Linter rule - admin user name should not be a literal Previously updated : 11/18/2021 Last updated : 06/23/2023 # Linter rule - admin user name should not be literal Don't use a literal value or an expression that evaluates to a literal value. In The following example fails this test because the user name is a literal value. ```bicep-resource vm 'Microsoft.Compute/virtualMachines@2020-12-01' = { +resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = { name: 'name' location: location properties: { The next example fails this test because the expression evaluates to a literal v ```bicep var defaultAdmin = 'administrator'-resource vm 'Microsoft.Compute/virtualMachines@2020-12-01' = { +resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = { name: 'name' location: location properties: { This example passes this test. @secure() param adminUsername string param location string-resource vm 'Microsoft.Compute/virtualMachines@2020-12-01' = { +resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = { name: 'name' location: location properties: { |
azure-resource-manager | Linter Rule Explicit Values For Loc Params | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-explicit-values-for-loc-params.md | Title: Linter rule - use explicit values for module location parameters description: Linter rule - use explicit values for module location parameters Previously updated : 1/6/2022 Last updated : 06/23/2023 # Linter rule - use explicit values for module location parameters A parameter that defaults to a resource group's or deployment's location is conv The following example fails this test. Module `m1`'s parameter `location` isn't assigned an explicit value, so it will default to `resourceGroup().location`, as specified in *module1.bicep*. But using the resource group location may not be the intended behavior, since other resources in *main.bicep* might be created in a different location than the resource group's location. *main.bicep*:+ ```bicep param location string = 'eastus' module m1 'module1.bicep' = { name: 'm1' } -resource storageaccount 'Microsoft.Storage/storageAccounts@2021-02-01' = { +resource storageaccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: 'storageaccount' location: location kind: 'StorageV2' resource storageaccount 'Microsoft.Storage/storageAccounts@2021-02-01' = { ``` *module1.bicep*:+ ```bicep param location string = resourceGroup().location- -resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = { ++resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: 'stg' location: location kind: 'StorageV2' resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = { You can fix the failure by explicitly passing in a value for the module's `location` property: *main.bicep*:+ ```bicep param location string = 'eastus' module m1 'module1.bicep' = { location: location // An explicit value will override the default value specified in module1.bicep } }- -resource storageaccount 'Microsoft.Storage/storageAccounts@2021-02-01' = { ++resource storageaccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: 'storageaccount' location: location kind: 'StorageV2' |
azure-resource-manager | Linter Rule Max Outputs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-max-outputs.md | Title: Linter rule - max outputs description: Linter rule - max outputs. Previously updated : 02/03/2022 Last updated : 06/23/2023 # Linter rule - max outputs |
azure-resource-manager | Linter Rule Max Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-max-parameters.md | Title: Linter rule - max parameters description: Linter rule - max parameters. Previously updated : 02/03/2022 Last updated : 06/23/2023 # Linter rule - max parameters |
azure-resource-manager | Linter Rule Max Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-max-resources.md | Title: Linter rule - max resources description: Linter rule - max resources. Previously updated : 02/03/2022 Last updated : 06/23/2023 # Linter rule - max resources |
azure-resource-manager | Linter Rule Max Variables | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-max-variables.md | Title: Linter rule - max variables description: Linter rule - max variables. Previously updated : 02/03/2022 Last updated : 06/23/2023 # Linter rule - max variables |
azure-resource-manager | Linter Rule No Hardcoded Environment Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-hardcoded-environment-urls.md | Title: Linter rule - no hardcoded environment URL description: Linter rule - no hardcoded environment URL Previously updated : 11/18/2021 Last updated : 06/23/2023 # Linter rule - no hardcoded environment URL In some cases, you can fix it by getting a property from a resource you've deplo ```bicep param storageAccountName string+param location string = resourceGroup().location -resource sa 'Microsoft.Storage/storageAccounts@2021-04-01' = { +resource sa 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: storageAccountName- location: 'westus' + location: location sku: { name: 'Standard_LRS' } output endpoint string = sa.properties.primaryEndpoints.web ## Configuration -By default, this rule uses the following settings for determining which URLs are disallowed. +By default, this rule uses the following settings for determining which URLs are disallowed. ```json "analyzers": { |
azure-resource-manager | Linter Rule Protect Commandtoexecute Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-protect-commandtoexecute-secrets.md | Title: Linter rule - use protectedSettings for commandToExecute secrets description: Linter rule - use protectedSettings for commandToExecute secrets Previously updated : 12/17/2021 Last updated : 06/23/2023 # Linter rule - use protectedSettings for commandToExecute secrets param location string param fileUris string param storageAccountName string -resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' existing = { +resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = { name: storageAccountName } -resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2019-08-02-preview' = { +resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2023-03-15-preview' = { name: '${vmName}/CustomScriptExtension' location: location properties: { param location string param fileUris string param storageAccountName string -resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' existing = { +resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = { name: storageAccountName } -resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2019-08-02-preview' = { +resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2023-03-15-preview' = { name: '${vmName}/CustomScriptExtension' location: location properties: { |
azure-resource-manager | Linter Rule Use Stable Vm Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-stable-vm-image.md | Title: Linter rule - use stable VM image description: Linter rule - use stable VM image Previously updated : 12/15/2021 Last updated : 06/23/2023 # Linter rule - use stable VM image Use the following value in the [Bicep configuration file](bicep-config-linter.md The following example fails this test. ```bicep-resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = { +param location string = resourceGroup().location ++resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = { name: 'virtualMachineName'- location: resourceGroup().location + location: location properties: { storageProfile: { imageReference: { resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = { You can fix it by using an image that does not contain the string `preview` in the imageReference. ```bicep-resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = { +param location string = resourceGroup().location ++resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = { name: 'virtualMachineName'- location: resourceGroup().location + location: location properties: { storageProfile: { imageReference: { |
azure-resource-manager | Operators Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operators-access.md | Title: Bicep accessor operators description: Describes Bicep resource access operator and property access operator.-- Previously updated : 09/10/2021 Last updated : 06/23/2023 # Bicep accessor operators var arrayVar = [ ] output accessorResult string = arrayVar[1]-``` +``` Output from the example: Two functions - [getSecret](bicep-functions-resource.md#getsecret) and [list*](b The following example references an existing key vault, then uses `getSecret` to pass a secret to a module. ```bicep-resource kv 'Microsoft.KeyVault/vaults@2019-09-01' existing = { +resource kv 'Microsoft.KeyVault/vaults@2023-02-01' existing = { name: kvName scope: resourceGroup(subscriptionId, kvResourceGroup ) } Within the parent resource, you reference the nested resource with just the symb The following example shows how to reference a nested resource from within the parent resource and from outside of the parent resource. ```bicep-resource demoParent 'demo.Rp/parentType@2020-01-01' = { +resource demoParent 'demo.Rp/parentType@2023-01-01' = { name: 'demoParent' location: 'West US' Output from the example: Typically, you use the property accessor with a resource deployed in the Bicep file. The following example creates a public IP address and uses property accessors to return a value from the deployed resource. ```bicep-resource publicIp 'Microsoft.Network/publicIPAddresses@2020-06-01' = { +resource publicIp 'Microsoft.Network/publicIPAddresses@2022-11-01' = { name: publicIpResourceName location: location properties: { |
azure-resource-manager | Operators Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operators-comparison.md | Title: Bicep comparison operators description: Describes Bicep comparison operators that compare values.-- Previously updated : 09/07/2021 Last updated : 06/23/2023 # Bicep comparison operators |
azure-resource-manager | Operators Numeric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operators-numeric.md | Title: Bicep numeric operators description: Describes Bicep numeric operators that calculate values.-- Previously updated : 06/01/2021 Last updated : 06/23/2023 # Bicep numeric operators |
azure-resource-manager | Patterns Configuration Set | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-configuration-set.md | |
azure-resource-manager | Patterns Logical Parameter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-logical-parameter.md | |
azure-resource-manager | Patterns Name Generation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-name-generation.md | The following example generates the names for two storage accounts for a differe > ```bicep > var uniqueNameComponent = uniqueString(resourceGroup().id) > ```- > + > > The name of the resource group (`resourceGroup().name`) may not be sufficiently unique to enable you to reuse the file across subscriptions. - Avoid changing the seed values for the `uniqueString()` function after resources have been deployed. Changing the seed value results in new names, and might affect your production resources. |
azure-resource-manager | Quickstart Create Template Specs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-template-specs.md | Title: Create and deploy a template spec with Bicep description: Learn how to use Bicep to create and deploy a template spec to a resource group in your Azure subscription. Then, use a template spec to deploy Azure resources. Previously updated : 03/30/2022 Last updated : 06/23/2023 # Customer intent: As a developer I want to use Bicep to create and share deployment templates so that other people in my organization can deploy Microsoft Azure resources. param location string = resourceGroup().location var storageAccountName = 'storage${uniqueString(resourceGroup().id)}' -resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = { +resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: storageAccountName location: location sku: { You can create a template spec with a Bicep file but the `mainTemplate` must be @description('Location for all resources.') param location string = resourceGroup().location - resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2021-05-01' = { + resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2022-02-01' = { name: templateSpecName location: location } - resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2021-05-01' = { + resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2022-02-01' = { parent: createTemplateSpec name: templateSpecVersionName location: location You can create a template spec with a Bicep file but the `mainTemplate` must be 'resources': [ { 'type': 'Microsoft.Storage/storageAccounts'- 'apiVersion': '2021-08-01' + 'apiVersion': '2022-09-01' 'name': '[variables(\'storageAccountName\')]' 'location': '[parameters(\'location\')]' 'sku': { param storageNamePrefix string = 'storage' var storageAccountName = '${toLower(storageNamePrefix)}${uniqueString(resourceGroup().id)}' -resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = { +resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: storageAccountName location: location sku: { Rather than create a new template spec for the revised template, add a new versi @description('Location for all resources.') param location string = resourceGroup().location - resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2021-05-01' = { + resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2022-02-01' = { name: templateSpecName location: location } - resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2021-05-01' = { + resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2022-02-01' = { parent: createTemplateSpec name: templateSpecVersionName location: location Rather than create a new template spec for the revised template, add a new versi 'resources': [ { 'type': 'Microsoft.Storage/storageAccounts'- 'apiVersion': '2021-08-01' + 'apiVersion': '2022-09-01' 'name': '[variables(\'storageAccountName\')]' 'location': '[parameters(\'location\')]' 'sku': { |
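As a companion to the API version updates above, a rough sketch of deploying a published template spec version from Python with the Azure SDK follows; the resource group, spec name, version, and parameter values are illustrative assumptions rather than values from the quickstart.

```python
# Rough sketch (assumptions, not the quickstart's own steps): deploy a template
# spec version by its resource ID with the Azure SDK for Python.
# Requires: pip install azure-identity azure-mgmt-resource
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Resource ID of an existing template spec version (placeholder names).
template_spec_version_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/templateSpecsRg"
    "/providers/Microsoft.Resources/templateSpecs/storageSpec/versions/1.0"
)

# Deploy the template spec into a target resource group by linking to its ID.
deployment = client.deployments.begin_create_or_update(
    "demoRg",
    "storageSpecDeployment",
    {
        "properties": {
            "mode": "Incremental",
            "templateLink": {"id": template_spec_version_id},
            "parameters": {"storageNamePrefix": {"value": "storage"}},
        }
    },
).result()
print(deployment.properties.provisioning_state)
```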
azure-resource-manager | Quickstart Loops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-loops.md | Title: Create multiple resource instances in Bicep description: Use different methods to create multiple resource instances in Bicep Previously updated : 12/06/2021 Last updated : 06/23/2023 #Customer intent: As a developer new to Azure deployment, I want to learn how to create multiple resources in Bicep. |
azure-resource-manager | Scenarios Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-rbac.md | |
azure-resource-manager | Manage Resource Groups Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md | content_well_notification: Learn how to use Python with [Azure Resource Manager](overview.md) to manage your Azure resource groups. -<!--[!INCLUDE [AI attribution](../../../includes/ai-generated-attribution.md)]--> - ## Prerequisites * Python 3.7 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/) |
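The prerequisites above pair naturally with a short example. The following is a minimal sketch of managing a resource group with the Azure SDK for Python, assuming `azure-identity` and `azure-mgmt-resource` are installed and a subscription ID is set in the environment; the group name and location are placeholders.

```python
# Minimal sketch: create and list resource groups with the Azure SDK for Python.
# Assumes AZURE_SUBSCRIPTION_ID is set; names and locations are illustrative only.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
client = ResourceManagementClient(credential, subscription_id)

# Create (or update) a resource group.
group = client.resource_groups.create_or_update(
    "exampleGroup", {"location": "westus"}
)
print(f"Provisioned {group.name} in {group.location}")

# List resource groups in the subscription.
for rg in client.resource_groups.list():
    print(rg.name)
```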
azure-signalr | Signalr Quickstart Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-dotnet-core.md | Ready to start? ## Prerequisites -* Install the [.NET Core SDK](https://dotnet.microsoft.com/download). +* Install the latest [.NET Core SDK](https://dotnet.microsoft.com/download). * Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore). In this section, you use the [.NET Core command-line interface (CLI)](/dotnet/co 2. In the new folder, run the following command to create the project: ```dotnetcli- dotnet new mvc + dotnet new web ``` ## Add Secret Manager to the project In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app-secrets) to your project. The Secret Manager tool stores sensitive data for development work outside your project tree. This approach helps prevent the accidental sharing of app secrets in source code. -1. Open your *csproj* file. Add a `DotNetCliToolReference` element to include *Microsoft.Extensions.SecretManager.Tools*. Also add a `UserSecretsId` element as shown in the following code for *chattest.csproj*, and save the file. -- ```xml - <Project Sdk="Microsoft.NET.Sdk.Web"> -- <PropertyGroup> - <TargetFramework>netcoreapp3.1</TargetFramework> - <UserSecretsId>SignalRChatRoomEx</UserSecretsId> - </PropertyGroup> -- <ItemGroup> - <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.4" /> - <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.2" /> - </ItemGroup> -- </Project> - ``` --## Add Azure SignalR to the web app --1. Add a reference to the `Microsoft.Azure.SignalR` NuGet package by running the following command: -- ```dotnetcli - dotnet add package Microsoft.Azure.SignalR - ``` --1. Run the following command to restore packages for your project: -- ```dotnetcli - dotnet restore - ``` --1. Prepare the Secret Manager for use with this project. -- ````dotnetcli - dotnet user-secrets init - ```` +1. In the folder, init `UserSecretsId` by running the following command: + ```dotnetcli + dotnet user-secrets init + ``` 1. Add a secret named *Azure:SignalR:ConnectionString* to Secret Manager. In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app- Secret Manager will be used only for testing the web app while it's hosted locally. In a later tutorial, you'll deploy the chat web app to Azure. After the web app is deployed to Azure, you'll use an application setting instead of storing the connection string with Secret Manager. This secret is accessed with the Configuration API. A colon (:) works in the configuration name with the Configuration API on all supported platforms. See [Configuration by environment](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).+ +## Add Azure SignalR to the web app -1. Open *Startup.cs* and update the `ConfigureServices` method to use Azure SignalR Service by calling the `AddSignalR()` and `AddAzureSignalR()` methods: +1. 
Add a reference to the `Microsoft.Azure.SignalR` NuGet package by running the following command: - ```csharp - public void ConfigureServices(IServiceCollection services) - { - services.AddSignalR() - .AddAzureSignalR(); - } + ```dotnetcli + dotnet add package Microsoft.Azure.SignalR ```-- Not passing a parameter to `AddAzureSignalR()` causes this code to use the default configuration key for the SignalR Service resource connection string. The default configuration key is *Azure:SignalR:ConnectionString*. --1. In *Startup.cs*, update the `Configure` method by replacing it with the following code. + +1. Open *Program.cs* and update the code to the following, it calls the `AddSignalR()` and `AddAzureSignalR()` methods to use Azure SignalR Service: ```csharp- public void Configure(IApplicationBuilder app, IWebHostEnvironment env) - { - app.UseRouting(); - app.UseFileServer(); - app.UseEndpoints(endpoints => - { - endpoints.MapHub<ChatHub>("/chat"); - }); - } + var builder = WebApplication.CreateBuilder(args); + builder.Services.AddSignalR().AddAzureSignalR(); + var app = builder.Build(); + + app.UseDefaultFiles(); + app.UseRouting(); + app.UseStaticFiles(); + app.MapHub<ChatSampleHub>("/chat"); + app.Run(); ``` + Not passing a parameter to `AddAzureSignalR()` means it uses the default configuration key for the SignalR Service resource connection string. The default configuration key is *Azure:SignalR:ConnectionString*. It also uses `ChatHub` which we will create in the below section. + ### Add a hub class In SignalR, a *hub* is a core component that exposes a set of methods that can be called by the client. In this section, you define a hub class with two methods: -* `Broadcast`: This method broadcasts a message to all clients. +* `BroadcastMessage`: This method broadcasts a message to all clients. * `Echo`: This method sends a message back to the caller. Both methods use the `Clients` interface that the ASP.NET Core SignalR SDK provides. This interface gives you access to all connected clients, so you can push content to your clients. 1. In your project directory, add a new folder named *Hub*. Add a new hub code file named *ChatHub.cs* to the new folder. -2. Add the following code to *ChatHub.cs* to define your hub class and save the file. -- Update the namespace for this class if you used a project name that differs from *SignalR.Mvc*. +2. Add the following code to *ChatSampleHub.cs* to define your hub class and save the file. ```csharp using Microsoft.AspNetCore.SignalR;- using System.Threading.Tasks; - - namespace SignalR.Mvc ++ public class ChatSampleHub : Hub {- public class ChatHub : Hub - { - public Task BroadcastMessage(string name, string message) => - Clients.All.SendAsync("broadcastMessage", name, message); + public Task BroadcastMessage(string name, string message) => + Clients.All.SendAsync("broadcastMessage", name, message); - public Task Echo(string name, string message) => - Clients.Client(Context.ConnectionId) - .SendAsync("echo", name, $"{message} (echo from server)"); - } + public Task Echo(string name, string message) => + Clients.Client(Context.ConnectionId) + .SendAsync("echo", name, $"{message} (echo from server)"); } ``` The client user interface for this chat room app will consist of HTML and JavaSc Copy the *css/site.css* file from the *wwwroot* folder of the [samples repository](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/ChatRoom/wwwroot). Replace your project's *css/site.css* with the one you copied. 
-Create a new file in the *wwwroot* directory named *https://docsupdatetracker.net/index.html*, copy, and paste the following HTML into the newly created file. +Create a new file in the *wwwroot* directory named *https://docsupdatetracker.net/index.html*, copy and paste the following HTML into the newly created file. ```html <!DOCTYPE html> Create a new file in the *wwwroot* directory named *https://docsupdatetracker.net/index.html*, copy, and paste </div> <!--Reference the SignalR library. -->- <script src="https://cdn.jsdelivr.net/npm/@microsoft/signalr@3.1.8/dist/browser/signalr.min.js"></script> -+ <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/6.0.1/signalr.js"></script> + <!--Add script to update the page and send messages.--> <script type="text/javascript"> document.addEventListener('DOMContentLoaded', function () { If the connection is successful, that connection is passed to `bindConnectionMes `HubConnection.start()` starts communication with the hub. Then, `onConnected()` adds the button event handlers. These handlers use the connection to allow this client to push content updates to all connected clients. -## Add a development runtime profile --In this section, you'll add a development runtime environment for ASP.NET Core. For more information, see [Work with multiple environments in ASP.NET Core](/aspnet/core/fundamentals/environments). --1. Create a folder named *Properties* in your project. --2. Add a new file named *launchSettings.json* to the folder, with the following content, and save the file. -- ```json - { - "profiles" : { - "ChatRoom": { - "commandName": "Project", - "launchBrowser": true, - "environmentVariables": { - "ASPNETCORE_ENVIRONMENT": "Development" - }, - "applicationUrl": "http://localhost:5000/" - } - } - } - ``` - ## Build and run the app locally--1. To build the app by using the .NET Core CLI, run the following command in the command shell: -- ```dotnetcli - dotnet build - ``` --1. After the build successfully finishes, run the following command to run the web app locally: +1. Run the following command to run the web app locally: ```dotnetcli dotnet run ``` - The app will be hosted locally on port 5000, as configured in our development runtime profile: -+ The app will be hosted locally with output containing the localhost URL, for example, as the following: ```output- info: Microsoft.Hosting.Lifetime[0] - Now listening on: https://localhost:5001 - info: Microsoft.Hosting.Lifetime[0] + Building... + info: Microsoft.Hosting.Lifetime[14] Now listening on: http://localhost:5000 info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. info: Microsoft.Hosting.Lifetime[0] Hosting environment: Development- info: Microsoft.Hosting.Lifetime[0] - Content root path: E:\Testing\chattest ``` -1. Open two browser windows. In each browser, go to `http://localhost:5000`. You're prompted to enter your name. Enter a client name for both clients and test pushing message content between both clients by using the **Send** button. +1. Open two browser windows. In each browser, go to the localhost URL shown in the output window, for example, http://localhost:5000/ as the above output window shows. You're prompted to enter your name. Enter a client name for both clients and test pushing message content between both clients by using the **Send** button.  
If you'll continue to the next tutorial, you can keep the resources created in t If you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges. > [!IMPORTANT]-> Deleting a resource group is irreversible and includes all the resources in that group. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the resources this sample in an existing resource group that contains resources you want to keep, you can delete each resource individually from its blade instead of deleting the resource group. +> Deleting a resource group is irreversible and includes all the resources in that group. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the resources in this sample in an existing resource group that contains resources you want to keep, you can delete each resource individually from its blade instead of deleting the resource group. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide. ## Next steps -In this quickstart, you created a new Azure SignalR Service resource. You then used it with an ASP.NET Core web app to push content updates in real time to multiple connected clients. To learn more about using Azure SignalR Service, continue to the tutorial that demonstrates authentication. +In this quickstart, you created a new Azure SignalR Service resource. You then used it with an ASP.NET Core web app to push content updates in real-time to multiple connected clients. To learn more about using Azure SignalR Service, continue to the tutorial that demonstrates authentication. > [!div class="nextstepaction"] > [Azure SignalR Service authentication](./signalr-concept-authenticate-oauth.md) |
azure-vmware | Deploy Vsan Stretched Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md | Title: Deploy vSAN stretched clusters description: Learn how to deploy vSAN stretched clusters. Previously updated : 06/23/2023 Last updated : 06/24/2023 It's important to understand that stretched cluster private clouds only offer an - If the secondary site partitioning progressed into the failure of the primary site instead, or resulted in a complete partitioning, vSphere HA would attempt to restart the workload VMs on the secondary site. If vSphere HA attempted to restart the workload VMs on the secondary site, it would put the workload VMs in an unsteady state. - The following diagram shows the preferred site failure or complete partitioning scenario. + The following diagrams show the preferred site failure and complete network partitioning scenarios. :::image type="content" source="media/stretch-clusters/diagram-3-restart-workload-secondary-site.png" alt-text="Diagram shows vSphere high availability trying to restart the workload virtual machines on the secondary site when preferred site failure occurs." border="false" lightbox="media/stretch-clusters/diagram-3-restart-workload-secondary-site.png"::: + :::image type="content" source="media/stretch-clusters/diagram-4-restart-workload-secondary-site.png" alt-text="Diagram shows vSphere high availability trying to restart the workload virtual machines on the secondary site when complete network isolation occurs." border="false" lightbox="media/stretch-clusters/diagram-4-restart-workload-secondary-site.png"::: + These types of failures, although rare, fall outside the scope of the protection offered by a stretched cluster private cloud. Because of these rare failure types, a stretched cluster solution should be regarded as a multi-AZ high availability solution reliant upon vSphere HA. It's important you understand that a stretched cluster solution isn't meant to replace a comprehensive multi-region Disaster Recovery strategy that can be employed to ensure application availability. The reason is that a Disaster Recovery solution typically has separate management and control planes in separate Azure regions. Azure VMware Solution stretched clusters have a single management and control plane stretched across two availability zones within the same Azure region. For example, one vCenter Server, one NSX-T Manager cluster, and one NSX-T Data Center Edge VM pair. ## Stretched clusters region availability |
baremetal-infrastructure | Supported Instances And Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md | NC2 on Azure supports the following regions using AN36P: * Australia East * UK South * West Europe+* Germany West Central ## Next steps |
cognitive-services | Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/developer-guide.md | The conversation analysis authoring API enables you to author custom models and * [Conversational language understanding](../conversational-language-understanding/quickstart.md?pivots=rest-api) * [Orchestration workflow](../orchestration-workflow/quickstart.md?pivots=rest-api) -As you use this API in your application, see the [reference documentation](/rest/api/language/2022-05-01/conversational-analysis-authoring) for additional information. +As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/conversational-analysis-authoring) for additional information. ### Conversation analysis runtime API It additionally enables you to use the following features, without creating any * [Conversation summarization](../summarization/quickstart.md?pivots=rest-api&tabs=conversation-summarization) * [Personally Identifiable Information (PII) detection for conversations](../personally-identifiable-information/how-to-call-for-conversations.md?tabs=rest-api#examples) -As you use this API in your application, see the [reference documentation](/rest/api/language/2022-05-01/conversation-analysis-runtime) for additional information. +As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/conversation-analysis-runtime) for additional information. ### Text analysis authoring API The text analysis authoring API enables you to author custom models and create/m * [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md?pivots=rest-api) * [Custom text classification](../custom-text-classification/quickstart.md?pivots=rest-api) -As you use this API in your application, see the [reference documentation](/rest/api/language/2022-05-01/text-analysis-authoring) for additional information. +As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/text-analysis-authoring) for additional information. ### Text analysis runtime API It additionally enables you to use the following features, without creating any * [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api) * [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api) -As you use this API in your application, see the [reference documentation](/rest/api/language/2022-05-01/text-analysis-runtime/analyze-text) for additional information. +As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/text-analysis-runtime/analyze-text) for additional information. ### Question answering APIs |
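Beyond the REST reference links updated above, the same runtime features can also be called from Python through the client library. The following is a minimal sketch of one such call (sentiment analysis), assuming an existing Language resource; the endpoint and key shown are placeholders.

```python
# Sketch only: call one Language runtime feature (sentiment analysis) from Python.
# Requires: pip install azure-ai-textanalytics ; endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com/"
key = "<your-key>"

client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = ["The rooms were clean, but check-in took far too long."]
result = client.analyze_sentiment(documents)

for doc in result:
    if not doc.is_error:
        print(doc.sentiment, doc.confidence_scores)
```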
cognitive-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md | A user that should only be validating and reviewing the Language apps, typically :::column-end::: :::column span=""::: All GET APIs under: - * [Language authoring conversational language understanding APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring) - * [Language authoring text analysis APIs](/rest/api/language/2022-05-01/text-analysis-authoring) + * [Language authoring conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring) + * [Language authoring text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring) * [Question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) Only `TriggerExportProjectJob` POST operation under: - * [Language authoring conversational language understanding export API](/rest/api/language/2022-05-01/text-analysis-authoring/export) - * [Language authoring text analysis export API](/rest/api/language/2022-05-01/text-analysis-authoring/export) + * [Language authoring conversational language understanding export API](/rest/api/language/2023-04-01/text-analysis-authoring/export) + * [Language authoring text analysis export API](/rest/api/language/2023-04-01/text-analysis-authoring/export) Only Export POST operation under: * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export) All the Batch Testing Web APIs- *[Language Runtime CLU APIs](/rest/api/language/2022-05-01/conversation-analysis-runtime) - *[Language Runtime Text Analysis APIs](/rest/api/language/2022-05-01/text-analysis-runtime/analyze-text) + *[Language Runtime CLU APIs](/rest/api/language/2023-04-01/conversation-analysis-runtime) + *[Language Runtime Text Analysis APIs](/rest/api/language/2023-04-01/text-analysis-runtime/analyze-text) :::column-end::: :::row-end::: A user that is responsible for building and modifying an application, as a colla :::column span=""::: * All APIs under Language reader * All POST, PUT and PATCH APIs under:- * [Language conversational language understanding APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring) - * [Language text analysis APIs](/rest/api/language/2022-05-01/text-analysis-authoring) + * [Language conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring) + * [Language text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring) * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) Except for * Delete deployment These users are the gatekeepers for the Language applications in production envi :::column-end::: :::column span=""::: All APIs available under:- * [Language authoring conversational language understanding APIs](/rest/api/language/2022-05-01/conversational-analysis-authoring) - * [Language authoring text analysis APIs](/rest/api/language/2022-05-01/text-analysis-authoring) + * [Language authoring conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring) + * [Language authoring text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring) * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) :::column-end::: |
cognitive-services | Use Asynchronously | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md | When you send asynchronous requests, you will incur charges based on number of t ## Submit an asynchronous job using the REST API -To submit an asynchronous job, review the [reference documentation](/rest/api/language/2022-05-01/text-analysis-runtime/submit-job) for the JSON body you'll send in your request. +To submit an asynchronous job, review the [reference documentation](/rest/api/language/2023-04-01/text-analysis-runtime/submit-job) for the JSON body you'll send in your request. 1. Add your documents to the `analysisInput` object. 1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisLROTask` object. 1. You can optionally: A successful call will return a 202 response code. The `operation-location` in t GET {Endpoint}/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-05-01 ``` -To [get the status and retrieve the results](/rest/api/language/2022-05-01/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call. +To [get the status and retrieve the results](/rest/api/language/2023-04-01/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call. ## Send asynchronous API requests using the client library |
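The submit-and-poll flow described in this entry can be sketched briefly in Python with `requests`; the endpoint, key, and `api-version` below are assumptions, so substitute the values your resource supports.

```python
# Sketch of the asynchronous submit-and-poll flow: POST the job, read the
# `operation-location` header, then GET that URL until the job finishes.
# Endpoint, key, and api-version are placeholders.
import time

import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
key = "<your-key>"
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}

body = {
    "displayName": "example async job",
    "analysisInput": {
        "documents": [{"id": "1", "language": "en", "text": "Great service!"}]
    },
    "tasks": [{"kind": "SentimentAnalysisLROTask", "taskName": "sentiment"}],
}

# Submit the job; a 202 response carries the polling URL in `operation-location`.
submit = requests.post(
    f"{endpoint}/language/analyze-text/jobs?api-version=2023-04-01",
    headers=headers,
    json=body,
)
submit.raise_for_status()
job_url = submit.headers["operation-location"]

# Poll until the job succeeds or fails.
while True:
    status = requests.get(job_url, headers=headers).json()
    if status["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)

print(status)
```

The URL returned in `operation-location` is the only one that needs to be polled; the final response body contains the task results.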
cosmos-db | Materialized Views | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/materialized-views.md | There are a few limitations with the Cosmos DB NoSQL API Materialized View Featu - Point-in-time restore, hierarchical partitioning, and end-to-end encryption aren't supported on source containers that have materialized views associated with them. - Role-based access control is currently not supported for materialized views. - Cross-tenant customer-managed-key (CMK) encryption isn't supported on materialized views.-- This feature can't be enabled along with Partition Merge feature or Analytical Store +- Currently, this feature can't be enabled along with the Partition Merge feature, Analytical Store, or Continuous Backup mode. In addition to the above limitations, consider the following extra limitations: |
data-manager-for-agri | Concepts Byol And Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-byol-and-credentials.md | + + Title: Storing your license keys in Azure Data Manager for Agriculture +description: Provides information on using third-party keys ++++ Last updated : 06/23/2023++++# Store and use your license keys ++Azure Data Manager for Agriculture supports a range of data ingress connectors to centralize your fragmented accounts. These connections require the customer to populate their credentials in a Bring Your Own License (BYOL) model, so that the data manager may retrieve data on behalf of the customer. +++> [!NOTE] +> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager). ++## Prerequisites ++To access Azure Key Vault, you need an Azure subscription. If you don't already have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. +++## Overview ++In the BYOL model, you're responsible for providing your own licenses for the satellite imagery and weather connectors. In the vault reference model, you store your credentials as a secret in a customer-managed Azure Key Vault. The URI of the secret must be shared and read permissions granted to Azure Data Manager for Agriculture so that the APIs can work seamlessly. This process is a one-time setup for each connector. Our Data Manager then refers to and reads the secret from the customer's key vault as part of the API call with no exposure of the secret. ++Flow diagram showing creation and sharing of credentials. ++The steps to use Azure Key Vault in Data Manager for Agriculture are as follows: ++## Step 1: Create Key Vault +Customers can create a key vault or use an existing key vault to share license credentials for satellite (Sentinel Hub) and weather (IBM Weather). The customer [creates an Azure Key Vault](/azure/key-vault/general/quick-create-portal) or reuses an existing key vault. The following properties are recommended: +++Data Manager for Agriculture is a Microsoft trusted service and supports private network key vaults in addition to publicly available key vaults. If you put your key vault behind a VNET, then you need to select the `"Allow trusted Microsoft services to bypass this firewall"` option. +++## Step 2: Store secret in Azure Key Vault +For sharing your satellite or weather service credentials, store client secrets in a key vault, for example `ClientSecret` for `SatelliteSentinelHub` and `APIKey` for `WeatherIBM`. Customers are in control of secret name and rotation. ++Refer to [this guidance](/azure/key-vault/secrets/quick-create-portal#add-a-secret-to-key-vault) to store and retrieve your secret from the vault. +++## Step 3: Enable system identity +As a customer, you have to enable system identity for your Data Manager for Agriculture instance. There are two options: + +1. 
Via UI ++ :::image type="content" source="./media/concepts-byol-and-credentials/enable-system-via-ui.png" alt-text="Screenshot showing usage of UI to enable key."::: ++2. Via Azure Resource Manager client ++ ```cmd + armclient patch /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AgFoodPlatform/farmBeats/{ADMA_instance_name}?api-version=2023-04-01-preview "{identity: { type: 'systemAssigned' }}" + ``` ++## Step 4: Access policy +Add an access policy in the key vault for your Data Manager for Agriculture instance. + +1. Go to the access policies tab in the created key vault. ++ :::image type="content" source="./media/concepts-byol-and-credentials/select-access-policies.png" alt-text="Screenshot showing selection of access policy."::: ++2. Choose Secret GET and LIST permissions. ++ :::image type="content" source="./media/concepts-byol-and-credentials/select-permissions.png" alt-text="Screenshot showing selection of permissions."::: ++3. Select the next tab, select the Data Manager for Agriculture instance name, and then select the review + create tab to create the access policy. ++ :::image type="content" source="./media/concepts-byol-and-credentials/access-policy-creation.png" alt-text="Screenshot showing selection create and review tab."::: ++## Step 5: Invoke control plane API call +Use the [API call](/rest/api/data-manager-for-agri/controlplane-version2021-09-01-preview/farm-beats-models/create-or-update?tabs=HTTP) to specify credentials. The key vault URI, key name, and key version can be found after creating the secret, as shown in the following figure. +++Flow diagram showing how Azure Data Manager for Agriculture accesses the secret. ++If you disable and then re-enable system identity, then you have to delete the access policy in the key vault and add it again. ++## Conclusion +You can use your license keys safely by storing your secrets in Azure Key Vault, enabling system identity, and providing read access to our Data Manager. ISV solutions available with our Data Manager also use these credentials. ++You can use our data plane APIs and reference license keys in your key vault. You can also choose to override default license credentials dynamically in our data plane API calls. Our Data Manager does basic validations, including checking whether it can access the secret specified in the credentials object. ++## Next steps ++* Test our APIs [here](/rest/api/data-manager-for-agri). |
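Step 2 above (storing the connector credential as a Key Vault secret) can also be done programmatically. The following is a minimal sketch using the Azure Key Vault secrets library for Python; the vault URL and secret name are illustrative, and the secret identifier it prints is what you would reference in the control plane API call.

```python
# Sketch of Step 2: store a connector license key as a Key Vault secret and print
# the identifier (vault URI / name / version) to reference in the control plane call.
# Requires: pip install azure-identity azure-keyvault-secrets ; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://<your-key-vault-name>.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

# Store the Sentinel Hub client secret; rotate it on your own schedule.
secret = client.set_secret("SatelliteSentinelHubClientSecret", "<client-secret-value>")

# The secret identifier encodes the vault URI, secret name, and version.
print(secret.id)
print(secret.properties.version)
```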
data-manager-for-agri | Concepts Ingest Sensor Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-sensor-data.md | The following diagram depicts the topology of a sensor in Azure Data Manager for ## Next steps -How to [get started as a customer](./how-to-set-up-sensors-customer.md) to consume sensor data from the supported sensor partners. +How to [get started with pushing and consuming sensor data](./how-to-set-up-sensor-as-customer-and-partner.md). ++How to [get started as a customer](./how-to-set-up-sensors-customer.md) to consume sensor data from a supported sensor partner like Davis Instruments. How to [get started as a sensor partner](./how-to-set-up-sensors-partner.md) to push sensor data into Data Manager for Agriculture Service. |
data-manager-for-agri | How To Set Up Sensor As Customer And Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensor-as-customer-and-partner.md | + + Title: Push and consume sensor data in Data Manager for Agriculture +description: Learn how to push sensor data as a provider and egress it as a customer ++++ Last updated : 06/19/2023+++# Sensor Integration as both partner and customer in Azure Data Manager for Agriculture ++Follow the below steps to register as a sensor partner so that you can start pushing your data into your Data Manager for Agriculture instance. ++## Step 1: Enable sensor integration ++1. Sensor integration should be enabled before it can be initiated. This step provisions required internal Azure resources for sensor integration for Data Manager for Agriculture instance. This can be done by running following <a href="https://github.com/projectkudu/ARMClient" target=" blank">armclient</a> command. ++```armclient +armclient patch /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<datamanager-instance-name>?api-version=2023-04-01-preview "{properties:{sensorIntegration:{enabled:'true'}}}" +``` ++Sample output: ++```json +{ + "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<datamanager-instance-name>", + "type": "Microsoft.AgFoodPlatform/farmBeats", + "sku": { + "name": "A0" + }, + "systemData": { + "createdBy": "<customer-id>", + "createdByType": "User", + "createdAt": "2022-03-11T03:36:32Z", + "lastModifiedBy": "<customer-id>", + "lastModifiedByType": "User", + "lastModifiedAt": "2022-03-11T03:40:06Z" + }, + "properties": { + "instanceUri": "https://<datamanager-instance-name>.farmbeats.azure.net/", + "provisioningState": "Succeeded", + "sensorIntegration": { + "enabled": "True", + "provisioningState": "**Creating**" + }, + "publicNetworkAccess": "Enabled" + }, + "location": "eastus", + "name": "myfarmbeats" +} +``` ++2. The above job might take a few minutes to complete. To know the status of job, the following armclient command should be run: ++```armclient +armclient get /subscriptions/<subscription-id>/resourceGroups/<resource-group-name> /providers/Microsoft.AgFoodPlatform/farmBeats/<datamanager-instance-name>?api-version=2023-04-01-preview +``` ++3. To verify whether it's completed, look at the highlighted attribute. It should be updated as ΓÇ£SucceededΓÇ¥ from ΓÇ£CreatingΓÇ¥ in the earlier step. The attribute that indicates that the sensor integration is enabled is indicated by **provisioningState inside the sensorIntegration object**. 
++Sample output: +```json +{ + "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<datamanager-instance-name>", + "type": "Microsoft.AgFoodPlatform/farmBeats", + "sku": { + "name": "A0" + }, + "systemData": { + "createdBy": "<customer-id>", + "createdByType": "User", + "createdAt": "2022-03-11T03:36:32Z", + "lastModifiedBy": "<customer-id>", + "lastModifiedByType": "User", + "lastModifiedAt": "2022-03-11T03:40:06Z" + }, + "properties": { + "instanceUri": "https://<customer-host-name>.farmbeats.azure.net/", + "provisioningState": "Succeeded", + "sensorIntegration": { + "enabled": "True", + "provisioningState": "**Succeeded**" + }, + "publicNetworkAccess": "Enabled" + }, + "tags": { + "usage": "<sensor-partner-id>" + }, + "location": "eastus", + "name": "<customer-id>" +} +``` +Once the provisioning status for sensor integration is completed, sensor integration objects can be created. ++## Step 2: Create sensor partner integration +Create sensor partner integration step should be executed to connect customer with provider. +The integrationId is later used in sensor creation. ++API documentation: [Sensor Partner Integrations - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/sensor-partner-integrations/create-or-update) ++## Step 3: Create sensor data model +Use sensor data model to define the model of telemetry being sent. All the telemetry sent by the sensor is validated as per this data model. ++API documentation: [Sensor Data Models - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/sensor-data-models/create-or-update) ++Sample telemetry +```json +{ + "pressure": 30.45, + "temperature": 28, + "name": "sensor-1" +} +``` ++Corresponding sensor data model +```json +{ + "type": "Sensor", + "manufacturer": "Some sensor manufacturer", + "productCode": "soil m", + "measures": { + "pressure": { + "description": "measures soil moisture", + "dataType": "Double", + "type": "sm", + "unit": "Bar", + "properties": { + "abc": "def", + "elevation": 5 + } + }, + "temperature": { + "description": "measures soil temperature", + "dataType": "Long", + "type": "sm", + "unit": "Celsius", + "properties": { + "abc": "def", + "elevation": 5 + } + }, + "name": { + "description": "Sensor name", + "dataType": "String", + "type": "sm", + "unit": "none", + "properties": { + "abc": "def", + "elevation": 5 + } + } + }, + "sensorPartnerId": "sensor-partner-1", + "id": "sdm124", + "status": "new", + "createdDateTime": "2022-01-24T06:12:15Z", + "modifiedDateTime": "2022-01-24T06:12:15Z", + "eTag": "040158a0-0000-0700-0000-61ee433f0000", + "name": "my sdm for soil moisture", + "description": "description goes here", + "properties": { + "key1": "value1", + "key2": 123.45 + } +} +``` ++## Step 4: Create sensor +Create sensor using the corresponding integration ID and sensor data model ID. DeviceId and HardwareId are optional parameters, if needed, you can use the [Devices - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/devices/create-or-update?tabs=HTTP) to create the device. ++API documentation: [Sensors - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/sensors/create-or-update?tabs=HTTP) ++## Step 5: Get IoTHub connection string +Get IoTHub connection string to push sensor telemetry to the platform for the Sensor created. 
++API Documentation: [Sensors - Get Connection String](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/sensors/get-connection-string?tabs=HTTP) ++## Step 6: Push data using IoT Hub +Use [IoT Hub Device SDKs](/azure/iot-hub/iot-hub-devguide-sdks#azure-iot-hub-device-sdks) to push the telemetry using the connection string. ++For all sensor telemetry events, "timestamp" is a mandatory property and has to be in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ). ++You're now all set to start pushing sensor data for all sensors using the respective connection string provided for each sensor. However, sensor data should be sent in a JSON format as defined by Data Manager for Agriculture. Refer to the telemetry schema that follows: ++```json +{ + "timestamp": "2022-02-11T03:15:00Z", + "bar": 30.181, + "bar_absolute": 29.748, + "bar_trend": 0, + "et_day": 0.081, + "humidity": 55, + "rain_15_min": 0, + "rain_60_min": 0, + "rain_24_hr": 0, + "rain_day": 0, + "rain_rate": 0, + "rain_storm": 0, + "solar_rad": 0, + "temp_out": 58.8, + "uv_index": 0, + "wind_dir": 131, + "wind_dir_of_gust_10_min": 134, + "wind_gust_10_min": 0, + "wind_speed": 0, + "wind_speed_2_min": 0, + "wind_speed_10_min": 0 +} +``` ++## Next steps ++* Test our APIs [here](/rest/api/data-manager-for-agri). |
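Step 6 above leaves the choice of device SDK open; as one possibility, the following is a minimal sketch that pushes a single telemetry message with the Python flavor of the IoT Hub device SDKs, assuming the connection string returned by the get-connection-string API and a payload shaped like the sample data model.

```python
# Sketch of Step 6: push one telemetry message over the sensor's IoT Hub connection
# string using the Python device SDK (pip install azure-iot-device).
# The connection string is a placeholder; payload fields must match your data model.
import json
from datetime import datetime, timezone

from azure.iot.device import IoTHubDeviceClient, Message

connection_string = "<sensor-connection-string>"
client = IoTHubDeviceClient.create_from_connection_string(connection_string)

payload = {
    # "timestamp" is mandatory and must be ISO 8601 (YYYY-MM-DDTHH:MM:SSZ).
    "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "pressure": 30.45,
    "temperature": 28,
    "name": "sensor-1",
}

message = Message(json.dumps(payload))
message.content_type = "application/json"
message.content_encoding = "utf-8"

client.connect()
client.send_message(message)
client.shutdown()
```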
data-manager-for-agri | How To Set Up Sensors Customer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-customer.md | Follow the steps to integrate with a sensor partner to enable the partner to sta ## Step 1: Identify the sensor partner app and provide consent -Each sensor partner has their own multi-tenant Azure Active Directory app created and published on the Data Manager for Agriculture platform. The sensor partner supported by default on the platform is Davis Instruments(sensorPartnerId: `DavisInstruments`). However, you're free to add your own sensors by being a sensor partner yourself. Follow [these steps](./how-to-set-up-sensors-partner.md) to sign up being a sensor partner on the platform. +Each sensor partner has their own multi-tenant Azure Active Directory app created and published on the Data Manager for Agriculture platform. The sensor partner supported by default on the platform is Davis Instruments (sensorPartnerId: `DavisInstruments`). To start using the onboarded sensor partners, you need to give consent to the sensor partner so that they start showing up in `App Registrations`. Here are the steps to follow: |
data-manager-for-agri | How To Set Up Sensors Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-partner.md | -The below section of this document talks about the onboarding steps needed to integrate with Data Manager for Agriculture, the APIs used to create models & sensors, telemetry format to push the data and finally the IOTHub based data ingestion. +This document talks about the onboarding steps that a partner needs to take to integrate with Data Manager for Agriculture. It presents an overview of the APIs used to create models & list sensors, the telemetry format used to push data, and finally the IoT Hub based data ingestion. ## Onboarding From the above figure, the blocks highlighted in white are the steps taken by a ## Partner flow: Phase 1 -Below are the set of steps that a partner will be required to do for integrating with Data Manager for Agriculture. This is a one-time integration. At the end of phase 1, partners will have established their identity in Data Manager for Agriculture. +Here's the set of steps that a partner needs to follow to integrate with Data Manager for Agriculture. This is a one-time integration. At the end of phase 1, partners establish their identity in Data Manager for Agriculture. ### App creation -Partners need to be authenticated and authorized to access the Data Manager for Agriculture customers' data plane APIs. Access to these APIs enables the partners to create sensor models, sensors & device objects within the customers' Data Manager for Agriculture instance. The sensor object information (created by partner) is what will be used by Data Manager for Agriculture to create respective devices (sensors) in IOTHub. +Partners need to be authenticated and authorized to access the Data Manager for Agriculture customers' data plane APIs. Access to these APIs enables the partners to create sensor models, sensors & device objects within the customers' Data Manager for Agriculture instance. The sensor object information (created by the partner) is what is used by Data Manager for Agriculture to create the respective devices (sensors) in IoT Hub. -Hence to enable authentication & authorization, partners will need to do the following +Hence, to enable authentication & authorization, partners need to do the following: 1. **Create an Azure account** (If you don't have one already created.)-2. **Create a multi-tenant Azure Active Directory app** - The multi-tenant Azure Active Directory app as the name signifies, it will have access to multiple customers' tenants, provided that the customers have given explicit consent to the partner app (explained in the role assignment step below). +2. **Create a multi-tenant Azure Active Directory app** - The multi-tenant Azure Active Directory app, as the name signifies, has access to multiple customers' tenants, provided the customers have given explicit consent to the partner app (explained in the role assignment step). -Partners can access the APIs in customer tenant using the multi-tenant Azure Active Directory App, registered in Azure Active Directory. App registration is done on the Azure portal so the Microsoft identity platform can provide authentication and authorization services for your application which in turn accesses Data Manager for Agriculture. +Partners can access the APIs in the customer tenant using the multi-tenant Azure Active Directory App, registered in Azure Active Directory. 
App registration is done on the Azure portal so the Microsoft identity platform can provide authentication and authorization services for your application that in turn accesses Data Manager for Agriculture. Follow the steps provided in [App Registration](/azure/active-directory/develop/quickstart-register-app#register-an-application) **until the Step 8** to generate the following information: Follow the steps provided in [Add a client secret](/azure/active-directory/devel ### Registration -Once the partner has created a multi-tenant Azure Active Directory app successfully, partners will be manually sharing the APP ID and Partner ID with Data Manager for Agriculture through email madma@microsoft.com alias. Using this information Data Manager for Agriculture will validate if it's an authentic partner and create a partner identity (sensorPartnerId) using the internal APIs. As part of the registration process, partners will be enabled to use their partner ID (sensorPartnerId) while creating the sensor/devices object and also as part of the sensor data telemetry that they would be pushing. +Once the partner has created a multi-tenant Azure Active Directory app successfully, partners manually share the APP ID and Partner ID with Data Manager for Agriculture by emailing the madma@microsoft.com alias. Using this information, Data Manager for Agriculture validates that it's an authentic partner and creates a partner identity (sensorPartnerId) using the internal APIs. As part of the registration process, partners are enabled to use their partner ID (sensorPartnerId) while creating the sensor/devices object and also as part of the sensor data that they push. -Getting the partner ID marks the completion of partner-Data Manager for Agriculture integration. Now, the partner will wait for input from any of their sensor customers to initiate their data ingestion into Data Manager for Agriculture. +Getting the partner ID marks the completion of partner-Data Manager for Agriculture integration. Now, the partner waits for input from any of their sensor customers to initiate their data ingestion into Data Manager for Agriculture. ## Customer flow -Customers using Data Manager for Agriculture will now know all the supported Data Manager for Agriculture sensor partners and their respective APP IDs. This information will be available in the public documentation for all our customers. -Based on the sensors that customers use and their respective sensor partner's APP ID, the customer has to provide access to the partner (APP ID) to start pushing their sensor data into their Data Manager for Agriculture instance. This can be done using the following steps: +Customers using Data Manager for Agriculture will be aware of all the supported sensor partners and their respective APP IDs. This information is available in the public documentation for all our customers. +Based on the sensors that customers use and their respective sensor partner's APP ID, the customer has to provide access to the partner (APP ID) to start pushing their sensor data into their Data Manager for Agriculture instance. Here are the required steps: ### Role assignment -Customers who choose to onboard to a specific partner will know the app ID of that specific partner. Now using the app ID customer will need to do the following things in sequence. +Customers who choose to onboard to a specific partner should have the app ID of that specific partner. Using the app ID, the customer needs to do the following things in sequence. 1. 
**Consent** – Since the partner's app resides in a different tenant and the customer wants the partner to access certain APIs in their Data Manager for Agriculture instance, the customers are required to call a specific endpoint `https://login.microsoftonline.com/common/adminconsent?client_id=[client_id]` and replace the [client_id] with the partner's app ID. This enables the customer's Azure Active Directory to recognize this APP ID whenever they use it for role assignment. -2. **Identity Access Management (IAM)** – As part of Identity access management, customers will create a new role assignment to the above app ID which was provided consent. Data Manager for Agriculture will create a new role called Sensor Partner (In addition to the existing Admin, Contributor, Reader roles). Customers will choose the sensor partner role and add the partner app ID and provide access. +2. **Identity Access Management (IAM)** – As part of identity access management, customers create a new role assignment for the above app ID, which was given consent. Data Manager for Agriculture creates a new role called Sensor Partner (in addition to the existing Admin, Contributor, and Reader roles). Customers choose the Sensor Partner role, add the partner app ID, and provide access. ### Initiation The customer has made Data Manager for Agriculture aware that they need to get sensor data from a specific partner. However, the partner doesn't yet know which customer they should send the sensor data for. Hence, as a next step, the customer calls the integration API within Data Manager for Agriculture to generate an integration link. After acquiring the integration link, customers share the following information in sequence, either manually or through the partner's portal. -1. **Consent link & Tenant ID** – In this step, the customer will provide a consent link & tenant ID. The integration link looks like shown in the example: +1. **Consent link & Tenant ID** – In this step, the customer provides a consent link & tenant ID. The integration link looks like the following example: `fb-resource-name.farmbeats.com/sensor-partners/partnerId/integrations/IntegrationId/:check-consent?key=jgtHTFGDR?api-version=2021-07-31-preview` - In addition to the consent link, customers would also provide a tenant ID. This tenant ID will be used to fetch the access token required to call into the customer's API endpoint. + In addition to the consent link, customers also provide a tenant ID. The tenant ID is used to fetch the access token required to call into the customer's API endpoint. - The partners will validate the consent link by making a GET call on the check consent link API. As the link is fully pre-populated request URI as expected by Data Manager for Agriculture. As part of the GET call, the partners will check for a 200 OK response code and IntegrationId to be passed in the response. + The partners validate the consent link by making a GET call on the check consent link API, as the link is a fully prepopulated request URI as expected by Data Manager for Agriculture. As part of the GET call, the partners check for a 200 OK response code and an IntegrationId to be passed in the response. 
- Once the valid response is received, partners will have to store two sets of information + Once the valid response is received, partners have to store two sets of information: - * API endpoint (This can be extracted from the first part of the integration link) - * IntegrationId (This will be returned as part of the response to GET call) + * API endpoint (can be extracted from the first part of the integration link) + * IntegrationId (is returned as part of the response to the GET call) - Post validating and storing these data points, partners can start allowing customers to add specific/all sensors for which they want the data to be pushed into Data Manager for Agriculture. + Once the partner validates and stores these data points, they can enable customers to add the sensors for which the data has to be pushed into Data Manager for Agriculture. -2. **Add sensors/devices** – Now, the partner knows for which customer (API endpoint) do they need to integrate with, however, they still don't know for which all sensors do they need to push the data. Hence, partners will be collecting the sensor/device information for which the data needs to be pushed. This data can be collected either manually or through portal UI. +2. **Add sensors/devices** – Now, the partner knows which customer (API endpoint) they need to integrate with. However, they still don't know which sensors they need to push data for. Hence, partners collect the sensor/device information for which the data needs to be pushed. This data can be collected either manually or through the portal UI. Post adding the sensors/devices, the customer can expect the respective sensors' data to flow into their Data Manager for Agriculture instance. This step marks the completion of customer onboarding to fetch sensor data. Partner now has the information to call a specific API endpoint (Customers' da ### Integration -As part of integration, partners will be using their own app ID, app secret & customer's tenant ID acquired during the app registration step, to generate an access token using the Microsoft's oAuth API. Below is curl command to generate the access token +As part of integration, partners use their own app ID and app secret, along with the customer's tenant ID acquired during the app registration step, to generate an access token using Microsoft's OAuth API. Here's the curl command to generate the access token: ```azurepowershell curl --location --request GET 'https://login.microsoftonline.com/<customer's tenant ID> /oauth2/v2.0/token' \ The response should look like: } ``` -Using the generated access_token, partners will call the customers' data plane endpoint to create sensor model, sensor, and device objects in that specific Data Manager for Agriculture instance using the APIs built by Data Manager for Agriculture. Refer to the [partner API documentation](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/sensor-partner-integrations) for more information on the partner APIs. +With the generated access_token, partners call the customers' data plane endpoint to create the sensor model, sensor, and device objects. These objects are created in that specific Data Manager for Agriculture instance using the APIs built by Data Manager for Agriculture. For more information on the partner APIs, refer to the [partner API documentation](/rest/api/data-manager-for-agri/dataplane-version2023-04-01-preview/sensor-partner-integrations). 
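For reference, here's a minimal Python sketch of the same token acquisition using the client credentials flow. This is a sketch under assumptions, not the documented procedure: the tenant ID, app ID, and secret placeholders are hypothetical, and the `https://farmbeats.azure.net/.default` scope is an assumption that should be verified against the partner API documentation.

```python
# Minimal sketch (assumption-based): acquire an access token with the client credentials flow.
# All <...> placeholders are hypothetical values taken from your own app registration and the
# tenant ID shared by the customer during the initiation step.
import requests

tenant_id = "<customer-tenant-id>"        # shared by the customer during initiation
client_id = "<partner-app-id>"            # partner's multi-tenant app registration
client_secret = "<partner-app-secret>"    # client secret from the app registration

token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
payload = {
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    # Assumption: the Data Manager for Agriculture data plane scope; confirm in the partner API docs.
    "scope": "https://farmbeats.azure.net/.default",
}

response = requests.post(token_url, data=payload, timeout=30)
response.raise_for_status()
access_token = response.json()["access_token"]

# The token is then sent as a bearer token on calls to the customer's data plane endpoint.
headers = {"Authorization": f"Bearer {access_token}"}
```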
-As part of the sensor creation API, the partners will be providing the sensor ID, once the sensor resource is created, partners will call into the get connection string API to get a connection string for that sensor. +As part of the sensor creation API, the partners provide the sensor ID, once the sensor resource is created, partners call into the get connection string API to get a connection string for that sensor. ### Push data -Partner is now all set to start pushing sensor data for all sensors using the respective connection string provided for each sensor. However, the partner would be sending the sensor data in a JSON format as defined by Data Manager for Agriculture. Refer to the telemetry schema below. +#### Create sensor partner integration +Create sensor partner integration to connect a particular party with a specific provider. The integrationId is later used in sensor creation. +API documentation: [Sensor Partner Integrations - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2023-04-01-preview/sensor-partner-integrations/create-or-update?tabs=HTTP) ++#### Create sensor data model +Use sensor data model to define the model of telemetry being sent. All the telemetry sent by the sensor is validated as per this data model. ++API documentation: [Sensor Data Models - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2023-04-01-preview/sensor-data-models/create-or-update?tabs=HTTP) ++Sample telemetry +```json +{ + "pressure": 30.45, + "temperature": 28, + "name": "sensor-1" +} +``` ++Corresponding sensor data model +```json +{ + "type": "Sensor", + "manufacturer": "Some sensor manufacturer", + "productCode": "soil m", + "measures": { + "pressure": { + "description": "measures soil moisture", + "dataType": "Double", + "type": "sm", + "unit": "Bar", + "properties": { + "abc": "def", + "elevation": 5 + } + }, + "temperature": { + "description": "measures soil temperature", + "dataType": "Long", + "type": "sm", + "unit": "Celsius", + "properties": { + "abc": "def", + "elevation": 5 + } + }, + "name": { + "description": "Sensor name", + "dataType": "String", + "type": "sm", + "unit": "none", + "properties": { + "abc": "def", + "elevation": 5 + } + } + }, + "sensorPartnerId": "sensor-partner-1", + "id": "sdm124", + "status": "new", + "createdDateTime": "2022-01-24T06:12:15Z", + "modifiedDateTime": "2022-01-24T06:12:15Z", + "eTag": "040158a0-0000-0700-0000-61ee433f0000", + "name": "my sdm for soil moisture", + "description": "description goes here", + "properties": { + "key1": "value1", + "key2": 123.45 + } +} +``` ++#### Create sensor +Create sensor using the corresponding integration ID and sensor data model ID. DeviceId and HardwareId are optional parameters, if needed, you can use the [Devices - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2023-04-01-preview/devices/create-or-update?tabs=HTTP) to create the device. ++API documentation: [Sensors - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2023-04-01-preview/sensors/create-or-update?tabs=HTTP) ++#### Get IoTHub connection string +Get IoTHub connection string to push sensor telemetry to the platform for the Sensor created. ++API Documentation: [Sensors - Get Connection String](/rest/api/data-manager-for-agri/dataplane-version2023-04-01-preview/sensors/get-connection-string) ++#### Push Data using IoT Hub +Use [IoT Hub Device SDKs](/azure/iot-hub/iot-hub-devguide-sdks) to push the telemetry using the connection string. 
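As an illustration, here's a minimal Python sketch that pushes one telemetry message with the `azure-iot-device` SDK, using the connection string returned by the get connection string API. The payload mirrors the sample telemetry shown earlier; the connection string placeholder and property values are illustrative assumptions, not required names.

```python
# Minimal sketch (assumption-based): send one telemetry message for a sensor
# using the connection string returned by the get connection string API.
import json
from datetime import datetime, timezone

from azure.iot.device import IoTHubDeviceClient, Message  # pip install azure-iot-device

connection_string = "<sensor-connection-string>"  # placeholder: from Sensors - Get Connection String

# Payload matching the sample sensor data model above; "timestamp" must be ISO 8601 (see note below).
payload = {
    "pressure": 30.45,
    "temperature": 28,
    "name": "sensor-1",
    "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
}

client = IoTHubDeviceClient.create_from_connection_string(connection_string)
try:
    message = Message(json.dumps(payload))
    message.content_type = "application/json"
    message.content_encoding = "utf-8"
    client.send_message(message)  # pushes the telemetry to the IoT Hub endpoint
finally:
    client.shutdown()
```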
++For all sensor telemetry events, "timestamp" is a mandatory property and has to be in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ). ++The partner is now all set to start pushing sensor data for all sensors using the respective connection string provided for each sensor. However, the partner sends the sensor data in the JSON format defined by FarmBeats. Refer to the telemetry schema provided here. ```json { Partner is now all set to start pushing sensor data for all sensors using the re } ``` -This marks the completion of the onboarding flow for partners as well. Once the data is pushed to IOTHub, the customers would be able to query sensor data using the egress API. +Once the data is pushed to IoT Hub, customers can query the sensor data using the egress API. ## Next steps |
data-manager-for-agri | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md | Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st > > Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager). +## June 2023 ++### Use your license keys via key vault +Azure Data Manager for Agriculture supports a range of data ingress connectors. These connections require customer keys in a Bring Your Own License (BYOL) model. You can use your license keys safely by storing your secrets in the Azure Key Vault, enabling system identity and providing read access to our Data Manager. Details are available [here](concepts-byol-and-credentials.md). ++### Sensor integration as both partner and customer +Now you can start pushing data from your own sensors into Data Manager for Agriculture. It's useful in case your sensor provider doesn't want to take steps to onboard their sensors or if you don't have such support from your sensor provider. Details are available [here](how-to-set-up-sensor-as-customer-and-partner.md). + ## May 2023 ### Understanding throttling Azure Data Manager for Agriculture implements API throttling to ensure consisten In Azure Data Manager for Agriculture Preview, you can monitor how and when your resources are accessed, and by whom. You can also debug reasons for failure for data-plane requests. [Audit Logs](how-to-set-up-audit-logs.md) are now available for your use. ### Private links-You can connect to Azure Data Manager for Agriculture service from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Azure Data Manager for Agriculture Preview instance over these private IP addresses. [Private Links](how-to-set-up-private-links.md) are now available for your use. +You can connect to Azure Data Manager for Agriculture service from your virtual network via a private endpoint. You can then limit access to your Azure Data Manager for Agriculture Preview instance over these private IP addresses. [Private Links](how-to-set-up-private-links.md) are now available for your use. ### BYOL for satellite imagery To support scalable ingestion of geometry-clipped imagery, we've partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. Read more about our satellite connector [here](concepts-ingest-satellite-imagery.md). |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | Title: Release notes for Microsoft Defender for Cloud description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 06/21/2023 Last updated : 06/25/2023 # What's new in Microsoft Defender for Cloud? Updates in June include: |Date |Update | |||+ June 25 | [Private Endpoint support for Malware Scanning in Defender for Storage](#private-endpoint-support-for-malware-scanning-in-defender-for-storage) | June 21 | [Recommendation released for preview: Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](#recommendation-released-for-preview-running-container-images-should-have-vulnerability-findings-resolved-powered-by-microsoft-defender-vulnerability-management) | | June 15 | [Control updates were made to the NIST 800-53 standards in regulatory compliance](#control-updates-were-made-to-the-nist-800-53-standards-in-regulatory-compliance) | |June 11 | [Planning of cloud migration with an Azure Migrate business case now includes Defender for Cloud](#planning-of-cloud-migration-with-an-azure-migrate-business-case-now-includes-defender-for-cloud) | Updates in June include: |June 5 | [Onboarding directly (without Azure Arc) to Defender for Servers is now Generally Available](#onboarding-directly-without-azure-arc-to-defender-for-servers-is-now-generally-available) | |June 4 | [Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM](#replacing-agent-based-discovery-with-agentless-discovery-for-containers-capabilities-in-defender-cspm) | +### Private Endpoint support for Malware Scanning in Defender for Storage ++June 25, 2023 ++Private Endpoint support is now available as part of the Malware Scanning public preview in Defender for Storage. This capability allows enabling Malware Scanning on storage accounts that are using private endpoints. No additional configuration is needed. ++[Malware Scanning (Preview)](defender-for-storage-malware-scan.md) in Defender for Storage helps protect your storage accounts from malicious content by performing a full malware scan on uploaded content in near real-time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements for handling untrusted content. It is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale. ++Private endpoints provide secure connectivity to your Azure Storage services, effectively eliminating public internet exposure, and are considered a security best practice. ++For storage accounts with private endpoints that have Malware Scanning already enabled, you will need to disable and [enable the plan with Malware Scanning](https://learn.microsoft.com/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription) for this to work. ++Learn more about using [private endpoints](https://learn.microsoft.com/azure/private-link/private-endpoint-overview) in [Defender for Storage](defender-for-storage-introduction.md) and how to secure your storage services further. ### Recommendation released for preview: Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) |
healthcare-apis | Events Consume Logic Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md | Title: Consume events with Logic Apps - Azure Health Data Services -description: This tutorial provides resources on how to consume events with Logic Apps. +description: Learn how to consume FHIR events with Logic Apps. Previously updated : 12/21/2022 Last updated : 06/23/2022 -# Tutorial: Consume events with Logic Apps +# Tutorial: Consume FHIR events with Logic Apps -This tutorial shows how to use Azure Logic Apps to process Azure Health Data Services Fast Healthcare Interoperability Resources (FHIR®) events. Logic Apps creates and runs automated workflows to process event data from other applications. You'll learn how to register a FHIR event with your Logic App, meet a specified event criteria, and perform a service operation. +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++This tutorial shows how to use Azure Logic Apps to process Azure Health Data Services FHIR events. Logic Apps creates and runs automated workflows to process event data from other applications. Learn how to register a FHIR event with your Logic App, meet a specified event criteria, and perform a service operation. Here's an example of a Logic App workflow: To set up an automated workflow, you must first create a Logic App. For more inf Follow these steps: 1. Go to the Azure portal.-2. Search for "Logic App". -3. Select "Add". -4. Specify Basic details. -5. Specify Hosting. -6. Specify Monitoring. -7. Specify Tags. -8. Review and create your Logic App. +2. Search for **Logic App**. +3. Select **Add**. +4. Specify **Basic details**. +5. Specify **Hosting**. +6. Specify **Monitoring**. +7. Specify **Tags**. +8. **Review and create** your Logic App. You now need to fill out the details of your Logic App. Specify information for these five categories. They are in separate tabs: Choose a plan type (Standard or Consumption). Create a new Windows Plan name and - Zone redundancy deployment -Enabling your plan will make it zone redundant. +Enabling your plan makes it zone redundant. ### Hosting - Tab 2 Enable Azure Monitor Application Insights to automatically monitor your applicat ### Tags - Tab 4 -Continue specifying your Logic App by clicking "Next: Tags". +Continue specifying your Logic App by clicking **Next: Tags**. #### Use tags to categorize resources Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. -This example won't use tagging. +This example doesn't use tagging. ### Review + create - Tab 5 -Finish specifying your Logic App by clicking "Next: Review + create". +Finish specifying your Logic App by clicking **Next: Review + create**. #### Review your Logic App -Your proposed Logic app will display the following details: +Your proposed Logic app displays the following details: - Subscription - Resource Group Your proposed Logic app will display the following details: - Plan - Monitoring -If you're satisfied with the proposed configuration, select "Create". If not, select "Previous" to go back and specify new details. +If you're satisfied with the proposed configuration, select **Create**. If not, select **Previous** to go back and specify new details. First you'll see an alert telling you that deployment is initializing. 
Next you'll see a new page telling you that the deployment is in progress. If there are no errors, you'll finally see a notification telling you that your #### Your Logic App dashboard -Azure creates a dashboard when your Logic App is complete. The dashboard will show you the status of your app. You can return to your dashboard by clicking Overview in the Logic App menu. Here's a Logic App dashboard: +Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by clicking Overview in the Logic App menu. Here's a Logic App dashboard: :::image type="content" source="media/events-logic-apps/events-logic-overview.png" alt-text="Screenshot of your Logic Apps overview dashboard." lightbox="media/events-logic-apps/events-logic-overview.png"::: Before you begin, you'll need to have a Logic App configured and running correct Once your Logic App is running, you can create and configure a workflow. To initialize a workflow, follow these steps: 1. Start at the Azure portal.-2. Select "Logic Apps" in Azure services. +2. Select **Logic Apps** in Azure services. 3. Select the Logic App you created.-4. Select "Workflows" in the Workflow menu on the left. -5. Select "Add" to add a workflow. +4. Select **Workflows** in the Workflow menu on the left. +5. Select **Add** to add a workflow. ### Configuring a new workflow Fill in the details for subscription, resource type, and resource name. Then you - Resource deleted - Resource updated -For more information about event types, see [What FHIR resource events does Events support?](events-faqs.md). +For more information about event types, see [What FHIR resource events does Events support?](events-faqs.md#what-fhir-resource-events-does-events-support). ### Adding an HTTP action When the condition is ready, you can specify what actions happen if the conditio ### Choosing a condition criteria -In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on "Condition" in the workflow on the left. You'll then see a set of condition choices on the right. +In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on **Condition** in the workflow. A set of condition choices are then displayed. -Under the "And" box, add these two conditions: +Under the **And** box, add these two conditions: - resourceType - Event Type To test your new workflow, do the following steps: 1. Add a new Patient FHIR Resource to your FHIR Service. 2. Wait a moment or two and then check the Overview webpage of your Logic App workflow. 3. The event should be shaded in green if the action was successful.-4. If it failed, the event will be shaded in red. +4. If it failed, the event is shaded in red. Here's an example of a workflow trigger success operation: Here's an example of a workflow trigger success operation: ## Next steps -In this tutorial, you learned about how to use Logic Apps to process FHIR events. +In this tutorial, you learned how to use Logic Apps to process FHIR events. 
++To learn about Events, see ++> [!div class="nextstepaction"] +> [What are Events?](events-overview.md) -To learn more about FHIR events, see: +To learn about the Events frequently asked questions (FAQs), see > [!div class="nextstepaction"]-> [What are Events?](./events-overview.md) +> [Frequently asked questions about Events](events-faqs.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Events Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md | Title: Deploy Events using the Azure portal - Azure Health Data Services -description: This article describes how to deploy the Events feature in the Azure portal. +description: Learn how to deploy the Events feature using the Azure portal. Previously updated : 10/21/2022 Last updated : 06/23/2022 # Quickstart: Deploy Events using the Azure portal -In this quickstart, you'll learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send Fast Healthcare Interoperability Resources (FHIR®) and DICOM event messages. +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++In this quickstart, learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send FHIR and DICOM event messages. ## Prerequisites It's important that you have the following prerequisites completed before you be 4. After the form is completed, select **Create** to begin the subscription creation. -5. Event messages won't be sent until the Event Grid System Topic deployment has successfully completed. Upon successful creation of the Event Grid System Topic, the status of the workspace will change from "Updating" to "Succeeded". +5. Event messages aren't sent until the Event Grid System Topic deployment has successfully completed. Upon successful creation of the Event Grid System Topic, the status of the workspace changes from **Updating** to **Succeeded**. :::image type="content" source="media/events-deploy-in-portal/events-new-subscription-create.png" alt-text="Screenshot of an events subscription being deployed" lightbox="media/events-deploy-in-portal/events-new-subscription-create.png"::: It's important that you have the following prerequisites completed before you be ## Next steps -In this article, you've learned how to deploy events in the Azure portal. For details about supported events, see [Azure Health Data Services as an Event Grid source](../../event-grid/event-schema-azure-health-data-services.md). +In this article, you learned how to deploy Events in the Azure portal. For details about supported events, see [Azure Health Data Services as an Event Grid source](../../event-grid/event-schema-azure-health-data-services.md). To learn how to enable the Events metrics, see |
healthcare-apis | Events Disable Delete Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md | Title: Disable Events and delete workspaces - Azure Health Data Services -description: This article provides resources on how to disable the Events service and delete workspaces. +description: Learn how to disable the Events service and delete workspaces. Previously updated : 10/21/2022 Last updated : 06/23/2022 # How to disable Events and delete workspaces -In this article, you'll learn how to disable the Events feature and delete workspaces in the Azure Health Data Services. +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++In this article, learn how to disable the Events feature and delete workspaces in the Azure Health Data Services. ## Disable Events To disable Events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted. -1. Select the **Event Subscription** to be deleted. In this example, we'll be selecting an Event Subscription named **fhir-events**. +1. Select the **Event Subscription** to be deleted. In this example, we select an Event Subscription named **fhir-events**. :::image type="content" source="media/disable-delete-workspaces/events-select-subscription.png" alt-text="Screenshot of Events subscriptions and select event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription.png"::: To disable Events from sending event messages for a single **Event Subscription* :::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png"::: > [!NOTE]-> The Fast Healthcare Interoperability Resources (FHIR®) service will automatically go into an **Updating** status to disable the Events extension when a full delete of Event Subscriptions is executed. The FHIR service will remain online while the operation is completing. +> The FHIR service will automatically go into an **Updating** status to disable the Events extension when a full delete of Event Subscriptions is executed. The FHIR service will remain online while the operation is completing. ## Delete workspaces -To successfully delete a workspace, delete all associated child resources first (for example: DICOM services, FHIR services and MedTech services), delete all Event Subscriptions, and then delete the workspace. Not deleting the child resources and Event Subscriptions first will cause an error when attempting to delete a workspace with child resources. +To avoid errors and successfully delete workspaces, follow these steps and in this specific order: -As an example: -- 1. Delete all workspaces associated child resources - for example: DICOM service(s), FHIR service(s), and MedTech service(s). - 2. Delete all workspaces associated Event Subscriptions. - 3. Delete workspace. +1. Delete all workspace associated child resources - for example: DICOM services, FHIR services, and MedTech services. +2. Delete all workspace associated Event Subscriptions. +3. Delete workspace. ## Next steps -For more information about troubleshooting Events, see the Events troubleshooting guide: +In this article, you learned how to disable Events and delete workspaces. 
++For more information about troubleshooting Events, see > [!div class="nextstepaction"] > [Troubleshoot Events](events-troubleshooting-guide.md) |
healthcare-apis | Events Enable Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-enable-diagnostic-settings.md | Title: Enable Events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services -description: This article provides resources on how to enable Events diagnostic settings for diagnostic logs and metrics exporting. +description: Learn how to enable Events diagnostic settings for diagnostic logs and metrics exporting. Previously updated : 10/21/2022 Last updated : 06/23/2022 # How to enable diagnostic settings for Events -In this article, you'll be provided resources to enable the Events diagnostic settings for Azure Event Grid system topics. +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -After they're enabled, Event Grid system topics diagnostic logs and metrics will be exported to the destination of your choosing for audit, analysis, troubleshooting, or backup. +In this article, learn how to enable the Events diagnostic settings for Azure Event Grid system topics. ## Resources |Description|Resource|-|-|--| +|--|--| |Learn how to enable the Event Grid system topics diagnostic logging and metrics export feature.|[Enable diagnostic logs for Event Grid system topics](../../event-grid/enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-system-topics)| |View a list of currently captured Event Grid system topics diagnostic logs.|[Event Grid system topic diagnostic logs](../../azure-monitor/essentials/resource-logs-categories.md#microsofteventgridsystemtopics)| |View a list of currently captured Event Grid system topics metrics.|[Event Grid system topic metrics](../../azure-monitor/essentials/metrics-supported.md#microsofteventgridsystemtopics)| |
healthcare-apis | Events Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md | For a detailed description of the Events message structure and both required and ## What is the throughput for the Events messages? -The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per a workspace for all FHIR or DICOM service instances in it. +The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace. ## How am I charged for using Events? You can use the Event Grid filtering feature. There are unique identifiers in th ## Can I use the same subscriber for multiple workspaces, FHIR accounts, or DICOM accounts? -Yes. We recommend that you use different subscribers for each individual FHIR or DICOM account to process in isolated scopes. +Yes. We recommend that you use different subscribers for each individual FHIR or DICOM service to process in isolated scopes. ## Is Event Grid compatible with HIPAA and HITRUST compliance obligations? |
healthcare-apis | Events Message Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-message-structure.md | Title: Events message structure - Azure Health Data Services -description: In this article, you'll learn about Events message structure and required values. +description: Learn about Events message structure and required values. Previously updated : 07/06/2022 Last updated : 06/23/2023 # Events message structure -In this article, you'll learn about the Events message structure, required and non-required elements, and you'll be provided with samples of Events message payloads. +In this article, learn about the Events message structure, required and nonrequired elements, and samples of Events message payloads. > [!IMPORTANT] > Events currently supports only the following operations: In this article, you'll learn about the Events message structure, required and n |Name|Type|Required|Description| |-|-|--|--|-|topic|string|Yes|The topic is the Azure Resource ID of your Azure Health Data Services workspace.| -|subject|string|Yes|The Uniform Resource Identifier (URI) of the FHIR resource that was changed. Customer can access the resource with the subject with https:// scheme. Customer should use the dataVersion or data.resourceVersionId to visit specific data version regarding this event.| -|eventType|string(enum)|Yes|The type of change on the FHIR resource.| -|eventTime|string(datetime)|Yes|The UTC time when the FHIR resource change committed.| -|id|string|Yes|Unique identifier for the event.| -|data|object|Yes|FHIR resource change event details.| -|data.resourceType|string(enum)|Yes|The FHIR Resource Type.| -|data.resourceFhirAccount|string|Yes|The service name of FHIR account in the Azure Health Data Services workspace.| -|data.resourceFhirId|string|Yes|The resource ID of the FHIR account. Note that this ID is randomly generated by the FHIR service of the Azure Health Data Services when a customer creates the Resource. Customer can also use customized ID in FHIR resource creation; however the ID should **not** include or infer any PHI/PII information. It should be a system metadata, not specific to any personal data content.| -|data.resourceVersionId|string(number)|Yes|The data version of the FHIR resource.| -|dataVersion|string|No|Same as ΓÇ£data.resourceVersionIdΓÇ¥.| -|metadataVersion|string|No|The schema version of the event metadata. This is defined by Azure Event Grid and should be constant most of the time.| +|`topic`|string|Yes|The topic is the Azure Resource ID of your Azure Health Data Services workspace.| +|`subject`|string|Yes|The Uniform Resource Identifier (URI) of the FHIR resource that was changed. Customer can access the resource with the subject with https:// scheme. Customer should use the dataVersion or data.resourceVersionId to visit specific data version regarding this event.| +|`eventType`|string(enum)|Yes|The type of change on the FHIR resource.| +|`eventTime`|string(datetime)|Yes|The UTC time when the FHIR resource change committed.| +|`id`|string|Yes|Unique identifier for the event.| +|`data`|object|Yes|FHIR resource change event details.| +|`data.resourceType`|string(enum)|Yes|The FHIR Resource Type.| +|`data.resourceFhirAccount`|string|Yes|The service name of FHIR account in the Azure Health Data Services workspace.| +|`data.resourceFhirId`|string|Yes|The resource ID of the FHIR account. 
This ID is randomly generated by the FHIR service of the Azure Health Data Services when a customer creates the Resource. Customer can also use customized ID in FHIR resource creation; however the ID should **not** include or infer any PHI/PII information. It should be a system metadata, not specific to any personal data content.| +|`data.resourceVersionId`|string(number)|Yes|The data version of the FHIR resource.| +|`dataVersion`|string|No|Same as `data.resourceVersionId`.| +|`metadataVersion`|string|No|The schema version of the event metadata. This is defined by Azure Event Grid and should be constant most of the time.| ## FHIR events message samples In this article, you'll learn about the Events message structure, required and n ## DICOM events message structure -|Name | Type | Required | Description -|--||-|--| +|Name | Type | Required | Description | +|--||-|-| |topic | string | Yes | The topic is the Azure Resource ID of your Azure Health Data Services workspace. |subject | string | Yes | The Uniform Resource Identifier (URI) of the DICOM image that was changed. Customer can access the image with the subject with https:// scheme. Customer should use the dataVersion or data.resourceVersionId to visit specific data version regarding this event. | eventType | string(enum) | Yes | The type of change on the DICOM image. In this article, you'll learn about the Events message structure, required and n | data.imageStudyInstanceUid | string | Yes | The image's Study Instance UID. | data.imageSeriesInstanceUid | string | Yes | The image's Series Instance UID. | data.imageSopInstanceUid | string | Yes | The image's SOP Instance UID.-| data.serviceHostName | string | Yes | The hostname of the dicom service where the change occurred. -| data.sequenceNumber | int | Yes | The sequence number of the change in the DICOM service. Every image creation and deletion will have a unique sequence within the service. This number correlates to the sequence number of the DICOM service's Change Feed. Querying the DICOM Service Change Feed with this sequence number will give you the change that created this event. +| data.serviceHostName | string | Yes | The hostname of the DICOM service where the change occurred. +| data.sequenceNumber | int | Yes | The sequence number of the change in the DICOM service. Every image creation and deletion have a unique sequence within the service. This number correlates to the sequence number of the DICOM service's Change Feed. Querying the DICOM Service Change Feed with this sequence number gives you the change that created this event. | dataVersion | string | No | The data version of the DICOM image. | metadataVersion | string | No | The schema version of the event metadata. This is defined by Azure Event Grid and should be constant most of the time. In this article, you'll learn about the Events message structure, required and n ## Next steps -For more information about deploying Events, see +To learn about deploying Events using the Azure portal, see >[!div class="nextstepaction"]->[Deploying Events in the Azure portal](./events-deploy-portal.md) +>[Deploy Events using the Azure portal](events-deploy-portal.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Events Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md | Title: What are Events? - Azure Health Data Services -description: In this article, you'll learn about Events, its features, integrations, and next steps. +description: Learn about Events, its features, integrations, and next steps. Previously updated : 07/06/2022 Last updated : 06/23/2023 # What are Events? +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. + Events are a notification and subscription feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, Internet of Medical Things (IoMT) health data, and medical imaging data. -When Fast Healthcare Interoperability Resources (FHIR®) resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the Events feature sends notification messages to Events subscribers. These event notification occurrences can be sent to multiple endpoints to trigger automation ranging from starting workflows to sending email and text messages to support the changes occurring from the health data it originated from. The Events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services Workspace. +When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the Events feature sends notification messages to Events subscribers. These event notification occurrences can be sent to multiple endpoints to trigger automation ranging from starting workflows to sending email and text messages to support the changes occurring from the health data it originated from. The Events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services Workspace. > [!IMPORTANT] > When Fast Healthcare Interoperability Resources (FHIR®) resource changes or > [!IMPORTANT] > -> Events currently supports only the following operations: +> Events currently supports the following operations: > > - **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully. > Use Events to send FHIR resource and DICOM image change messages to services lik ## Secure -Built on a platform that supports protected health information and customer content data compliance with privacy, safety, and security in mind, the Events messages do not transmit sensitive data as part of the message payload. +Built on a platform that supports protected health information and customer content data compliance with privacy, safety, and security in mind, the Events messages don't transmit sensitive data as part of the message payload. Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the Events message receiving endpoints of your choice. 
## Next steps -For more information about deploying Events, see +To learn about deploying Events using the Azure portal, see >[!div class="nextstepaction"] >[Deploying Events using the Azure portal](./events-deploy-portal.md) -For frequently asks questions (FAQs) about Events, see +To learn about the frequently asked questions (FAQs) about Events, see >[!div class="nextstepaction"] >[Frequently asked questions about Events](./events-faqs.md) -For Events troubleshooting resources, see +To learn about troubleshooting Events, see >[!div class="nextstepaction"]->[Events troubleshooting guide](./events-troubleshooting-guide.md) +>[Troubleshoot Events](./events-troubleshooting-guide.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Events Troubleshooting Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-troubleshooting-guide.md | Title: Events troubleshooting guides - Azure Health Data Services -description: This article helps Events users troubleshoot error messages, conditions, and provides fixes. + Title: Troubleshoot Events - Azure Health Data Services +description: Learn how to troubleshoot Events. Previously updated : 07/06/2022 Last updated : 06/23/2023 # Troubleshoot Events +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. + This article provides guides and resources to troubleshoot Events. > [!IMPORTANT] >-> Fast Healthcare Interoperability Resources (FHIR®) resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off. +> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off. :::image type="content" source="media/events-overview/events-overview-flow.png" alt-text="Diagram of data flow from users to a FHIR service and then into the Events pipeline" lightbox="media/events-overview/events-overview-flow.png"::: This article provides guides and resources to troubleshoot Events. ### Events message structure -Use this resource to learn about the Events message structure, required and non-required elements, and sample messages: +Use this resource to learn about the Events message structure, required and nonrequired elements, and sample messages: * [Events message structure](./events-message-structure.md) ### How to If you have a technical question about Events or if you have a support related i To learn about frequently asked questions (FAQs) about Events, see >[!div class="nextstepaction"]->[Frequently asked questions about Events](./events-faqs.md) +>[Frequently asked questions about Events](events-faqs.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Events Use Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-use-metrics.md | Title: Use Events metrics in Azure Health Data Services -description: This article explains how use display Events metrics + Title: Use Events metrics - Azure Health Data Services +description: Learn how to use Events metrics. Previously updated : 10/21/2022 Last updated : 06/23/2023 # How to use Events metrics -In this article, you'll learn how to use Events metrics in the Azure portal. +> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++In this article, learn how to use Events metrics in the Azure portal. > [!TIP]-> To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md)] +> To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md). > [!NOTE]-> For the purposes of this article, an Azure Event Hubs event hub was used as the Events message endpoint. +> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) event hub was used as the Events message endpoint. ## Use metrics In this article, you'll learn how to use Events metrics in the Azure portal. :::image type="content" source="media\events-display-metrics\events-metrics-main.png" alt-text="Screenshot of events you would like to display metrics for." lightbox="media\events-display-metrics\events-metrics-main.png"::: -3. From this page, you'll notice that the subscription named **fhir-events** has one processed message. To view the Event Hubs metrics, select the name of the Event Hubs (for this example, **azuredocsfhirservice**) from the lower right-hand corner of the page. +3. From this page, notice that the subscription named **fhir-events** has one processed message. To view the Event Hubs metrics, select the name of the Event Hubs (for this example, **azuredocsfhirservice**) from the lower right-hand corner of the page. :::image type="content" source="media\events-display-metrics\events-metrics-subscription.png" alt-text="Screenshot of select the metrics button." lightbox="media\events-display-metrics\events-metrics-subscription.png"::: -4. From this page, you'll notice that the Event Hubs received the incoming message presented in the previous Events subscription metrics pages. +4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events subscription metrics pages. :::image type="content" source="media\events-display-metrics\events-metrics-event-hub.png" alt-text="Screenshot of displaying event hubs metrics." lightbox="media\events-display-metrics\events-metrics-event-hub.png"::: To learn how to export Events Azure Event Grid system diagnostic logs and metric > [!div class="nextstepaction"] > [Enable diagnostic settings for Events](events-enable-diagnostic-settings.md) -FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. +FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
iot-hub-device-update | Import Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-update.md | To import an update, you first upload the update files and import manifest into :::image type="content" source="media/import-update/import-new-update-2-ppr.png" alt-text="Import New Update" lightbox="media/import-update/import-new-update-2-ppr.png"::: -5. Select **+ Select from storage container**. The Storage accounts UI is shown. Select an existing account, or create an account using **+ Storage account**. This account is used for a container to stage your updates for import. +5. Select **+ Select from storage container**. The Storage accounts UI is shown. Select an existing account, or create an account using **+ Storage account**. This account is used for a container to stage your updates for import. The account should not have both public and private endpoints enabled at the same time. :::image type="content" source="media/import-update/select-update-files-ppr.png" alt-text="Select Update Files" lightbox="media/import-update/select-update-files-ppr.png"::: |
iot-hub | Migrate Tls Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md | For each IoT hub, you can expect the following: ### Request an extension -This TLS certificate migration is critical for the security of our customers and Microsoft's infrastructure, and is time-bound by the expiration of the Baltimore CyberTrust Root certificate. Therefore, there's little extra time that we can provide for customers that don't think their devices will be ready by February 15, 2023. If you absolutely can't meet the February 2023 target date, [fill out this form](https://aka.ms/BaltimoreAllow) with the details of your extension request, and then [email us](mailto:iot-ca-updates@microsoft.com?subject=Requesting%20extension%20for%20Baltimore%20migration) with a message that indicates you've completed the form, along with your company name. We can flag the specific hubs to be migrated later in the rollout window. +This TLS certificate migration is critical for the security of our customers and Microsoft's infrastructure, and is time-bound by the expiration of the Baltimore CyberTrust Root certificate. Therefore, there's little extra time that we can provide for customers that don't think their devices will be ready by the migration deadlines. ++As of June 2023 the extension request process is closed for IoT Hub customers. ++IoT Central applications are scheduled for migration between June 15th and October 15th, 2023. For IoT Central customers who absolutely can't have their devices ready for migration by June 2023, [fill out this form](https://aka.ms/BaltimoreAllowCentral) before August 15, 2023 with the details of your extension request, and then [email us](mailto:iot-ca-updates@microsoft.com?subject=Requesting%20extension%20for%20Baltimore%20migration) with a message that indicates you've completed the form, along with your company name. We can flag the specific IoT Central apps to be migrated on the requested extension date. > [!NOTE] > We are collecting this information to help with the Baltimore migration. We will hold onto this information until October 15th, 2023, when this migration is slated to complete. If you would like us to delete this information, please [email us](mailto:iot-ca-updates@microsoft.com) and we can assist you. For any additional questions about the Microsoft privacy policy, see the [Microsoft Privacy Statement](https://go.microsoft.com/fwlink/?LinkId=521839). |
machine-learning | How To Prepare Datasets For Automl Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md | In this article, you learn how to prepare image data for training computer visio To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an `MLTable`. You can create an `MLTable` from labeled training data in JSONL format. -If your labeled training data is in a different format (like, pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model. +If your labeled training data is in a different format (like, pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model. ## Prerequisites |
private-link | Create Private Link Service Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-powershell.md | -If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. +- Azure Cloud Shell or Azure PowerShell. ++ The steps in this quickstart run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal. ++ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. The steps in this article require Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module). ++ If you run PowerShell locally, run `Connect-AzAccount` to connect to Azure. ## Create a resource group An Azure resource group is a logical container into which Azure resources are de Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup): ```azurepowershell-interactive-New-AzResourceGroup -Name 'CreatePrivLinkService-rg' -Location 'eastus2' +New-AzResourceGroup -Name 'test-rg' -Location 'eastus2' ``` ## Create an internal load balancer -In this section, you'll create a virtual network and an internal Azure Load Balancer. +In this section, you create a virtual network and an internal Azure Load Balancer. ### Virtual network In this section, you create a virtual network and subnet to host the load balanc ```azurepowershell-interactive ## Create backend subnet config ## $subnet = @{- Name = 'mySubnet' - AddressPrefix = '10.1.0.0/24' + Name = 'subnet-1' + AddressPrefix = '10.0.0.0/24' } $subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet ## Create the virtual network ## $net = @{- Name = 'myVNet' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' Location = 'eastus2'- AddressPrefix = '10.1.0.0/16' + AddressPrefix = '10.0.0.0/16' Subnet = $subnetConfig } $vnet = New-AzVirtualNetwork @net This section details how you can create and configure the following components o ```azurepowershell-interactive ## Place virtual network created in previous step into a variable. ##-$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'CreatePrivLinkService-rg' +$vnet = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg' ## Create load balancer frontend configuration and place in variable. ## $lbip = @{- Name = 'myFrontEnd' - PrivateIpAddress = '10.1.0.4' + Name = 'frontend' + PrivateIpAddress = '10.0.0.4' SubnetId = $vnet.subnets[0].Id } $feip = New-AzLoadBalancerFrontendIpConfig @lbip ## Create backend address pool configuration and place in variable. 
##-$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool' +$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'backend-pool' ## Create the health probe and place in variable. ## $probe = @{- Name = 'myHealthProbe' + Name = 'health-probe' Protocol = 'http' Port = '80' IntervalInSeconds = '360' $healthprobe = New-AzLoadBalancerProbeConfig @probe ## Create the load balancer rule and place in variable. ## $lbrule = @{- Name = 'myHTTPRule' + Name = 'http-rule' Protocol = 'tcp' FrontendPort = '80' BackendPort = '80' $rule = New-AzLoadBalancerRuleConfig @lbrule -EnableTcpReset ## Create the load balancer resource. ## $loadbalancer = @{- ResourceGroupName = 'CreatePrivLinkService-rg' - Name = 'myLoadBalancer' + ResourceGroupName = 'test-rg' + Name = 'load-balancer' Location = 'eastus2' Sku = 'Standard' FrontendIpConfiguration = $feip Before a private link service can be created in the virtual network, the setting ```azurepowershell-interactive ## Place the subnet name into a variable. ##-$subnet = 'mySubnet' +$subnet = 'subnet-1' ## Place the virtual network configuration into a variable. ## $net = @{- Name = 'myVNet' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' } $vnet = Get-AzVirtualNetwork @net In this section, create a private link service that uses the Standard Azure Load ```azurepowershell-interactive ## Place the virtual network into a variable. ##-$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'CreatePrivLinkService-rg' +$vnet = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg' ## Create the IP configuration for the private link service. ## $ipsettings = @{- Name = 'myIPconfig' - PrivateIpAddress = '10.1.0.5' + Name = 'ipconfig-1' + PrivateIpAddress = '10.0.0.5' Subnet = $vnet.subnets[0] } $ipconfig = New-AzPrivateLinkServiceIpConfig @ipsettings ## Place the load balancer frontend configuration into a variable. ## $par = @{- Name = 'myLoadBalancer' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'load-balancer' + ResourceGroupName = 'test-rg' } $fe = Get-AzLoadBalancer @par | Get-AzLoadBalancerFrontendIpConfig ## Create the private link service for the load balancer. ## $privlinksettings = @{- Name = 'myPrivateLinkService' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'private-link-service' + ResourceGroupName = 'test-rg' Location = 'eastus2' LoadBalancerFrontendIpConfiguration = $fe IpConfiguration = $ipconfig Your private link service is created and can receive traffic. If you want to see ## Create private endpoint -In this section, you'll map the private link service to a private endpoint. A virtual network contains the private endpoint for the private link service. This virtual network contains the resources that will access your private link service. +In this section, you map the private link service to a private endpoint. A virtual network contains the private endpoint for the private link service. This virtual network contains the resources that access your private link service. ### Create private endpoint virtual network In this section, you'll map the private link service to a private endpoint. 
A vi ```azurepowershell-interactive ## Create backend subnet config ## $subnet = @{- Name = 'mySubnetPE' - AddressPrefix = '11.1.0.0/24' + Name = 'subnet-pe' + AddressPrefix = '10.1.0.0/24' } $subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet ## Create the virtual network ## $net = @{- Name = 'myVNetPE' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'vnet-pe' + ResourceGroupName = 'test-rg' Location = 'eastus2'- AddressPrefix = '11.1.0.0/16' + AddressPrefix = '10.1.0.0/16' Subnet = $subnetConfig } $vnetpe = New-AzVirtualNetwork @net $vnetpe = New-AzVirtualNetwork @net ```azurepowershell-interactive ## Place the private link service configuration into variable. ## $par1 = @{- Name = 'myPrivateLinkService' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'private-link-service' + ResourceGroupName = 'test-rg' } $pls = Get-AzPrivateLinkService @par1 ## Create the private link configuration and place in variable. ## $par2 = @{- Name = 'myPrivateLinkConnection' + Name = 'connection-1' PrivateLinkServiceId = $pls.Id } $plsConnection = New-AzPrivateLinkServiceConnection @par2 ## Place the virtual network into a variable. ## $par3 = @{- Name = 'myVNetPE' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'vnet-pe' + ResourceGroupName = 'test-rg' } $vnetpe = Get-AzVirtualNetwork @par3 ## Create private endpoint ## $par4 = @{- Name = 'MyPrivateEndpoint' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'private-endpoint' + ResourceGroupName = 'test-rg' Location = 'eastus2' Subnet = $vnetpe.subnets[0] PrivateLinkServiceConnection = $plsConnection New-AzPrivateEndpoint @par4 -ByManualRequest ### Approve the private endpoint connection -In this section, you'll approve the connection you created in the previous steps. +In this section, you approve the connection you created in the previous steps. * Use [Approve-AzPrivateEndpointConnection](/powershell/module/az.network/approve-azprivateendpointconnection) to approve the connection. ```azurepowershell-interactive ## Place the private link service configuration into variable. ## $par1 = @{- Name = 'myPrivateLinkService' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'private-link-service' + ResourceGroupName = 'test-rg' } $pls = Get-AzPrivateLinkService @par1 $par2 = @{ Name = $pls.PrivateEndpointConnections[0].Name- ServiceName = 'myPrivateLinkService' - ResourceGroupName = 'CreatePrivLinkService-rg' + ServiceName = 'private-link-service' + ResourceGroupName = 'test-rg' Description = 'Approved' PrivateLinkResourceType = 'Microsoft.Network/privateLinkServices' } Approve-AzPrivateEndpointConnection @par2 ### IP address of private endpoint -In this section, you'll find the IP address of the private endpoint that corresponds with the load balancer and private link service. +In this section, you find the IP address of the private endpoint that corresponds with the load balancer and private link service. * Use [Get-AzPrivateEndpoint](/powershell/module/az.network/get-azprivateendpoint) to retrieve the IP address. ```azurepowershell-interactive ## Get private endpoint and the IP address and place in a variable for display. 
## $par1 = @{- Name = 'myPrivateEndpoint' - ResourceGroupName = 'CreatePrivLinkService-rg' + Name = 'private-endpoint' + ResourceGroupName = 'test-rg' ExpandResource = 'networkinterfaces' } $pe = Get-AzPrivateEndpoint @par1 $pe.NetworkInterfaces[0].IpConfigurations[0].PrivateIpAddress ```powershell ❯ $pe.NetworkInterfaces[0].IpConfigurations[0].PrivateIpAddress-11.1.0.4 +10.1.0.4 ``` ## Clean up resources $pe.NetworkInterfaces[0].IpConfigurations[0].PrivateIpAddress When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, load balancer, and the remaining resources. ```azurepowershell-interactive-Remove-AzResourceGroup -Name 'CreatePrivLinkService-rg' +Remove-AzResourceGroup -Name 'test-rg' ``` ## Next steps |
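As a quick sanity check after the private link service and endpoint steps in the quickstart above, a short query of the connection state can confirm that the approval took effect. This is an illustrative sketch only, assuming the resource names used in the updated quickstart (`test-rg`, `private-link-service`):

```azurepowershell-interactive
## Illustrative check: list the private endpoint connections on the service and their approval state. ##
$pls = Get-AzPrivateLinkService -Name 'private-link-service' -ResourceGroupName 'test-rg'

$pls.PrivateEndpointConnections | Format-List Name, PrivateLinkServiceConnectionState
```

The `PrivateLinkServiceConnectionState` value should report an `Approved` status once the `Approve-AzPrivateEndpointConnection` step has completed.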
role-based-access-control | Built In Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md | The following table provides a brief description of each built-in role. Click th > | [EventGrid EventSubscription Contributor](#eventgrid-eventsubscription-contributor) | Lets you manage EventGrid event subscription operations. | 428e0ff0-5e57-4d9c-a221-2c70d0e0a443 | > | [EventGrid EventSubscription Reader](#eventgrid-eventsubscription-reader) | Lets you read EventGrid event subscriptions. | 2414bbcf-6497-4faf-8c65-045460748405 | > | [FHIR Data Contributor](#fhir-data-contributor) | Role allows user or principal full access to FHIR Data | 5a1fc7df-4bf1-4951-a576-89034ee01acd |-> | [FHIR Data Importer](#fhir-data-importer) | Role allows user or principal to read and import FHIR Data | 4465e953-8ced-4406-a58e-0f6e3f3b530b | > | [FHIR Data Exporter](#fhir-data-exporter) | Role allows user or principal to read and export FHIR Data | 3db33094-8700-4567-8da5-1501d4e7e843 |+> | [FHIR Data Importer](#fhir-data-importer) | Role allows user or principal to read and import FHIR Data | 4465e953-8ced-4406-a58e-0f6e3f3b530b | > | [FHIR Data Reader](#fhir-data-reader) | Role allows user or principal to read FHIR Data | 4c8d0bbc-75d3-4935-991f-5f3c56d81508 | > | [FHIR Data Writer](#fhir-data-writer) | Role allows user or principal to read and write FHIR Data | 3f88fce4-5892-4214-ae73-ba5294559913 | > | [Integration Service Environment Contributor](#integration-service-environment-contributor) | Lets you manage integration service environments, but not access to them. | a41e2c5b-bd99-4a07-88f4-9bf657a760b8 | Lets you manage backup service, but can't create vaults and give access to other > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/restore/action | Triggers restore on the Backup Instance |+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/crossRegionRestore/action | Triggers cross region restore operation on given backup instance. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/validateCrossRegionRestore/action | Performs validations for cross region restore operation. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJobs/action | List cross region restore jobs of backup instance from secondary region. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJob/action | Get cross region restore job details from secondary region. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchSecondaryRecoveryPoints/action | Returns recovery points from secondary region for cross region restore enabled Backup Vaults. 
| > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/write | Creates Backup Policy | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/delete | Deletes the Backup Policy | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies | Lets you manage backup service, but can't create vaults and give access to other "Microsoft.DataProtection/backupVaults/backupInstances/backup/action", "Microsoft.DataProtection/backupVaults/backupInstances/validateRestore/action", "Microsoft.DataProtection/backupVaults/backupInstances/restore/action",+ "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/crossRegionRestore/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/validateCrossRegionRestore/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJobs/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJob/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchSecondaryRecoveryPoints/action", "Microsoft.DataProtection/backupVaults/backupPolicies/write", "Microsoft.DataProtection/backupVaults/backupPolicies/delete", "Microsoft.DataProtection/backupVaults/backupPolicies/read", Lets you manage backup services, except removal of backup, vault creation and gi > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/restore/action | Triggers restore on the Backup Instance |+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/crossRegionRestore/action | Triggers cross region restore operation on given backup instance. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/validateCrossRegionRestore/action | Performs validations for cross region restore operation. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJobs/action | List cross region restore jobs of backup instance from secondary region. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJob/action | Get cross region restore job details from secondary region. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchSecondaryRecoveryPoints/action | Returns recovery points from secondary region for cross region restore enabled Backup Vaults. 
| > | **NotActions** | | > | *none* | | > | **DataActions** | | Lets you manage backup services, except removal of backup, vault creation and gi "Microsoft.DataProtection/backupVaults/validateForBackup/action", "Microsoft.DataProtection/backupVaults/backupInstances/backup/action", "Microsoft.DataProtection/backupVaults/backupInstances/validateRestore/action",- "Microsoft.DataProtection/backupVaults/backupInstances/restore/action" + "Microsoft.DataProtection/backupVaults/backupInstances/restore/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/crossRegionRestore/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/validateCrossRegionRestore/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJobs/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJob/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchSecondaryRecoveryPoints/action" ], "notActions": [], "dataActions": [], Can view backup services, but can't make changes [Learn more](../backup/backup-r > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/validateForBackup/action | Validates for backup of Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | Operation returns the list of Operations for a Resource Provider |+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJobs/action | List cross region restore jobs of backup instance from secondary region. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJob/action | Get cross region restore job details from secondary region. | +> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/subscriptions/resourceGroups/providers/locations/fetchSecondaryRecoveryPoints/action | Returns recovery points from secondary region for cross region restore enabled Backup Vaults. 
| > | **NotActions** | | > | *none* | | > | **DataActions** | | Can view backup services, but can't make changes [Learn more](../backup/backup-r "Microsoft.DataProtection/locations/operationStatus/read", "Microsoft.DataProtection/locations/operationResults/read", "Microsoft.DataProtection/backupVaults/validateForBackup/action",- "Microsoft.DataProtection/operations/read" + "Microsoft.DataProtection/operations/read", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJobs/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchCrossRegionRestoreJob/action", + "Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/fetchSecondaryRecoveryPoints/action" ], "notActions": [], "dataActions": [], Allows for read, write, delete, and modify ACLs on files/directories in Azure fi > | **NotActions** | | > | *none* | | > | **DataActions** | |-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/read | Returns a file/folder or a list of files/folders. | -> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/write | Returns the result of writing a file or creating a folder. | -> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/delete | Returns the result of deleting a file/folder. | -> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/modifypermissions/action | Returns the result of modifying permission on a file/folder. | -> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/readFileBackupSemantics/action | Read file backup sematics privilege. | -> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/writeFileBackupSemantics/action | Write file backup sematics privilege. 
| +> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/read | Returns a file/folder or a list of files/folders | +> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/write | Returns the result of writing a file or creating a folder | +> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/delete | Returns the result of deleting a file/folder | +> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/modifypermissions/action | Returns the result of modifying permission on a file/folder | +> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/readFileBackupSemantics/action | Read File Backup Sematics Privilege | +> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/writeFileBackupSemantics/action | Write File Backup Sematics Privilege | > | **NotDataActions** | | > | *none* | | ```json {- "id": "/providers/Microsoft.Authorization/roleDefinitions/69566ab7-960f-475b-8e7c-b3118f30c6bd", - "properties": { - "roleName": "Storage File Data Privileged Contributor", - "description": "Customer has read, write, delete and modify NTFS permission access on Azure Storage file shares.", - "assignableScopes": [ - "/" - ], - "permissions": [ - { - "actions": [], - "notActions": [], - "dataActions": [ - "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/read", - "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/write", - "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/delete", - "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/modifypermissions/action", - "Microsoft.Storage/storageAccounts/fileServices/readFileBackupSemantics/action", - "Microsoft.Storage/storageAccounts/fileServices/writeFileBackupSemantics/action" - ], - "notDataActions": [] - } - ] + "assignableScopes": [ + "/" + ], + "description": "Customer has read, write, delete and modify NTFS permission access on Azure Storage file shares.", + "id": "/providers/Microsoft.Authorization/roleDefinitions/69566ab7-960f-475b-8e7c-b3118f30c6bd", + "name": "69566ab7-960f-475b-8e7c-b3118f30c6bd", + "permissions": [ + { + "actions": [], + "notActions": [], + "dataActions": [ + "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/read", + "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/write", + "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/delete", + "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/modifypermissions/action", + "Microsoft.Storage/storageAccounts/fileServices/readFileBackupSemantics/action", + "Microsoft.Storage/storageAccounts/fileServices/writeFileBackupSemantics/action" + ], + "notDataActions": [] }+ ], + "roleName": "Storage File Data Privileged Contributor", + "roleType": "BuiltInRole", + "type": "Microsoft.Authorization/roleDefinitions" } ``` Allows for read access on files/directories in Azure file shares by overriding e > | **NotActions** | | > | *none* | | > | **DataActions** | |-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/readFileBackupSemantics/action | Read file backup sematics privilege. 
| -> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/read | Returns a file/folder or a list of files/folders. | +> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/fileshares/files/read | Returns a file/folder or a list of files/folders | +> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/fileServices/readFileBackupSemantics/action | Read File Backup Sematics Privilege | > | **NotDataActions** | | > | *none* | | ```json {- "id": "/providers/Microsoft.Authorization/roleDefinitions/b8eda974-7b85-4f76-af95-65846b26df6d", - "properties": { - "roleName": "Storage File Data Privileged Reader", - "description": "Customer has read access on Azure Storage file shares.", - "assignableScopes": [ - "/" - ], - "permissions": [ - { - "actions": [], - "notActions": [], - "dataActions": [ - "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/read", - "Microsoft.Storage/storageAccounts/fileServices/readFileBackupSemantics/action" - ], - "notDataActions": [] - } - ] + "assignableScopes": [ + "/" + ], + "description": "Customer has read access on Azure Storage file shares.", + "id": "/providers/Microsoft.Authorization/roleDefinitions/b8eda974-7b85-4f76-af95-65846b26df6d", + "name": "b8eda974-7b85-4f76-af95-65846b26df6d", + "permissions": [ + { + "actions": [], + "notActions": [], + "dataActions": [ + "Microsoft.Storage/storageAccounts/fileServices/fileshares/files/read", + "Microsoft.Storage/storageAccounts/fileServices/readFileBackupSemantics/action" + ], + "notDataActions": [] }+ ], + "roleName": "Storage File Data Privileged Reader", + "roleType": "BuiltInRole", + "type": "Microsoft.Authorization/roleDefinitions" } ``` Allows read-only access to see most objects in a namespace. It does not allow vi > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/batch/cronjobs/read | Reads cronjobs | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/batch/jobs/read | Reads jobs | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/configmaps/read | Reads configmaps |+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/discovery.k8s.io/endpointslices/read | Reads endpointslices | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/endpoints/read | Reads endpoints | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/events.k8s.io/events/read | Reads events | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/events/read | Reads events | Allows read-only access to see most objects in a namespace. 
It does not allow vi > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/extensions/networkpolicies/read | Reads networkpolicies | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/extensions/replicasets/read | Reads replicasets | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/limitranges/read | Reads limitranges |+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/metrics.k8s.io/pods/read | Reads pods | +> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/metrics.k8s.io/nodes/read | Reads nodes | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/namespaces/read | Reads namespaces | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/networking.k8s.io/ingresses/read | Reads ingresses | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/networking.k8s.io/networkpolicies/read | Reads networkpolicies | Allows read-only access to see most objects in a namespace. It does not allow vi > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/pods/read | Reads pods | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/policy/poddisruptionbudgets/read | Reads poddisruptionbudgets | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/replicationcontrollers/read | Reads replicationcontrollers |-> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/replicationcontrollers/read | Reads replicationcontrollers | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/resourcequotas/read | Reads resourcequotas | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/serviceaccounts/read | Reads serviceaccounts | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/services/read | Reads services | Allows read-only access to see most objects in a namespace. It does not allow vi "Microsoft.ContainerService/managedClusters/batch/cronjobs/read", "Microsoft.ContainerService/managedClusters/batch/jobs/read", "Microsoft.ContainerService/managedClusters/configmaps/read",+ "Microsoft.ContainerService/managedClusters/discovery.k8s.io/endpointslices/read", "Microsoft.ContainerService/managedClusters/endpoints/read", "Microsoft.ContainerService/managedClusters/events.k8s.io/events/read", "Microsoft.ContainerService/managedClusters/events/read", Allows read-only access to see most objects in a namespace. 
It does not allow vi "Microsoft.ContainerService/managedClusters/extensions/networkpolicies/read", "Microsoft.ContainerService/managedClusters/extensions/replicasets/read", "Microsoft.ContainerService/managedClusters/limitranges/read",+ "Microsoft.ContainerService/managedClusters/metrics.k8s.io/pods/read", + "Microsoft.ContainerService/managedClusters/metrics.k8s.io/nodes/read", "Microsoft.ContainerService/managedClusters/namespaces/read", "Microsoft.ContainerService/managedClusters/networking.k8s.io/ingresses/read", "Microsoft.ContainerService/managedClusters/networking.k8s.io/networkpolicies/read", Allows read-only access to see most objects in a namespace. It does not allow vi "Microsoft.ContainerService/managedClusters/pods/read", "Microsoft.ContainerService/managedClusters/policy/poddisruptionbudgets/read", "Microsoft.ContainerService/managedClusters/replicationcontrollers/read",- "Microsoft.ContainerService/managedClusters/replicationcontrollers/read", "Microsoft.ContainerService/managedClusters/resourcequotas/read", "Microsoft.ContainerService/managedClusters/serviceaccounts/read", "Microsoft.ContainerService/managedClusters/services/read" Allows read/write access to most objects in a namespace. This role does not allo > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/apps/statefulsets/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/autoscaling/horizontalpodautoscalers/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/batch/cronjobs/* | |+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/coordination.k8s.io/leases/read | Reads leases | +> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/coordination.k8s.io/leases/write | Writes leases | +> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/coordination.k8s.io/leases/delete | Deletes leases | +> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/discovery.k8s.io/endpointslices/read | Reads endpointslices | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/batch/jobs/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/configmaps/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/endpoints/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/events.k8s.io/events/read | Reads events |-> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/events/read | Reads events | +> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/events/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/extensions/daemonsets/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/extensions/deployments/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/extensions/ingresses/* | | > | 
[Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/extensions/networkpolicies/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/extensions/replicasets/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/limitranges/read | Reads limitranges |+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/metrics.k8s.io/pods/read | Reads pods | +> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/metrics.k8s.io/nodes/read | Reads nodes | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/namespaces/read | Reads namespaces | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/networking.k8s.io/ingresses/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/networking.k8s.io/networkpolicies/* | | Allows read/write access to most objects in a namespace. This role does not allo > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/pods/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/policy/poddisruptionbudgets/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/replicationcontrollers/* | |-> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/replicationcontrollers/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/resourcequotas/read | Reads resourcequotas | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/secrets/* | | > | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/managedClusters/serviceaccounts/* | | Allows read/write access to most objects in a namespace. 
This role does not allo "Microsoft.ContainerService/managedClusters/apps/statefulsets/*", "Microsoft.ContainerService/managedClusters/autoscaling/horizontalpodautoscalers/*", "Microsoft.ContainerService/managedClusters/batch/cronjobs/*",+ "Microsoft.ContainerService/managedClusters/coordination.k8s.io/leases/read", + "Microsoft.ContainerService/managedClusters/coordination.k8s.io/leases/write", + "Microsoft.ContainerService/managedClusters/coordination.k8s.io/leases/delete", + "Microsoft.ContainerService/managedClusters/discovery.k8s.io/endpointslices/read", "Microsoft.ContainerService/managedClusters/batch/jobs/*", "Microsoft.ContainerService/managedClusters/configmaps/*", "Microsoft.ContainerService/managedClusters/endpoints/*", "Microsoft.ContainerService/managedClusters/events.k8s.io/events/read",- "Microsoft.ContainerService/managedClusters/events/read", + "Microsoft.ContainerService/managedClusters/events/*", "Microsoft.ContainerService/managedClusters/extensions/daemonsets/*", "Microsoft.ContainerService/managedClusters/extensions/deployments/*", "Microsoft.ContainerService/managedClusters/extensions/ingresses/*", "Microsoft.ContainerService/managedClusters/extensions/networkpolicies/*", "Microsoft.ContainerService/managedClusters/extensions/replicasets/*", "Microsoft.ContainerService/managedClusters/limitranges/read",+ "Microsoft.ContainerService/managedClusters/metrics.k8s.io/pods/read", + "Microsoft.ContainerService/managedClusters/metrics.k8s.io/nodes/read", "Microsoft.ContainerService/managedClusters/namespaces/read", "Microsoft.ContainerService/managedClusters/networking.k8s.io/ingresses/*", "Microsoft.ContainerService/managedClusters/networking.k8s.io/networkpolicies/*", Allows read/write access to most objects in a namespace. This role does not allo "Microsoft.ContainerService/managedClusters/pods/*", "Microsoft.ContainerService/managedClusters/policy/poddisruptionbudgets/*", "Microsoft.ContainerService/managedClusters/replicationcontrollers/*",- "Microsoft.ContainerService/managedClusters/replicationcontrollers/*", "Microsoft.ContainerService/managedClusters/resourcequotas/read", "Microsoft.ContainerService/managedClusters/secrets/*", "Microsoft.ContainerService/managedClusters/serviceaccounts/*", Can perform all actions within an Azure Machine Learning workspace, except for c > | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/action | | > | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/delete | | > | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/write | |-> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/featurestores/read | Gets the Machine Learning Services FeatureStore(s) | -> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/featurestores/checkNameAvailability/read | Checks the Machine Learning Services FeatureStore name availability | > | **NotActions** | | > | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/delete | Deletes the Machine Learning Services Workspace(s) | > | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/write | Creates or updates a Machine Learning Services Workspace(s) | Can perform all actions within an Azure 
Machine Learning workspace, except for c > | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/computes/*/delete | | > | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/computes/listKeys/action | List secrets for compute resources in Machine Learning Services Workspace | > | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/listKeys/action | List secrets for a Machine Learning Services Workspace |+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/hubs/write | Creates or updates a Machine Learning Services Hub Workspace(s) | +> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/hubs/delete | Deletes the Machine Learning Services Hub Workspace(s) | +> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/featurestores/write | Creates or Updates the Machine Learning Services FeatureStore(s) | +> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/featurestores/delete | Deletes the Machine Learning Services FeatureStore(s) | > | **DataActions** | | > | *none* | | > | **NotDataActions** | | Can perform all actions within an Azure Machine Learning workspace, except for c "Microsoft.MachineLearningServices/workspaces/*/read", "Microsoft.MachineLearningServices/workspaces/*/action", "Microsoft.MachineLearningServices/workspaces/*/delete",- "Microsoft.MachineLearningServices/workspaces/*/write", - "Microsoft.MachineLearningServices/featurestores/read", - "Microsoft.MachineLearningServices/featurestores/checkNameAvailability/read" + "Microsoft.MachineLearningServices/workspaces/*/write" ], "notActions": [ "Microsoft.MachineLearningServices/workspaces/delete", Can perform all actions within an Azure Machine Learning workspace, except for c "Microsoft.MachineLearningServices/workspaces/computes/*/write", "Microsoft.MachineLearningServices/workspaces/computes/*/delete", "Microsoft.MachineLearningServices/workspaces/computes/listKeys/action",- "Microsoft.MachineLearningServices/workspaces/listKeys/action" + "Microsoft.MachineLearningServices/workspaces/listKeys/action", + "Microsoft.MachineLearningServices/workspaces/hubs/write", + "Microsoft.MachineLearningServices/workspaces/hubs/delete", + "Microsoft.MachineLearningServices/workspaces/featurestores/write", + "Microsoft.MachineLearningServices/workspaces/featurestores/delete" ], "dataActions": [], "notDataActions": [] Read access to view files, models, deployments. The ability to create completion > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/search/action | Search for the most relevant documents using the current engine. | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/completions/action | Create a completion from a chosen model. 
| > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/chat/completions/action | Creates a completion for the chat message |+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/extensions/chat/completions/action | Creates a completion for the chat message with extensions | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/embeddings/action | Return the embeddings for a given prompt. | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/completions/write | | > | **NotDataActions** | | Read access to view files, models, deployments. The ability to create completion "Microsoft.CognitiveServices/accounts/OpenAI/deployments/search/action", "Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/action", "Microsoft.CognitiveServices/accounts/OpenAI/deployments/chat/completions/action",+ "Microsoft.CognitiveServices/accounts/OpenAI/deployments/extensions/chat/completions/action", "Microsoft.CognitiveServices/accounts/OpenAI/deployments/embeddings/action", "Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/write" ], Role allows user or principal full access to FHIR Data [Learn more](../healthcar > | **NotActions** | | > | *none* | | > | **DataActions** | |-> | Microsoft.HealthcareApis/services/fhir/resources/* | | -> | Microsoft.HealthcareApis/workspaces/fhirservices/resources/* | | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/services/fhir/resources/* | | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/* | | > | **NotDataActions** | | > | *none* | | Role allows user or principal full access to FHIR Data [Learn more](../healthcar } ``` -### FHIR Data Importer +### FHIR Data Exporter -Role allows user or principal to read and import FHIR Data [Learn more](../healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md) +Role allows user or principal to read and export FHIR Data [Learn more](../healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md) > [!div class="mx-tableFixed"] > | Actions | Description | Role allows user or principal to read and import FHIR Data [Learn more](../healt > | **NotActions** | | > | *none* | | > | **DataActions** | |-> | Microsoft.HealthcareApis/services/fhir/resources/read | Read FHIR resources (includes searching and versioned history). | -> | Microsoft.HealthcareApis/services/fhir/resources/import/action | Import operation ($export). | -> | Microsoft.HealthcareApis/workspaces/fhirservices/resources/read | Read FHIR resources (includes searching and versioned history). | -> | Microsoft.HealthcareApis/workspaces/fhirservices/resources/import/action | Import operation ($export). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/services/fhir/resources/read | Read FHIR resources (includes searching and versioned history). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/services/fhir/resources/export/action | Export operation ($export). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/read | Read FHIR resources (includes searching and versioned history). 
| +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/export/action | Export operation ($export). | > | **NotDataActions** | | > | *none* | | Role allows user or principal to read and import FHIR Data [Learn more](../healt "assignableScopes": [ "/" ],- "description": "Role allows user or principal to read and import FHIR Data", - "id": "/providers/Microsoft.Authorization/roleDefinitions/4465e953-8ced-4406-a58e-0f6e3f3b530b", - "name": "4465e953-8ced-4406-a58e-0f6e3f3b530b", + "description": "Role allows user or principal to read and export FHIR Data", + "id": "/providers/Microsoft.Authorization/roleDefinitions/3db33094-8700-4567-8da5-1501d4e7e843", + "name": "3db33094-8700-4567-8da5-1501d4e7e843", "permissions": [ { "actions": [], "notActions": [], "dataActions": [ "Microsoft.HealthcareApis/services/fhir/resources/read",- "Microsoft.HealthcareApis/services/fhir/resources/import/action", + "Microsoft.HealthcareApis/services/fhir/resources/export/action", "Microsoft.HealthcareApis/workspaces/fhirservices/resources/read",- "Microsoft.HealthcareApis/workspaces/fhirservices/resources/import/action" + "Microsoft.HealthcareApis/workspaces/fhirservices/resources/export/action" ], "notDataActions": [] } ],- "roleName": "FHIR Data Importer", + "roleName": "FHIR Data Exporter", "roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } ``` -### FHIR Data Exporter +### FHIR Data Importer -Role allows user or principal to read and export FHIR Data [Learn more](../healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md) +Role allows user or principal to read and import FHIR Data [Learn more](../healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md) > [!div class="mx-tableFixed"] > | Actions | Description | Role allows user or principal to read and export FHIR Data [Learn more](../healt > | **NotActions** | | > | *none* | | > | **DataActions** | |-> | Microsoft.HealthcareApis/services/fhir/resources/read | Read FHIR resources (includes searching and versioned history). | -> | Microsoft.HealthcareApis/services/fhir/resources/export/action | Export operation ($export). | -> | Microsoft.HealthcareApis/workspaces/fhirservices/resources/read | Read FHIR resources (includes searching and versioned history). | -> | Microsoft.HealthcareApis/workspaces/fhirservices/resources/export/action | Export operation ($export). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/read | Read FHIR resources (includes searching and versioned history). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/import/action | Import FHIR resources in batch. 
| > | **NotDataActions** | | > | *none* | | Role allows user or principal to read and export FHIR Data [Learn more](../healt "assignableScopes": [ "/" ],- "description": "Role allows user or principal to read and export FHIR Data", - "id": "/providers/Microsoft.Authorization/roleDefinitions/3db33094-8700-4567-8da5-1501d4e7e843", - "name": "3db33094-8700-4567-8da5-1501d4e7e843", + "description": "Role allows user or principal to read and import FHIR Data", + "id": "/providers/Microsoft.Authorization/roleDefinitions/4465e953-8ced-4406-a58e-0f6e3f3b530b", + "name": "4465e953-8ced-4406-a58e-0f6e3f3b530b", "permissions": [ { "actions": [], "notActions": [], "dataActions": [- "Microsoft.HealthcareApis/services/fhir/resources/read", - "Microsoft.HealthcareApis/services/fhir/resources/export/action", "Microsoft.HealthcareApis/workspaces/fhirservices/resources/read",- "Microsoft.HealthcareApis/workspaces/fhirservices/resources/export/action" + "Microsoft.HealthcareApis/workspaces/fhirservices/resources/import/action" ], "notDataActions": [] } ],- "roleName": "FHIR Data Exporter", + "roleName": "FHIR Data Importer", "roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } Role allows user or principal to read FHIR Data [Learn more](../healthcare-apis/ > | **NotActions** | | > | *none* | | > | **DataActions** | |-> | Microsoft.HealthcareApis/services/fhir/resources/read | Read FHIR resources (includes searching and versioned history). | -> | Microsoft.HealthcareApis/workspaces/fhirservices/resources/read | Read FHIR resources (includes searching and versioned history). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/services/fhir/resources/read | Read FHIR resources (includes searching and versioned history). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/read | Read FHIR resources (includes searching and versioned history). | > | **NotDataActions** | | > | *none* | | Role allows user or principal to read and write FHIR Data [Learn more](../health > | **NotActions** | | > | *none* | | > | **DataActions** | |-> | Microsoft.HealthcareApis/services/fhir/resources/* | | -> | Microsoft.HealthcareApis/workspaces/fhirservices/resources/* | | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/services/fhir/resources/* | | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/* | | > | **NotDataActions** | |-> | Microsoft.HealthcareApis/services/fhir/resources/hardDelete/action | Hard Delete (including version history). | -> | Microsoft.HealthcareApis/workspaces/fhirservices/resources/hardDelete/action | Hard Delete (including version history). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/services/fhir/resources/hardDelete/action | Hard Delete (including version history). | +> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/hardDelete/action | Hard Delete (including version history). 
| ```json { Perform any action on the certificates of a key vault, except manage permissions > | **DataActions** | | > | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/vaults/certificatecas/* | | > | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/vaults/certificates/* | |+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/vaults/certificatecontacts/write | Manage Certificate Contact | > | **NotDataActions** | | > | *none* | | Perform any action on the certificates of a key vault, except manage permissions "notActions": [], "dataActions": [ "Microsoft.KeyVault/vaults/certificatecas/*",- "Microsoft.KeyVault/vaults/certificates/*" + "Microsoft.KeyVault/vaults/certificates/*", + "Microsoft.KeyVault/vaults/certificatecontacts/write" ], "notDataActions": [] } Users with rights to create/modify resource policy, create support ticket and re > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/policyexemptions/* | Create and manage policy exemptions | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/policysetdefinitions/* | Create and manage policy sets | > | [Microsoft.PolicyInsights](resource-provider-operations.md#microsoftpolicyinsights)/* | |+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | > | **NotActions** | | > | *none* | | Users with rights to create/modify resource policy, create support ticket and re "Microsoft.Authorization/policyexemptions/*", "Microsoft.Authorization/policysetdefinitions/*", "Microsoft.PolicyInsights/*",+ "Microsoft.Resources/deployments/*", "Microsoft.Support/*" ], "notActions": [], Allows user to use the applications in an application group. [Learn more](../vir > | *none* | | > | **DataActions** | | > | [Microsoft.DesktopVirtualization](resource-provider-operations.md#microsoftdesktopvirtualization)/applicationGroups/useApplications/action | Use ApplicationGroup |+> | [Microsoft.DesktopVirtualization](resource-provider-operations.md#microsoftdesktopvirtualization)/appAttachPackages/useApplications/action | Allow user permissioning on app attach packages in an application group | > | **NotDataActions** | | > | *none* | | Allows user to use the applications in an application group. [Learn more](../vir "actions": [], "notActions": [], "dataActions": [- "Microsoft.DesktopVirtualization/applicationGroups/useApplications/action" + "Microsoft.DesktopVirtualization/applicationGroups/useApplications/action", + "Microsoft.DesktopVirtualization/appAttachPackages/useApplications/action" ], "notDataActions": [] } |
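To see how role-definition updates like the ones above surface in a subscription, one option is to pull a built-in role with Azure PowerShell and inspect its actions. The following sketch is illustrative only; it uses the built-in Backup Operator role referenced above and placeholder values for the principal object ID and subscription ID:

```azurepowershell-interactive
## Illustrative only: retrieve a built-in role and review its Microsoft.DataProtection actions. ##
$role = Get-AzRoleDefinition -Name 'Backup Operator'
$role.Actions | Where-Object { $_ -like 'Microsoft.DataProtection/*' }

## Assign the role at subscription scope (replace the placeholder object ID and subscription ID). ##
# New-AzRoleAssignment -ObjectId '<principal-object-id>' -RoleDefinitionName 'Backup Operator' -Scope '/subscriptions/<subscription-id>'
```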
sap | High Availability Guide Suse Pacemaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md | -tags: azure-resource-manager -keywords: '' - Previously updated : 12/05/2022 Last updated : 06/23/2023 This article discusses how to set up Pacemaker on SUSE Linux Enterprise Server ( In Azure, you have two options for setting up fencing in the Pacemaker cluster for SLES. You can use an Azure fence agent, which restarts a failed node via the Azure APIs, or you can use an SBD device. - ### Use an SBD device You can configure the SBD device by using either of two options: - SBD with an iSCSI target server: - The SBD device requires at least one additional virtual machine (VM) that acts as an Internet Small Computer System Interface (iSCSI) target server and provides an SBD device. These iSCSI target servers can, however, be shared with other Pacemaker clusters. The advantage of using an SBD device is that if you're already using SBD devices on-premises, they don't require any changes to how you operate the Pacemaker cluster. + The SBD device requires at least one additional virtual machine (VM) that acts as an Internet Small Computer System Interface (iSCSI) target server and provides an SBD device. These iSCSI target servers can, however, be shared with other Pacemaker clusters. The advantage of using an SBD device is that if you're already using SBD devices on-premises, they don't require any changes to how you operate the Pacemaker cluster. You can use up to three SBD devices for a Pacemaker cluster to allow an SBD device to become unavailable (for example, during OS patching of the iSCSI target server). If you want to use more than one SBD device per Pacemaker, be sure to deploy multiple iSCSI target servers and connect one SBD from each iSCSI target server. We recommend using either one SBD device or three. Pacemaker can't automatically fence a cluster node if only two SBD devices are configured and one of them is unavailable. If you want to be able to fence when one iSCSI target server is down, you have to use three SBD devices and, therefore, three iSCSI target servers. That's the most resilient configuration when you're using SBDs.  >[!IMPORTANT]- > When you're planning and deploying Linux Pacemaker clustered nodes and SBD devices, do not allow the routing between your virtual machines and the VMs that are hosting the SBD devices to pass through any other devices, such as a [network virtual appliance (NVA)](https://azure.microsoft.com/solutions/network-appliances/). >- >Maintenance events and other issues with the NVA can have a negative impact on the stability and reliability of the overall cluster configuration. For more information, see [User-defined routing rules](../../virtual-network/virtual-networks-udr-overview.md). + > When you're planning and deploying Linux Pacemaker clustered nodes and SBD devices, do not allow the routing between your virtual machines and the VMs that are hosting the SBD devices to pass through any other devices, such as a [network virtual appliance (NVA)](https://azure.microsoft.com/solutions/network-appliances/). >- > Maintenance events and other issues with the NVA can have a negative impact on the stability and reliability of the overall cluster configuration. For more information, see [User-defined routing rules](../../virtual-network/virtual-networks-udr-overview.md). 
- SBD with an Azure shared disk: You can configure the SBD device by using either of two options:  - Here are some important considerations about SBD devices when you're using an Azure shared disk: -- - An Azure shared disk with Premium SSD is supported as an SBD device. - - SBD devices that use an Azure shared disk are supported on SLES High Availability 15 SP01 and later. - - SBD devices that use an Azure premium shared disk are supported on [locally redundant storage (LRS)](../../virtual-machines/disks-redundancy.md#locally-redundant-storage-for-managed-disks) and [zone-redundant storage (ZRS)](../../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks). - - Depending on the [type of your deployment](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload), choose the appropriate redundant storage for an Azure shared disk as your SBD device. - - An SBD device using LRS for Azure premium shared disk (skuName - Premium_LRS) is only supported with deployment in availability set. - - An SBD device using ZRS for an Azure premium shared disk (skuName - Premium_ZRS) is recommended with deployment in availability zones. - - A ZRS for managed disk is currently unavailable in all regions with availability zones. For more information, review the ZRS "Limitations" section in [Redundancy options for managed disks](../../virtual-machines/disks-redundancy.md#limitations). - - The Azure shared disk that you use for SBD devices doesn't need to be large. The [maxShares](../../virtual-machines/disks-shared-enable.md#disk-sizes) value determines how many cluster nodes can use the shared disk. For example, you can use P1 or P2 disk sizes for your SBD device on two-node cluster such as SAP ASCS/ERS or SAP HANA scale-up. - - For [HANA scale-out with HANA system replication (HSR) and Pacemaker](sap-hana-high-availability-scale-out-hsr-suse.md), you can use an Azure shared disk for SBD devices in clusters with up to four nodes per replication site because of the current limit of [maxShares](../../virtual-machines/disks-shared-enable.md#disk-sizes). - - We do *not* recommend attaching an Azure shared disk SBD device across Pacemaker clusters. - - If you use multiple Azure shared disk SBD devices, check on the limit for a maximum number of data disks that can be attached to a VM. - - For more information about limitations for Azure shared disks, carefully review the "Limitations" section of [Azure shared disk documentation](../../virtual-machines/disks-shared.md#limitations). + Here are some important considerations about SBD devices when you're using an Azure shared disk: ++ - An Azure shared disk with Premium SSD is supported as an SBD device. + - SBD devices that use an Azure shared disk are supported on SLES High Availability 15 SP01 and later. + - SBD devices that use an Azure premium shared disk are supported on [locally redundant storage (LRS)](../../virtual-machines/disks-redundancy.md#locally-redundant-storage-for-managed-disks) and [zone-redundant storage (ZRS)](../../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks). + - Depending on the [type of your deployment](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload), choose the appropriate redundant storage for an Azure shared disk as your SBD device. + - An SBD device using LRS for Azure premium shared disk (skuName - Premium_LRS) is only supported with deployment in availability set. 
+ - An SBD device using ZRS for an Azure premium shared disk (skuName - Premium_ZRS) is recommended with deployment in availability zones. + - A ZRS for managed disk is currently unavailable in all regions with availability zones. For more information, review the ZRS "Limitations" section in [Redundancy options for managed disks](../../virtual-machines/disks-redundancy.md#limitations). + - The Azure shared disk that you use for SBD devices doesn't need to be large. The [maxShares](../../virtual-machines/disks-shared-enable.md#disk-sizes) value determines how many cluster nodes can use the shared disk. For example, you can use P1 or P2 disk sizes for your SBD device on two-node cluster such as SAP ASCS/ERS or SAP HANA scale-up. + - For [HANA scale-out with HANA system replication (HSR) and Pacemaker](sap-hana-high-availability-scale-out-hsr-suse.md), you can use an Azure shared disk for SBD devices in clusters with up to four nodes per replication site because of the current limit of [maxShares](../../virtual-machines/disks-shared-enable.md#disk-sizes). + - We do *not* recommend attaching an Azure shared disk SBD device across Pacemaker clusters. + - If you use multiple Azure shared disk SBD devices, check on the limit for a maximum number of data disks that can be attached to a VM. + - For more information about limitations for Azure shared disks, carefully review the "Limitations" section of [Azure shared disk documentation](../../virtual-machines/disks-shared.md#limitations). ### Use an Azure fence agent-You can set up fencing by using an Azure fence agent. Azure fence agent require managed identities for the cluster VMs or a service principal, that manages restarting failed nodes via Azure APIs. Azure fence agent doesn't require the deployment of additional virtual machines. ++You can set up fencing by using an Azure fence agent. Azure fence agent requires managed identities for the cluster VMs or a service principal that manages restarting failed nodes via Azure APIs. Azure fence agent doesn't require the deployment of additional virtual machines. ## SBD with an iSCSI target server To use an SBD device that uses an iSCSI target server for fencing, follow the in You first need to create the iSCSI target virtual machines. You can share iSCSI target servers with multiple Pacemaker clusters. 1. Deploy new SLES 12 SP3 or higher virtual machines and connect to them via SSH. The machines don't need to be large. Virtual machine sizes Standard_E2s_v3 or Standard_D2s_v3 are sufficient. Be sure to use Premium storage for the OS disk.--1. On **iSCSI target virtual machines**, run the following commands: +2. On **iSCSI target virtual machines**, run the following commands: a. Update SLES. - <pre><code>sudo zypper update - </code></pre> + ```bash + sudo zypper update + ``` - > [!NOTE] - > You might need to reboot the OS after you upgrade or update the OS. + > [!NOTE] + > You might need to reboot the OS after you upgrade or update the OS. b. Remove packages. - To avoid a known issue with targetcli and SLES 12 SP3, uninstall the following packages. You can ignore errors about packages that can't be found. + To avoid a known issue with targetcli and SLES 12 SP3, uninstall the following packages. You can ignore errors about packages that can't be found. - <pre><code>sudo zypper remove lio-utils python-rtslib python-configshell targetcli - </code></pre> + ```bash + sudo zypper remove lio-utils python-rtslib python-configshell targetcli + ``` c. Install iSCSI target packages. 
- <pre><code>sudo zypper install targetcli-fb dbus-1-python - </code></pre> + ```bash + sudo zypper install targetcli-fb dbus-1-python + ``` d. Enable the iSCSI target service. - <pre><code>sudo systemctl enable targetcli - sudo systemctl start targetcli - </code></pre> + ```bash + sudo systemctl enable targetcli + sudo systemctl start targetcli + ``` ### Create an iSCSI device on the iSCSI target server To create the iSCSI disks for the clusters to be used by your SAP systems, run the following commands on all iSCSI target virtual machines. In the example, SBD devices for multiple clusters are created. It shows how you would use one iSCSI target server for multiple clusters. The SBD devices are placed on the OS disk. Make sure that you have enough space. -* **nfs**: Identifies the NFS cluster. -* **ascsnw1**: Identifies the ASCS cluster of **NW1**. -* **dbnw1**: Identifies the database cluster of **NW1**. -* **nfs-0** and **nfs-1**: The hostnames of the NFS cluster nodes. -* **nw1-xscs-0** and **nw1-xscs-1**: The hostnames of the **NW1** ASCS cluster nodes. -* **nw1-db-0** and **nw1-db-1**: The hostnames of the database cluster nodes. +- **nfs**: Identifies the NFS cluster. +- **ascsnw1**: Identifies the ASCS cluster of **NW1**. +- **dbnw1**: Identifies the database cluster of **NW1**. +- **nfs-0** and **nfs-1**: The hostnames of the NFS cluster nodes. +- **nw1-xscs-0** and **nw1-xscs-1**: The hostnames of the **NW1** ASCS cluster nodes. +- **nw1-db-0** and **nw1-db-1**: The hostnames of the database cluster nodes. In the following instructions, replace the bold-formatted placeholder text with the hostnames of your cluster nodes and the SID of your SAP system. 1. Create the root folder for all SBD devices.- <pre><code>sudo mkdir /sbd</code></pre> ++ ```bash + sudo mkdir /sbd + ``` 1. Create the SBD device for the NFS server.- <pre><code>sudo targetcli backstores/fileio create sbdnfs /sbd/sbdnfs 50M write_back=false ++ ```bash + sudo targetcli backstores/fileio create sbdnfs /sbd/sbdnfs 50M write_back=false sudo targetcli iscsi/ create iqn.2006-04.nfs.local:nfs sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/luns/ create /backstores/fileio/sbdnfs- sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.<b>nfs-0.local:nfs-0</b> - sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.<b>nfs-1.local:nfs-1</b></code></pre> + sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.nfs-0.local:nfs-0 + sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.nfs-1.local:nfs-1 + ``` 1. 
Create the SBD device for the ASCS server of SAP System NW1.- <pre><code>sudo targetcli backstores/fileio create sbdascs<b>nw1</b> /sbd/sbdascs<b>nw1</b> 50M write_back=false - sudo targetcli iscsi/ create iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b> - sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/luns/ create /backstores/fileio/sbdascs<b>nw1</b> - sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-xscs-0.local:nw1-xscs-0</b> - sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-xscs-1.local:nw1-xscs-1</b></code></pre> ++ ```bash + sudo targetcli backstores/fileio create sbdascsnw1 /sbd/sbdascsnw1 50M write_back=false + sudo targetcli iscsi/ create iqn.2006-04.ascsnw1.local:ascsnw1 + sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/luns/ create /backstores/fileio/sbdascsnw1 + sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0 + sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1 + ``` 1. Create the SBD device for the database cluster of SAP System NW1.- <pre><code>sudo targetcli backstores/fileio create sbddb<b>nw1</b> /sbd/sbddb<b>nw1</b> 50M write_back=false - sudo targetcli iscsi/ create iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b> - sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/luns/ create /backstores/fileio/sbddb<b>nw1</b> - sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-db-0.local:nw1-db-0</b> - sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-db-1.local:nw1-db-1</b></code></pre> ++ ```bash + sudo targetcli backstores/fileio create sbddbnw1 /sbd/sbddbnw1 50M write_back=false + sudo targetcli iscsi/ create iqn.2006-04.dbnw1.local:dbnw1 + sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/luns/ create /backstores/fileio/sbddbnw1 + sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/acls/ create iqn.2006-04.nw1-db-0.local:nw1-db-0 + sudo targetcli iscsi/iqn.2006-04.dbnw1.local:dbnw1/tpg1/acls/ create iqn.2006-04.nw1-db-1.local:nw1-db-1 + ``` 1. Save the targetcli changes.- <pre><code>sudo targetcli saveconfig</code></pre> ++ ```bash + sudo targetcli saveconfig + ``` 1. Check to ensure that everything was set up correctly.- <pre><code>sudo targetcli ls + ```bash + sudo targetcli ls + o- / .......................................................................................................... [...] o- backstores ............................................................................................... [...] | o- block ................................................................................... [Storage Objects: 0] | o- fileio .................................................................................. [Storage Objects: 3]- | | o- <b>sbdascsnw1</b> ................................................ [/sbd/sbdascsnw1 (50.0MiB) write-thru activated] + | | o- sbdascsnw1 ................................................ [/sbd/sbdascsnw1 (50.0MiB) write-thru activated] | | | o- alua .................................................................................... [ALUA Groups: 1] | | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized]- | | o- <b>sbddbnw1</b> .................................................... 
[/sbd/sbddbnw1 (50.0MiB) write-thru activated] + | | o- sbddbnw1 .................................................... [/sbd/sbddbnw1 (50.0MiB) write-thru activated] | | | o- alua .................................................................................... [ALUA Groups: 1] | | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized]- | | o- <b>sbdnfs</b> ........................................................ [/sbd/sbdnfs (50.0MiB) write-thru activated] + | | o- sbdnfs ........................................................ [/sbd/sbdnfs (50.0MiB) write-thru activated] | | o- alua .................................................................................... [ALUA Groups: 1] | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized] | o- pscsi ................................................................................... [Storage Objects: 0] | o- ramdisk ................................................................................. [Storage Objects: 0] o- iscsi ............................................................................................. [Targets: 3]- | o- <b>iqn.2006-04.ascsnw1.local:ascsnw1</b> .................................................................. [TPGs: 1] + | o- iqn.2006-04.ascsnw1.local:ascsnw1 .................................................................. [TPGs: 1] | | o- tpg1 ................................................................................ [no-gen-acls, no-auth] | | o- acls ........................................................................................... [ACLs: 2]- | | | o- <b>iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0</b> ............................................... [Mapped LUNs: 1] - | | | | o- mapped_lun0 ............................................................ [lun0 fileio/<b>sbdascsnw1</b> (rw)] - | | | o- <b>iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1</b> ............................................... [Mapped LUNs: 1] - | | | o- mapped_lun0 ............................................................ [lun0 fileio/<b>sbdascsnw1</b> (rw)] + | | | o- iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0 ............................................... [Mapped LUNs: 1] + | | | | o- mapped_lun0 ............................................................ [lun0 fileio/sbdascsnw1 (rw)] + | | | o- iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1 ............................................... [Mapped LUNs: 1] + | | | o- mapped_lun0 ............................................................ [lun0 fileio/sbdascsnw1 (rw)] | | o- luns ........................................................................................... [LUNs: 1] | | | o- lun0 .......................................... [fileio/sbdascsnw1 (/sbd/sbdascsnw1) (default_tg_pt_gp)] | | o- portals ..................................................................................... [Portals: 1] | | o- 0.0.0.0:3260 ...................................................................................... [OK]- | o- <b>iqn.2006-04.dbnw1.local:dbnw1</b> ...................................................................... [TPGs: 1] + | o- iqn.2006-04.dbnw1.local:dbnw1 ...................................................................... [TPGs: 1] | | o- tpg1 ................................................................................ [no-gen-acls, no-auth] | | o- acls ........................................................................................... 
[ACLs: 2]- | | | o- <b>iqn.2006-04.nw1-db-0.local:nw1-db-0</b> ................................................... [Mapped LUNs: 1] - | | | | o- mapped_lun0 .............................................................. [lun0 fileio/<b>sbddbnw1</b> (rw)] - | | | o- <b>iqn.2006-04.nw1-db-1.local:nw1-db-1</b> ................................................... [Mapped LUNs: 1] - | | | o- mapped_lun0 .............................................................. [lun0 fileio/<b>sbddbnw1</b> (rw)] + | | | o- iqn.2006-04.nw1-db-0.local:nw1-db-0 ................................................... [Mapped LUNs: 1] + | | | | o- mapped_lun0 .............................................................. [lun0 fileio/sbddbnw1 (rw)] + | | | o- iqn.2006-04.nw1-db-1.local:nw1-db-1 ................................................... [Mapped LUNs: 1] + | | | o- mapped_lun0 .............................................................. [lun0 fileio/sbddbnw1 (rw)] | | o- luns ........................................................................................... [LUNs: 1] | | | o- lun0 .............................................. [fileio/sbddbnw1 (/sbd/sbddbnw1) (default_tg_pt_gp)] | | o- portals ..................................................................................... [Portals: 1] | | o- 0.0.0.0:3260 ...................................................................................... [OK]- | o- <b>iqn.2006-04.nfs.local:nfs</b> .......................................................................... [TPGs: 1] + | o- iqn.2006-04.nfs.local:nfs .......................................................................... [TPGs: 1] | o- tpg1 ................................................................................ [no-gen-acls, no-auth] | o- acls ........................................................................................... [ACLs: 2]- | | o- <b>iqn.2006-04.nfs-0.local:nfs-0</b> ......................................................... [Mapped LUNs: 1] - | | | o- mapped_lun0 ................................................................ [lun0 fileio/<b>sbdnfs</b> (rw)] - | | o- <b>iqn.2006-04.nfs-1.local:nfs-1</b> ......................................................... [Mapped LUNs: 1] - | | o- mapped_lun0 ................................................................ [lun0 fileio/<b>sbdnfs</b> (rw)] + | | o- iqn.2006-04.nfs-0.local:nfs-0 ......................................................... [Mapped LUNs: 1] + | | | o- mapped_lun0 ................................................................ [lun0 fileio/sbdnfs (rw)] + | | o- iqn.2006-04.nfs-1.local:nfs-1 ......................................................... [Mapped LUNs: 1] + | | o- mapped_lun0 ................................................................ [lun0 fileio/sbdnfs (rw)] | o- luns ........................................................................................... [LUNs: 1] | | o- lun0 .................................................. [fileio/sbdnfs (/sbd/sbdnfs) (default_tg_pt_gp)] | o- portals ..................................................................................... [Portals: 1] In the following instructions, replace the bold-formatted placeholder text with o- loopback .......................................................................................... [Targets: 0] o- vhost ............................................................................................. 
[Targets: 0] o- xen-pvscsi ........................................................................................ [Targets: 0]- </code></pre> + ``` ### Set up the iSCSI target server SBD device Connect to the iSCSI device that you created in the last step from the cluster. Run the following commands on the nodes of the new cluster that you want to create. > [!NOTE]-> * **[A]**: Applies to all nodes. -> * **[1]**: Applies only to node 1. -> * **[2]**: Applies only to node 2. +> +> - **[A]**: Applies to all nodes. +> - **[1]**: Applies only to node 1. +> - **[2]**: Applies only to node 2. 1. **[A]** Connect to the iSCSI devices. First, enable the iSCSI and SBD services. - <pre><code>sudo systemctl enable iscsid + ```bash + sudo systemctl enable iscsid sudo systemctl enable iscsi sudo systemctl enable sbd- </code></pre> + ``` 1. **[1]** Change the initiator name on the first node. - <pre><code>sudo vi /etc/iscsi/initiatorname.iscsi - </code></pre> + ```bash + sudo vi /etc/iscsi/initiatorname.iscsi + ``` 1. **[1]** Change the contents of the file to match the access control lists (ACLs) you used when you created the iSCSI device on the iSCSI target server (for example, for the NFS server). - <pre><code>InitiatorName=<b>iqn.2006-04.nfs-0.local:nfs-0</b></code></pre> + ```bash + InitiatorName=iqn.2006-04.nfs-0.local:nfs-0 + ``` 1. **[2]** Change the initiator name on the second node. - <pre><code>sudo vi /etc/iscsi/initiatorname.iscsi - </code></pre> + ```bash + sudo vi /etc/iscsi/initiatorname.iscsi + ``` 1. **[2]** Change the contents of the file to match the ACLs you used when you created the iSCSI device on the iSCSI target server. - <pre><code>InitiatorName=<b>iqn.2006-04.nfs-1.local:nfs-1</b> - </code></pre> + ```bash + InitiatorName=iqn.2006-04.nfs-1.local:nfs-1 + ``` 1. **[A]** Restart the iSCSI service to apply the change. - <pre><code>sudo systemctl restart iscsid + ```bash + sudo systemctl restart iscsid sudo systemctl restart iscsi- </code></pre> + ``` -1. **[A]** Connect the iSCSI devices. In the following example, 10.0.0.17 is the IP address of the iSCSI target server, and 3260 is the default port. <b>iqn.2006-04.nfs.local:nfs</b> is one of the target names that's listed when you run the first command, `iscsiadm -m discovery`. +1. **[A]** Connect the iSCSI devices. In the following example, 10.0.0.17 is the IP address of the iSCSI target server, and 3260 is the default port. **iqn.2006-04.nfs.local:nfs** is one of the target names that's listed when you run the first command, `iscsiadm -m discovery`. ++ ```bash + sudo iscsiadm -m discovery --type=st --portal=10.0.0.17:3260 + sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.17:3260 + sudo iscsiadm -m node -p 10.0.0.17:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic + ``` - <pre><code>sudo iscsiadm -m discovery --type=st --portal=<b>10.0.0.17:3260</b> - sudo iscsiadm -m node -T <b>iqn.2006-04.nfs.local:nfs</b> --login --portal=<b>10.0.0.17:3260</b> - sudo iscsiadm -m node -p <b>10.0.0.17:3260</b> -T <b>iqn.2006-04.nfs.local:nfs</b> --op=update --name=node.startup --value=automatic</code></pre> - 1. **[A]** If you want to use multiple SBD devices, also connect to the second iSCSI target server. 
- <pre><code>sudo iscsiadm -m discovery --type=st --portal=<b>10.0.0.18:3260</b> - sudo iscsiadm -m node -T <b>iqn.2006-04.nfs.local:nfs</b> --login --portal=<b>10.0.0.18:3260</b> - sudo iscsiadm -m node -p <b>10.0.0.18:3260</b> -T <b>iqn.2006-04.nfs.local:nfs</b> --op=update --name=node.startup --value=automatic</code></pre> - + ```bash + sudo iscsiadm -m discovery --type=st --portal=10.0.0.18:3260 + sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.18:3260 + sudo iscsiadm -m node -p 10.0.0.18:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic + ``` + 1. **[A]** If you want to use multiple SBD devices, also connect to the third iSCSI target server. - <pre><code>sudo iscsiadm -m discovery --type=st --portal=<b>10.0.0.19:3260</b> - sudo iscsiadm -m node -T <b>iqn.2006-04.nfs.local:nfs</b> --login --portal=<b>10.0.0.19:3260</b> - sudo iscsiadm -m node -p <b>10.0.0.19:3260</b> -T <b>iqn.2006-04.nfs.local:nfs</b> --op=update --name=node.startup --value=automatic - </code></pre> + ```bash + sudo iscsiadm -m discovery --type=st --portal=10.0.0.19:3260 + sudo iscsiadm -m node -T iqn.2006-04.nfs.local:nfs --login --portal=10.0.0.19:3260 + sudo iscsiadm -m node -p 10.0.0.19:3260 -T iqn.2006-04.nfs.local:nfs --op=update --name=node.startup --value=automatic + ``` 1. **[A]** Make sure that the iSCSI devices are available and note the device name (**/dev/sde**, in the following example). - <pre><code>lsscsi + ```bash + lsscsi # [2:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda # [3:0:1:0] disk Msft Virtual Disk 1.0 /dev/sdb # [5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc # [5:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdd- # <b>[6:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdd</b> - # <b>[7:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sde</b> - # <b>[8:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdf</b> - </code></pre> + # [6:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdd + # [7:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sde + # [8:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdf + ``` 1. **[A]** Retrieve the IDs of the iSCSI devices. 
- <pre><code>ls -l /dev/disk/by-id/scsi-* | grep <b>sdd</b> - - # lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-1LIO-ORG_sbdnfs:afb0ba8d-3a3c-413b-8cc2-cca03e63ef42 -> ../../sdd - # <b>lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 -> ../../sdd</b> - # lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_afb0ba8d-3a3c-413b-8cc2-cca03e63ef42 -> ../../sdd - - ls -l /dev/disk/by-id/scsi-* | grep <b>sde</b> - - # lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-1LIO-ORG_cl1:3fe4da37-1a5a-4bb6-9a41-9a4df57770e4 -> ../../sde - # <b>lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df -> ../../sde</b> - # lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-SLIO-ORG_cl1_3fe4da37-1a5a-4bb6-9a41-9a4df57770e4 -> ../../sde - - ls -l /dev/disk/by-id/scsi-* | grep <b>sdf</b> - - # lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-1LIO-ORG_sbdnfs:f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf - # <b>lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -> ../../sdf</b> - # lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf - </code></pre> + ```bash + ls -l /dev/disk/by-id/scsi-* | grep sdd + + # lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-1LIO-ORG_sbdnfs:afb0ba8d-3a3c-413b-8cc2-cca03e63ef42 -> ../../sdd + # lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 -> ../../sdd + # lrwxrwxrwx 1 root root 9 Aug 9 13:20 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_afb0ba8d-3a3c-413b-8cc2-cca03e63ef42 -> ../../sdd + + ls -l /dev/disk/by-id/scsi-* | grep sde + + # lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-1LIO-ORG_cl1:3fe4da37-1a5a-4bb6-9a41-9a4df57770e4 -> ../../sde + # lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df -> ../../sde + # lrwxrwxrwx 1 root root 9 Feb 7 12:39 /dev/disk/by-id/scsi-SLIO-ORG_cl1_3fe4da37-1a5a-4bb6-9a41-9a4df57770e4 -> ../../sde + + ls -l /dev/disk/by-id/scsi-* | grep sdf + + # lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-1LIO-ORG_sbdnfs:f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf + # lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -> ../../sdf + # lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf + ``` - The command lists three device IDs for every SBD device. We recommend using the ID that starts with scsi-1. In the preceding example, the IDs are: + The command lists three device IDs for every SBD device. We recommend using the ID that starts with scsi-1. In the preceding example, the IDs are: - * **/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03** - * **/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df** - * **/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf** + - **/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03** + - **/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df** + - **/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf** 1. **[1]** Create the SBD device. - a. Use the device ID of the iSCSI devices to create the new SBD devices on the first cluster node. + a. Use the device ID of the iSCSI devices to create the new SBD devices on the first cluster node. 
- <pre><code>sudo sbd -d <b>/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03</b> -1 60 -4 120 create</code></pre> - - b. Also create the second and third SBD devices if you want to use more than one. - <pre><code>sudo sbd -d <b>/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df</b> -1 60 -4 120 create - sudo sbd -d <b>/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf</b> -1 60 -4 120 create - </code></pre> + ```bash + sudo sbd -d /dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03 -1 60 -4 120 create + ``` ++ b. Also create the second and third SBD devices if you want to use more than one. ++ ```bash + sudo sbd -d /dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df -1 60 -4 120 create + sudo sbd -d /dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf -1 60 -4 120 create + ``` 1. **[A]** Adapt the SBD configuration. - a. Open the SBD config file. + a. Open the SBD config file. - <pre><code>sudo vi /etc/sysconfig/sbd - </code></pre> + ```bash + sudo vi /etc/sysconfig/sbd + ``` - b. Change the property of the SBD device, enable the Pacemaker integration, and change the start mode of SBD. + b. Change the property of the SBD device, enable the Pacemaker integration, and change the start mode of SBD. - <pre><code>[...] - <b>SBD_DEVICE="/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf"</b> - [...] - <b>SBD_PACEMAKER="yes"</b> - [...] - <b>SBD_STARTMODE="always"</b> - [...] - </code></pre> + ```bash + [...] + SBD_DEVICE="/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf" + [...] + SBD_PACEMAKER="yes" + [...] + SBD_STARTMODE="always" + [...] + ``` 1. **[A]** Create the `softdog` configuration file. - <pre><code>echo softdog | sudo tee /etc/modules-load.d/softdog.conf - </code></pre> + ```bash + echo softdog | sudo tee /etc/modules-load.d/softdog.conf + ``` 1. **[A]** Load the module. - <pre><code>sudo modprobe -v softdog - </code></pre> + ```bash + sudo modprobe -v softdog + ``` ## SBD with an Azure shared disk This section applies only if you want to use an SBD device with an Azure shared 1. Adjust the values for your resource group, Azure region, virtual machines, logical unit numbers (LUNs), and so on. - <pre><code>$ResourceGroup = "<b>MyResourceGroup</b>" - $Location = "<b>MyAzureRegion</b>"</code></pre> + ```bash + $ResourceGroup = "MyResourceGroup" + $Location = "MyAzureRegion" + ``` 1. Define the size of the disk based on available disk size for Premium SSDs. In this example, P1 disk size of 4G is mentioned.- <pre><code>$DiskSizeInGB = <b>4</b> - $DiskName = "<b>SBD-disk1</b>"</code></pre> ++ ```bash + $DiskSizeInGB = 4 + $DiskName = "SBD-disk1" + ``` 1. With parameter -MaxSharesCount, define the maximum number of cluster nodes to attach the shared disk for the SBD device.- <pre><code>$ShareNodes = <b>2</b></code></pre> ++ ```bash + $ShareNodes = 2 + ``` 1. For an SBD device that uses LRS for an Azure premium shared disk, use the following storage SkuName:- <pre><code>$SkuName = "<b>Premium_LRS</b>"</code></pre> ++ ```bash + $SkuName = "Premium_LRS" + ``` + 1. For an SBD device that uses ZRS for an Azure premium shared disk, use the following storage SkuName:- <pre><code>$SkuName = "<b>Premium_ZRS</b>"</code></pre> ++ ```bash + $SkuName = "Premium_ZRS" + ``` 1. 
Set up an Azure shared disk.- <pre><code>$diskConfig = New-AzDiskConfig -Location $Location -SkuName $SkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $ShareNodes - $dataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName $DiskName -Disk $diskConfig</code></pre> ++ ```bash + $diskConfig = New-AzDiskConfig -Location $Location -SkuName $SkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $ShareNodes + $dataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName $DiskName -Disk $diskConfig + ``` 1. Attach the disk to the cluster VMs.- <pre><code>$VM1 = "<b>prod-cl1-0</b>" - $VM2 = "<b>prod-cl1-1</b>"</code></pre> ++ ```bash + $VM1 = "prod-cl1-0" + $VM2 = "prod-cl1-1" + ``` a. Add the Azure shared disk to cluster node 1.- <pre><code>$vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM1 - $vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun <b>0</b> - Update-AzVm -VM $vm -ResourceGroupName $ResourceGroup -Verbose</code></pre> ++ ```bash + $vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM1 + $vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0 + Update-AzVm -VM $vm -ResourceGroupName $ResourceGroup -Verbose + ``` b. Add the Azure shared disk to cluster node 2.- <pre><code>$vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM2 - $vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun <b>0</b> - Update-AzVm -VM $vm -ResourceGroupName $ResourceGroup -Verbose</code></pre> ++ ```bash + $vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM2 + $vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0 + Update-AzVm -VM $vm -ResourceGroupName $ResourceGroup -Verbose + ``` If you want to deploy resources by using the Azure CLI or the Azure portal, you can also refer to [Deploy a ZRS disk](../../virtual-machines/disks-deploy-zrs.md). If you want to deploy resources by using the Azure CLI or the Azure portal, you 1. **[A]** Make sure that the attached disk is available. - <pre><code># lsblk + ```bash + # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sda 8:0 0 30G 0 disk If you want to deploy resources by using the Azure CLI or the Azure portal, you ├─sda4 8:4 0 28.5G 0 part / sdb 8:16 0 256G 0 disk ├─sdb1 8:17 0 256G 0 part /mnt- <b>sdc 8:32 0 4G 0 disk</b> + sdc 8:32 0 4G 0 disk sr0 11:0 1 1024M 0 rom # lsscsi [1:0:0:0] cd/dvd Msft Virtual CD/ROM 1.0 /dev/sr0 [2:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda [3:0:1:0] disk Msft Virtual Disk 1.0 /dev/sdb- <b>[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc</b> - </code></pre> + [5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc + ``` 1. **[A]** Retrieve the IDs of the attached disks. - <pre><code># ls -l /dev/disk/by-id/scsi-* | grep sdc + ```bash + # ls -l /dev/disk/by-id/scsi-* | grep sdc lrwxrwxrwx 1 root root 9 Nov 8 16:55 /dev/disk/by-id/scsi-14d534654202020204208a67da80744439b513b2a9728af19 -> ../../sdc- <b>lrwxrwxrwx 1 root root 9 Nov 8 16:55 /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 -> ../../sdc</b> - </code></pre> + lrwxrwxrwx 1 root root 9 Nov 8 16:55 /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 -> ../../sdc + ``` The commands list device IDs for the SBD device. We recommend using the ID that starts with scsi-3. In the preceding example, the ID is **/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19**. 
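After the SBD device is created on this disk in the next step, you can optionally verify that its header and node slots were written as expected. This is a minimal sketch, assuming the scsi-3 device ID shown in the preceding example and that the sbd utility is already installed on the cluster node:

```bash
# Dump the SBD header (watchdog, msgwait, and other timeouts) from the shared disk.
# The device path is the scsi-3 ID retrieved in the previous step; adjust it for your environment.
sudo sbd -d /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 dump

# List the message slots that cluster nodes have allocated on the device.
sudo sbd -d /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 list
```

If the dump fails, the device has typically not yet been initialized with `sbd ... create`, or the ID doesn't point to the shared disk that is attached to both nodes.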
If you want to deploy resources by using the Azure CLI or the Azure portal, you Use the device ID from step 2 to create the new SBD devices on the first cluster node. - <pre><code># sudo sbd -d <b>/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19</b> -1 60 -4 120 create - </code></pre> + ```bash + # sudo sbd -d /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 -1 60 -4 120 create + ``` 1. **[A]** Adapt the SBD configuration. a. Open the SBD config file. - <pre><code>sudo vi /etc/sysconfig/sbd - </code></pre> + ```bash + sudo vi /etc/sysconfig/sbd + ``` b. Change the property of the SBD device, enable the Pacemaker integration, and change the start mode of the SBD device. - <pre><code>[...] - <b>SBD_DEVICE="/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19"</b> + ```bash [...]- <b>SBD_PACEMAKER="yes"</b> + SBD_DEVICE="/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19" [...]- <b>SBD_STARTMODE="always"</b> + SBD_PACEMAKER="yes" [...]- </code></pre> + SBD_STARTMODE="always" + [...] + ``` 1. Create the `softdog` configuration file. - <pre><code>echo softdog | sudo tee /etc/modules-load.d/softdog.conf - </code></pre> + ```bash + echo softdog | sudo tee /etc/modules-load.d/softdog.conf + ``` 1. Load the module. - <pre><code>sudo modprobe -v softdog - </code></pre> + ```bash + sudo modprobe -v softdog + ``` ## Use an Azure fence agent This section applies only if you want to use a fencing device with an Azure fenc ### Create an Azure fence agent device -This section applies only if you're using a fencing device that's based on an Azure fence agent. The fencing device uses either a managed identity or a service principal to authorize against Microsoft Azure. +This section applies only if you're using a fencing device that's based on an Azure fence agent. The fencing device uses either a managed identity or a service principal to authorize against Microsoft Azure. #### Using managed identity-To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time. Azure fence agent, based on managed identity is supported for SLES 12 SP5 and SLES 15 SP1 and above. ++To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities shouldn't be used with Pacemaker at this time. Azure fence agent, based on managed identity is supported for SLES 12 SP5 and SLES 15 SP1 and above. #### Using service principal To create a service principal, do the following: 1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** > **Properties**, and then write down the Directory ID. This is the **tenant ID**.-1. Select **App registrations**. -1. Select **New registration**. -1. Enter a name for the registration, and then select **Accounts in this organization directory only**. -1. For **Application type**, select **Web**, enter a sign-on URL (for example, <code>*http://</code><code>localhost*</code>), and then select **Add**. - The sign-on URL is not used and can be any valid URL. -1. 
Select **Certificates and secrets**, and then select **New client secret**. -1. Enter a description for a new key, select **Never expires**, and then select **Add**. -1. Write down the value, which you'll use as the password for the service principal. -1. Select **Overview**, and then write down the application ID, which you'll use as the username of the service principal. +2. Select **App registrations**. +3. Select **New registration**. +4. Enter a name for the registration, and then select **Accounts in this organization directory only**. +5. For **Application type**, select **Web**, enter a sign-on URL (for example, *http://localhost*), and then select **Add**. + The sign-on URL isn't used and can be any valid URL. +6. Select **Certificates and secrets**, and then select **New client secret**. +7. Enter a description for a new key, select **Never expires**, and then select **Add**. +8. Write down the value, which you'll use as the password for the service principal. +9. Select **Overview**, and then write down the application ID, which you'll use as the username of the service principal. ### **[1]** Create a custom role for the fence agent -By default, neither managed identity nor service principal have permissions to access your Azure resources. You need to give the managed identity or service principal permissions to start and stop (deallocate) all virtual machines in the cluster. If you didn't already create the custom role, you can do so by using [PowerShell](../../role-based-access-control/custom-roles-powershell.md#create-a-custom-role) or the [Azure CLI](../../role-based-access-control/custom-roles-cli.md). +By default, neither managed identity nor service principal has permissions to access your Azure resources. You need to give the managed identity or service principal permissions to start and stop (deallocate) all virtual machines in the cluster. If you didn't already create the custom role, you can do so by using [PowerShell](../../role-based-access-control/custom-roles-powershell.md#create-a-custom-role) or the [Azure CLI](../../role-based-access-control/custom-roles-cli.md). Use the following content for the input file. You need to adapt the content to your subscriptions. That is, replace *xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx* and *yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy* with your own subscription IDs. If you have only one subscription, remove the second entry under AssignableScopes. Assign the custom role "Linux Fence Agent Role" that was created in the last cha #### Using Service Principal -Assign the custom role *Linux fence agent Role* that you already created to the service principal. Do *not* use the *Owner* role anymore. For more information, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md). +Assign the custom role *Linux fence agent Role* that you already created to the service principal. Do *not* use the *Owner* role anymore. For more information, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md). Make sure to assign the custom role to the service principal at all VM (cluster node) scopes. ## Install the cluster > [!NOTE]-> * **[A]**: Applies to all nodes. -> * **[1]**: Applies only to node 1. -> * **[2]**: Applies only to node 2. +> +> - **[A]**: Applies to all nodes. +> - **[1]**: Applies only to node 1. +> - **[2]**: Applies only to node 2. 1. **[A]** Update SLES. 
- <pre><code>sudo zypper update - </code></pre> + ```bash + sudo zypper update + ``` > [!NOTE] > On SLES 15 SP4 check the version of *crmsh* and *pacemaker* package, and make sure that the minimum version requirements are met:+ > > - crmsh-4.4.0+20221028.3e41444-150400.3.9.1 or later > - pacemaker-2.1.2+20211124.ada5c3b36-150400.4.6.1 or later -2. **[A]** Install the component, which you'll need for the cluster resources. +2. **[A]** Install the component, which you need for the cluster resources. - <pre><code>sudo zypper in socat - </code></pre> + ```bash + sudo zypper in socat + ``` -3. **[A]** Install the azure-lb component, which you'll need for the cluster resources. +3. **[A]** Install the azure-lb component, which you need for the cluster resources. - <pre><code>sudo zypper in resource-agents - </code></pre> + ```bash + sudo zypper in resource-agents + ``` > [!NOTE]- > Check the version of the *resource-agents* package, and make sure that the minimum version requirements are met: + > Check the version of the *resource-agents* package, and make sure that the minimum version requirements are met: + > > - **SLES 12 SP4/SP5**: The version must be resource-agents-4.3.018.a7fb5035-3.30.1 or later. > - **SLES 15/15 SP1**: The version must be resource-agents-4.3.0184.6ee15eb2-4.13.1 or later. Make sure to assign the custom role to the service principal at all VM (cluster a. Pacemaker occasionally creates many processes, which can exhaust the allowed number. When this happens, a heartbeat between the cluster nodes might fail and lead to a failover of your resources. We recommend increasing the maximum number of allowed processes by setting the following parameter: - <pre><code># Edit the configuration file + ```bash + # Edit the configuration file sudo vi /etc/systemd/system.conf # Change the DefaultTasksMax Make sure to assign the custom role to the service principal at all VM (cluster # Test to ensure that the change was successful sudo systemctl --no-pager show | grep DefaultTasksMax- </code></pre> + ``` b. Reduce the size of the dirty cache. For more information, see [Low write performance on SLES 11/12 servers with large RAM](https://www.suse.com/support/kb/doc/?id=7010287). - <pre><code>sudo vi /etc/sysctl.conf + ```bash + sudo vi /etc/sysctl.conf # Change/set the following settings vm.dirty_bytes = 629145600 vm.dirty_background_bytes = 314572800- </code></pre> - + ``` + c. Make sure vm.swappiness is set to 10 to reduce swap usage and favor memory. - <pre><code>sudo vi /etc/sysctl.conf + ```bash + sudo vi /etc/sysctl.conf # Change/set the following setting vm.swappiness = 10- </code></pre> + ``` 5. **[A]** Configure *cloud-netconfig-azure* for the high availability cluster. - >[!NOTE] + > [!NOTE] > Check the installed version of the *cloud-netconfig-azure* package by running **zypper info cloud-netconfig-azure**. If the version in your environment is 1.3 or later, it's no longer necessary to suppress the management of network interfaces by the cloud network plug-in. If the version is earlier than 1.3, we recommend that you update the *cloud-netconfig-azure* package to the latest available version. - To prevent the cloud network plug-in from removing the virtual IP address (Pacemaker must control the assignment), change the configuration file for the network interface as shown in the following code. For more information, see [SUSE KB 7023633](https://www.suse.com/support/kb/doc/?id=7023633). 
+ To prevent the cloud network plug-in from removing the virtual IP address (Pacemaker must control the assignment), change the configuration file for the network interface as shown in the following code. For more information, see [SUSE KB 7023633](https://www.suse.com/support/kb/doc/?id=7023633). - <pre><code># Edit the configuration file + ```bash + # Edit the configuration file sudo vi /etc/sysconfig/network/ifcfg-eth0 # Change CLOUD_NETCONFIG_MANAGE # CLOUD_NETCONFIG_MANAGE="yes" CLOUD_NETCONFIG_MANAGE="no"- </code></pre> + ``` 6. **[1]** Enable SSH access. - <pre><code>sudo ssh-keygen + ```bash + sudo ssh-keygen # Enter file in which to save the key (/root/.ssh/id_rsa), and then select Enter # Enter passphrase (empty for no passphrase), and then select Enter Make sure to assign the custom role to the service principal at all VM (cluster # copy the public key sudo cat /root/.ssh/id_rsa.pub- </code></pre> + ``` 7. **[2]** Enable SSH access. - <pre><code>sudo ssh-keygen + ```bash + sudo ssh-keygen # Enter file in which to save the key (/root/.ssh/id_rsa), and then select Enter # Enter passphrase (empty for no passphrase), and then select Enter Make sure to assign the custom role to the service principal at all VM (cluster # copy the public key sudo cat /root/.ssh/id_rsa.pub- </code></pre> + ``` 8. **[1]** Enable SSH access. - <pre><code># insert the public key you copied in the last step into the authorized keys file on the first server + ```bash + # insert the public key you copied in the last step into the authorized keys file on the first server sudo vi /root/.ssh/authorized_keys- </code></pre> + ``` 9. **[A]** Install the *fence-agents* package if you're using a fencing device, based on the Azure fence agent. - - <pre><code>sudo zypper install fence-agents - </code></pre> - >[!IMPORTANT] + ```bash + sudo zypper install fence-agents + ``` ++ > [!IMPORTANT] > The installed version of the *fence-agents* package must be 4.4.0 or later to benefit from the faster failover times with the Azure fence agent, when a cluster node is fenced. If you're running an earlier version, we recommend that you update the package. - >[!IMPORTANT] - > If using managed identity, the installed version of the *fence-agents* package must be - > SLES 12 SP5: fence-agents 4.9.0+git.1624456340.8d746be9-3.35.2 or later - > SLES 15 SP1 and higher: fence-agents 4.5.2+git.1592573838.1eee0863 or later. + > [!IMPORTANT] + > If using managed identity, the installed version of the *fence-agents* package must be - + > + > - SLES 12 SP5: fence-agents 4.9.0+git.1624456340.8d746be9-3.35.2 or later + > - SLES 15 SP1 and higher: fence-agents 4.5.2+git.1592573838.1eee0863 or later. + > > Earlier versions will not work correctly with a managed identity configuration. - + 10. **[A]** Install the Azure Python SDK and Azure Identity Python module. Install the Azure Python SDK on SLES 12 SP4 or SLES 12 SP5:- <pre><code># You might need to activate the public cloud extension first ++ ```bash + # You might need to activate the public cloud extension first SUSEConnect -p sle-module-public-cloud/12/x86_64 sudo zypper install python-azure-mgmt-compute sudo zypper install python-azure-identity- </code></pre> + ``` Install the Azure Python SDK on SLES 15 or later:- <pre><code># You might need to activate the public cloud extension first. In this example, the SUSEConnect command is for SLES 15 SP1 ++ ```bash + # You might need to activate the public cloud extension first. 
In this example, the SUSEConnect command is for SLES 15 SP1 SUSEConnect -p sle-module-public-cloud/15.1/x86_64 sudo zypper install python3-azure-mgmt-compute sudo zypper install python3-azure-identity- </code></pre> + ``` - >[!IMPORTANT] - >Depending on your version and image type, you might need to activate the public cloud extension for your OS release before you can install the Azure Python SDK. - >You can check the extension by running `SUSEConnect list-extensions`. - >To achieve the faster failover times with the Azure fence agent: + > [!IMPORTANT] + > Depending on your version and image type, you might need to activate the public cloud extension for your OS release before you can install the Azure Python SDK. + > You can check the extension by running `SUSEConnect list-extensions`. + > To achieve the faster failover times with the Azure fence agent: + > > - On SLES 12 SP4 or SLES 12 SP5, install version 4.6.2 or later of the *python-azure-mgmt-compute* package. > - If your *python-azure-mgmt-compute or python**3**-azure-mgmt-compute* package version is 17.0.0-6.7.1, follow the instructions in [SUSE KBA](https://www.suse.com/support/kb/doc/?id=000020377) to update the fence-agents version and install the Azure Identity client library for Python module if it is missing. Make sure to assign the custom role to the service principal at all VM (cluster You can either use a DNS server or modify the */etc/hosts* file on all nodes. This example shows how to use the */etc/hosts* file. Replace the IP address and the hostname in the following commands.- + >[!IMPORTANT] > If you're using hostnames in the cluster configuration, it's essential to have a reliable hostname resolution. The cluster communication will fail if the names are unavailable, and that can lead to cluster failover delays. > > The benefit of using */etc/hosts* is that your cluster becomes independent of the DNS, which could be a single point of failure too. - <pre><code>sudo vi /etc/hosts - </code></pre> + ```bash + sudo vi /etc/hosts + ``` Insert the following lines in the */etc/hosts*. Change the IP address and hostname to match your environment. - <pre><code># IP address of the first cluster node - <b>10.0.0.6 prod-cl1-0</b> + ```text + # IP address of the first cluster node + 10.0.0.6 prod-cl1-0 # IP address of the second cluster node- <b>10.0.0.7 prod-cl1-1</b> - </code></pre> + 10.0.0.7 prod-cl1-1 + ``` 12. **[1]** Install the cluster.- + - If you're using SBD devices for fencing (for either the iSCSI target server or Azure shared disk): - <pre><code>sudo crm cluster init + ```bash + sudo crm cluster init # ! NTP is not configured to start at system boot.- # Do you want to continue anyway (y/n)? <b>y</b> - # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b> - # Address for ring0 [10.0.0.6] <b>Select Enter</b> - # Port for ring0 [5405] <b>Select Enter</b> - # SBD is already configured to use /dev/disk/by-id/scsi-36001405639245768818458b930abdf69;/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf - overwrite (y/n)? <b>n</b> - # Do you wish to configure an administration IP (y/n)? <b>n</b> - </code></pre> - + # Do you want to continue anyway (y/n)? y + # /root/.ssh/id_rsa already exists - overwrite (y/n)? 
n + # Address for ring0 [10.0.0.6] Select Enter + # Port for ring0 [5405] Select Enter + # SBD is already configured to use /dev/disk/by-id/scsi-36001405639245768818458b930abdf69;/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf - overwrite (y/n)? n + # Do you wish to configure an administration IP (y/n)? n + ``` + - If you're *not* using SBD devices for fencing:- - <pre><code>sudo crm cluster init ++ ```bash + sudo crm cluster init # ! NTP is not configured to start at system boot.- # Do you want to continue anyway (y/n)? <b>y</b> - # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b> - # Address for ring0 [10.0.0.6] <b>Select Enter</b> - # Port for ring0 [5405] <b>Select Enter</b> - # Do you wish to use SBD (y/n)? <b>n</b> + # Do you want to continue anyway (y/n)? y + # /root/.ssh/id_rsa already exists - overwrite (y/n)? n + # Address for ring0 [10.0.0.6] Select Enter + # Port for ring0 [5405] Select Enter + # Do you wish to use SBD (y/n)? n # WARNING: Not configuring SBD - STONITH will be disabled.- # Do you wish to configure an administration IP (y/n)? <b>n</b> - </code></pre> + # Do you wish to configure an administration IP (y/n)? n + ``` 13. **[2]** Add the node to the cluster.- - <pre><code>sudo crm cluster join ++ ```bash + sudo crm cluster join # ! NTP is not configured to start at system boot.- # Do you want to continue anyway (y/n)? <b>y</b> - # IP address or hostname of existing node (for example, 192.168.1.1) []<b>10.0.0.6</b> - # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b> - </code></pre> + # Do you want to continue anyway (y/n)? y + # IP address or hostname of existing node (for example, 192.168.1.1) []10.0.0.6 + # /root/.ssh/id_rsa already exists - overwrite (y/n)? n + ``` 14. **[A]** Change the hacluster password to the same password. - <pre><code>sudo passwd hacluster - </code></pre> + ```bash + sudo passwd hacluster + ``` 15. **[A]** Adjust the corosync settings. - <pre><code>sudo vi /etc/corosync/corosync.conf - </code></pre> + ```bash + sudo vi /etc/corosync/corosync.conf + ``` - a. Add the following bold-formatted content to the file if the values are not there or are different. Be sure to change the token to 30000 to allow memory-preserving maintenance. For more information, see the "Maintenance for virtual machines in Azure" article for [Linux][virtual-machines-linux-maintenance] or [Windows][virtual-machines-windows-maintenance]. + a. Add the following bold-formatted content to the file if the values aren't there or are different. Be sure to change the token to 30000 to allow memory-preserving maintenance. For more information, see the "Maintenance for virtual machines in Azure" article for [Linux][virtual-machines-linux-maintenance] or [Windows][virtual-machines-windows-maintenance]. - <pre><code>[...] - <b>token: 30000 + ```text + [...] + token: 30000 token_retransmits_before_loss_const: 10 join: 60 consensus: 36000- max_messages: 20</b> - + max_messages: 20 + interface { [...] } Make sure to assign the custom role to the service principal at all VM (cluster # Enable and configure quorum subsystem (default: off) # See also corosync.conf.5 and votequorum.5 provider: corosync_votequorum- <b>expected_votes: 2</b> - <b>two_node: 1</b> + expected_votes: 2 + two_node: 1 }- </code></pre> + ``` b. Restart the corosync service. 
- <pre><code>sudo service corosync restart - </code></pre> + ```bash + sudo service corosync restart + ``` ### Create a fencing device on the Pacemaker cluster 1. **[1]** If you're using an SBD device (iSCSI target server or Azure shared disk) as a fencing device, run the following commands. Enable the use of a fencing device, and set the fence delay. - <pre><code>sudo crm configure property stonith-timeout=144 + ```bash + sudo crm configure property stonith-timeout=144 sudo crm configure property stonith-enabled=true # List the resources to find the name of the SBD device sudo crm resource list sudo crm resource stop stonith-sbd- sudo crm configure delete <b>stonith-sbd</b> - sudo crm configure primitive <b>stonith-sbd</b> stonith:external/sbd \ + sudo crm configure delete stonith-sbd + sudo crm configure primitive stonith-sbd stonith:external/sbd \ params pcmk_delay_max="15" \ op monitor interval="600" timeout="15"- </code></pre> + ``` 1. **[1]** If you're using an Azure fence agent for fencing, run the following commands. After you've assigned roles to both cluster nodes, you can configure the fencing devices in the cluster.- - <pre><code>sudo crm configure property stonith-enabled=true ++ ```bash + sudo crm configure property stonith-enabled=true crm configure property concurrent-fencing=true- </code></pre> + ``` > [!NOTE] > The 'pcmk_host_map' option is required in the command only if the hostnames and the Azure VM names are *not* identical. Specify the mapping in the format *hostname:vm-name*. > Refer to the bold section in the following command.- + If using **managed identity** for your fence agent, run the following command- <pre><code> ++ ```bash # replace the bold strings with your subscription ID and resource group of the VM sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm \- params <b>msi=true</b> subscriptionId="<b>subscription ID</b>" resourceGroup="<b>resource group</b>" \ - pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 <b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \ + params msi=true subscriptionId="subscription ID" resourceGroup="resource group" \ + pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ op monitor interval=3600 timeout=120 sudo crm configure property stonith-timeout=900- </code></pre> + ``` If using **service principal** for your fence agent, run the following command- <pre><code> ++ ```bash # replace the bold strings with your subscription ID, resource group of the VM, tenant ID, service principal application ID and password sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm \- params subscriptionId="<b>subscription ID</b>" resourceGroup="<b>resource group</b>" tenantId="<b>tenant ID</b>" login="<b>application ID</b>" passwd="<b>password</b>" \ - pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 <b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \ + params subscriptionId="subscription ID" resourceGroup="resource group" tenantId="tenant ID" login="application ID" passwd="password" \ + pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ op monitor interval=3600 timeout=120 sudo crm configure property 
stonith-timeout=900- </code></pre> + ``` - If you are using fencing device, based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration. + If you're using a fencing device based on a service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration. > [!IMPORTANT] > The monitoring and fencing operations are deserialized. As a result, if there's a longer-running monitoring operation and simultaneous fencing event, there's no delay to the cluster failover because the monitoring operation is already running. Make sure to assign the custom role to the service principal at all VM (cluster ## Configure Pacemaker for Azure scheduled events -Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md). Scheduled events are provided via the metadata service and allow time for the application to prepare for such events. Resource agent [azure-events-az](https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/azure-events-az.in) monitors for scheduled Azure events. If events are detected and the resource agent determines that another cluster node is available, it sets a cluster health attribute. When the cluster health attribute is set for a node, the location constraint triggers and all resources, whose name doesn't start with "health-" are migrated away from the node with scheduled event. Once the affected cluster node is free of running cluster resources, scheduled event is acknowledged and can execute its action, such as restart. +Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md). Scheduled events are provided via the metadata service and allow time for the application to prepare for such events. Resource agent [azure-events-az](https://github.com/ClusterLabs/resource-agents/pull/1161) monitors for scheduled Azure events. If events are detected and the resource agent determines that another cluster node is available, it sets a cluster health attribute. When the cluster health attribute is set for a node, the location constraint triggers and all resources, whose name doesn't start with "health-" are migrated away from the node with scheduled event. Once the affected cluster node is free of running cluster resources, scheduled event is acknowledged and can execute its action, such as restart. > [!IMPORTANT]-> Previously, this document described the use of resource agent [azure-events](https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/azure-events.in). New resource agent [azure-events-az](https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/azure-events-az.in) fully supports Azure environments deployed in different availability zones. 
> It is recommended to utilize the newer azure-events-az agent for all SAP highly available systems with Pacemaker. -The following items are prefixed with either **[A]** - applicable to all nodes, including majority maker VM for HANA scale-out, **[1]** - only applicable to cluster node 1. +1. **[A]** Make sure that the package for the azure-events agent is already installed and up to date. -1. **[A]** Make sure that the package for the azure-events-az agent is already installed and up to date. - - <pre><code>sudo zypper info resource-agents - </code></pre> + ```bash + sudo zypper info resource-agents + ``` - Minimum version requirements: - SLES 12 SP5: `resource-agents-4.3.018.a7fb5035-3.98.1` - SLES 15 SP1: `resource-agents-4.3.0184.6ee15eb2-150100.4.72.1` - SLES 15 SP2: `resource-agents-4.4.0+git57.70549516-150200.3.56.1` - SLES 15 SP3: `resource-agents-4.8.0+git30.d0077df0-150300.8.31.1` - SLES 15 SP4 and newer: `resource-agents-4.10.0+git40.0f4de473-150400.3.19.1` + Minimum version requirements: + - SLES 12 SP5: `resource-agents-4.3.018.a7fb5035-3.98.1` + - SLES 15 SP1: `resource-agents-4.3.0184.6ee15eb2-150100.4.72.1` + - SLES 15 SP2: `resource-agents-4.4.0+git57.70549516-150200.3.56.1` + - SLES 15 SP3: `resource-agents-4.8.0+git30.d0077df0-150300.8.31.1` + - SLES 15 SP4 and newer: `resource-agents-4.10.0+git40.0f4de473-150400.3.19.1` -1. **[1]** Configure the resources in Pacemaker. - - <pre><code>sudo crm configure property maintenance-mode=true - </code></pre> +2. **[1]** Configure the resources in Pacemaker. -1. **[1]** Set the Pacemaker cluster health node strategy and constraint - - <pre><code>sudo crm configure property node-health-strategy=custom + ```bash + #Place the cluster in maintenance mode + sudo crm configure property maintenance-mode=true + ``` ++3. **[1]** Set the pacemaker cluster health node strategy and constraint ++ ```bash + sudo crm configure property node-health-strategy=custom sudo crm configure location loc_azure_health \ /'!health-.*'/ rule '#health-azure': defined '#uname'- </code></pre> + ``` > [!IMPORTANT]+ > > Don't define any other resources in the cluster starting with "health-", besides the resources described in the next steps of the documentation. -1. **[1]** Set initial value of the cluster attributes. - Run for each cluster node. For scale-out environments including majority maker VM. +4. **[1]** Set initial value of the cluster attributes. + Run for each cluster node. For scale-out environments including majority maker VM. - <pre><code>sudo crm_attribute --node <b>prod-cl1-0</b> --name '#health-azure' --update 0 - sudo crm_attribute --node <b>prod-cl1-1</b> --name '#health-azure' --update 0 - </code></pre> + ```bash + sudo crm_attribute --node prod-cl1-0 --name '#health-azure' --update 0 + sudo crm_attribute --node prod-cl1-1 --name '#health-azure' --update 0 + ``` -1. **[1]** Configure the resources in Pacemaker. +5. **[1]** Configure the resources in Pacemaker. Important: The resources must start with 'health-azure'. - <pre><code>sudo crm configure primitive health-azure-events \ + ```bash + sudo crm configure primitive health-azure-events \ ocf:heartbeat:azure-events-az op monitor interval=10s sudo crm configure clone health-azure-events-cln health-azure-events- </code></pre> + ``` -1. Take the Pacemaker cluster out of maintenance mode +6. Take the Pacemaker cluster out of maintenance mode - <pre><code>sudo crm configure property maintenance-mode=false - </code></pre> + ```bash + sudo crm configure property maintenance-mode=false + ``` -1. 
Clear any errors during enablement and verify that the health-azure-events resources have started successfully on all cluster nodes. - - <pre><code>sudo crm resource cleanup - </code></pre> +7. Clear any errors during enablement and verify that the health-azure-events resources have started successfully on all cluster nodes. ++ ```bash + sudo crm resource cleanup + ``` First time query execution for scheduled events [can take up to 2 minutes](../../virtual-machines/linux/scheduled-events.md#enabling-and-disabling-scheduled-events). Pacemaker testing with scheduled events can use reboot or redeploy actions for the cluster VMs. For more information, see [scheduled events](../../virtual-machines/linux/scheduled-events.md) documentation. > [!NOTE]- > After you've configured the Pacemaker resources for the azure-events agent, if you place the cluster in or out of maintenance mode, you might get warning messages such as: - WARNING: cib-bootstrap-options: unknown attribute 'hostName_ <strong> hostname</strong>' - WARNING: cib-bootstrap-options: unknown attribute 'azure-events_globalPullState' - WARNING: cib-bootstrap-options: unknown attribute 'hostName_ <strong>hostname</strong>' + > + > After you've configured the Pacemaker resources for the azure-events agent, if you place the cluster in or out of maintenance mode, you might get warning messages such as: + > + > WARNING: cib-bootstrap-options: unknown attribute 'hostName_ **hostname**' + > WARNING: cib-bootstrap-options: unknown attribute 'azure-events_globalPullState' + > WARNING: cib-bootstrap-options: unknown attribute 'hostName_ **hostname**' > These warning messages can be ignored. ## Next steps -* [Azure Virtual Machines planning and implementation for SAP][planning-guide] -* [Azure Virtual Machines deployment for SAP][deployment-guide] -* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide] -* [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server][sles-nfs-guide] -* [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications][sles-guide] -* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High availability of SAP HANA on Azure Virtual Machines][sap-hana-ha] +- [Azure Virtual Machines planning and implementation for SAP][planning-guide] +- [Azure Virtual Machines deployment for SAP][deployment-guide] +- [Azure Virtual Machines DBMS deployment for SAP][dbms-guide] +- [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server][sles-nfs-guide] +- [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications][sles-guide] +- To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High availability of SAP HANA on Azure Virtual Machines][sap-hana-ha] |
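The change above notes that the first query for scheduled events can take up to two minutes. As a minimal verification sketch (not part of the tracked change), assuming the example node name `prod-cl1-0` used in the steps above and the public Azure Instance Metadata Service scheduled-events endpoint:

```bash
# Query the Azure Instance Metadata Service scheduled events endpoint from a cluster node.
curl -s -H "Metadata:true" "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"

# Check the node health attribute maintained by the azure-events-az agent (initialized to 0 in the steps above).
sudo crm_attribute --node prod-cl1-0 --name '#health-azure' --query
```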
sap | Sap Hana High Availability Netapp Files Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md | tags: azure-resource-manager Previously updated : 04/25/2023 Last updated : 06/23/2023 - # High availability of SAP HANA Scale-up with Azure NetApp Files on SUSE Enterprise Linux When steps in this document are marked with the following prefixes, the meaning Read the following SAP Notes and papers first: - SAP Note [1928533](https://launchpad.support.sap.com/#/notes/1928533), which has:- - The list of Azure VM sizes that are supported for the deployment of SAP software. - - Important capacity information for Azure VM sizes. - - The supported SAP software, and operating system (OS) and database combinations. - - The required SAP kernel version for Windows and Linux on Microsoft Azure. -- SAP Note [2015553](https://launchpad.support.sap.com/#/notes/1928533) lists prerequisites for SAP-supported SAP software deployments in Azure. + - The list of Azure VM sizes that are supported for the deployment of SAP software. + - Important capacity information for Azure VM sizes. + - The supported SAP software, and operating system (OS) and database combinations. + - The required SAP kernel version for Windows and Linux on Microsoft Azure. +- SAP Note [2015553](https://launchpad.support.sap.com/#/notes/1928533) lists prerequisites for SAP-supported SAP software deployments in Azure. - SAP Note [405827](https://launchpad.support.sap.com/#/notes/405827) lists out recommended file system for HANA environment. - SAP Note [2684254](https://launchpad.support.sap.com/#/notes/2684254) -has recommended OS settings for SLES 15 / SLES for SAP Applications 15. - SAP Note [1944799](https://launchpad.support.sap.com/#/notes/1944799) has SAP HANA Guidelines for SLES Operating System Installation. Read the following SAP Notes and papers first: - [Azure Virtual Machines deployment for SAP on Linux](./deployment-guide.md) - [Azure Virtual Machines DBMS deployment for SAP on Linux](./dbms-guide-general.md) - General SLES documentation- - [Setting up SAP HANA Cluster](https://documentation.suse.com/sles-sap/15-SP1/html/SLES4SAP-guide/cha-s4s-cluster.html). - - [SLES High Availability Extension 15 SP3 Release Notes](https://www.suse.com/releasenotes/x86_64/SLE-HA/15-SP3/https://docsupdatetracker.net/index.html) - - [Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise Server 15](https://documentation.suse.com/sbp/all/html/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15/https://docsupdatetracker.net/index.html). - - [SUSE Linux Enterprise Server for SAP Applications 15 SP3 Guide](https://documentation.suse.com/sles/15-SP3/) - - [SUSE Linux Enterprise Server for SAP Applications 15 SP3 SAP Automation](https://documentation.suse.com/sles-sap/15-SP3/html/SLES-SAP-automation/article-sap-automation.html) - - [SUSE Linux Enterprise Server for SAP Applications 15 SP3 SAP Monitoring](https://documentation.suse.com/sles-sap/15-SP3/html/SLES-SAP-monitoring/article-sap-monitoring.html) + - [Setting up SAP HANA Cluster](https://documentation.suse.com/sles-sap/15-SP1/html/SLES4SAP-guide/cha-s4s-cluster.html). 
+ - [SLES High Availability Extension 15 SP3 Release Notes](https://www.suse.com/releasenotes/x86_64/SLE-HA/15-SP3/https://docsupdatetracker.net/index.html) + - [Operating System Security Hardening Guide for SAP HANA for SUSE Linux Enterprise Server 15](https://documentation.suse.com/sbp/all/html/OS_Security_Hardening_Guide_for_SAP_HANA_SLES15/https://docsupdatetracker.net/index.html). + - [SUSE Linux Enterprise Server for SAP Applications 15 SP3 Guide](https://documentation.suse.com/sles/15-SP3/) + - [SUSE Linux Enterprise Server for SAP Applications 15 SP3 SAP Automation](https://documentation.suse.com/sles-sap/15-SP3/html/SLES-SAP-automation/article-sap-automation.html) + - [SUSE Linux Enterprise Server for SAP Applications 15 SP3 SAP Monitoring](https://documentation.suse.com/sles-sap/15-SP3/html/SLES-SAP-monitoring/article-sap-monitoring.html) - Azure-specific SLES documentation:- - [Getting Started with SAP HANA High Availability Cluster Automation Operating on Azure](https://documentation.suse.com/sbp/all/html/SBP-SAP-HANA-PerOpt-HA-Azure/https://docsupdatetracker.net/index.html) - - [SUSE and Microsoft Solution Templates for SAP Applications Simplified Deployment on Microsoft](https://documentation.suse.com/sbp/all/html/SBP-SAP-AzureSolutionTemplates/https://docsupdatetracker.net/index.html) + - [Getting Started with SAP HANA High Availability Cluster Automation Operating on Azure](https://documentation.suse.com/sbp/all/html/SBP-SAP-HANA-PerOpt-HA-Azure/https://docsupdatetracker.net/index.html) + - [SUSE and Microsoft Solution Templates for SAP Applications Simplified Deployment on Microsoft](https://documentation.suse.com/sbp/all/html/SBP-SAP-AzureSolutionTemplates/https://docsupdatetracker.net/index.html) - [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files](https://www.netapp.com/us/media/tr-4746.pdf) - [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) - [Azure Virtual Machines planning and implementation for SAP on Linux](./planning-guide.md) Mounted on node2 (**hanadb2**) SAP high availability HANA System Replication configuration uses a dedicated virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. The presented configuration shows a load balancer with: - Front-end configuration IP address: 10.3.0.50 for hn1-db-- Probe Port: 62503 +- Probe Port: 62503 -## Set up the Azure NetApp File infrastructure +## Setup the Azure NetApp File infrastructure Before you continue with the set up for Azure NetApp Files infrastructure, familiarize yourself with the Azure [NetApp Files documentation](../../azure-netapp-files/index.yml). For information about the availability of Azure NetApp Files by Azure region, se The following instructions assume that you've already deployed your [Azure virtual network](../../virtual-network/virtual-networks-overview.md). The Azure NetApp Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same Azure virtual network or in peered Azure virtual networks. 1. Create a NetApp account in your selected Azure region by following the instructions in [Create a NetApp account](../../azure-netapp-files/azure-netapp-files-create-netapp-account.md).--2. Set up Azure NetApp Files capacity pool by following the instructions in [Set up an Azure NetApp Files capacity pool](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md). 
-- The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the *Ultra* Service level. For HANA workloads on Azure, we recommend using Azure NetApp Files *Ultra* or *Premium* [service Level](../../azure-netapp-files/azure-netapp-files-service-levels.md). --3. Delegate a subnet to Azure NetApp Files, as described in the instructions in [Delegate a subnet to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md). --4. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md). -- As you deploy the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned automatically. -- Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. For example, hanadb1-data-mnt00001, hanadb1-log-mnt00001, and so on, are the volume names and nfs://10.3.1.4/hanadb1-data-mnt00001, nfs://10.3.1.4/hanadb1-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes. - - On **hanadb1** -- - Volume hanadb1-data-mnt00001 (nfs://10.3.1.4:/hanadb1-data-mnt00001) - - Volume hanadb1-log-mnt00001 (nfs://10.3.1.4:/hanadb1-log-mnt00001) - - Volume hanadb1-shared-mnt00001 (nfs://10.3.1.4:/hanadb1-shared-mnt00001) -- On **hanadb2** - - - Volume hanadb2-data-mnt00001 (nfs://10.3.1.4:/hanadb2-data-mnt00001) - - Volume hanadb2-log-mnt00001 (nfs://10.3.1.4:/hanadb2-log-mnt00001) - - Volume hanadb2-shared-mnt00001 (nfs://10.3.1.4:/hanadb2-shared-mnt00001) +2. Set up Azure NetApp Files capacity pool by following the instructions in [Set up an Azure NetApp Files capacity pool](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md). ++ The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the *Ultra* Service level. For HANA workloads on Azure, we recommend using Azure NetApp Files *Ultra* or *Premium* [service Level](../../azure-netapp-files/azure-netapp-files-service-levels.md). +3. Delegate a subnet to Azure NetApp Files, as described in the instructions in [Delegate a subnet to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md). +4. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md). + + As you deploy the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned automatically. + + Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. For example, hanadb1-data-mnt00001, hanadb1-log-mnt00001, and so on, are the volume names and nfs://10.3.1.4/hanadb1-data-mnt00001, nfs://10.3.1.4/hanadb1-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes. 
++ On **hanadb1** + - Volume hanadb1-data-mnt00001 (nfs://10.3.1.4:/hanadb1-data-mnt00001) + - Volume hanadb1-log-mnt00001 (nfs://10.3.1.4:/hanadb1-log-mnt00001) + - Volume hanadb1-shared-mnt00001 (nfs://10.3.1.4:/hanadb1-shared-mnt00001) ++ On **hanadb2** + - Volume hanadb2-data-mnt00001 (nfs://10.3.1.4:/hanadb2-data-mnt00001) + - Volume hanadb2-log-mnt00001 (nfs://10.3.1.4:/hanadb2-log-mnt00001) + - Volume hanadb2-shared-mnt00001 (nfs://10.3.1.4:/hanadb2-shared-mnt00001) ### Important considerations As you create your Azure NetApp Files for SAP HANA Scale-up systems, be aware of - The selected virtual network must have a subnet that is delegated to Azure NetApp Files. - The throughput of an Azure NetApp Files volume is a function of the volume quota and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). While sizing the HANA Azure NetApp volumes, make sure that the resulting throughput meets the HANA system requirements. - With the Azure NetApp Files [export policy](../../azure-netapp-files/azure-netapp-files-configure-export-policy.md), you can control the allowed clients, the access type (read-write, read only, and so on).-- The Azure NetApp Files feature is not zone-aware yet. Currently, the feature is not deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.+- The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions. > [!IMPORTANT] > For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in proximity. To meet the SAP minimum throughput requirements for /hana/data and /hana/log, an | :-: | :--: | :: | :--: | | /hana/log | 4 TiB | 2 TiB | v4.1 | | /hana/data | 6.3 TiB | 3.2 TiB | v4.1 |-| /hana/shared | 1 x RAM | 1 x RAM | v3 or v4.1 | +| /hana/shared | 1 x RAM | 1 x RAM | v3 or v4.1 | > [!NOTE] > The Azure NetApp Files sizing recommendations stated here are targeted to meet the minimum requirements that SAP recommends for their infrastructure providers. In real customer deployments and workload scenarios, these sizes may not be sufficient. Use these recommendations as a starting point and adapt, based on the requirements of your specific workload. To meet the SAP minimum throughput requirements for /hana/data and /hana/log, an > All commands to mount /hana/shared in this article are presented for NFSv4.1 /hana/shared volumes. > If you deployed the /hana/shared volumes as NFSv3 volumes, don't forget to adjust the mount commands for /hana/shared for NFSv3. +## Deploy Linux virtual machine via Azure portal -## Deploy Linux virtual machine via Azure portal --First you need to create the Azure NetApp Files volumes. Then do the following steps: +This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet. -1. Create a resource group. -2. Create a virtual network. -3. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration. -4. Create a load balancer (internal). We recommend standard load balancer. 
- Select the virtual network created in step 2. -5. Create Virtual Machine 1 (**hanadb1**). -6. Create Virtual Machine 2 (**hanadb2**). -7. While creating virtual machine, we won't be adding any disk as all our mount points will be on NFS shares from Azure NetApp Files. +Deploy virtual machines for SAP HANA. Choose a suitable SLES image that is supported for the HANA system. You can deploy the VMs with any of the availability options: virtual machine scale set, availability zone, or availability set.

> [!IMPORTANT]-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC. --> [!NOTE] -> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). --8. To set up standard load balancer, follow these configuration steps: - 1. First, create a front-end IP pool: - 1. Open the load balancer, select **frontend IP configuration**, and select **Add**. - 1. Enter the name of the new front-end IP (for example, **hana-frontend**). - 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.3.0.50**). - 1. Select **OK**. - 1. After the new front-end IP pool is created, note the pool IP address. -- 1. Create a single back-end pool: +> Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type. ++During VM configuration, we won't add any disks, because all our mount points will be on NFS shares from Azure NetApp Files. You also have the option to create a new load balancer or select an existing one in the networking section. If you're creating a new load balancer, follow these steps:

++1. To set up standard load balancer, follow these configuration steps:
 + 1. First, create a front-end IP pool:
 + 1. Open the load balancer, select **frontend IP configuration**, and select **Add**.
 + 2. Enter the name of the new front-end IP (for example, **hana-frontend**).
 + 3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.3.0.50**).
 + 4. Select **OK**.
 + 5. After the new front-end IP pool is created, note the pool IP address.
 + 2. Create a single back-end pool:
 1. Open the load balancer, select **Backend pools**, and then select **Add**.- 1. Enter the name of the new back-end pool (for example, **hana-backend**). - 2. Select **NIC** for Backend Pool Configuration. - 1. Select **Add a virtual machine**. - 1. Select the virtual machines of the HANA cluster. - 1. Select **Add**. - 2. Select **Save**. - - 1. Next, create a health probe: - 1.
Open the load balancer, select **health probes**, and select **Add**. - 1. Enter the name of the new health probe (for example, **hana-hp**). - 1. Select TCP as the protocol and port 625**03**. Keep the **Interval** value set to 5. - 1. Select **OK**. - - 1. Next, create the load-balancing rules: - 1. Open the load balancer, select **load balancing rules**, and select **Add**. - 1. Enter the name of the new load balancer rule (for example, **hana-lb**). - 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**). - 2. Increase idle timeout to 30 minutes - 1. Select **HA Ports**. - 1. Make sure to **enable Floating IP**. - 1. Select **OK**. + 2. Enter the name of the new back-end pool (for example, **hana-backend**). + 3. Select **NIC** for Backend Pool Configuration. + 4. Select **Add a virtual machine**. + 5. Select the virtual machines of the HANA cluster. + 6. Select **Add**. + 7. Select **Save**. + 3. Next, create a health probe: + 1. Open the load balancer, select **health probes**, and select **Add**. + 2. Enter the name of the new health probe (for example, **hana-hp**). + 3. Select TCP as the protocol and port 625**03**. Keep the **Interval** value set to 5. + 4. Select **OK**. + 4. Next, create the load-balancing rules: + 1. Open the load balancer, select **load balancing rules**, and select **Add**. + 2. Enter the name of the new load balancer rule (for example, **hana-lb**). + 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**). + 1. Increase idle timeout to 30 minutes + 4. Select **HA Ports**. + 5. Make sure to **enable Floating IP**. + 6. Select **OK**. For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or SAP Note [2388694](https://launchpad.support.sap.com/#/notes/2388694). +> [!IMPORTANT] +> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC. ++> [!NOTE] +> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). + > [!IMPORTANT] > Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md). See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). ## Mount the Azure NetApp Files volume -1.**[A]** Create mount points for the HANA database volumes. +1. 
**[A]** Create mount points for the HANA database volumes. - ```bash - sudo mkdir -p /hana/data/HN1/mnt00001 - sudo mkdir -p /hana/log/HN1/mnt00001 - sudo mkdir -p /hana/shared/HN1 - ``` + ```bash + sudo mkdir -p /hana/data/HN1/mnt00001 + sudo mkdir -p /hana/log/HN1/mnt00001 + sudo mkdir -p /hana/shared/HN1 + ``` -2.**[A]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **defaultv4iddomain.com** and the mapping is set to **nobody**. +2. **[A]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **defaultv4iddomain.com** and the mapping is set to **nobody**. ```bash sudo cat /etc/idmapd.conf ```+ Example output- ```output ++ ```bash [General] Domain = defaultv4iddomain.com [Mapping] For more information about the required ports for SAP HANA, read the chapter [Co > [!IMPORTANT] > Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on Azure NetApp Files: **defaultv4iddomain.com**. If there's a mismatch between the domain configuration on the NFS client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure NetApp volumes that are mounted on the VMs will be displayed as nobody. -3.**[A]** Edit the /etc/fstab on both nodes to permanently mount the volumes relevant to each node. Below is an example of how you mount the volumes permanently. +3. **[A]** Edit the /etc/fstab on both nodes to permanently mount the volumes relevant to each node. Below is an example of how you mount the volumes permanently. ```bash sudo vi /etc/fstab ```- Add the following entries in /etc/fstab on both nodes ++ Add the following entries in /etc/fstab on both nodes Example for hanadb1- - ```output ++ ```example 10.3.1.4:/hanadb1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 10.3.1.4:/hanadb1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 ```+ Example for hanadb2- - ```output ++ ```example 10.3.1.4:/hanadb2-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 10.3.1.4:/hanadb2-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 10.3.1.4:/hanadb2-shared-mnt00001 /hana/shared/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 ```+ Mount all volumes- + ```bash sudo mount -a ```- For workloads, that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check if `nconnect` is [supported by Azure NetApp Files](../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release. + For workloads that require higher throughput consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). 
Check if `nconnect` is [supported by Azure NetApp Files](../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release. -4.**[A]** Verify that all HANA volumes are mounted with NFS protocol version NFSv4. +4. **[A]** Verify that all HANA volumes are mounted with NFS protocol version NFSv4. ```bash sudo nfsstat -m ```- Verify that flag vers is set to 4.1 - - Example from hanadb1 - ```output ++ Verify that flag vers is set to 4.1. ++ Example from hanadb1. ++ ```example /hana/log/HN1/mnt00001 from 10.3.1.4:/hanadb1-log-mnt00001 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.3.0.4,local_lock=none,addr=10.3.1.4 /hana/data/HN1/mnt00001 from 10.3.1.4:/hanadb1-data-mnt00001 For more information about the required ports for SAP HANA, read the chapter [Co Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.3.0.4,local_lock=none,addr=10.3.1.4 ``` -5.**[A]** Verify **nfs4_disable_idmapping**. It should be set to **Y**. To create the directory structure where **nfs4_disable_idmapping** is located, execute the mount command. You won't be able to manually create the directory under /sys/modules, because access is reserved for the kernel / drivers. +5. **[A]** Verify **nfs4_disable_idmapping**. It should be set to **Y**. To create the directory structure where **nfs4_disable_idmapping** is located, execute the mount command. You won't be able to manually create the directory under /sys/modules, because access is reserved for the kernel / drivers. - Check nfs4_disable_idmapping ```bash+ #Check nfs4_disable_idmapping sudo cat /sys/module/nfs/parameters/nfs4_disable_idmapping- ``` - If you need to set nfs4_disable_idmapping to Y - ```bash + + #If you need to set nfs4_disable_idmapping to Y sudo echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping- ``` - Make the configuration permanent - ```bash + + #Make the configuration permanent sudo echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf ``` ## SAP HANA Installation -1.**[A]** Set up host name resolution for all hosts. +1. **[A]** Set up host name resolution for all hosts. You can either use a DNS server or modify the /etc/hosts file on all nodes. This example shows you how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands: ```bash sudo vi /etc/hosts ```- Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your environment - ```output ++ Insert the following lines in the /etc/hosts file. Change the IP address and hostname to match your environment ++ ```example 10.3.0.4 hanadb1 10.3.0.5 hanadb2 ``` -2.**[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings. +2. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings. 
```bash sudo vi /etc/sysctl.d/91-NetApp-HANA.conf ```+ Add the following entries in the configuration file- ```config ++ ```parameters net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 131072 16777216 For more information about the required ports for SAP HANA, read the chapter [Co net.ipv4.tcp_sack = 1 ``` -3.**[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with additional optimization settings. +3. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with additional optimization settings. ```bash sudo vi /etc/sysctl.d/ms-az.conf ```+ Add the following entries in the configuration file- ```config ++ ```parameters net.ipv6.conf.all.disable_ipv6 = 1 net.ipv4.tcp_max_syn_backlog = 16348 net.ipv4.conf.all.rp_filter = 0 For more information about the required ports for SAP HANA, read the chapter [Co vm.swappiness=10 ``` -> [!TIP] -> Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more information, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). + > [!TIP] + > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more information, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). -4.**[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). +4. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). ```bash sudo vi /etc/modprobe.d/sunrpc.conf ```+ Insert the following line- ```config ++ ```parameter options sunrpc tcp_max_slot_table_entries=128 ``` -5.**[A]** SLES for HANA Configuration --ΓÇï Configure SLES as described in below SAP Note based on your SLES version --- [2684254 Recommended OS settings for SLES 15 / SLES for SAP Applications 15](https://launchpad.support.sap.com/#/notes/2684254)-- [2205917 Recommended OS settings for SLES 12 / SLES for SAP Applications 12](https://launchpad.support.sap.com/#/notes/2205917)-- [2455582 Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)-- [2593824 Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824)-- [2886607 Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607)--6.**[A]** Install the SAP HANA -- Starting with HANA 2.0 SPS 01, MDC is the default option. When you install HANA system, SYSTEMDB and a tenant with same SID will be created together. In some cases, you do not want the default tenant. In case, if you donΓÇÖt want to create initial tenant along with the installation you can follow SAP Note [2629711](https://launchpad.support.sap.com/#/notes/2629711). --a. Start the hdblcm program from the HANA installation software directory. -- ```bash - ./hdblcm - ``` -- b. At the prompt, enter the following values: --- For Choose installation: Enter **1** (for install)-- For Select additional components for installation: Enter **1**. 
-- For Enter Installation Path [/hana/shared]: press Enter to accept the default -- For Enter Local Host Name [..]: Press Enter to accept the default -- Under, Do you want to add additional hosts to the system? (y/n) [n]: **n**-- For Enter SAP HANA System ID: Enter **HN1**.-- For Enter Instance Number [00]: Enter **03**-- For Select Database Mode / Enter Index [1]: press **Enter** to accept the default-- For Select System Usage / Enter Index [4]: enter **4** (for custom)-- For Enter Location of Data Volumes [/hana/data]: press **Enter** to accept the default-- For Enter Location of Log Volumes [/hana/log]: press **Enter** to accept the default-- For Restrict maximum memory allocation? [n]: press **Enter** to accept the default-- For Enter Certificate Host Name For Host '...' [...]: press **Enter** to accept the default-- For Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password-- For Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to confirm-- For Enter System Administrator (hn1adm) Password: Enter the system administrator password-- For Confirm System Administrator (hn1adm) Password: Enter the system administrator password again to confirm-- For Enter System Administrator Home Directory [/usr/sap/HN1/home]: press Enter to accept the default-- For Enter System Administrator Login Shell [/bin/sh]: press Enter to accept the default-- For Enter System Administrator User ID [1001]: press Enter to accept the default-- For Enter ID of User Group (sapsys) [79]: press Enter to accept the default-- For Enter Database User (SYSTEM) Password: Enter the database user password-- For Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm-- For Restart system after machine reboot? [n]: press Enter to accept the default-- For Do you want to continue? (y/n): Validate the summary. Enter **y** to continue --7.**[A]** Upgrade SAP Host Agent +5. **[A]** SLES for HANA Configuration ++ Configure SLES as described in below SAP Note based on your SLES version ++ - [2684254 Recommended OS settings for SLES 15 / SLES for SAP Applications 15](https://launchpad.support.sap.com/#/notes/2684254) + - [2205917 Recommended OS settings for SLES 12 / SLES for SAP Applications 12](https://launchpad.support.sap.com/#/notes/2205917) + - [2455582 Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582) + - [2593824 Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824) + - [2886607 Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607) ++6. **[A]** Install the SAP HANA ++ Starting with HANA 2.0 SPS 01, MDC is the default option. When you install HANA system, SYSTEMDB and a tenant with same SID will be created together. In some cases, you don't want the default tenant. In case, if you donΓÇÖt want to create initial tenant along with the installation you can follow SAP Note [2629711](https://launchpad.support.sap.com/#/notes/2629711). + + 1. Start the hdblcm program from the HANA installation software directory. ++ ```bash + ./hdblcm + ``` ++ 2. At the prompt, enter the following values: + - For Choose installation: Enter **1** (for install) + - For Select additional components for installation: Enter **1**. 
+ - For Enter Installation Path [/hana/shared]: press Enter to accept the default + - For Enter Local Host Name [..]: Press Enter to accept the default + - Under, Do you want to add additional hosts to the system? (y/n) [n]: **n** + - For Enter SAP HANA System ID: Enter **HN1**. + - For Enter Instance Number [00]: Enter **03** + - For Select Database Mode / Enter Index [1]: press **Enter** to accept the default + - For Select System Usage / Enter Index [4]: enter **4** (for custom) + - For Enter Location of Data Volumes [/hana/data]: press **Enter** to accept the default + - For Enter Location of Log Volumes [/hana/log]: press **Enter** to accept the default + - For Restrict maximum memory allocation? [n]: press **Enter** to accept the default + - For Enter Certificate Host Name For Host '...' [...]: press **Enter** to accept the default + - For Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password + - For Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to confirm + - For Enter System Administrator (hn1adm) Password: Enter the system administrator password + - For Confirm System Administrator (hn1adm) Password: Enter the system administrator password again to confirm + - For Enter System Administrator Home Directory [/usr/sap/HN1/home]: press Enter to accept the default + - For Enter System Administrator Login Shell [/bin/sh]: press Enter to accept the default + - For Enter System Administrator User ID [1001]: press Enter to accept the default + - For Enter ID of User Group (sapsys) [79]: press Enter to accept the default + - For Enter Database User (SYSTEM) Password: Enter the database user password + - For Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm + - For Restart system after machine reboot? [n]: press Enter to accept the default + - For Do you want to continue? (y/n): Validate the summary. Enter **y** to continue ++7. **[A]** Upgrade SAP Host Agent Download the latest SAP Host Agent archive from the [SAP Software Center](https://launchpad.support.sap.com/#/softwarecenter) and run the following command to upgrade the agent. Replace the path to the archive to point to the file that you downloaded: a. Start the hdblcm program from the HANA installation software directory. sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path to SAP Host Agent SAR> ``` - - ## Configure SAP HANA system replication -Follow the steps in setup [SAP HANA System Replication](./sap-hana-high-availability.md#configure-sap-hana-20-system-replication) to configure SAP HANA System Replication. +Follow the steps in set up [SAP HANA System Replication](./sap-hana-high-availability.md#configure-sap-hana-20-system-replication) to configure SAP HANA System Replication. ## Cluster configuration -This section describes necessary steps required for cluster to operate seamlessly when SAP HANA is installed on NFS shares using Azure NetApp Files. +This section describes necessary steps required for cluster to operate seamlessly when SAP HANA is installed on NFS shares using Azure NetApp Files. ### Create a Pacemaker cluster Follow the steps in, [Setting up Pacemaker on SUSE Enterprise Linux](./high-avai ## Implement HANA hooks SAPHanaSR and susChkSrv -This is an important step to optimize the integration with the cluster and improve the detection, when a cluster failover is needed. It is highly recommended to configure both SAPHanaSR and susChkSrv Python hooks. 
Follow the steps mentioned in, [Implement the Python System Replication hooks SAPHanaSR and susChkSrv](./sap-hana-high-availability.md#implement-hana-hooks-saphanasr-and-suschksrv) -+This is an important step to optimize the integration with the cluster and improve the detection, when a cluster failover is needed. It's highly recommended to configure both SAPHanaSR and susChkSrv Python hooks. Follow the steps mentioned in, [Implement the Python System Replication hooks SAPHanaSR and susChkSrv](./sap-hana-high-availability.md#implement-hana-hooks-saphanasr-and-suschksrv) ## Configure SAP HANA cluster resources Follow the steps in [creating SAP HANA cluster resources](./sap-hana-high-availa ```bash sudo crm_mon -r ```+ Example output+ ```output # Online: [ hn1-db-0 hn1-db-1 ] # Full list of resources: Example output ### Create File System resources -Create a dummy file system cluster resource, which will monitor and report failures, in case there is a problem accessing the NFS-mounted file system `/hana/shared`. That allows the cluster to trigger failover, in case there is a problem accessing `/hana/shared`. For more information, see [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904). +Create a dummy file system cluster resource, which will monitor and report failures, in case there's a problem accessing the NFS-mounted file system `/hana/shared`. That allows the cluster to trigger failover, in case there's a problem accessing `/hana/shared`. For more information, see [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904). -1.**[A]** Create the directory structure on both nodes. +1. **[A]** Create the directory structure on both nodes. ```bash sudo mkdir -p /hana/shared/HN1/check sudo mkdir -p /hana/shared/check ``` -2.**[1]** Configure the cluster to add the directory structure for monitoring +2. **[1]** Configure the cluster to add the directory structure for monitoring ```bash sudo crm configure primitive rsc_fs_check_HN1_HDB03 Filesystem params \ Create a dummy file system cluster resource, which will monitor and report failu op stop interval=0 timeout=120 ``` -3.**[1]** Clone and check the newly configured volume in the cluster +3. 
**[1]** Clone and check the newly configured volume in the cluster ```bash sudo crm configure clone cln_fs_check_HN1_HDB03 rsc_fs_check_HN1_HDB03 meta clone-node-max=1 interleave=true ```- ```bash - sudo crm status - ``` + Example output- ```output - #Cluster Summary: - # Stack: corosync - # Current DC: hanadb1 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum - # Last updated: Tue Nov 2 17:57:39 2021 - # Last change: Tue Nov 2 17:57:38 2021 by root via crm_attribute on hanadb1 - # 2 nodes configured - # 11 resource instances configured --# Node List: - # Online: [ hanadb1 hanadb2 ] --# Full List of Resources: - # Clone Set: cln_azure-events [rsc_azure-events]: - # Started: [ hanadb1 hanadb2 ] - # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]: - # rsc_SAPHanaTopology_HN1_HDB03 (ocf::suse:SAPHanaTopology): Started hanadb1 (Monitoring) - # rsc_SAPHanaTopology_HN1_HDB03 (ocf::suse:SAPHanaTopology): Started hanadb2 (Monitoring) - # Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] (promotable): - # rsc_SAPHana_HN1_HDB03 (ocf::suse:SAPHana): Master hanadb1 (Monitoring) - # Slaves: [ hanadb2 ] - # Resource Group: g_ip_HN1_HDB03: - # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb1 - # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb1 - # rsc_st_azure (stonith:fence_azure_arm): Started hanadb2 - # Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]: - # Started: [ hanadb1 hanadb2 ] - ``` --`OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system may remain mounted, despite being inaccessible. --`on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. ++ ```bash + sudo crm status + + # Cluster Summary: + # Stack: corosync + # Current DC: hanadb1 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum + # Last updated: Tue Nov 2 17:57:39 2021 + # Last change: Tue Nov 2 17:57:38 2021 by root via crm_attribute on hanadb1 + # 2 nodes configured + # 11 resource instances configured + + # Node List: + # Online: [ hanadb1 hanadb2 ] + + # Full List of Resources: + # Clone Set: cln_azure-events [rsc_azure-events]: + # Started: [ hanadb1 hanadb2 ] + # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]: + # rsc_SAPHanaTopology_HN1_HDB03 (ocf::suse:SAPHanaTopology): Started hanadb1 (Monitoring) + # rsc_SAPHanaTopology_HN1_HDB03 (ocf::suse:SAPHanaTopology): Started hanadb2 (Monitoring) + # Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] (promotable): + # rsc_SAPHana_HN1_HDB03 (ocf::suse:SAPHana): Master hanadb1 (Monitoring) + # Slaves: [ hanadb2 ] + # Resource Group: g_ip_HN1_HDB03: + # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb1 + # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb1 + # rsc_st_azure (stonith:fence_azure_arm): Started hanadb2 + # Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]: + # Started: [ hanadb1 hanadb2 ] + ``` ++ `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. 
This can be a problem because when connectivity is lost, the file system may remain mounted, despite being inaccessible. ++ `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. > [!IMPORTANT] > Timeouts in the above configuration may need to be adapted to the specific HANA set up to avoid unnecessary fence actions. DonΓÇÖt set the timeout values too low. Be aware that the filesystem monitor is not related to the HANA system replication. For details see [SUSE documentation](https://www.suse.com/support/kb/doc/?id=000019904). -- ## Test the cluster setup -This section describes how you can test your setup. +This section describes how you can test your set up. -1.Before you start a test, make sure that Pacemaker does not have any failed action (via crm status), no unexpected location constraints (for example leftovers of a migration test) and that HANA system replication is sync state, for example with systemReplicationStatus: +1. Before you start a test, make sure that Pacemaker doesn't have any failed action (via crm status), no unexpected location constraints (for example leftovers of a migration test) and that HANA system replication is sync state, for example with systemReplicationStatus: ```bash sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py" ``` -2.Verify the status of the HANA Resources using the command below +2. Verify the status of the HANA Resources using the command below - ```output + ```bash SAPHanaSR-showAttr # You should see something like below This section describes how you can test your setup. # hanadb2 DEMOTED 30 online logreplay hanadb1 4:S:master1:master:worker:master 100 SITE2 sync SOK 2.00.058.00.1634122452 hanadb2 ``` -3.Verify the cluster configuration for a failure scenario when a node is shutdown (below, for example shows shutting down node 1) +3. 
Verify the cluster configuration for a failure scenario when a node is shut down (below, for example shows shutting down node 1) ```bash sudo crm status sudo crm resource move msl_SAPHana_HN1_HDB03 hanadb2 force sudo crm resource cleanup ```++ Example output + ```bash sudo crm status- ``` - Example output - ```output - #Cluster Summary: - # Stack: corosync - # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum - # Last updated: Mon Nov 8 23:25:36 2021 - # Last change: Mon Nov 8 23:25:19 2021 by root via crm_attribute on hanadb2 - # 2 nodes configured - # 11 resource instances configured - - #Node List: - - # Online: [ hanadb1 hanadb2 ] - #Full List of Resources: + #Cluster Summary: + # Stack: corosync + # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum + # Last updated: Mon Nov 8 23:25:36 2021 + # Last change: Mon Nov 8 23:25:19 2021 by root via crm_attribute on hanadb2 + # 2 nodes configured + # 11 resource instances configured - # Clone Set: cln_azure-events [rsc_azure-events]: - # Started: [ hanadb1 hanadb2 ] - # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]: - # Started: [ hanadb1 hanadb2 ] - # Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] (promotable): - # Masters: [ hanadb2 ] - # Stopped: [ hanadb1 ] - # Resource Group: g_ip_HN1_HDB03: - # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb2 - # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb2 - # rsc_st_azure (stonith:fence_azure_arm): Started hanadb2 - # Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]: - # Started: [ hanadb1 hanadb2 ] + # Node List: + # Online: [ hanadb1 hanadb2 ] + # Full List of Resources: + # Clone Set: cln_azure-events [rsc_azure-events]: + # Started: [ hanadb1 hanadb2 ] + # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]: + # Started: [ hanadb1 hanadb2 ] + # Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] (promotable): + # Masters: [ hanadb2 ] + # Stopped: [ hanadb1 ] + # Resource Group: g_ip_HN1_HDB03: + # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb2 + # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb2 + # rsc_st_azure (stonith:fence_azure_arm): Started hanadb2 + # Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]: + # Started: [ hanadb1 hanadb2 ] ```+ Stop the HANA on Node1 ```bash sudo su - hn1adm- ``` - ```bash sapcontrol -nr 03 -function StopWait 600 10 ```+ Register Node 1 as the Secondary Node and check status+ ```bash hdbnsutil -sr_register --remoteHost=hanadb2 --remoteInstance=03 --replicationMode=sync --name=SITE1 --operationMode=logreplay ```+ Example output- ```output ++ ```example #adding site ... #nameserver hanadb1:30301 not responding. #collecting information ... #updating local ini files ... #done. ```+ ```bash sudo crm status ```+ ```bash sudo SAPHanaSR-showAttr ``` -4.Verify the cluster configuration for a failure scenario when a node loses access to the NFS share (/hana/shared) +4. Verify the cluster configuration for a failure scenario when a node loses access to the NFS share (/hana/shared) ++ The SAP HANA resource agents depend on binaries stored on `/hana/shared` to perform operations during fail-over. File system `/hana/shared` is mounted over NFS in the presented scenario. - The SAP HANA resource agents depend on binaries stored on `/hana/shared` to perform operations during failover. 
File system `/hana/shared` is mounted over NFS in the presented scenario. - It is difficult to simulate a failure, where one of the servers loses access to the NFS share. A test that can be performed is to re-mount the file system as read-only. - This approach validates that the cluster will be able to failover, if access to `/hana/shared` is lost on the active node. + It's difficult to simulate a failure, where one of the servers loses access to the NFS share. A test that can be performed is to re-mount the file system as read-only. + This approach validates that the cluster will be able to fail over, if access to `/hana/shared` is lost on the active node. - **Expected Result:** On making `/hana/shared` as read-only file system, the `OCF_CHECK_LEVEL` attribute of the resource `hana_shared1`, which performs read/write operation on file system will fail as it is not able to write anything on the file system and will perform HANA resource failover. The same result is expected when your HANA node loses access to the NFS shares. + **Expected Result:** On making `/hana/shared` as read-only file system, the `OCF_CHECK_LEVEL` attribute of the resource `hana_shared1`, which performs read/write operation on file system will fail as it isn't able to write anything on the file system and will perform HANA resource failover. The same result is expected when your HANA node loses access to the NFS shares. Resource state before starting the test: ```bash sudo crm status- ``` - Example output - ```output + #Cluster Summary:- # Stack: corosync - # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum - # Last updated: Mon Nov 8 23:01:27 2021 - # Last change: Mon Nov 8 23:00:46 2021 by root via crm_attribute on hanadb1 - # 2 nodes configured - # 11 resource instances configured + # Stack: corosync + # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum + # Last updated: Mon Nov 8 23:01:27 2021 + # Last change: Mon Nov 8 23:00:46 2021 by root via crm_attribute on hanadb1 + # 2 nodes configured + # 11 resource instances configured - #Node List: - # Online: [ hanadb1 hanadb2 ] + #Node List: + # Online: [ hanadb1 hanadb2 ] - #Full List of Resources: - # Clone Set: cln_azure-events [rsc_azure-events]: - # Started: [ hanadb1 hanadb2 ] - # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]: - # Started: [ hanadb1 hanadb2 ] - # Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] (promotable): - # Masters: [ hanadb1 ] - # Slaves: [ hanadb2 ] - # Resource Group: g_ip_HN1_HDB03: - # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb1 - # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb1 - # rsc_st_azure (stonith:fence_azure_arm): Started hanadb2 - # Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]: - # Started: [ hanadb1 hanadb2 ] + #Full List of Resources: + # Clone Set: cln_azure-events [rsc_azure-events]: + # Started: [ hanadb1 hanadb2 ] + # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]: + # Started: [ hanadb1 hanadb2 ] + # Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] (promotable): + # Masters: [ hanadb1 ] + # Slaves: [ hanadb2 ] + # Resource Group: g_ip_HN1_HDB03: + # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb1 + # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb1 + # rsc_st_azure (stonith:fence_azure_arm): Started hanadb2 + # Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]: 
+ # Started: [ hanadb1 hanadb2 ] ``` You can place /hana/shared in read-only mode on the active cluster node, using below command: ```bash- sudo mount -o ro 10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared + sudo mount -o ro 10.3.1.4:/hanadb1-shared-mnt00001 /hana/sharedb ``` hanadb1 will either reboot or poweroff based on the action set. Once the server (hanadb1) is down, HANA resource move to hanadb2. You can check the status of cluster from hanadb2. - ``` + ```bash sudo crm status- ``` - Example output - ```output + #Cluster Summary:- # Stack: corosync - # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum - # Last updated: Wed Nov 10 22:00:27 2021 - # Last change: Wed Nov 10 21:59:47 2021 by root via crm_attribute on hanadb2 - # 2 nodes configured - # 11 resource instances configured + # Stack: corosync + # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum + # Last updated: Wed Nov 10 22:00:27 2021 + # Last change: Wed Nov 10 21:59:47 2021 by root via crm_attribute on hanadb2 + # 2 nodes configured + # 11 resource instances configured - #Node List: - # Online: [ hanadb1 hanadb2 ] + #Node List: + # Online: [ hanadb1 hanadb2 ] - #Full List of Resources: - # Clone Set: cln_azure-events [rsc_azure-events]: - # Started: [ hanadb1 hanadb2 ] - # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]: - # Started: [ hanadb1 hanadb2 ] - # Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] (promotable): - # Masters: [ hanadb2 ] - # Stopped: [ hanadb1 ] - # Resource Group: g_ip_HN1_HDB03: + #Full List of Resources: + # Clone Set: cln_azure-events [rsc_azure-events]: + # Started: [ hanadb1 hanadb2 ] + # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]: + # Started: [ hanadb1 hanadb2 ] + # Clone Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] (promotable): + # Masters: [ hanadb2 ] + # Stopped: [ hanadb1 ] + # Resource Group: g_ip_HN1_HDB03: # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hanadb2- # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb2 - # rsc_st_azure (stonith:fence_azure_arm): Started hanadb2 - # Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]: - # Started: [ hanadb1 hanadb2 ] + # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hanadb2 + # rsc_st_azure (stonith:fence_azure_arm): Started hanadb2 + # Clone Set: cln_fs_check_HN1_HDB03 [rsc_fs_check_HN1_HDB03]: + # Started: [ hanadb1 hanadb2 ] ``` -We recommend testing the SAP HANA cluster configuration thoroughly, by also doing the tests described in [SAP HANA System Replication](./sap-hana-high-availability.md#test-the-cluster-setup). + We recommend testing the SAP HANA cluster configuration thoroughly, by also doing the tests described in [SAP HANA System Replication](./sap-hana-high-availability.md#test-the-cluster-setup). 
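To complement the read-only remount test described above, a minimal sketch (assumptions: hanadb1 is the current primary and the mount path matches the article; this is not part of the tracked change) for confirming the remount and observing the cluster reaction:

```bash
# Confirm that /hana/shared is now mounted read-only on the active node.
findmnt -o TARGET,SOURCE,OPTIONS /hana/shared

# Watch the cluster: the Filesystem monitor with OCF_CHECK_LEVEL=20 should fail and the node should be fenced.
watch -n 10 "sudo crm_mon -1r"
```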
## Next steps -* [Azure Virtual Machines planning and implementation for SAP](./planning-guide.md) -* [Azure Virtual Machines deployment for SAP](./deployment-guide.md) -* [Azure Virtual Machines DBMS deployment for SAP](./dbms-guide-general.md) -* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) +- [Azure Virtual Machines planning and implementation for SAP](./planning-guide.md) +- [Azure Virtual Machines deployment for SAP](./deployment-guide.md) +- [Azure Virtual Machines DBMS deployment for SAP](./dbms-guide-general.md) +- [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) |
sap | Sap Hana High Availability Scale Out Hsr Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md | Title: SAP HANA scale-out with HSR and Pacemaker on SLES | Microsoft Docs description: SAP HANA scale-out with HSR and Pacemaker on SLES. -tags: azure-resource-manager Previously updated : 04/25/2023 Last updated : 06/23/2023 - -# High availability for SAP HANA scale-out system with HSR on SUSE Linux Enterprise Server +# High availability for SAP HANA scale-out system with HSR on SUSE Linux Enterprise Server [dbms-guide]:dbms-guide-general.md [deployment-guide]:deployment-guide.md [planning-guide]:planning-guide.md [anf-azure-doc]:../../azure-netapp-files/index.yml-[anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp®ions=all [2205917]:https://launchpad.support.sap.com/#/notes/2205917 [1944799]:https://launchpad.support.sap.com/#/notes/1944799-[1410736]:https://launchpad.support.sap.com/#/notes/1410736 [1900823]:https://launchpad.support.sap.com/#/notes/1900823 -[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html - [suse-ha-guide]:https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/-[suse-drbd-guide]:https://www.suse.com/documentation/sle-ha-12/singlehtml/book_sleha_techguides/book_sleha_techguides.html -[suse-ha-12sp3-relnotes]:https://www.suse.com/releasenotes/x86_64/SLE-HA/12-SP3/ [sap-hana-ha]:sap-hana-high-availability.md-[nfs-ha]:high-availability-guide-suse-nfs.md - This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with HANA system replication (HSR) and Pacemaker on Azure SUSE Linux Enterprise Server virtual machines (VMs). The shared file systems in the presented architecture are NFS mounted and are provided by [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) or [NFS share on Azure Files](../../storage/files/files-nfs-protocol.md). Before you begin, refer to the following SAP notes and papers: One method to achieve HANA high availability for HANA scale-out installations, is to configure HANA system replication and protect the solution with Pacemaker cluster to allow automatic failover. When an active node fails, the cluster fails over the HANA resources to the other site. The presented configuration shows three HANA nodes on each site, plus majority maker node to prevent split-brain scenario. The instructions can be adapted, to include more VMs as HANA DB nodes. -The HANA shared file system `/han). The HANA shared file system is NFS mounted on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems and are not shared between the HANA DB nodes. SAP HANA will be installed in non-shared mode. +The HANA shared file system `/han). The HANA shared file system is NFS mounted on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems and aren't shared between the HANA DB nodes. SAP HANA will be installed in non-shared mode. > [!WARNING] > Deploying `/hana/data` and `/hana/log` on NFS on Azure Files is not supported. -> For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md). +> For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md). 
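Once the environment is deployed, a quick way to confirm this storage layout on a HANA DB node is to compare the file system types of the shared and local paths. A minimal sketch, assuming the mount points used in this article:

```bash
# /hana/shared should report an NFS file system, while /hana/data and /hana/log
# should report local, LVM-backed file systems
df -hT /hana/shared /hana/data /hana/log
```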
[](./media/sap-hana-high-availability/sap-hana-high-availability-scale-out-hsr-suse-detail.png#lightbox) -In the preceding diagram, three subnets are represented within one Azure virtual network, following the SAP HANA network recommendations: +In the preceding diagram, three subnets are represented within one Azure virtual network, following the SAP HANA network recommendations: + * for client communication - `client` 10.23.0.0/24 * for internal HANA inter-node communication - `inter` 10.23.1.128/26 * for HANA system replication - `hsr` 10.23.1.192/26 -As `/hana/data` and `/hana/log` are deployed on local disks, it is not necessary to deploy separate subnet and separate virtual network cards for communication to the storage. +As `/hana/data` and `/hana/log` are deployed on local disks, it isn't necessary to deploy separate subnet and separate virtual network cards for communication to the storage. -If you are using Azure NetApp Files, the NFS volumes for `/han): `anf` 10.23.1.0/26. +If you're using Azure NetApp Files, the NFS volumes for `/han): `anf` 10.23.1.0/26. ## Set up the infrastructure In the instructions that follow, we assume that you've already created the resource group, the Azure virtual network with three Azure network subnets: `client`, `inter` and `hsr`. ### Deploy Linux virtual machines via the Azure portal-1. Deploy the Azure VMs. -For the configuration presented in this document, deploy seven virtual machines: - - three virtual machines to serve as HANA DB nodes for HANA replication site 1: **hana-s1-db1**, **hana-s1-db2** and **hana-s1-db3** - - three virtual machines to serve as HANA DB nodes for HANA replication site 2: **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3** - - a small virtual machine to serve as *majority maker*: **hana-s-mm** ++1. Deploy the Azure VMs. ++ For the configuration presented in this document, deploy seven virtual machines: ++ * three virtual machines to serve as HANA DB nodes for HANA replication site 1: **hana-s1-db1**, **hana-s1-db2** and **hana-s1-db3** + * three virtual machines to serve as HANA DB nodes for HANA replication site 2: **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3** + * a small virtual machine to serve as *majority maker*: **hana-s-mm** The VMs, deployed as SAP DB HANA nodes should be certified by SAP for HANA as published in the [SAP HANA Hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). When deploying the HANA DB nodes, make sure that [Accelerated Network](../../virtual-network/create-vm-accelerated-networking-cli.md) is selected. - For the majority maker node, you can deploy a small VM, as this VM doesn't run any of the SAP HANA resources. The majority maker VM is used in the cluster configuration to achieve odd number of cluster nodes in a split-brain scenario. The majority maker VM only needs one virtual network interface in the `client` subnet in this example. + For the majority maker node, you can deploy a small VM, as this VM doesn't run any of the SAP HANA resources. The majority maker VM is used in the cluster configuration to achieve odd number of cluster nodes in a split-brain scenario. The majority maker VM only needs one virtual network interface in the `client` subnet in this example. Deploy local managed disks for `/han). Deploy the primary network interface for each VM in the `client` virtual network subnet. When the VM is deployed via Azure portal, the network interface name is automatically generated. 
In these instructions for simplicity we'll refer to the automatically generated, primary network interfaces, which are attached to the `client` Azure virtual network subnet as **hana-s1-db1-client**, **hana-s1-db2-client**, **hana-s1-db3-client**, and so on. - > [!IMPORTANT]- > Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type. - > If you choose to deploy `/hana/shared` on NFS on Azure Files, we recommend to deploy on SLES15 SP2 and above. + > Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type. + > If you choose to deploy `/hana/shared` on NFS on Azure Files, we recommend to deploy on SLES15 SP2 and above. - 2. Create six network interfaces, one for each HANA DB virtual machine, in the `inter` virtual network subnet (in this example, **hana-s1-db1-inter**, **hana-s1-db2-inter**, **hana-s1-db3-inter**, **hana-s2-db1-inter**, **hana-s2-db2-inter**, and **hana-s2-db3-inter**). 3. Create six network interfaces, one for each HANA DB virtual machine, in the `hsr` virtual network subnet (in this example, **hana-s1-db1-hsr**, **hana-s1-db2-hsr**, **hana-s1-db3-hsr**, **hana-s2-db1-hsr**, **hana-s2-db2-hsr**, and **hana-s2-db3-hsr**). -4. Attach the newly created virtual network interfaces to the corresponding virtual machines: -- a. Go to the virtual machine in the [Azure portal](https://portal.azure.com/#home). +4. Attach the newly created virtual network interfaces to the corresponding virtual machines: - b. In the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hana-s1-db1**), and then select the virtual machine. + 1. Go to the virtual machine in the [Azure portal](https://portal.azure.com/#home). + 2. In the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hana-s1-db1**), and then select the virtual machine. + 3. In the **Overview** pane, select **Stop** to deallocate the virtual machine. + 4. Select **Networking**, and then attach the network interface. In the **Attach network interface** drop-down list, select the already created network interfaces for the `inter` and `hsr` subnets. + 5. Select **Save**. + 6. Repeat steps b through e for the remaining virtual machines (in our example, **hana-s1-db2**, **hana-s1-db3**, **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3**). + 7. Leave the virtual machines in stopped state for now. Next, we'll enable [accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md) for all newly attached network interfaces. - c. In the **Overview** pane, select **Stop** to deallocate the virtual machine. -- d. Select **Networking**, and then attach the network interface. 
In the **Attach network interface** drop-down list, select the already created network interfaces for the `inter` and `hsr` subnets. - - e. Select **Save**. - - f. Repeat steps b through e for the remaining virtual machines (in our example, **hana-s1-db2**, **hana-s1-db3**, **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3**). - - g. Leave the virtual machines in stopped state for now. Next, we'll enable [accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md) for all newly attached network interfaces. +5. Enable accelerated networking for the additional network interfaces for the `inter` and `hsr` subnets by doing the following steps: -5. Enable accelerated networking for the additional network interfaces for the `inter` and `hsr` subnets by doing the following steps: + 1. Open [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) in the [Azure portal](https://portal.azure.com/#home). + 2. Execute the following commands to enable accelerated networking for the additional network interfaces, which are attached to the `inter` and `hsr` subnets. - a. Open [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) in the [Azure portal](https://portal.azure.com/#home). -- b. Execute the following commands to enable accelerated networking for the additional network interfaces, which are attached to the `inter` and `hsr` subnets. -- ```azurecli - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-inter --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-inter --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-inter --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-inter --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-inter --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-inter --accelerated-networking true - - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-hsr --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-hsr --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-hsr --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-hsr --accelerated-networking true - az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-hsr --accelerated-networking true - az network nic update --id 
/subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-hsr --accelerated-networking true - ``` + ```bash + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-inter --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-inter --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-inter --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-inter --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-inter --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-inter --accelerated-networking true + + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-hsr --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-hsr --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-hsr --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-hsr --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-hsr --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-hsr --accelerated-networking true + ``` -7. Start the HANA DB virtual machines +6. Start the HANA DB virtual machines ### Deploy Azure Load Balancer 1. We recommend using standard load balancer. Follow these configuration steps to deploy standard load balancer:+ 1. First, create a front-end IP pool: 1. Open the load balancer, select **frontend IP pool**, and select **Add**.- 1. Enter the name of the new front-end IP pool (for example, **hana-frontend**). - 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.23.0.27**). - 1. Select **OK**. - 1. After the new front-end IP pool is created, note the pool IP address. + 2. Enter the name of the new front-end IP pool (for example, **hana-frontend**). + 3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.23.0.27**). + 4. Select **OK**. + 5. After the new front-end IP pool is created, note the pool IP address. ++ 2. Create a single back-end pool: - 1. Create a single back-end pool: - 1. Open the load balancer, select **Backend pools**, and then select **Add**.- 1. 
Enter the name of the new back-end pool (for example, **hana-backend**). - 2. Select **NIC** for Backend Pool Configuration. - 1. Select **Add a virtual machine**. - 1. Select the virtual machines of the HANA cluster (the NICs for the `client` subnet). - 1. Select **Add**. - 2. Select **Save**. + 2. Enter the name of the new back-end pool (for example, **hana-backend**). + 3. Select **NIC** for Backend Pool Configuration. + 4. Select **Add a virtual machine**. + 5. Select the virtual machines of the HANA cluster (the NICs for the `client` subnet). + 6. Select **Add**. + 7. Select **Save**. - 1. Next, create a health probe: + 3. Next, create a health probe: 1. Open the load balancer, select **health probes**, and select **Add**.- 1. Enter the name of the new health probe (for example, **hana-hp**). - 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5. - 1. Select **OK**. + 2. Enter the name of the new health probe (for example, **hana-hp**). + 3. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5. + 4. Select **OK**. ++ 4. Next, create the load-balancing rules: - 1. Next, create the load-balancing rules: - 1. Open the load balancer, select **load balancing rules**, and select **Add**.- 1. Enter the name of the new load balancer rule (for example, **hana-lb**). - 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**). - 2. Increase idle timeout to 30 minutes. - 1. Select **HA Ports**. - 1. Make sure to **enable Floating IP**. - 1. Select **OK**. + 2. Enter the name of the new load balancer rule (for example, **hana-lb**). + 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**). + 4. Increase idle timeout to 30 minutes. + 5. Select **HA Ports**. + 6. Make sure to **enable Floating IP**. + 7. Select **OK**. > [!IMPORTANT]- > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC. - - > [!Note] + > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC. ++ > [!NOTE] > When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT] For the configuration presented in this document, deploy seven virtual machines: ### Deploy NFS -There are two options for deploying Azure native NFS for `/han). Azure files supports NFSv4.1 protocol, NFS on Azure NetApp files supports both NFSv4.1 and NFSv3. +There are two options for deploying Azure native NFS for `/han). 
Azure files support NFSv4.1 protocol, NFS on Azure NetApp files supports both NFSv4.1 and NFSv3. -The next sections describe the steps to deploy NFS - you'll need to select only *one* of the options. +The next sections describe the steps to deploy NFS - you'll need to select only *one* of the options. > [!TIP] > You chose to deploy `/han). -#### Deploy the Azure NetApp Files infrastructure +#### Deploy the Azure NetApp Files infrastructure -Deploy ANF volumes for the `/han#set-up-the-azure-netapp-files-infrastructure). +Deploy ANF volumes for the `/han#set-up-the-azure-netapp-files-infrastructure). -In this example, the following Azure NetApp Files volumes were used: +In this example, the following Azure NetApp Files volumes were used: * volume **HN1**-shared-s1 (nfs://10.23.1.7/**HN1**-shared-s1) * volume **HN1**-shared-s2 (nfs://10.23.1.7/**HN1**-shared-s2) #### Deploy the NFS on Azure Files infrastructure -Deploy Azure Files NFS shares for the `/han?tabs=azure-portal). +Deploy Azure Files NFS shares for the `/han?tabs=azure-portal). -In this example, the following Azure Files NFS shares were used: +In this example, the following Azure Files NFS shares were used: * share **hn1**-shared-s1 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1) * share **hn1**-shared-s2 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2) In this example, the following Azure Files NFS shares were used: ## Operating system configuration and preparation The instructions in the next sections are prefixed with one of the following abbreviations:-* **[A]**: Applicable to all nodes, including majority maker -* **[AH]**: Applicable to all HANA DB nodes -* **[M]**: Applicable to the majority maker node only -* **[AH1]**: Applicable to all HANA DB nodes on SITE 1 -* **[AH2]**: Applicable to all HANA DB nodes on SITE 2 -* **[1]**: Applicable only to HANA DB node 1, SITE 1 -* **[2]**: Applicable only to HANA DB node 1, SITE 2 ++* **[A]**: Applicable to all nodes, including majority maker +* **[AH]**: Applicable to all HANA DB nodes +* **[M]**: Applicable to the majority maker node only +* **[AH1]**: Applicable to all HANA DB nodes on SITE 1 +* **[AH2]**: Applicable to all HANA DB nodes on SITE 2 +* **[1]**: Applicable only to HANA DB node 1, SITE 1 +* **[2]**: Applicable only to HANA DB node 1, SITE 2 Configure and prepare your OS by doing the following steps: 1. **[A]** Maintain the host files on the virtual machines. Include entries for all subnets. The following entries were added to `/etc/hosts` for this example. 
```bash- # Client subnet - 10.23.0.19 hana-s1-db1 - 10.23.0.20 hana-s1-db2 - 10.23.0.21 hana-s1-db3 - 10.23.0.22 hana-s2-db1 - 10.23.0.23 hana-s2-db2 - 10.23.0.24 hana-s2-db3 - 10.23.0.25 hana-s-mm - # Internode subnet - 10.23.1.132 hana-s1-db1-inter - 10.23.1.133 hana-s1-db2-inter - 10.23.1.134 hana-s1-db3-inter - 10.23.1.135 hana-s2-db1-inter - 10.23.1.136 hana-s2-db2-inter - 10.23.1.137 hana-s2-db3-inter - # HSR subnet - 10.23.1.196 hana-s1-db1-hsr - 10.23.1.197 hana-s1-db2-hsr - 10.23.1.198 hana-s1-db3-hsr - 10.23.1.199 hana-s2-db1-hsr - 10.23.1.200 hana-s2-db2-hsr - 10.23.1.201 hana-s2-db3-hsr + # Client subnet + 10.23.0.19 hana-s1-db1 + 10.23.0.20 hana-s1-db2 + 10.23.0.21 hana-s1-db3 + 10.23.0.22 hana-s2-db1 + 10.23.0.23 hana-s2-db2 + 10.23.0.24 hana-s2-db3 + 10.23.0.25 hana-s-mm + + # Internode subnet + 10.23.1.132 hana-s1-db1-inter + 10.23.1.133 hana-s1-db2-inter + 10.23.1.134 hana-s1-db3-inter + 10.23.1.135 hana-s2-db1-inter + 10.23.1.136 hana-s2-db2-inter + 10.23.1.137 hana-s2-db3-inter + + # HSR subnet + 10.23.1.196 hana-s1-db1-hsr + 10.23.1.197 hana-s1-db2-hsr + 10.23.1.198 hana-s1-db3-hsr + 10.23.1.199 hana-s2-db1-hsr + 10.23.1.200 hana-s2-db2-hsr + 10.23.1.201 hana-s2-db3-hsr ``` 2. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with Microsoft for Azure configuration settings. - <pre><code> + ```bash vi /etc/sysctl.d/ms-az.conf+ # Add the following entries in the configuration file net.ipv6.conf.all.disable_ipv6 = 1 net.ipv4.tcp_max_syn_backlog = 16348 net.ipv4.conf.all.rp_filter = 0 sunrpc.tcp_slot_table_entries = 128 vm.swappiness=10- </code></pre> + ``` > [!TIP]- > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). + > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more information, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). 3. **[A]** SUSE delivers special resource agents for SAP HANA and by default agents for SAP HANA scale-up are installed. Uninstall the packages for scale-up, if installed and install the packages for scenario SAP HANA scale-out. The step needs to be performed on all cluster VMs, including the majority maker. Configure and prepare your OS by doing the following steps: > SAPHanaSR-ScaleOut version 0.181 or higher must be installed. ```bash- # Uninstall scale-up packages and patterns - sudo zypper remove patterns-sap-hana - sudo zypper remove SAPHanaSR SAPHanaSR-doc yast2-sap-ha - # Install the scale-out packages and patterns - sudo zypper in SAPHanaSR-ScaleOut SAPHanaSR-ScaleOut-doc - sudo zypper in -t pattern ha_sles + # Uninstall scale-up packages and patterns + sudo zypper remove patterns-sap-hana + sudo zypper remove SAPHanaSR SAPHanaSR-doc yast2-sap-ha + + # Install the scale-out packages and patterns + sudo zypper in SAPHanaSR-ScaleOut SAPHanaSR-ScaleOut-doc + sudo zypper in -t pattern ha_sles ``` 4. **[AH]** Prepare the VMs - apply the recommended settings per SAP note [2205917] for SUSE Linux Enterprise Server for SAP Applications. 
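One common way to apply the recommendations from SAP note [2205917] on SLES for SAP Applications is the `saptune` tool. The following is only a sketch, assuming `saptune` is available on the image you deployed; always verify the resulting parameters against the note itself:

```bash
# Apply the SLES tuning solution for SAP HANA (covers the SAP note 2205917 recommendations)
sudo saptune solution apply HANA

# Check that the expected notes and parameters are in effect
sudo saptune solution verify HANA
```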
You chose to deploy the SAP shared directories on [NFS share on Azure Files](../ ### Mount the shared file systems (Azure NetApp Files NFS) -In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section, only if you are using NFS on Azure NetApp Files. +In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section, only if you're using NFS on Azure NetApp Files. 1. **[AH]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings. - <pre><code> + ```bash vi /etc/sysctl.d/91-NetApp-HANA.conf+ # Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 In this example, the shared HANA file systems are deployed on Azure NetApp Files net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 1- </code></pre> + ``` 2. **[AH]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). - <pre><code> + ```bash vi /etc/modprobe.d/sunrpc.conf+ # Insert the following line options sunrpc tcp_max_slot_table_entries=128- </code></pre> -+ ``` 3. **[AH]** Create mount points for the HANA database volumes. In this example, the shared HANA file systems are deployed on Azure NetApp Files ### Mount the shared file systems (Azure Files NFS) -In this example, the shared HANA file systems are deployed on NFS on Azure Files. Follow the steps in this section, only if you are using NFS on Azure Files. +In this example, the shared HANA file systems are deployed on NFS on Azure Files. Follow the steps in this section, only if you're using NFS on Azure Files. 1. **[AH]** Create mount points for the HANA database volumes. In this example, the shared HANA file systems are deployed on NFS on Azure Files mkdir -p /hana/shared ``` -6. **[AH1]** Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs. +2. **[AH1]** Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs. ```bash sudo vi /etc/fstab In this example, the shared HANA file systems are deployed on NFS on Azure Files sudo mount -a ``` -7. **[AH2]** Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs. +3. **[AH2]** Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs. ```bash sudo vi /etc/fstab In this example, the shared HANA file systems are deployed on NFS on Azure Files sudo mount -a ``` -8. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs with NFS protocol version **NFSv4.1**. +4. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs with NFS protocol version **NFSv4.1**. ```bash sudo nfsstat -m In this example, the shared HANA file systems are deployed on NFS on Azure Files ``` ### Prepare the data and log local file systems-In the presented configuration, file systems `/hana/data` and `/hana/log` are deployed on managed disk and are locally attached to each HANA DB VM. -You will need to execute the steps to create the local data and log volumes on each HANA DB virtual machine. 
++In the presented configuration, file systems `/hana/data` and `/hana/log` are deployed on managed disk and are locally attached to each HANA DB VM. +You'll need to execute the steps to create the local data and log volumes on each HANA DB virtual machine. Set up the disk layout with **Logical Volume Manager (LVM)**. The following example assumes that each HANA virtual machine has three data disks attached, that are used to create two volumes. 1. **[AH]** List all of the available disks:+ ```bash ls /dev/disk/azure/scsi1/lun* ``` Set up the disk layout with **Logical Volume Manager (LVM)**. The following exa ``` 2. **[AH]** Create physical volumes for all of the disks that you want to use:+ ```bash sudo pvcreate /dev/disk/azure/scsi1/lun0 sudo pvcreate /dev/disk/azure/scsi1/lun1 sudo pvcreate /dev/disk/azure/scsi1/lun2 ``` -3. **[AH]** Create a volume group for the data files. Use one volume group for the log files and one for the shared directory of SAP HANA: +3. **[AH]** Create a volume group for the data files. Use one volume group for the log files and one for the shared directory of SAP HANA:\ + ```bash sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2 ``` -4. **[AH]** Create the logical volumes. +4. **[AH]** Create the logical volumes. + A linear volume is created when you use `lvcreate` without the `-i` switch. We suggest that you create a striped volume for better I/O performance, and align the stripe sizes to the values documented in [SAP HANA VM storage configurations](./hana-vm-operations-storage.md). The `-i` argument should be the number of the underlying physical volumes and the `-I` argument is the stripe size. In this document, two physical volumes are used for the data volume, so the `-i` switch argument is set to **2**. The stripe size for the data volume is **256 KiB**. One physical volume is used for the log volume, so no `-i` or `-I` switches are explicitly used for the log volume commands. > [!IMPORTANT] Set up the disk layout with **Logical Volume Manager (LVM)**. The following exa ``` 5. **[AH]** Create the mount directories and copy the UUID of all of the logical volumes:+ ```bash sudo mkdir -p /hana/data/HN1 sudo mkdir -p /hana/log/HN1 Set up the disk layout with **Logical Volume Manager (LVM)**. The following exa ``` 6. **[AH]** Create `fstab` entries for the logical volumes and mount:+ ```bash sudo vi /etc/fstab ``` Include all virtual machines, including the majority maker in the cluster. > [!IMPORTANT] > Don't set `quorum expected-votes` to 2, as this is not a two node cluster. -> Make sure that cluster property `concurrent-fencing` is enabled, so that node fencing is deserialized. +> Make sure that cluster property `concurrent-fencing` is enabled, so that node fencing is deserialized. ## Installation In this example for deploying SAP HANA in scale-out configuration with HSR on Az 1. **[AH]** Before the HANA installation, set the root password. You can disable the root password after the installation has been completed. Execute as `root` command `passwd`. -2. **[1,2]** Change the permissions on `/hana/shared` +2. **[1,2]** Change the permissions on `/hana/shared` + ```bash chmod 775 /hana/shared ``` -3. **[1]** Verify that you can log in via SSH to the HANA DB VMs in this site **hana-s1-db2** and **hana-s1-db3**, without being prompted for a password. 
If that is not the case, exchange ssh keys as described in [Enable SSH Access via Public Key](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-scaleOut-PerfOpt-12/https://docsupdatetracker.net/index.html#_enable_ssh_access_via_public_key_optional). +3. **[1]** Verify that you can log in via SSH to the HANA DB VMs in this site **hana-s1-db2** and **hana-s1-db3**, without being prompted for a password. If that isn't the case, exchange ssh keys as described in [Enable SSH Access via Public Key](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-scaleOut-PerfOpt-12/https://docsupdatetracker.net/index.html#_enable_ssh_access_via_public_key_optional). + ```bash ssh root@hana-s1-db2 ssh root@hana-s1-db3 ``` 4. **[2]** Verify that you can log in via SSH to the HANA DB VMs in this site **hana-s2-db2** and **hana-s2-db3**, without being prompted for a password. - If that is not the case, exchange ssh keys. + If that isn't the case, exchange ssh keys. + ```bash ssh root@hana-s2-db2 ssh root@hana-s2-db3 ``` -5. **[AH]** Install additional packages, which are required for HANA 2.0 SP4 and above. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824) for your SLES version. +5. **[AH]** Install additional packages, which are required for HANA 2.0 SP4 and above. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824) for your SLES version. ```bash # In this example, using SLES12 SP5 sudo zypper install libgcc_s1 libstdc++6 libatomic1 ```+ ### HANA installation on the first node on each site -1. **[1]** Install SAP HANA by following the instructions in the [SAP HANA 2.0 Installation and Update guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/2c1988d620e04368aa4103bf26f17727/7eb0167eb35e4e2885415205b8383584.html?version=2.0.05). In the instructions that follow, we show the SAP HANA installation on the first node on SITE 1. +1. **[1]** Install SAP HANA by following the instructions in the [SAP HANA 2.0 Installation and Update guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/2c1988d620e04368aa4103bf26f17727/7eb0167eb35e4e2885415205b8383584.html?version=2.0.05). In the instructions that follow, we show the SAP HANA installation on the first node on SITE 1. a. Start the **hdblcm** program as `root` from the HANA installation software directory. Use the `internal_network` parameter and pass the address space for subnet, which is used for the internal HANA inter-node communication. 
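A minimal sketch of the invocation, assuming the HANA installation media directory is the current directory and using the `inter` subnet address space from this example:

```bash
# Pass the inter-node subnet so HANA binds internal communication to the 'inter' network
./hdblcm --internal_network=10.23.1.128/26
```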
In this example for deploying SAP HANA in scale-out configuration with HSR on Az * For **Confirm SAP Host Agent User (sapadm) Password**: enter the password * For **System Administrator (hn1adm) Password**: enter the password * For **System Administrator Home Directory** [/usr/sap/HN1/home]: press Enter to accept the default- * For **System Administrator Login Shell** [/bin/sh]: press Enter to accept the default - * For **System Administrator User ID** [1001]: press Enter to accept the default - * For **Enter ID of User Group (sapsys)** [79]: press Enter to accept the default + * For **System Administrator Login Shell** [/bin/sh]: press Enter to accept the default + * For **System Administrator User ID** [1001]: press Enter to accept the default + * For **Enter ID of User Group (sapsys)** [79]: press Enter to accept the default * For **System Database User (system) Password**: enter the system's password * For **Confirm System Database User (system) Password**: enter system's password- * For **Restart system after machine reboot?** [n]: enter **n** + * For **Restart system after machine reboot?** [n]: enter **n** * For **Do you want to continue (y/n)**: validate the summary and if everything looks good, enter **y** -2. **[2]** Repeat the preceding step to install SAP HANA on the first node on SITE 2. +2. **[2]** Repeat the preceding step to install SAP HANA on the first node on SITE 2. 3. **[1,2]** Verify global.ini In this example for deploying SAP HANA in scale-out configuration with HSR on Az basepath_shared = no ``` -4. **[1,2]** Restart SAP HANA to activate the changes. +5. **[1,2]** Restart SAP HANA to activate the changes. ```bash sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem In this example for deploying SAP HANA in scale-out configuration with HSR on Az sudo chmod o+w -R /hana/data /hana/log ``` -8. **[1]** Install the secondary HANA nodes. The example instructions in this step are for SITE 1. - a. Start the resident **hdblcm** program as `root`. +8. **[1]** Install the secondary HANA nodes. The example instructions in this step are for SITE 1. ++ a. Start the resident **hdblcm** program as `root`. + ```bash cd /hana/shared/HN1/hdblcm ./hdblcm In this example for deploying SAP HANA in scale-out configuration with HSR on Az * For **Certificate Host Name For Host hana-s1-db3** [hana-s1-db3]: press Enter to accept the default * For **Do you want to continue (y/n)**: validate the summary and if everything looks good, enter **y** -9. **[2]** Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2. +9. **[2]** Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2. ## Configure SAP HANA 2.0 System Replication In this example for deploying SAP HANA in scale-out configuration with HSR on Az ``` 2. **[2]** Configure System Replication on SITE 2:- + Register the second site to start the system replication. Run the following command as <hanasid\>adm: ```bash In this example for deploying SAP HANA in scale-out configuration with HSR on Az Check the replication status and wait until all databases are in sync. 
```bash- sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py" - # | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | - # | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | - # | -- | - | -- | | | - | | - | | | | - | -- | -- | -- | - # | HN1 | hana-s1-db3 | 30303 | indexserver | 5 | 1 | HANA_S1 | hana-s2-db3 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | - # | SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 1 | HANA_S1 | hana-s2-db1 | 30301 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | - # | HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 1 | HANA_S1 | hana-s2-db1 | 30307 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | - # | HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 1 | HANA_S1 | hana-s2-db1 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | - # | HN1 | hana-s1-db2 | 30303 | indexserver | 4 | 1 | HANA_S1 | hana-s2-db2 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | - # - # status system replication site "2": ACTIVE - # overall system replication status: ACTIVE - # - # Local System Replication State - # - # mode: PRIMARY - # site id: 1 - # site name: HANA_S1 + sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py" + + # | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | + # | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | + # | -- | - | -- | | | - | | - | | | | - | -- | -- | -- | + # | HN1 | hana-s1-db3 | 30303 | indexserver | 5 | 1 | HANA_S1 | hana-s2-db3 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | + # | SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 1 | HANA_S1 | hana-s2-db1 | 30301 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | + # | HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 1 | HANA_S1 | hana-s2-db1 | 30307 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | + # | HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 1 | HANA_S1 | hana-s2-db1 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | + # | HN1 | hana-s1-db2 | 30303 | indexserver | 4 | 1 | HANA_S1 | hana-s2-db2 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | | + # + # status system replication site "2": ACTIVE + # overall system replication status: ACTIVE + # + # Local System Replication State + # + # mode: PRIMARY + # site id: 1 + # site name: HANA_S1 ``` -4. **[1,2]** Change the HANA configuration so that communication for HANA system replication is directed through the HANA system replication virtual network interfaces. - - Stop HANA on both sites - ```bash - sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB - ``` +4. **[1,2]** Change the HANA configuration so that communication for HANA system replication is directed through the HANA system replication virtual network interfaces. - - Edit global.ini to add the host mapping for HANA system replication: use the IP addresses from the `hsr` subnet. 
- ```bash - sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini - #Add the section - [system_replication_hostname_resolution] - 10.23.1.196 = hana-s1-db1 - 10.23.1.197 = hana-s1-db2 - 10.23.1.198 = hana-s1-db3 - 10.23.1.199 = hana-s2-db1 - 10.23.1.200 = hana-s2-db2 - 10.23.1.201 = hana-s2-db3 - ``` + * Stop HANA on both sites - - Start HANA on both sites - ```bash - sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB - ``` + ```bash + sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB + ``` ++ * Edit global.ini to add the host mapping for HANA system replication: use the IP addresses from the `hsr` subnet. + + ```bash + sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini + #Add the section + [system_replication_hostname_resolution] + 10.23.1.196 = hana-s1-db1 + 10.23.1.197 = hana-s1-db2 + 10.23.1.198 = hana-s1-db3 + 10.23.1.199 = hana-s2-db1 + 10.23.1.200 = hana-s2-db2 + 10.23.1.201 = hana-s2-db3 + ``` ++ * Start HANA on both sites ++ ```bash + sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB + ``` For more information, see [Host Name resolution for System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c0cba1cb2ba34ec89f45b48b2157ec7b.html?version=2.0.05). ## Create file system resources -Create a dummy file system cluster resource, which will monitor and report failures, in case there is a problem accessing the NFS-mounted file system `/hana/shared`. That allows the cluster to trigger failover, in case there is a problem accessing `/hana/shared`. For more details see [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904) +Create a dummy file system cluster resource, which will monitor and report failures, in case there's a problem accessing the NFS-mounted file system `/hana/shared`. That allows the cluster to trigger failover, in case there's a problem accessing `/hana/shared`. For more information, see [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904) ++1. **[1]** Place pacemaker in maintenance mode, in preparation for the creation of the HANA cluster resources. -1. **[1]** Place pacemaker in maintenance mode, in preparation for the creation of the HANA cluster resources. ```bash crm configure property maintenance-mode=true ``` -2. **[1,2]** Create the directory on the NFS mounted file system /hana/shared, which will be used in the special file system monitoring resource. The directories need to be created on both sites. +2. **[1,2]** Create the directory on the NFS mounted file system /hana/shared, which will be used in the special file system monitoring resource. The directories need to be created on both sites. + ```bash mkdir -p /hana/shared/HN1/check ``` -2. **[AH]** Create the directory, which will be used to mount the special file system monitoring resource. The directory needs to be created on all HANA cluster nodes. +3. **[AH]** Create the directory, which will be used to mount the special file system monitoring resource. The directory needs to be created on all HANA cluster nodes. + ```bash mkdir -p /hana/check ``` -3. **[1]** Create the file system cluster resources. +4. **[1]** Create the file system cluster resources. 
```bash crm configure primitive fs_HN1_HDB03_fscheck Filesystem \ Create a dummy file system cluster resource, which will monitor and report failu op monitor interval=120 timeout=120 on-fail=fence \ op_params OCF_CHECK_LEVEL=20 \ op start interval=0 timeout=120 op stop interval=0 timeout=120-+ crm configure clone cln_fs_HN1_HDB03_fscheck fs_HN1_HDB03_fscheck \ meta clone-node-max=1 interleave=true-+ crm configure location loc_cln_fs_HN1_HDB03_fscheck_not_on_mm \ cln_fs_HN1_HDB03_fscheck -inf: hana-s-mm ```- + `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system may remain mounted, despite being inaccessible. - `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. + `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. ## Implement HANA HA hooks SAPHanaSrMultiTarget and susChkSrv -This important step is to optimize the integration with the cluster and detection when a cluster failover is possible. It is highly recommended to configure SAPHanaSrMultiTarget Python hook. For HANA 2.0 SP5 and higher, implementing both SAPHanaSrMultiTarget and susChkSrv hooks is recommended. +This important step is to optimize the integration with the cluster and detection when a cluster failover is possible. It's highly recommended to configure SAPHanaSrMultiTarget Python hook. For HANA 2.0 SP5 and higher, implementing both SAPHanaSrMultiTarget and susChkSrv hooks is recommended. -> [!NOTE] +> [!NOTE] > SAPHanaSrMultiTarget HA provider replaces SAPHanaSR for HANA scale-out. SAPHanaSR was described in earlier version of this document. > See [SUSE blog post](https://www.suse.com/c/sap-hana-scale-out-multi-target-upgrade/) about changes with the new HANA HA hook. -Provided steps for SAPHanaSrMultiTarget hook are for a new installation. Upgrading an existing environment from SAPHanaSR to SAPHanaSrMultiTarget provider requires several changes and are _NOT_ described in this document. If the existing environment uses no third site for disaster recovery and [HANA multi-target system replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/ba457510958241889a459e606bbcf3d3.html) is not used, SAPHanaSR HA provider can remain in use. +Provided steps for SAPHanaSrMultiTarget hook are for a new installation. Upgrading an existing environment from SAPHanaSR to SAPHanaSrMultiTarget provider requires several changes and are *NOT* described in this document. If the existing environment uses no third site for disaster recovery and [HANA multi-target system replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/ba457510958241889a459e606bbcf3d3.html) isn't used, SAPHanaSR HA provider can remain in use. SusChkSrv extends the functionality of the main SAPHanaSrMultiTarget HA provider. It acts in the situation when HANA process hdbindexserver crashes. If a single process crashes typically HANA tries to restart it. Restarting the indexserver process can take a long time, during which the HANA database isn't responsive. 
With susChkSrv implemented, an immediate and configurable action is executed, instead of waiting on hdbindexserver process to restart on the same node. In HANA scale-out susChkSrv acts for every HANA VM independently. The configured action will kill HANA or fence the affected VM, which triggers a failover in the configured timeout period. SUSE SLES 15 SP1 or higher is required for operation of both HANA HA hooks. Following table shows other dependencies. |SAP HANA HA hook | HANA version required | SAPHanaSR-ScaleOut required |-|-| -- | | +|-| -- | | | SAPHanaSrMultiTarget | HANA 2.0 SPS4 or higher | 0.180 or higher | | susChkSrv | HANA 2.0 SPS5 or higher | 0.184.1 or higher | Steps to implement both hooks: 1. **[1,2]** Stop HANA on both system replication sites. Execute as <sid\>adm: -```bash -sapcontrol -nr 03 -function StopSystem -``` + ```bash + sapcontrol -nr 03 -function StopSystem + ``` 2. **[1,2]** Adjust `global.ini` on each cluster site. If the prerequisites for susChkSrv hook aren't met, entire block `[ha_dr_provider_suschksrv]` shouldn't be configured. You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid values are `[ ignore | stop | kill | fence ]`. You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va ha_dr_saphanasrmultitarget = info ``` - Default location of the HA hooks as dalivered by SUSE is /usr/share/SAPHanaSR-ScaleOut. Using the standard location brings a benefit, that the python hook code is automatically updated through OS or package updates and gets used by HANA at next restart. With an optional own path, such as /hana/shared/myHooks you can decouple OS updates from the used hook version. + Default location of the HA hooks as delivered by SUSE is /usr/share/SAPHanaSR-ScaleOut. Using the standard location brings a benefit, that the python hook code is automatically updated through OS or package updates and gets used by HANA at next restart. With an optional own path, such as /hana/shared/myHooks you can decouple OS updates from the used hook version. 3. **[AH]** The cluster requires sudoers configuration on the cluster nodes for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root` adapt the values of hn1 with correct lowercase SID. You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va sapcontrol -nr 03 -function StartSystem ``` -5. **[A]** Verify the hook installation is active on all cluster nodes. Execute as <sid\>adm. +5. **[A]** Verify the hook installation is active on all cluster nodes. Execute as <sid\>adm. ```bash cdtrace You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va ``` Verify the susChkSrv hook installation. Execute as <sid\>adm.+ ```bash cdtrace egrep '(LOST:|STOP:|START:|DOWN:|init|load|fail)' nameserver_suschksrv.trc You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va ## Create SAP HANA cluster resources -1. **[1]** Create the HANA cluster resources. Execute the following commands as `root`. +1. **[1]** Create the HANA cluster resources. Execute the following commands as `root`. + 1. Make sure the cluster is already maintenance mode. - - 2. Next, create the HANA Topology resource. ++ 2. Next, create the HANA Topology resource. 
+ ```bash sudo crm configure primitive rsc_SAPHanaTopology_HN1_HDB03 ocf:suse:SAPHanaTopology \ op monitor interval="10" timeout="600" \ op start interval="0" timeout="600" \ op stop interval="0" timeout="300" \ params SID="HN1" InstanceNumber="03"-+ sudo crm configure clone cln_SAPHanaTopology_HN1_HDB03 rsc_SAPHanaTopology_HN1_HDB03 \ meta clone-node-max="1" target-role="Started" interleave="true" ``` - 3. Next, create the HANA instance resource. - > [!NOTE] + 3. Next, create the HANA instance resource. ++ > [!NOTE] > This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, weΓÇÖll remove them from this article.- + ```bash sudo crm configure primitive rsc_SAPHana_HN1_HDB03 ocf:suse:SAPHanaController \ op start interval="0" timeout="3600" \ You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va op monitor interval="61" role="Slave" timeout="700" \ params SID="HN1" InstanceNumber="03" PREFER_SITE_TAKEOVER="true" \ DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"-+ sudo crm configure ms msl_SAPHana_HN1_HDB03 rsc_SAPHana_HN1_HDB03 \ meta clone-node-max="1" master-max="1" interleave="true" ```+ > [!IMPORTANT]- > We recommend as a best practice that you only set AUTOMATED_REGISTER to **no**, while performing thorough fail-over tests, to prevent failed primary instance to automatically register as secondary. Once the fail-over tests have completed successfully, set AUTOMATED_REGISTER to **yes**, so that after takeover system replication can resume automatically. + > We recommend as a best practice that you only set AUTOMATED_REGISTER to **no**, while performing thorough fail-over tests, to prevent failed primary instance to automatically register as secondary. Once the fail-over tests have completed successfully, set AUTOMATED_REGISTER to **yes**, so that after takeover system replication can resume automatically. ++ 4. Create Virtual IP and associated resources. - 4. Create Virtual IP and associated resources. ```bash sudo crm configure primitive rsc_ip_HN1_HDB03 ocf:heartbeat:IPaddr2 \ op monitor interval="10s" timeout="20s" \ params ip="10.23.0.27"-+ sudo crm configure primitive rsc_nc_HN1_HDB03 azure-lb port=62503 \ op monitor timeout=20s interval=10 \ meta resource-stickiness=0-+ sudo crm configure group g_ip_HN1_HDB03 rsc_ip_HN1_HDB03 rsc_nc_HN1_HDB03 ``` - 5. Create the cluster constraints + 5. Create the cluster constraints + ```bash # Colocate the IP with HANA master sudo crm configure colocation col_saphana_ip_HN1_HDB03 4000: g_ip_HN1_HDB03:Started \ msl_SAPHana_HN1_HDB03:Master -+ # Start HANA Topology before HANA instance sudo crm configure order ord_SAPHana_HN1_HDB03 Optional: cln_SAPHanaTopology_HN1_HDB03 \ msl_SAPHana_HN1_HDB03-+ # HANA resources don't run on the majority maker node sudo crm configure location loc_SAPHanaCon_not_on_majority_maker msl_SAPHana_HN1_HDB03 -inf: hana-s-mm sudo crm configure location loc_SAPHanaTop_not_on_majority_maker cln_SAPHanaTopology_HN1_HDB03 -inf: hana-s-mm ``` -2. **[1]** Configure additional cluster properties +2. **[1]** Configure additional cluster properties + ```bash sudo crm configure rsc_defaults resource-stickiness=1000 sudo crm configure rsc_defaults migration-threshold=50 ``` -3. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is ok and that all of the resources are started. +3. **[1]** Place the cluster out of maintenance mode. 
Make sure that the cluster status is ok and that all of the resources are started. + ```bash # Cleanup any failed resources - the following command is example crm resource cleanup rsc_SAPHana_HN1_HDB03-+ # Place the cluster out of maintenance mode sudo crm configure property maintenance-mode=false ``` 4. **[1]** Verify the communication between the HANA HA hook and the cluster, showing status SOK for SID and both replication sites with status P(rimary) or S(econdary).+ ```bash sudo /usr/sbin/SAPHanaSR-showAttr # Expected result You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va > [!NOTE] > The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup. For instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database. -## Test SAP HANA failover +## Test SAP HANA failover > [!NOTE] > This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, weΓÇÖll remove them from this article. -1. Before you start a test, check the cluster and SAP HANA system replication status. +1. Before you start a test, check the cluster and SAP HANA system replication status. ++ a. Verify that there are no failed cluster actions - a. Verify that there are no failed cluster actions ```bash #Verify that there are no failed cluster actions crm status You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va The SAP HANA resource agents depend on binaries, stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. A test that can be performed, is to create a temporary firewall rule to block access to the `/hana/shared` NFS mounted file system on one of the primary site VMs. This approach validates that the cluster will fail over, if access to `/hana/shared` is lost on the active system replication site. **Expected result**: When you block the access to the `/hana/shared` NFS mounted file system on one of the primary site VMs, the monitoring operation that performs read/write operation on file system, will fail, as it is not able to access the file system and will trigger HANA resource failover. The same result is expected when your HANA node loses access to the NFS share. - + You can check the state of the cluster resources by executing `crm_mon` or `crm status`. Resource state before starting the test:+ ```bash # Output of crm_mon #7 nodes configured You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va # Resource Group: g_ip_HN1_HDB03 # rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hana-s2-db1 # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-s2-db1 - ``` + ``` To simulate failure for `/hana/shared`: -- If using NFS on ANF, first confirm the IP address for the `/hana/shared` ANF volume on the primary site. You can do that by running `df -kh|grep /hana/shared`. -- If using NFS on Azure Files, first determine the IP address of the private end point for your storage account. + * If using NFS on ANF, first confirm the IP address for the `/hana/shared` ANF volume on the primary site. You can do that by running `df -kh|grep /hana/shared`. + * If using NFS on Azure Files, first determine the IP address of the private end point for your storage account. 
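For reference, a minimal sketch of how you might look up these IP addresses, assuming the example layout used in this article (the storage account name is a placeholder):

```bash
# NFS on ANF: the NFS server IP appears in the Filesystem column, for example 10.23.1.7
df -kh | grep /hana/shared

# NFS on Azure Files: resolve the private endpoint of your storage account (replace the account name)
getent hosts <storageaccountname>.file.core.windows.net
```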
Then, set up a temporary firewall rule to block access to the IP address of the `/hana/shared` NFS file system by executing the following command on one of the primary HANA system replication site VMs. - In this example the command was executed on hana-s1-db1 for ANF volume `/hana/shared`. + In this example, the command was executed on hana-s1-db1 for ANF volume `/hana/shared`. ```bash iptables -A INPUT -s 10.23.1.7 -j DROP; iptables -A OUTPUT -d 10.23.1.7 -j DROP ``` - The cluster resources will be migrated to the other HANA system replication site. - - If you set AUTOMATED_REGISTER="false", you will need to configure SAP HANA system replication on secondary site. In this case, you can execute these commands to reconfigure SAP HANA as secondary. + The cluster resources will be migrated to the other HANA system replication site. ++ If you set AUTOMATED_REGISTER="false", you'll need to configure SAP HANA system replication on secondary site. In this case, you can execute these commands to reconfigure SAP HANA as secondary. ```bash # Execute on the secondary You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va crm resource cleanup SAPHana_HN1_HDB03 ``` - The state of the resources, after the test: + The state of the resources, after the test: ```bash # Output of crm_mon You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va # Clone Set: cln_fs_HN1_HDB03_fscheck [fs_HN1_HDB03_fscheck] # Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ] # Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]- Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ] + # Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ] # Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03] # Masters: [ hana-s2-db1 ] # Slaves: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db2 hana-s2-db3 ] You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va # rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-s2-db1 ``` - ## Next steps * [Azure Virtual Machines planning and implementation for SAP][planning-guide] * [Azure Virtual Machines deployment for SAP][deployment-guide] * [Azure Virtual Machines DBMS deployment for SAP][dbms-guide] * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]. +* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]. |
sap | Sap Hana High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md | -[template-multisid-db]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-db-md%2Fazuredeploy.json -[template-converged]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-converged-md%2Fazuredeploy.json To establish high availability in an on-premises SAP HANA deployment, you can use either SAP HANA system replication or shared storage. The preceding figure shows an *example* load balancer that has these configurati ## Deploy for Linux -The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP Applications. +The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is available in Azure Marketplace. You can use the image to deploy new VMs. -An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is available in Azure Marketplace. You can use the image to deploy new VMs. +### Deploy Linux VMs manually via Azure portal -### Deploy by using a template +This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet. -You can use one of the quickstart templates that are on GitHub to deploy the SAP HANA solution. The templates install all the required resources, including the VMs, the load balancer, and the availability set. --To deploy the template: --1. In the Azure portal, open the [database template][template-multisid-db] or the [converged template][template-converged]. -- The *database template* creates the load-balancing rules only for a database. The *converged template* creates load-balancing rules for a database, and also for an SAP ASCS/SCS instance and an SAP ERS (Linux only) instance. If you plan to install an SAP NetWeaver-based system and you want to install the ASCS/SCS instance on the same machines, use the [converged template][template-converged]. --1. Enter the following parameters in the template you choose: -- - **Sap System ID**: Enter the SAP system ID (SAP SID) of the SAP system you want to install. The ID is used as a prefix for the resources that are deployed. - - **Stack Type** (*converged template only*): Select the SAP NetWeaver stack type. - - **Os Type**: Select one of the Linux distributions. For this example, select **SLES 12**. - - **Db Type**: Select **HANA**. - - **Sap System Size**: Enter the number of SAP Application Performance Standard units (SAPS) the new system will provide. If you're not sure how many SAPS the system requires, ask your SAP Technology Partner or System Integrator. - - **System Availability**: Select **HA**. - - **Admin Username and Admin Password**: Create a new user and password that you can use to sign in to the machine. - - **New Or Existing Subnet**: If you already have a virtual network that's connected to your on-premises network, select **Existing**. Otherwise, select **New** and create a new virtual network and subnet. 
- - **Subnet ID**: If you want to deploy the VM to an existing virtual network that has a defined subnet, the VM should be assigned to the name the ID of that specific subnet. The ID usually is in this format: -- /subscriptions/\<subscription ID\>/resourceGroups/\<resource group name\>/providers/Microsoft.Network/virtualNetworks/\<virtual network name\>/subnets/\<subnet name\> --### Manual deployment +Deploy virtual machines for SAP HANA. Choose a suitable SLES image that is supported for HANA system. You can deploy VM in any one of the availability options - scale set, availability zone or availability set. > [!IMPORTANT] > Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type. -To manually deploy SAP HANA system replication: --1. Create a resource group. --1. Create a virtual network. --1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration. --1. Create a load balancer (internal). -- - We recommend that you use the [standard load balancer](../../load-balancer/load-balancer-overview.md). - - Select the virtual network you created in step 2. --1. Create virtual machine 1. -- - Use an SLES4SAP image in the Azure gallery that's supported for SAP HANA on the VM type you selected. - - Select the scale set, availability zone or availability set created in step 3. --1. Create virtual machine 2. -- - Use an SLES4SAP image in the Azure gallery that's supported for SAP HANA on the VM type you selected. - - Select the scale set, availability zone or availability set created in step 3. --1. Add data disks. -- > [!IMPORTANT] - > A floating IP address isn't supported on a network interface card (NIC) secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC. -- > [!NOTE] - > When VMs that don't have public IP addresses are placed in the back-end pool of an internal (no public IP address) standard instance of Azure Load Balancer, the default configuration is no outbound internet connectivity. You can take extra steps to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs by using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). +During VM configuration, you have an option to create or select exiting load balancer in networking section. If you are creating a new load balancer, follow below steps - 1. Set up a standard load balancer.- 1. Create a front-end IP pool:- 1. Open the load balancer, select **frontend IP pool**, and then select **Add**.+ 2. Enter the name of the new front-end IP pool (for example, **hana-frontend**). + 3. Set **Assignment** to **Static** and enter the IP address (for example, **10.0.0.13**). + 4. Select **OK**. + 5. 
After the new front-end IP pool is created, note the pool IP address. - 1. Enter the name of the new front-end IP pool (for example, **hana-frontend**). -- 1. Set **Assignment** to **Static** and enter the IP address (for example, **10.0.0.13**). -- 1. Select **OK**. -- 1. After the new front-end IP pool is created, note the pool IP address. -- 1. Create a single back-end pool: -+ 2. Create a single back-end pool: 1. In the load balancer, select **Backend pools**, and then select **Add**.-- 1. Enter the name of the new back-end pool (for example, **hana-backend**). -- 1. For **Backend Pool Configuration**, select **NIC**. -- 1. Select **Add a virtual machine**. -- 1. Select the VMs that are in the HANA cluster. -- 1. Select **Add**. -- 1. Select **Save**. -- 1. Create a health probe: -+ 2. Enter the name of the new back-end pool (for example, **hana-backend**). + 3. For **Backend Pool Configuration**, select **NIC**. + 4. Select **Add a virtual machine**. + 5. Select the VMs that are in the HANA cluster. + 6. Select **Add**. + 7. Select **Save**. ++ 3. Create a health probe: 1. In the load balancer, select **health probes**, and then select **Add**.+ 2. Enter the name of the new health probe (for example, **hana-hp**). + 3. For **Protocol**, select **TCP** and select port **625\<instance number\>**. Keep **Interval** set to **5**. + 4. Select **OK**. - 1. Enter the name of the new health probe (for example, **hana-hp**). -- 1. For **Protocol**, select **TCP** and select port **625\<instance number\>**. Keep **Interval** set to **5**. -- 1. Select **OK**. -- 1. Create the load-balancing rules: -+ 4. Create the load-balancing rules: 1. In the load balancer, select **load balancing rules**, and then select **Add**.+ 2. Enter the name of the new load balancer rule (for example, **hana-lb**). + 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend**, and **hana-hp**). + 4. Increase the idle timeout to 30 minutes. + 5. Select **HA Ports**. + 6. Enable **Floating IP**. + 7. Select **OK**. - 1. Enter the name of the new load balancer rule (for example, **hana-lb**). -- 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend**, and **hana-hp**). -- 1. Increase the idle timeout to 30 minutes. +For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or [SAP Note 2388694][2388694]. - 1. Select **HA Ports**. -- 1. Enable **Floating IP**. -- 1. Select **OK**. +> [!IMPORTANT] +> A floating IP address isn't supported on a network interface card (NIC) secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC. - For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or [SAP Note 2388694][2388694]. 
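If you prefer to script this load balancer configuration instead of clicking through the portal, a minimal Azure CLI sketch of the same steps could look like the following. The resource group, load balancer, virtual network, and subnet names are placeholders, and probe port 62503 assumes instance number 03; the portal flow above remains the documented path.

```bash
# Front-end IP, back-end pool, health probe, and HA-ports load-balancing rule with floating IP
az network lb frontend-ip create -g <resource-group> --lb-name <lb-name> -n hana-frontend \
  --vnet-name <vnet-name> --subnet <subnet-name> --private-ip-address 10.0.0.13

az network lb address-pool create -g <resource-group> --lb-name <lb-name> -n hana-backend

az network lb probe create -g <resource-group> --lb-name <lb-name> -n hana-hp \
  --protocol tcp --port 62503 --interval 5

az network lb rule create -g <resource-group> --lb-name <lb-name> -n hana-lb \
  --frontend-ip-name hana-frontend --backend-pool-name hana-backend --probe-name hana-hp \
  --protocol All --frontend-port 0 --backend-port 0 --idle-timeout 30 --floating-ip true
```

The VM network interfaces still need to be added to the back-end pool, for example with `az network nic ip-config address-pool add`, mirroring the corresponding portal step above.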
+> [!NOTE] +> When VMs that don't have public IP addresses are placed in the back-end pool of an internal (no public IP address) standard instance of Azure Load Balancer, the default configuration is no outbound internet connectivity. You can take extra steps to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs by using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT] > Don't enable TCP timestamps on Azure VMs that are placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set parameter `net.ipv4.tcp_timestamps` to `0`. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) or SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). Replace `<placeholders>` with the values for your SAP HANA installation. When you're prompted, enter the following values: 1. Choose installation: Enter **1**.- 1. Select additional components for installation: Enter **1**.- 1. Enter installation path: Enter **/hana/shared** and select Enter.- 1. Enter local host name: Enter **..** and select Enter.- 1. Do you want to add additional hosts to the system? (y/n): Enter **n** and select Enter.- 1. Enter the SAP HANA system ID: Enter your HANA SID.- 1. Enter the instance number: Enter the HANA instance number. If you deployed by using the Azure template or if you followed the manual deployment section of this article, enter **03**.- 1. Select the database mode / Enter the index: Enter or select **1** and select Enter.- 1. Select the system usage / Enter the index: Select the system usage value **4**.- 1. Enter the location of the data volumes: Enter **/hana/data/\<HANA SID\>** and select Enter.- 1. Enter the location of the log volumes: Enter **/hana/log/\<HANA SID\>** and select Enter.- 1. Restrict maximum memory allocation?: Enter **n** and select Enter.- 1. Enter the certificate host name for the host: Enter **...** and select Enter.- 1. Enter the SAP host agent user (sapadm) password: Enter the host agent user password, and then select Enter.- 1. Confirm the SAP host agent user (sapadm) password: Enter the host agent user password again, and then select Enter.- 1. Enter the system administrator (hdbadm) password: Enter the system administrator password, and then select Enter.- 1. Confirm the system administrator (hdbadm) password: Enter the system administrator password again, and then select Enter.- 1. Enter the system administrator home directory: Enter **/usr/sap/\<HANA SID\>/home** and select Enter.- 1. Enter the system administrator login shell: Enter **/bin/sh** and select Enter.- 1. Enter the system administrator user ID: Enter **1001** and select Enter.- 1. Enter ID of the user group (sapsys): Enter **79** and select Enter.- 1. Enter the database user (SYSTEM) password: Enter the database user password, and then select Enter.- 1. Confirm the database user (SYSTEM) password: Enter the database user password again, and then select Enter.- 1. Restart the system after machine reboot? (y/n): Enter **n** and select Enter.- 1. Do you want to continue? (y/n): Validate the summary. Enter **y** to continue. 1. **[A]** Upgrade the SAP host agent. 
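The host agent upgrade itself is typically performed with `saphostexec` from the downloaded SAP Host Agent archive; a minimal sketch, with the archive path as a placeholder:

```bash
# Run as root on every node; point -archive at the SAPHOSTAGENT SAR file downloaded from SAP
sudo /usr/sap/hostctrl/exe/saphostexec -upgrade -archive <path-to-SAPHOSTAGENT.SAR>
```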
Before you proceed, make sure that you have fully configured the SUSE high avail ### Set up the load balancer for active/read-enabled system replication -To proceed with extra steps to provision the second virtual IP, make sure that you configured Azure Load Balancer as described in [Manual deployment](#manual-deployment). +To proceed with extra steps to provision the second virtual IP, make sure that you configured Azure Load Balancer as described in [Deploy Linux VMs manually via Azure portal](#deploy-linux-vms-manually-via-azure-portal). For the *standard* load balancer, complete these extra steps on the same load balancer that you created earlier. 1. Create a second front-end IP pool:- 1. Open the load balancer, select **frontend IP pool**, and select **Add**.-- 1. Enter the name of the second front-end IP pool (for example, **hana-secondaryIP**). -- 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.14**). -- 1. Select **OK**. -- 1. After the new front-end IP pool is created, note the front-end IP address. --1. Create a health probe: -+ 2. Enter the name of the second front-end IP pool (for example, **hana-secondaryIP**). + 3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.14**). + 4. Select **OK**. + 5. After the new front-end IP pool is created, note the front-end IP address. +2. Create a health probe: 1. In the load balancer, select **health probes**, and select **Add**.-- 1. Enter the name of the new health probe (for example, **hana-secondaryhp**). -- 1. Select **TCP** as the protocol and port **626\<instance number\>**. Keep the **Interval** value set to **5**, and the **Unhealthy threshold** value set to **2**. -- 1. Select **OK**. --1. Create the load-balancing rules: -+ 2. Enter the name of the new health probe (for example, **hana-secondaryhp**). + 3. Select **TCP** as the protocol and port **626\<instance number\>**. Keep the **Interval** value set to **5**, and the **Unhealthy threshold** value set to **2**. + 4. Select **OK**. +3. Create the load-balancing rules: 1. In the load balancer, select **load balancing rules**, and select **Add**.-- 1. Enter the name of the new load balancer rule (for example, **hana-secondarylb**). -- 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-secondaryIP**, **hana-backend**, and **hana-secondaryhp**). -- 1. Select **HA Ports**. -- 1. Increase idle timeout to 30 minutes. -- 1. Make sure that you **enable floating IP**. -- 1. Select **OK**. + 2. Enter the name of the new load balancer rule (for example, **hana-secondarylb**). + 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-secondaryIP**, **hana-backend**, and **hana-secondaryhp**). + 4. Select **HA Ports**. + 5. Increase idle timeout to 30 minutes. + 6. Make sure that you **enable floating IP**. + 7. Select **OK**. ### Set up HANA active/read-enabled system replication This section describes how you can test your setup. Every test assumes that you' Before you start the test, make sure that Pacemaker doesn't have any failed action (run `crm_mon -r`), that there are no unexpected location constraints (for example, leftovers of a migration test), and that HANA is in sync state, for example, by running `SAPHanaSR-showAttr`. 
```bash- hn1-db-0:~ # SAPHanaSR-showAttr +hn1-db-0:~ # SAPHanaSR-showAttr Sites srHook - SITE2 SOK hn1-db-1 DEMOTED 30 online logreplay nws-hana-vm-0 4:S:master1: You can migrate the SAP HANA master node by running the following command: ```bash- crm resource move msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1 force +crm resource move msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1 force ``` If you set `AUTOMATED_REGISTER="false"`, this sequence of commands migrates the SAP HANA master node and the group that contains the virtual IP address to `hn1-db-1`. If you set `AUTOMATED_REGISTER="false"`, this sequence of commands migrates the When the migration is finished, the `crm_mon -r` output looks like this example: ```bash- Online: [ hn1-db-0 hn1-db-1 ] +Online: [ hn1-db-0 hn1-db-1 ] Full list of resources: stonith-sbd (stonith:external/sbd): Started hn1-db-1 Failed Actions: The SAP HANA resource on `hn1-db-0` fails to start as secondary. In this case, configure the HANA instance as secondary by running this command: ```bash- su - <hana sid>adm +su - <hana sid>adm # Stop the HANA instance, just in case it is running hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> sapcontrol -nr <instance number> -function StopWait 600 10 hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 The migration creates location constraints that need to be deleted again: ```bash- # Switch back to root and clean up the failed state +# Switch back to root and clean up the failed state exit hn1-db-0:~ # crm resource clear msl_SAPHana_<HANA SID>_HDB<instance number> ``` hn1-db-0:~ # crm resource clear msl_SAPHana_<HANA SID>_HDB<instance number> You also need to clean up the state of the secondary node resource: ```bash- hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0 +hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0 ``` Monitor the state of the HANA resource by using `crm_mon -r`. When HANA is started on `hn1-db-0`, the output looks like this example: ```bash- Online: [ hn1-db-0 hn1-db-1 ] +Online: [ hn1-db-0 hn1-db-1 ] Full list of resources: stonith-sbd (stonith:external/sbd): Started hn1-db-1 crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0 You can test the setup of SBD by killing the inquisitor process: ```bash- hn1-db-0:~ # ps aux | grep sbd +hn1-db-0:~ # ps aux | grep sbd root 1912 0.0 0.0 85420 11740 ? SL 12:25 0:00 sbd: inquisitor root 1929 0.0 0.0 85456 11776 ? SL 12:25 0:00 sbd: watcher: /dev/disk/by-id/scsi-360014056f268462316e4681b704a9f73 - slot: 0 - uuid: 7b862dba-e7f7-4800-92ed-f76a4e3978c8 root 1930 0.0 0.0 85456 11776 ? SL 12:25 0:00 sbd: watcher: /dev/disk/by-id/scsi-360014059bc9ea4e4bac4b18808299aaf - slot: 0 - uuid: 5813ee04-b75c-482e-805e-3b1e22ba16cd The `<HANA SID>-db-<database 1>` cluster node reboots. The Pacemaker service mig You can test a manual failover by stopping the Pacemaker service on the `hn1-db-0` node: ```bash- service pacemaker stop +service pacemaker stop ``` After the failover, you can start the service again. If you set `AUTOMATED_REGISTER="false"`, the SAP HANA resource on the `hn1-db-0` node fails to start as secondary. After the failover, you can start the service again. If you set `AUTOMATED_REGIS In this case, configure the HANA instance as secondary by running this command: ```bash- service pacemaker start +service pacemaker start su - <hana sid>adm # Stop the HANA instance, just in case it is running |
sap | Sap Hana Scale Out Standby Netapp Files Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md | -# Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server +# Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server [dbms-guide]:dbms-guide-general.md [deployment-guide]:deployment-guide.md [planning-guide]:planning-guide.md [anf-azure-doc]:/azure/azure-netapp-files/-[anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp®ions=all +[anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp®ions=all [2205917]:https://launchpad.support.sap.com/#/notes/2205917 [1944799]:https://launchpad.support.sap.com/#/notes/1944799-[1410736]:https://launchpad.support.sap.com/#/notes/1410736 [1900823]:https://launchpad.support.sap.com/#/notes/1900823 -[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html - [suse-ha-guide]:https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/-[suse-drbd-guide]:https://www.suse.com/documentation/sle-ha-12/singlehtml/book_sleha_techguides/book_sleha_techguides.html [suse-ha-12sp3-relnotes]:https://www.suse.com/releasenotes/x86_64/SLE-HA/12-SP3/ [sap-hana-ha]:sap-hana-high-availability.md-[nfs-ha]:high-availability-guide-suse-nfs.md - This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with standby on Azure virtual machines (VMs) by using [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) for the shared storage volumes. -In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP4 and SUSE Linux Enterprise Server for SAP 12 SP4. +In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP4 and SUSE Linux Enterprise Server for SAP 12 SP4. Before you begin, refer to the following SAP notes and papers: -* [Azure NetApp Files documentation][anf-azure-doc] +* [Azure NetApp Files documentation][anf-azure-doc] * SAP Note [1928533] includes: * A list of Azure VM sizes that are supported for the deployment of SAP software * Important capacity information for Azure VM sizes Before you begin, refer to the following SAP notes and papers: One method for achieving HANA high availability is by configuring host auto failover. To configure host auto failover, you add one or more virtual machines to the HANA system and configure them as standby nodes. When active node fails, a standby node automatically takes over. In the presented configuration with Azure virtual machines, you achieve auto failover by using [NFS on Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md). > [!NOTE]-> The standby node needs access to all database volumes. The HANA volumes must be mounted as NFSv4 volumes. The improved file lease-based locking mechanism in the NFSv4 protocol is used for `I/O` fencing. +> The standby node needs access to all database volumes. The HANA volumes must be mounted as NFSv4 volumes. The improved file lease-based locking mechanism in the NFSv4 protocol is used for `I/O` fencing. 
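Once the volumes are mounted later in this article, a quick way to confirm that the data and log volumes really negotiated NFS 4.1 (a sketch, not part of the original procedure) is to inspect the mount options:

```bash
# The mount options for the HANA volumes should show vers=4.1
nfsstat -m | grep -B1 'vers=4.1'
mount | grep -E '/hana/(data|log|shared)'
```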
> [!IMPORTANT] > To build the supported configuration, you must deploy the HANA data and log volumes as NFSv4.1 volumes and mount them by using the NFSv4.1 protocol. The HANA host auto-failover configuration with standby node is not supported with NFSv3. - +[](./media/high-availability-guide-suse-anf/sap-hana-scale-out-standby-netapp-files-suse.png#lightbox) ++In the preceding diagram, which follows SAP HANA network recommendations, three subnets are represented within one Azure virtual network: -In the preceding diagram, which follows SAP HANA network recommendations, three subnets are represented within one Azure virtual network: * For client communication * For communication with the storage system * For internal HANA inter-node communication The Azure NetApp volumes are in separate subnet, [delegated to Azure NetApp File For this example configuration, the subnets are: - - `client` 10.23.0.0/24 - - `storage` 10.23.2.0/24 - - `hana` 10.23.3.0/24 - - `anf` 10.23.1.0/26 +* `client` 10.23.0.0/24 +* `storage` 10.23.2.0/24 +* `hana` 10.23.3.0/24 +* `anf` 10.23.1.0/26 -## Set up the Azure NetApp Files infrastructure +## Set up the Azure NetApp Files infrastructure -Before you proceed with the setup for Azure NetApp Files infrastructure, familiarize yourself with the [Azure NetApp Files documentation][anf-azure-doc]. +Before you proceed with the set up for Azure NetApp Files infrastructure, familiarize yourself with the [Azure NetApp Files documentation][anf-azure-doc]. Azure NetApp Files is available in several [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=netapp). Check to see whether your selected Azure region offers Azure NetApp Files. The following instructions assume that you've already deployed your [Azure virtu 4. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md). - As you're deploying the volumes, be sure to select the **NFSv4.1** version. Currently, access to NFSv4.1 requires being added to an allowlist. Deploy the volumes in the designated Azure NetApp Files [subnet](/rest/api/virtualnetwork/subnets). The IP addresses of the Azure NetApp volumes are assigned automatically. - + As you're deploying the volumes, be sure to select the **NFSv4.1** version. Currently, access to NFSv4.1 requires being added to an allowlist. Deploy the volumes in the designated Azure NetApp Files [subnet](/rest/api/virtualnetwork/subnets). The IP addresses of the Azure NetApp volumes are assigned automatically. + Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. For example, **HN1**-data-mnt00001, **HN1**-log-mnt00001, and so on, are the volume names and nfs://10.23.1.5/**HN1**-data-mnt00001, nfs://10.23.1.4/**HN1**-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes. * volume **HN1**-data-mnt00001 (nfs://10.23.1.5/**HN1**-data-mnt00001) The following instructions assume that you've already deployed your [Azure virtu * volume **HN1**-log-mnt00001 (nfs://10.23.1.4/**HN1**-log-mnt00001) * volume **HN1**-log-mnt00002 (nfs://10.23.1.6/**HN1**-log-mnt00002) * volume **HN1**-shared (nfs://10.23.1.4/**HN1**-shared)- + In this example, we used a separate Azure NetApp Files volume for each HANA data and log volume. 
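For reference, creating one of these volumes from the command line instead of the portal might look roughly like the following sketch; the resource group, NetApp account, capacity pool, virtual network, region, and size are placeholders to be chosen according to the sizing section later in this article:

```bash
az netappfiles volume create \
  --resource-group <resource-group> \
  --account-name <netapp-account> \
  --pool-name <capacity-pool> \
  --name HN1-data-mnt00001 \
  --location <region> \
  --service-level Ultra \
  --usage-threshold <size-in-GiB> \
  --file-path "HN1-data-mnt00001" \
  --vnet <vnet-name> \
  --subnet anf \
  --protocol-types NFSv4.1
```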
For a more cost-optimized configuration on smaller or non-productive systems, it's possible to place all data mounts and all logs mounts on a single volume. ### Important considerations As you're creating your Azure NetApp Files for SAP NetWeaver on SUSE High Availability architecture, be aware of the following important considerations: -- The minimum capacity pool is 4 tebibytes (TiB). -- The minimum volume size is 100 gibibytes (GiB).-- Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes will be mounted must be in the same Azure virtual network or in [peered virtual networks](../../virtual-network/virtual-network-peering-overview.md) in the same region. -- The selected virtual network must have a subnet that's delegated to Azure NetApp Files.-- The throughput of an Azure NetApp Files volume is a function of the volume quota and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). When you're sizing the HANA Azure NetApp volumes, make sure that the resulting throughput meets the HANA system requirements. -- With the Azure NetApp Files [export policy](../../azure-netapp-files/azure-netapp-files-configure-export-policy.md), you can control the allowed clients, the access type (read-write, read only, and so on). -- The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions. -- +* The minimum capacity pool is 4 tebibytes (TiB). +* The minimum volume size is 100 gibibytes (GiB). +* Azure NetApp Files and all virtual machines where the Azure NetApp Files volumes will be mounted must be in the same Azure virtual network or in [peered virtual networks](../../virtual-network/virtual-network-peering-overview.md) in the same region. +* The selected virtual network must have a subnet that's delegated to Azure NetApp Files. +* The throughput of an Azure NetApp Files volume is a function of the volume quota and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). When you're sizing the HANA Azure NetApp volumes, make sure that the resulting throughput meets the HANA system requirements. +* With the Azure NetApp Files [export policy](../../azure-netapp-files/azure-netapp-files-configure-export-policy.md), you can control the allowed clients, the access type (read-write, read only, and so on). +* The Azure NetApp Files feature isn't zone-aware yet. Currently, the feature isn't deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions. > [!IMPORTANT] > For SAP HANA workloads, low latency is critical. Work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity. ### Sizing for HANA database on Azure NetApp Files -The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). +The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). 
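As a quick worked check of the sizing that follows (a sketch using the per-TiB throughput limits quoted in the next paragraphs), multiplying the volume quota by the service-level limit shows why the recommended Premium-tier sizes meet the SAP minimums:

```bash
# Premium tier provides 64 MiB/s per TiB of volume quota
echo "4 TiB /hana/log    -> $(( 4 * 64 )) MiB/s (SAP minimum: 250 MB/s write)"
awk 'BEGIN { printf "6.3 TiB /hana/data -> %.0f MiB/s (SAP minimum: 400 MB/s read)\n", 6.3 * 64 }'
```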
As you design the infrastructure for SAP in Azure, be aware of some minimum storage requirements by SAP, which translate into minimum throughput characteristics: -- Enable read-write on /hana/log of 250 megabytes per second (MB/s) with 1-MB I/O sizes. -- Enable read activity of at least 400 MB/s for /hana/data for 16-MB and 64-MB I/O sizes. -- Enable write activity of at least 250 MB/s for /hana/data with 16-MB and 64-MB I/O sizes. +* Enable read-write on /hana/log of 250 megabytes per second (MB/s) with 1-MB I/O sizes. +* Enable read activity of at least 400 MB/s for /hana/data for 16-MB and 64-MB I/O sizes. +* Enable write activity of at least 250 MB/s for /hana/data with 16-MB and 64-MB I/O sizes. The [Azure NetApp Files throughput limits](../../azure-netapp-files/azure-netapp-files-service-levels.md) per 1 TiB of volume quota are:-- Premium Storage tier - 64 MiB/s -- Ultra Storage tier - 128 MiB/s ++* Premium Storage tier - 64 MiB/s +* Ultra Storage tier - 128 MiB/s To meet the SAP minimum throughput requirements for data and log, and the guidelines for /hana/shared, the recommended sizes would be: -| Volume | Size of<br>Premium Storage tier | Size of<br>Ultra Storage tier | Supported NFS protocol | +| Volume | Size of Premium Storage tier | Size of Ultra Storage tier | Supported NFS protocol | | | | | | | /hana/log/ | 4 TiB | 2 TiB | v4.1 | | /hana/data | 6.3 TiB | 3.2 TiB | v4.1 | To meet the SAP minimum throughput requirements for data and log, and the guidel The SAP HANA configuration for the layout that's presented in this article, using Azure NetApp Files Ultra Storage tier, would be: -| Volume | Size of<br>Ultra Storage tier | Supported NFS protocol | +| Volume | Size of Ultra Storage tier | Supported NFS protocol | | | | | | /hana/log/mnt00001 | 2 TiB | v4.1 | | /hana/log/mnt00002 | 2 TiB | v4.1 | The SAP HANA configuration for the layout that's presented in this article, usin ## Deploy Linux virtual machines via the Azure portal First you need to create the Azure NetApp Files volumes. Then do the following steps:-1. Create the [Azure virtual network subnets](../../virtual-network/virtual-network-manage-subnet.md) in your [Azure virtual network](../../virtual-network/virtual-networks-overview.md). -1. Deploy the VMs. -1. Create the additional network interfaces, and attach the network interfaces to the corresponding VMs. - Each virtual machine has three network interfaces, which correspond to the three Azure virtual network subnets (`client`, `storage` and `hana`). +1. Create the [Azure virtual network subnets](../../virtual-network/virtual-network-manage-subnet.md) in your [Azure virtual network](../../virtual-network/virtual-networks-overview.md). +2. Deploy the VMs. +3. Create the additional network interfaces, and attach the network interfaces to the corresponding VMs. ++ Each virtual machine has three network interfaces, which correspond to the three Azure virtual network subnets (`client`, `storage` and `hana`). For more information, see [Create a Linux virtual machine in Azure with multiple network interface cards](../../virtual-machines/linux/multiple-nics.md). > [!IMPORTANT]-> For SAP HANA workloads, low latency is critical. To achieve low latency, work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity. 
When you're [onboarding new SAP HANA system](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRxjSlHBUxkJBjmARn57skvdUQlJaV0ZBOE1PUkhOVk40WjZZQVJXRzI2RC4u) that's using SAP HANA Azure NetApp Files, submit the necessary information. - -The next instructions assume that you've already created the resource group, the Azure virtual network, and the three Azure virtual network subnets: `client`, `storage` and `hana`. When you deploy the VMs, select the client subnet, so that the client network interface is the primary interface on the VMs. You will also need to configure an explicit route to the Azure NetApp Files delegated subnet via the storage subnet gateway. +> For SAP HANA workloads, low latency is critical. To achieve low latency, work with your Microsoft representative to ensure that the virtual machines and the Azure NetApp Files volumes are deployed in close proximity. When you're [onboarding new SAP HANA system](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRxjSlHBUxkJBjmARn57skvdUQlJaV0ZBOE1PUkhOVk40WjZZQVJXRzI2RC4u) that's using SAP HANA Azure NetApp Files, submit the necessary information. ++The next instructions assume that you've already created the resource group, the Azure virtual network, and the three Azure virtual network subnets: `client`, `storage` and `hana`. When you deploy the VMs, select the client subnet, so that the client network interface is the primary interface on the VMs. You will also need to configure an explicit route to the Azure NetApp Files delegated subnet via the storage subnet gateway. > [!IMPORTANT] > Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type. 1. Create an availability set for SAP HANA. Make sure to set the max update domain. -2. Create three virtual machines (**hanadb1**, **hanadb2**, **hanadb3**) by doing the following steps: +2. Create three virtual machines (**hanadb1**, **hanadb2**, **hanadb3**) by doing the following steps: - a. Use a SLES4SAP image in the Azure gallery that's supported for SAP HANA. We used a SLES4SAP 12 SP4 image in this example. + a. Use a SLES4SAP image in the Azure gallery that's supported for SAP HANA. - b. Select the availability set that you created earlier for SAP HANA. + b. Select the availability set that you created earlier for SAP HANA. c. Select the client Azure virtual network subnet. Select [Accelerated Network](../../virtual-network/create-vm-accelerated-networking-cli.md). - When you deploy the virtual machines, the network interface name is automatically generated. In these instructions for simplicity we'll refer to the automatically generated network interfaces, which are attached to the client Azure virtual network subnet, as **hanadb1-client**, **hanadb2-client**, and **hanadb3-client**. + When you deploy the virtual machines, the network interface name is automatically generated. In these instructions for simplicity we'll refer to the automatically generated network interfaces, which are attached to the client Azure virtual network subnet, as **hanadb1-client**, **hanadb2-client**, and **hanadb3-client**. 3. 
Create three network interfaces, one for each virtual machine, for the `storage` virtual network subnet (in this example, **hanadb1-storage**, **hanadb2-storage**, and **hanadb3-storage**). 4. Create three network interfaces, one for each virtual machine, for the `hana` virtual network subnet (in this example, **hanadb1-hana**, **hanadb2-hana**, and **hanadb3-hana**). -5. Attach the newly created virtual network interfaces to the corresponding virtual machines by doing the following steps: -- a. Go to the virtual machine in the [Azure portal](https://portal.azure.com/#home). -- b. In the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hanadb1**), and then select the virtual machine. -- c. In the **Overview** pane, select **Stop** to deallocate the virtual machine. +5. Attach the newly created virtual network interfaces to the corresponding virtual machines by doing the following steps: - d. Select **Networking**, and then attach the network interface. In the **Attach network interface** drop-down list, select the already created network interfaces for the `storage` and `hana` subnets. - - e. Select **Save**. - - f. Repeat steps b through e for the remaining virtual machines (in our example, **hanadb2** and **hanadb3**). - - g. Leave the virtual machines in stopped state for now. Next, we'll enable [accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md) for all newly attached network interfaces. --6. Enable accelerated networking for the additional network interfaces for the `storage` and `hana` subnets by doing the following steps: + 1. Go to the virtual machine in the [Azure portal](https://portal.azure.com/#home). + 2. In the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hanadb1**), and then select the virtual machine. + 3. In the **Overview** pane, select **Stop** to deallocate the virtual machine. + 4. Select **Networking**, and then attach the network interface. In the **Attach network interface** drop-down list, select the already created network interfaces for the `storage` and `hana` subnets. + 5. Select **Save**. + 6. Repeat steps b through e for the remaining virtual machines (in our example, **hanadb2** and **hanadb3**). + 7. Leave the virtual machines in stopped state for now. Next, we'll enable [accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md) for all newly attached network interfaces. - a. Open [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) in the [Azure portal](https://portal.azure.com/#home). +6. Enable accelerated networking for the additional network interfaces for the `storage` and `hana` subnets by doing the following steps: - b. Execute the following commands to enable accelerated networking for the additional network interfaces, which are attached to the `storage` and `hana` subnets. 
-- <pre><code> - az network nic update --id /subscriptions/<b>your subscription</b>/resourceGroups/<b>your resource group</b>/providers/Microsoft.Network/networkInterfaces/<b>hanadb1-storage</b> --accelerated-networking true - az network nic update --id /subscriptions/<b>your subscription</b>/resourceGroups/<b>your resource group</b>/providers/Microsoft.Network/networkInterfaces/<b>hanadb2-storage</b> --accelerated-networking true - az network nic update --id /subscriptions/<b>your subscription</b>/resourceGroups/<b>your resource group</b>/providers/Microsoft.Network/networkInterfaces/<b>hanadb3-storage</b> --accelerated-networking true - - az network nic update --id /subscriptions/<b>your subscription</b>/resourceGroups/<b>your resource group</b>/providers/Microsoft.Network/networkInterfaces/<b>hanadb1-hana</b> --accelerated-networking true - az network nic update --id /subscriptions/<b>your subscription</b>/resourceGroups/<b>your resource group</b>/providers/Microsoft.Network/networkInterfaces/<b>hanadb2-hana</b> --accelerated-networking true - az network nic update --id /subscriptions/<b>your subscription</b>/resourceGroups/<b>your resource group</b>/providers/Microsoft.Network/networkInterfaces/<b>hanadb3-hana</b> --accelerated-networking true + 1. Open [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) in the [Azure portal](https://portal.azure.com/#home). - </code></pre> + 2. Execute the following commands to enable accelerated networking for the additional network interfaces, which are attached to the `storage` and `hana` subnets. -7. Start the virtual machines by doing the following steps: + ```bash + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-storage --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-storage --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-storage --accelerated-networking true + + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb1-hana --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb2-hana --accelerated-networking true + az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hanadb3-hana --accelerated-networking true + ``` - a. In the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hanadb1**), and then select it. +7. Start the virtual machines by doing the following steps: - b. In the **Overview** pane, select **Start**. + 1. In the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hanadb1**), and then select it. + 2. In the **Overview** pane, select **Start**. ## Operating system configuration and preparation The instructions in the next sections are prefixed with one of the following:+ * **[A]**: Applicable to all nodes * **[1]**: Applicable only to node 1 * **[2]**: Applicable only to node 2 Configure and prepare your OS by doing the following steps: 1. 
**[A]** Maintain the host files on the virtual machines. Include entries for all subnets. The following entries were added to `/etc/hosts` for this example. - <pre><code> + ```bash # Storage- 10.23.2.4 hanadb1-storage - 10.23.2.5 hanadb2-storage - 10.23.2.6 hanadb3-storage - # Client - 10.23.0.5 hanadb1 - 10.23.0.6 hanadb2 - 10.23.0.7 hanadb3 - # Hana - 10.23.3.4 hanadb1-hana - 10.23.3.5 hanadb2-hana - 10.23.3.6 hanadb3-hana - </code></pre> + 10.23.2.4 hanadb1-storage + 10.23.2.5 hanadb2-storage + 10.23.2.6 hanadb3-storage + # Client + 10.23.0.5 hanadb1 + 10.23.0.6 hanadb2 + 10.23.0.7 hanadb3 + # Hana + 10.23.3.4 hanadb1-hana + 10.23.3.5 hanadb2-hana + 10.23.3.6 hanadb3-hana + ``` 2. **[A]** Change DHCP and cloud config settings for the network interface for storage to avoid unintended hostname changes. - The following instructions assume that the storage network interface is `eth1`. + The following instructions assume that the storage network interface is `eth1`. - <pre><code> - vi /etc/sysconfig/network/dhcp + ```bash + vi /etc/sysconfig/network/dhcp # Change the following DHCP setting to "no" DHCLIENT_SET_HOSTNAME="no"- vi /etc/sysconfig/network/ifcfg-<b>eth1</b> + + vi /etc/sysconfig/network/ifcfg-eth1 # Edit ifcfg-eth1 #Change CLOUD_NETCONFIG_MANAGE='yes' to "no" CLOUD_NETCONFIG_MANAGE='no'- </code></pre> + ``` -2. **[A]** Add a network route, so that the communication to the Azure NetApp Files goes via the storage network interface. +3. **[A]** Add a network route, so that the communication to the Azure NetApp Files goes via the storage network interface. The following instructions assume that the storage network interface is `eth1`. - <pre><code> - vi /etc/sysconfig/network/ifroute-<b>eth1</b> + ```bash + vi /etc/sysconfig/network/ifroute-eth1 + # Add the following routes # RouterIPforStorageNetwork # ANFNetwork/cidr RouterIPforStorageNetwork - -- <b>10.23.2.1</b> - <b>10.23.1.0/26</b> <b>10.23.2.1</b> - - - </code></pre> + 10.23.2.1 + 10.23.1.0/26 10.23.2.1 - - + ``` Reboot the VM to activate the changes. -3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings. +4. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings. - <pre><code> + ```bash vi /etc/sysctl.d/91-NetApp-HANA.conf+ # Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 Configure and prepare your OS by doing the following steps: net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_timestamps = 1 net.ipv4.tcp_sack = 1- </code></pre> + ``` -4. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with Microsoft for Azure configuration settings. +5. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with Microsoft for Azure configuration settings. 
- <pre><code> + ```bash vi /etc/sysctl.d/ms-az.conf+ # Add the following entries in the configuration file net.ipv6.conf.all.disable_ipv6 = 1 net.ipv4.tcp_max_syn_backlog = 16348 net.ipv4.conf.all.rp_filter = 0 sunrpc.tcp_slot_table_entries = 128 vm.swappiness=10- </code></pre> + ``` -> [!TIP] -> Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). + > [!TIP] + > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). -4. **[A]** Adjust the sunrpc settings for NFSv3 volumes, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). +6. **[A]** Adjust the sunrpc settings for NFSv3 volumes, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). - <pre><code> + ```bash vi /etc/modprobe.d/sunrpc.conf+ # Insert the following line options sunrpc tcp_max_slot_table_entries=128- </code></pre> + ``` ## Mount the Azure NetApp Files volumes 1. **[A]** Create mount points for the HANA database volumes. - <pre><code> - mkdir -p /hana/data/<b>HN1</b>/mnt00001 - mkdir -p /hana/data/<b>HN1</b>/mnt00002 - mkdir -p /hana/log/<b>HN1</b>/mnt00001 - mkdir -p /hana/log/<b>HN1</b>/mnt00002 + ```bash + mkdir -p /hana/data/HN1/mnt00001 + mkdir -p /hana/data/HN1/mnt00002 + mkdir -p /hana/log/HN1/mnt00001 + mkdir -p /hana/log/HN1/mnt00002 mkdir -p /hana/shared- mkdir -p /usr/sap/<b>HN1</b> - </code></pre> + mkdir -p /usr/sap/HN1 + ``` 2. **[1]** Create node-specific directories for /usr/sap on **HN1**-shared. - <pre><code> - # Create a temporary directory to mount <b>HN1</b>-shared + ```bash + # Create a temporary directory to mount HN1-shared mkdir /mnt/tmp+ # if using NFSv3 for this volume, mount with the following command- mount <b>10.23.1.4</b>:/<b>HN1</b>-shared /mnt/tmp + mount 10.23.1.4:/HN1-shared /mnt/tmp + # if using NFSv4.1 for this volume, mount with the following command- mount -t nfs -o sec=sys,nfsvers=4.1 <b>10.23.1.4</b>:/<b>HN1</b>-shared /mnt/tmp + mount -t nfs -o sec=sys,nfsvers=4.1 10.23.1.4:/HN1-shared /mnt/tmp + cd /mnt/tmp- mkdir shared usr-sap-<b>hanadb1</b> usr-sap-<b>hanadb2</b> usr-sap-<b>hanadb3</b> + mkdir shared usr-sap-hanadb1 usr-sap-hanadb2 usr-sap-hanadb3 + # unmount /hana/shared cd umount /mnt/tmp- </code></pre> + ``` 3. **[A]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, i.e. **`defaultv4iddomain.com`** and the mapping is set to **nobody**. > [!IMPORTANT] > Make sure to set the NFS domain in `/etc/idmapd.conf` on the VM to match the default domain configuration on Azure NetApp Files: **`defaultv4iddomain.com`**. If there's a mismatch between the domain configuration on the NFS client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure NetApp volumes that are mounted on the VMs will be displayed as `nobody`. 
- <pre><code> + ```bash sudo cat /etc/idmapd.conf+ # Example [General] Verbosity = 0 Pipefs-Directory = /var/lib/nfs/rpc_pipefs- Domain = <b>defaultv4iddomain.com</b> + Domain = defaultv4iddomain.com [Mapping]- Nobody-User = <b>nobody</b> - Nobody-Group = <b>nobody</b> - </code></pre> + Nobody-User = nobody + Nobody-Group = nobody + ``` 4. **[A]** Verify `nfs4_disable_idmapping`. It should be set to **Y**. To create the directory structure where `nfs4_disable_idmapping` is located, execute the mount command. You won't be able to manually create the directory under /sys/modules, because access is reserved for the kernel / drivers. - <pre><code> + ```bash # Check nfs4_disable_idmapping cat /sys/module/nfs/parameters/nfs4_disable_idmapping+ # If you need to set nfs4_disable_idmapping to Y mkdir /mnt/tmp mount 10.23.1.4:/HN1-shared /mnt/tmp umount /mnt/tmp echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping+ # Make the configuration permanent echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf- </code></pre> + ``` 5. **[A]** Create the SAP HANA group and user manually. The IDs for group sapsys and user **hn1**adm must be set to the same IDs, which are provided during the onboarding. (In this example, the IDs are set to **1001**.) If the IDs aren't set correctly, you won't be able to access the volumes. The IDs for group sapsys and user accounts **hn1**adm and sapadm must be the same on all virtual machines. - <pre><code> + ```bash # Create user group sudo groupadd -g 1001 sapsys+ # Create users - sudo useradd <b>hn1</b>adm -u 1001 -g 1001 -d /usr/sap/<b>HN1</b>/home -c "SAP HANA Database System" -s /bin/sh + sudo useradd hn1adm -u 1001 -g 1001 -d /usr/sap/HN1/home -c "SAP HANA Database System" -s /bin/sh sudo useradd sapadm -u 1002 -g 1001 -d /home/sapadm -c "SAP Local Administrator" -s /bin/sh+ # Set the password for both user ids sudo passwd hn1adm sudo passwd sapadm- </code></pre> + ``` 6. **[A]** Mount the shared Azure NetApp Files volumes. 
- <pre><code> - sudo vi /etc/fstab - # Add the following entries - 10.23.1.5:/<b>HN1</b>-data-mnt00001 /hana/data/<b>HN1</b>/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 - 10.23.1.6:/<b>HN1</b>-data-mnt00002 /hana/data/<b>HN1</b>/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 - 10.23.1.4:/<b>HN1</b>-log-mnt00001 /hana/log/<b>HN1</b>/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 - 10.23.1.6:/<b>HN1</b>-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 - 10.23.1.4:/<b>HN1</b>-shared/shared /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 - # Mount all volumes - sudo mount -a - </code></pre> + ```bash + sudo vi /etc/fstab + + # Add the following entries + 10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + 10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + 10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + 10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + 10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + + # Mount all volumes + sudo mount -a + ``` For workloads, that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check if `nconnect` is [supported by Azure NetApp Files](../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release. 7. **[1]** Mount the node-specific volumes on **hanadb1**. - <pre><code> + ```bash sudo vi /etc/fstab+ # Add the following entries- 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb1</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + 10.23.1.4:/HN1-shared/usr-sap-hanadb1 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + # Mount the volume- sudo mount -a - </code></pre> + sudo mount -a + ``` 8. **[2]** Mount the node-specific volumes on **hanadb2**. - <pre><code> + ```bash sudo vi /etc/fstab+ # Add the following entries- 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb2</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + 10.23.1.4:/HN1-shared/usr-sap-hanadb2 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + # Mount the volume- sudo mount -a - </code></pre> + sudo mount -a + ``` 9. **[3]** Mount the node-specific volumes on **hanadb3**. 
- <pre><code> + ```bash sudo vi /etc/fstab+ # Add the following entries- 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb3</b> /usr/sap/<b>HN1</b> nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + 10.23.1.4:/HN1-shared/usr-sap-hanadb3 /usr/sap/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 + # Mount the volume- sudo mount -a - </code></pre> + sudo mount -a + ``` 10. **[A]** Verify that all HANA volumes are mounted with NFS protocol version **NFSv4.1**. - <pre><code> + ```bash sudo nfsstat -m- # Verify that flag vers is set to <b>4.1</b> - # Example from <b>hanadb1</b> - /hana/data/<b>HN1</b>/mnt00001 from 10.23.1.5:/<b>HN1</b>-data-mnt00001 - Flags: rw,noatime,vers=<b>4.1</b>,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.5 - /hana/log/<b>HN1</b>/mnt00002 from 10.23.1.6:/<b>HN1</b>-log-mnt00002 - Flags: rw,noatime,vers=<b>4.1</b>,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.6 - /hana/data/<b>HN1</b>/mnt00002 from 10.23.1.6:/<b>HN1</b>-data-mnt00002 - Flags: rw,noatime,vers=<b>4.1</b>,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.6 - /hana/log/<b>HN1</b>/mnt00001 from 10.23.1.4:/<b>HN1</b>-log-mnt00001 - Flags: rw,noatime,vers=<b>4.1</b>,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4 - /usr/sap/<b>HN1</b> from 10.23.1.4:/<b>HN1</b>-shared/usr-sap-<b>hanadb1</b> - Flags: rw,noatime,vers=<b>4.1</b>,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4 - /hana/shared from 10.23.1.4:/<b>HN1</b>-shared/shared - Flags: rw,noatime,vers=<b>4.1</b>,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4 - </code></pre> + + # Verify that flag vers is set to 4.1 + # Example from hanadb1 + /hana/data/HN1/mnt00001 from 10.23.1.5:/HN1-data-mnt00001 + Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.5 + /hana/log/HN1/mnt00002 from 10.23.1.6:/HN1-log-mnt00002 + Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.6 + /hana/data/HN1/mnt00002 from 10.23.1.6:/HN1-data-mnt00002 + Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.6 + /hana/log/HN1/mnt00001 from 10.23.1.4:/HN1-log-mnt00001 + Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4 + /usr/sap/HN1 from 10.23.1.4:/HN1-shared/usr-sap-hanadb1 + Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4 + /hana/shared from 10.23.1.4:/HN1-shared/shared + Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.2.4,local_lock=none,addr=10.23.1.4 + ``` ## Installation In this example for deploying SAP 
HANA in scale-out configuration with standby n 2. **[1]** Verify that you can log in via SSH to **hanadb2** and **hanadb3**, without being prompted for a password. - <pre><code> - ssh root@<b>hanadb2</b> - ssh root@<b>hanadb3</b> - </code></pre> + ```bash + ssh root@hanadb2 + ssh root@hanadb3 + ``` -3. **[A]** Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824). +3. **[A]** Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824). - <pre><code> - sudo zypper install libgcc_s1 libstdc++6 libatomic1 - </code></pre> + ```bash + sudo zypper install libgcc_s1 libstdc++6 libatomic1 + ``` -4. **[2], [3]** Change ownership of SAP HANA `data` and `log` directories to **hn1**adm. +4. **[2], [3]** Change ownership of SAP HANA `data` and `log` directories to **hn1**adm. - <pre><code> + ```bash # Execute as root- sudo chown hn1adm:sapsys /hana/data/<b>HN1</b> - sudo chown hn1adm:sapsys /hana/log/<b>HN1</b> - </code></pre> + sudo chown hn1adm:sapsys /hana/data/HN1 + sudo chown hn1adm:sapsys /hana/log/HN1 + ``` ### HANA installation 1. **[1]** Install SAP HANA by following the instructions in the [SAP HANA 2.0 Installation and Update guide](https://help.sap.com/viewer/2c1988d620e04368aa4103bf26f17727/2.0.04/en-US/7eb0167eb35e4e2885415205b8383584.html). In this example, we install SAP HANA scale-out with master, one worker, and one standby node. - a. Start the **hdblcm** program from the HANA installation software directory. Use the `internal_network` parameter and pass the address space for subnet, which is used for the internal HANA inter-node communication. -- <pre><code> - ./hdblcm --internal_network=10.23.3.0/24 - </code></pre> -- b. 
At the prompt, enter the following values: -- * For **Choose an action**: enter **1** (for install) - * For **Additional components for installation**: enter **2, 3** - * For installation path: press Enter (defaults to /hana/shared) - * For **Local Host Name**: press Enter to accept the default - * Under **Do you want to add hosts to the system?**: enter **y** - * For **comma-separated host names to add**: enter **hanadb2, hanadb3** - * For **Root User Name** [root]: press Enter to accept the default - * For **Root User Password**: enter the root user's password - * For roles for host hanadb2: enter **1** (for worker) - * For **Host Failover Group** for host hanadb2 [default]: press Enter to accept the default - * For **Storage Partition Number** for host hanadb2 [\<\<assign automatically\>\>]: press Enter to accept the default - * For **Worker Group** for host hanadb2 [default]: press Enter to accept the default - * For **Select roles** for host hanadb3: enter **2** (for standby) - * For **Host Failover Group** for host hanadb3 [default]: press Enter to accept the default - * For **Worker Group** for host hanadb3 [default]: press Enter to accept the default - * For **SAP HANA System ID**: enter **HN1** - * For **Instance number** [00]: enter **03** - * For **Local Host Worker Group** [default]: press Enter to accept the default - * For **Select System Usage / Enter index [4]**: enter **4** (for custom) - * For **Location of Data Volumes** [/hana/data/HN1]: press Enter to accept the default - * For **Location of Log Volumes** [/hana/log/HN1]: press Enter to accept the default - * For **Restrict maximum memory allocation?** [n]: enter **n** - * For **Certificate Host Name For Host hanadb1** [hanadb1]: press Enter to accept the default - * For **Certificate Host Name For Host hanadb2** [hanadb2]: press Enter to accept the default - * For **Certificate Host Name For Host hanadb3** [hanadb3]: press Enter to accept the default - * For **System Administrator (hn1adm) Password**: enter the password - * For **System Database User (system) Password**: enter the system's password - * For **Confirm System Database User (system) Password**: enter system's password - * For **Restart system after machine reboot?** [n]: enter **n** - * For **Do you want to continue (y/n)**: validate the summary and if everything looks good, enter **y** ---2. **[1]** Verify global.ini + 1. Start the **hdblcm** program from the HANA installation software directory. Use the `internal_network` parameter and pass the address space for subnet, which is used for the internal HANA inter-node communication. ++ ```bash + ./hdblcm --internal_network=10.23.3.0/24 + ``` ++ 2. 
At the prompt, enter the following values: + * For **Choose an action**: enter **1** (for install) + * For **Additional components for installation**: enter **2, 3** + * For installation path: press Enter (defaults to /hana/shared) + * For **Local Host Name**: press Enter to accept the default + * Under **Do you want to add hosts to the system?**: enter **y** + * For **comma-separated host names to add**: enter **hanadb2, hanadb3** + * For **Root User Name** [root]: press Enter to accept the default + * For **Root User Password**: enter the root user's password + * For roles for host hanadb2: enter **1** (for worker) + * For **Host Failover Group** for host hanadb2 [default]: press Enter to accept the default + * For **Storage Partition Number** for host hanadb2 [\<\<assign automatically\>\>]: press Enter to accept the default + * For **Worker Group** for host hanadb2 [default]: press Enter to accept the default + * For **Select roles** for host hanadb3: enter **2** (for standby) + * For **Host Failover Group** for host hanadb3 [default]: press Enter to accept the default + * For **Worker Group** for host hanadb3 [default]: press Enter to accept the default + * For **SAP HANA System ID**: enter **HN1** + * For **Instance number** [00]: enter **03** + * For **Local Host Worker Group** [default]: press Enter to accept the default + * For **Select System Usage / Enter index [4]**: enter **4** (for custom) + * For **Location of Data Volumes** [/hana/data/HN1]: press Enter to accept the default + * For **Location of Log Volumes** [/hana/log/HN1]: press Enter to accept the default + * For **Restrict maximum memory allocation?** [n]: enter **n** + * For **Certificate Host Name For Host hanadb1** [hanadb1]: press Enter to accept the default + * For **Certificate Host Name For Host hanadb2** [hanadb2]: press Enter to accept the default + * For **Certificate Host Name For Host hanadb3** [hanadb3]: press Enter to accept the default + * For **System Administrator (hn1adm) Password**: enter the password + * For **System Database User (system) Password**: enter the system's password + * For **Confirm System Database User (system) Password**: enter system's password + * For **Restart system after machine reboot?** [n]: enter **n** + * For **Do you want to continue (y/n)**: validate the summary and if everything looks good, enter **y** ++2. **[1]** Verify global.ini. Display global.ini, and ensure that the configuration for the internal SAP HANA inter-node communication is in place. Verify the **communication** section. It should have the address space for the `hana` subnet, and `listeninterface` should be set to `.internal`. Verify the **internal_hostname_resolution** section. It should have the IP addresses for the HANA virtual machines that belong to the `hana` subnet. 
- <pre><code> - sudo cat /usr/sap/<b>HN1</b>/SYS/global/hdb/custom/config/global.ini - # Example - #global.ini last modified 2019-09-10 00:12:45.192808 by hdbnameserve - [communication] - internal_network = <b>10.23.3/24</b> - listeninterface = .internal - [internal_hostname_resolution] - <b>10.23.3.4</b> = <b>hanadb1</b> - <b>10.23.3.5</b> = <b>hanadb2</b> - <b>10.23.3.6</b> = <b>hanadb3</b> - </code></pre> + ```bash + sudo cat /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini + + # Example + #global.ini last modified 2019-09-10 00:12:45.192808 by hdbnameserve + [communication] + internal_network = 10.23.3/24 + listeninterface = .internal + [internal_hostname_resolution] + 10.23.3.4 = hanadb1 + 10.23.3.5 = hanadb2 + 10.23.3.6 = hanadb3 + ``` 3. **[1]** Add host mapping to ensure that the client IP addresses are used for client communication. Add section `public_host_resolution`, and add the corresponding IP addresses from the client subnet. - <pre><code> - sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini - #Add the section - [public_hostname_resolution] - map_<b>hanadb1</b> = <b>10.23.0.5</b> - map_<b>hanadb2</b> = <b>10.23.0.6</b> - map_<b>hanadb3</b> = <b>10.23.0.7</b> - </code></pre> + ```bash + sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini + + #Add the section + [public_hostname_resolution] + map_hanadb1 = 10.23.0.5 + map_hanadb2 = 10.23.0.6 + map_hanadb3 = 10.23.0.7 + ``` 4. **[1]** Restart SAP HANA to activate the changes. - <pre><code> - sudo -u <b>hn1</b>adm /usr/sap/hostctrl/exe/sapcontrol -nr <b>03</b> -function StopSystem HDB - sudo -u <b>hn1</b>adm /usr/sap/hostctrl/exe/sapcontrol -nr <b>03</b> -function StartSystem HDB - </code></pre> + ```bash + sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB + sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB + ``` 5. **[1]** Verify that the client interface will be using the IP addresses from the `client` subnet for communication. - <pre><code> - sudo -u hn1adm /usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "<b>password</b>" -i 03 -d SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname - # Expected result - "<b>hanadb3</b>","net_publicname","<b>10.23.0.7</b>" - "<b>hanadb2</b>","net_publicname","<b>10.23.0.6</b>" - "<b>hanadb1</b>","net_publicname","<b>10.23.0.5</b>" - </code></pre> + ```bash + sudo -u hn1adm /usr/sap/HN1/HDB03/exe/hdbsql -u SYSTEM -p "password" -i 03 -d SYSTEMDB 'select * from SYS.M_HOST_INFORMATION'|grep net_publicname + + # Expected result + "hanadb3","net_publicname","10.23.0.7" + "hanadb2","net_publicname","10.23.0.6" + "hanadb1","net_publicname","10.23.0.5" + ``` For information about how to verify the configuration, see SAP Note [2183363 - Configuration of SAP HANA internal network](https://launchpad.support.sap.com/#/notes/2183363). 6. To optimize SAP HANA for the underlying Azure NetApp Files storage, set the following SAP HANA parameters: - - `max_parallel_io_requests` **128** - - `async_read_submit` **on** - - `async_write_submit_active` **on** - - `async_write_submit_blocks` **all** + * `max_parallel_io_requests` **128** + * `async_read_submit` **on** + * `async_write_submit_active` **on** + * `async_write_submit_blocks` **all** For more information, see [I/O stack configuration for SAP HANA](https://docs.netapp.com/us-en/netapp-solutions-sap/bp/saphana_aff_nfs_i_o_stack_configuration_for_sap_hana.html). Starting with SAP HANA 2.0 systems, you can set the parameters in `global.ini`. 
For more information, see SAP Note [1999930](https://launchpad.support.sap.com/#/notes/1999930). - + For SAP HANA 1.0 systems versions SPS12 and earlier, these parameters can be set during the installation, as described in SAP Note [2267798](https://launchpad.support.sap.com/#/notes/2267798). -7. The storage that's used by Azure NetApp Files has a file size limitation of 16 terabytes (TB). SAP HANA is not implicitly aware of the storage limitation, and it won't automatically create a new data file when the file size limit of 16 TB is reached. As SAP HANA attempts to grow the file beyond 16 TB, that attempt will result in errors and, eventually, in an index server crash. +7. The storage that's used by Azure NetApp Files has a file size limitation of 16 terabytes (TB). SAP HANA is not implicitly aware of the storage limitation, and it won't automatically create a new data file when the file size limit of 16 TB is reached. As SAP HANA attempts to grow the file beyond 16 TB, that attempt will result in errors and, eventually, in an index server crash. > [!IMPORTANT]- > To prevent SAP HANA from trying to grow data files beyond the [16-TB limit](../../azure-netapp-files/azure-netapp-files-resource-limits.md) of the storage subsystem, set the following parameters in `global.ini`. - > - datavolume_striping = true - > - datavolume_striping_size_gb = 15000 + > To prevent SAP HANA from trying to grow data files beyond the [16-TB limit](../../azure-netapp-files/azure-netapp-files-resource-limits.md) of the storage subsystem, set the following parameters in `global.ini`. + > + > * datavolume_striping = true + > * datavolume_striping_size_gb = 15000 > For more information, see SAP Note [2400005](https://launchpad.support.sap.com/#/notes/2400005).- > Be aware of SAP Note [2631285](https://launchpad.support.sap.com/#/notes/2631285). + > Be aware of SAP Note [2631285](https://launchpad.support.sap.com/#/notes/2631285). -## Test SAP HANA failover +## Test SAP HANA failover > [!NOTE] > This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article. -1. Simulate a node crash on an SAP HANA worker node. Do the following: -- a.
Before you simulate the node crash, run the following commands as **hn1**adm to capture the status of the environment: -- <pre><code> - # Check the landscape status - python /usr/sap/<b>HN1</b>/HDB<b>03</b>/exe/python_support/landscapeHostConfiguration.py - | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | - | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | - | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | - | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | - | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | - | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | - | hanadb3 | yes | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | - # Check the instance status - sapcontrol -nr <b>03</b> -function GetSystemInstanceList - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN - </code></pre> -- b. To simulate a node crash, run the following command as root on the worker node, which is **hanadb2** in this case: - - <pre><code> - echo b > /proc/sysrq-trigger - </code></pre> -- c. Monitor the system for failover completion. When the failover has been completed, capture the status, which should look like the following: -- <pre><code> - # Check the instance status - sapcontrol -nr <b>03</b> -function GetSystemInstanceList - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN - hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY - # Check the landscape status - /usr/sap/HN1/HDB03/exe/python_support> python landscapeHostConfiguration.py - | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | - | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | - | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | - | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | - | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | - | hanadb2 | no | info | | | 2 | 0 | default | default | master 2 | slave | worker | standby | worker | standby | default | - | - | hanadb3 | yes | info | | | 0 | 2 | default | default | master 3 | slave | standby | slave | standby | worker | default | default | - </code></pre> -- > [!IMPORTANT] - > When a node experiences kernel panic, avoid delays with SAP HANA failover by setting `kernel.panic` to 20 seconds on *all* HANA virtual machines. The configuration is done in `/etc/sysctl`. Reboot the virtual machines to activate the change. 
If this change isn't performed, failover can take 10 or more minutes when a node is experiencing kernel panic. +1. Simulate a node crash on an SAP HANA worker node. Do the following: ++ 1. Before you simulate the node crash, run the following commands as **hn1**adm to capture the status of the environment: ++ ```bash + # Check the landscape status + python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py + | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | + | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | + | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | + | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | + | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | + | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | + | hanadb3 | yes | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | + + # Check the instance status + sapcontrol -nr 03 -function GetSystemInstanceList + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN + ``` ++ 2. To simulate a node crash, run the following command as root on the worker node, which is **hanadb2** in this case: ++ ```bash + echo b > /proc/sysrq-trigger + ``` ++ 3. Monitor the system for failover completion. 
When the failover has been completed, capture the status, which should look like the following: ++ ```bash + # Check the instance status + sapcontrol -nr 03 -function GetSystemInstanceList + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN + hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY + + # Check the landscape status + /usr/sap/HN1/HDB03/exe/python_support> python landscapeHostConfiguration.py + | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | + | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | + | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | + | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | + | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | + | hanadb2 | no | info | | | 2 | 0 | default | default | master 2 | slave | worker | standby | worker | standby | default | - | + | hanadb3 | yes | info | | | 0 | 2 | default | default | master 3 | slave | standby | slave | standby | worker | default | default | + ``` ++ > [!IMPORTANT] + > When a node experiences kernel panic, avoid delays with SAP HANA failover by setting `kernel.panic` to 20 seconds on *all* HANA virtual machines. The configuration is done in `/etc/sysctl`. Reboot the virtual machines to activate the change. If this change isn't performed, failover can take 10 or more minutes when a node is experiencing kernel panic. 2. Kill the name server by doing the following: - a. Prior to the test, check the status of the environment by running the following commands as **hn1**adm: -- <pre><code> - #Landscape status - python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py - | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | - | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | - | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | - | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | - | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | - | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | - | hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | - # Check the instance status - sapcontrol -nr 03 -function GetSystemInstanceList - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY - </code></pre> -- b. 
Run the following commands as **hn1**adm on the active master node, which is **hanadb1** in this case: -- <pre><code> - hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill - </code></pre> - - The standby node **hanadb3** will take over as master node. Here is the resource state after the failover test is completed: -- <pre><code> - # Check the instance status - sapcontrol -nr 03 -function GetSystemInstanceList - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY - hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN - # Check the landscape status - python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py - | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | - | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | - | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | - | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | - | hanadb1 | no | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - | - | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | - | hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default | - </code></pre> -- c. Restart the HANA instance on **hanadb1** (that is, on the same virtual machine, where the name server was killed). The **hanadb1** node will rejoin the environment and will keep its standby role. -- <pre><code> - hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB start - </code></pre> -- After SAP HANA has started on **hanadb1**, expect the following status: -- <pre><code> - # Check the instance status - sapcontrol -nr 03 -function GetSystemInstanceList - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN - # Check the landscape status - python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py - | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | - | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | - | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | - | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | - | hanadb1 | yes | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - | - | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | - | hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default | - </code></pre> -- d. 
Again, kill the name server on the currently active master node (that is, on node **hanadb3**). - - <pre><code> - hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB kill - </code></pre> -- Node **hanadb1** will resume the role of master node. After the failover test has been completed, the status will look like this: -- <pre><code> - # Check the instance status - sapcontrol -nr 03 -function GetSystemInstanceList & python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY - # Check the landscape status - python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py - | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | - | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | - | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | - | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | - | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | - | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | - | hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | - </code></pre> -- e. Start SAP HANA on **hanadb3**, which will be ready to serve as a standby node. 
-- <pre><code> - hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB start - </code></pre> -- After SAP HANA has started on **hanadb3**, the status looks like the following: -- <pre><code> - # Check the instance status - sapcontrol -nr 03 -function GetSystemInstanceList & python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - GetSystemInstanceList - OK - hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus - hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN - hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY - # Check the landscape status - python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py - | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | - | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | - | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | - | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | - | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | - | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | - | hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | - </code></pre> + 1. Prior to the test, check the status of the environment by running the following commands as **hn1**adm: ++ ```bash + #Landscape status + python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py + | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | + | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | + | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | + | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | + | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | + | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | + | hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | + + # Check the instance status + sapcontrol -nr 03 -function GetSystemInstanceList + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY + ``` ++ 2. Run the following commands as **hn1**adm on the active master node, which is **hanadb1** in this case: ++ ```bash + hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB kill + ``` ++ The standby node **hanadb3** will take over as master node. 
Here is the resource state after the failover test is completed: ++ ```bash + # Check the instance status + sapcontrol -nr 03 -function GetSystemInstanceList + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GRAY + hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN + + # Check the landscape status + python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py + | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | + | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | + | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | + | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | + | hanadb1 | no | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - | + | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | + | hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default | + ``` ++ 3. Restart the HANA instance on **hanadb1** (that is, on the same virtual machine, where the name server was killed). The **hanadb1** node will rejoin the environment and will keep its standby role. ++ ```bash + hn1adm@hanadb1:/usr/sap/HN1/HDB03> HDB start + ``` ++ After SAP HANA has started on **hanadb1**, expect the following status: ++ ```bash + # Check the instance status + sapcontrol -nr 03 -function GetSystemInstanceList + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GREEN + # Check the landscape status + python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py + | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | + | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | + | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | + | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | + | hanadb1 | yes | info | | | 1 | 0 | default | default | master 1 | slave | worker | standby | worker | standby | default | - | + | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | + | hanadb3 | yes | info | | | 0 | 1 | default | default | master 3 | master | standby | master | standby | worker | default | default | + ``` ++ 4. Again, kill the name server on the currently active master node (that is, on node **hanadb3**). ++ ```bash + hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB kill + ``` ++ Node **hanadb1** will resume the role of master node. 
After the failover test has been completed, the status will look like this: ++ ```bash + # Check the instance status + sapcontrol -nr 03 -function GetSystemInstanceList & python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY + + # Check the landscape status + python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py + | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | + | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | + | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | + | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | + | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | + | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | + | hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | + ``` ++ 5. Start SAP HANA on **hanadb3**, which will be ready to serve as a standby node. ++ ```bash + hn1adm@hanadb3:/usr/sap/HN1/HDB03> HDB start + ``` ++ After SAP HANA has started on **hanadb3**, the status looks like the following: ++ ```bash + # Check the instance status + sapcontrol -nr 03 -function GetSystemInstanceList & python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + GetSystemInstanceList + OK + hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus + hanadb1, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb2, 3, 50313, 50314, 0.3, HDB|HDB_WORKER, GREEN + hanadb3, 3, 50313, 50314, 0.3, HDB|HDB_STANDBY, GRAY + # Check the landscape status + python /usr/sap/HN1/HDB03/exe/python_support/landscapeHostConfiguration.py + | Host | Host | Host | Failover | Remove | Storage | Storage | Failover | Failover | NameServer | NameServer | IndexServer | IndexServer | Host | Host | Worker | Worker | + | | Active | Status | Status | Status | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | Config | Actual | + | | | | | | Partition | Partition | Group | Group | Role | Role | Role | Role | Roles | Roles | Groups | Groups | + | - | | | -- | | | | -- | -- | - | - | -- | -- | - | - | - | - | + | hanadb1 | yes | ok | | | 1 | 1 | default | default | master 1 | master | worker | master | worker | worker | default | default | + | hanadb2 | yes | ok | | | 2 | 2 | default | default | master 2 | slave | worker | slave | worker | worker | default | default | + | hanadb3 | no | ignore | | | 0 | 0 | default | default | master 3 | slave | standby | standby | standby | standby | default | - | + ``` ## Next steps |
sentinel | Atlassian Jira Audit Using Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/atlassian-jira-audit-using-azure-function.md | Use this method for automated deployment of the Jira Audit data connector using [](https://aka.ms/sentineljiraauditazuredeploy) 2. Select the preferred **Subscription**, **Resource Group** and **Location**. > **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group.-3. Enter the **JiraAccessToken**, **JiraUsername**, **JiraHomeSiteName** (short site name part, as example HOMESITENAME from https://HOMESITENAME.atlassian.net) and deploy. +3. Enter the **JiraAccessToken**, **JiraUsername**, **JiraHomeSiteName** (short site name part, as example HOMESITENAME from `https://HOMESITENAME.atlassian.net`) and deploy. 4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy. |
sentinel | Braodcom Symantec Dlp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/braodcom-symantec-dlp.md | Install the Microsoft Monitoring Agent on your Linux machine and configure the m 2. Forward Symantec DLP logs to a Syslog agent Configure Symantec DLP to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent.-1. [Follow these instructions](https://help.symantec.com/cs/DLP15.7/DLP/v27591174_v133697641/Configuring-the-Log-to-a-Syslog-Server-action?locale=EN_US) to configure the Symantec DLP to forward syslog +1. [Follow these instructions](https://techdocs.broadcom.com/content/dam/broadcom/techdocs/symantec-security-software/information-security/data-loss-prevention/generated-pdfs/Symantec_DLP_15.7_Whats_New.pdf) to configure the Symantec DLP to forward syslog 2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. 3. Validate connection |
sentinel | Cisco Firepower Estreamer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-firepower-estreamer.md | Make sure to configure the machine's security according to your organization's s [Learn more >](https://aka.ms/SecureCEF)----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-estreamer?tab=Overview) in the Azure Marketplace. |
sentinel | Claroty | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/claroty.md | -The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/continuous-threat-detection/) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel. +The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel. ## Connector attributes |
sentinel | Morphisec Utpp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/morphisec-utpp.md | Integrate vital insights from your security products with the Morphisec Data Con | **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser | | **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |-| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) | +| **Supported by** | [Morphisec](https://support.morphisec.com/hc) | ## Query samples |
sentinel | Netskope Using Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netskope-using-azure-function.md | Netskope To integrate with Netskope (using Azure Function) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).-- **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html). **Note:** A Netskope account is required+- **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://docs.netskope.com/en/rest-api-v1-overview.html). **Note:** A Netskope account is required ## Vendor installation instructions |
sentinel | Nxlog Aix Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-aix-audit.md | The NXLog [AIX Audit](https://nxlog.co/documentation/nxlog-user-guide/im_aixaudi | | | | **Log Analytics table(s)** | AIX_Audit_CL<br/> | | **Data collection rules support** | Not currently supported |-| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) | +| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) | ## Query samples |
sentinel | Nxlog Bsm Macos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-bsm-macos.md | The NXLog [BSM](https://nxlog.co/documentation/nxlog-user-guide/im_bsm.html) mac | | | | **Log Analytics table(s)** | BSMmacOS_CL<br/> | | **Data collection rules support** | Not currently supported |-| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) | +| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) | ## Query samples |
sentinel | Nxlog Dns Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-dns-logs.md | The NXLog DNS Logs data connector uses Event Tracing for Windows ([ETW](/windows | | | | **Log Analytics table(s)** | NXLog_DNS_Server_CL<br/> | | **Data collection rules support** | Not currently supported |-| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) | +| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) | ## Query samples |
sentinel | Tenable Io Vulnerability Management Using Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/tenable-io-vulnerability-management-using-azure-function.md | Tenable_IO_Assets_CL To integrate with Tenable.io Vulnerability Management (using Azure Function) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).-- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** is required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) for obtaining credentials.+- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** is required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/Content/Platform/Settings/MyAccount/GenerateAPIKey.htm) for obtaining credentials. ## Vendor installation instructions To integrate with Tenable.io Vulnerability Management (using Azure Function) mak **STEP 1 - Configuration steps for Tenable.io** - [Follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) to obtain the required API credentials. + [Follow the instructions](https://docs.tenable.com/tenableio/Content/Platform/Settings/MyAccount/GenerateAPIKey.htm) to obtain the required API credentials. |
sentinel | Vmware Vcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-vcenter.md | vCenter **NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias VMware vCenter and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/VMware%20vCenter/Parsers/vCenter.txt), on the second line of the query, enter the hostname(s) of your VMware vCenter device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. -> 1. If you have not installed the vCenter solution from ContentHub then [Follow the steps](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/VCenter/Parsers/vCenter.txt) to use the Kusto function alias, **vCenter** +> 1. If you have not installed the vCenter solution from ContentHub then [Follow the steps](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/VMware%20vCenter/Parsers/vCenter.txt) to use the Kusto function alias, **vCenter** 1. Install and onboard the agent for Linux |
sentinel | Zoom Reports Using Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zoom-reports-using-azure-function.md | -The [Zoom](https://zoom.us/) Reports data connector provides the capability to ingest [Zoom Reports](https://marketplace.zoom.us/docs/api-reference/zoom-api/reports/) events into Microsoft Sentinel through the REST API. Refer to [API documentation](https://marketplace.zoom.us/docs/api-reference/introduction) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. +The [Zoom](https://zoom.us/) Reports data connector provides the capability to ingest [Zoom Reports](https://developers.zoom.us/docs/api/) events into Microsoft Sentinel through the REST API. Refer to [API documentation](https://developers.zoom.us/docs/api/) for more information. The connector provides ability to get events which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ## Connector attributes Zoom To integrate with Zoom Reports (using Azure Function) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).-- **REST API Credentials/permissions**: **ZoomApiKey** and **ZoomApiSecret** are required for Zoom API. [See the documentation to learn more about API](https://marketplace.zoom.us/docs/guides/auth/jwt). Check all [requirements and follow the instructions](https://marketplace.zoom.us/docs/guides/auth/jwt) for obtaining credentials.+- **REST API Credentials/permissions**: **ZoomApiKey** and **ZoomApiSecret** are required for Zoom API. [See the documentation to learn more about API](https://developers.zoom.us/docs/internal-apps/jwt/). Check all [requirements and follow the instructions](https://developers.zoom.us/docs/internal-apps/jwt/) for obtaining credentials. ## Vendor installation instructions To integrate with Zoom Reports (using Azure Function) make sure you have: **STEP 1 - Configuration steps for the Zoom API** - [Follow the instructions](https://marketplace.zoom.us/docs/guides/auth/jwt) to obtain the credentials. + [Follow the instructions](https://developers.zoom.us/docs/internal-apps/jwt/) to obtain the credentials. |
spring-apps | Quickstart Provision Standard Consumption Service Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-standard-consumption-service-instance.md | You can create the Azure Container Apps environment in one of two ways: AZURE_CONTAINER_APPS_ENVIRONMENT="<Azure-Container-Apps-environment-name>" ``` +1. Use the following command to create a resource group: ++ ```azurecli + az group create \ + --name $RESOURCE_GROUP \ + --location $LOCATION + ``` + 1. Use the following command to create the Azure Container Apps environment: ```azurecli |