Updates from: 01/18/2024 02:13:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Claim Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claim-resolver-overview.md
Previously updated : 01/11/2024 Last updated : 01/17/2024
-#Customer intent: As a developer using Azure Active Directory B2C custom policies, I want to understand how to use claim resolvers in my technical profiles, so that I can provide context information about authorization requests and populate claims with dynamic values.
+#Customer intent: As a developer using Azure AD B2C custom policies, I want to understand how to use claim resolvers in my technical profiles, so that I can provide context information about authorization requests and populate claims with dynamic values.
Any parameter name included as part of an OIDC or OAuth2 request can be mapped to a claim in the user journey.
| {OAUTH-KV:loyalty_number} | A query string parameter. | 1234 |
| {OAUTH-KV:any custom query string} | A query string parameter. | N/A |
+## SAML key-value parameters
+
+In a SAML authentication request, any parameter name that's included in the request but isn't specific to the protocol (such as `SAMLRequest`) can be mapped to a claim in the user journey. For example, the request may include a custom parameter such as `username`, as shown in the sketch after the table. This applies to both SP-initiated and IDP-initiated SAML requests.
+
+| Claim | Description | Example |
+| -- | -- | --|
+| {SAML-KV:username} | A query string or POST body parameter. | username@domain.com |
+| {SAML-KV:loyalty_number} | A query string or POST body parameter. | 1234 |
+| {SAML-KV:any custom query string} | A query string or POST body parameter. | N/A |
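+
+For illustration, here's a hedged sketch of an SP-initiated SAML request carrying a custom `username` parameter alongside the protocol-specific `SAMLRequest` parameter; the endpoint URL and values are hypothetical, and the custom parameter then becomes addressable as `{SAML-KV:username}`:
+
+```bash
+# Hypothetical SP-initiated SAML request; SAMLRequest is protocol-specific,
+# while "username" is the custom key-value pair picked up by {SAML-KV:username}.
+BASE="https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login"
+curl -G "$BASE" \
+  --data-urlencode "SAMLRequest=<deflated-and-base64-encoded-request>" \
+  --data-urlencode "username=username@domain.com"
+```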
+
## SAML
The following table lists the claim resolvers with information about the SAML authorization request:
active-directory-b2c Configure Authentication Sample Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md
Extract the sample file to a folder where the total length of the path is 260 or fewer characters.
In the project's root directory, follow these steps:
-1. Rename the *app_config.py* file to *app_config.py.OLD*.
-1. Rename the *app_config_b2c.py* file to *app_config.py*. This file contains information about your Azure AD B2C identity provider.
-
-1. Create an `.env` file in the root folder of the project using `.env.sample.b2c` as a guide.
+1. Create an `.env` file in the root folder of the project using `.env.sample` as a guide.
```shell
FLASK_DEBUG=True
- TENANT_NAME=<tenant name>
+ B2C_TENANT_NAME=<tenant name>
CLIENT_ID=<client id>
CLIENT_SECRET=<client secret>
- SIGNUPSIGNIN_USER_FLOW=B2C_1_profile_editing
- EDITPROFILE_USER_FLOW=B2C_1_reset_password
- RESETPASSWORD_USER_FLOW=B2C_1_signupsignin1
+ SIGNUPSIGNIN_USER_FLOW=B2C_1_signupsignin1
+ EDITPROFILE_USER_FLOW=B2C_1_profile_editing
+ RESETPASSWORD_USER_FLOW=B2C_1_reset_password
```
|Key |Value |
|--|--|
- |`TENANT_NAME`| The first part of your Azure AD B2C [tenant name](tenant-management-read-tenant-name.md#get-your-tenant-name) (for example, `contoso`). |
+ |`B2C_TENANT_NAME`| The first part of your Azure AD B2C [tenant name](tenant-management-read-tenant-name.md#get-your-tenant-name) (for example, `contoso`). |
|`CLIENT_ID`| The web API application ID from [step 2.1](#step-21-register-the-app).|
|`CLIENT_SECRET`| The client secret value you created in [step 2.2](#step-22-create-a-web-app-client-secret). |
|`*_USER_FLOW`|The user flows you created in [step 1](#step-1-configure-your-user-flow).|
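As a quick sketch of wiring up the configuration (the `sed` substitution and the sample tenant name are illustrative), you can seed `.env` from the sample file and confirm the keys are present:

```bash
# Copy the sample environment file, then fill in your values.
cp .env.sample .env
# Example substitution; replace "contoso" with your own tenant name.
sed -i 's/<tenant name>/contoso/' .env
# Confirm the B2C settings are present.
grep -E 'B2C_TENANT_NAME|_USER_FLOW' .env
```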
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
The following table summarizes the Security Assertion Markup Language (SAML) app
| - | :--: | -- |
| Azure portal | GA | |
| [Application Insights user journey logs](troubleshoot-with-application-insights.md) | Preview | Used for troubleshooting during development. |
-| [Application Insights event logs](analytics-with-application-insights.md) | Preview | Used to monitor user flows in production. |
+| [Application Insights event logs](analytics-with-application-insights.md) | Preview | Used to monitor user flows and custom policies in production. |
## Other features
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
If you don't already have a Facebook account, sign up at [https://www.facebook.c
1. Select **Save Changes**.
1. From the menu, select the **plus** sign or **Add Product** link next to **PRODUCTS**. Under **Add Products to Your App**, select **Set up** under **Facebook Login**.
1. From the menu, select **Facebook Login**, and then select **Settings**.
-1. In **Valid OAuth redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-id.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-id.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-id` with the id of your tenant, and `your-domain-name` with your custom domain.
+1. In **Valid OAuth redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
1. Select **Save Changes** at the bottom of the page.
1. To make your Facebook application available to Azure AD B2C, select the Status selector at the top right of the page, turn it **On** to make the application public, and then select **Switch Mode**. At this point, the Status should change from **Development** to **Live**. For more information, see [Facebook App Development](https://developers.facebook.com/docs/development/release).
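As a small sketch of the substitution (the tenant name `contoso` is illustrative), note that the redirect URI uses your tenant name twice:

```bash
# Illustrative tenant name; replace with your own.
TENANT="contoso"
echo "https://${TENANT}.b2clogin.com/${TENANT}.onmicrosoft.com/oauth2/authresp"
# Output: https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp
```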
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect.md
Error responses can also be sent to the `redirect_uri` parameter so that the app
```http
GET https://jwt.ms/#
error=access_denied
-&error_description=the+user+canceled+the+authentication
+&error_description=AADB2C90091%3a+The+user+has+cancelled+entering+self-asserted+information.%0d%0aCorrelation+ID%3a+xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx%0d%0aTimestamp%3a+xxxx-xx-xx+xx%3a23%3a27Z%0d%0a
&state=arbitrary_data_you_can_receive_in_the_response
```
Error responses look like:
```json
{
- "error": "access_denied",
- "error_description": "The user revoked access to the app."
+ "error": "invalid_grant",
+ "error_description": "AADB2C90080: The provided grant has expired. Please re-authenticate and try again. Current time: xxxxxxxxxx, Grant issued time: xxxxxxxxxx, Grant expiration time: xxxxxxxxxx\r\nCorrelation ID: xxxxxxxx-xxxx-xxxX-xxxx-xxxxxxxxxxxx\r\nTimestamp: xxxx-xx-16 xx:10:52Z\r\n"
}
```
Error responses look like:
```json
{
- "error": "access_denied",
- "error_description": "The user revoked access to the app.",
+ "error": "invalid_grant",
+ "error_description": "AADB2C90129: The provided grant has been revoked. Please reauthenticate and try again.\r\nCorrelation ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\r\nTimestamp: xxxx-xx-xx xx:xx:xxZ\r\n",
}
```
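As a rough sketch (the payload and parsing are illustrative), client code often needs just the `AADB2C` error code at the front of `error_description`, which standard shell tools can extract:

```bash
# Illustrative error payload from the token endpoint.
ERROR_JSON='{"error":"invalid_grant","error_description":"AADB2C90129: The provided grant has been revoked."}'
# Pull the AADB2C error code out of error_description.
echo "$ERROR_JSON" | jq -r '.error_description' | grep -o 'AADB2C[0-9]*'
# Output: AADB2C90129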
To set the required ID Token in logout requests, see [Configure session behavior
## Next steps
-- Learn more about [Azure AD B2C session](session-behavior.md).
+- Learn more about [Azure AD B2C session](session-behavior.md).
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/restful-technical-profile.md
The following example shows a C# class that returns an error message:
```csharp
public class ResponseContent {
- public string version { get; set; }
- public int status { get; set; }
- public string code { get; set; }
- public string userMessage { get; set; }
- public string developerMessage { get; set; }
- public string requestId { get; set; }
- public string moreInfo { get; set; }
+ public string Version { get; set; }
+ public int Status { get; set; }
+ public string Code { get; set; }
+ public string UserMessage { get; set; }
+ public string DeveloperMessage { get; set; }
+ public string RequestId { get; set; }
+ public string MoreInfo { get; set; }
}
```
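For context, here's a hedged sketch of a response serialized from this class; the endpoint URL, status code, and field values are illustrative, not a contract defined by the article:

```bash
# Hypothetical call to the REST API behind the technical profile.
curl -s -X POST "https://your-api.example.com/api/validate" \
  -H "Content-Type: application/json" \
  -d '{"email": "user@contoso.com"}'
# An error response serialized from ResponseContent might look like:
# {
#   "version": "1.0.0",
#   "status": 409,
#   "code": "API12345",
#   "userMessage": "An account with this email address already exists.",
#   "developerMessage": "Duplicate email found in the directory.",
#   "requestId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
#   "moreInfo": "https://your-api.example.com/errors/API12345"
# }
```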
active-directory-b2c Userinfo Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userinfo-endpoint.md
Last updated 01/11/2024+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userjourneys.md
Previously updated : 01/11/2024 Last updated : 01/17/2024
-#Customer intent: As a developer integrating Azure AD B2C into an application, I want to understand how user journeys, authorization technical profiles, orchestration steps, preconditions, claims provider selection, claims exchanges, and journey lists work, so that I can configure the policy file correctly and ensure a successful user flow.
-
+#Customer intent: As a developer integrating Azure AD B2C into an application, I want to understand how custom policy user journeys work so that I can design the steps that a user goes through for the relying party application to obtain the desired claims.
# UserJourneys
A user journey is represented as an orchestration sequence that must be followed
Orchestration steps can be conditionally executed based on preconditions defined in the orchestration step element. For example, you can perform an orchestration step only if a specific claim exists, or only if a claim equals or doesn't equal a specified value.
-To specify the ordered list of orchestration steps, an **OrchestrationSteps** element is added as part of the policy. This element is required.
+To specify the ordered list of orchestration steps, an **OrchestrationSteps** element is added as part of the policy. This element is required.
```xml
<UserJourney Id="SignUpOrSignIn">
The **OrchestrationStep** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| `Order` | Yes | The order of the orchestration steps. |
+| `Order` | Yes | The order of the orchestration steps. The value of the `Order` attribute starts at `1` and runs through `N`. So, if you have 10 steps and you delete the second step, you need to renumber steps three through 10 to become two through nine. |
| `Type` | Yes | The type of the orchestration step. Possible values: <ul><li>**ClaimsProviderSelection** - Indicates that the orchestration step presents various claims providers to the user to select one.</li><li>**CombinedSignInAndSignUp** - Indicates that the orchestration step presents a combined social provider sign-in and local account sign-up page.</li><li>**ClaimsExchange** - Indicates that the orchestration step exchanges claims with a claims provider.</li><li>**GetClaims** - Specifies that the orchestration step should process claim data sent to Azure AD B2C from the relying party via its `InputClaims` configuration.</li><li>**InvokeSubJourney** - Indicates that the orchestration step exchanges claims with a [sub journey](subjourneys.md).</li><li>**SendClaims** - Indicates that the orchestration step sends the claims to the relying party with a token issued by a claims issuer.</li></ul> |
| ContentDefinitionReferenceId | No | The identifier of the [content definition](contentdefinitions.md) associated with this orchestration step. Usually the content definition reference identifier is defined in the self-asserted technical profile. But, there are some cases when Azure AD B2C needs to display something without a technical profile. There are two examples - if the type of the orchestration step is one of the following: `ClaimsProviderSelection` or `CombinedSignInAndSignUp`, Azure AD B2C needs to display the identity provider selection without having a technical profile. |
| CpimIssuerTechnicalProfileReferenceId | No | The type of the orchestration step is `SendClaims`. This property defines the technical profile identifier of the claims provider that issues the token for the relying party. If absent, no relying party token is created. |
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
description: Full list of available reliability recommendations in Advisor.
Previously updated : 09/27/2023 Last updated : 12/11/2023 # Reliability recommendations
Learn more about [Front Door Profile - RenewExpiredBYOC (Renew the expired Azure
Deploying two or more medium or large sized instances ensures business continuity during outages caused by planned or unplanned maintenance.
-Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more instances to ensure fault tolerance)](https://aka.ms/aa_gatewayrec_learnmore).
+Learn more about [Improve the reliability of your application by using Azure Advisor - Ensure application gateway fault tolerance](/azure/advisor/advisor-high-availability-recommendations#ensure-application-gateway-fault-tolerance).
### Avoid hostname override to ensure site integrity
ai-services Client Libraries Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md
Last updated 10/27/2022 keywords: anomaly detection, algorithms
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries.md
Last updated 10/27/2022 keywords: anomaly detection, algorithms
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
recommendations: false
ai-services Client Libraries Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/client-libraries-rest-api.md
keywords: Azure, artificial intelligence, ai, natural language processing, nlp, LUIS, azure luis, natural language understanding, ai chatbot, chatbot maker, understanding natural language
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
zone_pivot_groups: programming-languages-set-luis
ai-services Developer Reference Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/developer-reference-resource.md
Last updated 01/12/2021
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
ai-services Get Started Get Model Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/get-started-get-model-rest-apis.md
description: In this article, add example utterances to change a model and train
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
ai-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/client-library.md
Last updated 08/07/2023
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
zone_pivot_groups: programming-languages-ocr keywords: Azure AI Vision, Azure AI Vision service
ai-services Identity Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/identity-client-library.md
Last updated 07/04/2023
+ms.devlang: csharp
+# ms.devlang: csharp, golang, javascript, python
keywords: face search by image, facial recognition search, facial recognition, face recognition app
ai-services Image Analysis Client Library 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40.md
Last updated 01/24/2023
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
zone_pivot_groups: programming-languages-computer-vision-40 keywords: Azure AI Vision, Azure AI Vision service
ai-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library.md
Last updated 12/27/2022
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
zone_pivot_groups: programming-languages-computer-vision keywords: Azure AI Vision, Azure AI Vision service
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/client-libraries.md
Last updated 09/28/2021
+ms.devlang: csharp
+# ms.devlang: csharp, java, python
keywords: content moderator, Azure Content Moderator, online moderator, content filtering software
ai-services Image Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/quickstarts/image-classification.md
Last updated 11/03/2022
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
keywords: custom vision, image recognition, image recognition app, image analysis, image recognition software zone_pivot_groups: programming-languages-set-cusvis
ai-services Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/quickstarts/object-detection.md
Last updated 11/03/2022
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
keywords: custom vision zone_pivot_groups: programming-languages-set-one
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md
- ignite-2023 Previously updated : 12/13/2023 Last updated : 01/17/2024
The Document Intelligence containers send billing information to Azure by using
Queries to the container are billed at the pricing tier of the Azure resource used for the API `Key`. You're billed for each container instance used to process your documents and images.
-> [!NOTE]
-> Currently, Document Intelligence v3 containers only support pay as you go pricing. Support for commitment tiers and disconnected mode will be added in March 2023.
-Azure AI containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to always communicate billing information with the billing endpoint. Azure AI containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
-
### Connect to Azure
The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run, but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. See the [Azure AI container FAQ](../../../ai-services/containers/container-faq.yml#how-does-billing-work) for an example of the information sent to Microsoft for billing.
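As a hedged sketch of how those billing arguments are passed at container startup (the image name, tag, port, and resource sizes are illustrative, not the article's exact values):

```bash
# Illustrative container start; Eula, Billing, and ApiKey are the billing arguments.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0 \
  Eula=accept \
  Billing="https://<your-resource-name>.cognitiveservices.azure.com/" \
  ApiKey="<your-resource-key>"
```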
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, python
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/call-api.md
Last updated 12/19/2023
+ms.devlang: http
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/call-api.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, python
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/entity-linking/quickstart.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
keywords: text mining, entity linking zone_pivot_groups: programming-languages-text-analytics
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/quickstart.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
keywords: text mining, key phrase zone_pivot_groups: programming-languages-text-analytics
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/quickstart.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
keywords: text mining, language detection zone_pivot_groups: programming-languages-text-analytics
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/quickstart.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
keywords: text mining, key phrase zone_pivot_groups: programming-languages-text-analytics
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/how-to/call-api.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, python
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/quickstart.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
ai-services Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/quickstart/sdk.md
Last updated 12/19/2023
recommendations: false
+ms.devlang: csharp
+# ms.devlang: csharp, python
zone_pivot_groups: custom-qna-quickstart
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/custom/how-to/call-api.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, python
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/quickstart.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
keywords: text mining, key phrase zone_pivot_groups: programming-languages-text-analytics
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/quickstart.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/quickstart.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
keywords: text mining, health, text analytics for health zone_pivot_groups: programming-languages-text-analytics
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
The default content filtering configuration is set to filter at the medium sever
| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.|
| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
-<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
+<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Content filtering control does not apply to content filters for DALL-E (preview) or GPT-4 Turbo with Vision (preview). Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu).
Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-with-vision.md
In order to use the video prompt enhancement, you need both an Azure AI Vision r
> [!IMPORTANT] > Pricing details are subject to change in the future.
-GPT-4 Turbo with Vision accrues charges like other Azure OpenAI chat models. You pay a per-token rate for the prompts and completions, detailed on the [Pricing page](/pricing/details/cognitive-services/openai-service/). The base charges and additional features are outlined here:
+GPT-4 Turbo with Vision accrues charges like other Azure OpenAI chat models. You pay a per-token rate for the prompts and completions, detailed on the [Pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). The base charges and additional features are outlined here:
Base Pricing for GPT-4 Turbo with Vision is:
- Input: $0.01 per 1000 tokens
This section describes the limitations of GPT-4 Turbo with Vision.
- Get started using GPT-4 Turbo with Vision by following the [quickstart](/azure/ai-services/openai/gpt-v-quickstart).
- For a more in-depth look at the APIs, and to use video prompts in chat, follow the [how-to guide](../how-to/gpt-with-vision.md).
-- See the [completions and embeddings API reference](../reference.md)
+- See the [completions and embeddings API reference](../reference.md)
ai-services How To Multi Slot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-multi-slot.md
Last updated 05/24/2021 zone_pivot_groups: programming-languages-set-six
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
ai-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/quickstart-personalizer-sdk.md
ms.
Last updated 02/02/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
keywords: personalizer, Azure AI Personalizer, machine learning zone_pivot_groups: programming-languages-set-six
ai-services Improve Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/improve-knowledge-base.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
ai-services Quickstart Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/quickstart-sdk.md
Last updated 12/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
zone_pivot_groups: qnamaker-quickstart
ai-services Audio Processing Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/audio-processing-speech-sdk.md
Last updated 09/16/2022
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, java
ai-services Captioning Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-quickstart.md
Last updated 04/23/2022
+ms.devlang: cpp
+# ms.devlang: cpp, csharp
zone_pivot_groups: programming-languages-speech-sdk-cli
ai-services Custom Keyword Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-keyword-basics.md
Last updated 11/12/2021
+ms.devlang: csharp
+# ms.devlang: csharp, objective-c, python
zone_pivot_groups: programming-languages-speech-services
ai-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-speech-recognition-results.md
Last updated 06/13/2022
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-sdk-cli keywords: speech to text, speech to text software
ai-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition.md
Last updated 02/22/2023
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, java, javascript, python
zone_pivot_groups: programming-languages-speech-services keywords: intent recognition
ai-services Get Started Speaker Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speaker-recognition.md
Last updated 01/08/2022
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, javascript
zone_pivot_groups: programming-languages-speech-services keywords: speaker recognition, voice biometry
ai-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-to-text.md
Last updated 08/24/2023
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-services keywords: speech to text, speech to text software
ai-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-text-to-speech.md
Last updated 08/25/2023
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-services keywords: text to speech
ai-services How To Async Meeting Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-async-meeting-transcription.md
Last updated 11/04/2019
+ms.devlang: csharp
+# ms.devlang: csharp, java
zone_pivot_groups: programming-languages-set-twenty-one
ai-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-azure-ad-auth.md
Last updated 06/18/2021 zone_pivot_groups: programming-languages-set-two
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, java, python
# Microsoft Entra authentication with the Speech SDK
ai-services How To Control Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-control-connections.md
Last updated 04/12/2021 zone_pivot_groups: programming-languages-set-thirteen
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, java
ai-services How To Recognize Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-speech.md
Last updated 09/01/2023
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-services keywords: speech to text, speech to text software
ai-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-select-audio-input-devices.md
Last updated 07/05/2019
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, java, javascript, objective-c, python
ai-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis-viseme.md
Last updated 10/23/2022
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, java, javascript, python
zone_pivot_groups: programming-languages-speech-services-nomore-variant
ai-services How To Speech Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis.md
Last updated 08/30/2023
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-services keywords: text to speech
ai-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-track-speech-sdk-memory-usage.md
Last updated 12/10/2019
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, java, objective-c, python
zone_pivot_groups: programming-languages-set-two
ai-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-codec-compressed-audio-input-streams.md
Last updated 04/25/2022
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, golang, java, python
zone_pivot_groups: programming-languages-speech-services
ai-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-logging.md
Last updated 07/05/2019
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
ai-services How To Use Meeting Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-meeting-transcription.md
Last updated 05/06/2023 zone_pivot_groups: acs-js-csharp-python
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
ai-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/multi-device-conversation.md
Last updated 06/25/2020 zone_pivot_groups: programming-languages-set-nine
+ms.devlang: cpp
+# ms.devlang: cpp, csharp
ai-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/voice-assistants.md
Last updated 06/25/2020
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java
zone_pivot_groups: programming-languages-voice-assistants
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
Previously updated : 10/27/2023 Last updated : 1/17/2024
The following regions are supported for Speech service features such as speech t
| Europe | UK South | `uksouth` <sup>1,2,3,4,7</sup>|
| Middle East | UAE North | `uaenorth` <sup>6</sup>|
| South America | Brazil South | `brazilsouth` <sup>6</sup>|
+| Qatar | Qatar Central | `qatarcentral`<sup>8</sup> |
| US | Central US | `centralus` |
| US | East US | `eastus` <sup>1,2,3,4,5,7,9</sup>|
| US | East US 2 | `eastus2` <sup>1,2,4,5</sup>|
ai-services Use Rest Api Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/use-rest-api-programmatically.md
Last updated 01/08/2024 recommendations: false
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
ai-services Document Translation Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/document-translation-rest-api.md
Last updated 07/18/2023 recommendations: false
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
zone_pivot_groups: programming-languages-set-translator
ai-services Quickstart Text Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-rest-api.md
Last updated 09/06/2023
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
ai-services Quickstart Text Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-sdk.md
Last updated 09/06/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-set-translator-sdk
ai-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-text-apis.md
Last updated 07/18/2023
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
keywords: translator, translator service, translate text, transliterate text, language detection
ai-studio Create Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md
You can start or stop a compute instance from the Azure AI Studio.
:::image type="content" source="../media/compute/compute-start-stop.png" alt-text="Screenshot of the option to start or stop a compute instance." lightbox="../media/compute/compute-start-stop.png":::
-
## Next steps
- [Create and manage prompt flow runtimes](./create-manage-runtime.md)
ai-studio Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-runtime.md
Go to the page for runtime details and select **Update**. On the **Edit compute
Every time you open the page for runtime details, AI Studio checks whether there are new versions of the runtime. If new versions are available, a notification appears at the top of the page. You can also manually check the latest version by selecting the **Check version** button.
+## Switch compute instance runtime to automatic runtime
+
+Automatic runtime has the following advantages over compute instance runtime:
+- It automatically manages the lifecycle of the runtime and the underlying compute. You no longer need to create and manage them manually.
+- You can easily customize packages by adding them to the `requirements.txt` file in the flow folder, instead of creating a custom environment.
+
+We recommend that you switch to automatic runtime if you're using compute instance runtime. If you have a compute instance runtime, you can switch it to an automatic runtime (preview) by using the following steps:
+- Prepare your `requirements.txt` file in the flow folder (see the sketch after this list). Make sure that you don't pin the versions of `promptflow` and `promptflow-tools` in `requirements.txt`, because we already include them in the runtime base image. Packages specified in `requirements.txt` are installed when the runtime starts.
+- If you want to keep the automatic runtime (preview) running long term like a compute instance, you can disable the idle shutdown toggle under the automatic runtime (preview) `edit` option.
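+
+Here's a minimal `requirements.txt` sketch, assuming your flow needs a couple of extra packages; the package names and versions are illustrative, and `promptflow` and `promptflow-tools` are deliberately omitted because they're already in the runtime base image:
+
+```bash
+# Create requirements.txt in the flow folder; the packages are examples only.
+cat > requirements.txt <<'EOF'
+requests==2.31.0
+tenacity
+EOF
+```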
+
## Next steps
- [Learn more about prompt flow](./prompt-flow.md)
aks Azure Netapp Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files-smb.md
You must install a Container Storage Interface (CSI) driver to create a Kubernet
```bash
helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
- helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.10.0 –-set windows.enabled=true
+ helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.13.0 --set windows.enabled=true
```
For other methods of installing the SMB CSI Driver, see [Install SMB CSI driver master version on a Kubernetes cluster](https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/install-csi-driver-master.md).
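As a quick check before creating storage classes (a sketch; the label selector is an assumption about how the chart labels its pods), confirm the driver pods are running:

```bash
# List the SMB CSI driver pods installed by the Helm chart.
kubectl get pods -n kube-system -l app.kubernetes.io/name=csi-driver-smb
```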
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
As part of creating an AKS cluster, you may need to customize your cluster confi
## OS configuration
-AKS supports Ubuntu 22.04 as the only node operating system (OS) for clusters with Kubernetes 1.25 and higher. Ubuntu 18.04 can also be specified at node pool creation for Kubernetes versions 1.24 and below.
+AKS supports Ubuntu 22.04 and Azure Linux 2.0 as the node operating system (OS) for clusters with Kubernetes 1.25 and higher. Ubuntu 18.04 can also be specified at node pool creation for Kubernetes versions 1.24 and below.
AKS supports Windows Server 2022 as the default operating system (OS) for Windows node pools in clusters with Kubernetes 1.25 and higher. Windows Server 2019 can also be specified at node pool creation for Kubernetes versions 1.32 and below. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and isn't supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
aks Istio Deploy Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-addon.md
export LOCATION=<location>
### Verify Azure CLI and aks-preview extension versions
The add-on requires:
-* Azure CLI version 2.44.0 or later installed. To install or upgrade, see [Install Azure CLI][install-azure-cli].
-* `aks-preview` Azure CLI extension of version 0.5.133 or later installed
+* Azure CLI version 2.49.0 or later installed. To install or upgrade, see [Install Azure CLI][azure-cli-install].
+* `aks-preview` Azure CLI extension version 0.5.163 or later installed
You can run `az --version` to verify the versions above.
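For instance (a sketch; the exact output shape depends on your CLI version), you can check both versions in one go:

```bash
# Show the Azure CLI version.
az --version | head -n 1
# Show the installed aks-preview extension version.
az extension show --name aks-preview --query version -o tsv
```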
aks Istio Plugin Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-plugin-ca.md
+
+ Title: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview)
+description: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview)
+ Last updated : 12/04/2023
+# Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview)
+
+In the Istio-based service mesh addon for Azure Kubernetes Service (preview), by default the Istio certificate authority (CA) generates a self-signed root certificate and key and uses them to sign the workload certificates. To protect the root CA key, you should use a root CA, which runs on a secure machine offline. You can use the root CA to issue intermediate certificates to the Istio CAs that run in each cluster. An Istio CA can sign workload certificates using the administrator-specified certificate and key, and distribute an administrator-specified root certificate to the workloads as the root of trust. This article addresses how to bring your own certificates and keys for Istio CA in the Istio-based service mesh add-on for Azure Kubernetes Service.
+
+[ ![Diagram that shows root and intermediate CA with Istio.](./media/istio/istio-byo-ca.png) ](./media/istio/istio-byo-ca.png#lightbox)
+
+This article shows how to configure the Istio certificate authority with a root certificate, signing certificate, and key provided as inputs from Azure Key Vault to the Istio-based service mesh add-on.
+
+## Before you begin
+
+### Verify Azure CLI and aks-preview extension versions
+
+The add-on requires:
+* Azure CLI version 2.49.0 or later installed. To install or upgrade, see [Install Azure CLI][install-azure-cli].
+* `aks-preview` Azure CLI extension version 0.5.163 or later installed
+
+You can run `az --version` to verify the versions above.
+
+To install the aks-preview extension, run the following command:
+
+```azurecli-interactive
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli-interactive
+az extension update --name aks-preview
+```
+
+### Register the _AzureServiceMeshPreview_ feature flag
+
+Register the `AzureServiceMeshPreview` feature flag by using the [az feature register][az-feature-register] command:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AzureServiceMeshPreview"
+```
+
+It takes a few minutes for the feature to register. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "AzureServiceMeshPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Set up Azure Key Vault
+
+1. You need an [Azure Key Vault resource][akv-quickstart] to supply the certificate and key inputs to the Istio add-on.
+
+1. You need to generate the root certificate, intermediate certificates, intermediate key, and the certificate chain offline. Steps 1-3 from [here][istio-generate-certs] show an example of how to generate these files.
+
+1. Create secrets in Azure Key Vault using the certificates and key:
+
+ ```bash
+ az keyvault secret set --vault-name $AKV_NAME --name root-cert --file <path-to-folder/root-cert.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name ca-cert --file <path-to-folder/ca-cert.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name ca-key --file <path-to-folder/ca-key.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name cert-chain --file <path/cert-chain.pem>
+ ```
+
+1. Enable [Azure Key Vault provider for Secret Store CSI Driver for your cluster][akv-addon]:
+
+ ```bash
+ az aks enable-addons --addons azure-keyvault-secrets-provider --resource-group $RESOURCE_GROUP --name $CLUSTER
+ ```
+
+ > [!NOTE]
+ > When rotating certificates, to control how quickly the secrets are synced down to the cluster you can use the `--rotation-poll-interval` parameter of the Azure Key Vault Secrets Provider add-on. For example:
+ > `az aks addon update --resource-group $RESOURCE_GROUP --name $CLUSTER --addon azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 20s`
+
+1. Authorize the user-assigned managed identity of the add-on to have access to the Azure Key Vault resource:
+
+ ```bash
+ OBJECT_ID=$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER --query 'addonProfiles.azureKeyvaultSecretsProvider.identity.objectId' -o tsv)
+
+ az keyvault set-policy --name $AKV_NAME --object-id $OBJECT_ID --secret-permissions get list
+ ```
+
+## Set up Istio-based service mesh addon with plug-in CA certificates
+
+1. Enable the Istio service mesh addon for your existing AKS cluster while referencing the Azure Key Vault secrets that were created earlier:
+
+ ```bash
+ az aks mesh enable --resource-group $RESOURCE_GROUP --name $CLUSTER \
+ --root-cert-object-name root-cert \
+ --ca-cert-object-name ca-cert \
+ --ca-key-object-name ca-key \
+ --cert-chain-object-name cert-chain \
+ --key-vault-id /subscriptions/$SUBSCRIPTION/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.KeyVault/vaults/$AKV_NAME
+ ```
+
+ > [!NOTE]
+ > For existing clusters with Istio addon using a self-signed root certificate generated by Istio CA, switching to plugin CA is not supported. You need to [disable the mesh][az-aks-mesh-disable] on these clusters first and then enable it again using the above command to pass through the plugin CA inputs.
++
+1. Verify that the `cacerts` secret gets created on the cluster:
+
+ ```bash
+ kubectl get secret -n aks-istio-system
+ ```
+
+ Expected output:
+
+ ```bash
+ NAME TYPE DATA AGE
+ cacerts opaque 4 13h
+ sh.helm.release.v1.azure-service-mesh-istio-discovery.v380 helm.sh/release.v1 1 2m15s
+ sh.helm.release.v1.azure-service-mesh-istio-discovery.v381 helm.sh/release.v1 1 8s
+ ```
+
+1. Verify that the Istio control plane picked up the custom certificate authority:
+
+ ```bash
+ kubectl logs deploy/istiod-asm-1-17 -c discovery -n aks-istio-system | grep -v validationController | grep x509
+ ```
+
+ Expected output should be similar to:
+
+ ```bash
+ 2023-11-06T15:49:15.493732Z info x509 cert - Issuer: "CN=Intermediate CA - A1,O=Istio,L=cluster-A1", Subject: "", SN: e191d220af347c7e164ec418d75ed19e, NotBefore: "2023-11-06T15:47:15Z", NotAfter: "2033-11-03T15:49:15Z"
+ 2023-11-06T15:49:15.493764Z info x509 cert - Issuer: "CN=Root A,O=Istio", Subject: "CN=Intermediate CA - A1,O=Istio,L=cluster-A1", SN: 885034cba2894f61036f2956fd9d0ed337dc636, NotBefore: "2023-11-04T01:40:02Z", NotAfter: "2033-11-01T01:40:02Z"
+ 2023-11-06T15:49:15.493795Z info x509 cert - Issuer: "CN=Root A,O=Istio", Subject: "CN=Root A,O=Istio", SN: 18e2ee4089c5a7363ec306627d21d9bb212bed3e, NotBefore: "2023-11-04T01:38:27Z", NotAfter: "2033-11-01T01:38:27Z"
+ ```
+
+## Certificate authority rotation
+
+You may need to periodically rotate the certificate authorities for security or policy reasons. This section walks you through how to handle intermediate CA and root CA rotation scenarios.
+
+### Intermediate certificate authority rotation
+
+1. You can rotate the intermediate CA while keeping the root CA the same. Update the secrets in Azure Key Vault resource with the new certificate and key files:
+
+ ```bash
+ az keyvault secret set --vault-name $AKV_NAME --name root-cert --file <path-to-folder/root-cert.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name ca-cert --file <path-to-folder/ca-cert.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name ca-key --file <path-to-folder/ca-key.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name cert-chain --file <path/cert-chain.pem>
+ ```
+
+1. Wait for the time duration of `--rotation-poll-interval`. Check if the `cacerts` secret was refreshed on the cluster based on the new intermediate CA that was updated on the Azure Key Vault resource:
+
+ ```bash
+ kubectl logs deploy/istiod-asm-1-17 -c discovery -n aks-istio-system | grep -v validationController
+ ```
+
+ Expected output should be similar to:
+
+ ```bash
+ 2023-11-07T06:16:21.091844Z info Update Istiod cacerts
+ 2023-11-07T06:16:21.091901Z info Using istiod file format for signing ca files
+ 2023-11-07T06:16:21.354423Z info Istiod has detected the newly added intermediate CA and updated its key and certs accordingly
+ 2023-11-07T06:16:21.354910Z info x509 cert - Issuer: "CN=Intermediate CA - A2,O=Istio,L=cluster-A2", Subject: "", SN: b2753c6a23b54d8364e780bf664672ce, NotBefore: "2023-11-07T06:14:21Z", NotAfter: "2033-11-04T06:16:21Z"
+ 2023-11-07T06:16:21.354967Z info x509 cert - Issuer: "CN=Root A,O=Istio", Subject: "CN=Intermediate CA - A2,O=Istio,L=cluster-A2", SN: 17f36ace6496ac2df88e15878610a0725bcf8ae9, NotBefore: "2023-11-04T01:40:22Z", NotAfter: "2033-11-01T01:40:22Z"
+ 2023-11-07T06:16:21.355007Z info x509 cert - Issuer: "CN=Root A,O=Istio", Subject: "CN=Root A,O=Istio", SN: 18e2ee4089c5a7363ec306627d21d9bb212bed3e, NotBefore: "2023-11-04T01:38:27Z", NotAfter: "2033-11-01T01:38:27Z"
+ 2023-11-07T06:16:21.355012Z info Istiod certificates are reloaded
+ ```
+
+1. The workloads receive certificates from the Istio control plane that are valid for 24 hours by default. If you don't restart the pods, all the workloads obtain new leaf certificates based on the new intermediate CA in 24 hours. If you want to force all these workloads to obtain new leaf certificates right away from the new intermediate CA, then you need to restart the workloads.
++
+ ```bash
+ kubectl rollout restart deployment <deployment name> -n <deployment namespace>
+ ```
+
+### Root certificate authority rotation
+
+1. Update the Azure Key Vault secrets with a root certificate file that contains the concatenation of the old and new root certificates:
+
+ ```bash
+ az keyvault secret set --vault-name $AKV_NAME --name root-cert --file <path-to-folder/root-cert.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name ca-cert --file <path-to-folder/ca-cert.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name ca-key --file <path-to-folder/ca-key.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name cert-chain --file <path/cert-chain.pem>
+ ```
+
+ Contents of `root-cert.pem` follow this format:
+
+ ```
+    -----BEGIN CERTIFICATE-----
+    <contents of old root certificate>
+    -----END CERTIFICATE-----
+    -----BEGIN CERTIFICATE-----
+    <contents of new root certificate>
+    -----END CERTIFICATE-----
+ ```
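+
+    As a one-liner sketch for producing that concatenated file (the input file names are illustrative):
+
+    ```bash
+    # Concatenate the old and new root certificates into root-cert.pem.
+    cat old-root-cert.pem new-root-cert.pem > root-cert.pem
+    ```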
+
+ The add-on includes a `CronJob` that runs every ten minutes on the cluster to check for updates to the root certificate. If it detects an update, it restarts the Istio control plane (the `istiod` deployment) to pick up the updates. You can check its logs to confirm that the root certificate update was detected and that the Istio control plane was restarted:
+
+ ```bash
+ kubectl logs -n aks-istio-system $(kubectl get pods -n aks-istio-system | grep 'istio-cert-validator-cronjob-' | sort -k8 | tail -n 1 | awk '{print $1}')
+ ```
+
+ Expected output:
+
+ ```bash
+ Root certificate update detected. Restarting deployment...
+ deployment.apps/istiod-asm-1-17 restarted
+ Deployment istiod-asm-1-17 restarted.
+ ```
+
+ After `istiod` was restarted, it should indicate that two certificates were added to the trust domain:
+
+ ```bash
+ kubectl logs deploy/istiod-asm-1-17 -c discovery -n aks-istio-system
+ ```
+
+ Expected output:
+
+ ```bash
+ 2023-11-07T06:42:00.287916Z info Using istiod file format for signing ca files
+ 2023-11-07T06:42:00.287928Z info Use plugged-in cert at etc/cacerts/ca-key.pem
+ 2023-11-07T06:42:00.288254Z info x509 cert - Issuer: "CN=Intermediate CA - A2,O=Istio,L=cluster-A2", Subject: "", SN: 286451ca8ff7bf9e6696f56bef829d42, NotBefore: "2023-11-07T06:40:00Z", NotAfter: "2033-11-04T06:42:00Z"
+ 2023-11-07T06:42:00.288279Z info x509 cert - Issuer: "CN=Root A,O=Istio", Subject: "CN=Intermediate CA - A2,O=Istio,L=cluster-A2", SN: 17f36ace6496ac2df88e15878610a0725bcf8ae9, NotBefore: "2023-11-04T01:40:22Z", NotAfter: "2033-11-01T01:40:22Z"
+ 2023-11-07T06:42:00.288298Z info x509 cert - Issuer: "CN=Root A,O=Istio", Subject: "CN=Root A,O=Istio", SN: 18e2ee4089c5a7363ec306627d21d9bb212bed3e, NotBefore: "2023-11-04T01:38:27Z", NotAfter: "2033-11-01T01:38:27Z"
+ 2023-11-07T06:42:00.288303Z info Istiod certificates are reloaded
+ 2023-11-07T06:42:00.288365Z info spiffe Added 2 certs to trust domain cluster.local in peer cert verifier
+ ```
+
+1. You need to either wait for 24 hours (the default time for leaf certificate validity) or force a restart of all the workloads. This way, all workloads recognize both the old and the new certificate authorities for [mTLS verification][istio-mtls-reference].
+
+ ```bash
+ kubectl rollout restart deployment <deployment name> -n <deployment namespace>
+ ```
+
+1. You can now update Azure Key Vault secrets with only the new CA (without the old CA):
+
+ ```bash
+ az keyvault secret set --vault-name $AKV_NAME --name root-cert --file <path-to-folder/root-cert.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name ca-cert --file <path-to-folder/ca-cert.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name ca-key --file <path-to-folder/ca-key.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name cert-chain --file <path/cert-chain.pem>
+ ```
+
+ Check the logs of the `CronJob` to confirm detection of root certificate update and the restart of `istiod`:
++
+ ```bash
+ kubectl logs -n aks-istio-system $(kubectl get pods -n aks-istio-system | grep 'istio-cert-validator-cronjob-' | sort -k8 | tail -n 1 | awk '{print $1}')
+ ```
+
+ Expected output:
+
+ ```bash
+ Root certificate update detected. Restarting deployment...
+ deployment.apps/istiod-asm-1-17 restarted
+ Deployment istiod-asm-1-17 restarted.
+ ```
+
+ After `istiod` restarts, it should confirm the usage of only the new root CA:
+
+ ```bash
+ kubectl logs deploy/istiod-asm-1-17 -c discovery -n aks-istio-system | grep -v validationController
+ ```
+
+ Expected output:
+
+ ```bash
+ 2023-11-07T08:01:17.780299Z info x509 cert - Issuer: "CN=Intermediate CA - B1,O=Istio,L=cluster-B1", Subject: "", SN: 1159747c72cc7ac7a54880cd49b8df0a, NotBefore: "2023-11-07T07:59:17Z", NotAfter: "2033-11-04T08:01:17Z"
+ 2023-11-07T08:01:17.780330Z info x509 cert - Issuer: "CN=Root B,O=Istio", Subject: "CN=Intermediate CA - B1,O=Istio,L=cluster-B1", SN: 2aba0c438652a1f9beae4249457023013948c7e2, NotBefore: "2023-11-04T01:42:12Z", NotAfter: "2033-11-01T01:42:12Z"
+ 2023-11-07T08:01:17.780345Z info x509 cert - Issuer: "CN=Root B,O=Istio", Subject: "CN=Root B,O=Istio", SN: 3f9da6ddc4cb03749c3f43243a4b701ce5eb4e96, NotBefore: "2023-11-04T01:41:54Z", NotAfter: "2033-11-01T01:41:54Z"
+ ```
+
+ From the example outputs shown in this article, you can observe that we moved from Root A (used when enabling the addon) to Root B.
++
+1. You can either again wait for 24 hours or force a restart of all the workloads. Forcing a restart makes the workloads obtain new leaf certificates from the new root CA immediately.
+
+ ```bash
+ kubectl rollout restart deployment <deployment name> -n <deployment namespace>
+ ```
+
+[akv-quickstart]: ../key-vault/general/quick-create-cli.md
+[akv-addon]: ./csi-secrets-store-driver.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-aks-mesh-disable]: /cli/azure/aks/mesh#az-aks-mesh-disable
+[istio-generate-certs]: https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/#plug-in-certificates-and-key-into-the-cluster
+[istio-mtls-reference]: https://istio.io/latest/docs/concepts/security/#mutual-tls-authentication
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
az aks update -g "${RESOURCE_GROUP}" -n myAKSCluster --enable-oidc-issuer --enab
To get the OIDC Issuer URL and save it to an environment variable, run the following command. Replace the default value for the argument `-n`, which is the name of the cluster: ```bash
-export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -otsv)"
+export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -o tsv)"
``` The variable should contain an Issuer URL similar to the following example: ```output
-https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/
+https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/11111111-1111-1111-1111-111111111111/
```
-By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key.
+By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{tenant_id}/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key.
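+
+To sanity-check the issuer, you can fetch its OIDC discovery document; `.well-known/openid-configuration` is the standard OIDC discovery path (a quick sketch):
+
+```bash
+# AKS_OIDC_ISSUER already ends in a slash (see the example output above)
+curl -s "${AKS_OIDC_ISSUER}.well-known/openid-configuration"
+```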
## Create a managed identity
az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${R
Next, let's create a variable for the managed identity ID. ```bash
-export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"
+export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -o tsv)"
``` ## Create Kubernetes service account
EOF
The following output shows successful creation of the service account: ```output
-Serviceaccount/workload-identity-sa created
+serviceaccount/workload-identity-sa created
``` ## Establish federated identity credential
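
For reference, this step typically comes down to one CLI call that links the Kubernetes service account to the managed identity. A sketch using the variable names from the earlier steps (the credential name `myFedIdentity` is a placeholder):

```azurecli-interactive
az identity federated-credential create \
  --name myFedIdentity \
  --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" \
  --resource-group "${RESOURCE_GROUP}" \
  --issuer "${AKS_OIDC_ISSUER}" \
  --subject "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
```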
cat <<EOF | kubectl apply -f -
apiVersion: v1 kind: Pod metadata:
- name: quick-start
+ name: your-pod
namespace: "${SERVICE_ACCOUNT_NAMESPACE}" labels: azure.workload.identity/use: "true" spec: serviceAccountName: "${SERVICE_ACCOUNT_NAME}"
+ containers:
+ - image: <your image>
+ name: <containerName>
EOF ``` > [!IMPORTANT] > Ensure your application pods using workload identity include the label `azure.workload.identity/use: "true"` in your pod spec. Otherwise the pods fail after they're restarted.
-```bash
-kubectl apply -f <your application>
-```
-
-To check whether all properties are injected properly by the webhook, use the [kubectl describe][kubectl-describe] command:
-
-```bash
-kubectl describe pod containerName
-```
-
-To verify that pod is able to get a token and access the resource, use the kubectl logs command:
-
-```bash
-kubectl logs containerName
-```
## Optional - Grant permissions to access Azure Key Vault
You can retrieve this information using the Azure CLI command: [az keyvault list
1. Set an access policy for the managed identity to access secrets in your Key Vault by running the following commands: ```azurecli-interactive
- export RESOURCE_GROUP="myResourceGroup"
- export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
+ export KEYVAULT_RESOURCE_GROUP="myResourceGroup"
export KEYVAULT_NAME="myKeyVault"
- export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -o tsv)"
az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}" ```
+2. Create a secret in Key Vault:
+
+ ```azurecli-interactive
+ export KEYVAULT_SECRET_NAME="my-secret"
+
+ az keyvault secret set --vault-name "${KEYVAULT_NAME}" \
+ --name "${KEYVAULT_SECRET_NAME}" \
+ --value "Hello\!"
+ ```
+
+3. Export Key Vault URL:
+ ```azurecli-interactive
+ export KEYVAULT_URL="$(az keyvault show -g ${KEYVAULT_RESOURCE_GROUP} -n ${KEYVAULT_NAME} --query properties.vaultUri -o tsv)"
+ ```
+
+4. Deploy a pod that references the service account and Key Vault URL above:
+
+ ```yml
+ cat <<EOF | kubectl apply -f -
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: quick-start
+ namespace: ${SERVICE_ACCOUNT_NAMESPACE}
+ labels:
+ azure.workload.identity/use: "true"
+ spec:
+ serviceAccountName: ${SERVICE_ACCOUNT_NAME}
+ containers:
+ - image: ghcr.io/azure/azure-workload-identity/msal-go
+ name: oidc
+ env:
+ - name: KEYVAULT_URL
+ value: ${KEYVAULT_URL}
+ - name: SECRET_NAME
+ value: ${KEYVAULT_SECRET_NAME}
+ nodeSelector:
+ kubernetes.io/os: linux
+ EOF
+ ```
+
+To check whether all properties are injected properly by the webhook, use the [kubectl describe][kubectl-describe] command:
+
+```bash
+kubectl describe pod quick-start | grep "SECRET_NAME:"
+```
+
+If successful, the output should be similar to the following:
+```bash
+ SECRET_NAME: ${KEYVAULT_SECRET_NAME}
+```
+
+To verify that the pod can get a token and access the resource, use the `kubectl logs` command:
+
+```bash
+kubectl logs quick-start
+```
+
+If successful, the output should be similar to the following:
+```bash
+I0114 10:35:09.795900 1 main.go:63] "successfully got secret" secret="Hello\\!"
+```
+ ## Disable workload identity To disable the Microsoft Entra Workload ID on the AKS cluster where it's been enabled and configured, you can run the following command: ```azurecli-interactive
-az aks update --resource-group myResourceGroup --name myAKSCluster --disable-workload-identity
+az aks update --resource-group "${RESOURCE_GROUP}" --name myAKSCluster --disable-workload-identity
``` ## Next steps
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md
Title: Configure Node.js apps description: Learn how to configure a Node.js app in the native Windows instances, or in a pre-built Linux container, in Azure App Service. This article shows the most common configuration tasks.
+ms.devlang: javascript
+# ms.devlang: javascript, devx-track-azurecli
Last updated 01/21/2022
app-service Overview Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md
# Name resolution (DNS) in App Service
-Your app uses DNS when making calls to dependent resources. Resources could be Azure services such as Key Vault, Storage or Azure SQL, but it could also be web apis that your app depends on. When you want to make a call to for example *myservice.com*, you're using DNS to resolve the name to an IP. This article describes how App Service is handling name resolution and how it determines what DNS servers to use. The article also describes settings you can use to configure DNS resolution.
+Your app uses DNS when making calls to dependent resources. Resources could be Azure services such as Key Vault, Storage, or Azure SQL, but they could also be web APIs that your app depends on. When you want to make a call to, for example, *myservice.com*, you're using DNS to resolve the name to an IP address. This article describes how App Service handles name resolution and how it determines which DNS servers to use. The article also describes settings you can use to configure DNS resolution.
## How name resolution works in App Service
-If you aren't integrating your app with a virtual network and you haven't configured custom DNS, your app uses [Azure DNS](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution). If you integrate your app with a virtual network, your app uses the DNS configuration of the virtual network. The default for virtual network is also to use Azure DNS. Through the virtual network, it's also possible to link to [Azure DNS private zones](../dns/private-dns-overview.md) and use that for private endpoint resolution or private domain name resolution.
+If you aren't integrating your app with a virtual network and custom DNS servers aren't configured, your app uses [Azure DNS](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution). If you integrate your app with a virtual network, your app uses the DNS configuration of the virtual network. The default for a virtual network is also to use Azure DNS. Through the virtual network, it's also possible to link to [Azure DNS private zones](../dns/private-dns-overview.md) and use that for private endpoint resolution or private domain name resolution.
If you configured your virtual network with a list of custom DNS servers, name resolution uses these servers. If your virtual network is using custom DNS servers and you're using private endpoints, you should read [this article](../private-link/private-endpoint-dns.md) carefully. You also need to ensure that your custom DNS servers can resolve any public DNS records used by your app. Your DNS configuration needs to either forward requests to a public DNS server, include a public DNS server like Azure DNS in the list of custom DNS servers, or specify an alternative server at the app level.
-When your app needs to resolve a domain name using DNS, the app sends a name resolution request to all configured DNS servers. If the first server in the list returns a response within the timeout limit, you get the result returned immediately. If not, the app waits for the other servers to respond within the timeout period and evaluates the DNS server responses in the order you've configured the servers. If none of the servers respond within the timeout and you have configured retry, you repeat the process.
+When your app needs to resolve a domain name using DNS, the app sends a name resolution request to all configured DNS servers. If the first server in the list returns a response within the timeout limit, you get the result returned immediately. If not, the app waits for the other servers to respond within the timeout period and evaluates the DNS server responses in the order you configured the servers. If none of the servers respond within the timeout and you configured retry, you repeat the process.
## Configuring DNS servers
The individual app allows you to override the DNS configuration by specifying th
az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.dnsConfiguration.dnsServers="['168.63.129.16','xxx.xxx.xxx.xxx']" ```
+## DNS app settings
+
+App Service has existing app settings to configure DNS servers and name resolution behavior. Site properties override the app settings if both exist. Site properties have the advantage of being auditable with Azure Policy and validated at the time of configuration. We recommend that you use site properties.
+ You can still use the existing `WEBSITE_DNS_SERVER` app setting, and you can add custom DNS servers with either setting. If you want to add multiple DNS servers using the app setting, you must separate the servers by commas with no blank spaces added.
-Using the app setting `WEBSITE_DNS_ALT_SERVER`, you append a DNS server to end of the configured DNS servers. You use the setting to configure a fallback server to custom DNS servers from the virtual network.
+Using the app setting `WEBSITE_DNS_ALT_SERVER`, you append the specified DNS server to the list of configured DNS servers. The alternative DNS server is appended both to explicitly configured DNS servers and to DNS servers inherited from the virtual network.
+
+App settings also exist for configuring name resolution behavior and are named `WEBSITE_DNS_MAX_CACHE_TIMEOUT`, `WEBSITE_DNS_TIMEOUT`, and `WEBSITE_DNS_ATTEMPTS`.
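+
+A minimal sketch of setting these with the Azure CLI (the values shown are illustrative placeholders, not recommendations):
+
+```bash
+az webapp config appsettings set --resource-group <group-name> --name <app-name> \
+  --settings WEBSITE_DNS_ATTEMPTS=3 WEBSITE_DNS_TIMEOUT=5 WEBSITE_DNS_MAX_CACHE_TIMEOUT=0
+```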
## Configure name resolution behavior
-If you require fine-grained control over name resolution, App Service allows you to modify the default behavior. You can modify retry attempts, retry timeout and cache timeout. Changing behavior like disabling or lowering cache duration may influence performance.
+If you require fine-grained control over name resolution, App Service allows you to modify the default behavior. You can modify retry attempts, retry timeout, and cache timeout. Changes such as disabling caching or lowering the cache duration can influence performance.
|Property name|Windows default value|Linux default value|Allowed values|Description|
|-|-|-|-|-|
|dnsRetryAttemptCount|1|5|1-5|Defines the number of attempts to resolve where one means no retries.|
-|dnsMaxCacheTimeout|30|0|0-60|DNS results will be cached according to the individual records TTL, but no longer than the defined max cache timeout. Setting cache to zero means you've disabled caching.|
+|dnsMaxCacheTimeout|30|0|0-60|DNS results are cached according to the individual record's TTL, but no longer than the defined max cache timeout. Setting cache to zero means caching is disabled.|
|dnsRetryAttemptTimeout|3|1|1-30|Timeout before retrying or failing. Timeout also defines the time to wait for secondary server results if the primary doesn't respond.|

>[!NOTE]
app-service Quickstart Golang https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-golang.md
Title: 'Quickstart: Create a Go web app'
description: Deploy your first Go (GoLang) Hello World to Azure App Service in minutes. Last updated 10/13/2022
+ms.devlang: golang
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
keywords: app service, azure app service, wordpress, preview, app service on lin
Last updated 05/15/2023
+# ms.devlang: wordpress
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
Last updated 07/31/2023
+ms.devlang: csharp
+# ms.devlang: csharp, azurecli
#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
app-service Tutorial Connect App Access Storage Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md
Last updated 07/31/2023
+ms.devlang: javascript
+# ms.devlang: javascript, azurecli
#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
description: Secure database connectivity (Azure SQL Database, Database for MySQ
keywords: azure app service, web app, security, msi, managed service identity, managed identity, .net, dotnet, asp.net, c#, csharp, node.js, node, python, java, visual studio, visual studio code, visual studio for mac, azure cli, azure powershell, defaultazurecredential -
+ms.devlang: csharp
+# ms.devlang: csharp,java,javascript,python
Last updated 04/12/2022
app-service Tutorial Connect Msi Key Vault Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-javascript.md
Title: 'Tutorial: JavaScript connect to Azure services securely with Key Vault' description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively from a JavaScript web app
+ms.devlang: javascript
+# ms.devlang: javascript, azurecli
Last updated 10/26/2021
app-service Tutorial Connect Msi Key Vault Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-php.md
Title: 'Tutorial: PHP connect to Azure services securely with Key Vault' description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively from a PHP web app
+ms.devlang: csharp
+# ms.devlang: csharp, azurecli
Last updated 10/26/2021
app-service Tutorial Connect Msi Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault.md
Title: 'Tutorial: .NET connect to Azure services securely with Key Vault' description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively from a .NET web app.
+ms.devlang: csharp
+# ms.devlang: csharp, azurecli
Last updated 10/26/2021
app-service Tutorial Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-send-email.md
Title: 'Tutorial: Send email with Azure Logic Apps'
description: Learn how to invoke business processes from your App Service app. Send emails, tweets, and Facebook posts, add to mailing lists, and much more. Last updated 04/08/2020
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, php, python
application-gateway Self Signed Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/self-signed-certificates.md
Previously updated : 04/27/2023 Last updated : 01/17/2024 # Generate an Azure Application Gateway self-signed certificate with a custom root CA
-The Application Gateway v2 SKU introduces the use of Trusted Root Certificates to allow backend servers. This removes authentication certificates that were required in the v1 SKU. The *root certificate* is a Base-64 encoded X.509(.CER) format root certificate from the backend certificate server. It identifies the root certificate authority (CA) that issued the server certificate and the server certificate is then used for the TLS/SSL communication.
+The Application Gateway v2 SKU introduces the use of Trusted Root Certificates to allow TLS connections with the backend servers. This provision removes the use of authentication certificates (individual leaf certificates) that were required in the v1 SKU. The *root certificate* is a Base-64 encoded X.509(.CER) format root certificate from the backend certificate server. It identifies the root certificate authority (CA) that issued the server certificate; the server certificate is then used for the TLS/SSL communication.
-Application Gateway trusts your website's certificate by default if it's signed by a well-known CA (for example, GoDaddy or DigiCert). You don't need to explicitly upload the root certificate in that case. For more information, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md). However, if you have a dev/test environment and don't want to purchase a verified CA signed certificate, you can create your own custom CA and create a self-signed certificate with it.
+Application Gateway trusts your website's certificate by default if it's signed by a well-known CA (for example, GoDaddy or DigiCert). You don't need to explicitly upload the root certificate in that case. For more information, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md). However, if you have a dev/test environment and don't want to purchase a verified CA signed certificate, you can create your own custom Root CA and a leaf certificate signed by that Root CA.
> [!NOTE]
-> Self-signed certificates are not trusted by default and they can be difficult to maintain. Also, they may use outdated hash and cipher suites that may not be strong. For better security, purchase a certificate signed by a well-known certificate authority.
+> Self-generated certificates are not trusted by default, and can be difficult to maintain. Also, they may use outdated hash and cipher suites that may not be strong. For better security, purchase a certificate signed by a well-known certificate authority.
+
+**You can use the following options to generate your private certificate for backend TLS connections.**
+1. Use the one-click private [**certificate generator tool**](https://appgwbackendcertgenerator.azurewebsites.net/). Using the domain name (Common Name) that you provide, this tool performs the same steps as documented in this article to generate Root and Server certificates. With the generated certificate files, you can immediately upload the Root certificate (.CER) file to the Backend Setting of your gateway and the corresponding certificate chain (.PFX) to the backend server. The password for the PFX file is also supplied in the downloaded ZIP file.
+
+2. Use OpenSSL commands to customize and generate certificates to suit your needs. Continue to follow the instructions in this article if you wish to do this entirely on your own.
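+
+For orientation, here's a condensed sketch of that OpenSSL flow (the key algorithm, subject names, and validity periods are placeholders you should adjust):
+
+```bash
+# Create a private key and self-signed root CA certificate
+openssl ecparam -out root.key -name prime256v1 -genkey
+openssl req -new -x509 -key root.key -out root.cer -days 365 -subj "/CN=contoso-root-ca"
+
+# Create a server key and CSR, then sign the leaf certificate with the root CA
+openssl ecparam -out server.key -name prime256v1 -genkey
+openssl req -new -key server.key -out server.csr -subj "/CN=www.contoso.com"
+openssl x509 -req -in server.csr -CA root.cer -CAkey root.key -CAcreateserial \
+  -out server.cer -days 365
+```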
In this article, you will learn how to:
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
description: Learn how to use Azure App Configuration geo replication to create,
+ms.devlang: csharp
+# ms.devlang: csharp, java
Last updated 03/20/2023
azure-app-configuration Rest Api Authentication Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-hmac.md
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, powershell, python
Last updated 08/17/2020
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
Notice that the IP addresses for the gateway, control plane, appliance VM and DN
Arc resource bridge may require a separate user account with the necessary roles to view and manage resources in the on-premises infrastructure (such as Arc-enabled VMware vSphere). If so, during creation of the configuration files, the `username` and `password` parameters will be required. The account credentials are then stored in a configuration file locally within the appliance VM.
+> [!WARNING]
+> Arc resource bridge can only use a user account that does not have multifactor authentication enabled.
If the user account is set to periodically change passwords, [the credentials must be immediately updated on the resource bridge](maintenance.md#update-credentials-in-the-appliance-vm). This user account may also be set with a lockout policy to protect the on-premises infrastructure, in case the credentials aren't updated and the resource bridge makes multiple attempts to use expired credentials to access the on-premises control center. For example, with Arc-enabled VMware, Arc resource bridge needs a separate user account for vCenter with the necessary roles. If the [credentials for the user account change](troubleshoot-resource-bridge.md#insufficient-permissions), then the credentials stored in Arc resource bridge must be immediately updated by running `az arcappliance update-infracredentials` from the [management machine](#management-machine-requirements). Otherwise, the appliance will make repeated attempts to use the expired credentials to access vCenter, which will result in a lockout of the account.
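A sketch of that credential update (assumption: an Arc-enabled VMware deployment; the kubeconfig path is a placeholder):

```bash
# Run from the management machine; prompts for the new vCenter credentials
az arcappliance update-infracredentials vmware --kubeconfig ./kubeconfig
```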
azure-cache-for-redis Cache Remove Tls 10 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-remove-tls-10-11.md
Last updated 09/12/2023
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, php, python
azure-functions Create First Function Vs Code Other https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-other.md
Title: Create a function in Go or Rust using Visual Studio Code - Azure Function
description: Learn how to create a Go function as an Azure Functions custom handler, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Last updated 08/03/2023
+ms.devlang: golang
+# ms.devlang: golang, rust
azure-functions Durable Functions Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-portal.md
description: Learn how to install the Durable Functions extension for Azure Func
Last updated 04/10/2020
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
azure-functions Durable Functions Custom Orchestration Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-custom-orchestration-status.md
description: Learn how to configure and use custom orchestration status for Dura
Last updated 12/07/2022
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
# Custom orchestration status in Durable Functions (Azure Functions)
azure-functions Durable Functions Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-diagnostics.md
Last updated 12/07/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
# Diagnostics in Durable Functions in Azure
azure-functions Durable Functions Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-entities.md
Last updated 10/24/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
zone_pivot_groups: df-languages #Customer intent: As a developer, I want to learn what durable entities are and how to use them to solve distributed, stateful problems in my applications.
azure-functions Durable Functions Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-error-handling.md
description: Learn how to handle errors in the Durable Functions extension for A
Last updated 02/14/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, powershell, python, java
azure-functions Durable Functions Eternal Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-eternal-orchestrations.md
Last updated 12/07/2022
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python, java
# Eternal orchestrations in Durable Functions (Azure Functions)
azure-functions Durable Functions Event Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-event-publishing.md
Title: Durable Functions publishing to Azure Event Grid
description: Learn how to configure automatic Azure Event Grid publishing for Durable Functions. Last updated 05/11/2020
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
azure-functions Durable Functions External Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-external-events.md
description: Learn how to handle external events in the Durable Functions extens
Last updated 12/07/2022
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, powershell, python, java
# Handling external events in Durable Functions (Azure Functions)
azure-functions Durable Functions Http Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-features.md
Last updated 05/10/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
# HTTP Features
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
Last updated 12/07/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
#Customer intent: As a developer, I want to understand the options provided for managing my Durable Functions orchestration instances, so I can keep my orchestrations running efficiently and make improvements.
azure-functions Durable Functions Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-monitor.md
description: Learn how to implement a status monitor using the Durable Functions
Last updated 12/07/2018
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
# Monitor scenario in Durable Functions - Weather watcher sample
azure-functions Durable Functions Node Model Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-node-model-upgrade.md
description: This article shows you how to upgrade your existing Durable Functio
Last updated 04/06/2023
+ms.devlang: javascript
+# ms.devlang: javascript, typescript
azure-functions Durable Functions Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-orchestrations.md
Last updated 02/14/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, powershell, python, java
#Customer intent: As a developer, I want to understand durable orchestrations so that I can use them effectively in my applications.
azure-functions Durable Functions Phone Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-phone-verification.md
description: Learn how to handle human interaction and timeouts in the Durable F
Last updated 12/07/2018
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
azure-functions Durable Functions Sequence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sequence.md
Last updated 06/16/2022
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
azure-functions Durable Functions Serialization And Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-serialization-and-persistence.md
Last updated 07/18/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
#Customer intent: As a developer, I want to understand what data is persisted to durable storage, how that data is serialized, and how I can customize it when it doesn't work the way my app needs it to.
azure-functions Durable Functions Singletons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-singletons.md
Last updated 09/10/2020
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
# Singleton orchestrators in Durable Functions (Azure Functions)
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-timers.md
description: Learn how to implement durable timers in the Durable Functions exte
Last updated 12/07/2022
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, powershell, python, java
# Timers in Durable Functions (Azure Functions)
azure-functions Functions Add Output Binding Azure Sql Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-azure-sql-vs-code.md
zone_pivot_groups: programming-languages-set-functions-temp
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
description: Learn how to connect Azure Functions to an Azure Cosmos DB account
Last updated 02/09/2023 zone_pivot_groups: programming-languages-set-functions-temp
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
azure-functions Functions Add Output Binding Storage Queue Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-cli.md
Title: Connect Azure Functions to Azure Storage using command line tools
description: Learn how to connect Azure Functions to an Azure Storage queue by adding an output binding to your command line project. Last updated 02/07/2020
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python, typescript
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Add Output Binding Storage Queue Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
Title: Connect Azure Functions to Azure Storage using Visual Studio Code
description: Learn how to connect Azure Functions to an Azure Queue Storage by adding an output binding to your Visual Studio Code project. Last updated 01/31/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python, typescript
zone_pivot_groups: programming-languages-set-functions #Customer intent: As an Azure Functions developer, I want to connect my function to Azure Storage so that I can easily write data to a storage queue.
azure-functions Functions Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-best-practices.md
description: Learn best practices for designing, deploying, and maintaining effi
ms.assetid: 9058fb2f-8a93-4036-a921-97a0772f503c Last updated 08/30/2021
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
# Customer intent: As a developer, I want to understand how to correctly design, deploy, and maintain my functions so I can run them in the most safe and efficient way possible. # Best practices for reliable Azure Functions
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Title: Azure Cosmos DB input binding for Functions 2.x and higher
description: Learn to use the Azure Cosmos DB input binding in Azure Functions. Last updated 03/02/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
Title: Azure Cosmos DB output binding for Functions 2.x and higher
description: Learn to use the Azure Cosmos DB output binding in Azure Functions. Last updated 10/05/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
Title: Azure Cosmos DB trigger for Functions 2.x and higher
description: Learn to use the Azure Cosmos DB trigger in Azure Functions. Last updated 04/04/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb.md
Title: Azure Cosmos DB bindings for Functions 1.x
description: Understand how to use Azure Cosmos DB triggers and bindings in Azure Functions 1.x. Last updated 11/21/2017
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
azure-functions Functions Bindings Dapr Input Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-input-secret.md
Title: Dapr Secret input binding for Azure Functions
description: Learn how to access Dapr Secret input binding data during function execution in Azure Functions. Last updated 10/11/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Input State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-input-state.md
Title: Dapr State input binding for Azure Functions
description: Learn how to provide Dapr State input binding data during a function execution in Azure Functions. Last updated 10/11/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Output Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-invoke.md
Title: Dapr Invoke output binding for Azure Functions
description: Learn how to send data to a Dapr Invoke output binding during function execution in Azure Functions. Last updated 10/11/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Output Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-publish.md
Title: Dapr Publish output binding for Azure Functions
description: Learn how to provide Dapr Publish output binding data using Azure Functions. Last updated 10/11/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Output State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-state.md
Title: Dapr State output binding for Azure Functions
description: Learn how to provide Dapr State output binding data during a function execution in Azure Functions. Last updated 10/11/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output.md
Title: Dapr Binding output binding for Azure Functions
description: Learn how to provide Dapr Binding output binding data during a function execution in Azure Functions. Last updated 10/11/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Trigger Svc Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger-svc-invoke.md
Title: Dapr Service Invocation trigger for Azure Functions
description: Learn how to run Azure Functions as Dapr service invocation data changes. Last updated 11/29/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Trigger Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger-topic.md
Title: Dapr Topic trigger for Azure Functions
description: Learn how to run Azure Functions as Dapr topic data changes. Last updated 11/29/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger.md
Title: Dapr Input Bindings trigger for Azure Functions
description: Learn how to run Azure Functions as Dapr input binding data changes. Last updated 11/29/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
description: Learn to send an Event Grid event in Azure Functions.
Last updated 09/22/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
Title: Azure Event Grid trigger for Azure Functions
description: Learn to run code when Event Grid events in Azure Functions are dispatched. Last updated 04/02/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
Title: Azure Functions HTTP trigger
description: Learn how to call an Azure Function via HTTP. Last updated 03/06/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Mobile Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-mobile-apps.md
Title: Mobile Apps bindings for Azure Functions description: Understand how to use Azure Mobile Apps bindings in Azure Functions.
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
Last updated 11/21/2017
azure-functions Functions Bindings Notification Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-notification-hubs.md
Title: Notification Hubs bindings for Azure Functions description: Understand how to use Azure Notification Hub binding in Azure Functions.
+ms.devlang: csharp
+# ms.devlang: csharp, fsharp, javascript
Last updated 11/21/2017
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
ms.assetid:
Last updated 01/21/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
ms.assetid:
Last updated 01/21/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-return-value.md
Title: Using return value from an Azure Function description: Learn to manage return values for Azure Functions
+ms.devlang: csharp
+# ms.devlang: csharp, fsharp, java, javascript, powershell, python
Last updated 07/25/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
Title: Azure Functions SendGrid bindings description: Azure Functions SendGrid bindings reference.
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
Last updated 03/04/2022 zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
description: Learn to send Azure Service Bus messages from Azure Functions.
ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Last updated 03/06/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
description: Learn to run an Azure Function when as Azure Service Bus messages a
ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Last updated 04/04/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
Title: Azure Functions SignalR Service input binding
description: Learn to return a SignalR service endpoint URL and access token in Azure Functions.
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
Last updated 01/13/2022
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
Title: Azure Functions SignalR Service output binding
description: Learn about the SignalR Service output binding for Azure Functions.
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
Last updated 01/13/2023
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
Title: Azure Functions SignalR Service trigger binding
description: Learn to send SignalR Service messages from Azure Functions.
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
Last updated 01/13/2023
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
Title: Azure Blob storage input binding for Azure Functions
description: Learn how to provide Azure Blob storage input binding data to an Azure Function. Last updated 03/02/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
Title: Azure Blob storage output binding for Azure Functions
description: Learn how to provide Azure Blob storage output binding data to an Azure Function. Last updated 03/02/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
Title: Azure Blob storage trigger for Azure Functions
description: Learn how to run an Azure Function as Azure Blob storage data changes. Last updated 09/08/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
Title: Azure Queue storage output binding for Azure Functions
description: Learn to create Azure Queue storage messages in Azure Functions. Last updated 03/06/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Title: Azure Queue storage trigger for Azure Functions
description: Learn to run an Azure Function as Azure Queue storage data changes. Last updated 04/04/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
Title: Azure Tables input bindings for Azure Functions
description: Understand how to use Azure Tables input bindings in Azure Functions. Last updated 11/11/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
Title: Azure Tables output bindings for Azure Functions
description: Understand how to use Azure Tables output bindings in Azure Functions. Last updated 11/11/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
description: Understand how to use timer triggers in Azure Functions.
ms.assetid: d2f013d1-f458-42ae-baf8-1810138118ac Last updated 03/06/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
Title: Azure Functions Twilio binding
description: Understand how to use Twilio bindings with Azure Functions. Last updated 03/04/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
description: Understand how to use the warmup trigger in Azure Functions.
keywords: azure functions, functions, event processing, warmup, cold start, premium, dynamic compute, serverless architecture
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
Last updated 09/04/2023 zone_pivot_groups: programming-languages-set-functions
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
Title: Develop Azure Functions by using Visual Studio Code description: Learn how to develop and test Azure Functions by using the Azure Functions extension for Visual Studio Code.
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
- devdivchpfy22 - vscode-azure-extension-update-complete
At this point, you can do one of these tasks:
## Add a function to your project
-You can add a new function to an existing project baswed on one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
+You can add a new function to an existing project based on one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
::: zone pivot="programming-language-csharp" The results of this action are that a new C# class library (.cs) file is added to your project.
azure-functions Functions Integrate Storage Queue Output Binding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-integrate-storage-queue-output-binding.md
description: Use Azure Functions to create a serverless function that is invoked
ms.assetid: 0b609bc0-c264-4092-8e3e-0784dcc23b5d Last updated 04/24/2020
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
azure-functions Functions Integrate Store Unstructured Data Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-integrate-store-unstructured-data-cosmosdb.md
Title: Store unstructured data using Azure Cosmos DB and Functions
description: Store unstructured data using Azure Functions and Azure Cosmos DB Last updated 10/01/2020
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
# Store unstructured data using Azure Functions and Azure Cosmos DB
azure-functions Functions Node Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-troubleshoot.md
Title: Troubleshoot Node.js apps in Azure Functions
description: Learn how to troubleshoot common errors when you deploy or run a Node.js app in Azure Functions. Last updated 09/20/2023
+ms.devlang: javascript
+# ms.devlang: javascript, typescript
zone_pivot_groups: functions-nodejs-model
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
Title: Migrate to v4 of the Node.js model for Azure Functions
description: This article shows you how to upgrade your existing function apps running on v3 of the Node.js programming model to v4. Last updated 03/15/2023
+ms.devlang: javascript
+# ms.devlang: javascript, typescript
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
description: Understand how to develop functions by using Node.js.
ms.assetid: 45dedd78-3ff9-411f-bb4b-16d29a11384c Last updated 04/17/2023
+ms.devlang: javascript
+# ms.devlang: javascript, typescript
zone_pivot_groups: functions-nodejs-model
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
Title: Target-based scaling in Azure Functions description: Explains target-based scaling behaviors of Consumption plan and Premium plan function apps. Previously updated : 06/16/2023 Last updated : 01/16/2024
Target-based scaling provides a fast and intuitive scaling model for customers and is currently supported for the following extensions:
-- Service Bus queues and topics
-- Storage Queues
-- Event Hubs
-- Azure Cosmos DB
+- [Apache Kafka](#apache-kafka)
+- [Azure Cosmos DB](#azure-cosmos-db)
+- [Azure Event Hubs](#event-hubs)
+- [Azure Queue Storage](#storage-queues)
+- [Azure Service Bus (queue and topics)](#service-bus-queues-and-topics)
Target-based scaling replaces the previous Azure Functions incremental scaling model as the default for these extension types. Incremental scaling added or removed a maximum of one worker at [each new instance rate](event-driven-scaling.md#understanding-scaling-behaviors), with complex decisions for when to scale. In contrast, target-based scaling allows scale up of four instances at a time, and the scaling decision is based on a simple target-based equation:
The following considerations apply when using target-based scaling:
+ Target-based scaling is enabled by default on function app runtime 4.19.0 or a later version.
+ When using target-based scaling, the `functionAppScaleLimit` site setting is still honored. For more information, see [Limit scale out](event-driven-scaling.md#limit-scale-out).
+ To achieve the most accurate scaling based on metrics, use only one target-based triggered function per function app.
-+ When multiple functions in the same function app are all requesting to scale out at the same time, a sum across those functions is used to determine the change in desired instances. Functions requesting to scale-out override functions requesting to scale-in.
++ When multiple functions in the same function app are all requesting to scale out at the same time, a sum across those functions is used to determine the change in desired instances. Functions requesting to scale out override functions requesting to scale in.
+ When there are scale-in requests without any scale-out requests, the maximum scale-in value is used.
## Opting out
This table summarizes the `host.json` values that are used for _target executions per instance_:
| Extension | host.json values | Default Value |
| -- | -- | - |
+| Event Hubs (Extension v5.x+) | extensions.eventHubs.maxEventBatchSize | 100<sup>*</sup> |
+| Event Hubs (Extension v3.x+) | extensions.eventHubs.eventProcessorOptions.maxBatchSize | 10 |
+| Event Hubs (if defined) | extensions.eventHubs.targetUnprocessedEventThreshold | n/a |
| Service Bus (Extension v5.x+, Single Dispatch) | extensions.serviceBus.maxConcurrentCalls | 16 |
| Service Bus (Extension v5.x+, Single Dispatch Sessions Based) | extensions.serviceBus.maxConcurrentSessions | 8 |
| Service Bus (Extension v5.x+, Batch Processing) | extensions.serviceBus.maxMessageBatchSize | 1000 |
| Service Bus (Functions v2.x+, Single Dispatch) | extensions.serviceBus.messageHandlerOptions.maxConcurrentCalls | 16 |
| Service Bus (Functions v2.x+, Single Dispatch Sessions Based) | extensions.serviceBus.sessionHandlerOptions.maxConcurrentSessions | 2000 |
| Service Bus (Functions v2.x+, Batch Processing) | extensions.serviceBus.batchOptions.maxMessageCount | 1000 |
-| Event Hubs (Extension v5.x+) | extensions.eventHubs.maxEventBatchSize | 100<sup>1</sup> |
-| Event Hubs (Extension v3.x+) | extensions.eventHubs.eventProcessorOptions.maxBatchSize | 10 |
-| Event Hubs (if defined) | extensions.eventHubs.targetUnprocessedEventThreshold | n/a |
| Storage Queue | extensions.queues.batchSize | 16 |
-<sup>1</sup> The default `maxEventBatchSize` changed in [v6.0.0](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs/6.0.0) of the `Microsoft.Azure.WebJobs.Extensions.EventHubs` package. In earlier versions, this was 10.
+<sup>*</sup> The default `maxEventBatchSize` changed in [v6.0.0](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs/6.0.0) of the `Microsoft.Azure.WebJobs.Extensions.EventHubs` package. In earlier versions, this value was 10.
-For Azure Cosmos DB _target executions per instance_ is set in the function attribute:
+For some binding extensions, _target executions per instance_ is set using a function attribute:
| Extension | Function trigger setting | Default Value |
| -- | -- | - |
-| Azure Cosmos DB | maxItemsPerInvocation | 100 |
+| Apache Kafka | `lagThreshold` | 1000 |
+| Azure Cosmos DB | `maxItemsPerInvocation` | 100 |
To learn more, see the [example configurations for the supported extensions](#supported-extensions).
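
As an illustrative sketch, the values from the `host.json` table earlier in this section are set like any other extension setting. For example, this minimal `host.json` sets the Service Bus (Extension v5.x+) batch value to its default:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxMessageBatchSize": 1000
    }
  }
}
```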
In [runtime scale monitoring](functions-networking-options.md?tabs=azure-cli#pre
| Extension Name | Minimum Version Needed |
| -- | - |
-| Storage Queue | 5.1.0 |
+| Apache Kafka | 3.9.0 |
+| Azure Cosmos DB | 4.1.0 |
| Event Hubs | 5.2.0 |
| Service Bus | 5.9.0 |
-| Azure Cosmos DB | 4.1.0 |
+| Storage Queue | 5.1.0 |
## Dynamic concurrency support
-Target-based scaling introduces faster scaling, and uses defaults for _target executions per instance_. When using Service Bus or Storage queues, you can also enable [dynamic concurrency](functions-concurrency.md#dynamic-concurrency). In this configuration, the _target executions per instance_ value is determined automatically by the dynamic concurrency feature. It starts with limited concurrency and identifies the best setting over time.
+Target-based scaling introduces faster scaling, and uses defaults for _target executions per instance_. When using Service Bus, Storage queues, or Kafka, you can also enable [dynamic concurrency](functions-concurrency.md#dynamic-concurrency). In this configuration, the _target executions per instance_ value is determined automatically by the dynamic concurrency feature. It starts with limited concurrency and identifies the best setting over time.
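
As an illustrative sketch, dynamic concurrency is turned on in the `concurrency` section of `host.json`:

```json
{
  "version": "2.0",
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
```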
## Supported extensions
For **v2.x+** of the Storage extension, modify the `host.json` setting `batchSize`.
> [!NOTE]
> **Scale efficiency:** For the storage queue extension, messages with [visibilityTimeout](/rest/api/storageservices/put-message#uri-parameters) are still counted in _event source length_ by the Storage Queue APIs. This can cause overscaling of your function app. Consider using Service Bus queues for scheduled messages, [limiting scale out](event-driven-scaling.md#limit-scale-out), or not using visibilityTimeout for your solution.

### Azure Cosmos DB

Azure Cosmos DB uses a function-level attribute, `MaxItemsPerInvocation`. The way you set this function-level attribute depends on your function language.
-# [C#](#tab/csharp)
+#### [C#](#tab/csharp)
For a compiled C# function, set `MaxItemsPerInvocation` in your trigger definition, as shown in the following sketch of an in-process C# function (the database, container, and connection setting names mirror the `function.json` example later in this article and are placeholders):

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace CosmosDBSamplesV2
{
    public static class CosmosTrigger
    {
        [FunctionName("CosmosTrigger")]
        public static void Run([CosmosDBTrigger(
            databaseName: "databaseName",
            containerName: "collectionName",
            Connection = "MyCosmosDb",
            LeaseContainerName = "leases",
            MaxItemsPerInvocation = 100)] IReadOnlyList<MyDocument> input,
            ILogger log)
        {
            // MaxItemsPerInvocation caps the batch size per invocation, which is the
            // target executions per instance value used by target-based scaling.
            log.LogInformation($"Documents modified: {input.Count}");
        }
    }

    // Minimal placeholder type for the changed documents.
    public class MyDocument
    {
        public string Id { get; set; }
    }
}
```
-# [Java](#tab/java)
+#### [Java](#tab/java)
Java example pending.
-# [JavaScript/PowerShell/Python](#tab/node+powershell+python)
+#### [Python](#tab/python)
+
+For Functions languages that use `function.json`, the `MaxItemsPerInvocation` parameter is defined in the specific binding, as in this Azure Cosmos DB trigger example:
+
+```json
+{
+ "scriptFile": "main.py",
+ "bindings": [
+ {
+ "type": "cosmosDBTrigger",
+ "maxItemsPerInvocation": 100,
+ "connection": "MyCosmosDb",
+ "leaseContainerName": "leases",
+ "containerName": "collectionName",
+ "databaseName": "databaseName",
+ "leaseDatabaseName": "databaseName",
+ "createLeaseContainerIfNotExists": false,
+ "startFromBeginning": false,
+ "name": "input"
+ }
+ ]
+}
+```
+
+Examples for the Python v2 programming model aren't yet available.
+
+#### [JavaScript/PowerShell/TypeScript](#tab/node+powershell)
For Functions languages that use `function.json`, the `MaxItemsPerInvocation` parameter is defined in the specific binding, as in this Azure Cosmos DB trigger example:
```json
{
    "bindings": [
        {
            "type": "cosmosDBTrigger",
            "maxItemsPerInvocation": 100,
            "connection": "MyCosmosDb",
            "leaseContainerName": "leases",
            "containerName": "collectionName",
            "databaseName": "databaseName",
            "leaseDatabaseName": "databaseName",
            "createLeaseContainerIfNotExists": false,
            "startFromBeginning": false,
            "name": "input"
        }
    ]
}
```
-Examples for the Python v2 programming model and the JavaScript v4 programming model aren't yet available.
+Examples for the Node.js v4 programming model aren't yet available.
> [!NOTE]
> Since Azure Cosmos DB is a partitioned workload, the target instance count for the database is capped by the number of physical partitions in your container. To learn more about Azure Cosmos DB scaling, see [physical partitions](../cosmos-db/nosql/change-feed-processor.md#dynamic-scaling) and [lease ownership](../cosmos-db/nosql/change-feed-processor.md#dynamic-scaling).
+### Apache Kafka
+
+The Apache Kafka extension uses a function-level attribute, `LagThreshold`. For Kafka, the number of _desired instances_ is calculated based on the total consumer lag divided by the `LagThreshold` setting. For a given lag, reducing the lag threshold increases the number of desired instances.
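+
+For example, a total consumer lag of 10,000 events with the default `LagThreshold` of 1000 yields a target of 10 instances, while lowering the threshold to 500 would target 20 instances.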
+
+The way you set this function-level attribute depends on your function language. This example sets the threshold to `100`.
+
+#### [C#](#tab/csharp)
+
+For a compiled C# function, set `LagThreshold` in your trigger definition, as shown in the following example of an in-process C# function with a Kafka Event Hubs trigger:
+
+```C#
+[FunctionName("KafkaTrigger")]
+public static void Run(
+ [KafkaTrigger("BrokerList",
+ "topic",
+ Username = "$ConnectionString",
+ Password = "%EventHubConnectionString%",
+ Protocol = BrokerProtocol.SaslSsl,
+ AuthenticationMode = BrokerAuthenticationMode.Plain,
+ ConsumerGroup = "$Default",
+ LagThreshold = 100)] KafkaEventData<string> kevent, ILogger log)
+{
+ log.LogInformation($"C# Kafka trigger function processed a message: {kevent.Value}");
+}
+```
+
+#### [Java](#tab/java)
+
+```java
+public class KafkaTriggerMany {
+ @FunctionName("KafkaTriggerMany")
+ public void runMany(
+ @KafkaTrigger(
+ name = "kafkaTriggerMany",
+ topic = "topic",
+ brokerList="%BrokerList%",
+ consumerGroup="$Default",
+ username = "$ConnectionString",
+ password = "EventHubConnectionString",
+ authenticationMode = BrokerAuthenticationMode.PLAIN,
+ protocol = BrokerProtocol.SASLSSL,
+            lagThreshold = 100,
+ // sslCaLocation = "confluent_cloud_cacert.pem", // Enable this line for windows.
+ cardinality = Cardinality.MANY,
+ dataType = "string"
+ ) String[] kafkaEvents,
+ final ExecutionContext context) {
+ for (String kevent: kafkaEvents) {
+ context.getLogger().info(kevent);
+ }
+    }
+}
+```
+
+#### [Python](#tab/python)
+
+For Functions languages that use `function.json`, the `LagThreshold` parameter is defined in the specific binding, as in this Kafka Event Hubs trigger example:
+
+```json
+{
+ "scriptFile": "main.py",
+ "bindings": [
+ {
+ "type": "kafkaTrigger",
+ "name": "kevent",
+ "topic": "topic",
+ "brokerList": "%BrokerList%",
+ "username": "$ConnectionString",
+ "password": "EventHubConnectionString",
+ "consumerGroup" : "functions",
+ "protocol": "saslSsl",
+ "authenticationMode": "plain",
+      "lagThreshold": "100"
+ }
+ ]
+}
+```
+
+The Python v2 programming model isn't currently supported by the Kafka extension.
+
+#### [JavaScript/PowerShell/TypeScript](#tab/node+powershell)
+
+For Functions languages that use `function.json`, the `LagThreshold` parameter is defined in the specific binding, as in this Kafka Event Hubs trigger example:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "kafkaTrigger",
+ "name": "kafkaEvent",
+ "direction": "in",
+ "protocol" : "SASLSSL",
+ "password" : "EventHubConnectionString",
+ "dataType" : "string",
+ "topic" : "topic",
+ "authenticationMode" : "PLAIN",
+ "consumerGroup" : "$Default",
+ "username" : "$ConnectionString",
+ "brokerList" : "%BrokerList%",
+ "sslCaLocation": "confluent_cloud_cacert.pem",
+ "lagThreshold": "100"
+ }
+ ]
+}
+```
+
+The Node.js v4 programming model isn't currently supported by the Kafka extension.
+---

## Next steps

To learn more, see the following articles:
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
Use the following procedure to create a ServiceNow connection.
| **Server Url** | Enter the URL of the ServiceNow instance that you want to connect to ITSMC. The URL should point to a supported SaaS version with the suffix *.servicenow.com* (for example `https://XXXXX.service-now.com/`).|
| **Username** | Enter the integration username that you created in the ServiceNow app to support the connection to ITSMC.|
| **Password** | Enter the password associated with this username. **Note**: The username and password are used for generating authentication tokens only. They're not stored anywhere within the ITSMC service. |
- | **Client Id** | Enter the client ID that you want to use for OAuth2 authentication, which you generated earlier. For more information on generating a client ID and a secret, see [Set up OAuth](https://old.wiki/index.php/OAuth_Setup). |
+ | **Client Id** | Enter the client ID that you want to use for OAuth2 authentication, which you generated earlier. For more information on generating a client ID and a secret, see [Set up OAuth](https://learn.microsoft.com/azure/azure-monitor/alerts/itsmc-connections-servicenow#oauth-setup). |
| **Client Secret** | Enter the client secret generated for this ID. |
| **Data Sync Scope (in Days)** | Enter the number of past days that you want the data from. The limit is 120 days. |
| **Work Items To Sync** | Select the ServiceNow work items that you want to sync to Azure Log Analytics, through ITSMC. The selected values are imported into Log Analytics. Options are incidents and change requests.|
When you're successfully connected and synced:
The payload that is sent to ServiceNow has a common structure. The structure has a section of `<Description>` that contains all the alert data.
-The structure of the payload for all alert types except log search alert is [common schema](./alerts-common-schema.md).
+The structure of the payload for all alert types except log search V1 alert is [common schema](./alerts-common-schema.md).
-For Log Search Alerts (V1 and V2), the structure is:
+For Log Search Alerts (V1 only), the structure is:
- Alert (alert rule name) : \<value>
- Search Query : \<value>
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs
description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Last updated 09/12/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, vb
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
Title: Filtering and preprocessing in the Application Insights SDK | Microsoft D
description: Write telemetry processors and telemetry initializers for the SDK to filter or add properties to the data before the telemetry is sent to the Application Insights portal. Last updated 11/15/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Title: Application Map in Azure Application Insights | Microsoft Docs
description: Monitor complex application topologies with Application Map and Intelligent view. Last updated 07/10/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Title: Microsoft Entra authentication for Application Insights
description: Learn how to enable Microsoft Entra authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources. Last updated 11/15/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Azure Application Insights
description: Application performance monitoring for Azure virtual machines and virtual machine scale sets. Last updated 03/22/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Use the following script to identify your Application Insights resources by inge
#### Example

```azurecli
-Get-AzApplicationInsights -SubscriptionId '1234abcd-5678-efgh-9012-ijklmnopqrst' | Format-Table -Property Name, IngestionMode, Id, @{label='Type';expression={
+Get-AzApplicationInsights -SubscriptionId 'Your Subscription ID' | Format-Table -Property Name, IngestionMode, Id, @{label='Type';expression={
if ([string]::IsNullOrEmpty($_.IngestionMode)) { 'Unknown' } elseif ($_.IngestionMode -eq 'LogAnalytics') {
azure-monitor Distributed Trace Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-trace-data.md
description: This article provides information about distributed tracing and tel
Last updated 10/14/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Title: Add, modify, and filter Azure Monitor OpenTelemetry for .NET, Java, Node.
description: This article provides guidance on how to add, modify, and filter OpenTelemetry for applications using Azure Monitor. Last updated 12/15/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, typescript, python
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Title: Configure Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python
description: This article provides configuration guidance for .NET, Java, Node.js, and Python applications. Last updated 12/27/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, typescript, python
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python ap
description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Last updated 12/20/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, typescript, python
azure-netapp-files Azure Policy Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-policy-definitions.md
Title: Azure Policy definitions for Azure NetApp Files | Microsoft Docs description: Describes the Azure Policy custom definitions and built-in definitions that you can use with Azure NetApp Files. - -- Last updated 06/02/2022
To learn how to assign a policy to resources and view compliance report, see [As
## Next steps
-* [Azure Policy documentation](../governance/policy/index.yml)
+* [Azure Policy documentation](../governance/policy/index.yml)
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
Title: Create parameters files for Bicep deployment
description: Create parameters file for passing in values during deployment of a Bicep file Previously updated : 11/03/2023 Last updated : 01/17/2024 # Create parameters files for Bicep deployment
From Azure CLI, you can pass a parameter file with your Bicep file deployment.
With Azure CLI version 2.53.0 or later, and [Bicep CLI version 0.22.X or higher](./install.md), you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there is no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch results in an "Only a .bicep template is allowed with a .bicepparam file" error.

```azurecli
# Resource group and file names below are placeholders.
az deployment group create \
  --name ExampleDeployment \
  --resource-group ExampleGroup \
  --parameters storage.bicepparam
```
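
For reference, a minimal `.bicepparam` file uses a `using` statement to point at its Bicep file (file and parameter names here are illustrative):

```bicep
using './storage.bicep'

param storageAccountName = 'examplestorage'
```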
For more information, see [Deploy resources with Bicep and Azure PowerShell](./d
## Parameter precedence
-You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence. This feature hasn't been implemented for Bicep parameters file.
+You can use inline parameters and a local parameters file in the same deployment operation. For example, you can specify some values in the local parameters file and add other values inline during deployment. If you provide values for a parameter in both the local parameters file and inline, the inline value takes precedence.
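
For example, with illustrative file and parameter names, the following command takes most values from a local JSON parameters file but overrides `location` inline:

```azurecli
az deployment group create \
  --resource-group ExampleGroup \
  --template-file storage.bicep \
  --parameters storage.parameters.json \
  --parameters location=westus2
```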
It's possible to use an external parameters file, by providing the URI to the file. When you use an external parameters file, you can't pass other values either inline or from a local file. All inline parameters are ignored. Provide all parameter values in the external file.
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
$destinationResourceGroup = Get-AzResourceGroup -Name $destinationName
$resources = Get-AzResource -ResourceGroupName $sourceName | Where-Object { $_.Name -in $resourcesToMove }
Invoke-AzResourceAction -Action validateMoveResources `
--ResourceId $sourceResourceGroup.ResourceId `
--Parameters @{ resources= $resources.ResourceId;targetResourceGroup = $destinationResourceGroup.ResourceId }
+ -ResourceId $sourceResourceGroup.ResourceId `
+ -Parameters @{
+ resources = $resources.ResourceId; # Wrap in an @() array if providing a single resource ID string.
+ targetResourceGroup = $destinationResourceGroup.ResourceId
+ }
```

If validation passes, you see no output.
azure-signalr Signalr Concept Serverless Development Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md
Last updated 04/20/2022
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Title: Container workloads on Azure Batch
description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Last updated 01/10/2024
+ms.devlang: csharp
+# ms.devlang: csharp, python
batch Batch Linux Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-linux-nodes.md
Title: Run Linux on virtual machine compute nodes
description: Learn how to process parallel compute workloads on pools of Linux virtual machines in Azure Batch. Last updated 05/18/2023
+ms.devlang: csharp
+# ms.devlang: csharp, python
zone_pivot_groups: programming-languages-batch-linux-nodes
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md
Title: Use the Azure Compute Gallery to create a custom image pool
description: Custom image pools are an efficient way to configure compute nodes to run your Batch workloads. Last updated 11/09/2023
+ms.devlang: csharp
+# ms.devlang: csharp, python
batch Batch User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-user-accounts.md
description: Learn the types of user accounts and how to configure them.
Last updated 05/16/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, python
# Run tasks under user accounts in Batch
batch Large Number Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/large-number-tasks.md
Title: Submit a large number of tasks to a Batch job
description: Learn how to efficiently submit a very large number of tasks in a single Azure Batch job. Last updated 08/25/2021
+ms.devlang: csharp
+# ms.devlang: csharp, python
# Submit a large number of tasks to a Batch job
cloud-services Cloud Services Php Create Web Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-php-create-web-role.md
Title: Create Azure web and worker roles for PHP
description: A guide to creating PHP web and worker roles in an Azure cloud service, and configuring the PHP runtime. documentationcenter: php - ms.assetid: 9f7ccda0-bd96-4f7b-a7af-fb279a9e975b
+ms.devlang: php
Last updated 04/11/2018
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 12/12/2023 Last updated : 01/16/2024
The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.

## December 2023 Guest OS
->[!NOTE]
->The December Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the December Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
| --- | --- | --- | --- | --- |
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 12/08/2023 Last updated : 01/16/2024
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **January 16, 2024**
+The December Guest OS has released.
+
###### **December 8, 2023**
The November Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-7.36_202312-01 | January 16, 2024 | Post 7.39 |
| WA-GUEST-OS-7.35_202311-01 | December 8, 2023 | Post 7.38 |
| WA-GUEST-OS-7.34_202310-01 | October 23, 2023 | Post 7.37 |
-| WA-GUEST-OS-7.32_202309-01 | September 25, 2023 | Post 7.36 |
+|~~WA-GUEST-OS-7.32_202309-01~~| September 25, 2023 | Post 7.36 |
|~~WA-GUEST-OS-7.30_202308-01~~| August 21, 2023 | October 23, 2023 |
|~~WA-GUEST-OS-7.28_202307-01~~| July 27, 2023 | September 25, 2023 |
|~~WA-GUEST-OS-7.27_202306-02~~| July 8, 2023 | August 21, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-6.66_202312-01 | January 16, 2024 | Post 6.69 |
| WA-GUEST-OS-6.65_202311-01 | December 8, 2023 | Post 6.68 |
| WA-GUEST-OS-6.64_202310-01 | October 23, 2023 | Post 6.67 |
-| WA-GUEST-OS-6.62_202309-01 | September 25, 2023 | Post 6.66 |
+|~~WA-GUEST-OS-6.62_202309-01~~| September 25, 2023 | Post 6.66 |
|~~WA-GUEST-OS-6.61_202308-01~~| August 21, 2023 | October 23, 2023 |
|~~WA-GUEST-OS-6.60_202307-01~~| July 27, 2023 | September 25, 2023 |
|~~WA-GUEST-OS-6.59_202306-02~~| July 8, 2023 | August 21, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-5.90_202312-01 | January 16, 2024 | Post 5.93 |
| WA-GUEST-OS-5.89_202311-01 | December 8, 2023 | Post 5.92 |
| WA-GUEST-OS-5.88_202310-01 | October 23, 2023 | Post 5.91 |
-| WA-GUEST-OS-5.86_202309-01 | September 25, 2023 | Post 5.90 |
+|~~WA-GUEST-OS-5.86_202309-01~~| September 25, 2023 | Post 5.90 |
|~~WA-GUEST-OS-5.85_202308-01~~| August 21, 2023 | October 23, 2023 |
|~~WA-GUEST-OS-5.84_202307-01~~| July 27, 2023 | September 25, 2023 |
|~~WA-GUEST-OS-5.83_202306-02~~| July 8, 2023 | August 21, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-4.126_202312-01 | January 16, 2024 | Post 4.129 |
| WA-GUEST-OS-4.125_202311-01 | December 8, 2023 | Post 4.128 |
| WA-GUEST-OS-4.124_202310-01 | October 23, 2023 | Post 4.127 |
-| WA-GUEST-OS-4.122_202309-01 | September 25, 2023 | Post 4.126 |
+|~~WA-GUEST-OS-4.122_202309-01~~| September 25, 2023 | Post 4.126 |
|~~WA-GUEST-OS-4.121_202308-01~~| August 21, 2023 | October 23, 2023 |
|~~WA-GUEST-OS-4.120_202307-01~~| July 27, 2023 | September 25, 2023 |
|~~WA-GUEST-OS-4.119_202306-02~~| July 8, 2023 | August 21, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-3.134_202312-01 | January 16, 2024 | Post 3.137 |
| WA-GUEST-OS-3.133_202311-01 | December 8, 2023 | Post 3.136 |
| WA-GUEST-OS-3.132_202310-01 | October 23, 2023 | Post 3.135 |
-| WA-GUEST-OS-3.130_202309-01 | September 25, 2023 | Post 3.134 |
+|~~WA-GUEST-OS-3.130_202309-01~~| September 25, 2023 | Post 3.134 |
|~~WA-GUEST-OS-3.129_202308-01~~| August 21, 2023 | October 23, 2023 |
|~~WA-GUEST-OS-3.128_202307-01~~| July 27, 2023 | September 25, 2023 |
|~~WA-GUEST-OS-3.127_202306-02~~| July 8, 2023 | August 21, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| --- | --- | --- |
+| WA-GUEST-OS-2.146_202312-01 | January 16, 2024 | Post 2.149 |
| WA-GUEST-OS-2.145_202311-01 | December 8, 2023 | Post 2.148 |
| WA-GUEST-OS-2.144_202310-01 | October 23, 2023 | Post 2.146 |
-| WA-GUEST-OS-2.142_202309-01 | September 25, 2023 | Post 2.144 |
+|~~WA-GUEST-OS-2.142_202309-01~~| September 25, 2023 | Post 2.144 |
|~~WA-GUEST-OS-2.141_202308-01~~| August 21, 2023 | October 23, 2023 |
|~~WA-GUEST-OS-2.140_202307-01~~| July 27, 2023 | September 25, 2023 |
|~~WA-GUEST-OS-2.139_202306-02~~| July 8, 2023 | August 21, 2023 |
communication-services Whatsapp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advanced-messaging/whatsapp/whatsapp-overview.md
Azure Communication Services enables you to send and receive WhatsApp messages using the Azure Communication Services Messaging SDK. This SDK can be used to engage in conversation with a customer for product inquiry and customer service scenarios. It can also be used to send out messages like appointment reminders, shipping updates, two-factor authentication, and other notification scenarios.
+
## Advanced Messaging for WhatsApp features
The key features of Azure Communications Services Advanced Messaging for WhatsApp include:
communication-services Email Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-overview.md
Azure Communication Services offers an intelligent communication platform to ena
With Azure Communication Services Email, you can speed up your market entry with scalable and reliable email features using your own SMTP domains. As with other communication channels, Email lets you pay only for what you use.
+
## Key principles of Azure Communication Services Email
Key principles of Azure Communication Services Email Service include:
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
The following sections provide information about known issues associated with th
### Chrome M115 - regression
-Chrome version 115 for Android introduced a regression when making video calls - the result of this bug is a user making a call on Azure Communication Services with this version of Chrome will have no outgoing video in Group and Azure Communication Services-Microsoft Teams calls.
-- This is a known regression introduced on [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1469318)
-- As a short term mitigation please instruct users to use Microsoft Edge or Firefox on Android, or avoid using Google Chrome 115/116 on Android
+Chrome version 115 for Android introduced a regression when making video calls - the result of this bug is a user making a call on Azure Communication Services with this version of Chrome has no outgoing video in Group and Azure Communication Services-Microsoft Teams calls.
+- This regression is a known issue introduced on [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1469318)
+- As a short term mitigation, instruct users to use Microsoft Edge or Firefox on Android, or avoid using Google Chrome 115/116 on Android
### Firefox Known Issues
Firefox desktop browser support is now available in public preview. Known issues are:
-- Enumerating speakers is not available: If you're using Firefox, your app won't be able to enumerate or select speakers through the Communication Services device manager. In this scenario, you must select devices via the operating system.
-- Virtual cameras are not currently supported when making Firefox desktop audio\video calls.
+- Enumerating speakers isn't available: If you're using Firefox, your app can't enumerate or select speakers through the Communication Services device manager. In this scenario, you must select devices via the operating system.
+- Virtual cameras aren't currently supported when making Firefox desktop audio\video calls.
### iOS Chrome Known Issues
iOS Chrome browser support is now available in public preview. Known issues are:
- No outgoing and incoming audio when switching browser to background or locking the device
-- No incoming/outgoing audio coming from bluetooth headset. When a user connects bluetooth headset in the middle of Azure Communication Services call, the audio still comes out from the speaker until the user locks and unlocks the phone. We have seen this issue on older iOS versions (15.6, 15.7), and it is not reproducible on iOS 16.
+- No incoming/outgoing audio coming from bluetooth headset. When a user connects bluetooth headset in the middle of Azure Communication Services call, the audio still comes out from the speaker until the user locks and unlocks the phone. We have seen this issue on older iOS versions (15.6, 15.7), and it isn't reproducible on iOS 16.
### iOS 16 introduced bugs when putting browser in the background during a call
-The iOS 16 release has introduced a bug that can stop the Azure Communication Services audio\video call when using Safari mobile browser. Apple is aware of this issue and is looking for a fix on their side. The impact could be that an Azure Communication Services call might stop working during a call and the only resolution to get it working again is to have the end customer restart their phone.
+The iOS 16 release introduced a bug that can stop the Azure Communication Services audio\video call when using Safari mobile browser. Apple is aware of this issue and is looking for a fix on their side. The impact could be that an Azure Communication Services call might stop working during a call and the only resolution to get it working again is to have the end customer restart their phone.
To reproduce this bug:
- Have a user using an iPhone running iOS 16
Results:
### Chrome M98 - regression
Chrome version 98 introduced a regression with abnormal generation of video keyframes that impacts resolution of a sent video stream negatively for majority (70%+) of users.
-- This is a known regression introduced on [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1295815)
+- This regression is a known issue introduced on [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1295815)
### While on a PSTN call, the user can still hear audio from the ACS call
This issue happens when an Android Chrome user experiences an incoming PSTN call.
After answering the PSTN call, the microphone in the ACS call becomes muted.
-The outgoing audio of the ACS call is muted, so other participants won't hear the user who is the PSTN call.
-It's worth noting that the user's incoming audio is not muted, and this behavior is inherent to the browser.
+The outgoing audio of the ACS call is muted, so other participants can't hear the user who is on the PSTN call.
+It's worth noting that the user's incoming audio isn't muted, and this behavior is inherent to the browser.
### No incoming audio during a call Occasionally, a user in an Azure Communication Services call may not be able to hear the audio from remote participants.
-There is a related [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1402250) bug that causes this issue, the issue can be mitigated by reconnecting the PeerConnection. We've added this workaround since SDK 1.9.1 (stable) and SDK 1.10.0 (beta)
+There's a related [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1402250) bug that causes this issue; it can be mitigated by reconnecting the PeerConnection. We added this workaround in SDK 1.9.1 (stable) and SDK 1.10.0 (beta).
-On Android Chrome, if a user joins Azure Communication Services call several times, the incoming audio can also disappear. The user is not able to hear the audio from other participants until the page is refreshed. We've fixed this issue in SDK 1.10.1-beta.1, and improved the audio resource usage.
+On Android Chrome, if a user joins Azure Communication Services call several times, the incoming audio can also disappear. The user isn't able to hear the audio from other participants until the page is refreshed. We fixed this issue in SDK 1.10.1-beta.1, and improved the audio resource usage.
### Some Android devices failing call scenarios except for group calls.
-A number of specific Android devices fail to start, accept calls, and meetings. The devices that run into this issue, won't recover and will fail on every attempt. These are mostly Samsung model A devices, particularly models A326U, A125U and A215U.
-- This is a known regression introduced on [Chromium](https://bugs.chromium.org/p/webrtc/issues/detail?id=13223).
+Many specific Android devices fail to start, accept calls, and meetings. The devices that run into this issue can't recover and fail on every attempt. These are mostly Samsung model A devices, particularly models A326U, A125U, and A215U.
+- This regression is a known issue introduced on [Chromium](https://bugs.chromium.org/p/webrtc/issues/detail?id=13223).
### Android Chrome mutes the call after browser goes to background for one minute
-On Android Chrome, if a user is on an Azure Communication Services call and puts the browser into background for one minute. The microphone will lose access and the other participants in the call won't hear the audio from the user. Once the user brings the browser to foreground, microphone is available again. Related chromium bugs [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940)
+On Android Chrome, if a user is on an Azure Communication Services call and puts the browser into the background for one minute, the microphone loses access and the other participants in the call can't hear the audio from the user. Once the user brings the browser to the foreground, the microphone is available again. Related chromium bugs [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940)
### A mobile (iOS and Android) user has dropped the call but is still showing up on the participant list.
-The problem can occur if a mobile user leaves the Azure Communication Services group call without using the Call.hangUp() API. When a mobile user closes the browser or refreshes the webpage without hang up, other participants in the group call will still see this mobile user on the participant list for about 60 seconds.
+The problem can occur if a mobile user leaves the Azure Communication Services group call without using the Call.hangUp() API. When a mobile user closes the browser or refreshes the webpage without hanging up, other participants in the group call can still see this mobile user on the participant list for about 60 seconds.
### iOS Safari refreshes the page if the user goes to another app and returns back to the browser
the browser page may refresh. This is because OS kills the browser. One way to m
### iOS 15.1 users joining group calls or Microsoft Teams meetings.
-* Sometimes when incoming PSTN is received the tab with the call or meeting will hang. Related WebKit bugs [here](https://bugs.webkit.org/show_bug.cgi?id=233707) and [here](https://bugs.webkit.org/show_bug.cgi?id=233708#c0).
+* Sometimes, when an incoming PSTN call is received, the tab with the call or meeting hangs. Related WebKit bugs [here](https://bugs.webkit.org/show_bug.cgi?id=233707) and [here](https://bugs.webkit.org/show_bug.cgi?id=233708#c0).
### Local microphone/camera mutes when certain interruptions occur on iOS Safari and Android Chrome.
This problem can occur if another application or the operating system takes over
- An incoming call arrives via PSTN (Public Switched Telephone Network), and it captures the microphone device access.
- A user plays a YouTube video, for example, or starts a FaceTime call. Switching to another native application can capture access to the microphone or camera.
-- A user enables Siri, which will capture access to the microphone.
+- A user enables Siri, which captures access to the microphone.
-On iOS, for example, while on an Azure Communication Services call, if a PSTN call comes in, then a microphoneMutedUnexepectedly bad UFD will be raised and audio will stop flowing in the Azure Communication Services call and the call will be marked as muted. Once the PSTN call is over, the user will have to go and unmute the Azure Communication Services call for audio to start flowing again in the Azure Communication Services call. In the case of Android Chrome when a PSTN call comes in, audio will stop flowing in the Azure Communication Services call and the Azure Communication Services call will not be marked as muted. In this case, there is no microphoneMutedUnexepectedly UFD event. Once the PSTN call is finished, Android Chrome will regain audio automatically and audio will start flowing normally again in the Azure Communication Services call.
+On iOS, for example, while on an Azure Communication Services call, if a PSTN call comes in, then a microphoneMutedUnexpectedly bad UFD is raised, audio stops flowing in the Azure Communication Services call, and the call is marked as muted. Once the PSTN call is over, the user has to unmute the Azure Communication Services call for audio to start flowing again. In the case of Android Chrome, when a PSTN call comes in, audio stops flowing in the Azure Communication Services call and the Azure Communication Services call isn't marked as muted. In this case, there's no microphoneMutedUnexpectedly UFD event. Once the PSTN call is finished, Android Chrome regains audio automatically and audio starts flowing normally again in the Azure Communication Services call.
-In case camera is on and an interruption occurs, Azure Communication Services call may or may not lose the camera. If lost then camera will be marked as off and user will have to go turn it back on after the interruption has released the camera.
+If the camera is on when an interruption occurs, the Azure Communication Services call may or may not lose the camera. If lost, the camera is marked as off, and the user has to turn it back on after the interruption releases the camera.
-Occasionally, microphone or camera devices won't be released on time, and that can cause issues with the original call. For example, if the user tries to unmute while watching a YouTube video, or if a PSTN call is active simultaneously.
+Occasionally, microphone or camera devices aren't released on time, and that can cause issues with the original call. For example, if the user tries to unmute while watching a YouTube video, or if a PSTN call is active simultaneously.
-Incoming video streams won't stop rendering if the user is on iOS 15.2+ and is using SDK version 1.4.1-beta.1+, the unmute/start video steps will still be required to re-start outgoing audio and video.
+Incoming video streams don't stop rendering if the user is on iOS 15.2+ and is using SDK version 1.4.1-beta.1+; the unmute/start video steps are still required to restart outgoing audio and video.
-For iOS 15.4+, audio and video should be able to auto recover on most of the cases. On some edge cases, to unmute, an API to 'unmute' must be called by the application (can be as a result of user action) to recover the outgoing audio.
+For iOS 15.4+, audio and video should be able to auto recover in most cases. In some edge cases, the application must call an API to 'unmute' (which can be the result of a user action) to recover the outgoing audio.
### iOS with Safari crashes and refreshes the page if a user tries to switch from front camera to back camera.
This issue is fixed in Azure Communication Services Calling SDK version 1.3.1-be
* iOS Safari version: 15.1
-### Screen sharing in macOS Ventura Safari (v16.3 and below)
-Screen sharing does not work in macOS Ventura Safari(v16.3 and below). Known issue from Safari and will be fixed in v16.4+
+### Screen sharing in macOS Ventura Safari (v16.3 and lower)
+Screen sharing doesn't work in macOS Ventura Safari (v16.3 and lower). This is a known Safari issue; it will be fixed in v16.4+.
### Refreshing a page doesn't immediately remove the user from their call
-If a user is in a call and decides to refresh the page, the Communication Services media service won't remove this user immediately from the call. It will wait for the user to rejoin. The user will be removed from the call after the media service times out.
+If a user is in a call and decides to refresh the page, the Communication Services media service doesn't remove this user immediately from the call. It waits for the user to rejoin. The user is removed from the call after the media service times out.
It's best to build user experiences that don't require end users to refresh the page of your application while in a call. If a user refreshes the page, reuse the same Communication Services user ID after that user returns back to the application. By rejoining with the same user ID, the user is represented as the same, existing object in the `remoteParticipants` collection. From the perspective of other participants in the call, the user remains in the call during the time it takes to refresh the page, up to a minute or two.
-If the user was sending video before refreshing, the `videoStreams` collection will keep the previous stream information until the service times out and removes it. In this scenario, the application might decide to observe any new streams added to the collection, and render one with the highest `id`.
+If the user was sending video before refreshing, the `videoStreams` collection keeps the previous stream information until the service times out and removes it. In this scenario, the application might decide to observe any new streams added to the collection, and render one with the highest `id`.
-### It's not possible to render multiple previews from multiple devices on web
+### It isn't possible to render multiple previews from multiple devices on web
-This is a known limitation. For more information, see [Calling SDK overview](./voice-video-calling/calling-sdk-features.md).
+This issue is a known limitation. For more information, see [Calling SDK overview](./voice-video-calling/calling-sdk-features.md).
### Enumerating devices isn't possible in Safari when the application runs on iOS or iPadOS
-Applications can't enumerate or select speaker devices (like Bluetooth) on Safari iOS or iPadOS. This is a known limitation of these operating systems.
+Applications can't enumerate or select speaker devices (like Bluetooth) on Safari iOS or iPadOS. This issue is a known limitation of these operating systems.
-If you're using Safari on macOS, your app won't be able to enumerate or select speakers through the Communication Services device manager. In this scenario, you must select devices via the operating system. If you use Chrome on macOS, the app can enumerate or select devices through the Communication Services device manager.
+If you're using Safari on macOS, your app can't enumerate or select speakers through the Communication Services device manager. In this scenario, you must select devices via the operating system. If you use Chrome on macOS, the app can enumerate or select devices through the Communication Services device manager.
* iOS Safari version: 15.1
Switching between video devices might cause your video stream to pause while the
### Bluetooth headset microphone isn't detected or audible during the call on Safari on iOS
-Bluetooth headsets aren't supported by Safari on iOS. Your Bluetooth device won't be listed in available microphone options, and other participants won't be able to hear you if you try using Bluetooth over Safari.
+Bluetooth headsets aren't supported by Safari on iOS. Your Bluetooth device isn't listed in available microphone options, and other participants aren't able to hear you if you try using Bluetooth over Safari.
-This is a known operating system limitation. With Safari on macOS and iOS/iPadOS, it's not possible to enumerate or select speaker devices through Communication Services device manager. This is because Safari doesn't support the enumeration or selection of speakers. In this scenario, use the operating system to update your device selection.
+This behavior is a known operating system limitation. With Safari on macOS and iOS/iPadOS, it isn't possible to enumerate or select speaker devices through the Communication Services device manager, because Safari doesn't support the enumeration or selection of speakers. In this scenario, use the operating system to update your device selection.
### Rotation of a device can create poor video quality
When users rotate a device, this movement can degrade the quality of video that is streaming.
-The environment in which this problem occurs is the following:
+This problem occurs in the following environments:
- Devices affected: Google Pixel 5, Google Pixel 3a, Apple iPad 8, and Apple iPad X
- Client library: Calling (JavaScript)
The environment in which this problem occurs is the following:
When a Communication Services user joins a call by using the JavaScript calling SDK, and then selects the camera switch button, the UI might become unresponsive. The user must then refresh the application, or push the browser to the background.
-The environment in which this problem occurs is the following:
+This problem occurs in the following environments:
- Devices affected: Google Pixel 4a
- Client library: Calling (JavaScript)
The environment in which this problem occurs is the following:
### Video signal problem when the call is in connecting state
-If a user turns video on and off quickly while the call is in the *Connecting* state, this might lead to a problem with the stream acquired for the call. It's best for developers to build their apps in a way that doesn't require video to be turned on and off while the call is in the *Connecting* state. Degraded video performance might occur in the following scenarios:
+If a user turns video on and off quickly while the call is in the *Connecting* state, this action might lead to a problem with the stream acquired for the call. It's best for developers to build their apps in a way that doesn't require video to be turned on and off while the call is in the *Connecting* state. Degraded video performance might occur in the following scenarios:
- If the user starts with audio, and then starts and stops video, while the call is in the *Connecting* state.
- If the user starts with audio, and then starts and stops video, while the call is in the *Lobby* state.

### Enumerating or accessing devices for Safari on macOS and iOS
-In certain environments, you might notice that device permissions are reset after some period of time. On macOS and iOS, Safari doesn't keep permissions for a long time unless there is a stream acquired. The simplest way to work around this is to call the `DeviceManager.askDevicePermission()` API, before calling the device manager's device enumeration APIs. These enumeration APIs include `DeviceManager.getCameras()`, `DeviceManager.getSpeakers()`, and `DeviceManager.getMicrophones()`. If the permissions are there, the user won't see anything. If the permissions aren't there, the user will be prompted for the permissions again.
+In certain environments, you might notice that device permissions are reset after some period of time. On macOS and iOS, Safari doesn't keep permissions for a long time unless there's a stream acquired. The simplest way to work around this limitation is to call the `DeviceManager.askDevicePermission()` API, before calling the device manager's device enumeration APIs. These enumeration APIs include `DeviceManager.getCameras()`, `DeviceManager.getSpeakers()`, and `DeviceManager.getMicrophones()`. If the permissions are there, the user doesn't see anything. If the permissions aren't there, the user is prompted for the permissions again.
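+
+A minimal sketch of this workaround, assuming `deviceManager` was obtained from `callClient.getDeviceManager()`:
+
+```javascript
+// Ask for device permission first so Safari re-grants it when it has expired.
+await deviceManager.askDevicePermission({ audio: true, video: true });
+
+// Enumerate devices only after the permission prompt has resolved.
+const cameras = await deviceManager.getCameras();
+const microphones = await deviceManager.getMicrophones();
+const speakers = await deviceManager.getSpeakers();
+```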
-The environment in which this problem occurs is the following:
+This problem occurs in the following environments:
- Device affected: iPhone
- Client library: Calling (JavaScript)
During an ongoing group call, suppose that _User A_ sends video, and then _User
### Using third-party libraries during the call might result in audio loss
-If you use `getUserMedia` separately inside the application, the audio stream is lost. This is because a third-party library takes over device access from the Azure Communication Services library.
+If you use `getUserMedia` separately inside the application, the audio stream is lost, because a third-party library takes over device access from the Azure Communication Services library.
- Don't use third-party libraries that are using the `getUserMedia` API internally during the call.
- If you still need to use a third-party library, the only way to recover the audio stream is to change the selected device (if the user has more than one), or to restart the call.
-The environment in which this problem occurs is the following:
+This problem occurs in the following environments:
- Browser: Safari
- Operating system: iOS
-The cause of this problem might be that acquiring your own stream from the same device will have a side effect of running into race conditions. Acquiring streams from other devices might lead the user into insufficient USB/IO bandwidth, and the `sourceUnavailableError` rate will skyrocket.
+The cause of this problem might be that acquiring your own stream from the same device has a side effect of running into race conditions. Acquiring streams from other devices might lead the user into insufficient USB/IO bandwidth, and the `sourceUnavailableError` rate skyrockets.
-### Excessive use of certain APIs like mute/unmute will result in throttling on Azure Communication Services infrastructure
+### Excessive use of certain APIs like mute/unmute results in throttling on Azure Communication Services infrastructure
As a result of the mute/unmute API call, Azure Communication Services infrastructure informs other participants in the call about the state of audio of a local participant who invoked mute/unmute, so that participants in the call know who is muted/unmuted.
-Excessive use of mute/unmute will be blocked in Azure Communication Services infrastructure. That will happen if the participant (or application on behalf of participant) will attempt to mute/unmute continuously, every second, more than 15 times in a 30-second rolling window.
+Excessive use of mute/unmute is blocked in Azure Communication Services infrastructure. Throttling happens if the participant (or application on behalf of participant) attempts to mute/unmute continuously, every second, more than 15 times in a 30-second rolling window.
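+
+As an illustrative client-side guard (not part of the SDK), an app can rate-limit its own mute/unmute requests to stay under that threshold:
+
+```javascript
+// Rolling-window guard: stay under 15 mute/unmute requests per 30 seconds.
+const WINDOW_MS = 30000;
+const MAX_CALLS = 15;
+const timestamps = [];
+
+async function safeToggleMute(call) {
+  const now = Date.now();
+  // Drop requests that fell out of the 30-second rolling window.
+  while (timestamps.length > 0 && now - timestamps[0] > WINDOW_MS) {
+    timestamps.shift();
+  }
+  if (timestamps.length >= MAX_CALLS) {
+    return; // Skip the request instead of being throttled server-side.
+  }
+  timestamps.push(now);
+  await (call.isMuted ? call.unmute() : call.mute());
+}
+```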
## Communication Services Call Automation APIs
-The following are known issues in the Communication Services Call Automation APIs:
+The following limitations are known issues in the Communication Services Call Automation APIs:
- The only authentication currently supported for server applications is to use a connection string.
The following are known issues in the Communication Services Call Automation API
## Group call limitations for JS web Calling SDK users
-Up to 350 users can join a group call using the JS web calling SDK. Once the call size reaches 100+ participants in the call, only the top 4 most dominant speakers that have their video camera turned on will be available to be seen. When the number of people on the call is 100+ the viewable number of incoming renders will go from 3x3 (9 incoming videos) down to 2x2 (4 incoming videos). When the number of users goes below 100, the number of supported incoming video goes back up to 3x3 (9 incoming videos).
+Up to 350 users can join a group call, room, or Teams meeting. Only 100 users can join through the JS web calling SDK or Teams web client; the remaining users need to join through the Android/iOS/Windows calling SDK or the Teams desktop/mobile client. Once the call size reaches 100+ participants, only the top 4 most dominant speakers that have their video camera turned on are seen. When the number of people on the call is 100+, the viewable number of incoming renders goes from 3x3 (9 incoming videos) down to 2x2 (4 incoming videos). When the number of users goes below 100, the number of supported incoming videos goes back up to 3x3 (9 incoming videos).
## Android API emulators
-When utilizing Android API emulators on Android 5.0 (API level 21) and Android 5.1 (API level 22) some crashes are expected.
+When utilizing Android API emulators on Android 5.0 (API level 21) and Android 5.1 (API level 22), some crashes are expected.
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
Numbers can be purchased on eligible Azure subscriptions and in geographies wher
> - [Saudi Arabia](../numbers/phone-number-management-for-saudi-arabia.md)
> - [Singapore](../numbers/phone-number-management-for-singapore.md)
> - [Slovakia](../numbers/phone-number-management-for-slovakia.md)
+> - [South Africa](../numbers/phone-number-management-for-south-africa.md)
> - [South Korea](../numbers/phone-number-management-for-south-korea.md) > - [Spain](../numbers/phone-number-management-for-spain.md) > - [Sweden](../numbers/phone-number-management-for-sweden.md)
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
|-||--| |Toll-free |N/A |USD 0.2632/min |
+## South Africa telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 22.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.0844/min |
+ ## South Korea telephony offers ### Phone number leasing charges |Number type |Monthly fee |
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md
# Reference documentation overview ## External links and docs For each area, we have external pages to track and review our SDKs. You can consult the table below to find the matching page for your SDK of interest.
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
The following timeouts apply to the Communication Services Calling SDKs:
For more information about the voice and video calling SDK and service, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md) page or [known issues](./known-issues.md).
+## Job Router
+When sending or receiving a high volume of requests, you might receive a `ThrottleLimitExceededException` error. This error indicates you're hitting the service limits, and your requests are dropped until the token bucket that handles requests is replenished after a certain time.
+
+Rate Limits for Job Router:
+
+|Operation|Scope|Timeframe (seconds)| Limit (number of requests) | Timeout in seconds|
+||--|-|-|-|
+|General Requests|Per Resource|10|1000|10|
+
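Because the limit behaves like a token bucket that refills over time, a common client-side mitigation is to retry throttled requests with exponential backoff. A minimal sketch (the name-based exception check is a placeholder, not a specific Job Router SDK signature):

```python
import random
import time

def call_with_backoff(operation, max_attempts=5):
    """Retry an operation that may be throttled, using exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception as error:
            # Placeholder check: match the throttling exception by type name.
            if "ThrottleLimitExceeded" not in type(error).__name__:
                raise
            if attempt == max_attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... plus jitter, giving the token
            # bucket time to replenish.
            time.sleep(2 ** attempt + random.random())
```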
+### Action to take
+If you need to send a volume of requests that exceeds the rate limits, email us at acs-ccap@microsoft.com.
+ ## Teams Interoperability and Microsoft Graph When using a Teams interoperability scenario, you'll likely use some Microsoft Graph APIs to create [meetings](/graph/cloud-communications-online-meetings).
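For example, an online meeting can be created with a single authenticated POST to Microsoft Graph. A minimal sketch using the `requests` library, assuming an access token with the `OnlineMeetings.ReadWrite` delegated permission was already acquired (for example, via MSAL):

```python
import requests

# An access token acquired beforehand (placeholder value shown here).
access_token = "<access-token>"

response = requests.post(
    "https://graph.microsoft.com/v1.0/me/onlineMeetings",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"subject": "Interop meeting with Azure Communication Services"},
)
response.raise_for_status()

# The joinWebUrl can be shared with Communication Services users to join.
print(response.json()["joinWebUrl"])
```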
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/concepts.md
Azure Communication Services enables you to send and receive SMS text messages using the Communication Services SMS SDKs. These SDKs can be used to support customer service scenarios, appointment reminders, two-factor authentication, and other real-time communication needs. Communication Services SMS allows you to reliably send messages while exposing deliverability and response metrics. + ## SMS features Key features of Azure Communication Services SMS SDKs include:
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md
Azure Communication Services can be used to build custom applications and experi
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWGTqQ] + ## User identity models Azure Communication Services supports two types of Teams interoperability depending on the identity of the user:
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/telephony-concept.md
Azure Communication Services Calling SDKs can be used to add telephony and Public Switched Telephone Network access to your applications. This page summarizes key telephony concepts and capabilities. See the [calling library](../../quickstarts/voice-video-calling/getting-started-with-calling.md) to learn more about specific SDK languages and capabilities. + ## Overview of telephony Whenever your users interact with a traditional telephone number, calls are facilitated by PSTN (Public Switched Telephone Network) voice calling. To make and receive PSTN calls, you need to add telephony capabilities to your Azure Communication Services resource. In this case, signaling and media use a combination of IP-based and PSTN-based technologies to connect your users. Communication Services provides two discrete ways to reach the PSTN network: Voice Calling (PSTN) and Azure direct routing.
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
# Calling SDK overview ++ The Calling SDK enables end-user devices to drive voice and video communication experiences. This page provides detailed descriptions of Calling features, including platform and browser support information. To get started right away, check out [Calling quickstarts](../../quickstarts/voice-video-calling/getting-started-with-calling.md) or [Calling hero sample](../../samples/calling-hero-sample.md). Once you've started development, check out the [known issues page](../known-issues.md) to find bugs we're working on.
communication-services Local Testing Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/event-grid/local-testing-event-grid.md
Testing Event Grid triggered Azure Functions locally can be complicated. You don
- Install [Postman](https://www.postman.com/downloads/). - Have a running Azure Function that can be triggered by Event Grid. If you don't have one, you can follow the [quickstart](../../../azure-functions/functions-bindings-event-grid-trigger.md?tabs=in-process%2Cextensionv3&pivots=programming-language-javascript) to create one.
-The Azure Function can be running either in Azure if you want to test it with some test events or if you want to test the entire flow locally (press `F5` in Visual Studio Code to run it locally). If you want to test the entire flow locally, you need to use [ngrok](https://ngrok.com/) to hook your locally running Azure Function. Configure ngrok by running the command:
+The Azure Function can run in Azure if you want to test it with some test events, or locally if you want to test the entire flow (press `F5` in Visual Studio Code to run it locally). If you want to test the entire flow with an externally triggered webhook, you need to use [ngrok](https://ngrok.com/) to expose your locally running Azure Function
+to the internet, allowing it to be triggered by external sources (for example, Azure Event Grid webhooks). Configure ngrok by running the command:
```bash ngrok http 7071
+```
+Keep in mind that exposing development resources publicly might not be secure. If you prefer, you can run the entire workflow locally without ngrok by sending requests to:
+```
+http://localhost:7071/runtime/webhooks/EventGrid?functionName={functionname}
``` ## Configure Postman
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
# What is Azure Communication Services? + Azure Communication Services are cloud-based services with REST APIs and client library SDKs available to help you integrate communication into your applications. You can add communication to your applications without being an expert in underlying technologies such as media encoding or telephony. Azure Communication Service is available in multiple [Azure geographies](concepts/privacy.md) and Azure for government. >[!VIDEO https://www.youtube.com/embed/chMHVHLFcao]
Scenarios for Azure Communication Services include:
To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com/watch?v=apBX7ASurgM) or the resources linked next. ++ ## Common scenarios <br>
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
zone_pivot_groups: acs-azcli-js-csharp-java-python-portal-nocode
# Quickstart: How to send an email using Azure Communication Service + In this quick start, you'll learn about how to send email using our Email SDKs. ::: zone pivot="platform-azportal"
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
zone_pivot_groups: acs-azcli-js-csharp-java-python-logic-apps
> [!IMPORTANT] > SMS capabilities depend on the phone number you use and the country/region that you're operating within as determined by your Azure billing address. For more information, visit the [Subscription eligibility](../../concepts/numbers/sub-eligibility-number-capability.md) documentation. + <br/> >[!VIDEO https://www.youtube.com/embed/YEyxSZqzF4o]
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
We're combining the November and December updates into one. **Have a terrific ho
<br> <br> ## New features Get detailed information on the latest Azure Communication Services feature launches.
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/find-request-unit-charge.md
Last updated 10/14/2020
+ms.devlang: csharp
+# ms.devlang: csharp, java, golang
# Find the request unit charge for operations executed in Azure Cosmos DB for Apache Cassandra
cosmos-db Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-databricks.md
Last updated 09/24/2018
+ms.devlang: spark-scala
cosmos-db Spark Ddl Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-ddl-operations.md
Last updated 10/07/2020
+ms.devlang: spark-scala
cosmos-db Spark Delete Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-delete-operation.md
Last updated 09/24/2018
+ms.devlang: spark-scala
cosmos-db Spark Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-hdinsight.md
Last updated 09/24/2018
+ms.devlang: spark-scala
cosmos-db Spark Read Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-read-operation.md
Last updated 06/02/2020
+ms.devlang: spark-scala
cosmos-db Spark Table Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-table-copy-operations.md
Last updated 09/24/2018
+ms.devlang: spark-scala
cosmos-db Spark Upsert Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-upsert-operations.md
Last updated 09/24/2018
+ms.devlang: spark-scala
cosmos-db Troubleshoot Nohostavailable Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/troubleshoot-nohostavailable-exception.md
Last updated 12/02/2021
+ms.devlang: csharp
+# ms.devlang: csharp, java
cosmos-db Bulk Executor Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/bulk-executor-dotnet.md
Last updated 05/10/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/find-request-unit-charge.md
Last updated 10/14/2020
+ms.devlang: csharp
+# ms.devlang: csharp, java
# Find the request unit charge for operations executed in Azure Cosmos DB for Gremlin
cosmos-db How To Develop Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-develop-emulator.md
Use the [Azure Cosmos DB API for NoSQL .NET SDK](nosql/quickstart-dotnet.md) to
> ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator > }), > ConnectionMode = ConnectionMode.Gateway,
- > LimitToEndpoint = true
> }; > > using CosmosClient client = new(
cosmos-db Change Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-streams.md
Last updated 03/02/2021
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/find-request-unit-charge.md
Last updated 05/12/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript
cosmos-db Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/time-to-live.md
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript
Last updated 02/16/2022
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/find-request-unit-charge.md
Last updated 06/02/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
cosmos-db How To Manage Conflicts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-conflicts.md
Last updated 06/11/2020
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-consistency.md
Last updated 02/16/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md
Last updated 03/16/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
cosmos-db Migrate Relational Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-relational-data.md
+ms.devlang: python
+# ms.devlang: python, scala
Last updated 02/27/2023
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
Last updated 06/20/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java
zone_pivot_groups: programming-languages-set-cosmos
cosmos-db Samples Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-go.md
description: Find Go examples on GitHub for common tasks in Azure Cosmos DB, inc
+ms.devlang: golang
Last updated 10/17/2022
cosmos-db Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spark-v3.md
- Title: 'Azure Cosmos DB Apache Spark 3 OLTP Connector for API for NoSQL (Preview) release notes and resources'
-description: Learn about the Azure Cosmos DB Apache Spark 3 OLTP Connector for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.
---- Previously updated : 11/12/2021-----
-# Azure Cosmos DB Apache Spark 3 OLTP Connector for API for NoSQL: Release notes and resources
--
-**Azure Cosmos DB OLTP Spark connector** provides Apache Spark support for Azure Cosmos DB using the API for NoSQL. Azure Cosmos DB is a globally-distributed database service which allows developers to work with data using a variety of standard APIs, such as SQL, MongoDB, Cassandra, Graph, and Table.
-
-If you have any feedback or ideas on how to improve your experience create an issue in our [SDK GitHub repository](https://github.com/Azure/azure-sdk-for-java/issues/new)
-
-## Documentation links
-
-* [Getting started](https://aka.ms/azure-cosmos-spark-3-quickstart)
-* [Catalog API](https://aka.ms/azure-cosmos-spark-3-catalog-api)
-* [Configuration Parameter Reference](https://aka.ms/azure-cosmos-spark-3-config)
-* [End-to-end sample notebook "New York City Taxi data"](https://aka.ms/azure-cosmos-spark-3-sample-nyc-taxi-data)
-* [Migration from Spark 2.4 to Spark 3.*](https://aka.ms/azure-cosmos-spark-3-migration)
-
-## Version compatibility
-* [Version compatibility for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-version-compatibility)
-* [Version compatibility for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-version-compatibility)
-* [Version compatibility for Spark 3.3](https://aka.ms/azure-cosmos-spark-3-3-version-compatibility)
-* [Version compatibility for Spark 3.4](https://aka.ms/azure-cosmos-spark-3-4-version-compatibility)
-
-## Release notes
-* [Release notes for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-changelog)
-* [Release notes for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-changelog)
-* [Release notes for Spark 3.3](https://aka.ms/azure-cosmos-spark-3-3-changelog)
-* [Release notes for Spark 3.4](https://aka.ms/azure-cosmos-spark-3-4-changelog)
-
-## Download
-* [Download of Azure Cosmos DB Spark connector for Spark 3.1](https://aka.ms/azure-cosmos-spark-3-1-download)
-* [Download of Azure Cosmos DB Spark connector for Spark 3.2](https://aka.ms/azure-cosmos-spark-3-2-download)
-* [Download of Azure Cosmos DB Spark connector for Spark 3.3](https://aka.ms/azure-cosmos-spark-3-3-download)
-* [Download of Azure Cosmos DB Spark connector for Spark 3.4](https://aka.ms/azure-cosmos-spark-3-4-download)
-
-Azure Cosmos DB Spark connector is available on [Maven Central Repo](https://search.maven.org/search?q=g:com.azure.cosmos.spark).
-
-If you encounter any bug or want to suggest a feature change, [file an issue](https://github.com/Azure/azure-sdk-for-java/issues/new).
-
-## Next steps
-
-Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
-
-Learn more about [Apache Spark](https://spark.apache.org/).
cosmos-db Tutorial Spark Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-spark-connector.md
+
+ Title: 'Tutorial: Connect using Spark'
+
+description: Connect to Azure Cosmos DB for NoSQL using the Spark 3 OLTP connector. Use the connector to query data in your API for NoSQL account.
+++++ Last updated : 01/17/2024
+zone_pivot_groups: programming-languages-spark-all-minus-sql-r-csharp
+#CustomerIntent: As a data scientist, I want to connect to Azure Cosmos DB for NoSQL using Spark, so that I can perform analytics on my data in Azure Cosmos DB.
++
+# Tutorial: Connect to Azure Cosmos DB for NoSQL using Spark
++
+In this tutorial, you use the Azure Cosmos DB Spark connector to read or write data from an Azure Cosmos DB for NoSQL account. This tutorial uses Azure Databricks and a Jupyter notebook to illustrate how to integrate with the API for NoSQL from Spark. This tutorial focuses on Python and Scala even though you can use any language or interface supported by Spark.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Connect to an API for NoSQL account using Spark and a Jupyter notebook
+> - Create database and container resources
+> - Ingest data to the container
+> - Query data in the container
+> - Perform common operations on items in the container
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for NoSQL account.
+ - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
+ - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
+- An existing Azure Databricks workspace.
+
+## Connect using Spark and Jupyter
+
+Use your existing Azure Databricks workspace to create a compute cluster ready to use Apache Spark 3.4.x to connect to your Azure Cosmos DB for NoSQL account.
+
+1. Open your Azure Databricks workspace.
+
+1. In the workspace interface, create a new **cluster**. Configure the cluster with these settings, at a minimum:
+
+ | | **Value** |
+ | | |
+ | **Runtime version** | 13.3 LTS (Scala 2.12, Spark 3.4.1) |
+
+1. Use the workspace interface to search for **Maven** packages from **Maven Central** with a **Group Id** of `com.azure.cosmos.spark`. Install the package specific to Spark 3.4, with an **Artifact Id** prefixed with `azure-cosmos-spark_3-4`, to the cluster.
+
+1. Finally, create a new **notebook**.
+
+ > [!TIP]
+ > By default, the notebook will be attached to the recently created cluster.
+
+1. Within the notebook, set OLTP configuration settings for NoSQL account endpoint, database name, and container name.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Set configuration settings
+ config = {
+ "spark.cosmos.accountEndpoint": "<nosql-account-endpoint>",
+ "spark.cosmos.accountKey": "<nosql-account-key>",
+ "spark.cosmos.database": "cosmicworks",
+ "spark.cosmos.container": "products"
+ }
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Set configuration settings
+ val config = Map(
+ "spark.cosmos.accountEndpoint" -> "<nosql-account-endpoint>",
+ "spark.cosmos.accountKey" -> "<nosql-account-key>",
+ "spark.cosmos.database" -> "cosmicworks",
+ "spark.cosmos.container" -> "products"
+ )
+ ```
+
+ ::: zone-end
+
+## Create a database and container
+
+Use the Catalog API to manage account resources such as databases and containers. Then, you can use OLTP to manage data within the container resources.
+
+1. Configure the Catalog API to manage API for NoSQL resources using Spark.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Configure Catalog Api
+ spark.conf.set("spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
+ spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", config["spark.cosmos.accountEndpoint"])
+ spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", config["spark.cosmos.accountKey"])
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Configure Catalog Api
+ spark.conf.set(s"spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
+ spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", config("spark.cosmos.accountEndpoint"))
+ spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", config("spark.cosmos.accountKey"))
+ ```
+
+ ::: zone-end
+
+1. Create a new database named `cosmicworks` using `CREATE DATABASE IF NOT EXISTS`.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Create a database using the Catalog API
+ spark.sql(f"CREATE DATABASE IF NOT EXISTS cosmosCatalog.cosmicworks;")
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Create a database using the Catalog API
+ spark.sql(s"CREATE DATABASE IF NOT EXISTS cosmosCatalog.cosmicworks;")
+ ```
+
+ ::: zone-end
+
+1. Create a new container named `products` using `CREATE TABLE IF NOT EXISTS`. Ensure that you set the partition key path to `/category` and enable autoscale throughput with a maximum throughput of `1000` request units per second (RU/s).
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Create a products container using the Catalog API
+ spark.sql(("CREATE TABLE IF NOT EXISTS cosmosCatalog.cosmicworks.products USING cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/category', autoScaleMaxThroughput = '1000')"))
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Create a products container using the Catalog API
+ spark.sql(("CREATE TABLE IF NOT EXISTS cosmosCatalog.cosmicworks.products USING cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/category', autoScaleMaxThroughput = '1000')"))
+ ```
+
+ ::: zone-end
+
+1. Create another container named `employees` using a hierarchical partition key configuration with `/organization`, `/department`, and `/team` as the set of partition key paths, in that specific order. Also, set the throughput to a manual amount of `400` RU/s.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Create an employees container using the Catalog API
+ spark.sql(("CREATE TABLE IF NOT EXISTS cosmosCatalog.cosmicworks.employees USING cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/organization,/department,/team', manualThroughput = '400')"))
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Create an employees container using the Catalog API
+ spark.sql(("CREATE TABLE IF NOT EXISTS cosmosCatalog.cosmicworks.employees USING cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/organization,/department,/team', manualThroughput = '400')"))
+ ```
+
+ ::: zone-end
+
+1. **Run** the notebook cells to validate that your database and containers are created within your API for NoSQL account.
+
+## Ingest data
+
+Create a sample dataset and then use OLTP to ingest that data to the API for NoSQL container.
+
+1. Create a sample data set.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Create sample data
+ products = (
+ ("68719518391", "gear-surf-surfboards", "Yamba Surfboard", 12, 850.00, False),
+ ("68719518371", "gear-surf-surfboards", "Kiama Classic Surfboard", 25, 790.00, True)
+ )
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Create sample data
+ val products = Seq(
+ ("68719518391", "gear-surf-surfboards", "Yamba Surfboard", 12, 850.00, false),
+ ("68719518371", "gear-surf-surfboards", "Kiama Classic Surfboard", 25, 790.00, true)
+ )
+ ```
+
+ ::: zone-end
+
+1. Use `spark.createDataFrame` and the previously saved OLTP configuration to add sample data to the target container.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Ingest sample data
+ spark.createDataFrame(products) \
+ .toDF("id", "category", "name", "quantity", "price", "clearance") \
+ .write \
+ .format("cosmos.oltp") \
+ .options(**config) \
+ .mode("APPEND") \
+ .save()
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Ingest sample data
+ spark.createDataFrame(products)
+ .toDF("id", "category", "name", "quantity", "price", "clearance")
+ .write
+ .format("cosmos.oltp")
+ .options(config)
+ .mode("APPEND")
+ .save()
+ ```
+
+ ::: zone-end
+
+## Query data
+
+Load OLTP data into a data frame to perform common queries on the data. You can use various syntaxes to filter or query data.
+
+1. Use `spark.read` to load the OLTP data into a dataframe object. Use the same configuration used earlier in this tutorial. Also, set `spark.cosmos.read.inferSchema.enabled` to true to allow the Spark connector to infer the schema by sampling existing items.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Load data
+ df = spark.read.format("cosmos.oltp") \
+ .options(**config) \
+ .option("spark.cosmos.read.inferSchema.enabled", "true") \
+ .load()
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Load data
+ val df = spark.read.format("cosmos.oltp")
+ .options(config)
+ .option("spark.cosmos.read.inferSchema.enabled", "true")
+ .load()
+ ```
+
+ ::: zone-end
+
+1. Render the schema of the data loaded in the dataframe using `printSchema`.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Render schema
+ df.printSchema()
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Render schema
+ df.printSchema()
+ ```
+
+ ::: zone-end
+
+1. Render data rows where the `quantity` column is less than `20`. Use the `where` and `show` functions to perform this query.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Render filtered data
+ df.where("quantity < 20") \
+ .show()
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Render filtered data
+ df.where("quantity < 20")
+ .show()
+ ```
+
+ ::: zone-end
+
+1. Render the first data row where the `clearance` column is true. Use the `filter` function to perform this query.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Render 1 row of filtered data
+ df.filter(df.clearance == True) \
+ .show(1)
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Render 1 row of filtered data
+ df.filter($"clearance" === true)
+ .show(1)
+ ```
+
+ ::: zone-end
+
+1. Render five rows of data with no filter or truncation. Use the `show` function to customize the appearance and number of rows that are rendered.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Render five rows of unfiltered and untruncated data
+ df.show(5, False)
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Render five rows of unfiltered and untruncated data
+ df.show(5, false)
+ ```
+
+ ::: zone-end
+
+1. Query your data using this raw NoSQL query string: `SELECT * FROM cosmosCatalog.cosmicworks.products WHERE price > 800`
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Render results of raw query
+ rawQuery = "SELECT * FROM cosmosCatalog.cosmicworks.products WHERE price > 800"
+ rawDf = spark.sql(rawQuery)
+ rawDf.show()
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Render results of raw query
+ val rawQuery = s"SELECT * FROM cosmosCatalog.cosmicworks.products WHERE price > 800"
+ val rawDf = spark.sql(rawQuery)
+ rawDf.show()
+ ```
+
+ ::: zone-end
+
+## Perform common operations
+
+When working with API for NoSQL data in Spark, you can perform partial updates or work with data as raw JSON.
+
+1. To perform a partial update of an item, perform these steps:
+
+ 1. Copy the existing `config` configuration variable and modify the properties in the new copy. Specifically, configure the write strategy to `ItemPatch`, disable bulk support, set the columns and mapped operations, and finally set the default operation type to `Set`.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Copy and modify configuration
+ configPatch = dict(config)
+ configPatch["spark.cosmos.write.strategy"] = "ItemPatch"
+ configPatch["spark.cosmos.write.bulk.enabled"] = "false"
+ configPatch["spark.cosmos.write.patch.defaultOperationType"] = "Set"
+ configPatch["spark.cosmos.write.patch.columnConfigs"] = "[col(name).op(set)]"
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Copy and modify configuration
+ val configPatch = scala.collection.mutable.Map.empty ++ config
+ configPatch ++= Map(
+ "spark.cosmos.write.strategy" -> "ItemPatch",
+ "spark.cosmos.write.bulk.enabled" -> "false",
+ "spark.cosmos.write.patch.defaultOperationType" -> "Set",
+ "spark.cosmos.write.patch.columnConfigs" -> "[col(name).op(set)]"
+ )
+ ```
+
+ ::: zone-end
+
+ 1. Create variables for the item partition key and unique identifier that you intend to target as part of this patch operation.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Specify target item id and partition key
+ targetItemId = "68719518391"
+ targetItemPartitionKey = "gear-surf-surfboards"
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Specify target item id and partition key
+ val targetItemId = "68719518391"
+ val targetItemPartitionKey = "gear-surf-surfboards"
+ ```
+
+ ::: zone-end
+
+ 1. Create a set of patch objects to specify the target item and specify fields that should be modified.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Create set of patch diffs
+ patchProducts = [{ "id": f"{targetItemId}", "category": f"{targetItemPartitionKey}", "name": "Yamba New Surfboard" }]
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Create set of patch diffs
+ val patchProducts = Seq(
+ (targetItemId, targetItemPartitionKey, "Yamba New Surfboard")
+ )
+ ```
+
+ ::: zone-end
+
+ 1. Create a data frame using the set of patch objects and use `write` to perform the patch operation.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Create data frame
+ spark.createDataFrame(patchProducts) \
+ .write \
+ .format("cosmos.oltp") \
+ .options(**configPatch) \
+ .mode("APPEND") \
+ .save()
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Create data frame
+ patchProducts
+ .toDF("id", "category", "name")
+ .write
+ .format("cosmos.oltp")
+ .options(configPatch)
+ .mode("APPEND")
+ .save()
+ ```
+
+ ::: zone-end
+
+ 1. Run a query to review the results of the patch operation. The item should now be named `Yamba New Surfboard` with no other changes.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Create and run query
+ patchQuery = f"SELECT * FROM cosmosCatalog.cosmicworks.products WHERE id = '{targetItemId}' AND category = '{targetItemPartitionKey}'"
+ patchDf = spark.sql(patchQuery)
+ patchDf.show(1)
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Create and run query
+ val patchQuery = s"SELECT * FROM cosmosCatalog.cosmicworks.products WHERE id = '$targetItemId' AND category = '$targetItemPartitionKey'"
+ val patchDf = spark.sql(patchQuery)
+ patchDf.show(1)
+ ```
+
+ ::: zone-end
+
+1. To work with raw JSON data, perform these steps:
+
+ 1. Copy the existing `config` configuration variable and modify the properties in the new copy. Specifically, change the target container to `employees` and configure the `contacts` column/field to use raw JSON data.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Copy and modify configuration
+ configRawJson = dict(config)
+ configRawJson["spark.cosmos.container"] = "employees"
+ configRawJson["spark.cosmos.write.patch.columnConfigs"] = "[col(contacts).path(/contacts).op(set).rawJson]"
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Copy and modify configuration
+ val configRawJson = scala.collection.mutable.Map.empty ++ config
+ configRawJson ++= Map(
+ "spark.cosmos.container" -> "employees",
+ "spark.cosmos.write.patch.columnConfigs" -> "[col(contacts).path(/contacts).op(set).rawJson]"
+ )
+ ```
+
+ ::: zone-end
+
+ 1. Create a set of employees to ingest into the container.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Create employee data
+ employees = (
+ ("63476388581", "CosmicWorks", "Marketing", "Outside Sales", "Alain Henry", '[ { "type": "phone", "value": "425-555-0117" }, { "email": "alain@adventure-works.com" } ]'),
+ )
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Create employee data
+ val employees = Seq(
+ ("63476388581", "CosmicWorks", "Marketing", "Outside Sales", "Alain Henry", """[ { "type": "phone", "value": "425-555-0117" }, { "email": "alain@adventure-works.com" } ]""")
+ )
+ ```
+
+ ::: zone-end
+
+ 1. Create a data frame and use `write` to ingest the employee data.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Ingest data
+ spark.createDataFrame(employees) \
+ .toDF("id", "organization", "department", "team", "name", "contacts") \
+ .write \
+ .format("cosmos.oltp") \
+ .options(**configRawJson) \
+ .mode("APPEND") \
+ .save()
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Ingest data
+ spark.createDataFrame(employees)
+ .toDF("id", "organization", "department", "team", "name", "contacts")
+ .write
+ .format("cosmos.oltp")
+ .options(configRawJson)
+ .mode("APPEND")
+ .save()
+ ```
+
+ ::: zone-end
+
+ 1. Render the data from the data frame using `show`. Observe that the `contacts` column is raw JSON in the output.
+
+ ::: zone pivot="programming-language-python"
+
+ ```python
+ # Read and render data
+ rawJsonDf = spark.read.format("cosmos.oltp") \
+ .options(**configRawJson) \
+ .load()
+ rawJsonDf.show()
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="programming-language-scala"
+
+ ```scala
+ // Read and render data
+ val rawJsonDf = spark.read.format("cosmos.oltp")
+ .options(configRawJson)
+ .load()
+ rawJsonDf.show()
+ ```
+
+ ::: zone-end
+
+## Related content
+
+- [Apache Spark](https://spark.apache.org/)
+- [Azure Cosmos DB Catalog API](https://github.com/Azure/azure-sdk-for-jav)
+- [Configuration Parameter Reference](https://github.com/Azure/azure-sdk-for-jav)
+- [Sample "New York City Taxi data" notebook](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Python/NYC-Taxi-Data)
+- [Migrate from Spark 2.4 to Spark 3.*](https://github.com/Azure/azure-sdk-for-jav)
+- Version Compatibility
+ - [Version compatibility for Spark 3.1](https://github.com/Azure/azure-sdk-for-jav#version-compatibility)
+ - [Version compatibility for Spark 3.2](https://github.com/Azure/azure-sdk-for-jav#version-compatibility)
+ - [Version compatibility for Spark 3.3](https://github.com/Azure/azure-sdk-for-jav#version-compatibility)
+ - [Version compatibility for Spark 3.4](https://github.com/Azure/azure-sdk-for-jav#version-compatibility)
+- Release notes
+ - [Release notes for Spark 3.1](https://github.com/Azure/azure-sdk-for-jav)
+ - [Release notes for Spark 3.2](https://github.com/Azure/azure-sdk-for-jav)
+ - [Release notes for Spark 3.3](https://github.com/Azure/azure-sdk-for-jav)
+ - [Release notes for Spark 3.4](https://github.com/Azure/azure-sdk-for-jav)
+- Download links
+ - [Download Azure Cosmos DB Spark connector for Spark 3.1](https://github.com/Azure/azure-sdk-for-jav#download)
+ - [Download Azure Cosmos DB Spark connector for Spark 3.2](https://github.com/Azure/azure-sdk-for-jav#download)
+ - [Download Azure Cosmos DB Spark connector for Spark 3.3](https://github.com/Azure/azure-sdk-for-jav#download)
+ - [Download Azure Cosmos DB Spark connector for Spark 3.4](https://github.com/Azure/azure-sdk-for-jav#download)
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Azure Cosmos DB Spark connector on Maven Central Repository](https://central.sonatype.com/search?q=g:com.azure.cosmos.spark&smo=true)
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-java.md
Title: Use the Azure Tables client library for Java
description: Store structured data in the cloud using the Azure Tables client library for Java.
+ms.devlang: java
Last updated 12/10/2020
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/support.md
Last updated 03/07/2023
+ms.devlang: cpp
+# ms.devlang: cpp, csharp, java, javascript, php, python, ruby
cost-management-billing Cost Management Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-management-error-codes.md
See [AuthorizationFailed](#AuthorizationFailed).
**More information**
-For more information about enterprise agreements, see [Troubleshoot enterprise cost views](../manage/enterprise-mgmt-grp-troubleshoot-cost-view.md).
+For more information about enterprise agreements, see [Troubleshoot enterprise cost views](../troubleshoot-billing/enterprise-mgmt-grp-troubleshoot-cost-view.md).
For more information about Microsoft Customer Agreements, see [Understand Microsoft Customer Agreement administrative roles in Azure](../manage/understand-mca-roles.md).
The message indicates that the Enterprise Agreement administrator hasn't enabled
**More information**
-For more information, see [Troubleshoot Azure enterprise cost views](../manage/enterprise-mgmt-grp-troubleshoot-cost-view.md).
+For more information, see [Troubleshoot Azure enterprise cost views](../troubleshoot-billing/enterprise-mgmt-grp-troubleshoot-cost-view.md).
## AuthorizationFailed
The message indicates that the Enterprise Agreement administrator hasn't enabled
**More information**
-For more information about troubleshooting disabled costs, see [Troubleshoot Azure enterprise cost views](../manage/enterprise-mgmt-grp-troubleshoot-cost-view.md).
+For more information about troubleshooting disabled costs, see [Troubleshoot Azure enterprise cost views](../troubleshoot-billing/enterprise-mgmt-grp-troubleshoot-cost-view.md).
## DisallowedOperation
The message indicates that your partner hasn't published pricing for the Enterpr
**More information**
-For more information, see [Troubleshoot Azure enterprise cost views](../manage/enterprise-mgmt-grp-troubleshoot-cost-view.md).
+For more information, see [Troubleshoot Azure enterprise cost views](../troubleshoot-billing/enterprise-mgmt-grp-troubleshoot-cost-view.md).
## InvalidAuthenticationTokenTenant
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
When you create a new subscription, you can specify a new credit card. When you
## Manage pay-as-you-go credit cards
-The following sections apply to customers who have a Microsoft Online Services Program billing account. Learn how to [check your billing account type](#check-the-type-of-your-account). If your billing account type is Microsoft Online Services Program, payment methods are associated with individual Azure subscriptions. If you get an error after you add the credit card, see [Credit card declined at Azure sign-up](./troubleshoot-declined-card.md).
+The following sections apply to customers who have a Microsoft Online Services Program billing account. Learn how to [check your billing account type](#check-the-type-of-your-account). If your billing account type is Microsoft Online Services Program, payment methods are associated with individual Azure subscriptions. If you get an error after you add the credit card, see [Credit card declined at Azure sign-up](../troubleshoot-billing/troubleshoot-declined-card.md).
### Change credit card for all subscriptions by adding a new credit card
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Enrollments where all associated accounts and services have been transferred to
## Next steps -- If you need to create an Azure support request for your EA enrollment, see [How to create an Azure support request for an Enterprise Agreement issue](how-to-create-azure-support-request-ea.md).
+- If you need to create an Azure support request for your EA enrollment, see [How to create an Azure support request for an Enterprise Agreement issue](../troubleshoot-billing/how-to-create-azure-support-request-ea.md).
- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions about EA subscription ownership.
cost-management-billing Billing Troubleshoot Azure Payment Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/billing-troubleshoot-azure-payment-issues.md
+
+ Title: Troubleshoot Azure payment issues
+description: Resolving an issue when updating the payment information for an account in the Azure portal.
++
+tags: billing
+++ Last updated : 04/26/2023+++
+# Troubleshoot Azure payment issues
+
+You may experience an issue or error when you try to update the payment information for an account in the Microsoft Azure portal.
+
+To resolve your issue, select the subject below that most closely resembles your error.
+
+## My credit card was declined when I tried to sign up for Azure
+
+To troubleshoot issues regarding a declined card, see [Troubleshoot a declined card at Azure sign-up](troubleshoot-declined-card.md).
+
+## Unable to see subscriptions under my account to update the payment method
+
+You might be using an email ID that differs from the one that is used for the subscriptions.
+
+To troubleshoot this issue, see [No subscriptions found sign-in error for Azure portal](../troubleshoot-subscription/no-subscriptions-found.md).
+
+## Unable to use a virtual or prepaid credit card as a payment method
+
+Virtual or prepaid credit cards aren't accepted as payment for Azure subscriptions.
+
+For more information, see [Troubleshoot a declined card at Azure sign-up](troubleshoot-declined-card.md).
+
+## Unable to remove a credit card from a saved billing payment method
+
+By design, you can't remove a credit card from the active subscription.
+
+If an existing card has to be deleted, one of the following actions is required:
+
+- A new card must be added to the subscription so that the old payment instrument can be successfully deleted.
+- You can cancel the subscription to delete the subscription permanently and then remove the card.
+
+## Unable to delete an old payment method after adding a new payment method
+
+The new payment instrument might not be associated with the subscription. To help associate the payment instrument with the subscription, see [Add, update, or remove a credit card for Azure](../manage/change-credit-card.md).
+
+## Unable to delete a payment method because of `Cannot delete payment method` error
+
+The error occurs because of an outstanding balance. Clear any outstanding balances before you delete the payment method.
+
+## Unable to make payment for a subscription
+
+If you receive the error message: `Payment is past due. There is a problem with your payment method` or `We're sorry, the information cannot be saved. Close the browser and try again.`, then there's a pending payment on the card because the card was denied by your financial institution.
+
+Verify that the credit card has a sufficient balance to make a payment. If it doesn't, use another card to make the payment, or reach out to your financial institution to resolve the issue.
+
+Check with your bank for the following issues:
+
+- International transactions aren't enabled.
+- The card has a credit limit, and the balance must be settled.
+- A recurring payment is enabled on the card.
+
+## Unable to change payment method because of browser issues (browser doesn't respond, doesn't load, and so on)
+
+Sign out of all active Azure sessions, and then follow the steps in the [Browse InPrivate in Microsoft Edge article](https://support.microsoft.com/help/4026200/microsoft-edge-browse-inprivate) to start an InPrivate session within Microsoft Edge or Internet Explorer.
+
+In the private session, follow the steps at [How to change a credit card](../manage/change-credit-card.md) to update or change the credit card information.
+
+You can also try to do the following actions:
+
+- Refresh your browser
+- Use another browser
+- Delete cached cookies
+
+## My subscription is still disabled after updating the payment method
+
+The issue occurs because of an outstanding balance. Clear any outstanding balances to reenable the subscription.
+
+## Unable to change payment method because of an XML error response page
+
+You receive the message if you're using [the Azure portal](https://portal.azure.com/) to add a new credit card.
+
+To add card details, sign in to the Azure Account portal by using the account administrator's email address.
+
+## Why does my invoice appear as unpaid when I've paid it?
+
+This can happen for one of the following reasons:
+
+- The invoice number on the remittance wasn't specified.
+- You made one payment for multiple invoices.
+
+Best practices:
+
+- Submit one wire transfer payment per invoice.
+- Specify the invoice number on the remittance.
+- Send proof of payment, identification, and remittance details.
+
+## Other help resources
+
+Other troubleshooting articles for Azure Billing and Subscriptions:
+
+- [Declined card](troubleshoot-declined-card.md)
+- [Subscription sign-in issues](../troubleshoot-subscription/troubleshoot-sign-in-issue.md)
+- [No subscriptions found](../troubleshoot-subscription/no-subscriptions-found.md)
+- [Enterprise cost view disabled](enterprise-mgmt-grp-troubleshoot-cost-view.md)
+
+## Contact us for help
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+- [Azure Billing documentation](../index.yml)
cost-management-billing Ea Portal Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/ea-portal-troubleshoot.md
+
+ Title: Troubleshoot Azure EA portal access
+description: This article describes some common issues that can occur with an Azure Enterprise Agreement (EA) in the Azure EA portal.
++ Last updated : 12/16/2022++++++
+# Troubleshoot Azure EA portal access
+
+This article describes some common issues that can occur with an Azure Enterprise Agreement (EA). The Azure EA portal is used to manage enterprise agreement users and costs. You might come across these issues when you're configuring or updating Azure EA portal access.
+
+> [!NOTE]
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](../manage/ea-direct-portal-get-started.md).
+>
+> As of February 20, 2023, indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+>
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
+
+## Issues adding a user to an enrollment
+
+There are different types of authentication levels for enterprise enrollments. When authentication levels are applied incorrectly, you might have issues when you try to sign in to the Azure EA portal.
+
+You use the Azure EA portal to grant access to users with different authentication levels. An enterprise administrator can update the authentication level to meet security requirements of their organization.
+
+### Authentication level types
+
+- Microsoft Account Only - For organizations that want to use, create, and manage users through Microsoft accounts.
+- Work or School Account - For organizations that have set up Microsoft Entra ID with federation to the cloud and all accounts are on a single tenant.
+- Work or School Account Cross Tenant - For organizations that have set up Microsoft Entra ID with federation to the cloud and will have accounts in multiple tenants.
+- Mixed Account - Allows you to add users with Microsoft Account and/or with a Work or School Account.
+
+The first work or school account added to the enrollment determines the _default_ domain. To add a work or school account with another tenant, you must change the authentication level under the enrollment to cross-tenant authentication.
+
+To update the Authentication Level:
+
+1. Sign in to the Azure [EA portal](https://ea.azure.com/) as an Enterprise Administrator.
+2. Select **Manage** on the left navigation panel.
+3. Select the **Enrollment** tab.
+4. Under **Enrollment Details**, select **Auth Level**.
+5. Select the pencil symbol.
+6. Select **Save**.
+
+![Example showing authentication levels](./media/ea-portal-troubleshoot/create-ea-authentication-level-types.png)
+
+Microsoft accounts must have an associated ID created at [https://signup.live.com](https://signup.live.com/).
+
+Work or school accounts are available to organizations that have set up Microsoft Entra ID with federation and where all accounts are on a single tenant. Users can be added with work or school federated user authentication if the company's internal Microsoft Entra ID is federated.
+
+If your organization doesn't use Microsoft Entra ID federation, you can't use your work or school email address. Instead, register or create a new email address and register it as a Microsoft account.
+
+## Unable to access the Azure EA portal
+
+If you get an error message when you try to sign in to the Azure EA portal, use the following troubleshooting steps:
+
+- Ensure that you're using the correct Azure EA portal URL, which is https://ea.azure.com.
+- Determine if your access to the Azure EA portal was added as a work or school account or as a Microsoft account.
+ - If you're using your work account, enter your work email and work password. Your work password is provided by your organization. You can check with your IT department about how to reset the password if you have issues with it.
+ - If you're using a Microsoft account, enter your Microsoft account email address and password. If you've forgotten your Microsoft account password, you can reset it at [https://account.live.com/password/reset](https://account.live.com/password/reset).
+- Use an in-private or incognito browser session to sign in so that no cookies or cached information from previous or existing sessions are kept. Clear your browser's cache and use an in-private or incognito window to open https://ea.azure.com.
+- If you get an _Invalid User_ error when using a Microsoft account, it might be because you have multiple Microsoft accounts. The one that you're trying to sign in with isn't the primary email address.
+Or, the _Invalid User_ error might occur because the wrong account type was used when the user was added to the enrollment. For example, a work or school account was used instead of a Microsoft account. In this case, have another EA admin add the correct account, or contact [support](https://support.microsoft.com/supportforbusiness/productselection?sapId=cf791efa-485b-95a3-6fad-3daf9cd4027c).
+ - If you need to check the primary alias, go to [https://account.live.com](https://account.live.com). Then, select **Your Info** and then select **Manage how to sign in to Microsoft**. Follow the prompts to verify an alternate email address and obtain a code to access sensitive information. Enter the security code. Select **Set it up later** if you don't want to set up two-factor authentication.
+ - You see the **Manage how to sign in to Microsoft** page where you can view your account aliases. Check that the primary alias is the one that you're using to sign in to the Azure EA portal. If it isn't, you can make it your primary alias. Or, you can use the primary alias for Azure EA portal instead.
+
+## Next steps
+
+- Azure EA portal administrators should read [Azure EA portal administration](../manage/ea-portal-administration.md) to learn about common administrative tasks.
+- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions and answers about common issues for Azure EA Activation.
cost-management-billing Enterprise Mgmt Grp Troubleshoot Cost View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/enterprise-mgmt-grp-troubleshoot-cost-view.md
+
+ Title: Troubleshoot Azure enterprise cost views
+description: Learn how to resolve any issues you might have with organizational cost views within the Azure portal.
+++++ Last updated : 12/16/2022++++
+# Troubleshoot enterprise cost views
+
+Within enterprise enrollments, there are several settings that could cause users within the enrollment not to see costs. These settings are managed by the enrollment administrator or, if the enrollment isn't bought directly through Microsoft, by the partner. This article helps you understand what the settings are and how they impact the enrollment. These settings are independent of the Azure roles.
+
+> [!NOTE]
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](../manage/ea-direct-portal-get-started.md).
+>
+> As of February 20, 2023, indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+>
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
+
+## Enable access to costs
+
+Are you seeing an *Unauthorized* message, or *"Cost views are disabled in your enrollment"*, when looking for cost information?
+![Screenshot that shows "unauthorized" in Current Cost field for subscription.](./media/enterprise-mgmt-grp-troubleshoot-cost-view/unauthorized.png)
+
+It might be for one of the following reasons:
+
+1. You've bought Azure through an enterprise partner, and the partner hasn't released pricing yet. Contact your partner to update the pricing setting within the [Enterprise portal](https://ea.azure.com).
+2. If you're an EA Direct customer, there are a couple of possibilities:
+ * You're an Account Owner and your Enrollment Administrator disabled the **AO view charges** setting.
+ * You're a Department Administrator and your Enrollment Administrator disabled the **DA view charges** setting.
+ * Contact your Enrollment Administrator to get access. The Enrollment Admin can now update the settings in the [Azure portal](https://portal.azure.com/). Navigate to the **Policies** menu to change settings.
+ * The Enrollment Admin can update the settings in the [Enterprise portal](https://ea.azure.com/manage/enrollment).
+
+ ![Screenshot that shows the Enterprise Portal Settings for view charges.](./media/enterprise-mgmt-grp-troubleshoot-cost-view/ea-portal-settings.png)
+
+## Asset is unavailable
+
+If you get an error message stating **This asset is unavailable** when trying to access a subscription or management group, you don't have the correct role to view the item.
+
+![Screenshot that shows "asset is unavailable" message.](./media/enterprise-mgmt-grp-troubleshoot-cost-view/asset-not-found.png)
+
+Ask your Azure subscription or management group administrator for access. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+## Next steps
+- If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing How To Create Azure Support Request Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/how-to-create-azure-support-request-ea.md
+
+ Title: How to create an Azure support request for an Enterprise Agreement issue
+description: Enterprise Agreement customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests.
+ Last updated : 04/05/2023
+# Create an Azure support request for an Enterprise Agreement issue
+
+Azure enables you to create and manage support requests, also known as support tickets, for Enterprise Agreements. You can create and manage requests in the [Azure portal](https://portal.azure.com), which is covered in this article. You can also create and manage requests programmatically, using the [Azure support ticket REST API](/rest/api/support), or by using [Azure CLI](/cli/azure/azure-cli-support-request).
+
+> [!NOTE]
+> The Azure portal URL is specific to the Azure cloud where your organization is deployed.
+>
+>- Azure portal for commercial use is: [https://portal.azure.com](https://portal.azure.com)
+>- Azure portal for Germany is: `https://portal.microsoftazure.de`
+>- Azure portal for the United States government is: [https://portal.azure.us](https://portal.azure.us)
+
+Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. You need a support plan for technical support. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans).
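+
+For example, here's a minimal Python sketch that lists your existing support tickets through the [Azure support ticket REST API](/rest/api/support) mentioned above. It assumes the `azure-identity` and `requests` packages and uses a placeholder subscription ID; verify the endpoint and `api-version` against the REST reference before relying on it.
+
+```python
+# A minimal sketch: list support tickets with the Azure support ticket REST API.
+# Assumes you're signed in (for example, with `az login`) so that
+# DefaultAzureCredential can find a credential; the subscription ID is a placeholder.
+import requests
+from azure.identity import DefaultAzureCredential
+
+subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+
+url = (
+    f"https://management.azure.com/subscriptions/{subscription_id}"
+    "/providers/Microsoft.Support/supportTickets"
+)
+response = requests.get(
+    url,
+    headers={"Authorization": f"Bearer {token}"},
+    params={"api-version": "2020-04-01"},
+)
+response.raise_for_status()
+for ticket in response.json().get("value", []):
+    properties = ticket.get("properties", {})
+    print(properties.get("title"), "-", properties.get("status"))
+```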
+
+## Getting started
+
+You can get to **Help + support** in the Azure portal. It's available from the Azure portal menu, the global header, or the resource menu for a service. Before you can file a support request, you must have appropriate permissions.
+
+### Azure role-based access control
+
+To create a support request for an Enterprise Agreement, you must be an Enterprise Administrator or Partner Administrator associated with an enterprise enrollment.
+
+### Go to Help + support from the global header
+
+To start a support request from anywhere in the Azure portal:
+
+1. Select the question mark symbol in the global header, then select **Help + support**.
+
+ :::image type="content" source="media/how-to-create-azure-support-request-ea/help-support-new-lower.png" alt-text="Screenshot of the Help menu in the Azure portal.":::
+
+1. Select **Create a support request**. Follow the prompts to provide information about your problem. We'll suggest some possible solutions, gather details about the issue, and help you submit and track the support request.
+
+ :::image type="content" source="media/how-to-create-azure-support-request-ea/new-support-request-2-lower.png" alt-text="Screenshot of the Help + support page with Create a support request link.":::
+
+### Go to Help + support from a resource menu
+
+To start a support request:
+
+1. From the resource menu, in the **Support + troubleshooting** section, select **New Support Request**.
+
+ :::image type="content" source="media/how-to-create-azure-support-request-ea/in-context-2-lower.png" alt-text="Screenshot of the New Support Request option in the resource pane.":::
+
+1. Follow the prompts to provide us with information about the problem you're having. When you start the support request process from a resource, some options are pre-selected for you.
+
+## Create a support request
+
+We'll walk you through some steps to gather information about your problem and help you solve it. Each step is described in the following sections.
+
+### Problem description
+
+1. Type a summary of your issue and then select **Issue type**.
+1. In the **Issue type** list, select **Enrollment administration** for EA portal-related issues.
+ :::image type="content" source="./media/how-to-create-azure-support-request-ea/select-issue-type-enrollment-administration.png" alt-text="Screenshot showing Select Enrollment administration." lightbox="./media/how-to-create-azure-support-request-ea/select-issue-type-enrollment-administration.png" :::
+1. For **Enrollment number**, select the enrollment number.
+ :::image type="content" source="./media/how-to-create-azure-support-request-ea/select-enrollment.png" alt-text="Screenshot showing Select Enrollment number." :::
+1. For **Problem type**, select the issue category that best describes the type of problem that you have.
+ :::image type="content" source="./media/how-to-create-azure-support-request-ea/select-problem-type.png" alt-text="Screenshot showing Select a problem type." :::
+1. For **Problem subtype**, select a problem subcategory.
+
+After you've provided all of these details, select **Next: Solutions**.
+
+### Recommended solution
+
+Based on the information you provided, we'll show you recommended solutions you can use to try to resolve the problem. In some cases, we may even run a quick diagnostic. Solutions are written by Azure engineers and will solve most common problems.
+
+If you're still unable to resolve the issue, continue creating your support request by selecting **Next: Details**.
+
+### Other details
+
+Next, we collect more details about the problem. Providing thorough and detailed information in this step helps us route your support request to the right engineer.
+
+1. On the Details tab, complete the **Problem details** section so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can upload a file, such as a log file or output from diagnostics. For more information on file uploads, see [File upload guidelines](../../azure-portal/supportability/how-to-manage-azure-support-request.md#file-upload-guidelines).
+
+1. In the **Share diagnostic information** section, select **Yes** or **No**. Selecting **Yes** allows Azure support to gather [diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources. If you prefer not to share this information, select **No**. In some cases, there will be more options to choose from.
+
+1. In the **Support method** section, select the severity of the issue. The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans).
+
+1. Provide your preferred contact method, your availability, and your preferred support language.
+
+1. Next, complete the **Contact info** section so we know how to contact you.
+ :::image type="content" source="./media/how-to-create-azure-support-request-ea/details-tab.png" alt-text="Screenshot showing the Details tab." lightbox="./media/how-to-create-azure-support-request-ea/details-tab.png" :::
+
+Select **Next: Review + create** when you've completed all of the necessary information.
+
+### Review + create
+
+Before you create your request, review all of the details that you'll send to support. You can select **Previous** to return to any tab if you need to make changes. When you're satisfied the support request is complete, select **Create**.
+
+A support engineer will contact you using the method you indicated. For information about initial response times, see [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/).
+
+## Can't create request with Microsoft Account
+
+Microsoft accounts (MSAs) are created for services including Outlook, Windows Live, and Hotmail. If you have a Microsoft account and aren't able to create an Azure support ticket, use the following steps to file a support case.
+
+To create an Azure support ticket, an *organizational account* must have the EA administrator role or Partner administrator role.
+
+If you have an MSA, have an administrator create an organizational account for you. An enterprise administrator or partner administrator must then add your organizational account as an enterprise administrator or partner administrator. Then you can use your organizational account to file a support request.
+
+- To add an Enterprise Administrator, see [Create another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator).
+- To add a Partner Administrator, see [Manage partner administrators](../manage/ea-partner-portal-administration.md#manage-partner-administrators).
+
+## Next steps
+
+Follow these links to learn more:
+
+* [How to manage an Azure support request](../../azure-portal/supportability/how-to-manage-azure-support-request.md)
+* [Azure support ticket REST API](/rest/api/support)
+* Engage with us on [Twitter](https://twitter.com/azuresupport)
+* Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
+* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
cost-management-billing Troubleshoot Account Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-account-not-found.md
+
+ Title: Troubleshoot viewing your billing account in the Azure portal
+description: This article helps you troubleshoot problems when trying to view your billing account in the Azure portal.
+
+tags: billing
+ Last updated : 04/05/2023
+# Troubleshoot viewing your billing account in the Azure portal
+
+A billing account is created when you sign up to use Azure. You use your billing account to manage your invoices and payments, and to track costs. You might have access to multiple billing accounts. For example, you might have signed up for Azure for personal use. You could also have access to Azure through your organization's Enterprise Agreement or Microsoft Customer Agreement. For each of these scenarios, you would have a separate billing account. This article helps you troubleshoot problems when trying to view your billing account in the Azure portal.
+
+You can view your billing accounts on the [Cost Management + Billing](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade) page.
+
+To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](../manage/view-all-accounts.md).
+
+If you're unable to see your billing account in the Azure portal, try the following options:
+
+## Sign in to a different tenant
+
+Your billing account is associated with a single Microsoft Entra tenant. You won't see your billing account on the Cost Management + Billing page if you're signed in to an incorrect tenant. Use the following steps to switch to another tenant in the Azure portal and view your billing accounts in that tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select your profile (email address) at the top right of the page.
+1. Select **Switch directory**.
+ ![Screenshot that shows selecting switch directory in the portal](./media/troubleshoot-account-not-found/select-switch-directory.png)
+1. Select a directory under the **All directories** section.
+ ![Screenshot that shows selecting a directory in the portal](./media/troubleshoot-account-not-found/select-directory.png)
+
+## Sign in with a different email address
+
+Some users have multiple email addresses to sign in to the [Azure portal](https://portal.azure.com). Not all email addresses have access to a billing account. If you sign in with an email address that has permissions to manage resources but not to view a billing account, you won't see the billing account on the [Cost Management + Billing](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade) page in the Azure portal.
+
+To access your billing account, sign in to the Azure portal with an email address that has permission to the billing account.
+
+## Sign in with a different identity
+
+Some users have two identities with the same email address: a work or school account and a personal account. Typically, only one of these identities has permission to view a billing account. When you sign in with the identity that doesn't have permission, you won't see the billing account on the [Cost Management + Billing](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade) page. Use the following steps to switch your identity:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) in an InPrivate/Incognito window.
+1. If your email address has two identities, you'll see an option to select a personal account or a work or school account. Select one of the accounts.
+1. If you can't see the billing account in the Cost Management + Billing page in the Azure portal, repeat steps 1 and 2 and select the other identity.
+
+## Contact us for help
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+Read the following billing and subscription articles to help troubleshoot problems.
+
+- [Declined card](./troubleshoot-declined-card.md)
+- [Subscription sign in issues](../troubleshoot-subscription/troubleshoot-sign-in-issue.md)
+- [No subscriptions found](../troubleshoot-subscription/no-subscriptions-found.md)
+- [Enterprise cost view disabled](./enterprise-mgmt-grp-troubleshoot-cost-view.md)
cost-management-billing Troubleshoot Cant Find Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-cant-find-invoice.md
+
+ Title: Troubleshoot can't view invoice in the Azure portal
+description: Resolving an issue when trying to view your invoice in the Azure portal.
+
+tags: billing
+ Last updated : 04/05/2023
+# Troubleshoot issues while trying to view invoice in the Azure portal
+
+You may experience issues when you try to view your invoice in the Azure portal. This short guide discusses some common issues.
+
+## Common issues and solutions
+
+#### <a name="subnotfound"></a> You see the message "We can't display the invoices for your subscription. This typically happens when you sign in with an email, which doesn't have access to view invoices. Check you've signed in with the correct email address. If you are still seeing the error, see Why you might not see an invoice."
+
+This happens when the identity that you used to sign in does not have access to the subscription.
+
+To resolve this issue, try one of the following options:
+
+**Verify that you're signed in with the correct email address:**
+
+Only the email address that has the account administrator role for the subscription can view its invoice. Verify that you've signed in with the correct email address. The email address is displayed in the email that you received when your invoice was generated.
+
+ ![Screenshot that shows invoice email](./media/troubleshoot-cant-find-invoice/invoice-email.png)
+
+**Verify that you're signed in with the correct account:**
+
+Some customers have two accounts with the same email address: a work or school account and a personal account. Typically, only one of their accounts has permission to view invoices. If you sign in with the account that doesn't have permission, you won't see the invoice. To identify whether you have multiple accounts and to use a different account, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) in an InPrivate/Incognito window.
+1. If you have multiple accounts with the same email, then you'll be prompted to select either **Work or school account** or **Personal account**. Select one of the accounts then follow the [instructions here to view your invoice](../understand/download-azure-invoice.md#download-your-mosp-azure-subscription-invoice).
+
+ ![Screenshot that shows account selection](./media/troubleshoot-cant-find-invoice/two-accounts.png)
+
+1. If you still can't view the invoice in the Azure portal, try the other account.
+
+**Verify that you're signed in to the correct Microsoft Entra tenant:**
+
+Your billing account and subscription are associated with a Microsoft Entra tenant. If you're signed in to an incorrect tenant, you won't see the invoice for your subscription. Try the following steps to switch tenants in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select your email address from the top-right of the page.
+1. Select **Switch directory**.
+
+ ![Screenshot that shows selecting switch directory](./media/troubleshoot-cant-find-invoice/select-switch-tenant.png)
+
+1. Select a tenant from the **All Directories** section. If you don't see the **All Directories** section, you don't have access to multiple tenants.
+
+ ![Screenshot that shows selecting another directory](./media/troubleshoot-cant-find-invoice/select-another-tenant.png)
+
+#### <a name="cantsearchinvoice"></a>You couldn't find the invoice that you see on your credit card statement
+
+You find a charge named **Microsoft Gxxxxxxxxx** on your credit card statement. You can find all other invoices in the portal, but not Gxxxxxxxxx. This happens when the invoice belongs to a different subscription or billing profile. Follow these steps to view the invoice.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for the invoice number in the Azure portal search bar.
+1. Select **View your invoice**.
+
+ ![Screenshot that shows searching for invoice](./media/troubleshoot-cant-find-invoice/search-invoice.png)
+
+## Contact us for help
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+- [View and download your Azure invoice](../understand/download-azure-invoice.md)
+- [View and download your Azure usage and charges](../understand/download-azure-daily-usage.md)
+- [No subscriptions found sign in error for Azure portal](../troubleshoot-subscription/no-subscriptions-found.md)
cost-management-billing Troubleshoot Csp Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-csp-billing-issues-usage-file-pivot-tables.md
+
+ Title: Troubleshoot Azure CSP billing issues with usage file pivot tables
+description: This article helps you troubleshoot Azure Cloud Solution Provider (CSP) billing issues using pivot tables created from your CSV usage files.
+
+tags: billing
+ Last updated : 04/05/2023
+# Troubleshoot CSP billing issues with usage file pivot tables
+
+This article helps you troubleshoot Cloud Solution Provider (CSP) billing issues using pivot tables in your Partner Center reconciliation (usage) files. Azure usage files contain all your Azure usage and consumption information. The information in the file can help you:
+
+- Understand how Azure reservations are used and applied
+- Reconcile information in Cost Management with your billed invoice
+- Troubleshoot a cost spike
+- Calculate a refund amount for a service level agreement
+
+By using the information from your usage files, you can get a better understanding of usage issues and diagnose them. Usage files are generated in comma delimited (CSV) format. Because the usage files might be large CSV files, they're easier to manipulate and view as pivot tables in a spreadsheet application like Excel. Examples in this article use Excel, but you can use any spreadsheet application that you want.
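+
+If you'd rather script the analysis than use a spreadsheet, the pivots in this article can be reproduced with a few lines of Python. The following sketch assumes the `pandas` package and the reconciliation file column names used in this article (**Service Name**, **Resource**, **Post-Tax Total**); adjust the names and file path to match your own file.
+
+```python
+# A minimal sketch: reproduce the costs-by-resource pivot from a
+# reconciliation CSV with pandas. Column names are assumptions based on
+# this article; adjust them to match your file.
+import pandas as pd
+
+df = pd.read_csv("reconciliation.csv")  # hypothetical file name
+
+# Rows = Service Name > Resource, values = sum of Post-Tax Total.
+pivot = df.pivot_table(
+    index=["Service Name", "Resource"],
+    values="Post-Tax Total",
+    aggfunc="sum",
+)
+
+# Sort to surface the most expensive services and resources first.
+print(pivot.sort_values("Post-Tax Total", ascending=False).head(20))
+```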
+
+Only Billing admins and Global admins have access to download reconciliation files. For more information, see [Learn how to read the line items in your Partner Center reconciliation files](/partner-center/use-the-reconciliation-files).
+
+## Get the data and format it
+
+Because Azure usage files are in CSV format, you need to prepare the data for use in Excel. Use the following steps to format the data as a table.
+
+1. Download the usage file using the instructions at [Find your bill](/partner-center/read-your-bill#find-your-bill).
+1. Open the file in Excel.
+1. The unformatted data resembles the following example.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/raw-csv-data.png" alt-text="Example showing unformatted data in Excel" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/raw-csv-data.png" :::
+1. Select the first field in the table, **PartnerID**.
+1. Press Ctrl + Shift + Down arrow and then Ctrl + Shift + Right arrow to select all the information in the table.
+1. In the top menu, select **Insert** > **Table**. In the Create table box, select **My table has headers** and then select **OK**.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/create-table-dialog.png" alt-text="Example showing the Create Table dialog" :::
+1. In top menu, select **Insert** > **Pivot Table** and then select **OK**. The action creates a new sheet in the file and takes you to the pivot table area on the right side of the sheet.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/pivot-table-fields.png" alt-text="Example showing the PivotTable fields area" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/pivot-table-fields.png" :::
+
+The PivotTable Fields area is a drag-and-drop area. Continue to the next section to create the pivot table.
+
+## Create pivot table to view Azure costs by resources
+
+In this section, you create a pivot table where you can troubleshoot overall general Azure usage. The example table can help you investigate which service consumes the most resources. Or you can view the resources that incur the most cost and see how a service is getting charged.
+
+1. In the PivotTable Fields area, drag **Service Name** and **Resource** to the **Rows** area. Put **Resource** below **Service Name**.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/rows-section.png" alt-text="Example showing Service Name and Resource in Rows" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/rows-section.png" :::
+1. Next, put **Post-Tax Total** in the **Values** area. You can also use the **Consumed Quantity** column instead to get information in consumption units, such as GB and hours, or transaction counts, rather than cost in a currency like USD, EUR, or INR.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/add-pivot-table-fields.png" alt-text="Example showing columns added to pivot table fields" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/add-pivot-table-fields.png" :::
+1. Now you have a dashboard for generalized consumption investigation. You can filter for a specific service using the filtering options in the pivot table.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-row-label.png" alt-text="Example showing pivot table filter option for row label" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-row-label.png" :::
+ To filter a second level in a pivot table, for example a resource, select a second-level item in the table.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-select-field.png" alt-text="Example showing filter options for Select field" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-select-field.png" :::
+1. For additional filters, you can add **SubscriptionID** and **Customer Company Name** to the **Filters** area and select the desired scope.
+
+## Create a pivot table to view Azure usage by date
+
+In this section, you create a pivot table where you can troubleshoot overall general Azure usage by Consumed Quantity and date. It's useful to identify billing spikes by date and service. Or you can view the resources that incur the most cost and see how a service is getting charged.
+
+Your reconciliation file has two tables. One is at the top (the main table), and another table is at the bottom of the document. The second table has much of the same information; however, it doesn't include pricing or cost details. It does have the usage date and consumed quantity.
+
+1. Use the same steps from the [Get the data and format it](#get-the-data-and-format-it) section to create an Excel table with the information at the bottom of the reconciliation file.
+1. When the table is ready and you have a pivot table sheet, use the same steps from the [Create pivot table to view Azure costs by resources](#create-pivot-table-to-view-azure-costs-by-resources) section to prepare the dashboard. Instead of using the Post-Tax total, put **Consumed quantity** in the **Values** area.
+1. Add **Usage Date** to the columns section. The pivot table should look like the following example.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/final-pivot-table-fields.png" alt-text="Example showing final pivot table fields" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/final-pivot-table-fields.png" :::
+1. You now have a dashboard that shows the usage per date. You can extend each month by selecting the **+** symbol.
+
+The dashboard shows the consumed quantity in units such as GB, Hours, and Transfers.
+
+To view the price per day, you can add **Resource GUID** to the **Rows** area. In the upper table, add the unit price (**ListPrice**) for the resource. Multiply **ListPrice** by the **Consumed quantity** to calculate your pre-tax charges. The amounts should match.
+
+Some resources (services) have scaled pricing by consumed quantity. For example, some resources have a higher price for the first 100 GB consumed and a lower price for the GB used afterward. Keep scaled pricing in mind when you calculate costs manually.
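+
+As a worked example of how scaled pricing changes a manual calculation, the following sketch uses made-up tier sizes and prices; it isn't an actual Azure price list.
+
+```python
+# A worked example with made-up numbers: with scaled (tiered) pricing,
+# the first units are billed at a different rate than later units.
+def tiered_cost(quantity_gb, tiers):
+    """tiers is a list of (tier_size_gb, price_per_gb); use None for the last tier size."""
+    cost, remaining = 0.0, quantity_gb
+    for size, price in tiers:
+        used = remaining if size is None else min(remaining, size)
+        cost += used * price
+        remaining -= used
+        if remaining <= 0:
+            break
+    return cost
+
+# Hypothetical tiers: first 100 GB at $0.10/GB, every GB after that at $0.05/GB.
+# 250 GB costs 100 * 0.10 + 150 * 0.05 = 17.50, not 250 * 0.10 = 25.00.
+print(tiered_cost(250, [(100, 0.10), (None, 0.05)]))
+```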
+
+## Create pivot table to view cost for a specific resource
+
+A single resource can incur several charges for different services. For example, a virtual machine can incur Compute charges, OS licensing, Bandwidth (Data transfers), RI usage, and storage for snapshots. Whenever you want to review the overall usage for specific resources, the following steps guide you through creating a dashboard to view overall usage with your usage files.
+
+Reconciliation files don't contain resource-specific details. So, you use the aggregated usage file. Contact [Azure Billing support](https://go.microsoft.com/fwlink/?linkid=2083458) to have them provide you with the aggregated usage file for your subscription. Aggregated files are generated at the subscription level. The unformatted data resembles the following example.
+
+The file contains the following columns.
+
+- **UsageStart** and **UsageEnd** - Date for each line item (each unit of usage). For example, each day.
+- **MeteredResourceID** - In Azure, it corresponds to the meter ID.
+- **Properties** - Contains the Instance ID (resource name) with other details such as location.
+- **Quantity** - Consumed quantity in the reconciliation file.
+
+1. Select the first field in the table, **PartnerID**.
+1. Press Ctrl + Shift + Down arrow and then Ctrl + Shift + Right arrow to select all the information in the table.
+1. In the top menu, select **Insert** > **Table**. In the Create table box, select **My table has headers** and then select **OK**.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/create-table-dialog.png" alt-text="Example showing the Create Table dialog" :::
+1. In top menu, select **Insert** > **Pivot Table** and then select **OK**. The action creates a new sheet in the file and takes you to the pivot table area on the right side of the sheet.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/pivot-table-fields-reconciliation.png" alt-text="Example showing the PivotTable fields area for the reconciliation file" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/pivot-table-fields-reconciliation.png" :::
+1. Next, add **MeteredResourceID** to the **Rows** area and **Quantity** to **Values**. Results show the overall usage information. For additional details, put **UsageEndDateTime** in the **Columns** area.
+ :::image type="content" source="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/overall-usage.png" alt-text="Example showing overall usage information" lightbox="./media/troubleshoot-csp-billing-issues-usage-file-pivot-tables/overall-usage.png" :::
+1. To view an overall report, add **Properties** to **Rows** under **MeteredResourceID**. It shows a complete dashboard for your usage.
+1. To filter by a specific resource, add **Properties** to the **Filters** area and select the desired usage. You can use Search to find a resource name.
+ To view the cost for the resource, find the total consumed quantity and multiply the value by the list price. The list price is specific for each Resource GUID (MeteredResourceID). If a resource is consuming several MeteredResourceIDs, you have to note the total value for each ID.
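+
+A scripted version of this aggregated-file view, assuming the column names described above (**MeteredResourceID**, **Properties**, **Quantity**) and a hypothetical resource name:
+
+```python
+# A minimal sketch: total consumed quantity per meter for one resource
+# from the aggregated usage file. Column names are assumptions.
+import pandas as pd
+
+df = pd.read_csv("aggregated-usage.csv")  # hypothetical file name
+
+# The Properties column contains the Instance ID (resource name),
+# so filter on a substring of the resource name.
+one = df[df["Properties"].str.contains("my-vm", na=False)]  # hypothetical name
+print(one.groupby("MeteredResourceID")["Quantity"].sum())
+```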
+
+## Next steps
+
+- [Get started with Cost Management for partners](../costs/get-started-partners.md).
cost-management-billing Troubleshoot Customer Agreement Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md
+
+ Title: Troubleshoot Azure MCA billing issues with usage file pivot tables
+description: This article helps you troubleshoot Microsoft Customer Agreement (MCA) billing issues using pivot tables created from your CSV usage files.
+
+tags: billing
+ Last updated : 04/05/2023
+# Troubleshoot MCA billing issues with usage file pivot tables
+
+This article helps you troubleshoot Microsoft Customer Agreement (MCA) billing issues using pivot tables in your usage files. Azure usage files contain all your Azure usage and consumption information. The information in the file can help you:
+
+- Understand how Azure reservations are used and applied
+- Reconcile information in Cost Management with your billed invoice
+- Troubleshoot a cost spike
+- Calculate a refund amount for a service level agreement
+
+By using the information from your usage files, you can get a better understanding of usage issues and diagnose them. Usage files are generated in comma delimited (CSV) format. Because the usage files might be large CSV files, they're easier to manipulate and view as pivot tables in a spreadsheet application like Excel. Examples in this article use Excel, but you can use any spreadsheet application that you want.
+
+Only Billing profile owners, Contributors, Readers, or Invoice Managers have access to download usage files. For more information, see [Download usage for your Microsoft Customer Agreement](../understand/download-azure-daily-usage.md).
+
+## Get the data and format it
+
+Because Azure usage files are in CSV format, you need to prepare the data for use in Excel. Use the following steps to format the data as a table.
+
+1. Download the usage file using the instructions at [Download usage in Azure portal](../understand/download-azure-daily-usage.md).
+1. Open the file in Excel.
+1. The unformatted data resembles the following example.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/raw-csv-data-mca.png" alt-text="Example showing unformatted data" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/raw-csv-data-mca.png" :::
+1. Select the first field in the table, **invoiceID**.
+1. Press Ctrl + Shift + Down arrow and then Ctrl + Shift + Right arrow to select all the information in the table.
+1. In the top menu, select **Insert** > **Table**. In the Create table box, select **My table has headers** and then select **OK**.
+1. In top menu, select **Insert** > **Pivot Table** and then select **OK**. The action creates a new sheet in the file and takes you to the pivot table area on the right side of the sheet.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-fields.png" alt-text="Example showing the PivotTable fields area" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-fields.png" :::
+
+The PivotTable Fields area is a drag-and-drop area. Continue to the next section to create the pivot table.
+
+## Create pivot table to view Azure costs by resources
+
+In this section, you create a pivot table where you can troubleshoot overall general Azure usage. The example table can help you investigate which service consumes the most resources. Or you can view the resources that incur the most cost and how a service is getting charged.
+
+1. In the PivotTable Fields area, drag **Meter Category** and **Product** to the **Rows** section. Put **Product** below **Meter Category**.
+    :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/rows-section.png" alt-text="Example showing Meter Category and Product in Rows" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/rows-section.png" :::
+1. Next, add the **costInBillingCurrency** column to the **Values** section. You can also use the **Quantity** column instead to get information in consumption units, such as GB and hours, or transaction counts, rather than cost in a currency like USD, EUR, or INR.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/add-pivot-table-fields.png" alt-text="Example showing fields added to pivot table" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/add-pivot-table-fields.png" :::
+1. Now you have a dashboard for generalized consumption investigation. You can filter for a specific service using the filtering options in the pivot table.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-row-label.png" alt-text="Example showing pivot table filter option for row label" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-row-label.png" :::
+ To filter a second level in a pivot table, for example a resource, select a second-level item in the table.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-select-field.png" alt-text="Example showing filter options for Select field" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-select-field.png" :::
+1. Drag the **ResourceID** column to the **Rows** area under **Product** to see the cost of each service by resource.
+1. Add the **date** column to the **Columns** area to see daily consumption for the product.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-date.png" alt-text="Example showing where to put date in the columns area" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-date.png" :::
+1. Expand and collapse months with the **+** symbols for each month's column.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-month-expand-collapse.png" alt-text="Example showing the + symbol" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-month-expand-collapse.png" :::
+
+Adding both the **Cost** and **Quantity** columns to the **Values** area is optional. Doing so creates two value columns under each month and day when the **Date** column is in the **Columns** area of the pivot table.
+
+For additional filters, you can add the InvoiceSection, costCenter, SubscriptionID, ResourceGroupName, or Tags to the filters section and select the desired scope.
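+
+The daily view above can also be scripted. The following pandas sketch assumes the MCA usage file columns **meterCategory**, **product**, **date**, and **costInBillingCurrency**; adjust the names and casing to match your file.
+
+```python
+# A minimal sketch: daily cost by meter category and product, like the
+# pivot table above. Column names are assumptions; match them to your file.
+import pandas as pd
+
+df = pd.read_csv("mca-usage.csv")  # hypothetical file name
+
+daily = df.pivot_table(
+    index=["meterCategory", "product"],
+    columns="date",
+    values="costInBillingCurrency",
+    aggfunc="sum",
+    fill_value=0,
+)
+print(daily)
+```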
+
+## Create pivot table to view cost for a specific resource
+
+A single resource can incur several charges for different services. For example, a virtual machine can incur Compute charges, OS licensing, Bandwidth (data transfers), RI usage, and storage for snapshots. Whenever you want to review the overall usage for specific resources, the following steps guide you through creating a dashboard to view overall usage with your usage files.
+
+1. In the right menu, drag **ResourceID** to the **Filter** section in the pivot table menu.
+1. Select the resource that you want to see the cost for. Type in the **Search** box to find a resource name.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/resource-id-search.png" alt-text="Example showing where to search for resourceID" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/resource-id-search.png" :::
+1. Add **meterCategory** and **Product** to the **Rows** area. Put **Product** below **meterCategory**.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-fields-meter-category.png" alt-text="Example showing where to put meterCategory in pivot table fields" lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-fields-meter-category.png" :::
+1. Next, add the **Extended Cost** column to the **Values** section. You can also use the **Consumed Quantity** column instead to get information in consumption units, such as GB and hours, or transaction counts, rather than cost in a currency like USD, EUR, or INR. Now you have a dashboard that shows all the services that the resource consumes.
+1. Add the **Date** column to the **Columns** section. It shows the daily consumption.
+1. You can expand and collapse using the **+** icons in each month's column.
+ :::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-month-expand-collapse.png" alt-text="Example showing the + symbol" :::
+
+## Next steps
+
+- [Explore and analyze costs with cost analysis](../costs/quick-acm-cost-analysis.md).
cost-management-billing Troubleshoot Declined Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-declined-card.md
+
+ Title: Troubleshoot a declined card
+description: Resolve declined credit card problems in the Azure portal.
+ Last updated : 04/26/2023
+# Troubleshoot a declined card
+
+You may experience an issue or error in which a card is declined at Azure sign-up or after you've started using your Azure subscription.
+
+To resolve your issue, select one of the following topics that most closely resembles your error.
+
+## The card is not accepted for your country/region
+
+When you choose a card, Azure displays the card options that are valid in the country/region that you select. Contact your bank or card issuer to verify that your credit card is enabled for international transactions. For more information about supported countries/regions and currencies, see the [Azure Purchase FAQ](https://azure.microsoft.com/pricing/faq/).
+
+> [!Note]
+> - American Express credit cards are not currently supported as a payment instrument in India. We have no time frame as to when it may be an accepted form of payment.
+> - Credit cards and debit cards are accepted in most countries or regions.
+> - Hong Kong Special Administrative Region and Brazil only support credit cards.
+> - India supports debit and credit cards through Visa and Mastercard.
+
+## You're using a virtual or prepaid card
+
+Prepaid and virtual cards are not accepted as payment for Azure subscriptions.
+
+## Your credit information is inaccurate or incomplete
+
+The name, address, and CVV code that you enter must match exactly what's printed on the card.
+
+## The card is inactive or blocked
+
+Contact your bank to make sure that your card is active.
+
+## You may be experiencing other sign-up issues
+
+For more information about how to troubleshoot Azure sign-up issues, see the following article:
+
+[You can't sign-up for Azure in the Azure portal](../troubleshoot-subscription/troubleshoot-azure-sign-up.md)
+
+## You represent a business that doesn't want to pay by card
+
+If you represent a business, you can use the invoice payment method (wire transfer) to pay for your Azure subscription. After you set up the account to pay by invoice, you can't change to another payment option, unless you have a Microsoft Customer Agreement and signed up for Azure through the Azure website.
+
+For more information about how to pay by invoice, see [Submit a request to pay Azure subscription by invoice](../manage/pay-by-invoice.md).
+
+## Your card information is outdated
+
+For information about how to manage your card information, including changing or removing a card, see [Add, update, or remove a credit card for Azure](../manage/change-credit-card.md).
+
+## Card not authorized for service consumption (threshold billing)
+
+A billing threshold is a level of spending that, when met, triggers an authorization on the primary payment method associated with your Azure account. If the service consumption surpasses the billing threshold, Microsoft may attempt an authorization on the primary payment method. If the bank approves the authorization, it's immediately reversed. There will be no settlement record on your bank statement.
+
+However, if the authorization on the card is declined, you're asked to update the primary payment method in order to continue using the services. For information about how to manage your card information, including changing or removing a card, see [Add, update, or remove a credit card](../manage/change-credit-card.md).
+
+For more information about threshold billing, see [Troubleshoot threshold billing](troubleshoot-threshold-billing.md).
+
+## Other help resources
+
+Other troubleshooting articles for Azure Billing and Subscriptions:
+
+- [Sign-up issues](../troubleshoot-subscription/troubleshoot-azure-sign-up.md)
+- [Subscription sign-in issues](../troubleshoot-subscription/troubleshoot-sign-in-issue.md)
+- [No subscriptions found](../troubleshoot-subscription/no-subscriptions-found.md)
+- [Enterprise cost view disabled](enterprise-mgmt-grp-troubleshoot-cost-view.md)
+
+## Contact us for help
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+- [Azure Billing documentation](../index.yml)
cost-management-billing Troubleshoot Ea Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-ea-billing-issues-usage-file-pivot-tables.md
+
+ Title: Troubleshoot Azure EA billing issues with usage file pivot tables
+description: This article helps you troubleshoot Enterprise Agreement (EA) billing issues using pivot tables created from your CSV usage files.
+
+tags: billing
+ Last updated : 04/05/2023
+# Troubleshoot EA billing issues with usage file pivot tables
+
+This article helps you troubleshoot EA billing issues using pivot tables in your usage files. Azure usage files contain all your Azure usage and consumption information. The information in the file can help you:
+
+- Understand how Azure reservations are used and applied
+- Reconcile information in Cost Management with your billed invoice
+- Troubleshoot a cost spike
+- Calculate a refund amount for a service level agreement
+
+By using the information from your usage files, you can get a better understanding of usage issues and diagnose them. Usage files are generated in comma delimited (CSV) format. Because the usage files might be large CSV files, they're easier to manipulate and view as pivot tables in a spreadsheet application like Excel. Examples in this article use Excel, but you can use any spreadsheet application that you want.
+
+Only EA admins, Account Owners, and Department Admins have access to download usage files.
+
+## Get the data and format it
+
+Because Azure usage files are in CSV format, you need to prepare the data for use in Excel. Use the following steps to format the data as a table.
+
+1. Download the Usage Details Version 2 with All Charges (usage and purchases) file using the instructions at [Download usage for EA customers](../understand/download-azure-daily-usage.md).
+1. Open the file in Excel.
+1. The unformatted data resembles the following example.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/raw-csv-data-ea.png" alt-text="Example showing unformatted data in Excel" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/raw-csv-data-ea.png" :::
+1. Select the first field in the table, the one containing the first column title, **BillingAccountID**.
+1. Press Ctrl + Shift + Down arrow and then Ctrl + Shift + Right arrow to select all the information in the table.
+1. In the top menu, select **Insert** > **Table**. In the Create table box, select **My table has headers** and then select **OK**.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/create-table-dialog.png" alt-text="Example showing the Create Table dialog" :::
+1. In top menu, select **Insert** > **Pivot Table** and then select **OK**. The action creates a new sheet in the file. It takes you to the pivot table area on the right side of the sheet.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-fields.png" alt-text="Example showing the PivotTable fields area" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-fields.png" :::
+
+The PivotTable Fields area is a drag-and-drop area. Continue to the next section to create the pivot table.
+
+## Create pivot table to view Azure costs by resources
+
+In this section, you create a pivot table where you can troubleshoot overall general Azure usage. The example table can help you investigate which service consumes the most resources. Or you can view the resources that incur the most cost and how a service is getting charged.
+
+1. In the PivotTable Fields area, drag **Meter Category** and **Product** to the **Rows** section. Put **Product** below **Meter Category**.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/rows-section.png" alt-text="Example showing Meter Category and Product in Rows" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/rows-section.png" :::
+1. Next, add the **Cost** column to the **Values** section. You can also use the **Consumed Quantity** column instead to get information in consumption units, such as GB and hours, or transaction counts, rather than cost in a currency like USD, EUR, or INR.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/add-pivot-table-fields.png" alt-text="Example showing columns added to pivot table fields" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/add-pivot-table-fields.png" :::
+1. Now you have a dashboard for generalized consumption investigation. You can filter for a specific service using the filtering options in the pivot table.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-row-label.png" alt-text="Example showing pivot table filter option for row label" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-row-label.png" :::
+ To filter a second level in a pivot table, for example a resource, select a second-level item in the table.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-select-field.png" alt-text="Example showing filter options for Select field" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-select-field.png" :::
+1. Drag the **ResourceID** column to the **Rows** area under **Product** to see the cost of each service by resource. To view detailed pricing information, check the **UnitPrice** in your organization's price list by searching for the **Product** in its first column.
+1. Add the **Date** column to the **Columns** area to see daily consumption for the product.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-date.png" alt-text="Example showing where to put Date in the columns area" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-date.png" :::
+1. Expand and collapse months with the **+** symbols for each month's column.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-month-expand-collapse.png" alt-text="Example showing the + symbol" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-month-expand-collapse.png" :::
+    Adding both the **Cost** and **Quantity** columns to the **Values** area is optional. Doing so creates two value columns under each month and day when the **Date** column is in the **Columns** area of the pivot table.
+1. For additional filters, you can add the SubscriptionID, Department, ResourceGroup, Tags, or Cost Center columns to the **Filters** area and select the item you want.
+
+## Create pivot table to view cost for a specific resource
+
+A single resource can incur several charges for different services. For example, a virtual machine can incur Compute charges, OS licensing, Bandwidth (Data transfers), RI usage, and storage for snapshots. Whenever you want to review the overall usage for specific resources, the following steps guide you through creating a dashboard to view overall usage with your usage files.
+
+1. In the right menu, drag **ResourceID** to the **Filter** section in the pivot table menu.
+1. Select the resource that you want to see the cost for. Type in the **Search** box to find a resource name.
+1. Add **Meter Category** and **Product** to the Rows section. Put **Product** below **Meter Category**.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-fields-meter-category.png" alt-text="Example showing where to put Meter Category in the pivot table field area" lightbox="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-fields-meter-category.png" :::
+1. Next, add the **Cost** column to the **Values** section. You can also use the **Consumed Quantity** column instead to get information in consumption units, such as GB and hours, or transaction counts, rather than cost in a currency like USD, EUR, or INR. Now you have a dashboard that shows all the services that the resource consumes.
+1. Add the **Date** column to the **Columns** section. It shows the daily consumption.
+1. You can expand and collapse using the **+** symbols in each month's column.
+ :::image type="content" source="./media/troubleshoot-ea-billing-issues-usage-file-pivot-tables/pivot-table-month-expand-collapse.png" alt-text="Example showing the + symbol" :::
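+
+To script this per-resource view, the following sketch filters one resource and sums its daily cost by service. The column names (**ResourceID**, **Meter Category**, **Product**, **Date**, **Cost**) are assumptions that vary by usage file version, so adjust them to match your file.
+
+```python
+# A minimal sketch: cost for a single resource, broken down by meter
+# category and product per day. Column names are assumptions.
+import pandas as pd
+
+df = pd.read_csv("ea-usage.csv")  # hypothetical file name
+resource_id = "/subscriptions/.../virtualMachines/my-vm"  # hypothetical ID
+
+one = df[df["ResourceID"] == resource_id]
+pivot = one.pivot_table(
+    index=["Meter Category", "Product"],
+    columns="Date",
+    values="Cost",
+    aggfunc="sum",
+    fill_value=0,
+)
+print(pivot)
+```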
+
+## Next steps
+
+- [Explore and analyze costs with cost analysis](../costs/quick-acm-cost-analysis.md).
cost-management-billing Troubleshoot Threshold Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-threshold-billing.md
+
+ Title: Troubleshoot threshold billing
+description: Resolve threshold billing problems.
+ Last updated : 02/13/2023
+# Troubleshoot threshold billing
+
+A billing threshold is a level of spending that, when met, triggers an authorization on the primary payment method associated with your Azure account. If the service consumption surpasses the billing threshold, Microsoft may attempt an authorization on the primary payment method. If the bank approves the authorization, it's immediately reversed. There will be no settlement record on your bank statement.
+
+However, if the authorization on the card is declined, you're asked to update the primary payment method in order to continue using the services. For information about how to manage your card information, including changing or removing a card, see [Add, update, or remove a credit card](../manage/change-credit-card.md).
+
+## How am I notified by Microsoft for a threshold billing authorization?
+
+If the payment authorization is approved by the bank, it will immediately be reversed. You won't receive a notification. However, if the payment authorization is declined, you'll receive an email and an Azure portal notification asking you to update your payment method before your account is disabled.
+
+## When does Microsoft release withheld funds on my credit card?
+
+Microsoft immediately reverses all threshold billing authorizations after receiving a bank approval. Microsoft currently only charges the card when the invoice is due. If your bank doesn't release the funds immediately, the card issuer (such as Visa, MasterCard, or American Express) releases the authorization within 30 calendar days.
+
+## Do free trial accounts that upgrade to Pay-As-You-Go receive billing thresholds?
+
+Yes, free trial accounts that upgrade to Pay-As-You-Go will receive billing thresholds.
+
+## Which services (when consumed) count towards my billing threshold?
+
+All Microsoft services count towards a customer's billing threshold.
+
+## How do I check my current consumption level?
+
+Azure customers can view their current usage levels in Cost Management. For more information about viewing your current Azure costs, see [Start using Cost analysis](../costs/quick-acm-cost-analysis.md).
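+
+If you want to check consumption programmatically instead, a sketch along these lines queries month-to-date cost through the Cost Management query endpoint. It assumes the `azure-identity` and `requests` packages and a placeholder subscription ID; verify the request shape against the Cost Management REST reference before relying on it.
+
+```python
+# A minimal sketch: month-to-date actual cost for a subscription via the
+# Cost Management query API. The subscription ID is a placeholder.
+import requests
+from azure.identity import DefaultAzureCredential
+
+subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+
+scope = f"/subscriptions/{subscription_id}"
+url = f"https://management.azure.com{scope}/providers/Microsoft.CostManagement/query"
+body = {
+    "type": "ActualCost",
+    "timeframe": "MonthToDate",
+    "dataset": {
+        "granularity": "None",
+        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
+    },
+}
+response = requests.post(
+    url,
+    headers={"Authorization": f"Bearer {token}"},
+    params={"api-version": "2021-10-01"},
+    json=body,
+)
+response.raise_for_status()
+print(response.json()["properties"]["rows"])  # for example, [[cost, currency]]
+```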
+
+## When there are multiple payment methods (with multiple billing profiles) linked to a single billing account, which one is authorized?
+
+Microsoft only authorizes the default payment method on the customer account.
+
+## Contact us for help
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+- If your card was declined and you need assistance with troubleshooting, see [Troubleshoot a declined card](troubleshoot-declined-card.md).
cost-management-billing Create Subscriptions Deploy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/create-subscriptions-deploy-resources.md
+
+ Title: Message appears when you try to create multiple subscriptions
+
+description: Provides help for the message you might see when you try to create multiple subscriptions.
+
+tags: billing
+ Last updated : 05/04/2023
+# Message appears when you try to create multiple subscriptions
+
+When you try to create multiple Azure subscriptions in a short period of time, you might receive a message stating:
+
+`Subscription not created. Please try again later.`
+
+The message is normal and expected.
+
+> [!IMPORTANT]
+> All existing subscriptions should generate consumption history before you create another one.
+
+The message can appear for customers with the following Azure subscription agreement types:
+
+- [Microsoft Customer Agreement purchased directly through Azure.com](../manage/create-subscription.md)
+ - You can have a maximum of five subscriptions in a Microsoft Customer Agreement purchased directly through Azure.com.
+ - You can create one subscription per 24 hour period.
+ - The ability to create other subscriptions is determined on an individual basis according to your history with Azure.
+- [Microsoft Online Services Program](https://signup.azure.com/signup?offer=ms-azr-0003p)
+ - A new billing account for a Microsoft Online Services Program can have a maximum of five subscriptions. However, subscriptions transferred to the new billing account don't count against the limit.
+ - The ability to create other Microsoft Online Services Program subscriptions is determined on an individual basis according to your history with Azure.
+
+## Solution
+
+Expect a delay before you can create another subscription.
+
+If you're new to Azure and don't have any consumption usage, read the [Get started guide for Azure developers](../../guides/developer/azure-developer-guide.md) to help you get started with Azure services.
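If you create subscriptions programmatically, the same limits apply. The following is a hedged sketch using the Azure CLI `account` extension; the billing scope below is a placeholder that you must replace with a scope you're authorized to use:

```bash
# Install the extension that provides subscription alias commands.
az extension add --name account

# Create a subscription under a Microsoft Customer Agreement billing scope.
# Placeholder IDs must be replaced with values from your billing account.
az account alias create \
  --name "sample-team-subscription" \
  --display-name "Sample team subscription" \
  --workload "Production" \
  --billing-scope "/providers/Microsoft.Billing/billingAccounts/<billing-account-id>/billingProfiles/<billing-profile-id>/invoiceSections/<invoice-section-id>"
```

If you hit the limits described earlier, this command fails in the same way as portal creation, so spacing out creation attempts still applies.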
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- Learn more about [Programmatically creating Azure subscriptions for a Microsoft Customer Agreement with the latest APIs](../manage/programmatically-create-subscription-microsoft-customer-agreement.md).
cost-management-billing No Subscriptions Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/no-subscriptions-found.md
+
+ Title: No subscriptions found error - Azure portal sign in
+description: Provides the solution for a problem in which No subscriptions found error occurs during Azure portal sign in.
++
+tags: billing
+++ Last updated : 12/06/2022++++
+# No subscriptions found sign in error for Azure portal
+
+You might receive a "No subscriptions found" error message when you try to sign in to the [Azure portal](https://portal.azure.com/). This article provides a solution for this problem.
+
+## Symptom
+
+When you try to sign in to the [Azure portal](https://portal.azure.com/), you receive the following error message: "No subscriptions found".
+
+## Cause
+
+This problem occurs if you selected the wrong directory, or if your account doesn't have sufficient permissions.
+
+## Solution
+
+### Scenario: Error message is received in the [Azure portal](https://portal.azure.com)
+
+To fix this issue:
+
+* Make sure that the correct Azure directory is selected by selecting your account at the top right.
+
+ ![Select the directory at the top right of the Azure portal](./media/no-subscriptions-found/directory-switch.png)
+* If the right Azure directory is selected but you still receive the error message, [assign the Owner role to your account](../../role-based-access-control/role-assignments-portal.md).
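If you manage access from the command line, the following is a minimal sketch of the same role assignment, assuming you have permission to create role assignments on the subscription:

```bash
# Assign the Owner role on a subscription to a user.
# Replace the assignee and subscription ID placeholders before running.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>"
```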
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/troubleshoot-azure-sign-up.md
+
+ Title: Troubleshoot issues when you sign up for a new account in the Azure portal
+description: Resolving an issue when trying to sign up for a new account in the Microsoft Azure portal.
+++
+tags: billing
+++ Last updated : 10/17/2023+++
+# Troubleshoot issues when you sign up for a new account in the Azure portal
+
+You might experience an issue when you try to sign up for a new account in the Microsoft Azure portal. This short guide walks you through the sign-up process and discusses some common issues at each step.
+
+> [!NOTE]
+> If you already have an existing account and are looking for guidance to troubleshoot sign-in issues, see [Troubleshoot Azure subscription sign-in issues](troubleshoot-sign-in-issue.md).
+
+## Before you begin
+
+Before beginning sign-up, verify the following information:
+
+- The information for your Azure profile (including contact email address, street address, and telephone number) is correct.
+- Your credit card information is correct.
+- You don't already have a Microsoft account that has the same information.
+
+## Guided walkthrough of Azure sign-up
+
+The Azure sign-up experience consists of four sections:
+
+- About you
+- Identity verification by phone
+- Identity verification by card
+- Agreement
+
+This walkthrough provides examples of the correct information to sign up for an Azure account. Each section also contains some common issues and how to resolve them.
+
+## About you
+
+When you initially sign up for Azure, you have to provide some information about yourself, including:
+
+- Your country/region
+- First name
+- Last name
+- Email address
+- Phone number
+- Credit card information
+
+### Common issues and solutions
+
+#### You see the message "We cannot proceed with sign-up due to an issue with your account. Please contact billing support"
+
+To resolve this error, follow these steps:
+
+1. Sign in to the [Microsoft account center](https://account.microsoft.com/).
+1. At the top of the page, select **Your info**.
+1. Verify that your billing and shipping details are completed and valid.
+1. When you sign up for the Azure subscription, verify that the billing address for the credit card registration matches your bank records.
+
+If you continue to receive the message, try to sign up by using a different browser or an InPrivate (private) browsing window.
+
+#### Free trial isn't available
+
+Have you used an Azure subscription in the past? The Azure Terms of Use agreement limits free trial activation to users who are new to Azure. If you have had any other type of Azure subscription, you can't activate a free trial. Consider signing up for a [Pay-As-You-Go subscription](https://azure.microsoft.com/offers/ms-azr-0003p/).
+
+#### You see the message 'You are not eligible for an Azure subscription'
+
+To resolve this issue, double-check whether the following items are true:
+
+- The information that you provided for your Azure account profile (including contact email address, street address, and telephone number) is correct.
+- The credit card information is correct.
+- You don't already have a Microsoft account that uses the same information.
+
+#### You see the message 'Your current account type is not supported'
+
+This issue can occur if the account is registered in an [unmanaged Microsoft Entra directory](../../active-directory/enterprise-users/directory-self-service-signup.md), and it isn't in your organization's Microsoft Entra directory. To resolve this issue, sign up for the Azure account by using another account, or take over the unmanaged directory. For more information, see [Take over an unmanaged directory as administrator in Microsoft Entra ID](../../active-directory/enterprise-users/domains-admin-takeover.md).
+
+The issue can also occur if the account was created using the Microsoft 365 Developer Program. Microsoft doesn't allow purchasing other paid services using your Microsoft 365 Developer Program subscription. For more information, see [Does the subscription also include a subscription to Azure?](/office/developer-program/microsoft-365-developer-program-faq#does-the-subscription-also-include-a-subscription-to-azure-)
+
+## Identity verification by phone
+
+![Identity verification by phone](./media/troubleshoot-azure-sign-up/2.png)
+
+When you get the text message or telephone call, enter the code that you receive in the text box.
+
+### Common issues and solutions
+
+#### No verification text message or phone call
+
+Although the sign-up verification process is typically quick, it might take up to four minutes for a verification code to be delivered.
+
+Here are some other tips:
+
+- You can use any phone number for verification as long as it meets the requirements. The phone number that you enter for verification isn't stored as a contact number for the account.
+ - A Voice-over-IP (VoIP) phone number can't be used for the phone verification process.
+ - Check that your phone can receive calls or SMS messages from a United States-based telephone number.
+- Double-check the phone number that you enter, including the country code that you select in the drop-down menu.
+- If your phone doesn't receive text messages (SMS), try the **Call me** option.
+
+## Identity verification by card
+
+![Identity verification by card](./media/troubleshoot-azure-sign-up/3.png)
+
+### Common issues and solutions
+
+#### Credit card declined or not accepted
+
+Virtual or prepaid credit cards aren't accepted as payment for Azure subscriptions. To see what else might cause your card to be declined, see [Troubleshoot a declined card at Azure sign-up](../troubleshoot-billing/troubleshoot-declined-card.md).
+
+#### Credit card form doesn't support my billing address
+
+Your billing address must be in the country/region that you select in the **About you** section. Verify that you have selected the correct country/region.
+
+#### Progress bar hangs in identity verification by card section
+
+To complete the identity verification by card, third-party cookies must be allowed for your browser.
+
+Use the following steps to update your browser's cookie settings.
+
+1. Update the cookie settings.
+ - If you're using **Chrome**:
+ - Select **Settings** > **Show advanced settings** > **Privacy** > **Content settings**. Clear **Block third-party cookies and site data**.
+
+ - If you're using **Microsoft Edge**:
+ - Select **Settings** > **View advanced settings** > **Cookies** > **Don't block cookies**.
+
+1. Refresh the Azure sign-up page and check whether the problem is resolved.
+1. If the refresh didn't resolve the issue, then exit and restart the browser, and try again.
+
+### I saw a charge on my free trial account
+
+You might see a small, temporary verification hold on your credit card account after you sign up. This hold is removed within three to five days. If you're worried about managing costs, read more about [Analyzing unexpected charges](../understand/analyze-unexpected-charges.md).
+
+## Agreement
+
+Complete the Agreement.
+
+## Other issues
+
+### Can't activate Azure benefit plan like Visual Studio or Microsoft Cloud Partner Program
+
+Check that you're using the correct sign-in credentials. Then, check the benefit program and verify that you're eligible.
+- Visual Studio
+ - Verify your eligibility status on your [Visual Studio account page](https://my.visualstudio.com/Benefits).
+ - If you can't verify your status, contact [Visual Studio Subscription Support](https://visualstudio.microsoft.com/subscriptions/support/).
+- Microsoft for Startups
+ - Sign in to the [Microsoft for Startups portal](https://startups.microsoft.com/#start-two) to verify your eligibility status for Microsoft for Startups.
+ - If you can't verify your status, you can get help by creating a [Microsoft for Startups support request](https://support.microsoft.com/supportrequestform/354fe60a-ba6d-92ad-208a-6a41387aa9d8).
+- Cloud Partner Program
+ - Sign in to the [Cloud Partner Program portal](https://mspartner.microsoft.com/Pages/Locale.aspx) to verify your eligibility status. If you have the appropriate [Cloud Platform Competencies](https://mspartner.microsoft.com/pages/membership/cloud-platform-competency.aspx), you might be eligible for other benefits.
+ - If you can't verify your status, contact [Cloud Partner Program Support](https://mspartner.microsoft.com/Pages/Support/Premium/contact-support.aspx).
+
+### Can't activate new Azure In Open subscription
+
+To create an Azure In Open subscription, you must have a valid Online Service Activation (OSA) key that has at least one Azure In Open token associated with it. If you don't have an OSA key, contact one of the Microsoft Partners that are listed in [Microsoft Pinpoint](https://pinpoint.microsoft.com/).
+
+## Other help resources
+
+Other troubleshooting articles for Azure Billing and Subscriptions:
+
+- [Declined card](../troubleshoot-billing/troubleshoot-declined-card.md)
+- [Subscription sign-in issues](troubleshoot-sign-in-issue.md)
+- [No subscriptions found](./no-subscriptions-found.md)
+- [Enterprise cost view disabled](../troubleshoot-billing/enterprise-mgmt-grp-troubleshoot-cost-view.md)
+
+## Contact us for help
+
+- Get answers in [Azure forums](https://azure.microsoft.com/support/forums/).
+- Connect with [@AzureSupport](https://twitter.com/AzureSupport)- answers, support, experts.
+- If you have a support plan, [open a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+- Read the [Cost Management and Billing documentation](../index.yml)
cost-management-billing Troubleshoot Not Available Conflict https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/troubleshoot-not-available-conflict.md
+
+ Title: Troubleshoot Not available due to conflict error
+description: Provides the solutions for a problem where you can't select a management group for a reservation or a savings plan.
+++++ Last updated : 03/16/2023+++
+# Troubleshoot Not available due to conflict error
+
+You might see a `Not available due to conflict` error message when you try to select a management group for a savings plan or reservation in the [Azure portal](https://portal.azure.com/). This article provides solutions for the problem.
+
+## Symptom
+
+When you try to buy a reservation or savings plan in the [Azure portal](https://portal.azure.com/) and you select a scope, you might see a `Not available due to conflict` error.
++
+## Cause
+
+This issue can occur when a management group is selected as the scope. An active benefit (savings plan, reservation, or centrally managed Azure Hybrid Benefit) is already applied at a parent or child scope.
+
+## Solutions
+
+To resolve this issue with overlapping benefits, you can do one of the following actions:
+
+- Select another scope.
+- Change the scope of the existing benefit (savings plan, reservation, or centrally managed Azure Hybrid Benefit) to prevent the overlap.
+ - For more information about how to change the scope for a reservation, see [Change the reservation scope](../reservations/manage-reserved-vm-instance.md#change-the-reservation-scope).
+ - For more information about how to change the scope for a savings plan, see [Change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope).
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing Troubleshoot Sign In Issue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/troubleshoot-sign-in-issue.md
+
+ Title: Troubleshoot Azure subscription sign-in issues
+description: Helps to resolve the issues in which you can't sign in to the Azure portal.
+++
+tags: billing
+++ Last updated : 04/05/2023+++
+# Troubleshoot Azure subscription sign-in issues
+
+This guide helps to resolve the issues in which you can't sign in to the Azure portal.
+
+> [!NOTE]
+> If you are having issues signing up for a new Azure account, see [Troubleshoot Azure subscription sign-up issues](./troubleshoot-azure-sign-up.md).
+
+## Page hangs in the loading status
+
+If your internet browser page hangs, try each of the following steps until you can get to the Azure portal.
+
+- Refresh the page.
+- Use a different internet browser.
+- Use the private browsing mode for your browser:
+
+ - **Edge:** Open **Settings** (the three dots by your profile picture), select **New InPrivate window**, and then browse and sign in to the [Azure portal](https://portal.azure.com).
+ - **Chrome:** Choose **Incognito** mode.
+ - **Safari:** Choose **File**, then **New Private Window**.
+
+- Clear the cache and delete Internet cookies:
+
+ - **Edge:** Open **Settings** and select **Privacy and Services**. Follow the steps under **Clear Browsing Data**. Verify that the check boxes for **Browsing history**, **Download history**, and **Cached images and files** are selected, and then select **Delete**.
+ - **Chrome:** Choose **Settings** and select **Clear browsing data** under **Privacy and Security**.
+
+## You are automatically signed in as a different user
+
+This issue can occur if you use more than one user account in an internet browser.
+
+To resolve the issue, try one of the following methods:
+
+- Clear the cache and delete Internet cookies.
+
+ - **Edge:** Open **Settings** and select **Privacy and Services**. Follow the steps under **Clear Browsing Data**. Verify that the check boxes for **Browsing history**, **Download history**, **Cookies**, and **Cached images and files** are selected, and then select **Delete**.
+ - **Chrome:** Choose **Settings** and select **Clear browsing data** under **Privacy and Security**.
+- Reset your browser settings to defaults.
+- Use the private browsing mode for your browser.
+ - **Edge:** Open **Settings** (the three dots by your profile picture), select **New InPrivate window**, and then browse and sign in to the [Azure portal](https://portal.azure.com).
+ - **Chrome:** Choose **Incognito** mode.
+ - **Safari:** Choose **File**, then **New Private Window**.
+
+## I can sign in, but I see the error, No subscriptions found
+
+This problem occurs if you selected the wrong directory, or if your account doesn't have sufficient permissions.
+
+**Scenario:** You receive the error signing into the [Azure portal](https://portal.azure.com).
+
+To fix this issue:
+
+- Verify that the correct Azure directory is selected by selecting your account at the top-right corner.
+- If the correct Azure directory is selected, but you still receive the error message, have your account [added as an Owner](../manage/add-change-subscription-administrator.md).
+
+## Additional help resources
+
Other troubleshooting articles for Azure Billing and Subscriptions:
+
+- [Declined card](../troubleshoot-billing/troubleshoot-declined-card.md)
+- [Subscription sign-up issues](./troubleshoot-azure-sign-up.md)
+- [No subscriptions found](./no-subscriptions-found.md)
+- [Enterprise cost view disabled](../troubleshoot-billing/enterprise-mgmt-grp-troubleshoot-cost-view.md)
+- [Azure Billing documentation](../index.yml)
+
+## Contact us for help
+
+If you have questions or need help but can't sign in to the Azure portal, [create a support request](https://support.microsoft.com/oas/?prid=15470).
defender-for-cloud Defender For Storage Classic Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-migrate.md
The new plan also provides a more predictable and flexible pricing structure for
The new pricing plan charges based on the number of storage accounts you protect, which simplifies cost calculations and allows for easy scaling as your needs change. You can enable it at the subscription or resource level and can also exclude specific storage accounts from protected subscriptions, providing more granular control over your security coverage. Extra charges might apply to storage accounts with high-volume transactions that exceed a high monthly threshold.
-## Deprecation of Defender for Storage (classic)
-
-The classic plan will be deprecated in the future, and the deprecation will be announced three years in advance. All future capabilities will only be added to the new plan.
- > [!NOTE] > If you already have the legacy Defender for Storage (classic) enabled and want to access the new security features and pricing, you'll need to proactively migrate to the new plan. You can migrate to the new plan with one-click through the Azure Portal or use Azure Policy and IaC tools.
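For the Azure Policy and IaC route, the following is a hedged sketch of enabling the new plan at the subscription level through the ARM REST API. The `subPlan` value and `api-version` shown are assumptions based on the `Microsoft.Security/pricings` resource; verify them against the current REST reference before use:

```bash
# Acquire an ARM token with the signed-in CLI identity.
token=$(az account get-access-token --query accessToken --output tsv)

# Enable the new per-storage-account Defender for Storage plan.
curl -X PUT \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"pricingTier": "Standard", "subPlan": "DefenderForStorageV2"}}' \
  "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01"
```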
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Malware Scanning doesn't block access or change permissions to the uploaded blob
- Unsupported storage accounts: Legacy v1 storage accounts aren't supported by malware scanning. - Unsupported service: Azure Files isn't supported by malware scanning.-- Unsupported regions: Jio India West, Korea South.
- - Regions that are supported by Defender for Storage but not by malware scanning. Learn more about [availability for Defender for Storage.](/azure/defender-for-cloud/defender-for-storage-introduction)
+- Unsupported regions: Jio India West, Korea South, South Africa West.
+- Regions that are supported by Defender for Storage but not by malware scanning. Learn more about [availability for Defender for Storage.](/azure/defender-for-cloud/defender-for-storage-introduction)
- Unsupported blob types: [Append and Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) aren't supported for Malware Scanning. - Unsupported encryption: Client-side encrypted blobs aren't supported as they can't be decrypted before scanning by the service. However, data encrypted at rest by Customer Managed Key (CMK) is supported.
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
For the scope you need, you can create an exemption rule to:
This feature is in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] This is a premium Azure Policy capability offered at no extra cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future. - You need the following permissions to make exemptions:
- - **Owner** or **Security Admin** or **Resource Policy Contributor** to create an exemption
- - To create a rule, you need permissions to edit policies in Azure Policy. [Learn more](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
-
+ - **Owner** or **Security Admin** or **Resource Policy Contributor** to create an exemption
+ - To create a rule, you need permissions to edit policies in Azure Policy. [Learn more](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+
- You can create exemptions for recommendations included in Defender for Cloud's default [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) standard, or any of the supplied regulatory standards.
+- Some recommendations included in the Microsoft cloud security benchmark don't support exemptions. For a list of those recommendations, see the [Defender for Cloud general FAQ](/azure/defender-for-cloud/faq-general).
+
+- Recommendations included in multiple policy initiatives must [all be exempted](/azure/defender-for-cloud/faq-general).
+ - Custom recommendations can't be exempted. - If a recommendation is disabled, all of its subrecommendations are exempted. - In addition to working in the portal, you can create exemptions using the Azure Policy API. Learn more [Azure Policy exemption structure](../governance/policy/concepts/exemption-structure.md).
defender-for-cloud Remediate Vulnerability Findings Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/remediate-vulnerability-findings-vm.md
To view vulnerability assessment findings (from all of your configured scanners)
:::image type="content" source="media/remediate-vulnerability-findings-vm/vulnerabilities-should-be-remediated.png" alt-text="The findings from your vulnerability assessment solutions for all selected subscriptions." lightbox="media/remediate-vulnerability-findings-vm/vulnerabilities-should-be-remediated.png":::
-1. To filter the findings by a specific VM, open the "Affected resources" section and click the VM that interests you. Or you can select a VM from the resource health view, and view all relevant recommendations for that resource.
+1. To filter the findings by a specific VM, open the "Affected resources" section and select the VM that interests you. Or you can select a VM from the resource health view, and view all relevant recommendations for that resource.
Defender for Cloud shows the findings for that VM, ordered by severity.
To view vulnerability assessment findings (from all of your configured scanners)
## Disable specific findings
-If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
-When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
+When a finding matches the criteria you defined in your disable rules, it doesn't appear in the list of findings. Typical scenarios include:
- Disable findings with severity below medium - Disable findings that are non-patchable
To create a rule:
## Export the results
-To export vulnerability assessment results, you'll need to use [Azure Resource Graph](https://azure.microsoft.com/features/resource-graph/) (ARG). This tool provides instant access to resource information across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to query information across Azure subscriptions programmatically or from within the Azure portal.
+To export vulnerability assessment results, you need to use [Azure Resource Graph](https://azure.microsoft.com/features/resource-graph/) (ARG). This tool provides instant access to resource information across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to query information across Azure subscriptions programmatically or from within the Azure portal.
For full instructions and a sample ARG query, see the following Tech Community post: [Exporting vulnerability assessment results in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/azure-security-center/exporting-vulnerability-assessment-results-in-azure-security/ba-p/1212091).
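As a starting point, the following is a hedged sketch of such a query run through the Azure CLI `resource-graph` extension; the filters are illustrative and should be adapted to the shape of your findings:

```bash
# Install the Resource Graph extension if needed.
az extension add --name resource-graph

# Pull vulnerability assessment subassessments with their severity.
az graph query -q "
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| extend severity = tostring(properties.status.severity)
| project id, severity, displayName = tostring(properties.displayName)
| take 20"
```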
defender-for-cloud Understand Malware Scan Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/understand-malware-scan-results.md
Malware scanning might fail to scan a blob. When this happens, the scan result i
| SAM259210: "Scan aborted - the requested blob is protected by password." | The blob is password-protected and can't be scanned. For more information, see the [malware scanning limitations](defender-for-storage-malware-scan.md#limitations) documentation. | N/A | Yes | | SAM259211: "Scan aborted - maximum archive nesting depth exceeded." | The maximum archive nesting depth was exceeded. | Archive nesting is a known method for evading malware detection. Handle this blob with care. | Yes | | SAM259212: "Scan aborted - the requested blob data is corrupt." | The blob is corrupted, and Malware Scanning was unable to scan it. | N/A | Yes |
-|SAM259213: “Scan was throttled by the service."| The scan request has temporarily exceeded the service’s rate limit. This is a measure we take to manage server load and ensure optimal performance for all users. For more information, see the malware scanning limitations documentation.|To avoid this issue in the future, please ensure your scan requests stay within the service’s rate limit. If your needs exceed the current rate limit, consider distributing your scan requests more evenly over time. |No|
+|SAM259213: “Scan was throttled by the service."| The scan request has temporarily exceeded the service’s rate limit. This is a measure we take to manage server load and ensure optimal performance for all users. For more information, see the [malware scanning limitations](/azure/defender-for-cloud/defender-for-storage-malware-scan) documentation.|To avoid this issue in the future, please ensure your scan requests stay within the service’s rate limit. If your needs exceed the current rate limit, consider distributing your scan requests more evenly over time. |No|
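Scan results are also surfaced as blob index tags on the scanned blob. The following is a hedged sketch of reading them from the command line; it assumes the `storage-blob-preview` CLI extension, and the exact tag key set by the service should be confirmed in your environment:

```bash
# Blob index tag commands ship in the storage-blob-preview extension.
az extension add --name storage-blob-preview

# List the index tags (including the malware scan result) on a blob.
az storage blob tag list \
  --account-name "<storage-account>" \
  --container-name "<container>" \
  --name "<blob-name>" \
  --auth-mode login
```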
## Next steps
dev-box Concept Dev Box Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md
To learn more about native Microsoft Entra join and Microsoft Entra hybrid join,
Before setting up Dev Box, you need to choose the best regions for your organization. Check [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=dev-box) and [Azure geographies](https://azure.microsoft.com/explore/global-infrastructure/geographies/#choose-your-region) to help you decide on the regions you use. If the region you prefer isn't available for Dev Box, choose a region within 500 miles.
-You specify a region for your dev center and projects. Typically, these resources are in the same region as your main office or IT management center.
+Your dev center and projects typically exist in the same region as your main office or IT management center.
The region of the virtual network specified in a network connection determines the region for a dev box. You can create multiple network connections based on the regions where you support developers. You can then use those connections when you're creating dev box pools to ensure that dev box users create dev boxes in a region close to them. Using a region close to the dev box user provides the best experience.
dev-box How To Configure Intune Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-intune-conditional-access-policies.md
Conditional access is the protection of regulated content in a system by requiri
## Prerequisites
-None
+- Microsoft Intune subscription.
+- Permission to add and manage groups in Microsoft Intune.
## Create a dynamic device group
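The portal steps are described in the rest of this section. As a hedged alternative, the following sketch creates an equivalent dynamic device group through Microsoft Graph; the membership rule is illustrative only, and you should match it to how Dev Box devices are named or modeled in your tenant (the caller also needs permission to create groups):

```bash
# Get a Microsoft Graph token for the signed-in CLI identity.
token=$(az account get-access-token --resource-type ms-graph --query accessToken --output tsv)

# Create a security group with a dynamic device membership rule.
curl -X POST "https://graph.microsoft.com/v1.0/groups" \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "Dev Box devices",
    "mailEnabled": false,
    "mailNickname": "devboxdevices",
    "securityEnabled": true,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": "(device.deviceModel -startsWith \"Cloud PC\")",
    "membershipRuleProcessingState": "On"
  }'
```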
dev-box How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-determine-your-quota-usage.md
Previously updated : 01/09/2024 Last updated : 01/16/2024 # Determine resource usage and quota for Microsoft Dev Box
-To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota. There are different types of quota related to Dev Box that you might see in the Developer portal and Azure portal, such as quota for Dev Box vCPU for box creation as well as portal resource limits for Dev Centers, network connections, and Dev Box Definitions.
+To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota. There are different types of quotas related to Dev Box that you might see in the Developer portal and Azure portal, such as quota for Dev Box vCPU for box creation as well as resource limits for Dev Centers, network connections, and Dev Box Definitions.
-Keeping track of how your quota of virtual machine cores is being used across your subscriptions can be difficult. You might want to know what your current usage is, how much is remaining, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the **Usage + Quotas** page in the Azure portal.
+Understanding quota limits that affect your Dev Box resources helps you to plan for future use. You can check the [default quota level](/azure/azure-resource-manager/management/azure-subscription-service-limits?branch=main#microsoft-dev-box-limits) for each resource, view your current usage, and determine how much quota remains in each region. By monitoring the rate at which your quota is used, you can plan and prepare to [request a quota limit increase](how-to-request-quota-increase.md) before you reach the quota limit for the resource.
-For example, if dev box users encounter a vCPU quota error such as, *QuotaExceeded*, error during dev box creation there may be a need to increase this quota. A great place to start is to determine the current quota available.
+To help you understand where and how you're using your quota, Azure provides the **Usage + Quotas** page in the Azure portal. Each subscription has its own **Usage + quotas** page that covers all the various services in the subscription.
+
+For example, if dev box users encounter a vCPU quota error such as, *QuotaExceeded*, during dev box creation there might be a need to increase this quota. A great place to start is to determine the current quota available.
## Determine your Dev Box usage and quota by subscription in Azure portal
For example, if dev box users encounter a vCPU quota error such as, *QuotaExceed
:::image type="content" source="media/how-to-determine-your-quota-usage/select-dev-box.png" alt-text="Screenshot showing the Usage and quotas page, with Dev Box highlighted in the Provider filter dropdown list." lightbox="media/how-to-determine-your-quota-usage/select-dev-box.png":::
-1. In this example, you can see the **Quota name**, the **Region**, the **Subscription** the quota is assigned to, the **Current Usage**, and whether or not the limit is **Adjustable**.
-
- :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription.png" alt-text="Screenshot showing the Usage and quotas page, with column headings highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription.png":::
+1. You can see the **Quota name**, the **Region**, the **Subscription** the quota is assigned to, the **Current Usage**, and whether or not the limit is **Adjustable**.
1. Notice that Azure groups the usage by level: **Regular**, **Low**, and **No usage**:
- :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription-groups.png" alt-text="Screenshot showing the Usage and quotas page, with virtual machine size groups highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription-groups.png" :::
-
1. To view quota and usage information for specific regions, select the **Region:** filter, select the regions to display, and then select **Apply**. -
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-regions.png" alt-text="Screenshot showing the Usage and quotas page, with the Regions dropdown list highlighted." lightbox="media/how-to-determine-your-quota-usage/select-regions.png":::
1. To view only the items that are using part of your quota, select the **Usage:** filter, and then select **Only items with usage**.
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-items-with-usage.png" alt-text="Screenshot showing the Usage and quotas page, with the Usage dropdown list and Only show items with usage option highlighted." lightbox="media/how-to-determine-your-quota-usage/select-items-with-usage.png" :::
-
1. To view items that are using above a certain amount of your quota, select the **Usage:** filter, and then select **Select custom usage**. -
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" alt-text="Screenshot showing the Usage and quotas page, with the Usage dropdown list and Select custom usage option highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" :::
1. You can then set a custom usage threshold, so only the items using above the specified percentage of the quota are displayed.
-
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Select custom usage option and configuration settings highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage.png":::
1. Select **Apply**.
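If you'd rather check usage programmatically, the following is a hedged sketch that calls the `Microsoft.DevCenter` usages endpoint directly; the `api-version` shown is an assumption, so confirm the current version in the DevCenter REST reference:

```bash
# Acquire an ARM token with the signed-in CLI identity.
token=$(az account get-access-token --query accessToken --output tsv)

# List Dev Box usage against quota for one region of a subscription.
curl -s -H "Authorization: Bearer $token" \
  "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.DevCenter/locations/<region>/usages?api-version=2023-04-01"
```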
-Each subscription has its own **Usage + quotas** page that covers all the various services in the subscription and not just Microsoft Dev Box.
- ## Related content - Check the default quota for each resource type by subscription type with [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits)
dev-box Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/monitor-reference.md
Title: DevCenter Diagnostic Logs Reference
+ Title: Dev center (DevCenter) diagnostic logs reference
-description: Reference for the schema for Dev Center Diagnostic logs
+description: Schema reference for dev center (DevCenter) diagnostic logs. Review the list of Azure Storage and Azure Monitor Logs properties included in monitoring data.
-+ Previously updated : 04/28/2023 Last updated : 01/17/2024
-# Monitoring Microsoft DevCenter data reference
-
-This article provides a reference for log and metric data collected to analyze the performance and availability of resources within your dev center. See the [How To Monitor DevCenter Diagnostic Logs](how-to-configure-dev-box-azure-diagnostic-logs.md) article for details on collecting and analyzing monitoring data for a dev center.
+# Microsoft dev center monitoring data reference
+This article provides a reference for log and metric data collected for a Microsoft Dev Box dev center. You can use the collected data to analyze the performance and availability of resources within your dev center. For details about how to collect and analyze monitoring data for your dev center, see [Configure Azure diagnostic logs for a dev center](how-to-configure-dev-box-azure-diagnostic-logs.md).
## Resource logs
-The following table lists the properties of resource logs in DevCenter. The resource logs are collected into Azure Monitor Logs or Azure Storage. In Azure Monitor, logs are collected in the **DevCenterDiagnosticLogs** table under the resource provider name of `MICROSOFT.DEVCENTER`.
+The following table lists the properties of resource logs in a dev center. The resource logs are collected into Azure Monitor Logs or Azure Storage. In Azure Monitor, logs are collected in the **DevCenterDiagnosticLogs** table under the resource provider name of `MICROSOFT.DEVCENTER`.
| Azure Storage field or property | Azure Monitor Logs property | Description | | | | | | **time** | **TimeGenerated** | The date and time (UTC) when the operation occurred. |
-| **resourceId** | **ResourceId** | The DevCenter resource for which logs are enabled.|
-| **operationName** | **OperationName** | Name of the operation. If the event represents an Azure role-based access control (RBAC) operation, specify the Azure RBAC operation name (for example, `Microsoft.DevCenter/projects/users/devboxes/write`). This name is typically modeled in the form of an Azure Resource Manager operation, even if it's not a documented Resource Manager operation: (`Microsoft.<providerName>/<resourceType>/<subtype>/<Write/Read/Delete/Action>`)|
+| **resourceId** | **ResourceId** | The dev center resource for which logs are enabled. |
+| **operationName** | **OperationName** | Name of the operation. If the event represents an Azure role-based access control (RBAC) operation, specify the Azure RBAC operation name (for example, `Microsoft.DevCenter/projects/users/devboxes/write`). This name is typically modeled in the form of an Azure Resource Manager operation, even if it's not a documented Resource Manager operation: (`Microsoft.<providerName>/<resourceType>/<subtype>/<Write/Read/Delete/Action>`). |
| **identity** | **CallerIdentity** | The OID of the caller of the event. |
-| **TargetResourceId** | **ResourceId** | The subresource that pertains to the request. Depending on the operation performed, this value may point to a `devbox` or `environment`.|
+| **TargetResourceId** | **ResourceId** | The subresource that pertains to the request. Depending on the operation performed, this value might point to a `devbox` or `environment`. |
| **resultSignature** | **ResponseCode** | The HTTP status code returned for the operation. |
-| **resultType** | **OperationResult** | Whether the operation failed or succeeded. |
-| **correlationId** | **CorrelationId** | The unique correlation ID for the operation that can be shared with the app team if investigations are necessary.|
+| **resultType** | **OperationResult** | Indicates whether the operation failed or succeeded. |
+| **correlationId** | **CorrelationId** | The unique correlation ID for the operation that can be shared with the app team to support further investigation. |
-For a list of all Azure Monitor log categories and links to associated schemas, see [Azure Monitor Logs categories and schemas](../azure-monitor/essentials/resource-logs-schema.md).
+For a list of all Azure Monitor log categories and links to associated schemas, see [Common and service-specific schemas for Azure resource logs](../azure-monitor/essentials/resource-logs-schema.md).
## Azure Monitor Logs tables
-DevCenter uses Kusto tables from Azure Monitor Logs. You can query these tables with Log analytics. For a list of Kusto tables DevCenter uses, see the [Azure Monitor Logs table reference](how-to-configure-dev-box-azure-diagnostic-logs.md) article.
-
-## Next steps
+A dev center uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables that a dev center uses, see the [Azure Monitor Logs table reference organized by resource type](/azure/azure-monitor/reference/tables/tables-resourcetype#dev-centers).
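As a minimal sketch, the following query summarizes dev center operations over the last day; it assumes the `log-analytics` CLI extension and a workspace that receives the dev center's diagnostic logs:

```bash
# Install the Log Analytics query extension if needed.
az extension add --name log-analytics

# Count recent dev center operations by name and result.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "DevCenterDiagnosticLogs | where TimeGenerated > ago(1d) | summarize count() by OperationName, OperationResult" \
  --output table
```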
-For more information on monitoring DevCenter resources, see the following articles:
+## Related content
-- To learn how to configure Azure diagnostic logs for a dev center, see [Configure Azure diagnostic logs for a DevCenter](how-to-configure-dev-box-azure-diagnostic-logs.md).-- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+- [Configure Azure diagnostic logs for a dev center](how-to-configure-dev-box-azure-diagnostic-logs.md)
+- [Monitor Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md)
dev-box Tutorial Dev Box Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-dev-box-limits.md
Title: "Tutorial: Limit the number of dev boxes in a project to help control costs" #Required; page title displayed in search results. "Tutorial: \<verb\> * \<noun\>". Include the brand.
-description: Each dev box incurs compute and storage costs. This tutorial shows you how to set a limit on the number of dev boxes developers can create in a project. #Required; article description that is displayed in search results. Include the word "tutorial".
---- Previously updated : 06/30/2023 #Required; mm/dd/yyyy format.
+ Title: "Tutorial: Limit the number of dev boxes in a project to help control costs"
+description: Each dev box incurs compute and storage costs. This tutorial shows you how to set a limit on the number of dev boxes developers can create in a project.
++++ Last updated : 01/11/2024 #CustomerIntent: As a project admin, I want to set a limit on the number of dev boxes a dev box user can create as part of my cost management strategy.
Last updated 06/30/2023 #Required; mm/dd/yyyy format.
You can set a limit on the number of dev boxes each developer can create within a project. You can use this functionality to help manage costs, use resources effectively, or prevent dev box creation for a given project.
-In the developer portal, you see the number of dev boxes that you've created in a project, and the total number of dev boxes you can create in the project. If you've used all your available dev boxes in a project, you can't create a new dev box.
+In the developer portal, a Dev Box User can see their existing dev boxes and their total number of allocations for each project. When they reach their allocation limit for a project, they can't create a new dev box for that project.
In this tutorial, you learn how to: > [!div class="checklist"] > * Set a dev box limit for your project by using the Azure portal
-> * View dev box Limits in the developer portal
+> * View dev box limits in the developer portal
## Prerequisites - A Dev Box project in your subscription -- Project Admin permission to that project
+- Project Admin role permissions to that project
## Set a dev box limit for your project
-The dev box limit is the number of dev boxes each developer can create in a project. For example, if you set the limit to 3, each developer in your team can create 3 dev boxes.
+The dev box limit is the number of dev boxes each developer can create in a project. For example, if you set the limit to 3, each developer in your team can create three dev boxes.
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. In the search box, enter *projects*. In the list of results, select **Projects**.
+
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
+ 1. Select the project that you want to set a limit for. + 1. On the left menu, select **Limits**.
-1. On the **Limits** page, for **Enable dev box limit**, select **Yes**.
+
+1. On the **Limits** page, toggle the **Enable dev box limit** setting to **Yes**.
- :::image type="content" source="media/tutorial-dev-box-limits/enable-dev-box-limits.png" alt-text="Screenshot showing the dev box limits options for a project, with Yes highlighted.":::
+ :::image type="content" source="media/tutorial-dev-box-limits/enable-dev-box-limits.png" alt-text="Screenshot showing the dev box limits options for a project, with Yes highlighted." lightbox="media/tutorial-dev-box-limits/enable-dev-box-limits.png":::
1. In **Dev boxes per developer**, enter a dev box limit and then select **Apply**.
- :::image type="content" source="media/tutorial-dev-box-limits/dev-box-limit-number.png" alt-text="Screenshot showing dev box limits for a project enabled, with dev boxes per developer highlighted.":::
+ :::image type="content" source="media/tutorial-dev-box-limits/dev-box-limit-number.png" alt-text="Screenshot showing dev box limits for a project enabled, with dev boxes per developer highlighted." lightbox="media/tutorial-dev-box-limits/dev-box-limit-number.png":::
->[!TIP]
-> To prevent developers creating more dev boxes in a project, set the dev box limit to 0. This won't delete existing dev boxes, but it will prevent further creation of dev boxes in the project.
+> [!TIP]
+> To prevent developers creating more dev boxes in a project, set the dev box limit to 0. This action doesn't delete existing dev boxes, but it prevents creation of new dev boxes in the project.
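The same setting can be applied from the command line. The following is a hedged sketch using the Azure CLI `devcenter` extension; the `--max-dev-boxes-per-user` parameter name is based on that extension's project commands, so verify it with `az devcenter admin project update --help`:

```bash
# Install the Dev Box admin commands if needed.
az extension add --name devcenter

# Limit each developer to three dev boxes in the project.
az devcenter admin project update \
  --name "<project-name>" \
  --resource-group "<resource-group>" \
  --max-dev-boxes-per-user 3
```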
## View dev box limits in the developer portal
-In the developer portal, select a project to see the number of dev boxes you have already created and the total number of dev boxes you can create in that project.
+In the developer portal, select a project to see the number of existing dev boxes and the total number of dev boxes you can create in that project.
+
-If youΓÇÖve used all your available dev boxes in a project, you see an error message and you can't create a new dev box:
+If all of your available dev boxes in a project are in use, you see an error message and you can't create a new dev box:
-*Your project administrator has set a limit of 3 dev boxes per user in Contoso-software-dev. Please delete a dev box in this project, or contact your administrator to increase your limit.*
+*Your project administrator has set a limit of \<number> dev boxes per user in \<project name>. Please delete a dev box in this project, or contact your administrator to increase your limit.*
## Clean up resources If you're not going to continue to use dev box limits, remove the limit with the following steps:
-1. In the search box, enter *projects*. In the list of results, select **Projects**.
-1. Select the project that you want to set a limit for.
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
+
+1. Select the project associated with the limit that you want to remove.
+ 1. On the left menu, select **Limits**.
-1. On the **Limits** page, for **Enable dev box limit**, select **No**.
+
+1. On the **Limits** page, change the **Enable dev box limit** setting to **No**.
## Next steps -- [Use the CLI to configure dev box limits](/cli/azure/devcenter/admin/project)
+- [Use the Azure CLI to configure dev box limits](/cli/azure/devcenter/admin/project)
- [Manage a dev box project](how-to-manage-dev-box-projects.md) - [Microsoft Dev Box pricing](https://azure.microsoft.com/pricing/details/dev-box/)
devtest-labs Image Factory Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-create.md
An image factory is a configuration-as-code solution that builds and distributes
The significant accelerator to get a developer desktop to a ready state in DevTest Labs is using custom images. The downside of custom images is that there's something extra to maintain in the lab. For example, trial versions of products expire over time (or) newly released security updates aren't applied, which force us to refresh the custom image periodically. With an image factory, you have a definition of the image checked in to source code control and have an automated process to produce custom images based on the definition.
-The solution enables the speed of creating virtual machines from custom images while eliminating extra ongoing maintenance costs. With this solution, you can automatically create custom images, distribute them to other DevTest Labs, and retire the old images. In the following video, you learn about the image factory, and how it's implemented with DevTest Labs. All the Azure PowerShell scripts are freely available and located here: [https://aka.ms/dtlimagefactory](https://aka.ms/dtlimagefactory).
+The solution enables the speed of creating virtual machines from custom images while eliminating extra ongoing maintenance costs. With this solution, you can automatically create custom images, distribute them to other DevTest Labs, and retire the old images. All the Azure PowerShell scripts are freely available and located here: [https://aka.ms/dtlimagefactory](https://aka.ms/dtlimagefactory).
<br/>
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
This article provides a list of known issues and troubleshooting steps associate
- **Recommendation**: Check if the selected tables exist in the target Azure SQL Database. If this migration is called from a PowerShell script, check if the table list parameter includes the correct table names and is passed into the migration. ## Error code: 2060 - SqlSchemaCopyFailed-
+<!-- Comment:Now supported by SHIR 5.37 onwards--
- **Message**: `Login failed for user 'Domain\MachineName$`. - **Cause**: This error generally happens when customer uses Windows authentication to login the source. The customer provides Windows authentication credential but SHIR converts it to machine account (Domain\MachineName$).
This article provides a list of known issues and troubleshooting steps associate
- **Recommendation**: Possible solutions for this issue are: 1) Add login for machine account "Domain\MachineName$" to the source SQL Server. [How to Create a SQL Server Computer Account Login](https://stackoverflow.com/questions/38680366/how-to-add-a-new-sql-server-machine-account). 2) Or Use SQL login to connect to source SQL Server in Azure Data Studio.
- 3) Or As an alternative, Migrate the database schema from source to target by using [PowerShell](/powershell/module/az.datamigration/new-azdatamigrationsqlserverschema) or the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects](/azure-data-studio/extensions/sql-database-project-extension) extension in Azure Data Studio.
+ 3) Or As an alternative, Migrate the database schema from source to target by using [PowerShell](/powershell/module/az.datamigration/new-azdatamigrationsqlserverschema) or the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects](/azure-data-studio/extensions/sql-database-project-extension) extension in Azure Data Studio.-->
- **Message**: `The SELECT permission was denied on the object 'sql_logins', database 'master', schema 'sys'.`
dms Tutorial Transparent Data Encryption Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-transparent-data-encryption-migration-ads.md
You can use the [Azure SQL Migration extension for Azure Data Studio](/azure-dat
The TDE-enabled database migration process automates manual tasks such as backing up the database certificate keys (DEK), copying the certificate files from the on-premises SQL Server to the Azure SQL target, and then reconfiguring TDE for the target database again. > [!IMPORTANT]
- > Currently, only Azure SQL Managed Instance targets are supported.
+ > 1) Currently, only Azure SQL Managed Instance targets are supported.
+> 2) Encrypted backups aren't supported.
In this tutorial, you learn how to migrate the example `AdventureWorksTDE` encrypted database from an on-premises instance of SQL Server to an Azure SQL managed instance.
energy-data-services How To Generate Auth Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-auth-token.md
In this article, you learn how to generate the service principal auth token, a u
:::image type="content" source="media/how-to-generate-auth-token/app-registration-uri.png" alt-text="Screenshot that shows adding the URI to the app.":::
-1. Fetch the `redirect-uri` (or reply URL) for your app to receive responses from Microsoft Entra ID.
## Fetch parameters
A `client-secret` is a string value your app can use in place of a certificate t
:::image type="content" source="media/how-to-generate-auth-token/client-secret.png" alt-text="Screenshot that shows finding the client secret.":::
-#### Find the URL for your Azure Data Manager for Energy instance
+### Find redirect-uri
+The `redirect-uri` of your app, where your app sends and receives the authentication responses. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL encoded.
+
+1. Go to **App registrations**.
+1. Under the **Manage** section, select **Authentication**.
+1. Fetch the `redirect-uri` (or reply URL) for your app to receive responses from Microsoft Entra ID.
+
+ :::image type="content" source="media/how-to-generate-auth-token/redirect-uri.png" alt-text="Screenshot that shows redirect-uri.":::
+
+### Find the adme-url for your Azure Data Manager for Energy instance
1. Create an [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). 1. Go to your Azure Data Manager for Energy **Overview** page on the Azure portal.
A `client-secret` is a string value your app can use in place of a certificate t
:::image type="content" source="media/how-to-generate-auth-token/endpoint-url.png" alt-text="Screenshot that shows finding the URI for the Azure Data Manager for Energy instance.":::
-#### Find data-partition-id
+### Find data-partition-id
You have two ways to get the list of data partitions in your Azure Data Manager for Energy instance.
curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa
Generating a user's auth token is a two-step process.
-### Get the authorization code
+### Get the authorization-code
The first step to get an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform `/authorize` endpoint. Microsoft Entra ID signs the user in and requests their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Microsoft Entra ID returns an authorization code to your app that it can redeem at the Microsoft identity platform `/token` endpoint for an access token.
The first step to get an access token for many OpenID Connect (OIDC) and OAuth 2
1. The browser redirects to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication. 1. Copy the response from the URL bar of the browser and fetch the text between `code=` and `&state`.
-1. Keep this authorization code handy for future use.
+1. Keep this `authorization-code` handy for future use.
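+
+    For example, here's a minimal shell sketch of extracting the code. It assumes the full redirect URL was copied from the browser; the URL shown is a placeholder.
+
+    ```bash
+    # Pull the value between 'code=' and the next '&' out of the redirect URL.
+    REDIRECT_URL='http://localhost:8080/?code={authorization-code}&state=12345'
+    echo "$REDIRECT_URL" | sed -n 's/.*code=\([^&]*\).*/\1/p'
+    ```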
#### Request format
The second step is to get the auth token and the refresh token. Your app uses th
```bash curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id={client-id} &scope={client-id}%2f.default openid profile offline_access
- &code={authorization code}
- &redirect_uri=http%3A%2F%2Flocalhost%3a8080
+ &code={authorization-code}
+ &redirect_uri={redirect-uri}
&grant_type=authorization_code &client_secret={client-secret}' 'https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token' ```
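+
+The response is JSON that contains the `access_token`, the `refresh_token`, and related fields. As a minimal sketch, assuming `jq` is installed and you saved the JSON response to a file named `token.json`, you can keep the access token in a shell variable for later API calls:
+
+```bash
+# Extract the access token from the saved /token response.
+ACCESS_TOKEN=$(jq -r '.access_token' token.json)
+echo "$ACCESS_TOKEN"
+```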
energy-data-services How To Manage Acls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-acls.md
Run the following curl command in Azure Cloud Shell to create a new data group,
**Request format** ```bash
- curl --location --request POST "https://<URI>/api/entitlements/v2/groups/" \
+ curl --location --request POST "https://<adme-url>/api/entitlements/v2/groups/" \
--header 'data-partition-id: <data-partition>' \ --header 'Authorization: Bearer <access_token>' --data-raw '{
In case, a data record has 2 ACLs, ACL_1 and ACL_2, and a given user is member o
**Request format** ```bash
-curl --location --request PUT 'https://osdu-ship.msft-osdu-test.org/api/storage/v2/records/' \
+curl --location --request PUT 'https://<adme-url>/api/storage/v2/records/' \
--header 'data-partition-id: opendes' \ --header 'Accept: application/json' \ --header 'Authorization: Bearer <token>' \
Keep the record ID from the response handy for future references.
**Request format** ```bash
-curl --location 'https://osdu-ship.msft-osdu-test.org/api/storage/v2/records/opendes:master-data--Well:999736019023' \
+curl --location 'https://<adme-url>/api/storage/v2/records/opendes:master-data--Well:999736019023' \
--header 'data-partition-id: opendes' \ --header 'Authorization: Bearer <token>' ```
The first `/acl/owners/0` operation removes ACL from 0th position in the array o
**Request format** ```bash
-curl --location --request PATCH 'https://osdu-ship.msft-osdu-test.org/api/storage/v2/records/' \
+curl --location --request PATCH 'https://<adme-url>/api/storage/v2/records/' \
--header 'data-partition-id: opendes' \ --header 'Accept: application/json' \ --header 'Authorization: Bearer <token>'\
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
The object ID (OID) is the Microsoft Entra user OID.
Run the following curl command in Azure Cloud Shell to get all the groups that are available for you or that you have access to in the specific data partition of the Azure Data Manager for Energy instance. ```bash
- curl --location --request GET "https://<URI>/api/entitlements/v2/groups/" \
+ curl --location --request GET "https://<adme-url>/api/entitlements/v2/groups/" \
--header 'data-partition-id: <data-partition>' \ --header 'Authorization: Bearer <access_token>' ```
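+
+To quickly inspect the result, you can pipe the response through `jq`. This is a sketch that assumes `jq` is installed and that the response body contains a `groups` array with `email` fields:
+
+```bash
+# List the email IDs of all groups you have access to (response shape assumed).
+curl -s --request GET "https://<adme-url>/api/entitlements/v2/groups/" \
+--header 'data-partition-id: <data-partition>' \
+--header 'Authorization: Bearer <access_token>' | jq -r '.groups[].email'
+```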
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. The value to be sent for the parameter `email` is the OID of the user and not the user's email address. ```bash
- curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.dataservices.energy/members' \
+ curl --location --request POST 'https://<adme-url>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.dataservices.energy/members' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' \ --header 'Content-Type: application/json' \
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. Run the following curl command in Azure Cloud Shell to get all the groups associated with the user. ```bash
- curl --location --request GET 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \
+ curl --location --request GET 'https://<adme-url>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' ```
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. *Do not* delete the OWNER of a group unless you have another OWNER who can manage users in that group. ```bash
- curl --location --request DELETE 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>' \
+ curl --location --request DELETE 'https://<adme-url>/api/entitlements/v2/members/<OBJECT_ID>' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' ```
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
Title: Use Azure Event Grid with events in CloudEvents schema
description: Describes how to use the CloudEvents schema for events in Azure Event Grid. The service supports events in the JSON implementation of CloudEvents. Last updated 12/02/2022
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
event-grid Event Schema Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-key-vault.md
Title: Azure Key Vault as Event Grid source description: Describes the properties and schema provided for Azure Key Vault events with Azure Event Grid Previously updated : 11/17/2022 Last updated : 01/17/2024 # Azure Key Vault as Event Grid source
event-grid Monitor Mqtt Delivery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-mqtt-delivery-reference.md
This article provides a reference of log and metric data collected to analyze th
| DropReason | The reason a session was dropped. The available values include: <br><br>- SessionExpiry: a persistent session has expired. <br>- TransientSession: a non-persistent session has expired. <br>- SessionOverflow: a client didn't connect during the lifespan of the session to receive queued QOS1 messages, and the queue reached its maximum limit. <br>- AuthorizationError: the session was dropped for authorization reasons.
+## Resource logs
+
+MQTT broker in Azure Event Grid captures diagnostic logs for the following categories:
+
+- [Failed MQTT connections](#failed-mqtt-connections)
+- [Successful MQTT connections](#successful-mqtt-connections)
+- [MQTT disconnections](#mqtt-disconnections)
+- [Failed MQTT published messages](#failed-mqtt-published-messages)
+- [Failed MQTT subscription operations](#failed-mqtt-subscription-operations)
+
+This section provides schema and examples for these logs.
+
+### Common properties
+The following properties are common for all the resource logs from MQTT broker.
+
+| Property name | Type | Description |
+| -- | - | -- |
+| `time` | `DateTime` | Timestamp (UTC) when the log was generated. |
+| `resourceId` | `String` | Resource ID of the Event Grid namespace. For example: `/SUBSCRIPTIONS/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/RESOURCEGROUPS/MYRG/PROVIDERS/MICROSOFT.EVENTGRID/NAMESPACE/MYNAMESPACE`. |
+| `location` | `String` | Location of the namespace. |
+| `operationName` | `String` | Name of the operation. For example: `Microsoft.EventGrid/topicspaces/connect`, `Microsoft.EventGrid/topicspaces/disconnect`, `Microsoft.EventGrid/topicspaces/publish`, `Microsoft.EventGrid/topicspaces/subscribe`, `Microsoft.EventGrid/topicspaces/unsubscribe`. |
+| `category` | `String` | Category or type of the operation. For example: `FailedMQTTConnections`, `SuccessfulMQTTConnections`, `MQTTDisconnections`, `FailedMQTTPublishedMessages`, `FailedMQTTSubscriptionOperations`. |
+| `resultType` | `String` | Result of the operation. For example: `Failed`, `Succeeded`. |
+| `resultSignature` | `String` | Result of the failed operation. For example: `QuotaExceeded`, `ClientAuthenticationError`, `AuthorizationError`. This property isn't included for the successful events like `SuccessfulMQTTConnections`. |
+| `resultDescription` | `String` | More description about the result of the failed operation. This property isn't included for the successful events like `SuccessfulMQTTConnections`. |
+| `authenticationType` | `String` | Type of authentication used by the client. It's set to one of the following values: `CertificateThumbprintMatch`, `AccessToken`, or `CACertificate`. |
+| `clientIdentitySource` | `String` | Source of the client's identity. It's `JWT` when you use Microsoft Entra ID authentication. |
+| `authenticationAuthority` | `String` | Authority of the client's identity. It's set to one of the following values: `Local` for clients registered in Event Grid's local registry, or `AAD` for clients that use Microsoft Entra ID for authentication. |
+| `clientIdentity` | `String` | Value of the client's identity. It's the client name from the local registry, or the object ID for Microsoft Entra ID clients.|
++
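+As a hedged sketch, you can route these log categories to a Log Analytics workspace with a diagnostic setting. The resource IDs and the setting name are placeholders; adjust the category list to the ones you need.
+
+```bash
+# Send two of the MQTT broker log categories to a Log Analytics workspace.
+az monitor diagnostic-settings create \
+  --name mqtt-broker-logs \
+  --resource <event-grid-namespace-resource-id> \
+  --workspace <log-analytics-workspace-resource-id> \
+  --logs '[{"category":"FailedMQTTConnections","enabled":true},{"category":"MQTTDisconnections","enabled":true}]'
+```
+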
+### Failed MQTT connections
+This log includes an entry for every failed `MQTT CONNECT` operation by the client. This log can be used to diagnose connectivity issues.
+
+Here's a sample Failed MQTT connection log entry.
+
+```json
+[
+ {
+ "time": "2023-11-06T22:45:02.6829930Z",
+ "resourceId": "/SUBSCRIPTIONS/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/RESOURCEGROUPS/MYRG/PROVIDERS/MICROSOFT.EVENTGRID/NAMESPACE/MYNS",
+ "location": "eastus",
+ "operationName": "Microsoft.EventGrid/topicspaces/connect",
+ "category": "FailedMqttConnections",
+ "resultType": "Failed",
+ "resultSignature": "AuthenticationError",
+ "resultDescription": "Client could not be found",
+ "identity": {
+ "authenticationType": "CertificateThumbprintMatch",
+ "clientIdentitySource": "UserName",
+ "authenticationAuthority": "Local",
+ "clientIdentity": "testclient-1"
+ },
+ "properties": {
+ "sessionName": "testclient1",
+ "protocol": "MQTT5",
+ "traceId": "pwu5p3uuvzbyzpe4vyygij3it4"
+ }
+ }
+]
+```
+Here are the properties and their descriptions.
+
+| Property | Type | Description |
+| -- | - | -- |
+| `sessionName` | `String` | Name of the session provided by the client in the `MQTT CONNECT` packet's clientId field. |
+| `protocol` | `String` | Protocol used by the client to connect. Possible values are: MQTT3, MQTT3-WS, MQTT5, MQTT5-WS. |
+| `traceId` | `String` | Generated trace ID. |
++
+### Successful MQTT connections
+This log includes an entry for every successful `MQTT CONNECT` operation by the client. This log can be used for auditing purposes.
+
+```json
+[
+ {
+ "time": "2023-11-07T01:22:05.2804980Z",
+ "resourceId": "/SUBSCRIPTIONS/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/RESOURCEGROUPS/MYRG/PROVIDERS/MICROSOFT.EVENTGRID/NAMESPACE/MYNS",
+ "location": "eastus",
+ "operationName": "Microsoft.EventGrid/topicspaces/connect",
+ "category": "SuccessfulMqttConnections",
+ "resultType": "Succeeded",
+ "identity": {
+ "authenticationType": "CertificateThumbprintMatch",
+ "clientIdentitySource": "UserName",
+ "authenticationAuthority": "Local",
+ "clientIdentity": "client1"
+ },
+ "properties": {
+ "sessionName": "client1",
+ "protocol": "MQTT5"
+ }
+ }
+]
+```
+
+Here are the properties and their descriptions.
+
+| Property | Type | Description |
+| -- | - | -- |
+| `sessionName` | `String` | Name of the session provided by the client in the `MQTT CONNECT` packet's clientId field. |
+| `protocol` | `String` | Protocol used by the client to connect. Possible values are: MQTT3, MQTT3-WS, MQTT5, MQTT5-WS. |
++
+### MQTT disconnections
+This log includes an entry for every MQTT client disconnection from an Event Grid namespace. This log can be used to diagnose connectivity issues.
+
+```json
+[
+ {
+ "time": "2023-11-07T01:29:22.4591610Z",
+ "resourceId": "/SUBSCRIPTIONS/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/RESOURCEGROUPS/MYRG/PROVIDERS/MICROSOFT.EVENTGRID/NAMESPACE/MYNS",
+ "location": "eastus",
+ "operationName": "Microsoft.EventGrid/topicspaces/disconnect",
+ "category": "MqttDisconnections",
+ "resultType": "Failed",
+ "resultSignature": "ClientError",
+ "resultDescription": "Timed out per negotiated Keep Alive",
+ "identity": {
+ "clientIdentity": "client1"
+ },
+ "properties": {
+ "sessionName": "client1",
+ "protocol": "MQTT5"
+ }
+ }
+]
+```
+
+Here are the properties and their descriptions.
+
+| Property | Type | Description |
+| -- | - | -- |
+| `sessionName` | `String` | Name of the session provided by the client in the `MQTT CONNECT` packet's clientId field. |
+| `protocol` | `String` | Protocol used by the client to connect. Possible values are: MQTT3, MQTT3-WS, MQTT5, MQTT5-WS. |
++
+### Failed MQTT published messages
+This log includes an entry for every MQTT message that failed to be published to or delivered by an Event Grid namespace. This log can be used to diagnose publishing issues and message loss.
+
+```json
+[
+ {
+ "time": "2023-11-07T01:22:48.2811790Z",
+ "resourceId": "/SUBSCRIPTIONS/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/RESOURCEGROUPS/MYRG/PROVIDERS/MICROSOFT.EVENTGRID/NAMESPACE/MYNS",
+ "location": "eastus",
+ "operationName": "Microsoft.EventGrid/topicspaces/publish",
+ "category": "FailedMqttPublishedMessages",
+ "resultType": "Failed",
+ "resultSignature": "AuthorizationError",
+ "resultDescription": "Topic name 'testtopic/small4.0' does not match any topicspaces",
+ "identity": { "clientIdentity": "client1" },
+ "properties": {
+ "sessionName": "client1",
+ "protocol": "MQTT5",
+ "traceId": "ako65yewjjhzbdp3lxny7557fu",
+ "qos": 1,
+ "topicName": "testtopic/small4.0",
+ "operationCount": 1
+ }
+ }
+]
+```
+
+Here are the columns of the `EventGridNamespaceFailedMqttPublishedMessages` Log Analytics table and their descriptions.
+
+| Column name | Type | Description |
+| -- | - | -- |
+| `sessionName` | `String` | Name of the session provided by the client in the MQTT CONNECT packet's clientId field. |
+| `protocol` | `String` | Protocol used by the client to publish. Possible values are: MQTT3, MQTT3-WS, MQTT5, MQTT5-WS. |
+| `traceId` | `String` | Generated trace ID. |
+| `qos` | `Int` | Quality of service used by the client to publish. Possible values are: 0 or 1. |
+| `topicName` | `String` | MQTT topic name used by the client to publish. |
+| `operationCount` | `Int` | Count of MQTT messages that failed to be published to or delivered by an Event Grid namespace with the same `resultDescription`. |
+
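+As a sketch, assuming the logs are routed to a Log Analytics workspace and the Azure CLI `log-analytics` extension is installed, you can query this table from the command line:
+
+```bash
+# Fetch the ten most recent failed-publish log entries (workspace GUID is a placeholder).
+az monitor log-analytics query \
+  --workspace <workspace-guid> \
+  --analytics-query "EventGridNamespaceFailedMqttPublishedMessages | take 10"
+```
+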
+### Failed MQTT subscription operations
+This log includes an entry for every failed MQTT subscribe or unsubscribe operation by an MQTT client. A log entry is added for each topic filter within the same subscribe or unsubscribe packet that has the same error. This log can be used to diagnose subscription issues and message loss.
+
+```json
+[
+ {
+ "time": "2023-11-07T01:22:39.0339970Z",
+ "resourceId": "/SUBSCRIPTIONS/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/RESOURCEGROUPS/MYRG/PROVIDERS/MICROSOFT.EVENTGRID/NAMESPACE/MYNS",
+ "location": "eastus",
+ "operationName": "Microsoft.EventGrid/topicspaces/subscribe",
+ "category": "FailedMqttSubscriptionOperations",
+ "resultType": "Failed",
+ "resultSignature": "AuthorizationError",
+ "resultDescription": "Topic filter 'testtopic/#' does not match any topicspaces",
+ "identity": { "clientIdentity": "client1" },
+ "properties": {
+ "sessionName": "client1",
+ "protocol": "MQTT5",
+ "traceId": "gnz3cgqpozg4tbm5anvsvopafi",
+ "topicFilters": ["testtopic/#"]
+ }
+ }
+]
+```
+
+Here are the columns of the `EventGridNamespaceFailedMqttSubscriptions` Log Analytics table and their descriptions.
+
+| Column name | Type | Description |
+| -- | - | -- |
+| `sessionName` | `String` | Name of the session provided by the client in the MQTT CONNECT packet's clientId field. |
+| `protocol` | `String` | Protocol used by the client to subscribe. Possible values are: MQTT3, MQTT3-WS, MQTT5, MQTT5-WS. |
+| `traceId` | `String` | Generated trace ID. |
+| `topicFilters` | Array of strings | List of topic filters within the same packet that have the same error. |
++ ## Next steps See the following articles: - [Monitor pull delivery reference](monitor-pull-reference.md).-- [Monitor push delivery reference](monitor-push-reference.md).
+- [Monitor push delivery reference](monitor-push-reference.md).
event-grid Receive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/receive-events.md
Title: Receive events from Azure Event Grid to an HTTP endpoint
description: Describes how to validate an HTTP endpoint, then receive and deserialize Events from Azure Event Grid. Last updated 01/10/2024
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/resize-images-on-storage-blob-upload-event.md
Title: 'Tutorial: Use Azure Event Grid to automate resizing uploaded images'
description: 'In this tutorial, you learn how to integrate Azure Blob Storage and Azure Functions via Azure Event Grid. When a blob is uploaded to a container, an event is triggered. The event is delivered to an Azure function by Azure Event Grid.' Last updated 05/16/2023
+ms.devlang: csharp
+# ms.devlang: csharp, javascript
event-grid Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/sdk-overview.md
Title: Azure Event Grid SDKs
description: Describes the SDKs for Azure Event Grid. These SDKs provide management, publishing and consumption. Last updated 07/06/2023
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
# Event Grid SDKs for management and publishing
event-hubs Authenticate Shared Access Signature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-shared-access-signature.md
Title: Authenticate access to Azure Event Hubs with shared access signatures
description: This article shows you how to authenticate access to Event Hubs resources using shared access signatures. Last updated 03/13/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, php
event-hubs Event Hubs Availability And Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-availability-and-consistency.md
Title: Availability and consistency - Azure Event Hubs | Microsoft Docs
description: How to provide the maximum amount of availability and consistency with Azure Event Hubs using partitions. Last updated 03/13/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
event-hubs Event Hubs Exchange Events Different Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-exchange-events-different-protocols.md
Title: Azure Event Hubs - Exchange events using different protocols
description: This article shows how consumers and producers that use different protocols (AMQP, Apache Kafka, and HTTPS) can exchange events when using Azure Event Hubs. Last updated 11/28/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java
event-hubs Event Hubs Kafka Spark Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-spark-tutorial.md
Title: Connect with your Apache Spark app - Azure Event Hubs | Microsoft Docs
description: This article provides information on how to use Apache Spark with Azure Event Hubs for Kafka. Last updated 03/09/2023
+ms.devlang: spark-scala
# Connect your Apache Spark application with Azure Event Hubs
event-hubs Schema Registry Json Schema Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-json-schema-kafka.md
Title: Use JSON Schema with Apache Kafka applications
description: This article provides information on how to use JSON Schema in Schema Registry with Apache Kafka applications. Last updated 04/26/2023
+ms.devlang: spark-scala
healthcare-apis Dicom Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-data-lake.md
AHDS/{workspace-name}/dicom/{dicom-service-name}/{partition-name}
| `{dicom-service-name}` | The name of the DICOM service instance. | | `{partition-name}` | The name of the data partition. Note, if no partitions are specified, all DICOM data is stored in the default partition, named `Microsoft.Default`. |
+In addition to DICOM data, the service writes a small file to this location to enable [health checks](#health-check).
+ > [!NOTE] > During public preview, the DICOM service writes data to the storage container and reads the data, but user-added data isn't read and indexed by the DICOM service. Similarly, if DICOM data written by the DICOM service is modified or removed, it may result in errors when accessing data with the DICOMweb APIs. ## Permissions
-The DICOM service is granted access to the data like any other service or application accessing data in a storage account. Access can be revoked at any time without affecting your organization's ability to access the data. The DICOM service needs to be granted the [Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) role by using a system-assigned or user-assigned managed identity.
+The DICOM service is granted access to the data like any other service or application accessing data in a storage account. Access can be revoked at any time without affecting your organization's ability to access the data. The DICOM service needs the ability to read, write, and delete files in the provided file system. To provide it, grant the [Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) role to the system-assigned or user-assigned managed identity attached to the DICOM service.
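+
+As a minimal Azure CLI sketch, the role can be assigned at the storage account scope; the identity object ID and resource ID are placeholders:
+
+```bash
+# Grant the DICOM service's managed identity data access on the storage account.
+az role assignment create \
+  --assignee-object-id <managed-identity-object-id> \
+  --assignee-principal-type ServicePrincipal \
+  --role "Storage Blob Data Contributor" \
+  --scope <storage-account-resource-id>
+```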
## Access tiers
You can manage costs for imaging data stored by the DICOM service by using Azure
To learn more about access tiers, including cost tradeoffs and best practices, see [Azure Storage access tiers](/azure/storage/blobs/access-tiers-overview)
+## Health check
+
+The DICOM service writes a small file to the data lake every 30 seconds, following the [Data Contract](#data-contracts), to verify that it still has access. Changing any files stored under the `healthCheck` subdirectory might result in an incorrect health check status.
+If there's an access issue, [Azure Resource Health](../../service-health/overview.md) displays the status and details, and specifies whether any action is required to restore access, for example, reinstating a role on the DICOM service's identity.
+ ## Limitations During public preview, the DICOM service with data lake storage has these limitations:
iot-develop Quickstart Devkit Stm B L4s5i Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md
description: Use Azure RTOS embedded software to connect an STMicroelectronics B
+ms.devlang: csharp
Last updated 06/27/2023
iot-dps Quick Enroll Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-tpm.md
Last updated 07/28/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, nodejs
zone_pivot_groups: iot-dps-set2
iot-dps Quick Enroll Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-x509.md
Last updated 07/22/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, nodejs
zone_pivot_groups: iot-dps-set2
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
The systems listed in the following table are considered compatible with Azure I
<sup>2</sup> Installation packages are made available on the [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). See the installation steps in [Offline or specific version installation](how-to-provision-single-device-linux-symmetric.md#offline-or-specific-version-installation-optional).
+> [!NOTE]
+> CentOS Linux 7 reaches [end of life (EOL) on June 30, 2024](https://www.redhat.com/topics/linux/centos-linux-eol). In July 2024, CentOS 7 will be removed from the IoT Edge *Tier 2* supported platforms. If you take no action, CentOS 7-based IoT Edge devices continue to work, but ongoing security patches and bug fixes in the host packages for CentOS 7 won't be available after June 30, 2024. To continue to receive support and security updates, we recommend that you update your host OS to a *Tier 1* supported platform. To learn more, see the [What to know about CentOS Linux EOL](https://www.redhat.com/topics/linux/centos-linux-eol) article.
+ ## Releases The following table lists the currently supported releases. IoT Edge release assets and release notes are available on the [azure-iotedge releases](https://github.com/Azure/azure-iotedge/releases) page.
iot-operations Howto Configure Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-kafka.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 01/16/2024 #CustomerIntent: As an operator, I want to understand how to configure Azure IoT MQ to send and receive messages between Azure IoT MQ and Kafka.
-# Send and receive messages between Azure IoT MQ and Kafka
+# Send and receive messages between Azure IoT MQ and Event Hubs or Kafka
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
The `tls` field enables TLS encryption for the connection and optionally specifi
| Field | Description | Required | | -- | -- | -- | | tlsEnabled | A boolean value that indicates whether TLS encryption is enabled or not. It must be set to true for Event Hubs communication. | Yes |
-| caConfigMap | The name of the config map that contains the CA certificate for verifying the server's identity. This field isn't required for Event Hubs communication, as Event Hubs uses well-known CAs that are trusted by default. However, you can use this field if you want to use a custom CA certificate. | No |
+| trustedCaCertificateConfigMap | The name of the config map that contains the CA certificate for verifying the server's identity. This field isn't required for Event Hubs communication, as Event Hubs uses well-known CAs that are trusted by default. However, you can use this field if you want to use a custom CA certificate. | No |
-When specifying a trusted CA is required, create a ConfigMap containing the public potion of the CA in PEM format, and specify the name in the `caConfigMap` property.
+When a trusted CA must be specified, create a ConfigMap that contains the public portion of the CA in PEM format, and specify the name in the `trustedCaCertificateConfigMap` property.
```bash kubectl create configmap ca-pem --from-file path/to/ca.pem
The authentication field supports different types of authentication methods, suc
| Field | Description | Required | | -- | -- | -- |
-| sasl | The configuration for SASL authentication. Specify the `saslType`, which can be *plain*, *scram-sha-256*, or *scram-sha-512*, and the `secretName` to reference the Kubernetes secret containing the username and password. | Yes, if using SASL authentication |
-| x509 | The configuration for X509 authentication. Specify the `secretName` field. The `secretName` field is the name of the secret that contains the client certificate and the client key in PEM format, stored as a TLS secret. | Yes, if using X509 authentication |
+| sasl | The configuration for SASL authentication. Specify the `saslType`, which can be *plain*, *scramSha256*, or *scramSha512*, and use `token` to reference the password through either a Kubernetes secret (`secretName`) or an Azure Key Vault secret (`keyVault`). | Yes, if using SASL authentication |
| systemAssignedManagedIdentity | The configuration for managed identity authentication. Specify the audience for the token request, which must match the Event Hubs namespace (`https://<NAMESPACE>.servicebus.windows.net`) [because the connector is a Kafka client](/azure/event-hubs/authenticate-application). A system-assigned managed identity is automatically created and assigned to the connector when it's enabled. | Yes, if using managed identity authentication |
+| x509 | The configuration for X509 authentication. Specify the `secretName` or `keyVault` field. The `secretName` field is the name of the secret that contains the client certificate and the client key in PEM format, stored as a TLS secret. | Yes, if using X509 authentication |
-You can use Azure Key Vault to manage secrets for Azure IoT MQ instead of Kubernetes secrets. To learn more, see [Manage secrets using Azure Key Vault or Kubernetes secrets](../manage-mqtt-connectivity/howto-manage-secrets.md).
+To learn how to use Azure Key Vault and the `keyVault` to manage secrets for Azure IoT MQ instead of Kubernetes secrets, see [Manage secrets using Azure Key Vault or Kubernetes secrets](../manage-mqtt-connectivity/howto-manage-secrets.md).
-For Event Hubs, use plain SASL and `$ConnectionString` as the username and the full connection string as the password.
+##### Authenticate to Event Hubs
+
+To connect to Event Hubs using a connection string and Kubernetes secret, use `plain` SASL type and `$ConnectionString` as the username and the full connection string as the password. First create the Kubernetes secret:
```bash
-kubectl create secret generic cs-secret \
+kubectl create secret generic cs-secret -n azure-iot-operations \
--from-literal=username='$ConnectionString' \ --from-literal=password='Endpoint=sb://<NAMESPACE>.servicebus.windows.net/;SharedAccessKeyName=<KEY_NAME>;SharedAccessKey=<KEY>' ```
+Then, reference the secret in the configuration:
+
+```yaml
+authentication:
+ enabled: true
+ authType:
+ sasl:
+ saslType: plain
+ token:
+ secretName: cs-secret
+```
+
+To use Azure Key Vault instead of Kubernetes secrets, create an Azure Key Vault secret with the connection string `Endpoint=sb://..`, reference it with `vaultSecret`, and specify the username as `"$ConnectionString"` in the configuration.
+
+```yaml
+authentication:
+ enabled: true
+ authType:
+ sasl:
+ saslType: plain
+ token:
+ keyVault:
+ username: "$ConnectionString"
+ vault:
+ name: my-key-vault
+ directoryId: <AKV directory ID>
+ credentials:
+ servicePrincipalLocalSecretName: aio-akv-sp
+ vaultSecret:
+ name: my-cs # Endpoint=sb://..
+ # version: 939ecc2...
+```
+
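+As a sketch, the Key Vault secret referenced above can be created with the Azure CLI. The vault and secret names match the preceding snippet and are placeholders:
+
+```bash
+# Store the full Event Hubs connection string as the Key Vault secret 'my-cs'.
+az keyvault secret set \
+  --vault-name my-key-vault \
+  --name my-cs \
+  --value 'Endpoint=sb://<NAMESPACE>.servicebus.windows.net/;SharedAccessKeyName=<KEY_NAME>;SharedAccessKey=<KEY>'
+```
+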
+To use managed identity, specify it as the only method under authentication. You also need to assign a role to the managed identity that grants permission to send and receive messages from Event Hubs, such as Azure Event Hubs Data Owner or Azure Event Hubs Data Sender/Receiver. To learn more, see [Authenticate an application with Microsoft Entra ID to access Event Hubs resources](/azure/event-hubs/authenticate-application#built-in-roles-for-azure-event-hubs).
+
+```yaml
+authentication:
+ enabled: true
+ authType:
+ systemAssignedManagedIdentity:
+ audience: https://<NAMESPACE>.servicebus.windows.net
+```
+
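+As a hedged sketch, the role assignment can be done with the Azure CLI; the identity object ID and namespace resource ID are placeholders:
+
+```bash
+# Grant the connector's system-assigned managed identity access to Event Hubs.
+az role assignment create \
+  --assignee-object-id <managed-identity-object-id> \
+  --assignee-principal-type ServicePrincipal \
+  --role "Azure Event Hubs Data Owner" \
+  --scope <event-hubs-namespace-resource-id>
+```
+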
+##### X.509
+ For X.509, use Kubernetes TLS secret containing the public certificate and private key. ```bash
-kubectl create secret tls my-tls-secret \
+kubectl create secret tls my-tls-secret -n azure-iot-operations \
--cert=path/to/cert/file \ --key=path/to/key/file ```
-To use managed identity, specify it as the only method under authentication. You also need to assign a role to the managed identity that grants permission to send and receive messages from Event Hubs, such as Azure Event Hubs Data Owner or Azure Event Hubs Data Sender/Receiver. To learn more, see [Authenticate an application with Microsoft Entra ID to access Event Hubs resources](/azure/event-hubs/authenticate-application#built-in-roles-for-azure-event-hubs).
+Then specify the `secretName` in configuration.
```yaml authentication: enabled: true authType:
- systemAssignedManagedIdentity:
- audience: https://<NAMESPACE>.servicebus.windows.net
+ x509:
+ secretName: my-tls-secret
+```
+
+To use Azure Key Vault instead, make sure the [certificate and private key are properly imported](../../key-vault/certificates/tutorial-import-certificate.md) and then specify the reference with `vaultCert`.
+
+```yaml
+authentication:
+ enabled: true
+ authType:
+ x509:
+ keyVault:
+ vault:
+ name: my-key-vault
+ directoryId: <AKV directory ID>
+ credentials:
+ servicePrincipalLocalSecretName: aio-akv-sp
+ vaultCert:
+ name: my-cert
+ # version: 939ecc2...
+ ## If presenting full chain also
+ # vaultCaChainSecret:
+ # name: my-chain
+```
+
+Or, if presenting the full chain is required, upload the full chain cert and key to AKV as a PFX file and use the `vaultCaChainSecret` field instead.
+
+```yaml
+# ...
+keyVault:
+ vaultCaChainSecret:
+ name: my-cert
+ # version: 939ecc2...
``` ### Manage local broker connection
iot-operations Howto Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-connectivity/howto-manage-secrets.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 01/16/2024 #CustomerIntent: As an operator, I want to configure IoT MQ to use Azure Key Vault or Kubernetes secrets so that I can securely manage secrets.
The `keyVault` field is available wherever Kubernetes secrets (`secretName`) are
| vaultCert | Yes, when using Key Vault certificates | Specifies the certificate in the Azure Key Vault. | | vaultCert.name | Yes | Specifies the name of the certificate secret. | | vaultCert.version | No | Specifies the version of the certificate secret. |
-| vaultCaChainCert | Yes, when using certificate chain | Specifies the certificate chain in the Azure Key Vault. |
-| vaultCaChainCert.name | Yes | Specifies the name of the certificate chain. |
-| vaultCaChainCert.version | No | Specifies the version of the certificate chain. |
+| vaultCaChainSecret | Yes, when using certificate chain | Specifies the certificate chain in the Azure Key Vault. |
+| vaultCaChainSecret.name | Yes | Specifies the name of the certificate chain. |
+| vaultCaChainSecret.version | No | Specifies the version of the certificate chain. |
+| username | No | Used only for the Event Hubs Kafka connector. See [Send and receive messages between Azure IoT MQ and Event Hubs or Kafka](../connect-to-cloud/howto-configure-kafka.md). |
The type of secret you're using determines which of the following fields you can use: - `vaultSecret`: Use this field when you're using a regular secret. For example, you can use this field for configuring a *BrokerAuthentication* resource with the `usernamePassword` field. - `vaultCert`: Use this field when you're using the certificate type secret with client certificate and key. For example, you can use this field for enabling TLS on a *BrokerListener*.-- `vaultCaChainCert`: Use this field when you're using a regular Key Vault secret that contains the CA chain of the client certificate. This field is for when you need IoT MQ to present the CA chain of the client certificate to a remote connection. For example, you can use this field for configuring a *MqttBridgeConnector* resource with the `remoteBrokerConnection` field.
+- `vaultCaChainSecret`: Use this field when you need to present a full certificate chain, with all extra intermediate or root certificates, to the remote server. For example, you can use this field for configuring a *MqttBridgeConnector* resource with the `remoteBrokerConnection` field. To use this field, import the X.509 certificates, without private keys, in PEM format to Key Vault as a multi-line regular secret (not a certificate-type secret). Use this field in addition to `vaultCert`, which holds the client certificate and private key.
## Examples
spec:
servicePrincipalLocalSecretName: aio-akv-sp vaultCert: name: my-server-certificate
- version: latest
+ # version: 939ecc2...
``` This next example shows how to use Azure Key Vault for the `usernamePassword` field in a BrokerAuthentication resource:
spec:
servicePrincipalLocalSecretName: aio-akv-sp vaultSecret: name: my-username-password-db
- version: latest
+ # version: 939ecc2...
``` This example shows how to use Azure Key Vault for MQTT bridge remote broker credentials:
spec:
directoryId: <AKV directory ID> credentials: servicePrincipalLocalSecretName: aio-akv-sp
- vaultCaChainCert:
+ vaultCaChainSecret:
name: my-remote-broker-certificate
- version: latest
+ # version: 939ecc2...
``` ## Related content
key-vault Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/backup.md
If you want protection against accidental or malicious deletion of your secrets,
> [!IMPORTANT] > Key Vault does not support the ability to backup more than 500 past versions of a key, secret, or certificate object. Attempting to backup a key, secret, or certificate object may result in an error. It is not possible to delete previous versions of a key, secret, or certificate.
-Key Vault doesn't currently provide a way to back up an entire key vault in a single operation. Any attempt to use the commands listed in this document to do an automated backup of a key vault may result in errors and won't be supported by Microsoft or the Azure Key Vault team.
+Key Vault doesn't currently provide a way to back up an entire key vault in a single operation; keys, secrets, and certificates must be backed up individually.
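+
+For example, here's a minimal Azure CLI sketch of backing up one object of each type; the vault and object names are placeholders, and each command writes an encrypted backup blob to a local file:
+
+```bash
+# Back up a key, a secret, and a certificate individually.
+az keyvault key backup --vault-name <vault-name> --name <key-name> --file key.backup
+az keyvault secret backup --vault-name <vault-name> --name <secret-name> --file secret.backup
+az keyvault certificate backup --vault-name <vault-name> --name <cert-name> --file cert.backup
+```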
-Also consider the following consequences:
+Also consider the following issues:
* Backing up secrets that have multiple versions might cause time-out errors. * A backup creates a point-in-time snapshot. Secrets might renew during a backup, causing a mismatch of encryption keys.
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
Title: Compare load test runs to find regressions
+ Title: Compare load test runs
description: 'Learn how you can visually compare multiple test runs with Azure Load Testing to identify and analyze performance regressions.' Previously updated : 01/18/2023 Last updated : 01/11/2024 -
-# Identify performance regressions by comparing test runs in Azure Load Testing
+# Compare load test runs in Azure Load Testing
-In this article, you'll learn how you can identify performance regressions by comparing test runs in the Azure Load Testing dashboard. The dashboard overlays the client-side and server-side metric graphs for each run, which allows you to quickly analyze performance issues. You will also learn how to view and analyze the trends in client-side performance metrics.
+In this article, you learn how you can compare test runs in Azure Load Testing. You can view trends across the last 10 test runs, or you can select and compare up to five individual test runs. Optionally, you can mark a test run as a baseline to compare against.
-To identify performance regressions, you can quickly glance over the client-side metrics from your recent test runs to understand if your performance is trending favorably or unfavorably. Optionally, you can compare the recent metrics with a baseline to understand if the performance is meeting your expectations. To dive deeper into a performance regression, you can compare upto five test runs.
+To identify regressions over time, you can use the client-side metrics trends of the last 10 test runs, such as response time and error rate. In combination with [CI/CD integration](./quickstart-add-load-test-cicd.md), the trends data might help you identify which application build introduced a performance issue.
-You can compare load test runs for the following scenarios:
+When you want to compare the client-side metrics trends against a specific reference test run, you can mark that test run as your baseline. For example, before you implement performance optimizations in your application, you might first create a baseline load test run, and then validate the effects of your optimizations against your baseline.
-- Identify performance regressions between application builds or configurations. You could run a load test at each development sprint to ensure that the previous sprint didn't introduce performance issues.-- Identify which application component is responsible for a performance problem (root cause analysis). For example, an application redesign might result in slower application response times. Comparing load test runs might reveal that the root cause was a lack of database resources.
+To compare both client-side and server-side metrics, you can select up to five test runs, and compare them in the Azure Load Testing dashboard. The dashboard overlays the client-side and server-side metric graphs for each test run. By also comparing server-side application metrics in the dashboard, you can identify which application component was the root cause for a sudden performance degradation.
## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure Load Testing resource with a test plan that has multiple test runs. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- An Azure load testing resource, which has a test with multiple test runs. To create a load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
-## Select test runs
+## Compare multiple load test runs
-To compare test runs in Azure Load Testing, you'll first have to select up to five runs within a load test. You can only compare runs that belong to the same load test.
+To compare test runs in Azure Load Testing, you first have to select up to five runs within a load test. You can only compare runs that belong to the same load test. After you select the test runs you want to compare, you can visually compare the client-side and server-side metrics for each test run in the load test dashboard.
A test run needs to be in the *Done*, *Stopped*, or *Failed* state to compare it.
Use the following steps to select the test runs:
1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription.
-1. Go to your Azure Load Testing resource and then, on the left pane, select **Tests**.
-
- :::image type="content" source="media/how-to-compare-multiple-test-runs/choose-test-from-list.png" alt-text="Screenshot that shows the list of tests for a Load Testing resource.":::
+1. Go to your load testing resource, and then select **Tests** in the left pane.
- You can also use the filters to find your load test.
+ > [!TIP]
+ > You can also use the filters to find your load test.
1. Select the test whose runs you want to compare by selecting its name.
-1. Select two or more test runs by selecting the corresponding checkboxes in the list.
-
- :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-test-results-from-list.png" alt-text="Screenshot that shows a list of test runs and the 'Compare' button.":::
+1. Select two or more test runs, and then select **Compare** to compare test runs.
You can choose a maximum of five test runs to compare.
-## Compare multiple test runs
-
-After you've selected the test runs you want to compare, you can visually compare the client-side and server-side metrics for each test run in the load test dashboard.
-
-1. Select the **Compare** button to open the load test dashboard.
-
- Each test run is shown as an overlay in the different graphs.
-
- :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-screen.png" alt-text="Screenshot of the 'Compare' page, displaying a comparison of two test runs.":::
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-test-results-from-list.png" alt-text="Screenshot that shows a list of test runs and the 'Compare' button in the Azure portal." lightbox="media/how-to-compare-multiple-test-runs/compare-test-results-from-list.png":::
-1. Optionally, use the filters to customize the graphs.
+1. On the dashboard, each test run is shown as an overlay in the different graphs.
- :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-client-side-filters.png" alt-text="Screenshot of the client-side filter controls on the load test dashboard.":::
+ The dashboard enables you to compare both client-side metrics and server-side metrics. You can view the color-coding for each test run in the **Test run details** section.
- > [!TIP]
+ > [!NOTE]
> The time filter is based on the duration of the tests. A value of zero indicates the start of the test, and the maximum value marks the duration of the longest test run.
-## View metrics trends across test runs
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/load-test-dashboard-compare-runs.png" alt-text="Screenshot of the load testing dashboard in the Azure portal, comparing two test runs." lightbox="media/how-to-compare-multiple-test-runs/load-test-dashboard-compare-runs.png":::
+
+## View metrics trends across load test runs
-To view metrics trends across test runs in Azure Load Testing, you'll need to have at least two test runs in the *Done*, or *Stopped* state. You can only view trends from runs that belong to the same load test.
+To view metrics trends across test runs in Azure Load Testing, you need to have at least two test runs in the *Done*, or *Stopped* state. You can only view trends from runs that belong to the same load test.
Use the following steps to view metrics trends across test runs:
Use the following steps to view metrics trends across test runs:
1. Go to your Azure Load Testing resource and then, on the left pane, select **Tests**.
- :::image type="content" source="media/how-to-compare-multiple-test-runs/choose-test-from-list.png" alt-text="Screenshot that shows the list of tests for a Load Testing resource." lightbox="media/how-to-compare-multiple-test-runs/choose-test-from-list.png":::
-
- You can also use the filters to find your load test.
1. Select the test for which you want to view metrics trends by selecting its name.
-1. On the **Test details** pane, select **Trends**
+1. Select the **Trends** tab to view the metrics trends for the load test.
- The graphs show the trends for total requests, response time, error percentage, and throughput for the ten most recent test runs.
+ The graphs show the trends for total requests, response time, error percentage, and throughput for the 10 most recent test runs.
:::image type="content" source="media/how-to-compare-multiple-test-runs/choose-trends-from-test-details.png" alt-text="Screenshot that shows the details of a Test in a Load Testing resource." lightbox="media/how-to-compare-multiple-test-runs/choose-trends-from-test-details.png":::
Use the following steps to view metrics trends across test runs:
You can select a test run that you want to analyze and open the results dashboard for that test run.
-## Use a baseline test run
+## Compare load test runs against a baseline
-You can mark a test run as baseline to compare the client-side metrics of the recent test runs with those of the baseline.
+You can mark a test run as a baseline to compare the client-side metrics of the recent test runs with the metrics of the baseline.
Use the following steps to mark a test run as baseline:
-1. On the **Trends** pane, select **Mark baseline**
+1. On the **Trends** tab, select **Mark baseline**.
:::image type="content" source="media/how-to-compare-multiple-test-runs/select-mark-baseline.png" alt-text="Screenshot that shows Mark baseline button in the Trends pane." lightbox="media/how-to-compare-multiple-test-runs/select-mark-baseline.png":::
-1. In the right context pane, select the checkbox for the test run that you want to mark as baseline, and then select **Mark baseline**
+1. From the list of test runs, select the checkbox for the test run that you want to mark as baseline, and then select **Mark baseline**.
:::image type="content" source="media/how-to-compare-multiple-test-runs/mark-test-run-as-baseline.png" alt-text="Screenshot that shows the context pane to mark a test run as baseline." lightbox="media/how-to-compare-multiple-test-runs/mark-test-run-as-baseline.png":::
-
- You can also use the filters to find your load test run.
- The baseline value is shown as a horizontal line in the charts. In the table view, an additional row with the baseline test run details is shown. For the recent test runs, an arrow mark next to the metrics
- value indicates whether the metric is trending favorably or unfavorably as compared to the baseline metric value.
+1. On the **Trends** tab, you can now view the baseline test run in the table and charts.
+
+ The baseline value is shown as a horizontal line in the charts. In the table view, an extra row with the baseline test run details is shown.
+
+ In the table, an arrow icon indicates whether the metric is trending favorably or unfavorably as compared to the baseline metric value.
:::image type="content" source="media/how-to-compare-multiple-test-runs/trends-view-with-baseline.png" alt-text="Screenshot that shows trends in metrics when a baseline is selected." lightbox="media/how-to-compare-multiple-test-runs/trends-view-with-baseline.png":::
-## Next steps
+## Related content
- Learn more about [exporting the load test results for reporting](./how-to-export-test-results.md). - Learn more about [diagnosing failing load tests](./how-to-diagnose-failing-load-test.md).
load-testing How To Monitor Server Side Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-monitor-server-side-metrics.md
Title: Monitor server-side application metrics for load testing
+ Title: Monitor server-side metrics
-description: Learn how to configure a load test to monitor server-side application metrics by using Azure Load Testing.
+description: Learn how to capture and monitor server-side application metrics when running a load test with Azure Load Testing. Add Azure app components and resource metrics to your load test configuration.
Previously updated : 01/18/2023 Last updated : 01/16/2024 # Monitor server-side application metrics by using Azure Load Testing
-
-You can monitor server-side application metrics for Azure-hosted applications when running a load test with Azure Load Testing. In this article, you'll learn how to configure app components and metrics for your load test.
-To capture metrics during your load test, you'll first [select the Azure components](#select-azure-application-components) that make up your application. Optionally, you can then [configure the list of server-side metrics](#select-server-side-resource-metrics) for each Azure component.
+In this article, you learn how to capture and monitor server-side application metrics when running a load test with Azure Load Testing. When you run a load test for an Azure-hosted application, Azure Load Testing collects resource metrics for your application components and presents them in the load testing dashboard.
+
+To capture metrics during your load test, you update the load test configuration and [add the Azure app components](#add-azure-app-components-to-a-load-test) that make up your application. The service automatically selects the most relevant resource metrics for these app components, depending on the type of component. Optionally, you can [update the list of server-side metrics](#configure-resource-metrics-for-a-load-test) for each Azure component.
Azure Load Testing integrates with Azure Monitor to capture server-side resource metrics for Azure-hosted applications. Read more about which [Azure resource types that Azure Load Testing supports](./resource-supported-azure-resource-types.md). ## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure Load Testing resource with at least one completed test run. If you need to create an Azure Load Testing resource, see [Tutorial: Run a load test to identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md).
+- An Azure load testing resource. To create a load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
-## Select Azure application components
+## Add Azure app components to a load test
-To monitor resource metrics for an Azure-hosted application, you need to specify the list of Azure application components in your load test. Azure Load Testing automatically captures a set of relevant resource metrics for each selected component. When your load test finishes, you can view the server-side metrics in the dashboard.
+To monitor resource metrics for an Azure-hosted application, you need to specify the list of Azure application components in your load test configuration. Azure Load Testing automatically captures a set of relevant resource metrics for each selected component. During the load test and after the test finishes, you can view the server-side metrics in the load testing dashboard.
For the list of Azure components that Azure Load Testing supports, see [Supported Azure resource types](./resource-supported-azure-resource-types.md). Use the following steps to configure the Azure components for your load test:
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
1. On the left pane, select **Tests**, and then select your load test from the list.
- :::image type="content" source="media/how-to-monitor-server-side-metrics/select-test.png" alt-text="Screenshot that shows a list of load tests to select from.":::
-
-1. On the test runs page, select **Configure**, and then select **App Components** to add or remove Azure resources to monitor during the load test.
+1. On the test details page, select **Configure**, and then select **App Components** to add or remove Azure resources to monitor during the load test.
- :::image type="content" source="media/how-to-monitor-server-side-metrics/configure-app-components.png" alt-text="Screenshot that shows the 'App Components' button for displaying app components to configure for a load test.":::
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/configure-app-components.png" alt-text="Screenshot that shows the 'App Components' button for displaying app components to configure for a load test." lightbox="media/how-to-monitor-server-side-metrics/configure-app-components.png":::
-1. Select or clear the checkboxes next to the Azure resources you want to add or remove, and then select **Apply**.
+1. On the **Configure App Components** page, select or clear the checkboxes for the Azure resources you want to add or remove, and then select **Apply**.
- :::image type="content" source="media/how-to-monitor-server-side-metrics/modify-app-components.png" alt-text="Screenshot that shows how to add or remove app components from a load test configuration.":::
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/modify-app-components.png" alt-text="Screenshot that shows how to add or remove app components from a load test configuration." lightbox="media/how-to-monitor-server-side-metrics/modify-app-components.png":::
- When you run the load test, Azure Load Testing will display the default resource metrics in the test run dashboard.
+ When you run the load test, Azure Load Testing displays the default resource metrics for the selected app components in the test run dashboard.
-You can change the list of resource metrics at any time. In the next section, you'll view and configure the list of resource metrics.
+You can change the list of resource metrics for each app component at any time.
-## Select server-side resource metrics
+## Configure resource metrics for a load test
-For each Azure application component, you can select the resource metrics to monitor during your load test.
+When you add app components to your load test configuration, Azure Load Testing adds the most relevant resource metrics for these components. You can add or remove resource metrics for each of the app components in your load test.
-Use the following steps to view and update the list of resource metrics:
+Use the following steps to view and update the list of resource metrics for a load test:
-1. On the test runs page, select **Configure**, and then select **Metrics** to select the specific resource metrics to capture during the load test.
+1. On the test details page, select **Configure**, and then select **Metrics** to select the specific resource metrics to capture during the load test.
- :::image type="content" source="media/how-to-monitor-server-side-metrics/configure-metrics.png" alt-text="Screenshot that shows the 'Metrics' button to configure metrics for a load test.":::
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/configure-metrics.png" alt-text="Screenshot that shows the 'Metrics' button to configure metrics for a load test." lightbox="media/how-to-monitor-server-side-metrics/configure-metrics.png":::
1. Update the list of metrics you want to capture, and then select **Apply**.
- :::image type="content" source="media/how-to-monitor-server-side-metrics/modify-metrics.png" alt-text="Screenshot that shows a list of resource metrics to configure for a load test.":::
-
- Alternatively, you can update the app components and metrics from the page that shows test result details.
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/modify-metrics.png" alt-text="Screenshot that shows a list of resource metrics to configure for a load test." lightbox="media/how-to-monitor-server-side-metrics/modify-metrics.png":::
1. Select **Run** to run the load test with the new configuration settings.
- :::image type="content" source="media/how-to-monitor-server-side-metrics/run-load-test.png" alt-text="Screenshot that shows the 'Run' button for running the load test from the test runs page.":::
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/run-load-test.png" alt-text="Screenshot that shows the 'Run' button for running the load test from the test details page." lightbox="media/how-to-monitor-server-side-metrics/run-load-test.png":::
Notice that the test result dashboard now shows the updated server-side metrics.
- :::image type="content" source="media/how-to-monitor-server-side-metrics/dashboard-updated-metrics.png" alt-text="Screenshot that shows the updated server-side metrics on the test result dashboard.":::
-
-When you update the configuration of a load test, all future test runs will use that configuration. On the other hand, if you update a test run, the new configuration will only apply to that test run.
-
-## Next steps
+ :::image type="content" source="media/how-to-monitor-server-side-metrics/dashboard-updated-metrics.png" alt-text="Screenshot that shows the updated server-side metrics on the test result dashboard." lightbox="media/how-to-monitor-server-side-metrics/dashboard-updated-metrics.png":::
-- Learn how you can [identify performance problems by comparing metrics across multiple test runs](./how-to-compare-multiple-test-runs.md).
+> [!NOTE]
+> When you update the configuration of a load test, all future test runs use the updated configuration. You can also update app components and metrics on the load testing dashboard. In that case, the configuration changes apply only to the current test run.
-- Learn how to [set up a high-scale load test](./how-to-high-scale-load.md).
+## Related content
-- Learn how to [configure automated performance testing](./quickstart-add-load-test-cicd.md).
+- [View metrics trends and compare load test results to identify performance regressions](./how-to-compare-multiple-test-runs.md).
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md
Title: Parameterize load tests with secrets and environment variables
+ Title: Use secrets & environment variables
description: 'Learn how to create configurable load tests by using secrets and environment variables as parameters in Azure Load Testing.' Previously updated : 01/15/2023 Last updated : 01/16/2024
-# Create configurable load tests with secrets and environment variables
+# Use secrets and environment variables in Azure Load Testing
-Learn how to change the behavior of a load test without having to edit the Apache JMeter script. With Azure Load Testing, you can use parameters to make a configurable test script. For example, turn the application endpoint into a parameter to reuse your test script across multiple environments.
+In this article, you learn how to pass secrets and environment variables as parameters to a load test in Azure Load Testing. You can use parameters to change the behavior of a load test without having to edit the Apache JMeter script. For example, to test a web application, specify the endpoint URL as a parameter so that you can reuse your test script across multiple environments. You can also use parameters to avoid hard-coding sensitive information in the JMeter test script.
The Azure Load Testing service supports two types of parameters:
The Azure Load Testing service supports two types of parameters:
- **Environment variables**: Contain non-sensitive information and are available as environment variables in the load test engine. For example, environment variables make the application endpoint URL configurable. For more information, see [Configure load tests with environment variables](#envvars).
+You can specify parameters in the load test configuration when you create a new test or update an existing test. If you run a load test in your CI/CD workflow, you define parameters in the load test configuration file or in the CI/CD workflow definition.
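For illustration, a load test configuration file can declare both kinds of parameters. This is a minimal sketch; the test name, endpoint, and Key Vault secret URI are placeholders, not values from this article:

```yaml
# Sketch of a load test configuration file with parameters; all values are placeholders.
version: v0.1
testName: sample-parameterized-test
testPlan: sampleTest.jmx
engineInstances: 1
env:
  # Nonsensitive values, exposed as environment variables on the test engines.
  - name: webapp
    value: myapp.azurewebsites.net
secrets:
  # Sensitive values, referenced by their Azure Key Vault secret URI.
  - name: appToken
    value: https://myvault.vault.azure.net/secrets/appToken
```

In the JMeter script, you'd then read the environment variable with `System.getenv` and the secret with the `GetSecret` custom function that Azure Load Testing provides.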
+ ## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If a parameter exists in both the YAML configuration file and the Azure Load Tes
The values of the parameters aren't stored when they're passed from the CI/CD workflow. You have to provide the parameter values again when you run the test from the Azure portal. You get a prompt to enter the missing values. For secret values, you enter the key vault secret URI. The values that you enter at the test run or rerun page are valid only for that test run. For making changes at the test level, go to **Configure Test** and enter your parameter values.
-## Next steps
+## Related content
+- Use secrets to [load test secured endpoints](./how-to-test-secured-endpoints.md).
- For more information about reading CSV files, see [Read CSV files in load tests](./how-to-read-csv-data.md).
-
-- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md).
-
-- To learn about performance test automation, see [Configure automated performance testing](./quickstart-add-load-test-cicd.md).
logic-apps Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap.md
Along with simple string and number inputs, the SAP connector accepts the follow
1. In the action named **\[BAPI] Call method in SAP**, disable the auto-commit feature. 1. Call the action named **\[BAPI] Commit transaction** instead.
+### IP-based connections to SAP Message Server (load-balanced configuration)
+
+If you specify an IP address to connect to an SAP Message Server (for example, a load balancer), the connection might still fail with an error message similar to **"hostname SAPDBSERVER01.example.com unknown"**. The message server instructs the SAP connector to use a hostname for the connection to the backend SAP Application Server, or the server behind the load balancer. If DNS can't resolve the hostname, the connection fails.
+
+To resolve this problem, use one of the following workarounds or solutions:
+
+- Make sure that the client making the connection, such as the computer with the on-premises data gateway for the SAP connector or the ISE connector host for the ISE-based SAP connector, can resolve the hostnames returned by the message server.
+
+- In the transaction named **RZ11**, change or add the SAP setting named **ms/lg_with_hostname=0**.
+
+#### Problem context or background
+
+SAP upgraded their .NET connector (NCo) to version 3.1, which changed the way that the connector requests connections to backend servers from message servers. The connector now uses a new API for application server resolution by the message server, unless you force the connector to use the previous API through the setting named **ms/lg_with_hostname=0**. For more information, see [SAP KB Article 3305039 - SMLG IP Address setting not considered during Logon Group login](https://me.sap.com/notes/3305039/E).
+ ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
--++ Previously updated : 10/19/2022 Last updated : 01/17/2024 monikerRange: 'azureml-api-2 || azureml-api-1' #Customer intent: As a data scientist, I want to know what a compute instance is and how to use it for Azure Machine Learning.
machine-learning Concept V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md
Previously updated : 11/04/2022 Last updated : 01/17/2024 #Customer intent: As a data scientist, I want to know whether to use v1 or v2 of CLI and SDK.
SDK v2 is useful in the following scenarios:
## Should I use v1 or v2?
-Here are some considerations to help you decide which version to use.
+Support for CLI and SDK v1 will end on September 30, 2025.
+
+We encourage you to migrate your code for both CLI and SDK v1 to CLI and SDK v2. For more information, see [Upgrade to v2](how-to-migrate-from-v1.md).
### CLI v2
-Azure Machine Learning CLI v1 has been deprecated. We recommend that you use CLI v2 if:
+Azure Machine Learning CLI v1 has been deprecated. Support for the v1 extension will end on September 30, 2025. You will be able to install and use the v1 extension until that date.
-* You were a CLI v1 user.
-* You want to use new features like reusable components and managed inferencing.
-* You don't want to use a Python SDK. CLI v2 allows you to use YAML with scripts in Python, R, Java, Julia, or C#.
-* You were a user of R SDK previously. Machine Learning won't support an SDK in `R`. However, CLI v2 has support for `R` scripts.
-* You want to use command line-based automation or deployments.
-* You don't need Spark Jobs. This feature is currently available in preview in CLI v2.
+We recommend that you transition to the `ml`, or v2, extension before September 30, 2025.
### SDK v2
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
- event-tier1-build-2022 - build-2023 - ignite-2023
+ms.devlang: azurecli
+# ms.devlang: azurecli, cliv2
# Create and run machine learning pipelines using components with the Azure Machine Learning CLI
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
Before following the steps in this article, make sure you have the following pre
+> [!NOTE]
+> If you're using a workspace with a user-assigned identity (UAI), make sure to add the Network Contributor role to your identity. For more information, see [User-assigned managed identity](how-to-identity-based-service-authentication.md).
+ ## Configure a managed virtual network to allow internet outbound > [!TIP]
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
In this article, learn how to deploy [MLflow](https://www.mlflow.org) models to
* Creates a batch job pipeline with a scoring script for you that can be used to process data using parallelization. > [!NOTE]
-> For more information about the supported input file types in model deployments with MLflow, view [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
+> For more information about the supported input file types and details about how the MLflow model works, see [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
## About this example
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
- > [!NOTE]
- > Batch deployments only support deploying MLflow models with a `pyfunc` flavor. To use a different flavor, see [Customizing MLflow models deployments with a scoring script](#customizing-mlflow-models-deployments-with-a-scoring-script)..
+ > [!IMPORTANT]
+ > Configure `timeout` in your deployment based on how long it takes for your model to run inference on a single batch. The bigger the batch size, the longer this value has to be. Remember that `mini_batch_size` indicates the number of files in a batch, not the number of samples. When you work with tabular data, each file can contain multiple rows, which increases the time it takes for the batch endpoint to process each file. Use higher values in those cases to avoid timeout errors.
7. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
The output looks as follows:
## Considerations when deploying to batch inference
-Azure Machine Learning supports no-code deployment for batch inference in [managed endpoints](concept-endpoints.md). This represents a convenient way to deploy models that require processing of big amounts of data in a batch-fashion.
+Azure Machine Learning supports deploying MLflow models to batch endpoints without indicating a scoring script. This represents a convenient way to deploy models that require processing of big amounts of data in a batch-fashion. Azure Machine Learning uses information in the MLflow model specification to orchestrate the inference process.
### How work is distributed on workers
-Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets (v1 API)](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
+Batch Endpoints distribute work at the file level, for both structured and unstructured data. As a consequence, only [URI file](reference-yaml-data.md) and [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. For tabular data, batch endpoints don't take into account the number of rows inside of each file when distributing the work.
> [!WARNING] > Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
-Batch deployments will call the `predict` function of the MLflow model once per file. For CSV files containing multiple rows, this may impose a memory pressure in the underlying compute. When sizing your compute, take into account not only the memory consumption of the data being read but also the memory footprint of the model itself. This is specially true for models that processes text, like transformer-based models where the memory consumption is not linear with the size of the input. If you encounter several out-of-memory exceptions, consider splitting the data in smaller files with less rows or implement batching at the row level inside of the model/scoring script.
+Batch deployments call the `predict` function of the MLflow model once per file. For CSV files that contain multiple rows, this can impose memory pressure on the underlying compute and can increase the time it takes for the model to score a single file (especially for expensive models like large language models). If you encounter several out-of-memory exceptions or timeout entries in logs, consider splitting the data into smaller files with fewer rows, or implement batching at the row level inside of the model or scoring script.
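As a rough sketch of how these settings fit together, the following model batch deployment YAML trades a small `mini_batch_size` against a generous per-batch `timeout`. The endpoint, model, and compute names are placeholders, not values from this article:

```yaml
# Sketch of an MLflow model batch deployment; all resource names are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/modelBatchDeployment.schema.json
name: my-mlflow-deployment
endpoint_name: my-batch-endpoint
type: model
model: azureml:my-mlflow-model@latest
compute: azureml:cpu-cluster
resources:
  instance_count: 2
settings:
  max_concurrency_per_instance: 2
  mini_batch_size: 10        # number of files per batch, not rows
  output_action: append_row
  output_file_name: predictions.csv
  retry_settings:
    max_retries: 3
    timeout: 300             # seconds per mini batch; increase for larger batches
```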
### File type support
You will typically select this workflow when:
> * Your model can't process each file at once because of memory constraints, and it needs to read it in chunks. > [!IMPORTANT]
-> If you choose to indicate an scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
+> If you choose to indicate a scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
### Steps
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
To create a job, a standalone Spark job can be defined as a YAML specification f
- `code` - defines the location of the folder that contains source code and scripts for this job. - `entry` - defines the entry point for the job. It should cover one of these properties: - `file` - defines the name of the Python script that serves as an entry point for the job.
- - `class_name` - defines the name of the class that serves as an entry point for the job.
- `py_files` - defines a list of `.zip`, `.egg`, or `.py` files, to be placed in the `PYTHONPATH`, for successful execution of the job. This property is optional. - `jars` - defines a list of `.jar` files to include on the Spark driver, and the executor `CLASSPATH`, for successful execution of the job. This property is optional. - `files` - defines a list of files that should be copied to the working directory of each executor, for successful job execution. This property is optional.
To create a job, a standalone Spark job can be defined as a YAML specification f
- If dynamic allocation of executors is disabled, define this property: - `spark.executor.instances` - the number of Spark executor instances. - `environment` - an [Azure Machine Learning environment](./reference-yaml-environment.md) to run the job.-- `args` - the command line arguments that should be passed to the job entry point Python script or class. See the YAML specification file provided here for an example.
+- `args` - the command line arguments that should be passed to the job entry point Python script. See the YAML specification file provided here for an example.
- `resources` - this property defines the resources to be used by an Azure Machine Learning serverless Spark compute. It uses the following properties: - `instance_type` - the compute instance type to be used for Spark pool. The following instance types are currently supported: - `standard_e4s_v3`
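Putting these properties together, a standalone Spark job specification might look like the following sketch. The script name, storage path, and pool sizes are illustrative assumptions, not values from this article:

```yaml
# Sketch of a standalone Spark job specification; values are illustrative.
$schema: https://azuremlschemas.azureedge.net/latest/sparkJob.schema.json
type: spark
code: ./src
entry:
  file: wrangle.py            # Python script that serves as the entry point
conf:
  spark.driver.cores: 1
  spark.driver.memory: 2g
  spark.executor.cores: 2
  spark.executor.memory: 2g
  spark.executor.instances: 2
inputs:
  input_data:
    type: uri_file
    path: abfss://<container>@<storage>.dfs.core.windows.net/data/sample.csv
    mode: direct
args: --input_data ${{inputs.input_data}}
resources:
  instance_type: standard_e4s_v3
  runtime_version: "3.3"
```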
To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
- `name` - the name of the Spark job. - `display_name` - the display name of the Spark job that should be displayed in the UI and elsewhere. - `code` - the location of the folder that contains source code and scripts for this job.-- `entry` - the entry point for the job. It should be a dictionary that defines a file or a class entry point.
+- `entry` - the entry point for the job. It should be a dictionary that defines the file entry point.
- `py_files` - a list of `.zip`, `.egg`, or `.py` files to be placed in the `PYTHONPATH`, for successful execution of the job. This parameter is optional. - `jars` - a list of `.jar` files to include in the Spark driver and executor `CLASSPATH`, for successful execution of the job. This parameter is optional. - `files` - a list of files that should be copied to the working directory of each executor, for successful execution of the job. This parameter is optional.
To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
- `executor_instances` - the number of Spark executor instances. - `environment` - the Azure Machine Learning environment that runs the job. This parameter should pass: - an object of `azure.ai.ml.entities.Environment`, or an Azure Machine Learning environment name (string).-- `args` - the command line arguments that should be passed to the job entry point Python script or class. See the sample code provided here for an example.
+- `args` - the command line arguments that should be passed to the job entry point Python script. See the sample code provided here for an example.
- `resources` - the resources to be used by an Azure Machine Learning serverless Spark compute. This parameter should pass a dictionary with: - `instance_type` - a key that defines the compute instance type to be used for the serverless Spark compute. The following instance types are currently supported: - `Standard_E4S_V3`
machine-learning How To Use Pipeline Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-component.md
In this article, you'll learn how to use pipeline component in Azure Machine Lea
- Understand how to use Azure Machine Learning pipeline with [CLI v2](how-to-create-component-pipelines-cli.md) and [SDK v2](how-to-create-component-pipeline-python.md). - Understand what is [component](concept-component.md) and how to use component in Azure Machine Learning pipeline.-- Understand what is a [Azure Machine Learning pipeline](concept-ml-pipelines.md)
+- Understand what an [Azure Machine Learning pipeline](concept-ml-pipelines.md) is
## The difference between pipeline job and pipeline component
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
To get the best experience and performance, try to keep your runtime up to date.
If you select **Use customized environment**, you first need to rebuild the environment by using the latest prompt flow image. Then update your runtime with the new custom environment.
+## Switch compute instance runtime to automatic runtime (preview)
+
+Automatic runtime (preview) has the following advantages over compute instance runtime:
+- It automatically manages the lifecycle of the runtime and the underlying compute. You don't need to manually create and manage them anymore.
+- You can easily customize packages by adding them to the `requirements.txt` file in the flow folder, instead of creating a custom environment.
+
+We recommend that you switch to automatic runtime (preview) if you're using compute instance runtime. To switch a compute instance runtime to an automatic runtime (preview), use the following steps:
+- Prepare your `requirements.txt` file in the flow folder. Make sure that you don't pin the versions of `promptflow` and `promptflow-tools` in `requirements.txt`, because we already include them in the runtime base image. Automatic runtime (preview) installs the packages in the `requirements.txt` file when it starts.
+- If you created a custom environment for your compute instance runtime, you can get the image from the environment detail page and specify it in the `flow.dag.yaml` file in the flow folder. To learn more, see [Change the base image for automatic runtime (preview)](#change-the-base-image-for-automatic-runtime-preview). Make sure you have `acr pull` permission for the image.
++
+- If you want to keep the automatic runtime (preview) as a long-running compute, like a compute instance, you can disable the idle shutdown toggle under the automatic runtime (preview) edit option.
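As a sketch of that base-image customization, the `environment` section of `flow.dag.yaml` might look like the following; the image name is a placeholder for the one you copied from the environment detail page:

```yaml
# Sketch: environment section of flow.dag.yaml; the image name is a placeholder.
environment:
  image: myregistry.azurecr.io/my-custom-promptflow-base:latest
  python_requirements_txt: requirements.txt
```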
+ ## Next steps - [Develop a standard flow](how-to-develop-a-standard-flow.md)
machine-learning How To Secure Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-secure-prompt-flow.md
Workspace managed virtual network is the recommended way to support network isol
az ml workspace provision-network --subscription <sub_id> -g <resource_group_name> -n <workspace_name> ```
-2. Add workspace MSI as `Storage File Data Privileged Contributor` and `Storage Table Data Contributor` to storage account linked with workspace.
+2. Add workspace MSI as `Storage File Data Privileged Contributor` to storage account linked with workspace.
2.1 Go to Azure portal, find the workspace.
Workspace managed virtual network is the recommended way to support network isol
:::image type="content" source="./media/how-to-secure-prompt-flow/managed-identity-workspace.png" alt-text="Diagram showing how to assign storage file data privileged contributor role to workspace managed identity." lightbox = "./media/how-to-secure-prompt-flow/managed-identity-workspace.png"::: > [!NOTE]
- > You need follow the same process to assign `Storage Table Data Contributor` role to workspace managed identity.
> This operation might take several minutes to take effect. 3. If you want to communicate with [private Azure Cognitive Services](../../ai-services/cognitive-services-virtual-networks.md), you need to add related user defined outbound rules to related resource. The Azure Machine Learning workspace creates private endpoint in the related resource with auto approve. If the status is stuck in pending, go to related resource to approve the private endpoint manually.
Workspace managed virtual network is the recommended way to support network isol
## Known limitations -- Workspace hub / lean workspace and AI studio don't support bring your own virtual network.
+- AI Studio doesn't support bring your own virtual network; it only supports workspace managed virtual network.
- Managed online endpoint only supports workspace with managed virtual network. If you want to use your own virtual network, you might need one workspace for prompt flow authoring with your virtual network and another workspace for prompt flow deployment using managed online endpoint with workspace managed virtual network. ## Next steps
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
Title: Troubleshoot guidance
-description: This article addresses frequent questions about tool usage.
+description: This article addresses frequent questions about prompt flow usage.
Last updated 09/05/2023
# Troubleshoot guidance
-This article addresses frequent questions about tool usage.
+This article addresses frequent questions about prompt flow usage.
## "Package tool isn't found" error occurs when you update the flow for a code-first experience
Prompt flow relies on a file share storage to store a snapshot of the flow. If t
:::image type="content" source="../media/faq/flow-missing.png" alt-text="Screenshot that shows a flow missing an authoring page." lightbox = "../media/faq/flow-missing.png":::
-Prompt flow relies on a file share to store a snapshot of a flow. This error means that prompt flow service can operate a prompt flow folder in the file share storage, but the prompt flow UI can't find the folder in the file share storage. There are some potential reasons:
+There are several possible reasons for this issue:
+- If you disabled public access to the storage account, you need access to the storage account: either add your IP address to the storage account firewall, or access the studio through a virtual network that has a private endpoint to the storage account.
-- Prompt flow relies on a datastore named `workspaceworkingdirectory` in your workspace, which uses `code-391ff5ac-6576-460f-ba4d-7e03433c68b6`. Make sure your datastore uses the same container. If your datastore is using a different file share name, you need to use a new workspace.
+ :::image type="content" source="../media/faq/storage-account-networking-firewall.png" alt-text="Screenshot that shows firewall setting on storage account." lightbox = "../media/faq/storage-account-networking-firewall.png":::
- ![Screenshot that shows the name of a file share in a datastore detail page.](../media/faq/file-share-name.png)
+- In some cases, the account key in the datastore is out of sync with the storage account. Try updating the account key on the datastore detail page to fix this issue.
-- If your file share storage is correctly named, try a different network environment, such as a home or company network. There's a rare case where a file share storage can't be accessed in some network environments even if it's enabled for public access.
+ :::image type="content" source="../media/faq/datastore-with-wrong-account-key.png" alt-text="Screenshot that shows datastore with wrong account key." lightbox = "../media/faq/datastore-with-wrong-account-key.png":::
+
+- If you're using AI Studio, the storage account must have CORS configured to allow AI Studio to access it; otherwise, you see the flow missing issue. To fix this issue, add the following CORS settings to the storage account.
+ - Go to the storage account page, select `Resource sharing (CORS)` under `settings`, and select the `File service` tab.
+ - Allowed origins: `https://mlworkspace.azure.ai,https://ml.azure.com,https://*.ml.azure.com,https://ai.azure.com,https://*.ai.azure.com,https://mlworkspacecanary.azure.ai,https://mlworkspace.azureml-test.net`
+ - Allowed methods: `DELETE, GET, HEAD, POST, OPTIONS, PUT`
+
+ :::image type="content" source="../media/faq/resource-sharing-setting-storage-account.png" alt-text="Screenshot that shows data store with wrong account key." lightbox = "../media/faq/resource-sharing-setting-storage-account.png":::
## Runtime-related issues
Follow these steps to find Python packages installed in runtime:
- Run the flow. Then you can find `packages.txt` in the flow folder. :::image type="content" source="../media/faq/list-packages.png" alt-text="Screenshot that shows finding Python packages installed in runtime." lightbox = "../media/faq/list-packages.png":::+
+## Flow run related issues
+
+### How to find the raw inputs and outputs of the LLM tool for further investigation?
+
+In prompt flow, on the flow page with a successful run and on the run detail page, you can find the raw inputs and outputs of the LLM tool in the output section. Select the `view full output` button to view the full output.
++
+The `Trace` section includes each request and response to the LLM tool. You can check the raw message sent to the LLM model and the raw response from the LLM model.
++
+### How to fix a 429 error from Azure OpenAI?
+
+You might encounter a 429 error from Azure OpenAI, which means that you've reached the Azure OpenAI rate limit. You can check the error message in the output section of the LLM node. Learn more about the [Azure OpenAI rate limit](../../../ai-services/openai/quotas-limits.md).
+
machine-learning Reference Yaml Component Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-spark.md
| `description` | string | Description of the component. | | | | `tags` | object | Dictionary of tags for the component. | | | | `code` | string | **Required.** The location of the folder that contains source code and scripts for the component. | | |
-| `entry` | object | **Required.** The entry point for the component. It could define a `file` or a `class_name`. | | |
+| `entry` | object | **Required.** The entry point for the component. It must define a `file`. | | |
| `entry.file` | string | The location of the folder that contains source code and scripts for the component. | | |
-| `entry.class_name` | string | The name of the class that serves as an entry point for the component. | | |
| `py_files` | object | A list of `.zip`, `.egg`, or `.py` files, to be placed in the `PYTHONPATH`, for successful execution of the job with this component. | | | | `jars` | object | A list of `.jar` files to include on the Spark driver, and the executor `CLASSPATH`, for successful execution of the job with this component. | | | | `files` | object | A list of files that should be copied to the working directory of each executor, for successful execution of the job with this component. | | | | `archives` | object | A list of archives that should be extracted into the working directory of each executor, for successful execution of the job with this component. | | | | `conf` | object | The Spark driver and executor properties. See [Attributes of the `conf` key](#attributes-of-the-conf-key) | | | | `environment` | string or object | The environment to use for the component. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment_name>:<environment_version>` syntax or `azureml:<environment_name>@latest` (to reference the latest version of an environment). <br><br> To define an environment inline, follow the [Environment schema](./reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties, because inline environments don't support them. | | |
-| `args` | string | The command line arguments that should be passed to the component entry point Python script or class. These arguments may contain the paths of input data and the location to write the output, for example `"--input_data ${{inputs.<input_name>}} --output_path ${{outputs.<output_name>}}"` | | |
+| `args` | string | The command line arguments that should be passed to the component entry point Python script. These arguments may contain the paths of input data and the location to write the output, for example `"--input_data ${{inputs.<input_name>}} --output_path ${{outputs.<output_name>}}"` | | |
| `inputs` | object | Dictionary of component inputs. The key is a name for the input within the context of the component and the value is the input value. <br><br> Inputs can be referenced in the `args` using the `${{ inputs.<input_name> }}` expression. | | | | `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [component input data specification](#component-inputs). | | | | `outputs` | object | Dictionary of output configurations of the component. The key is a name for the output within the context of the component and the value is the output configuration. <br><br> Outputs can be referenced in the `args` using the `${{ outputs.<output_name> }}` expression. | |
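For reference, a minimal Spark component that exercises these properties might look like the following sketch; the names and values are illustrative, not taken from this article:

```yaml
# Sketch of a Spark component specification; values are illustrative.
$schema: https://azuremlschemas.azureedge.net/latest/sparkComponent.schema.json
type: spark
name: spark_wrangler
version: 1
code: ./src
entry:
  file: wrangle.py            # Python script that serves as the entry point
inputs:
  input_data:
    type: uri_file
    mode: direct
outputs:
  output_data:
    type: uri_folder
    mode: direct
args: --input_data ${{inputs.input_data}} --output_path ${{outputs.output_data}}
conf:
  spark.driver.cores: 1
  spark.driver.memory: 2g
  spark.executor.cores: 2
  spark.executor.memory: 2g
  spark.executor.instances: 2
```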
machine-learning Reference Yaml Job Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-spark.md
| `tags` | object | Dictionary of tags for the job. | | | | `code` | string | Local path to the source code directory to be uploaded and used for the job. | | | | `code` | string | **Required.** The location of the folder that contains source code and scripts for this job. | | |
-| `entry` | object | **Required.** The entry point for the job. It could define a `file` or a `class_name`. | | |
+| `entry` | object | **Required.** The entry point for the job. It must define a `file`. | | |
| `entry.file` | string | The location of the folder that contains source code and scripts for this job. | | |
-| `entry.class_name` | string | The name of the class that serves as an entry point for the job. | | |
| `py_files` | object | A list of `.zip`, `.egg`, or `.py` files, to be placed in the `PYTHONPATH`, for successful execution of the job. | | | | `jars` | object | A list of `.jar` files to include on the Spark driver, and the executor `CLASSPATH`, for successful execution of the job. | | | | `files` | object | A list of files that should be copied to the working directory of each executor, for successful job execution. | | | | `archives` | object | A list of archives that should be extracted into the working directory of each executor, for successful job execution. | | | | `conf` | object | The Spark driver and executor properties. See [Attributes of the `conf` key](#attributes-of-the-conf-key) | | | | `environment` | string or object | The environment to use for the job. The environment can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment_name>:<environment_version>` syntax or `azureml:<environment_name>@latest` (to reference the latest version of an environment). <br><br> To define an environment inline, follow the [Environment schema](./reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties, because inline environments don't support them. | | |
-| `args` | string | The command line arguments that should be passed to the job entry point Python script or class. These arguments may contain the input data paths, the location to write the output, for example `"--input_data ${{inputs.<input_name>}} --output_path ${{outputs.<output_name>}}"` | | |
+| `args` | string | The command line arguments that should be passed to the job entry point Python script. These arguments may contain the input data paths, the location to write the output, for example `"--input_data ${{inputs.<input_name>}} --output_path ${{outputs.<output_name>}}"` | | |
| `resources` | object | The resources to be used by an Azure Machine Learning serverless Spark compute. One of the `compute` or `resources` should be defined. | | | | `resources.instance_type` | string | The compute instance type to be used for Spark pool. | `standard_e4s_v3`, `standard_e8s_v3`, `standard_e16s_v3`, `standard_e32s_v3`, `standard_e64s_v3`. | |
-| `resources.runtime_version` | string | The Spark runtime version. | `3.1`, `3.2` | |
+| `resources.runtime_version` | string | The Spark runtime version. | `3.2`, `3.3` | |
| `compute` | string | Name of the attached Synapse Spark pool to execute the job on. One of the `compute` or `resources` should be defined. | | | | `inputs` | object | Dictionary of inputs to the job. The key is a name for the input within the context of the job and the value is the input value. <br><br> Inputs can be referenced in the `args` using the `${{ inputs.<input_name> }}` expression. | | | | `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [job input data specification](#job-inputs). | | |
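Because `resources` and `compute` are mutually exclusive, a job YAML specifies exactly one of them. This sketch shows both forms; the Synapse Spark pool name is hypothetical:

```yaml
# Option 1: serverless Spark compute.
resources:
  instance_type: standard_e4s_v3
  runtime_version: "3.3"

# Option 2: an attached Synapse Spark pool (hypothetical name); use instead of 'resources'.
# compute: my-synapse-pool
```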
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-consume-web-service.md
Last updated 11/16/2022
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, python
#Customer intent: As a developer, I need to understand how to create a client application that consumes the web service of a deployed ML model.
managed-grafana How To Grafana Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-grafana-enterprise.md
When [creating a new Azure Managed Grafana workspace](quickstart-managed-grafana
:::image type="content" source="media/grafana-enterprise/create-with-enterprise-plan.png" alt-text="Screenshot of the Grafana dashboard, instance creation basic details."::: > [!CAUTION]
- > Each Azure subscription can benefit from one free Grafana Enterprise trial. The free trial lets you try the Grafana Enterprise plan for one month. If you select a free trial and enable recurring billing, you will start getting charged after the end of your first month. Disable recurring billing if you just want to test Grafana Enterprise.
+ > Each Azure subscription can benefit from one and only one free Grafana Enterprise trial. The free trial lets you try the Grafana Enterprise plan for one month.
+ > - If you select a free trial and enable recurring billing, you will start getting charged after the end of your first month. Disable recurring billing if you just want to test Grafana Enterprise.
+ > - If you delete a Grafana Enterprise free trial resource, you will not be able to create another Grafana Enterprise free trial. Free trial is for one-time use only.
1. Select **Review + create** and review the information about your new instance, including the costs that may be associated with the Grafana Enterprise plan and potential other paid options.
To enable Grafana Enterprise on an existing Azure Managed Grafana instance, foll
:::image type="content" source="media/grafana-enterprise/enable-grafana-enterprise.png" alt-text="Screenshot of the Grafana dashboard showing how to enable Grafana enterprise on an existing workspace." lightbox="media/grafana-enterprise/enable-grafana-enterprise.png"::: 1. Select **Free Trial - Azure Managed Grafana Enterprise Upgrade** to test Grafana Enterprise for free or select the monthly plan. Review the associated costs to make sure that you selected a plan that suits you. Recurring billing is disabled by default. > [!CAUTION]
- > Each Azure subscription can benefit from one free Grafana Enterprise trial. The free trial lets you try the Grafana Enterprise plan for one month. If you select a free trial and enable recurring billing, you will start getting charged after the end of your first month. Disable recurring billing if you just want to test Grafana Enterprise.
+ > Each Azure subscription can benefit from one and only one free Grafana Enterprise trial. The free trial lets you try the Grafana Enterprise plan for one month.
+ > - If you select a free trial and enable recurring billing, you will start getting charged after the end of your first month. Disable recurring billing if you just want to test Grafana Enterprise.
+ > - If you delete a Grafana Enterprise free trial resource, you will not be able to create another Grafana Enterprise free trial. Free trial is for one-time use only.
1. Read and check the box at the bottom of the page to state that you agree with the terms displayed, and select **Update** to finalize the creation of your new Azure Managed Grafana instance.
mariadb Howto Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-ssl.md
Last updated 04/19/2023
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, php, python, ruby
# Configure SSL connectivity in your application to securely connect to Azure Database for MariaDB
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
You can then use the Azure Database for MySQL flexible server servername (FQDN)
- Public endpoint (or public IP or DNS) - An Azure Database for MySQL flexible server instance deployed to a virtual network can't have a public endpoint. - After the Azure Database for MySQL flexible server instance is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet. You can't move the virtual network into another resource group or subscription.
+- Private DNS integration configuration can't be changed after deployment.
- Subnet size (address spaces) can't be increased once resources exist in the subnet. ## Next steps
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, php, python, ruby
# Connect to Azure Database for MySQL - Flexible Server with encrypted connections
mysql January 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/january-2024.md
# Azure Database For MySQL Flexible Server January 2024 Maintenance We are pleased to announce the January 2024 maintenance for Azure Database for MySQL Flexible Server. This maintenance incorporates several new features and resolves known issues for enhanced performance and reliability.
+> [!NOTE]
+> Between 2024/1/12 04:00 UTC and 2024/1/15 07:00 UTC, we paused Azure MySQL maintenance to proactively address a detected issue that could lead to maintenance interruptions. Maintenance operations are now fully restored. If you were affected, you can use the flexible maintenance feature to reschedule your maintenance window as needed.
## Engine version changes There will be no engine version changes in this maintenance update.
mysql How To Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, php, python, ruby
Last updated 06/20/2022
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Previously updated : 11/30/2023 Last updated : 01/17/2024 #CustomerIntent: As an Azure administrator, I want to learn about NSG flow logs so that I can log my network traffic to analyze and optimize the network performance.
When you delete a network security group, the associated flow log resource is de
### Cost
-NSG flow logging is billed on the volume of logs produced. High traffic volume can result in large-flow log volume and the associated costs.
+NSG flow logging is billed on the volume of logs produced. High traffic volume can result in a large flow-log volume, which increases the associated costs.
-Pricing of NSG flow logs doesn't include the underlying costs of storage. Using the retention policy feature with NSG flow logs means incurring separate storage costs for extended periods of time.
+NSG flow log pricing doesn't include the underlying costs of storage. Using the retention policy feature with NSG flow logs means incurring separate storage costs for extended periods of time.
-If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
+If you want to retain data forever and don't want to apply a retention policy, set retention days to 0. For more information, see [Network Watcher Pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
### Non-default inbound TCP rules
This problem might be related to:
## Pricing
-NSG flow logs are charged per gigabyte of logs collected and come with a free tier of 5 GB/month per subscription. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+NSG flow logs are charged per gigabyte of *Network flow logs collected* and come with a free tier of 5 GB/month per subscription. If traffic analytics is enabled with NSG flow logs, traffic analytics pricing applies at per gigabyte processing rates. Traffic analytics isn't offered with a free tier of pricing. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
-Storage of logs is charged separately. For relevant prices, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+Storage of logs is charged separately. For more information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
## Related content
notification-hubs Change Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/change-pricing-tier.md
Title: Change pricing tier of Notification Hubs namespace | Microsoft Docs description: Learn how to change the pricing tier of an Azure Notification Hubs namespace.- - - Last updated 08/03/2020
notification-hubs Export Modify Registrations Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/export-modify-registrations-bulk.md
Title: Export and import Azure Notification Hubs registrations in bulk | Microsoft Docs description: Learn how to use Notification Hubs bulk support to perform a large number of operations on a notification hub, or to export all registrations.- - - Last updated 08/04/2020
notification-hubs Notification Hubs Gcm To Fcm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-gcm-to-fcm.md
Title: Azure Notification Hubs and the Google Firebase Cloud Messaging (FCM) migration description: Describes how Azure Notification Hubs addresses the Google GCM to FCM migration.- - Last updated 12/06/2023
operator-nexus Troubleshoot Bmm Node Reboot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-bmm-node-reboot.md
Perform the following Azure CLI update on any affected VMs with dummy tag values
This process restores the VM to an online state. :::image type="content" source="media\troubleshoot-bmm-server\BMM-running-status.png" alt-text="Screenshot of an example virtual machine in a running status." lightbox="media\troubleshoot-bmm-server\BMM-running-status.png":::+
+If you still have questions, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/).
operator-nexus Troubleshoot Internet Host Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-internet-host-virtual-machine.md
sudo rpm --import <https://aglet.packages.cloudpassage.com/cloudpassage.package
> If you set these flags system wide, they might lose their ability to run kubectl locally. Set them inline within the script first to help minimize the effects. For more information, see the [Xmodulo article about installing RPM packages behind a proxy](https://www.xmodulo.com/how-to-install-rpm-packages-behind-proxy.html).+
+If you still have questions, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/).
operator-nexus Troubleshoot Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-isolation-domain.md
Before you enable isolation, it's necessary to create one or more internal or ex
To access further details in the logs, see [Log Analytics workspace](../../articles/operator-nexus/concepts-observability.md#log-analytic-workspace). If you still have questions, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/).
operator-nexus Troubleshoot Reboot Reimage Replace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-reboot-reimage-replace.md
When you're performing the following physical repairs, a replace action is requi
Restarting, reimaging, and replacing are effective troubleshooting methods that you can use to address technical problems. However, it's important to have a systematic approach and to consider other factors before you try any drastic measures. If you still have questions, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/).
partner-solutions Dynatrace Free Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-free-trial.md
A 30-day free trial of Azure Native Dynatrace Service is available on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview). You can sign up using the trial plan published by Dynatrace. During the trial period, you can create a Dynatrace resource on Azure and use integrated services such as log forwarding, metrics integration, and agent based monitoring. Before the free trial expires, you can seamlessly upgrade to a paid public plan or a private offer customized for your organization.
+Before you proceed, make sure that your subscription is enabled for marketplace purchases. For more information, visit [Purchase validation checks](/marketplace/purchase-validation-checks).
+ ## Subscribe to a free trial You can access the trial plan by finding Azure Native Dynatrace Service on Azure portal or in the Azure Marketplace. Refer to the guide to [create a new resource](dynatrace-create.md#find-offer) and choose the free trial public plan while subscribing.
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Previously updated : 12/26/2023 Last updated : 1/17/2024 # Monitor metrics on Azure Database for PostgreSQL - Flexible Server
The following metrics are available for a flexible server instance of Azure Data
|Display name |Metric ID |Unit |Description |Default enabled| |--|--|-|--||
-|**Active Connections** |`active_connections` |Count |Number of connections to your server. |Yes |
+|**Active Connections** |`active_connections` |Count |Total number of connections to the database server, covering all connection states (active, idle, and others), as seen in the `pg_stat_activity` view. For a detailed view that focuses solely on active connections, use the "Sessions By State" metric. |Yes |
|**Backup Storage Used** |`backup_storage_used` |Bytes |Amount of backup storage used. This metric represents the sum of storage that's consumed by all the full backups, differential backups, and log backups that are retained based on the backup retention period that's set for the server. The frequency of the backups is service managed. For geo-redundant storage, backup storage usage is twice the usage for locally redundant storage.|Yes | |**Failed Connections** |`connections_failed` |Count |Number of failed connections. |Yes | |**Succeeded Connections** |`connections_succeeded` |Count |Number of succeeded connections. |Yes |
You can choose from the following categories of enhanced metrics:
|Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||
-|**Sessions By State** |`sessions_by_state` |Count|Overall state of the back ends. |State|No|
+|**Sessions By State** |`sessions_by_state` |Count|Overall state of the backend. |State|No|
|**Sessions By WaitEventType** |`sessions_by_wait_event_type` |Count|Sessions by the type of event for which the back end is waiting.|Wait Event Type|No| |**Oldest Backend** |`oldest_backend_time_sec` |Seconds|Age in seconds of the oldest back end (irrespective of the state).|Doesn't apply|No| |**Oldest Query** |`longest_query_time_sec`|Seconds|Age in seconds of the longest query that's currently running. |Doesn't apply|No|
You can choose from the following categories of enhanced metrics:
|Display name |Metric ID |Unit |Description |Dimension |Default enabled| ||-|--|-|||
-|**Backends** |`numbackends` |Count|Number of back ends that are connected to this database. |DatabaseName|No |
+|**Backends** |`numbackends` |Count|Number of backends that are connected to this database. |DatabaseName|No |
|**Deadlocks** |`deadlocks` |Count|Number of deadlocks that are detected in this database. |DatabaseName|No | |**Disk Blocks Hit** |`blks_hit` |Count|Number of times disk blocks were found already in the buffer cache, so that a read wasn't necessary.|DatabaseName|No | |**Disk Blocks Read** |`blks_read` |Count|Number of disk blocks that were read in this database. |DatabaseName|No |
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
Title: Manage read replicas - Azure portal, REST API - Azure Database for PostgreSQL - Flexible Server
-description: Learn how to manage read replicas Azure Database for PostgreSQL - Flexible Server from the Azure portal and REST API.
+ Title: Manage read replicas - Azure portal, CLI, REST API - Azure Database for PostgreSQL - Flexible Server
+description: Learn how to manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure portal, CLI, and REST API.
-# Create and manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure portal
+# Create and manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure portal, CLI, or REST API
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
+In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal, CLI, and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
> [!NOTE]
role-based-access-control Quickstart Assign Role User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/quickstart-assign-role-user-portal.md
Title: "Tutorial: Grant a user access to Azure resources using the Azure portal - Azure RBAC" description: In this tutorial, learn how to grant a user access to Azure resources using the Azure portal and Azure role-based access control (Azure RBAC). - Last updated 10/15/2021
role-based-access-control Role Assignments External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-external-users.md
Title: Assign Azure roles to external guest users using the Azure portal - Azure RBAC description: Learn how to grant access to Azure resources for users external to an organization using the Azure portal and Azure role-based access control (Azure RBAC).- Last updated 06/07/2023
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
A valid SAP user account (SAP-User or S-User account) with software download pri
1. Create the deployment folder and clone the repository. ```cloudshell-interactive
- mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
+ mkdir -p ${HOME}/Azure_SAP_Automated_Deployment; cd $_
git clone https://github.com/Azure/sap-automation-bootstrap.git config
A valid SAP user account (SAP-User or S-User account) with software download pri
git clone https://github.com/Azure/sap-automation-samples.git samples
- cp -Rp samples/Terraform/WORKSPACES ~/Azure_SAP_Automated_Deployment/WORKSPACES
+ cp -Rp samples/Terraform/WORKSPACES ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
```
When you choose a name for your service principal, make sure that the name is un
```cloudshell-interactive export appId="<appId>"
- az role assignment create --assignee ${appId} \
- --role "User Access Administrator" \
+ az role assignment create --assignee ${appId} \
+ --role "User Access Administrator" \
--scope /subscriptions/${ARM_SUBSCRIPTION_ID} ```
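To confirm the assignment took effect before you continue, you can list what's granted to the service principal; a minimal check:

```cloudshell-interactive
# List role names assigned to the service principal at subscription scope:
az role assignment list --assignee "${appId}" \
    --scope "/subscriptions/${ARM_SUBSCRIPTION_ID}" \
    --query "[].roleDefinitionName" --output tsv
```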
The output maps to the following parameters. You use these parameters in later s
1. Open Visual Studio Code from Cloud Shell. ```cloudshell-interactive
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
code . ``` 1. Expand the `WORKSPACES` directory. There are five subfolders: `CONFIGURATION`, `DEPLOYER`, `LANDSCAPE`, `LIBRARY`, `SYSTEM`, and `BOMS`. Expand each of these folders to find regional deployment configuration files.
-1. Find the appropriate four-character code that corresponds to the Azure region you're using.
-
- | Region name | Region code |
- |--|-|
- | Australia East | AUEA |
- | Canada Central | CACE |
- | Central US | CEUS |
- | East US | EAUS |
- | North Europe | WEEU |
- | South Africa North | SANO |
- | Southeast Asia | SOEA |
- | UK South | UKSO |
- | West US 2 | WUS2 |
- 1. Find the Terraform variable files in the appropriate subfolder. For example, the `DEPLOYER` Terraform variable file might look like this example: ```terraform
code .
location = "westeurope" #Defines the DNS suffix for the resources
- dns_label = "azure.contoso.net"
+ dns_label = "lab.sdaf.contoso.net"
# use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled use_private_endpoint = false
Use the [deploy_controlplane.sh](bash/deploy-controlplane.md) script to deploy t
The deployment goes through cycles of deploying the infrastructure, refreshing the state, and uploading the Terraform state files to the library storage account. All of these steps are packaged into a single deployment script. The script needs the location of the configuration file for the deployer and library, and some other parameters.
-For example, choose **West Europe** as the deployment location, with the four-character name `WEEU`, as previously described. The sample deployer configuration file `LAB-WEEU-DEP05-INFRASTRUCTURE.tfvars` is in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/LAB-WEEU-DEP05-INFRASTRUCTURE` folder.
+For example, choose **West Europe** as the deployment location, with the four-character region code `WEEU`. The sample deployer configuration file `LAB-WEEU-DEP05-INFRASTRUCTURE.tfvars` is in the `${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/LAB-WEEU-DEP05-INFRASTRUCTURE` folder.
-The sample SAP library configuration file `LAB-WEEU-SAP_LIBRARY.tfvars` is in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/LAB-WEEU-SAP_LIBRARY` folder.
+The sample SAP library configuration file `LAB-WEEU-SAP_LIBRARY.tfvars` is in the `${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/LAB-WEEU-SAP_LIBRARY` folder.
Set the environment variables for the service principal:
export TF_use_webapp=true
export env_code="LAB" export vnet_code="DEP05"
-export region_code="<region_code>"
+export region_code="WEEU"
-export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
-export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
cd $CONFIG_REPO_PATH
+az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
+ deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
The control plane is the most critical part of the SAP automation framework. It'
You should update the control plane tfvars file to enable private endpoints and to block public access to the storage accounts and key vaults.
-To copy the control plane configuration files to the deployer VM, you can use the `sync_deployer.sh` script. Sign in to the deployer VM and run the following commands:
+To copy the control plane configuration files to the deployer VM, you can use the `sync_deployer.sh` script. Sign in to the deployer VM, update the following command to use your Terraform state storage account name, and then run the script:
```bash
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
+terraform_state_storage_account=labweeutfstate###
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
-../sap-automation/deploy/scripts/sync_deployer.sh --storageaccountname mgtneweeutfstate### --state_subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+../sap-automation/deploy/scripts/sync_deployer.sh --storageaccountname $terraform_state_storage_account --state_subscription $ARM_SUBSCRIPTION_ID
```
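If you don't recall the exact storage account name, one way to look it up is to filter on the `tfstate` suffix; a sketch, assuming the account is in the currently selected subscription:

```bash
# List storage accounts whose name contains 'tfstate' (JMESPath filter):
az storage account list --query "[?contains(name,'tfstate')].name" --output tsv
```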
export ARM_TENANT_ID="<tenantId>"
export env_code="LAB" export vnet_code="DEP05"
-export region_code="<region_code>"
+export region_code="WEEU"
-storage_accountname="mgmtneweeutfstate###"
-vault_name="LABWEEUDEP05user###"
+terraform_state_storage_account=labweeutfstate###
+ vault_name="LABWEEUDEP05user###"
-export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
-export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
cd $CONFIG_REPO_PATH deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
-library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
+ library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
+
+az logout
+az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ --deployer_parameter_file "${deployer_parameter_file}" \ --library_parameter_file "${library_parameter_file}" \ --subscription "${ARM_SUBSCRIPTION_ID}" \
- --storageaccountname "${storage_accountname}" \
+ --storageaccountname "${terraform_state_storage_account}" \
--vault "${vault_name}" ```
You can deploy the web application using the following script:
```bash export env_code="LAB" export vnet_code="DEP05"
-export region_code="<region_code>"
+export region_code="WEEU"
export webapp_name="<webAppName>" export app_id="<appRegistrationId>" export webapp_id="<webAppId>"
az webapp restart --resource-group ${env_code}-${region_code}-${vnet_code}-INFRA
- Select **Library resource group** > **State storage account** > **Containers** > `tfstate`. Copy the name of the deployer state file. - Following from the preceding example, the name of the blob is `LAB-WEEU-DEP05-INFRASTRUCTURE.terraform.tfstate`.
-1. If necessary, register the Service Principal.
+1. If necessary, register the Service Principal. For this tutorial, this step isn't needed.
- The first time an environment is instantiated, a Service Principal must be registered. In this tutorial, the control plane is in the `LAB` environment and the workload zone is in `DEV`. Therefore, a Service Principal must be registered for the `DEV` environment.
+ The first time an environment is instantiated, a Service Principal must be registered. In this tutorial, the control plane is in the `LAB` environment and the workload zone is also in `LAB`. Therefore, a Service Principal must be registered for the `LAB` environment.
```bash export ARM_SUBSCRIPTION_ID="<subscriptionId>"
az webapp restart --resource-group ${env_code}-${region_code}-${vnet_code}-INFRA
export ARM_CLIENT_SECRET="<password>" export ARM_TENANT_ID="<tenant>" export key_vault="<vaultName>"
- export env_code="DEV"
- export region_code="<region_code>"
+ export env_code="LAB"
+ export region_code="WEEU"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+ ```
+
+ ```bash
+
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/set_secrets.sh \ --environment "${env_code}" \ --region "${region_code}" \
Use the [install_workloadzone](bash/install-workloadzone.md) script to deploy th
1. On the deployer VM, go to the `Azure_SAP_Automated_Deployment` folder. ```bash
- cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-XXXX-SAP01-INFRASTRUCTURE
- ```
-
- From the example region `northeurope`, the folder looks like:
-
- ```bash
- cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE
+ cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/LAB-WEEU-SAP04-INFRASTRUCTURE
``` 1. Optionally, open the workload zone configuration file and, if needed, change the network logical name to match the network name.
Use the [install_workloadzone](bash/install-workloadzone.md) script to deploy th
```bash
-export tfstate_storage_account="<storageaccountName>"
-export deployer_env_code="LAB"
-export sap_env_code="DEV"
-export region_code="<region_code>"
-export key_vault="<vaultName>"
-
-export deployer_vnet_code="DEP05"
-export vnet_code="SAP04"
- export ARM_SUBSCRIPTION_ID="<subscriptionId>" export ARM_CLIENT_ID="<appId>" export ARM_CLIENT_SECRET="<password>" export ARM_TENANT_ID="<tenantId>"
+```
+
+```bash
+export deployer_env_code="LAB"
+export sap_env_code="LAB"
+export region_code="WEEU"
+
+export deployer_vnet_code="DEP05"
+export vnet_code="SAP04"
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE
+export tfstate_storage_account="<storageaccountName>"
+export key_vault="<vaultName>"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-auto
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" cd "${CONFIG_REPO_PATH}/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"+ parameterFile="${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" deployerState="${deployer_env_code}-${region_code}-${deployer_vnet_code}-INFRASTRUCTURE.terraform.tfstate"
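With the parameter file and deployer state key defined, the workload zone deployment follows the pattern below; a sketch that reuses the values set above, with flag names per the linked [install_workloadzone](bash/install-workloadzone.md) reference:

```bash
# Deploy the workload zone (a sketch; verify flags against the reference page):
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
    --parameterfile "${parameterFile}" \
    --deployer_environment "${deployer_env_code}" \
    --deployer_tfstate_key "${deployerState}" \
    --keyvault "${key_vault}" \
    --storageaccountname "${tfstate_storage_account}" \
    --subscription "${ARM_SUBSCRIPTION_ID}" \
    --spn_id "${ARM_CLIENT_ID}" \
    --spn_secret "${ARM_CLIENT_SECRET}" \
    --tenant_id "${ARM_TENANT_ID}"
```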
Deploy the SAP system.
```bash
-export sap_env_code="DEV"
-export region_code="<region_code>"
-export vnet_code="SAP01"
-export SID="X00"
+export sap_env_code="LAB"
+export region_code="WEEU"
+export vnet_code="SAP04"
+export SID="L00"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" cd ${CONFIG_REPO_PATH}/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-${SID}
-${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \
+${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \
--parameterfile "${sap_env_code}-${region_code}-${vnet_code}-${SID}.tfvars" \ --type sap_system ```
-The deployment command for the `northeurope` example looks like:
-
-```bash
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00
-
-${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \
- --parameterfile DEV-WEEU-SAP01-X00.tfvars \
- --type sap_system \
- --auto-approve
-```
- Check that the system resource group is now in the Azure portal. ## Get SAP software by using the Bill of Materials
For this example configuration, the resource group is `LAB-WEEU-DEP05-INFRASTRUC
```bash export key_vault=<vaultName>
- sap_username=<sap-username>
+ sap_username=<sap-username>
az keyvault secret set --name "S-Username" --vault-name $key_vault --value "${sap_username}"; ```
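The download process also needs the SAP account password. A sketch of the companion step, assuming the secret name `S-Password` that pairs with `S-Username`:

```bash
sap_user_password='<sap-password>'

# Store the SAP account password alongside the username (secret name assumed):
az keyvault secret set --name "S-Password" --vault-name "$key_vault" --value "${sap_user_password}"
```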
For this example configuration, the resource group is `LAB-WEEU-DEP05-INFRASTRUC
1. Configure your SAP parameters file for the download process. Then, download the SAP software by using Ansible playbooks. Run the following commands: ```bash
- cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
+ cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
mkdir BOMS cd BOMS
For this example configuration, the resource group is `LAB-WEEU-DEP05-INFRASTRUC
```yaml
- bom_base_name: S4HANA_2021_FP01_v0001ms
+ bom_base_name: S42022SPS00_v0001ms
deployer_kv_name: <vaultName> BOM_directory: ${HOME}/Azure_SAP_Automated_Deployment/samples/SAP
For this example configuration, the resource group is `LAB-WEEU-DEP05-INFRASTRUC
1. Run the Ansible playbook to download the software. One way you can run the playbooks is to use the **Downloader** menu. Run the `download_menu` script. ```bash
- ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/download_menu.sh
+ ${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/download_menu.sh
``` 1. Select which playbooks to run.
The SAP application installation happens through Ansible playbooks.
Go to the system deployment folder. ```bash
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00/
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-WEEU-SAP04-L00/
```
-Make sure you have the following files in the current folders: `sap-parameters.yaml` and `X00_host.yaml`.
+Make sure you have the following files in the current folders: `sap-parameters.yaml` and `L00_host.yaml`.
For a standalone SAP S/4HANA system, there are eight playbooks to run in sequence. One way you can run the playbooks is to use the **Configuration** menu. Run the `configuration_menu` script. ```bash
-~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/configuration_menu.sh
+${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/configuration_menu.sh
``` Choose the playbooks to run.
-### Playbook: OS config
+### Playbook: Base Operating System configuration
-This playbook does the generic OS configuration setup on all the machines, which includes configuration of software repositories, packages, and services.
+This playbook performs the generic OS configuration setup on all the machines, which includes configuration of software repositories, packages, and services.
-### Playbook: SAP-specific OS config
+### Playbook: SAP specific Operating System configuration
-This playbook does the SAP OS configuration setup on all the machines. The steps include creation of volume groups and file systems and configuration of software repositories, packages, and services.
+This playbook performs the SAP OS configuration setup on all the machines. The steps include creation of volume groups and file systems and configuration of software repositories, packages, and services.
-### Playbook: BOM processing
+### Playbook: BOM Processing
This playbook downloads the SAP software to the SCS virtual machine.
This playbook downloads the SAP software to the SCS virtual machine.
This playbook installs SAP central services. For highly available configurations, the playbook also installs the SAP ERS instance and configures Pacemaker.
-### Playbook: HANA DB install
+### Playbook: Database Instance installation
-This playbook installs the HANA database instances.
+This playbook installs the database instances.
-### Playbook: DB load
+### Playbook: Database Load
This playbook invokes the database load task from the primary application server.
-### Playbook: HANA HA playbook
+### Playbook: Database High Availability Setup
-This playbook configures HANA system replication and Pacemaker for the HANA database.
+This playbook configures database high availability. For HANA, this entails HANA system replication and Pacemaker configuration for the HANA database.
-### Playbook: PAS install
+### Playbook: Primary Application Server installation
This playbook installs the primary application server.
-### Playbook: APP install
+### Playbook: Application Server installations
This playbook installs the application servers.
+### Playbook: Web Dispatcher installations
+
+This playbook installs the web dispatchers.
+ You've now deployed and configured a standalone HANA system. If you need to configure a highly available (HA) SAP HANA database, run the Database High Availability Setup playbook.
export sap_env_code="DEV"
export region_code="WEEU" export sap_vnet_code="SAP04"
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${sap_vnet_code}-X00
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${sap_vnet_code}-X00
${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \ --parameterfile "${sap_env_code}-${region_code}-${sap_vnet_code}-X00.tfvars" \
export sap_env_code="DEV"
export region_code="WEEU" export sap_vnet_code="SAP01"
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE
${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \ --parameterfile ${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE.tfvars \
Sign in to [Cloud Shell](https://shell.azure.com).
Go to the `WORKSPACES` folder. ```bash
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/
``` Export the following two environment variables: ```bash
-export DEPLOYMENT_REPO_PATH="~/Azure_SAP_Automated_Deployment/sap-automation"
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export ARM_SUBSCRIPTION_ID="<subscriptionId>" ```
export region_code="WEEU"
export env_code="LAB" export vnet_code="DEP05"
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
${DEPLOYMENT_REPO_PATH}/deploy/scripts/remove_controlplane.sh \ --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \ --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars
sap Provider Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-linux.md
For example - https://github.com/prometheus/node_exporter/releases/download/v1.6
# Change to the directory where you want to install the node exporter. wget https://github.com/prometheus/node_exporter/releases/download/v<xxx>/node_exporter-<xxx>.linux-amd64.tar.gz
-tar xvfz node_exporter-<xxx>.linux-amd64.tar.gz
+tar xzvf node_exporter-<xxx>.linux-amd64.tar.gz
cd node_exporter-<xxx>.linux-amd64 nohup ./node_exporter --web.listen-address=":9100" & ```
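To verify the exporter is up, query its metrics endpoint from the same host:

```bash
# The node exporter serves Prometheus-format metrics on port 9100:
curl -s http://localhost:9100/metrics | head
```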
sap Disaster Recovery Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/disaster-recovery-sap-hana.md
Previously updated : 12/08/2023 Last updated : 01/16/2024
This article describes requirements and setup of a third HANA replication site to complement an existing Pacemaker cluster. Both SUSE Linux Enterprise Server (SLES) and RedHat Enterprise Linux (RHEL) specifics are covered.
-## Overview
+## Overview
SAP HANA supports system replication (HSR) with more than two sites connected. You can add a third site to an existing HSR pair, managed by Pacemaker in a highly available setup. You can deploy the third site in a second Azure region for disaster recovery (DR) purposes. Pacemaker and the HANA cluster resource agent manage the first two sites; the Pacemaker cluster doesn't control the third site.
-SAP HANA supports a third system replication site in two modes.
+SAP HANA supports a third system replication site in two modes.
+ - [Multi-target](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ba457510958241889a459e606bbcf3d3.html) replicates data changes from the primary to more than one target system. The third site connects to the primary; replication forms a star topology. - [Multi-tier](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/f730f308fede4040bcb5ccea6751e74d.html) is a two-tier replication: a cascading (sometimes called chained) setup of three different HANA tiers, where the third site connects to the secondary. + For more information about HANA HSR within one region and across different regions, see [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md#combine-availability-within-one-region-and-across-regions). ## Prerequisites for SLES
-Requirements for a third HSR site are different between HANA scale-up and HANA scale-out.
+Requirements for a third HSR site are different between HANA scale-up and HANA scale-out.
> [!NOTE] > Requirements in this chapter are only valid for a Pacemaker enabled landscape. Without Pacemaker, SAP HANA version requirements apply for the chosen replication mode.
Requirements for a third HSR site are different between HANA scale-up and HANA s
## Prerequisites for RHEL
-Requirements for a third HSR site are different between HANA scale-up and HANA scale-out.
+Requirements for a third HSR site are different between HANA scale-up and HANA scale-out.
> [!NOTE] > Requirements in this chapter are only valid for a Pacemaker enabled landscape. Without Pacemaker, SAP HANA version requirements apply for the chosen replication mode.
Failure of the third node won't trigger any cluster action. Cluster detects the
Example of a multi-target system replication system. For more information, see [SAP documentation](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/2e6c71ab55f147e19b832565311a8e4e.html). ![Diagram showing an example of a HANA scale-up multi-target system replication system.](./media/sap-hana-high-availability/sap-hana-high-availability-scale-up-hsr-multi-target.png)
-1. Deploy Azure resources for the third node. Depending on your requirements, you can use a different Azure region for disaster recovery purposes.
- Steps required for the third site are similar to [virtual machines for HANA scale-up cluster](./sap-hana-high-availability.md#deploy-for-linux). Third site will use Azure infrastructure, operating system and HANA version matching the existing Pacemaker cluster, with the following exceptions:
+1. Deploy Azure resources for the third node. Depending on your requirements, you can use a different Azure region for disaster recovery purposes.
+
+ Steps required for the third site are similar to [virtual machines for HANA scale-up cluster](./sap-hana-high-availability.md#prepare-the-infrastructure). The third site uses Azure infrastructure, an operating system, and a HANA version that match the existing Pacemaker cluster, with the following exceptions:
+ - No load balancer deployed for third site and no integration with existing cluster load balancer for the VM of third site - Don't install OS packages SAPHanaSR, SAPHanaSR-doc and OS package pattern ha_sles on third site VM - No integration into the cluster for VM or HANA resources of the third site - No HANA HA hook setup for third site in global.ini
-
-2. Install SAP HANA on third node.
+
+2. Install SAP HANA on the third node.
+
 The same HANA SID and HANA installation number must be used for the third site.
-
-3. With SAP HANA on third site installed and running, register the third site with the primary site.
+
+3. With SAP HANA installed and running on the third site, register the third site with the primary site.
+ The example uses SITE-DR as the name for the third site.+ ```bash # Execute on the third site su - hn1adm
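# Register the third site against the primary; a sketch where the remote host
# (hn1-db-0) and instance number (03) are assumptions, adjust to your landscape:
# hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 \
#     --replicationMode=async --operationMode=logreplay --name=SITE-DR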
Example of a multi-target system replication system. For more information, see [
``` 4. Verify HANA system replication shows both secondary and third site.+ ```bash # Verify HANA HSR is in sync, execute on primary sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py" ``` 5. Check the SAPHanaSR attribute for third site. SITE-DR should show up with status SOK in the sites section.+ ```bash # Check SAPHanaSR attribute on any cluster managed host (first or second site) sudo SAPHanaSR-showAttr
Example of a multi-target system replication system. For more information, see [
# HN1-SITE2 SOK # SITE-DR SOK ```
-
+ The cluster detects the replication status of connected sites, and the monitored attribute can change between SOK and SFAIL. No cluster action is taken if replication to the DR site fails. ## HANA scale-out: Add HANA multi-target system replication for DR purposes
Failure of the third node won't trigger any cluster action. Cluster detects the
Example of a multi-target system replication system. For more information, see [SAP documentation](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/2e6c71ab55f147e19b832565311a8e4e.html). ![Diagram showing an example of a HANA scale-out multi-target system replication system.](./media/sap-hana-high-availability/sap-hana-high-availability-scale-out-hsr-multi-target.png)
-1. Deploy Azure resources for the third site. Depending on your requirements, you can use a different Azure region for disaster recovery purposes.
- Steps required for the HANA scale-out on third site are mirroring steps to deploy the [HANA scale-out cluster](./sap-hana-high-availability-scale-out-hsr-suse.md#set-up-the-infrastructure). Third site will use Azure infrastructure, operating system and HANA installation steps for SITE1 of the scale-out cluster, with the following exceptions:
+1. Deploy Azure resources for the third site. Depending on your requirements, you can use a different Azure region for disaster recovery purposes.
+
+ Steps required for HANA scale-out on the third site mirror the steps to deploy the [HANA scale-out cluster](./sap-hana-high-availability-scale-out-hsr-suse.md#prepare-the-infrastructure). The third site uses the Azure infrastructure, operating system, and HANA installation steps of SITE1 of the scale-out cluster, with the following exceptions:
+ - No load balancer deployed for third site and no integration with existing cluster load balancer for the VMs of third site - Don't install OS packages SAPHanaSR-ScaleOut, SAPHanaSR-ScaleOut-doc and OS package pattern ha_sles on third site VMs - No majority maker VM for third site, as there's no cluster integration - Create NFS volume /hana/shared for third site exclusive use - No integration into the cluster for VMs or HANA resources of the third site - No HANA HA hook setup for third site in global.ini
-
- You must use the same HANA SID and HANA installation number for third site.
-
-2. With SAP HANA scale-out on third site installed and running, register the third site with the primary site.
+
+ You must use the same HANA SID and HANA installation number for the third site.
+
+2. With SAP HANA scale-out installed and running on the third site, register the third site with the primary site.
+ The example uses SITE-DR as the name for the third site.+ ```bash # Execute on the third site su - hn1adm
Example of a multi-target system replication system. For more information, see [
``` 3. Verify HANA system replication shows both secondary and third site.+ ```bash # Verify HANA HSR is in sync, execute on primary sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py" ``` 4. Check the SAPHanaSR attribute for third site. SITE-DR should show up with status SOK in the sites section.+ ```bash # Check SAPHanaSR attribute on any cluster managed host (first or second site) sudo SAPHanaSR-showAttr
Example of a multi-target system replication system. For more information, see [
# HANA_S1 1674815869 4 hana-s1-db1 PRIM P # HANA_S2 30 4 hana-s2-db1 SOK S ```
-
+ The cluster detects the replication status of connected sites, and the monitored attribute can change between SOK and SFAIL. No cluster action is taken if replication to the DR site fails. ## Autoregistering third site
SAP provides since HANA 2 SPS 04 parameter `register_secondaries_on_takeover`. W
For HSR [multi-tier](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/f730f308fede4040bcb5ccea6751e74d.html), no automatic SAP HANA registration of the third site exists. You need to manually register the third site with the current secondary to keep the HSR replication chain for multi-tier.
-![Diagram flow showing how a HANA auto-registartion works with a third site during a takeover.](./media/sap-hana-high-availability/sap-hana-high-availability-hsr-third-site-auto-register.png)
+![Diagram flow showing how a HANA auto-registration works with a third site during a takeover.](./media/sap-hana-high-availability/sap-hana-high-availability-hsr-third-site-auto-register.png)
## Next steps - [Disaster recovery overview and infrastructure](./disaster-recovery-overview-guide.md) - [Disaster recovery for SAP workloads](./disaster-recovery-sap-guide.md)-- [High-availability architecture and scenarios for SAP NetWeaver](./sap-hana-availability-across-regions.md)
+- [High-availability architecture and scenarios for SAP NetWeaver](./sap-hana-availability-across-regions.md)
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
vm-linux Previously updated : 07/11/2023 Last updated : 01/17/2024
The following instructions assume that you already deployed your [Azure virtual
> All commands to mount `/hana/shared` in this article are presented for NFSv4.1 `/hana/shared` volumes. > If you deployed the `/hana/shared` volumes as NFSv3 volumes, don't forget to adjust the mount commands for `/hana/shared` for NFSv3.
-## Deploy Linux virtual machine via the Azure portal
+## Prepare the infrastructure
-This document assumes that you already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and a subnet.
+Azure Marketplace contains images qualified for SAP HANA with the High Availability add-on, which you can use to deploy new VMs by using various versions of Red Hat.
-Deploy VMs for SAP HANA. Choose a suitable RHEL image that's supported for a HANA system. You can deploy a VM in any one of the availability options: scale set, availability zone, or availability set.
+### Deploy Linux VMs manually via the Azure portal
+
+This document assumes that you've already deployed a resource group, an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), and a subnet.
+
+Deploy VMs for SAP HANA. Choose a suitable RHEL image that's supported for the HANA system. You can deploy a VM in any one of the availability options: virtual machine scale set, availability zone, or availability set.
> [!IMPORTANT] > > Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type.
-During VM configuration, we won't add any disk because all our mount points are on NFS shares from Azure NetApp Files. Also, you can create or select an existing load balancer in the networking section. If you're creating a new load balancer, follow these steps:
-
-1. To set up a Standard load balancer, follow these configuration steps:
- 1. First, create a front-end IP pool:
- 1. Open the load balancer, select **frontend IP pool**, and select **Add**.
- 1. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
- 1. Set **Assignment** to **Static** and enter the IP address (for example, **10.32.0.10**).
- 1. Select **OK**.
- 1. After the new front-end IP pool is created, note the pool IP address.
- 1. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 1. Enter the name of the new back-end pool (for example, **hana-backend**).
- 1. Select **NIC** for **Backend Pool Configuration**.
- 1. Select **Add a virtual machine**.
- 1. Select the VMs of the HANA cluster.
- 1. Select **Add**.
- 1. Select **Save**.
- 1. Next, create a health probe:
- 1. Open the load balancer, select **health probes**, and select **Add**.
- 1. Enter the name of the new health probe (for example, **hana-hp**).
- 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to **5**.
- 1. Select **OK**.
- 1. Next, create load-balancing rules:
- 1. Open the load balancer, select **load balancing rules**, and select **Add**.
- 1. Enter the name of the new load balancer rule (for example, **hana-lb**).
- 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend**, and **hana-hp**).
- 1. Increase the idle timeout to **30 minutes**.
- 1. Select **HA Ports**.
- 1. Make sure to enable **Floating IP**.
- 1. Select **OK**.
+### Configure Azure load balancer
+
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to set up a standard load balancer for the high-availability setup of the HANA database.
+
+#### [Azure portal](#tab/lb-portal)
++
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
+++ For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or SAP Note [2388694](https://launchpad.support.sap.com/#/notes/2388694).
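The tabbed sections above pull in shared include files that aren't expanded in this view. As an orientation aid, here's a minimal Azure CLI sketch of the configuration they describe (resource group, virtual network, and subnet names are placeholders; the front-end IP and probe port reuse this article's example values):

```bash
# Standard internal load balancer with a front-end IP for the HANA virtual hostname:
az network lb create --resource-group <rg> --name hana-lb --sku Standard \
    --vnet-name <vnet> --subnet <subnet> \
    --frontend-ip-name hana-frontend --private-ip-address 10.32.0.10 \
    --backend-pool-name hana-backend

# Health probe on the HANA probe port (625<instance number> = 62503 here):
az network lb probe create --resource-group <rg> --lb-name hana-lb \
    --name hana-hp --protocol Tcp --port 62503 --interval 5

# HA-ports rule with floating IP and a 30-minute idle timeout:
az network lb rule create --resource-group <rg> --lb-name hana-lb \
    --name hana-lb-rule --protocol All --frontend-port 0 --backend-port 0 \
    --frontend-ip-name hana-frontend --backend-pool-name hana-backend \
    --probe-name hana-hp --floating-ip true --idle-timeout 30

# The cluster VMs are then added to hana-backend through their NIC IP configurations.
```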
For more information about the required ports for SAP HANA, read the chapter [Co
> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) instance of Standard Azure Load Balancer, there's no outbound internet connectivity, unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Standard Azure Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT]
-> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps could cause the health probes to fail. Set the parameter **net.ipv4.tcp_timestamps** to **0**. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md). See also SAP Note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps could cause the health probes to fail. Set the parameter **net.ipv4.tcp_timestamps** to **0**. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) and SAP Note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
## Mount the Azure NetApp Files volume
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
Previously updated : 09/15/2023 Last updated : 01/16/2024
Read the following SAP Notes and papers first:
- [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) - [Azure Virtual Machines planning and implementation for SAP on Linux](./planning-guide.md)
->[!NOTE]
+> [!NOTE]
> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ## Overview
As you create your Azure NetApp Files for SAP HANA Scale-up systems, be aware of
The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
-While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
+While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
-The configuration in this article is presented with simple Azure NetApp Files Volumes.
+The configuration in this article is presented with simple Azure NetApp Files Volumes.
> [!IMPORTANT]
-> For production systems, where performance is a key, we recommend to evaluate and consider using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg).
+> For production systems, where performance is key, we recommend that you evaluate using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg).
> [!NOTE] > All commands to mount /hana/shared in this article are presented for NFSv4.1 /hana/shared volumes.
The following instructions assume that you've already deployed your [Azure virtu
- Volume hanadb2-log-mnt00001 (nfs://10.3.1.4:/hanadb2-log-mnt00001) - Volume hanadb2-shared-mnt00001 (nfs://10.3.1.4:/hanadb2-shared-mnt00001)
+## Prepare the infrastructure
+
+The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is available in Azure Marketplace. You can use the image to deploy new VMs.
-## Deploy Linux virtual machine via Azure portal
+### Deploy Linux VMs manually via Azure portal
This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
-Deploy virtual machines for SAP HANA. Choose a suitable SLES image that is supported for HANA system. You can deploy VM in any one of the availability options - scale set, availability zone or availability set.
+Deploy virtual machines for SAP HANA. Choose a suitable SLES image that is supported for the HANA system. You can deploy a VM in any one of the availability options: virtual machine scale set, availability zone, or availability set.
> [!IMPORTANT] > Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type.
-During VM configuration, we won't be adding any disk as all our mount points are on NFS shares from Azure NetApp Files. Also, you have an option to create or select exiting load balancer in networking section. If you're creating a new load balancer, follow below steps -
-
-1. To set up standard load balancer, follow these configuration steps:
- 1. First, create a front-end IP pool:
- 1. Open the load balancer, select **frontend IP configuration**, and select **Add**.
- 2. Enter the name of the new front-end IP (for example, **hana-frontend**).
- 3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.3.0.50**).
- 4. Select **OK**.
- 5. After the new front-end IP pool is created, note the pool IP address.
- 2. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 2. Enter the name of the new back-end pool (for example, **hana-backend**).
- 3. Select **NIC** for Backend Pool Configuration.
- 4. Select **Add a virtual machine**.
- 5. Select the virtual machines of the HANA cluster.
- 6. Select **Add**.
- 7. Select **Save**.
- 3. Next, create a health probe:
- 1. Open the load balancer, select **health probes**, and select **Add**.
- 2. Enter the name of the new health probe (for example, **hana-hp**).
- 3. Select TCP as the protocol and port 625**03**. Keep the **Interval** value set to 5.
- 4. Select **OK**.
- 4. Next, create the load-balancing rules:
- 1. Open the load balancer, select **load balancing rules**, and select **Add**.
- 2. Enter the name of the new load balancer rule (for example, **hana-lb**).
- 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**).
- 1. Increase idle timeout to 30 minutes
- 4. Select **HA Ports**.
- 5. Make sure to **enable Floating IP**.
- 6. Select **OK**.
+### Configure Azure load balancer
+
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to set up a standard load balancer for the high-availability setup of the HANA database.
+
+#### [Azure portal](#tab/lb-portal)
++
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
+++ For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or SAP Note [2388694](https://launchpad.support.sap.com/#/notes/2388694).
For more information about the required ports for SAP HANA, read the chapter [Co
> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md). See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+>
+> - Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter `net.ipv4.tcp_timestamps` to `0`. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) and SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+> - To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, update the saptune version to 3.1.1 or higher. For more details, see [saptune 3.1.1 - Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
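One minimal way to apply and persist the setting (the drop-in file name is an arbitrary choice):

```bash
# Persist net.ipv4.tcp_timestamps=0 and apply it immediately:
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/95-sap-hana-lb.conf
sudo sysctl -p /etc/sysctl.d/95-sap-hana-lb.conf
```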
## Mount the Azure NetApp Files volume
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
Previously updated : 11/21/2023 Last updated : 01/17/2024 # High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
The SAP HANA System Replication setup uses a dedicated virtual hostname and virt
* Front-end IP address: 10.0.0.13 for hn1-db * Probe port: 62503
-## Deploy for Linux
+## Prepare the infrastructure
Azure Marketplace contains images qualified for SAP HANA with the High Availability add-on, which you can use to deploy new VMs by using various versions of Red Hat.
Azure Marketplace contains images qualified for SAP HANA with the High Availabil
This document assumes that you've already deployed a resource group, an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), and a subnet.
-Deploy VMs for SAP HANA. Choose a suitable RHEL image that's supported for the HANA system. You can deploy a VM in any one of the availability options: scale set, availability zone, or availability set.
+Deploy VMs for SAP HANA. Choose a suitable RHEL image that's supported for the HANA system. You can deploy a VM in any one of the availability options: virtual machine scale set, availability zone, or availability set.
> [!IMPORTANT] > > Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type.
-During VM configuration, you can create or select an existing load balancer in the networking section. If you're creating a new load balancer, follow these steps:
+### Configure Azure load balancer
- 1. Create a front-end IP pool:
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to set up a standard load balancer for the high-availability setup of the HANA database.
- 1. Open the load balancer, select **frontend IP pool**, and select **Add**.
- 1. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
- 1. Set **Assignment** to **Static** and enter the IP address (for example, **10.0.0.13**).
- 1. Select **OK**.
- 1. After the new front-end IP pool is created, note the pool IP address.
+#### [Azure portal](#tab/lb-portal)
- 1. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 1. Enter the name of the new back-end pool (for example, **hana-backend**).
- 1. Select **NIC** for **Backend Pool Configuration**.
- 1. Select **Add a virtual machine**.
- 1. Select the VMs of the HANA cluster.
- 1. Select **Add**.
- 1. Select **Save**.
+#### [Azure CLI](#tab/lb-azurecli)
- 1. Create a health probe:
- 1. Open the load balancer, select **health probes**, and select **Add**.
- 1. Enter the name of the new health probe (for example, **hana-hp**).
- 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to **5**.
- 1. Select **OK**.
+#### [PowerShell](#tab/lb-powershell)
- 1. Create the load-balancing rules:
- 1. Open the load balancer, select **load balancing rules**, and select **Add**.
- 1. Enter the name of the new load balancer rule (for example, **hana-lb**).
- 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend**, and **hana-hp**).
- 1. Increase the idle timeout to **30 minutes**.
- 1. Select **HA Ports**.
- 1. Increase the idle timeout to **30 minutes**.
- 1. Make sure to enable **Floating IP**.
- 1. Select **OK**.
+ For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or SAP Note [2388694][2388694].
For more information about the required ports for SAP HANA, read the chapter [Co
> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) instance of Standard Azure Load Balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT]
-> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps could cause the health probes to fail. Set the parameter **net.ipv4.tcp_timestamps** to **0**. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
-> See also SAP Note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps could cause the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) and SAP Note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
## Install SAP HANA
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
- Previously updated : 09/26/2023 Last updated : 01/17/2024
-# High availability of SAP HANA scale-out system on Red Hat Enterprise Linux
+# High availability of SAP HANA scale-out system on Red Hat Enterprise Linux
[dbms-guide]:dbms-guide-general.md [deployment-guide]:deployment-guide.md [planning-guide]:planning-guide.md [anf-azure-doc]:../../azure-netapp-files/index.yml
-[anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-[2205917]:https://launchpad.support.sap.com/#/notes/2205917
-[1944799]:https://launchpad.support.sap.com/#/notes/1944799
[1928533]:https://launchpad.support.sap.com/#/notes/1928533 [2015553]:https://launchpad.support.sap.com/#/notes/2015553 [2178632]:https://launchpad.support.sap.com/#/notes/2178632 [2191498]:https://launchpad.support.sap.com/#/notes/2191498 [2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1984787]:https://launchpad.support.sap.com/#/notes/1984787
[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[1410736]:https://launchpad.support.sap.com/#/notes/1410736
[1900823]:https://launchpad.support.sap.com/#/notes/1900823
-[2292690]:https://launchpad.support.sap.com/#/notes/2292690
-[2455582]:https://launchpad.support.sap.com/#/notes/2455582
-[2593824]:https://launchpad.support.sap.com/#/notes/2593824
[2009879]:https://launchpad.support.sap.com/#/notes/2009879 [3108302]:https://launchpad.support.sap.com/#/notes/3108302
-[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html
- [sap-hana-ha]:sap-hana-high-availability.md
-[nfs-ha]:high-availability-guide-suse-nfs.md
-This article describes how to deploy a highly available SAP HANA system in a scale-out configuration. Specifically, the configuration uses HANA system replication (HSR) and Pacemaker on Azure Red Hat Enterprise Linux virtual machines (VMs). The shared file systems in the presented architecture are NFS mounted and are provided by [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) or [NFS share on Azure Files](../../storage/files/files-nfs-protocol.md).
+This article describes how to deploy a highly available SAP HANA system in a scale-out configuration. Specifically, the configuration uses HANA system replication (HSR) and Pacemaker on Azure Red Hat Enterprise Linux virtual machines (VMs). The shared file systems in the presented architecture are NFS mounted and are provided by [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) or [NFS share on Azure Files](../../storage/files/files-nfs-protocol.md).
In the example configurations and installation commands, the HANA instance is `03` and the HANA system ID is `HN1`.
Some readers will benefit from consulting a variety of SAP notes and resources b
* Azure-specific RHEL documentation: * [Install SAP HANA on Red Hat Enterprise Linux for use in Microsoft Azure](https://access.redhat.com/public-cloud/microsoft-azure). * [Red Hat Enterprise Linux Solution for SAP HANA scale-out and system replication](https://access.redhat.com/solutions/4386601).
-* [Azure NetApp Files documentation][anf-azure-doc].
+* [Azure NetApp Files documentation][anf-azure-doc].
* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md). * [Azure Files documentation](../../storage/files/storage-files-introduction.md)
In the following diagram, there are three HANA nodes on each site, and a majorit
The HANA shared file system `/han). The HANA shared file system is NFS mounted on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems and aren't shared between the HANA DB nodes. SAP HANA will be installed in non-shared mode.
-For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md).
+For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md).
> [!IMPORTANT] > If deploying all HANA file systems on Azure NetApp Files, for production systems, where performance is key, we recommend that you evaluate using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg). - [![Diagram of SAP HANA scale-out with HSR and Pacemaker cluster.](./media/sap-hana-high-availability-rhel/sap-hana-high-availability-scale-out-hsr-rhel.png)](./media/sap-hana-high-availability-rhel/sap-hana-high-availability-scale-out-hsr-rhel-detail.png#lightbox)
-The preceding diagram shows three subnets represented within one Azure virtual network, following the SAP HANA network recommendations:
+The preceding diagram shows three subnets represented within one Azure virtual network, following the SAP HANA network recommendations:
* For client communication: `client` 10.23.0.0/24 * For internal HANA internode communication: `inter` 10.23.1.128/26
In the instructions that follow, we assume that you've already created the resou
### Deploy Linux virtual machines via the Azure portal
-1. Deploy the Azure VMs. For this configuration, deploy seven virtual machines:
-
- - Three virtual machines to serve as HANA DB nodes for HANA replication site 1: **hana-s1-db1**, **hana-s1-db2** and **hana-s1-db3**.
- - Three virtual machines to serve as HANA DB nodes for HANA replication site 2: **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3**.
- - A small virtual machine to serve as majority maker: **hana-s-mm**.
+1. Deploy the Azure VMs. For this configuration, deploy seven virtual machines:
+
+ * Three virtual machines to serve as HANA DB nodes for HANA replication site 1: **hana-s1-db1**, **hana-s1-db2** and **hana-s1-db3**.
+ * Three virtual machines to serve as HANA DB nodes for HANA replication site 2: **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3**.
+ * A small virtual machine to serve as majority maker: **hana-s-mm**.
The VMs deployed as SAP DB HANA nodes should be certified by SAP for HANA, as published in the [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). When you're deploying the HANA DB nodes, make sure to select [accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md).
- For the majority maker node, you can deploy a small VM, because this VM doesn't run any of the SAP HANA resources. The majority maker VM is used in the cluster configuration to achieve and odd number of cluster nodes in a split-brain scenario. The majority maker VM only needs one virtual network interface in the `client` subnet in this example.
+ For the majority maker node, you can deploy a small VM, because this VM doesn't run any of the SAP HANA resources. The majority maker VM is used in the cluster configuration to achieve an odd number of cluster nodes in a split-brain scenario. The majority maker VM only needs one virtual network interface in the `client` subnet in this example.
Deploy local managed disks for `/hana/data` and `/hana/log`.
In the instructions that follow, we assume that you've already created the resou
> [!IMPORTANT] > Make sure that the operating system you select is SAP-certified for SAP HANA on the specific VM types that you're using. For a list of SAP HANA certified VM types and operating system releases for those types, see [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Drill into the details of the listed VM type to get the complete list of SAP HANA-supported operating system releases for that type.
-1. Create six network interfaces, one for each HANA DB virtual machine, in the `inter` virtual network subnet (in this example, **hana-s1-db1-inter**, **hana-s1-db2-inter**, **hana-s1-db3-inter**, **hana-s2-db1-inter**, **hana-s2-db2-inter**, and **hana-s2-db3-inter**).
+2. Create six network interfaces, one for each HANA DB virtual machine, in the `inter` virtual network subnet (in this example, **hana-s1-db1-inter**, **hana-s1-db2-inter**, **hana-s1-db3-inter**, **hana-s2-db1-inter**, **hana-s2-db2-inter**, and **hana-s2-db3-inter**).
-1. Create six network interfaces, one for each HANA DB virtual machine, in the `hsr` virtual network subnet (in this example, **hana-s1-db1-hsr**, **hana-s1-db2-hsr**, **hana-s1-db3-hsr**, **hana-s2-db1-hsr**, **hana-s2-db2-hsr**, and **hana-s2-db3-hsr**).
+3. Create six network interfaces, one for each HANA DB virtual machine, in the `hsr` virtual network subnet (in this example, **hana-s1-db1-hsr**, **hana-s1-db2-hsr**, **hana-s1-db3-hsr**, **hana-s2-db1-hsr**, **hana-s2-db2-hsr**, and **hana-s2-db3-hsr**).
-1. Attach the newly created virtual network interfaces to the corresponding virtual machines:
+4. Attach the newly created virtual network interfaces to the corresponding virtual machines:
1. Go to the virtual machine in the [Azure portal](https://portal.azure.com/#home).
+ 2. On the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hana-s1-db1**), and then select the virtual machine.
+ 3. On the **Overview** pane, select **Stop** to deallocate the virtual machine.
+ 4. Select **Networking**, and then attach the network interface. In the **Attach network interface** dropdown list, select the already created network interfaces for the `inter` and `hsr` subnets.
+ 5. Select **Save**.
+ 6. Repeat steps b through e for the remaining virtual machines (in our example, **hana-s1-db2**, **hana-s1-db3**, **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3**).
+ 7. Leave the virtual machines in the stopped state for now.
- 1. On the left pane, select **Virtual Machines**. Filter on the virtual machine name (for example, **hana-s1-db1**), and then select the virtual machine.
+5. Enable [accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md) for the additional network interfaces for the `inter` and `hsr` subnets by doing the following:
- 1. On the **Overview** pane, select **Stop** to deallocate the virtual machine.
+ 1. Open [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) in the [Azure portal](https://portal.azure.com/#home).
- 1. Select **Networking**, and then attach the network interface. In the **Attach network interface** dropdown list, select the already created network interfaces for the `inter` and `hsr` subnets.
-
- 1. Select **Save**.
-
- 1. Repeat steps b through e for the remaining virtual machines (in our example, **hana-s1-db2**, **hana-s1-db3**, **hana-s2-db1**, **hana-s2-db2** and **hana-s2-db3**).
-
- 1. Leave the virtual machines in the stopped state for now.
+ 2. Run the following commands to enable accelerated networking for the additional network interfaces, which are attached to the `inter` and `hsr` subnets.
+
+ ```azurecli
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-inter --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-inter --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-inter --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-inter --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-inter --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-inter --accelerated-networking true
+
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-hsr --accelerated-networking true
+ az network nic update --id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-hsr --accelerated-networking true
+ ```
-1. Enable [accelerated networking](../../virtual-network/create-vm-accelerated-networking-cli.md) for the additional network interfaces for the `inter` and `hsr` subnets by doing the following:
+6. Start the HANA DB virtual machines.
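   You can start them from the portal or, as a minimal Azure CLI sketch (`<resource-group>` is a placeholder for your own resource group name):

   ```azurecli
   # <resource-group> is a placeholder; substitute your own resource group name.
   az vm start --resource-group <resource-group> --name hana-s1-db1
   az vm start --resource-group <resource-group> --name hana-s1-db2
   az vm start --resource-group <resource-group> --name hana-s1-db3
   az vm start --resource-group <resource-group> --name hana-s2-db1
   az vm start --resource-group <resource-group> --name hana-s2-db2
   az vm start --resource-group <resource-group> --name hana-s2-db3
   ```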
- 1. Open [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) in the [Azure portal](https://portal.azure.com/#home).
+### Configure Azure load balancer
- 1. Run the following commands to enable accelerated networking for the additional network interfaces, which are attached to the `inter` and `hsr` subnets.
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to set up a standard load balancer for the high-availability setup of the HANA database.
- ```azurecli
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-inter --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-inter --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-inter --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-inter --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-inter --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-inter --accelerated-networking true
-
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db1-hsr --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db2-hsr --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s1-db3-hsr --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db1-hsr --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db2-hsr --accelerated-networking true
- az network nic update --id /subscriptions/your subscription/resourceGroups/your resource group/providers/Microsoft.Network/networkInterfaces/hana-s2-db3-hsr --accelerated-networking true
- ```
+> [!NOTE]
+>
> * For HANA scale-out, select the NIC for the `client` subnet when adding the virtual machines to the backend pool.
> * The full set of commands in Azure CLI and PowerShell adds the VMs with the primary NIC to the backend pool.
-1. Start the HANA DB virtual machines.
+#### [Azure Portal](#tab/lb-portal)
-### Deploy Azure Load Balancer
-It's best to use the standard load balancer. Here's how:
+#### [Azure CLI](#tab/lb-azurecli)
-1. Create a front-end IP pool:
+The full set of Azure CLI code displays the setup of the load balancer, which includes two VMs in the backend pool. Depending on the number of VMs in your HANA scale-out, you can add more VMs to the backend pool.
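The commands themselves aren't shown in this excerpt. As a minimal sketch, assuming a hypothetical resource group `MyResourceGroup` and a placeholder virtual network name `<vnet-name>` (the frontend IP, the probe port for instance `03`, and the HA-ports rule follow this article's example values):

```azurecli
# Standard internal load balancer with a static frontend IP in the client subnet.
az network lb create --resource-group MyResourceGroup --name hana-lb --sku Standard \
  --vnet-name <vnet-name> --subnet client --frontend-ip-name hana-frontend \
  --private-ip-address 10.23.0.18 --backend-pool-name hana-backend

# TCP health probe on port 625<instance number>, here 62503, probing every 5 seconds.
az network lb probe create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-hp --protocol tcp --port 62503 --interval 5

# HA-ports rule (protocol All, ports 0) with floating IP and a 30-minute idle timeout.
az network lb rule create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-lb-rule --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name hana-frontend --backend-pool-name hana-backend \
  --probe-name hana-hp --floating-ip true --idle-timeout 30

# Add each HANA VM's client-subnet NIC to the backend pool; the NIC and IP
# configuration names are placeholders. Repeat for every HANA DB VM.
az network nic ip-config address-pool add --resource-group MyResourceGroup \
  --nic-name <hana-s1-db1-client-nic> --ip-config-name <ipconfig-name> \
  --lb-name hana-lb --address-pool hana-backend
```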
- 1. Open the load balancer, select **frontend IP pool**, and select **Add**.
- 1. Enter the name of the new front-end IP pool (for example, *hana-frontend*).
- 1. Set the **Assignment** to **Static**, and enter the IP address (for example, *10.23.0.18*).
- 1. Select **OK**.
- 1. After the new front-end IP pool is created, note the pool IP address.
-1. Create a single back-end pool:
-
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 1. Enter the name of the new back-end pool (for example, *hana-backend*).
- 2. Select **NIC** for Backend Pool Configuration.
- 1. Select **Add a virtual machine**.
- 1. Select the virtual machines of the HANA cluster (the NICs for the `client` subnet).
- 1. Select **Add**.
- 2. Select **Save**.
+#### [PowerShell](#tab/lb-powershell)
-1. Create a health probe:
+The full set of PowerShell code displays the setup of the load balancer, which includes two VMs in the backend pool. Depending on the number of VMs in your HANA scale-out, you can add more VMs to the backend pool.
- 1. Open the load balancer, select **health probes**, and select **Add**.
- 1. Enter the name of the new health probe (for example, *hana-hp*).
- 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5.
- 1. Select **OK**.
-1. Create the load-balancing rules:
-
- 1. Open the load balancer, select **load balancing rules**, and select **Add**.
- 1. Enter the name of the new load balancer rule (for example, *hana-lb*).
- 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**).
- 2. Increase idle timeout to 30 minutes
- 1. Select **HA Ports**.
- 1. Make sure to **enable Floating IP**.
- 1. Select **OK**.
+
- > [!IMPORTANT]
- > Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need an additional IP address for the VM, deploy a second NIC.
-
-When you're using the standard load balancer, you should be aware of the following limitation. When you place VMs without public IP addresses in the back-end pool of an internal load balancer, there's no outbound internet connectivity. To allow routing to public end points, you need to perform additional configuration. For more information, see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+> [!IMPORTANT]
+> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need an additional IP address for the VM, deploy a second NIC.
- > [!IMPORTANT]
- > Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) and SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+> [!NOTE]
+> When you're using the standard load balancer, you should be aware of the following limitation. When you place VMs without public IP addresses in the back-end pool of an internal load balancer, there's no outbound internet connectivity. To allow routing to public endpoints, you need to perform additional configuration. For more information, see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+
+> [!IMPORTANT]
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) and SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
### Deploy NFS
The next sections describe the steps to deploy NFS - you'll need to select only
> [!TIP] > You chose to deploy `/han).
-#### Deploy the Azure NetApp Files infrastructure
+#### Deploy the Azure NetApp Files infrastructure
Deploy the Azure NetApp Files volumes for the `/han#set-up-the-azure-netapp-files-infrastructure).
-In this example, you use the following Azure NetApp Files volumes:
+In this example, you use the following Azure NetApp Files volumes:
* volume **HN1**-shared-s1 (nfs://10.23.1.7/**HN1**-shared-s1) * volume **HN1**-shared-s2 (nfs://10.23.1.7/**HN1**-shared-s2)
In this example, the following Azure Files NFS shares were used:
The instructions in the next sections are prefixed with one of the following abbreviations:
-* **[A]**: Applicable to all nodes
-* **[AH]**: Applicable to all HANA DB nodes
-* **[M]**: Applicable to the majority maker node
-* **[AH1]**: Applicable to all HANA DB nodes on SITE 1
-* **[AH2]**: Applicable to all HANA DB nodes on SITE 2
-* **[1]**: Applicable only to HANA DB node 1, SITE 1
-* **[2]**: Applicable only to HANA DB node 1, SITE 2
+* **[A]**: Applicable to all nodes
+* **[AH]**: Applicable to all HANA DB nodes
+* **[M]**: Applicable to the majority maker node
+* **[AH1]**: Applicable to all HANA DB nodes on SITE 1
+* **[AH2]**: Applicable to all HANA DB nodes on SITE 2
+* **[1]**: Applicable only to HANA DB node 1, SITE 1
+* **[2]**: Applicable only to HANA DB node 1, SITE 2
Configure and prepare your operating system by doing the following: 1. **[A]** Maintain the host files on the virtual machines. Include entries for all subnets. The following entries are added to `/etc/hosts` for this example. ```bash
- # Client subnet
- 10.23.0.11 hana-s1-db1
- 10.23.0.12 hana-s1-db1
- 10.23.0.13 hana-s1-db2
- 10.23.0.14 hana-s2-db1
- 10.23.0.15 hana-s2-db2
- 10.23.0.16 hana-s2-db3
- 10.23.0.17 hana-s-mm
- # Internode subnet
- 10.23.1.138 hana-s1-db1-inter
- 10.23.1.139 hana-s1-db2-inter
- 10.23.1.140 hana-s1-db3-inter
- 10.23.1.141 hana-s2-db1-inter
- 10.23.1.142 hana-s2-db2-inter
- 10.23.1.143 hana-s2-db3-inter
- # HSR subnet
- 10.23.1.202 hana-s1-db1-hsr
- 10.23.1.203 hana-s1-db2-hsr
- 10.23.1.204 hana-s1-db3-hsr
- 10.23.1.205 hana-s2-db1-hsr
- 10.23.1.206 hana-s2-db2-hsr
- 10.23.1.207 hana-s2-db3-hsr
+ # Client subnet
+ 10.23.0.11 hana-s1-db1
+ 10.23.0.12 hana-s1-db2
+ 10.23.0.13 hana-s1-db3
+ 10.23.0.14 hana-s2-db1
+ 10.23.0.15 hana-s2-db2
+ 10.23.0.16 hana-s2-db3
+ 10.23.0.17 hana-s-mm
+ # Internode subnet
+ 10.23.1.138 hana-s1-db1-inter
+ 10.23.1.139 hana-s1-db2-inter
+ 10.23.1.140 hana-s1-db3-inter
+ 10.23.1.141 hana-s2-db1-inter
+ 10.23.1.142 hana-s2-db2-inter
+ 10.23.1.143 hana-s2-db3-inter
+ # HSR subnet
+ 10.23.1.202 hana-s1-db1-hsr
+ 10.23.1.203 hana-s1-db2-hsr
+ 10.23.1.204 hana-s1-db3-hsr
+ 10.23.1.205 hana-s2-db1-hsr
+ 10.23.1.206 hana-s2-db2-hsr
+ 10.23.1.207 hana-s2-db3-hsr
``` 1. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with Microsoft for Azure configuration settings.
- <pre><code>
+ ```bash
vi /etc/sysctl.d/ms-az.conf
+
# Add the following entries in the configuration file
+
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness=10
- </code></pre>
+ ```
> [!TIP] > Avoid setting `net.ipv4.ip_local_port_range` and `net.ipv4.ip_local_reserved_ports` explicitly in the `sysctl` configuration files, to allow the SAP host agent to manage the port ranges. For more details, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). 1. **[A]** Install the NFS client package.
- `yum install nfs-utils`
-
+ ```bash
+ yum install nfs-utils
+ ```
-1. **[AH]** Red Hat for HANA configuration.
+1. **[AH]** Red Hat for HANA configuration.
Configure RHEL, as described in the [Red Hat customer portal](https://access.redhat.com/solutions/2447641) and in the following SAP notes:
- - [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690)
- - [2777782 - SAP HANA DB: Recommended OS settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782)
- - [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)
- - [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824)
- - [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607)
+ * [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690)
+ * [2777782 - SAP HANA DB: Recommended OS settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782)
+ * [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)
+ * [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824)
+ * [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607)
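   As one example of the settings those references describe, RHEL provides a tuned profile for SAP HANA; a minimal sketch of installing and applying it:

   ```bash
   # Install and activate the SAP HANA tuned profile described in the Red Hat
   # references above.
   yum install -y tuned-profiles-sap-hana
   systemctl enable --now tuned
   tuned-adm profile sap-hana
   ```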
## Prepare the file systems
In this example, the shared HANA file systems are deployed on Azure NetApp Files
options sunrpc tcp_max_slot_table_entries=128 ```
-1. **[AH]** Create mount points for the HANA database volumes.
+3. **[AH]** Create mount points for the HANA database volumes.
```bash mkdir -p /hana/shared ```
-1. **[AH]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain: `defaultv4iddomain.com`. Make sure the mapping is set to `nobody`.
+4. **[AH]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain: `defaultv4iddomain.com`. Make sure the mapping is set to `nobody`.
(This step is only needed if you're using Azure NetApp Files NFS v4.1.) > [!IMPORTANT]
In this example, the shared HANA file systems are deployed on Azure NetApp Files
Nobody-Group = nobody ```
-1. **[AH]** Verify `nfs4_disable_idmapping`. It should be set to `Y`. To create the directory structure where `nfs4_disable_idmapping` is located, run the mount command. You won't be able to manually create the directory under */sys/modules*, because access is reserved for the kernel or drivers.
+5. **[AH]** Verify `nfs4_disable_idmapping`. It should be set to `Y`. To create the directory structure where `nfs4_disable_idmapping` is located, run the mount command. You won't be able to manually create the directory under */sys/modules*, because access is reserved for the kernel or drivers.
This step is only needed if you're using Azure NetApp Files NFSv4.1. ```bash
In this example, the shared HANA file systems are deployed on Azure NetApp Files
For more information on how to change the `nfs4_disable_idmapping` parameter, see the [Red Hat customer portal](https://access.redhat.com/solutions/1749883).
-1. **[AH1]** Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.
+6. **[AH1]** Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.
```bash sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s1 /hana/shared ```
-1. **[AH2]** Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.
+7. **[AH2]** Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.
```bash sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.23.1.7:/HN1-shared-s2 /hana/shared ```
-1. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs, with NFS protocol version **NFSv4**.
+8. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs, with NFS protocol version **NFSv4**.
```bash sudo nfsstat -m
In this example, the shared HANA file systems are deployed on NFS on Azure Files
### Prepare the data and log local file systems
-In the presented configuration, you deploy file systems `/hana/data` and `/hana/log` on a managed disk, and you attach these file systems locally to each HANA DB VM. Run the following steps to create the local data and log volumes on each HANA DB virtual machine.
+In the presented configuration, you deploy file systems `/hana/data` and `/hana/log` on a managed disk, and you attach these file systems locally to each HANA DB VM. Run the following steps to create the local data and log volumes on each HANA DB virtual machine.
Set up the disk layout with **Logical Volume Manager (LVM)**. The following example assumes that each HANA virtual machine has three data disks attached, and that these disks are used to create two volumes. 1. **[AH]** List all of the available disks: ```bash ls /dev/disk/azure/scsi1/lun* ```
Set up the disk layout with **Logical Volume Manager (LVM)**. The following exam
/dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 /dev/disk/azure/scsi1/lun2 ```
-1. **[AH]** Create physical volumes for all of the disks that you want to use:
+2. **[AH]** Create physical volumes for all of the disks that you want to use:
+ ```bash sudo pvcreate /dev/disk/azure/scsi1/lun0 sudo pvcreate /dev/disk/azure/scsi1/lun1 sudo pvcreate /dev/disk/azure/scsi1/lun2 ```
-1. **[AH]** Create a volume group for the data files. Use one volume group for the log files and one for the shared directory of SAP HANA:
+3. **[AH]** Create a volume group for the data files. Use one volume group for the log files and one for the shared directory of SAP HANA:
+ ```bash sudo vgcreate vg_hana_data_HN1 /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 sudo vgcreate vg_hana_log_HN1 /dev/disk/azure/scsi1/lun2 ```
-1. **[AH]** Create the logical volumes. A *linear* volume is created when you use `lvcreate` without the `-i` switch. We suggest that you create a *striped* volume for better I/O performance. Align the stripe sizes to the values documented in [SAP HANA VM storage configurations](./hana-vm-operations-storage.md). The `-i` argument should be the number of the underlying physical volumes and the `-I` argument is the stripe size. In this article, two physical volumes are used for the data volume, so the `-i` switch argument is set to `2`. The stripe size for the data volume is `256 KiB`. One physical volume is used for the log volume, so you don't need to use explicit `-i` or `-I` switches for the log volume commands.
+4. **[AH]** Create the logical volumes. A *linear* volume is created when you use `lvcreate` without the `-i` switch. We suggest that you create a *striped* volume for better I/O performance. Align the stripe sizes to the values documented in [SAP HANA VM storage configurations](./hana-vm-operations-storage.md). The `-i` argument should be the number of the underlying physical volumes and the `-I` argument is the stripe size. In this article, two physical volumes are used for the data volume, so the `-i` switch argument is set to `2`. The stripe size for the data volume is `256 KiB`. One physical volume is used for the log volume, so you don't need to use explicit `-i` or `-I` switches for the log volume commands.
> [!IMPORTANT] > Use the `-i` switch, and set it to the number of the underlying physical volume, when you use more than one physical volume for each data or log volume. Use the `-I` switch to specify the stripe size when you're creating a striped volume. See [SAP HANA VM storage configurations](./hana-vm-operations-storage.md) for recommended storage configurations, including stripe sizes and number of disks.
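   The `lvcreate` commands themselves fall outside this excerpt; a minimal sketch that matches the volume group names created above (adjust the stripe count and size to your own disk layout):

   ```bash
   # Striped data volume across the two data disks (-i 2, 256 KiB stripe size),
   # and a linear log volume on the single log disk.
   sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
   sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
   sudo mkfs.xfs /dev/vg_hana_data_HN1/hana_data
   ```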
Set up the disk layout with **Logical Volume Manager (LVM)**. The following exam
sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log ```
-1. **[AH]** Create the mount directories and copy the UUID of all of the logical volumes:
+5. **[AH]** Create the mount directories and copy the UUID of all of the logical volumes:
+ ```bash sudo mkdir -p /hana/data/HN1 sudo mkdir -p /hana/log/HN1
Set up the disk layout with **Logical Volume Manager (LVM)**. The following exam
sudo blkid ```
-1. **[AH]** Create `fstab` entries for the logical volumes and mount:
+6. **[AH]** Create `fstab` entries for the logical volumes and mount:
+ ```bash sudo vi /etc/fstab ```
In this example for deploying SAP HANA in a scale-out configuration with HSR on
### Prepare for HANA installation
-1. **[AH]** Before the HANA installation, set the root password. You can disable the root password after the installation has been completed. Run as `root` command `passwd` to set the password.
+1. **[AH]** Before the HANA installation, set the root password. You can disable the root password after the installation has been completed. Run the `passwd` command as `root` to set the password.
+
+2. **[1,2]** Change the permissions on `/hana/shared`.
-1. **[1,2]** Change the permissions on `/hana/shared`.
```bash chmod 775 /hana/shared ```
-1. **[1]** Verify that you can sign in **hana-s1-db2** and **hana-s1-db3** via secure shell (SSH), without being prompted for a password. If that isn't the case, exchange `ssh` keys, as documented in [Using key-based authentication](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/s2-ssh-configuration-keypairs).
+3. **[1]** Verify that you can sign in **hana-s1-db2** and **hana-s1-db3** via secure shell (SSH), without being prompted for a password. If that isn't the case, exchange `ssh` keys, as documented in [Using key-based authentication](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/s2-ssh-configuration-keypairs).
+ ```bash ssh root@hana-s1-db2 ssh root@hana-s1-db3 ```
-1. **[2]** Verify that you can sign in **hana-s2-db2** and **hana-s2-db3** via SSH, without being prompted for a password. If that isn't the case, exchange `ssh` keys, as documented in [Using key-based authentication](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/s2-ssh-configuration-keypairs).
+4. **[2]** Verify that you can sign in **hana-s2-db2** and **hana-s2-db3** via SSH, without being prompted for a password. If that isn't the case, exchange `ssh` keys, as documented in [Using key-based authentication](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/s2-ssh-configuration-keypairs).
+ ```bash ssh root@hana-s2-db2 ssh root@hana-s2-db3 ```
-1. **[AH]** Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824) for RHEL 7.
+5. **[AH]** Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824) for RHEL 7.
```bash # If using RHEL 7
In this example for deploying SAP HANA in a scale-out configuration with HSR on
yum install libatomic libtool-ltdl.x86_64 ```
+6. **[A]** Disable the firewall temporarily, so that it doesn't interfere with the HANA installation. You can re-enable it after the HANA installation is done.
-1. **[A]** Disable the firewall temporarily, so that it doesn't interfere with the HANA installation. You can re-enable it after the HANA installation is done.
```bash # Execute as root systemctl stop firewalld
In this example for deploying SAP HANA in a scale-out configuration with HSR on
### HANA installation on the first node on each site
-1. **[1]** Install SAP HANA by following the instructions in the [SAP HANA 2.0 installation and update guide](https://help.sap.com/viewer/2c1988d620e04368aa4103bf26f17727/2.0.04/en-US/7eb0167eb35e4e2885415205b8383584.html). The following instructions show the SAP HANA installation on the first node on SITE 1.
+1. **[1]** Install SAP HANA by following the instructions in the [SAP HANA 2.0 installation and update guide](https://help.sap.com/viewer/2c1988d620e04368aa4103bf26f17727/2.0.04/en-US/7eb0167eb35e4e2885415205b8383584.html). The following instructions show the SAP HANA installation on the first node on SITE 1.
1. Start the `hdblcm` program as `root` from the HANA installation software directory. Use the `internal_network` parameter and pass the address space of the subnet that's used for the internal HANA internode communication. ```bash ./hdblcm --internal_network=10.23.1.128/26 ``` 1. At the prompt, enter the following values:
- * For **Choose an action**, enter **1** (for install).
- * For **Additional components for installation**, enter **2, 3**.
- * For the installation path, press Enter (defaults to */hana/shared*).
- * For **Local Host Name**, press Enter to accept the default.
- * For **Do you want to add hosts to the system?**, enter **n**.
- * For **SAP HANA System ID**, enter **HN1**.
- * For **Instance number** [00], enter **03**.
- * For **Local Host Worker Group** [default], press Enter to accept the default.
- * For **Select System Usage / Enter index [4]**, enter **4** (for custom).
- * For **Location of Data Volumes** [/hana/data/HN1], press Enter to accept the default.
- * For **Location of Log Volumes** [/hana/log/HN1], press Enter to accept the default.
- * For **Restrict maximum memory allocation?** [n], enter **n**.
- * For **Certificate Host Name For Host hana-s1-db1** [hana-s1-db1], press Enter to accept the default.
- * For **SAP Host Agent User (sapadm) Password**, enter the password.
- * For **Confirm SAP Host Agent User (sapadm) Password**, enter the password.
- * For **System Administrator (hn1adm) Password**, enter the password.
- * For **System Administrator Home Directory** [/usr/sap/HN1/home], press Enter to accept the default.
- * For **System Administrator Login Shell** [/bin/sh], press Enter to accept the default.
- * For **System Administrator User ID** [1001], press Enter to accept the default.
- * For **Enter ID of User Group (sapsys)** [79], press Enter to accept the default.
- * For **System Database User (system) Password**, enter the system's password.
- * For **Confirm System Database User (system) Password**, enter system's password.
- * For **Restart system after machine reboot?** [n], enter **n**.
- * For **Do you want to continue (y/n)**, validate the summary and if everything looks good, enter **y**.
-
-1. **[2]** Repeat the preceding step to install SAP HANA on the first node on SITE 2.
-
-1. **[1,2]** Verify *global.ini*.
+ * For **Choose an action**, enter **1** (for install).
+ * For **Additional components for installation**, enter **2, 3**.
+ * For the installation path, press Enter (defaults to */hana/shared*).
+ * For **Local Host Name**, press Enter to accept the default.
+ * For **Do you want to add hosts to the system?**, enter **n**.
+ * For **SAP HANA System ID**, enter **HN1**.
+ * For **Instance number** [00], enter **03**.
+ * For **Local Host Worker Group** [default], press Enter to accept the default.
+ * For **Select System Usage / Enter index [4]**, enter **4** (for custom).
+ * For **Location of Data Volumes** [/hana/data/HN1], press Enter to accept the default.
+ * For **Location of Log Volumes** [/hana/log/HN1], press Enter to accept the default.
+ * For **Restrict maximum memory allocation?** [n], enter **n**.
+ * For **Certificate Host Name For Host hana-s1-db1** [hana-s1-db1], press Enter to accept the default.
+ * For **SAP Host Agent User (sapadm) Password**, enter the password.
+ * For **Confirm SAP Host Agent User (sapadm) Password**, enter the password.
+ * For **System Administrator (hn1adm) Password**, enter the password.
+ * For **System Administrator Home Directory** [/usr/sap/HN1/home], press Enter to accept the default.
+ * For **System Administrator Login Shell** [/bin/sh], press Enter to accept the default.
+ * For **System Administrator User ID** [1001], press Enter to accept the default.
+ * For **Enter ID of User Group (sapsys)** [79], press Enter to accept the default.
+ * For **System Database User (system) Password**, enter the system's password.
+ * For **Confirm System Database User (system) Password**, enter system's password.
+ * For **Restart system after machine reboot?** [n], enter **n**.
+ * For **Do you want to continue (y/n)**, validate the summary and if everything looks good, enter **y**.
+
+2. **[2]** Repeat the preceding step to install SAP HANA on the first node on SITE 2.
+
+3. **[1,2]** Verify *global.ini*.
Display *global.ini*, and ensure that the configuration for the internal SAP HANA internode communication is in place. Verify the `communication` section. It should have the address space for the `inter` subnet, and `listeninterface` should be set to `.internal`. Verify the `internal_hostname_resolution` section. It should have the IP addresses for the HANA virtual machines that belong to the `inter` subnet.
In this example for deploying SAP HANA in a scale-out configuration with HSR on
10.23.1.140 = hana-s1-db3 ```
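   For reference, a minimal sketch of the sections you should expect on SITE 1, using this article's example addresses:

   ```bash
   # Expected content in global.ini on SITE 1 (illustrative only)
   [communication]
   internal_network = 10.23.1.128/26
   listeninterface = .internal

   [internal_hostname_resolution]
   10.23.1.138 = hana-s1-db1
   10.23.1.139 = hana-s1-db2
   10.23.1.140 = hana-s1-db3
   ```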
-1. **[1,2]** Prepare *global.ini* for installation in non-shared environment, as described in SAP note [2080991](https://launchpad.support.sap.com/#/notes/0002080991).
+4. **[1,2]** Prepare *global.ini* for installation in non-shared environment, as described in SAP note [2080991](https://launchpad.support.sap.com/#/notes/0002080991).
```bash sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
In this example for deploying SAP HANA in a scale-out configuration with HSR on
basepath_shared = no ```
-1. **[1,2]** Restart SAP HANA to activate the changes.
+5. **[1,2]** Restart SAP HANA to activate the changes.
```bash sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem ```
-1. **[1,2]** Verify that the client interface uses the IP addresses from the `client` subnet for communication.
+6. **[1,2]** Verify that the client interface uses the IP addresses from the `client` subnet for communication.
```bash # Execute as hn1adm
In this example for deploying SAP HANA in a scale-out configuration with HSR on
For information about how to verify the configuration, see SAP note [2183363 - Configuration of SAP HANA internal network](https://launchpad.support.sap.com/#/notes/2183363).
-1. **[AH]** Change permissions on the data and log directories to avoid a HANA installation error.
+7. **[AH]** Change permissions on the data and log directories to avoid a HANA installation error.
```bash sudo chmod o+w -R /hana/data /hana/log ```
-1. **[1]** Install the secondary HANA nodes. The example instructions in this step are for SITE 1.
- 1. Start the resident `hdblcm` program as `root`.
+8. **[1]** Install the secondary HANA nodes. The example instructions in this step are for SITE 1.
+
+ 1. Start the resident `hdblcm` program as `root`.
+ ```bash cd /hana/shared/HN1/hdblcm ./hdblcm ```
- 1. At the prompt, enter the following values:
-
- * For **Choose an action**, enter **2** (for add hosts).
- * For **Enter comma separated host names to add**, enter hana-s1-db2, hana-s1-db3.
- * For **Additional components for installation**, enter **2, 3**.
- * For **Enter Root User Name [root]**, press Enter to accept the default.
- * For **Select roles for host 'hana-s1-db2' [1]**, select 1 (for worker).
- * For **Enter Host Failover Group for host 'hana-s1-db2' [default]**, press Enter to accept the default.
- * For **Enter Storage Partition Number for host 'hana-s1-db2' [\<\<assign automatically\>\>]**, press Enter to accept the default.
- * For **Enter Worker Group for host 'hana-s1-db2' [default]**, press Enter to accept the default.
- * For **Select roles for host 'hana-s1-db3' [1]**, select 1 (for worker).
- * For **Enter Host Failover Group for host 'hana-s1-db3' [default]**, press Enter to accept the default.
- * For **Enter Storage Partition Number for host 'hana-s1-db3' [\<\<assign automatically\>\>]**, press Enter to accept the default.
- * For **Enter Worker Group for host 'hana-s1-db3' [default]**, press Enter to accept the default.
- * For **System Administrator (hn1adm) Password**, enter the password.
- * For **Enter SAP Host Agent User (sapadm) Password**, enter the password.
- * For **Confirm SAP Host Agent User (sapadm) Password**, enter the password.
- * For **Certificate Host Name For Host hana-s1-db2** [hana-s1-db2], press Enter to accept the default.
- * For **Certificate Host Name For Host hana-s1-db3** [hana-s1-db3], press Enter to accept the default.
- * For **Do you want to continue (y/n)**, validate the summary and if everything looks good, enter **y**.
-
-1. **[2]** Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2.
+ 2. At the prompt, enter the following values:
+
+ * For **Choose an action**, enter **2** (for add hosts).
+ * For **Enter comma separated host names to add**, enter hana-s1-db2, hana-s1-db3.
+ * For **Additional components for installation**, enter **2, 3**.
+ * For **Enter Root User Name [root]**, press Enter to accept the default.
+ * For **Select roles for host 'hana-s1-db2' [1]**, select 1 (for worker).
+ * For **Enter Host Failover Group for host 'hana-s1-db2' [default]**, press Enter to accept the default.
+ * For **Enter Storage Partition Number for host 'hana-s1-db2' [\<\<assign automatically\>\>]**, press Enter to accept the default.
+ * For **Enter Worker Group for host 'hana-s1-db2' [default]**, press Enter to accept the default.
+ * For **Select roles for host 'hana-s1-db3' [1]**, select 1 (for worker).
+ * For **Enter Host Failover Group for host 'hana-s1-db3' [default]**, press Enter to accept the default.
+ * For **Enter Storage Partition Number for host 'hana-s1-db3' [\<\<assign automatically\>\>]**, press Enter to accept the default.
+ * For **Enter Worker Group for host 'hana-s1-db3' [default]**, press Enter to accept the default.
+ * For **System Administrator (hn1adm) Password**, enter the password.
+ * For **Enter SAP Host Agent User (sapadm) Password**, enter the password.
+ * For **Confirm SAP Host Agent User (sapadm) Password**, enter the password.
+ * For **Certificate Host Name For Host hana-s1-db2** [hana-s1-db2], press Enter to accept the default.
+ * For **Certificate Host Name For Host hana-s1-db3** [hana-s1-db3], press Enter to accept the default.
+ * For **Do you want to continue (y/n)**, validate the summary and if everything looks good, enter **y**.
+
+9. **[2]** Repeat the preceding step to install the secondary SAP HANA nodes on SITE 2.
## Configure SAP HANA 2.0 system replication
The following steps get you set up for system replication:
hdbnsutil -sr_enable --name=HANA_S1 ```
-1. **[2]** Configure system replication on SITE 2:
-
+2. **[2]** Configure system replication on SITE 2:
+ Register the second site to start the system replication. Run the following command as <hanasid\>adm: ```bash
The following steps get you set up for system replication:
sapcontrol -nr 03 -function StartSystem ```
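   The registration command itself isn't fully visible in this excerpt. A minimal sketch, run as <hanasid\>adm on the first node of SITE 2, assuming synchronous replication against SITE 1's first node:

   ```bash
   # Stop HANA, register SITE 2 against the SITE 1 primary, then start HANA again.
   sapcontrol -nr 03 -function StopSystem
   hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 --replicationMode=sync --name=HANA_S2
   sapcontrol -nr 03 -function StartSystem
   ```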
-1. **[1]** Check the replication status and wait until all databases are in sync.
+3. **[1]** Check the replication status and wait until all databases are in sync.
```bash sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py"
- # | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication |
- # | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details |
- # | -- | - | -- | | | - | | - | | | | - | -- | -- | -- |
- # | HN1 | hana-s1-db3 | 30303 | indexserver | 5 | 1 | HANA_S1 | hana-s2-db3 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
- # | SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 1 | HANA_S1 | hana-s2-db1 | 30301 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
- # | HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 1 | HANA_S1 | hana-s2-db1 | 30307 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
- # | HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 1 | HANA_S1 | hana-s2-db1 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
- # | HN1 | hana-s1-db2 | 30303 | indexserver | 4 | 1 | HANA_S1 | hana-s2-db2 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
- #
- # status system replication site "2": ACTIVE
- # overall system replication status: ACTIVE
- #
- # Local System Replication State
- #
- # mode: PRIMARY
- # site id: 1
- # site name: HANA_S1
+
+ # | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication |
+ # | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details |
+ # | -- | - | -- | | | - | | - | | | | - | -- | -- | -- |
+ # | HN1 | hana-s1-db3 | 30303 | indexserver | 5 | 1 | HANA_S1 | hana-s2-db3 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ # | SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 1 | HANA_S1 | hana-s2-db1 | 30301 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ # | HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 1 | HANA_S1 | hana-s2-db1 | 30307 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ # | HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 1 | HANA_S1 | hana-s2-db1 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ # | HN1 | hana-s1-db2 | 30303 | indexserver | 4 | 1 | HANA_S1 | hana-s2-db2 | 30303 | 2 | HANA_S2 | YES | SYNC | ACTIVE | |
+ #
+ # status system replication site "2": ACTIVE
+ # overall system replication status: ACTIVE
+ #
+ # Local System Replication State
+ #
+ # mode: PRIMARY
+ # site id: 1
+ # site name: HANA_S1
```
-1. **[1,2]** Change the HANA configuration so that communication for HANA system replication is directed though the HANA system replication virtual network interfaces.
+4. **[1,2]** Change the HANA configuration so that communication for HANA system replication is directed through the HANA system replication virtual network interfaces.
1. Stop HANA on both sites. ```bash sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB ```
- 1. Edit *global.ini* to add the host mapping for HANA system replication. Use the IP addresses from the `hsr` subnet.
+ 2. Edit *global.ini* to add the host mapping for HANA system replication. Use the IP addresses from the `hsr` subnet.
+ ```bash sudo vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini #Add the section
The following steps get you set up for system replication:
10.23.1.207 = hana-s2-db3 ```
- 1. Start HANA on both sites.
- ```bash
- sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB
- ```
+ 3. Start HANA on both sites.
+
+ ```bash
+ sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB
+ ```
- For more information, see [Host name resolution for system replication](https://help.sap.com/viewer/eb3777d5495d46c5b2fa773206bbfb46/1.0.12/en-US/c0cba1cb2ba34ec89f45b48b2157ec7b.html).
+ For more information, see [Host name resolution for system replication](https://help.sap.com/viewer/eb3777d5495d46c5b2fa773206bbfb46/1.0.12/en-US/c0cba1cb2ba34ec89f45b48b2157ec7b.html).
+
+5. **[AH]** Re-enable the firewall and open the necessary ports.
-1. **[AH]** Re-enable the firewall and open the necessary ports.
1. Re-enable the firewall. ```bash # Execute as root systemctl start firewalld systemctl enable firewalld ```
- 1. Open the necessary firewall ports. You will need to adjust the ports for your HANA instance number.
+ 2. Open the necessary firewall ports. You will need to adjust the ports for your HANA instance number.
> [!IMPORTANT] > Create firewall rules to allow HANA internode communication and client traffic. The required ports are listed on [TCP/IP ports of all SAP products](https://help.sap.com/viewer/ports). The following commands are just an example. In this scenario, you use system number 03.
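   As an illustrative sketch only, for instance number 03 (consult the SAP port list for the complete set your scenario needs):

   ```bash
   # Open example HANA ports for instance 03; repeat for every required port.
   sudo firewall-cmd --zone=public --add-port=30301/tcp --permanent
   sudo firewall-cmd --zone=public --add-port=30303/tcp --permanent
   sudo firewall-cmd --zone=public --add-port=30307/tcp --permanent
   sudo firewall-cmd --zone=public --reload
   ```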
The following steps get you set up for system replication:
To create a basic Pacemaker cluster, follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux in Azure](high-availability-guide-rhel-pacemaker.md). Include all virtual machines, including the majority maker, in the cluster. > [!IMPORTANT]
-> Don't set `quorum expected-votes` to 2. This isn't a two-node cluster. Make sure that the cluster property `concurrent-fencing` is enabled, so that node fencing is deserialized.
+> Don't set `quorum expected-votes` to 2. This isn't a two-node cluster. Make sure that the cluster property `concurrent-fencing` is enabled, so that node fencing is deserialized.
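For example, a quick sketch of checking and, if needed, setting that property:

```bash
# Verify the current value, then enable deserialized (concurrent) fencing.
pcs property show concurrent-fencing
pcs property set concurrent-fencing=true
```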
## Create file system resources
For the next part of this process, you need to create file system resources. Her
sapcontrol -nr 03 -function StopSystem ```
-1. **[AH]** Unmount file system `/hana/shared`, which was temporarily mounted for the installation on all HANA DB VMs. Before you can unmount it, you need to stop any processes and sessions that are using the file system.
-
+2. **[AH]** Unmount file system `/hana/shared`, which was temporarily mounted for the installation on all HANA DB VMs. Before you can unmount it, you need to stop any processes and sessions that are using the file system.
+ ```bash umount /hana/shared ```
-1. **[1]** Create the file system cluster resources for `/hana/shared` in the disabled state. You use `--disabled` because you have to define the location constraints before the mounts are enabled.
-You chose to deploy /han).
+3. **[1]** Create the file system cluster resources for `/hana/shared` in the disabled state. You use `--disabled` because you have to define the location constraints before the mounts are enabled.
+ You chose to deploy /han).
- - In this example, the '/hana/shared' file system is deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section, only if you're using NFS on Azure NetApp Files.
+ * In this example, the '/hana/shared' file system is deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section, only if you're using NFS on Azure NetApp Files.
```bash # /hana/shared file system for site 1 pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \ fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \ op start interval=0 timeout=120 op stop interval=0 timeout=120
- # /hana/shared file system for site 2
+
+ # /hana/shared file system for site 2
pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s2 directory=/hana/shared \ fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \ op start interval=0 timeout=120 op stop interval=0 timeout=120
+
+ # clone the /hana/shared file system resources for both site1 and site2
+ pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
+ pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true
- # clone the /hana/shared file system resources for both site1 and site2
- pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
- pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true
- ```
-
The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals on Azure NetApp Files. For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf).
- - In this example, the '/hana/shared' file system is deployed on NFS on Azure Files. Follow the steps in this section, only if you're using NFS on Azure Files.
+ * In this example, the '/hana/shared' file system is deployed on NFS on Azure Files. Follow the steps in this section, only if you're using NFS on Azure Files.
```bash # /hana/shared file system for site 1 pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 directory=/hana/shared \ fstype=nfs options='defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \ op start interval=0 timeout=120 op stop interval=0 timeout=120
-
- # /hana/shared file system for site 2
+
+ # /hana/shared file system for site 2
pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 directory=/hana/shared \ fstype=nfs options='defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \ op start interval=0 timeout=120 op stop interval=0 timeout=120
- # clone the /hana/shared file system resources for both site1 and site2
+
+ # clone the /hana/shared file system resources for both site1 and site2
pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true ```
- The `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system might remain mounted, despite being inaccessible.
+ The `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system might remain mounted, despite being inaccessible.
- The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, and then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAP HANA resource depends on the failed resource, but it also can fail altogether. The SAP HANA resource can't stop successfully, if the NFS share holding the HANA binaries is inaccessible.
+ The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, and then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAP HANA resource depends on the failed resource, but it also can fail altogether. The SAP HANA resource can't stop successfully, if the NFS share holding the HANA binaries is inaccessible.
- The timeouts in the above configurations may need to be adapted to the specific SAP setup.
-
+ The timeouts in the above configurations may need to be adapted to the specific SAP setup.
-1. **[1]** Configure and verify the node attributes. All SAP HANA DB nodes on replication site 1 are assigned attribute `S1`, and all SAP HANA DB nodes on replication site 2 are assigned attribute `S2`.
+4. **[1]** Configure and verify the node attributes. All SAP HANA DB nodes on replication site 1 are assigned attribute `S1`, and all SAP HANA DB nodes on replication site 2 are assigned attribute `S2`.
```bash # HANA replication site 1 pcs node attribute hana-s1-db1 NFS_SID_SITE=S1 pcs node attribute hana-s1-db2 NFS_SID_SITE=S1 pcs node attribute hana-s1-db3 NFS_SID_SITE=S1
- # HANA replication site 2
+ # HANA replication site 2
pcs node attribute hana-s2-db1 NFS_SID_SITE=S2 pcs node attribute hana-s2-db2 NFS_SID_SITE=S2 pcs node attribute hana-s2-db3 NFS_SID_SITE=S2
- #To verify the attribute assignment to nodes execute
+ # To verify the attribute assignment to nodes, execute
pcs node attribute ```
-1. **[1]** Configure the constraints that determine where the NFS file systems will be mounted, and enable the file system resources.
+5. **[1]** Configure the constraints that determine where the NFS file systems will be mounted, and enable the file system resources.
+
```bash # Configure the constraints pcs constraint location fs_hana_shared_s1-clone rule resource-discovery=never score=-INFINITY NFS_SID_SITE ne S1
You chose to deploy /hana/shared' on [NFS share on Azure Files](../../storage/fi
```

When you enable the file system resources, the cluster will mount the `/hana/shared` file systems.
-
-1. **[AH]** Verify that the Azure NetApp Files volumes are mounted under `/hana/shared`, on all HANA DB VMs on both sites.
- - Example, if using Azure NetApp Files:
+6. **[AH]** Verify that the Azure NetApp Files volumes are mounted under `/hana/shared`, on all HANA DB VMs on both sites.
+
+ * Example, if using Azure NetApp Files:
+ ```bash
+ sudo nfsstat -m
+ # Verify that flag vers is set to 4.1
You chose to deploy /hana/shared' on [NFS share on Azure Files](../../storage/fi
/hana/shared from 10.23.1.7:/HN1-shared-s2
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr=10.23.1.7
```
- - Example, if using Azure Files NFS:
+
+ * Example, if using Azure Files NFS:
```bash
sudo nfsstat -m
You chose to deploy /hana/shared' on [NFS share on Azure Files](../../storage/fi
 Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr=10.23.0.35
```
-1. **[1]** Configure and clone the attribute resources, and configure the constraints, as follows:
+7. **[1]** Configure and clone the attribute resources, and configure the constraints, as follows:
```bash
# Configure the attribute resources
pcs resource create hana_nfs_s1_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs_s1_active
pcs resource create hana_nfs_s2_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs_s2_active
- # Clone the attribute resources
+ # Clone the attribute resources
pcs resource clone hana_nfs_s1_active meta clone-node-max=1 interleave=true
pcs resource clone hana_nfs_s2_active meta clone-node-max=1 interleave=true
- # Configure the constraints, which will set the attribute values
+ # Configure the constraints, which will set the attribute values
pcs constraint order fs_hana_shared_s1-clone then hana_nfs_s1_active-clone
pcs constraint order fs_hana_shared_s2-clone then hana_nfs_s2_active-clone
```
You chose to deploy /hana/shared' on [NFS share on Azure Files](../../storage/fi
> [!TIP]
> If your configuration includes file systems other than `/hana/shared`, and these file systems are NFS mounted, then include the `sequential=false` option. This option ensures that there are no ordering dependencies among the file systems. All NFS mounted file systems must start before the corresponding attribute resource, but they don't need to start in any order relative to each other. For more information, see [How do I configure SAP HANA scale-out HSR in a Pacemaker cluster when the HANA file systems are NFS shares](https://access.redhat.com/solutions/5423971).
-1. **[1]** Place Pacemaker in maintenance mode, in preparation for the creation of the HANA cluster resources.
+8. **[1]** Place Pacemaker in maintenance mode, in preparation for the creation of the HANA cluster resources.
+ ```bash
+ pcs property set maintenance-mode=true
+ ```
You chose to deploy /hana/shared' on [NFS share on Azure Files](../../storage/fi
Now you're ready to create the cluster resources:
-1. **[A]** Install the HANA scale-out resource agent on all cluster nodes, including the majority maker.
+1. **[A]** Install the HANA scale-out resource agent on all cluster nodes, including the majority maker.
```bash
yum install -y resource-agents-sap-hana-scaleout
```

> [!NOTE]
> For the minimum supported version of package `resource-agents-sap-hana-scaleout` for your operating system release, see [Support policies for RHEL HA clusters - Management of SAP HANA in a cluster](https://access.redhat.com/articles/3397471).
-1. **[1,2]** Install the HANA system replication hook on one HANA DB node on each system replication site. SAP HANA should still be down.
+2. **[1,2]** Install the HANA system replication hook on one HANA DB node on each system replication site. SAP HANA should still be down.
+
+ 1. Prepare the hook as `root`.
- 1. Prepare the hook as `root`.
```bash
- mkdir -p /hana/shared/myHooks
- cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
- chown -R hn1adm:sapsys /hana/shared/myHooks
+ mkdir -p /hana/shared/myHooks
+ cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
+ chown -R hn1adm:sapsys /hana/shared/myHooks
```
- 1. Adjust `global.ini`.
+ 2. Adjust `global.ini`.
+ ```bash
+ # add to global.ini
+ [ha_dr_provider_SAPHanaSR]
+ provider = SAPHanaSR
+ path = /hana/shared/myHooks
+ execution_order = 1
-
+
[trace]
ha_dr_saphanasr = info
```
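The snippet above is configuration content rather than commands. One way to apply it, as a sketch, is to edit the custom `global.ini` as the HANA administration user; the file path below assumes a standard installation layout for SID `HN1` and should be verified on your system.

```bash
# Sketch: the path assumes a standard HN1 installation layout; verify it first.
su - hn1adm
vi /usr/sap/HN1/SYS/global/hdb/custom/config/global.ini
```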
-1. **[AH]** The cluster requires sudoers configuration on the cluster node for <sid\>adm. In this example, you achieve this by creating a new file. Run the commands as `root`.
+3. **[AH]** The cluster requires sudoers configuration on the cluster node for <sid\>adm. In this example, you achieve this by creating a new file. Run the commands as `root`.
+ ```bash
+ sudo visudo -f /etc/sudoers.d/20-saphana
+ # Insert the following lines and then save
Now you're ready to create the cluster resources:
Defaults!SOK, SFAIL !requiretty
```
-1. **[1,2]** Start SAP HANA on both replication sites. Run as <sid\>adm.
+4. **[1,2]** Start SAP HANA on both replication sites. Run as <sid\>adm.
```bash
sapcontrol -nr 03 -function StartSystem
```
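To confirm that the instances came up, a quick check is to list them with their dispatch status. This sketch uses a standard `sapcontrol` function:

```bash
# Run as <sid>adm; expect dispstatus GREEN for each instance once started
sapcontrol -nr 03 -function GetSystemInstanceList
```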
-1. **[1]** Verify the hook installation. Run as <sid\>adm on the active HANA system replication site.
+5. **[1]** Verify the hook installation. Run as <sid\>adm on the active HANA system replication site.
```bash
cdtrace
Now you're ready to create the cluster resources:
{ printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
# Example entries
- # 2020-07-21 22:04:32.364379 ha_dr_SAPHanaSR SFAIL
- # 2020-07-21 22:04:46.905661 ha_dr_SAPHanaSR SFAIL
+ # 2020-07-21 22:04:32.364379 ha_dr_SAPHanaSR SFAIL
+ # 2020-07-21 22:04:46.905661 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.092016 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:52.782774 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:04:53.117492 ha_dr_SAPHanaSR SFAIL
# 2020-07-21 22:06:35.599324 ha_dr_SAPHanaSR SOK
```
-1. **[1]** Create the HANA cluster resources. Run the following commands as `root`.
+6. **[1]** Create the HANA cluster resources. Run the following commands as `root`.
1. Make sure the cluster is already in maintenance mode.
-
- 1. Next, create the HANA topology resource.
- If you're building a RHEL **7.x** cluster, use the following commands:
+
+ 2. Next, create the HANA topology resource.
+ If you're building a RHEL **7.x** cluster, use the following commands:
+ ```bash
+ pcs resource create SAPHanaTopology_HN1_HDB03 SAPHanaTopologyScaleOut \
+  SID=HN1 InstanceNumber=03 \
+  op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600
+
pcs resource clone SAPHanaTopology_HN1_HDB03 meta clone-node-max=1 interleave=true
```
- If you're building a RHEL >= **8.x** cluster, use the following commands:
+ If you're building a RHEL >= **8.x** cluster, use the following commands:
+ ```bash
+ pcs resource create SAPHanaTopology_HN1_HDB03 SAPHanaTopology \
+  SID=HN1 InstanceNumber=03 meta clone-node-max=1 interleave=true \
+  op methods interval=0s timeout=5 \
+  op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600
+
pcs resource clone SAPHanaTopology_HN1_HDB03 meta clone-node-max=1 interleave=true
```
- 1. Create the HANA instance resource.
+ 3. Create the HANA instance resource.
+
 > [!NOTE]
 > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
- If you're building a RHEL **7.x** cluster, use the following commands:
+
+ If you're building a RHEL **7.x** cluster, use the following commands:
+ ```bash
+ pcs resource create SAPHana_HN1_HDB03 SAPHanaController \
+  SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
+  op start interval=0 timeout=3600 op stop interval=0 timeout=3600 op promote interval=0 timeout=3600 \
+  op monitor interval=60 role="Master" timeout=700 op monitor interval=61 role="Slave" timeout=700
-
+
pcs resource master msl_SAPHana_HN1_HDB03 SAPHana_HN1_HDB03 \
 meta master-max="1" clone-node-max=1 interleave=true
```
- If you're building a RHEL >= **8.x** cluster, use the following commands:
+ If you're building a RHEL >= **8.x** cluster, use the following commands:
+ ```bash
+ pcs resource create SAPHana_HN1_HDB03 SAPHanaController \
+  SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
+  op demote interval=0s timeout=320 op methods interval=0s timeout=5 \
+  op start interval=0 timeout=3600 op stop interval=0 timeout=3600 op promote interval=0 timeout=3600 \
+  op monitor interval=60 role="Master" timeout=700 op monitor interval=61 role="Slave" timeout=700
-
+
pcs resource promotable SAPHana_HN1_HDB03 \
 meta master-max="1" clone-node-max=1 interleave=true
```

+ > [!IMPORTANT]
- > It's a good idea to set `AUTOMATED_REGISTER` to `false`, while you're performing failover tests, to prevent a failed primary instance to automatically register as secondary. After testing, as a best practice, set `AUTOMATED_REGISTER` to `true`, so that after takeover, system replication can resume automatically.
+ > It's a good idea to set `AUTOMATED_REGISTER` to `false` while you're performing failover tests, to prevent a failed primary instance from automatically registering as secondary. After testing, as a best practice, set `AUTOMATED_REGISTER` to `true`, so that after takeover, system replication can resume automatically.
+
+ 4. Create the virtual IP and associated resources.
- 1. Create the virtual IP and associated resources.
```bash
pcs resource create vip_HN1_03 ocf:heartbeat:IPaddr2 ip=10.23.0.18 op monitor interval="10s" timeout="20s"
sudo pcs resource create nc_HN1_03 azure-lb port=62503
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03
```
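The `azure-lb` resource answers the load balancer's health probe on the configured port (62503 here), which is how the load balancer finds the node that currently holds the virtual IP. As a quick sketch, you can verify the listener on the node that runs `g_ip_HN1_03`:

```bash
# Sketch: on the node hosting g_ip_HN1_03, the probe port should be listening
ss -tnlp | grep 62503
```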
- 1.
- 2. Create the cluster constraints.
- If you're building a RHEL **7.x** cluster, use the following commands:
+ 5. Create the cluster constraints.
+
+ If you're building a RHEL **7.x** cluster, use the following commands:
+ ```bash
+ # Start HANA topology, before the HANA instance
+ pcs constraint order SAPHanaTopology_HN1_HDB03-clone then msl_SAPHana_HN1_HDB03
+
pcs constraint colocation add g_ip_HN1_03 with master msl_SAPHana_HN1_HDB03 4000
# HANA resources are only allowed to run on a node if the node's NFS file systems are mounted. The constraint also avoids the majority maker node.
pcs constraint location SAPHanaTopology_HN1_HDB03-clone rule resource-discovery=never score=-INFINITY hana_nfs_s1_active ne true and hana_nfs_s2_active ne true
```
-
- If you're building a RHEL >= **8.x** cluster, use the following commands:
+
+ If you're building a RHEL >= **8.x** cluster, use the following commands:
+ ```bash
+ # Start HANA topology, before the HANA instance
+ pcs constraint order SAPHanaTopology_HN1_HDB03-clone then SAPHana_HN1_HDB03-clone
+
pcs constraint colocation add g_ip_HN1_03 with master SAPHana_HN1_HDB03-clone 4000
# HANA resources are only allowed to run on a node if the node's NFS file systems are mounted. The constraint also avoids the majority maker node.
pcs constraint location SAPHanaTopology_HN1_HDB03-clone rule resource-discovery=never score=-INFINITY hana_nfs_s1_active ne true and hana_nfs_s2_active ne true
```
-1. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is `ok`, and that all of the resources are started.
+7. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is `ok`, and that all of the resources are started.
+ ```bash
+ sudo pcs property set maintenance-mode=false
+ # If there are failed cluster resources, you may need to run the next command
Now you're ready to create the cluster resources:
> [!NOTE] > The timeouts in the preceding configuration are just examples, and might need to be adapted to the specific HANA setup. For instance, you might need to increase the start timeout, if it takes longer to start the SAP HANA database.
-
+

## Configure HANA active/read-enabled system replication

Starting with SAP HANA 2.0 SPS 01, SAP allows active/read-enabled setups for SAP HANA system replication. With this capability, you can use the secondary systems of SAP HANA system replication actively for read-intensive workloads. To support such a setup in a cluster, you need a second virtual IP address, which allows clients to access the secondary read-enabled SAP HANA database. To ensure that the secondary replication site can still be accessed after a takeover has occurred, the cluster needs to move the virtual IP address around with the secondary of the SAP HANA resource.
Before proceeding further, make sure you have fully configured a Red Hat high av
### Additional setup in Azure Load Balancer for active/read-enabled setup
-To proceed with provisioning your second virtual IP, make sure you have configured Azure Load Balancer as described in [Deploy Azure Load Balancer](#deploy-azure-load-balancer).
+To proceed with provisioning your second virtual IP, make sure you have configured Azure Load Balancer as described in [Configure Azure Load Balancer](#configure-azure-load-balancer).
For the *standard* load balancer, follow these additional steps on the same load balancer that you created in the earlier section.
-1. Create a second front-end IP pool:
+1. Create a second front-end IP pool:
1. Open the load balancer, select **frontend IP pool**, and select **Add**.
1. Enter the name of the second front-end IP pool (for example, *hana-secondaryIP*).
For the *standard* load balancer, follow these additional steps on the same load
The steps to configure HANA system replication are described in the [Configure SAP HANA 2.0 system replication](#configure-sap-hana-20-system-replication) section. If you're deploying a read-enabled secondary scenario, while you're configuring system replication on the second node, run the following command as **hanasid**adm:
-```
+```bash
sapcontrol -nr 03 -function StopWait 600 10

hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 --replicationMode=sync --name=HANA_S2 --operationMode=logreplay_readaccess
hdbnsutil -sr_register --remoteHost=hana-s1-db1 --remoteInstance=03 --replicatio
### Add a secondary virtual IP address resource for an active/read-enabled setup
-You can configure the second virtual IP and the additional constraints with the following commands. If the secondary instance is down, the secondary virtual IP will be switched to the primary.
+You can configure the second virtual IP and the additional constraints with the following commands. If the secondary instance is down, the secondary virtual IP will be switched to the primary.
-```
+```bash
pcs property set maintenance-mode=true

pcs resource create secvip_HN1_03 ocf:heartbeat:IPaddr2 ip="10.23.0.19"
pcs constraint colocation add g_secip_HN1_03 with Slave msl_SAPHana_HN1_HDB03 5
pcs property set maintenance-mode=false
```

Make sure that the cluster status is `ok`, and that all of the resources are started. The second virtual IP will run on the secondary site along with the SAP HANA secondary resource.
-```
+```bash
# Example output from crm_mon
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#
In the next section, you can find the typical set of failover tests to run.

When you're testing a HANA cluster configured with a read-enabled secondary, be aware of the following behavior of the second virtual IP:

-- When cluster resource **SAPHana_HN1_HDB03** moves to the secondary site (**S2**), the second virtual IP will move to the other site, **hana-s1-db1**. If you have configured `AUTOMATED_REGISTER="false"`, and HANA system replication isn't registered automatically, then the second virtual IP will run on **hana-s2-db1**.
When you're testing a HANA cluster configured with a read-enabled secondary, be aware of the following behavior of the second virtual IP: -- When cluster resource **SAPHana_HN1_HDB03** moves to the secondary site (**S2**), the second virtual IP will move to the other site, **hana-s1-db1**. If you have configured `AUTOMATED_REGISTER="false"`, and HANA system replication isn't registered automatically, then the second virtual IP will run on **hana-s2-db1**.
+* When cluster resource **SAPHana_HN1_HDB03** moves to the secondary site (**S2**), the second virtual IP will move to the other site, **hana-s1-db1**. If you have configured `AUTOMATED_REGISTER="false"`, and HANA system replication isn't registered automatically, then the second virtual IP will run on **hana-s2-db1**.
-- When you're testing server crash, the second virtual IP resources (**secvip_HN1_03**) and the Azure Load Balancer port resource (**secnc_HN1_03**) run on the primary server, alongside the primary virtual IP resources. While the secondary server is down, the applications that are connected to the read-enabled HANA database will connect to the primary HANA database. This behavior is expected. It allows applications that are connected to the read-enabled HANA database to operate while a secondary server is unavailable.
+* When you're testing a server crash, the second virtual IP resources (**secvip_HN1_03**) and the Azure Load Balancer port resource (**secnc_HN1_03**) run on the primary server, alongside the primary virtual IP resources. While the secondary server is down, the applications that are connected to the read-enabled HANA database will connect to the primary HANA database. This behavior is expected. It allows applications that are connected to the read-enabled HANA database to operate while a secondary server is unavailable.
-- During failover and fallback, the existing connections for applications that are using the second virtual IP to connect to the HANA database might be interrupted.
+* During failover and fallback, the existing connections for applications that are using the second virtual IP to connect to the HANA database might be interrupted.
-## Test SAP HANA failover
+## Test SAP HANA failover
1. Before you start a test, check the cluster and SAP HANA system replication status.
- 1. Verify that there are no failed cluster actions.
+ 1. Verify that there are no failed cluster actions.
+
```bash
# Verify that there are no failed cluster actions
pcs status
When you're testing a HANA cluster configured with a read-enabled secondary, be
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hana-s1-db1
```
- 1. Verify that SAP HANA system replication is in sync.
+ 2. Verify that SAP HANA system replication is in sync.
```bash
# Verify HANA HSR is in sync
When you're testing a HANA cluster configured with a read-enabled secondary, be
#| SYSTEMDB | hana-s1-db1 | 30301 | nameserver | 1 | 2 | HANA_S1 | hana-s2-db1 | 30301 | 1 | HANA_S2 | YES | SYNC | ACTIVE | |
#| HN1 | hana-s1-db1 | 30307 | xsengine | 2 | 2 | HANA_S1 | hana-s2-db1 | 30307 | 1 | HANA_S2 | YES | SYNC | ACTIVE | |
#| HN1 | hana-s1-db1 | 30303 | indexserver | 3 | 2 | HANA_S1 | hana-s2-db1 | 30303 | 1 | HANA_S2 | YES | SYNC | ACTIVE | |
+
#status system replication site "1": ACTIVE
#overall system replication status: ACTIVE
+
#Local System Replication State
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
#mode: PRIMARY
#site id: 1
#site name: HANA_S1
```
-1. Verify the cluster configuration for a failure scenario, when a node loses access to the NFS share (`/hana/shared`).
+2. Verify the cluster configuration for a failure scenario, when a node loses access to the NFS share (`/hana/shared`).
The SAP HANA resource agents depend on binaries, stored on `/hana/shared`, to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. A test that can be performed is to create a temporary firewall rule to block access to the `/hana/shared` NFS mounted file system on one of the primary site VMs. This approach validates that the cluster will fail over if access to `/hana/shared` is lost on the active system replication site.
- **Expected result**: When you block the access to the `/hana/shared` NFS mounted file system on one of the primary site VMs, the monitoring operation that performs read/write operation on file system, will fail, as it is not able to access the file system and will trigger HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
-
+ **Expected result**: When you block access to the `/hana/shared` NFS mounted file system on one of the primary site VMs, the monitoring operation that performs a read/write operation on the file system fails, because it can't access the file system, and triggers a HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
+
 You can check the state of the cluster resources by running `crm_mon` or `pcs status`. Resource state before starting the test:

 ```bash
 # Output of crm_mon
 #7 nodes configured
 #45 resources configured
+
#Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
#
#Active resources:
+
#rsc_st_azure (stonith:fence_azure_arm): Started hana-s-mm
# Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
#     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
When you're testing a HANA cluster configured with a read-enabled secondary, be
iptables -A INPUT -s 10.23.1.7 -j DROP; iptables -A OUTPUT -d 10.23.1.7 -j DROP
```
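After the test, remove the temporary block again. The following sketch mirrors the rules above; `iptables -D` deletes a rule with the same specification that `-A` added:

```bash
# Remove the temporary test rules (same specification, -D instead of -A)
iptables -D INPUT -s 10.23.1.7 -j DROP; iptables -D OUTPUT -d 10.23.1.7 -j DROP
```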
- The HANA VM that lost access to `/hana/shared` should restart or stop, depending on the cluster configuration. The cluster resources are migrated to the other HANA system replication site.
-
- If the cluster hasn't started on the VM that was restarted, start the cluster by running the following:
+ The HANA VM that lost access to `/hana/shared` should restart or stop, depending on the cluster configuration. The cluster resources are migrated to the other HANA system replication site.
+
+ If the cluster hasn't started on the VM that was restarted, start the cluster by running the following:
```bash
# Start the cluster
pcs cluster start
```
-
- When the cluster starts, file system `/hana/shared` is automatically mounted. If you set `AUTOMATED_REGISTER="false"`, you will need to configure SAP HANA system replication on the secondary site. In this case, you can run these commands to reconfigure SAP HANA as secondary.
+
+ When the cluster starts, file system `/hana/shared` is automatically mounted. If you set `AUTOMATED_REGISTER="false"`, you will need to configure SAP HANA system replication on the secondary site. In this case, you can run these commands to reconfigure SAP HANA as secondary.
```bash
# Execute on the secondary
When you're testing a HANA cluster configured with a read-enabled secondary, be
pcs resource cleanup SAPHana_HN1_HDB03
```
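The registration command elided above follows the same `hdbnsutil -sr_register` pattern shown earlier in this article. As a sketch for a failover from site 1 to site 2, run something like the following as `hn1adm` on the former primary; the target host, site name, and operation mode are assumptions and must match your original replication configuration:

```bash
# Sketch only: values are assumptions; use those from your original setup.
sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=hana-s2-db1 --remoteInstance=03 --replicationMode=sync --name=HANA_S1 --operationMode=logreplay
```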
- The state of the resources, after the test:
+ The state of the resources, after the test:
```bash
# Output of crm_mon
#7 nodes configured
#45 resources configured
-
+ #Online: [ hana-s-mm hana-s1-db1 hana-s1-db2 hana-s1-db3 hana-s2-db1 hana-s2-db2 hana-s2-db3 ]
-
+ #Active resources:
-
+
 #rsc_st_azure (stonith:fence_azure_arm): Started hana-s-mm
 # Clone Set: fs_hana_shared_s1-clone [fs_hana_shared_s1]
 #     Started: [ hana-s1-db1 hana-s1-db2 hana-s1-db3 ]
When you're testing a HANA cluster configured with a read-enabled secondary, be
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hana-s2-db1
```

It's a good idea to test the SAP HANA cluster configuration thoroughly by also performing the tests documented in [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md#test-the-cluster-setup).

## Next steps
sap Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md
Previously updated : 07/11/2023 Last updated : 01/16/2024
The presented configuration shows three HANA nodes on each site, plus majority m
The HANA shared file system `/hana/shared` is NFS mounted on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems and aren't shared between the HANA DB nodes. SAP HANA will be installed in non-shared mode.
-For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md).
+For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md).
> [!IMPORTANT]
> If you deploy all HANA file systems on Azure NetApp Files, for production systems where performance is key, we recommend evaluating and considering [Azure NetApp Files application volume group for SAP HANA (AVG)](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg).
As `/hana/data` and `/hana/log` are deployed on local disks, it isn't necessary
If you're using Azure NetApp Files, the NFS volumes for `/hana/shared` are deployed in a separate subnet, delegated to Azure NetApp Files: `anf` 10.23.1.0/26.
-## Set up the infrastructure
+## Prepare the infrastructure
In the instructions that follow, we assume that you've already created the resource group, the Azure virtual network with three Azure network subnets: `client`, `inter` and `hsr`.
In the instructions that follow, we assume that you've already created the resou
When the VM is deployed via Azure portal, the network interface name is automatically generated. In these instructions for simplicity we'll refer to the automatically generated, primary network interfaces, which are attached to the `client` Azure virtual network subnet as **hana-s1-db1-client**, **hana-s1-db2-client**, **hana-s1-db3-client**, and so on. > [!IMPORTANT]
- > Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
- > If you choose to deploy `/hana/shared` on NFS on Azure Files, we recommend to deploy on SLES15 SP2 and above.
+ >
+ > * Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
+ > * If you choose to deploy `/hana/shared` on NFS on Azure Files, we recommend deploying on SLES 15 SP2 and above.
2. Create six network interfaces, one for each HANA DB virtual machine, in the `inter` virtual network subnet (in this example, **hana-s1-db1-inter**, **hana-s1-db2-inter**, **hana-s1-db3-inter**, **hana-s2-db1-inter**, **hana-s2-db2-inter**, and **hana-s2-db3-inter**).
In the instructions that follow, we assume that you've already created the resou
6. Start the HANA DB virtual machines
-### Deploy Azure Load Balancer
+### Configure Azure load balancer
-1. We recommend using standard load balancer. Follow these configuration steps to deploy standard load balancer:
+During VM configuration, you have the option to create or select an existing load balancer in the networking section. Follow the steps below to set up a standard load balancer for the high availability setup of the HANA database.
- 1. First, create a front-end IP pool:
+> [!NOTE]
+>
+> * For HANA scale out, select the NIC for the `client` subnet when adding the virtual machines in the backend pool.
+> * The full set of commands in Azure CLI and PowerShell adds the VMs with the primary NIC to the backend pool.
- 1. Open the load balancer, select **frontend IP pool**, and select **Add**.
- 2. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
- 3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.23.0.27**).
- 4. Select **OK**.
- 5. After the new front-end IP pool is created, note the pool IP address.
+#### [Azure Portal](#tab/lb-portal)
- 2. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 2. Enter the name of the new back-end pool (for example, **hana-backend**).
- 3. Select **NIC** for Backend Pool Configuration.
- 4. Select **Add a virtual machine**.
- 5. Select the virtual machines of the HANA cluster (the NICs for the `client` subnet).
- 6. Select **Add**.
- 7. Select **Save**.
+#### [Azure CLI](#tab/lb-azurecli)
- 3. Next, create a health probe:
+The full set of Azure CLI code displays the setup of the load balancer, which includes two VMs in the backend pool. Depending on the number of VMs in your HANA scale-out, you can add more VMs to the backend pool.
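As an illustration of what that CLI setup could look like, here's a sketch. The resource group, resource names, IP address, and NIC/ip-config names are assumptions, and this is not the script embedded in the published article:

```bash
# Sketch only: names, address, and probe port are assumptions.
az network lb create --resource-group MyResourceGroup --name hana-lb --sku Standard \
  --vnet-name MyVNet --subnet client --backend-pool-name hana-backend \
  --frontend-ip-name hana-frontend --private-ip-address 10.23.0.27

az network lb probe create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-hp --protocol tcp --port 62503 --interval 5

# HA-ports rule: protocol All with front-end and back-end port 0, floating IP on
az network lb rule create --resource-group MyResourceGroup --lb-name hana-lb \
  --name hana-lb-rule --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name hana-frontend --backend-pool-name hana-backend \
  --probe-name hana-hp --floating-ip true --idle-timeout 30

# Add each HANA VM's client-subnet NIC to the backend pool (one NIC shown;
# the ip-config name is an assumption)
az network nic ip-config address-pool add --resource-group MyResourceGroup \
  --nic-name hana-s1-db1-client --ip-config-name ipconfig1 \
  --lb-name hana-lb --address-pool hana-backend
```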
- 1. Open the load balancer, select **health probes**, and select **Add**.
- 2. Enter the name of the new health probe (for example, **hana-hp**).
- 3. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5.
- 4. Select **OK**.
- 4. Next, create the load-balancing rules:
+#### [PowerShell](#tab/lb-powershell)
- 1. Open the load balancer, select **load balancing rules**, and select **Add**.
- 2. Enter the name of the new load balancer rule (for example, **hana-lb**).
- 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**).
- 4. Increase idle timeout to 30 minutes.
- 5. Select **HA Ports**.
- 6. Make sure to **enable Floating IP**.
- 7. Select **OK**.
+The full set of PowerShell code displays the setup of the load balancer, which includes two VMs in the backend pool. Depending on the number of VMs in your HANA scale-out, you can add more VMs to the backend pool.
- > [!IMPORTANT]
- > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
- > [!NOTE]
- > When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+
- > [!IMPORTANT]
- > Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
- > See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+> [!IMPORTANT]
+> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+
+> [!NOTE]
+> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+
+> [!IMPORTANT]
+>
+> * Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter `net.ipv4.tcp_timestamps` to `0`. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) and SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+> * To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, update saptune version to 3.1.1 or higher. For more details, see [saptune 3.1.1 – Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
### Deploy NFS
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
Previously updated : 10/03/2023 Last updated : 01/16/2024
[2388694]:https://launchpad.support.sap.com/#/notes/2388694 [401162]:https://launchpad.support.sap.com/#/notes/401162
-[sles-for-sap-bp]:https://www.suse.com/documentation/sles-for-sap-12/
+[sles-for-sap-bp]:https://documentation.suse.com/sbp-supported.html
[sap-swcenter]:https://launchpad.support.sap.com/#/softwarecenter
This article describes how to deploy and configure the VMs, install the cluster
Before you begin, read the following SAP Notes and papers:

- SAP Note [1928533]. The note includes:
  - The list of Azure VM sizes that are supported for the deployment of SAP software.
  - Important capacity information for Azure VM sizes.
  - The supported SAP software, operating system (OS), and database combinations.
Before you begin, read the following SAP Notes and papers:
- [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide] guide.
- [Azure Virtual Machines deployment for SAP on Linux][deployment-guide] guide.
- [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide] guide.
-- [SUSE Linux Enterprise Server for SAP Applications 12 SP3 best practices guides][sles-for-sap-bp]:
- - Setting up an SAP HANA SR Performance Optimized Infrastructure (SLES for SAP Applications 12 SP1). The guide contains all the required information to set up SAP HANA system replication for on-premises development. Use this guide as a baseline.
- - Setting up an SAP HANA SR Cost Optimized Infrastructure (SLES for SAP Applications 12 SP1).
+- [SUSE Linux Enterprise Server for SAP Applications best practices guides][sles-for-sap-bp]:
+ - Setting up an SAP HANA SR Performance Optimized Infrastructure (SLES for SAP Applications). The guide contains all the required information to set up SAP HANA system replication for on-premises development. Use this guide as a baseline.
+ - Setting up an SAP HANA SR Cost Optimized Infrastructure (SLES for SAP Applications).
## Plan for SAP HANA high availability
The preceding figure shows an *example* load balancer that has these configurati
- Front-end IP address: 10.0.0.13 for HN1-db
- Probe port: 62503
-## Deploy for Linux
+## Prepare the infrastructure
The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is available in Azure Marketplace. You can use the image to deploy new VMs.
The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for
This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
-Deploy virtual machines for SAP HANA. Choose a suitable SLES image that is supported for HANA system. You can deploy VM in any one of the availability options - scale set, availability zone or availability set.
+Deploy virtual machines for SAP HANA. Choose a suitable SLES image that is supported for the HANA system. You can deploy a VM in any one of the availability options: virtual machine scale set, availability zone, or availability set.
> [!IMPORTANT] > Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type.
-During VM configuration, you have an option to create or select exiting load balancer in networking section. If you are creating a new load balancer, follow below steps -
-
-1. Set up a standard load balancer.
- 1. Create a front-end IP pool:
- 1. Open the load balancer, select **frontend IP pool**, and then select **Add**.
- 2. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
- 3. Set **Assignment** to **Static** and enter the IP address (for example, **10.0.0.13**).
- 4. Select **OK**.
- 5. After the new front-end IP pool is created, note the pool IP address.
-
- 2. Create a single back-end pool:
- 1. In the load balancer, select **Backend pools**, and then select **Add**.
- 2. Enter the name of the new back-end pool (for example, **hana-backend**).
- 3. For **Backend Pool Configuration**, select **NIC**.
- 4. Select **Add a virtual machine**.
- 5. Select the VMs that are in the HANA cluster.
- 6. Select **Add**.
- 7. Select **Save**.
-
- 3. Create a health probe:
- 1. In the load balancer, select **health probes**, and then select **Add**.
- 2. Enter the name of the new health probe (for example, **hana-hp**).
- 3. For **Protocol**, select **TCP** and select port **625\<instance number\>**. Keep **Interval** set to **5**.
- 4. Select **OK**.
-
- 4. Create the load-balancing rules:
- 1. In the load balancer, select **load balancing rules**, and then select **Add**.
- 2. Enter the name of the new load balancer rule (for example, **hana-lb**).
- 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend**, and **hana-hp**).
- 4. Increase the idle timeout to 30 minutes.
- 5. Select **HA Ports**.
- 6. Enable **Floating IP**.
- 7. Select **OK**.
+### Configure Azure load balancer
+
+During VM configuration, you have the option to create or select an existing load balancer in the networking section. Follow the steps below to set up a standard load balancer for the high availability setup of the HANA database.
+
+#### [Azure Portal](#tab/lb-portal)
++
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
+++ For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or [SAP Note 2388694][2388694].
For more information about the required ports for SAP HANA, read the chapter [Co
> When VMs that don't have public IP addresses are placed in the back-end pool of an internal (no public IP address) standard instance of Azure Load Balancer, the default configuration is no outbound internet connectivity. You can take extra steps to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for VMs by using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT]
-> Don't enable TCP timestamps on Azure VMs that are placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set parameter `net.ipv4.tcp_timestamps` to `0`. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) or SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+>
+> - Don't enable TCP timestamps on Azure VMs that are placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set parameter `net.ipv4.tcp_timestamps` to `0`. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) or SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+> - To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, update saptune version to 3.1.1 or higher. For more details, see [saptune 3.1.1 ΓÇô Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
## Create a Pacemaker cluster
Execute firewall rule to block the communication on one of the nodes.
When cluster nodes can't communicate with each other, there's a risk of a split-brain scenario. In such situations, cluster nodes try to fence each other simultaneously, resulting in a fence race.
-When configuring a fencing device, it's recommended to configure [`pcmk_delay_max`](https://www.suse.com/support/kb/doc/?id=000019110) property. So, in the event of split-brain scenario, the cluster introduces a random delay up to the `pcmk_delay_max` value, to the fencing action on each node. The node with the shortest delay will be selected for fencing.
+When configuring a fencing device, it's recommended to configure the [`pcmk_delay_max`](https://www.suse.com/support/kb/doc/?id=000019110) property. In the event of a split-brain scenario, the cluster then introduces a random delay, up to the `pcmk_delay_max` value, to the fencing action on each node. The node with the shortest delay is selected for fencing.
Additionally, to ensure that the node running the HANA master takes priority and wins the fence race in a split-brain scenario, it's recommended to set the [`priority-fencing-delay`](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing) property in the cluster configuration. By enabling the priority-fencing-delay property, the cluster can introduce an additional delay in the fencing action specifically on the node hosting the HANA master resource, allowing that node to win the fence race.
search Hybrid Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-overview.md
Hybrid search combines results from both full text and vector queries, which use
## Structure of a hybrid query
-Hybrid search is predicated on having a search index that contains fields of various [data types](/rest/api/searchservice/supported-data-types), including plain text and numbers, geo coordinates for geospatial search, and vectors for a mathematical representation of a chunk of text or image, audio, and video. You can use almost all query capabilities in Azure AI Search with a vector query, except for client-side interactions such as autocomplete and suggestions.
+Hybrid search is predicated on having a search index that contains fields of various [data types](/rest/api/searchservice/supported-data-types), including plain text and numbers, geo coordinates for geospatial search, and vectors for a mathematical representation of a chunk of text. You can use almost all query capabilities in Azure AI Search with a vector query, except for client-side interactions such as autocomplete and suggestions.
A representative hybrid query might be as follows (notice the vector is trimmed for brevity):
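As a minimal sketch of such a request against the stable 2023-11-01 REST API; the service, index, and field names are hypothetical, and the vector is trimmed for brevity:

```bash
# Sketch: hypothetical service, index, and field names; vector trimmed.
curl -X POST "https://my-service.search.windows.net/indexes/my-index/docs/search?api-version=2023-11-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $ADMIN_API_KEY" \
  -d '{
    "search": "historic hotel walk to restaurants",
    "vectorQueries": [
      { "kind": "vector", "vector": [0.012, -0.043, 0.078], "fields": "contentVector", "k": 10 }
    ],
    "select": "hotelName, description",
    "top": 10
  }'
```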
search Monitor Azure Cognitive Search Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search-data-reference.md
This section lists the platform metrics collected for Azure AI Search ([Microsof
| Metric ID | Unit | Description | |:-|:--|:|
-| DocumentsProcessedCount | Count | Total of the number of documents successfully processed in an indexing operation (either by an indexer or by pushing documents directly). |
+| DocumentsProcessedCount | Count | Total of the number of documents successfully processed in an indexing operation by an indexer. |
| SearchLatency | Seconds | Average search latency for queries that execute on the search service. |
| SearchQueriesPerSecond | CountPerSecond | Average of the search queries per second (QPS) for the search service. It's common for queries to execute in milliseconds, so only queries that measure as seconds will appear in a metric like QPS. </br>The minimum is the lowest value for search queries per second that was registered during that minute. The same applies to the maximum value. Average is the aggregate across the entire minute. For example, within one minute, you might have a pattern like this: one second of high load that is the maximum for SearchQueriesPerSecond, followed by 58 seconds of average load, and finally one second with only one query, which is the minimum.|
| SkillExecutionCount | Count | Total number of skill executions processed during an indexer operation. |
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
Last updated 11/20/2023
# Retrieval Augmented Generation (RAG) in Azure AI Search
-Retrieval Augmentation Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides grounding data. Adding an information retrieval system gives you control over grounding data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain generative AI to *your enterprise content* sourced from vectorized documents, images, audio, and video.
+Retrieval Augmented Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides grounding data. Adding an information retrieval system gives you control over grounding data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain generative AI to *your enterprise content* sourced from vectorized documents and images, and other data formats if you have embedding models for that content.
The decision about which information retrieval system to use is critical because it determines the inputs to the LLM. The information retrieval system should provide:
Since you probably know what kind of content you want to search over, consider t
| text | tokens, unaltered text | [Indexers](search-indexer-overview.md) can pull plain text from other Azure resources like Azure Storage and Cosmos DB. You can also [push any JSON content](search-what-is-data-import.md) to an index. To modify text in flight, use [analyzers](search-analyzers.md) and [normalizers](search-normalizers.md) to add lexical processing during indexing. [Synonym maps](search-synonyms.md) are useful if source documents are missing terminology that might be used in a query. |
| text | vectors <sup>1</sup> | Text can be chunked and vectorized externally and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. |
| image | tokens, unaltered text <sup>2</sup> | [Skills](cognitive-search-working-with-skillsets.md) for OCR and Image Analysis can process images for text recognition or image characteristics. Image information is converted to searchable text and added to the index. Skills have an indexer requirement. |
-| image | vectors <sup>1</sup> | Images can be vectorized externally for a mathematical representation of image content and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. |
-| video | vectors <sup>1</sup> | Video files can be vectorized externally for a mathematical representation of video content and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. |
-| audio | vectors <sup>1</sup> | Audio files can be vectorized externally for a mathematical representation of audio content and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. |
+| image | vectors <sup>1</sup> | Images can be vectorized externally for a mathematical representation of image content and then [indexed as vector fields](vector-search-how-to-create-index.md) in your index. You can use an open source model like [OpenAI CLIP](https://github.com/openai/CLIP/blob/main/README.md) to vectorize text and images in the same embedding space.|
+<!-- | audio | vectors <sup>1</sup> | Vectorized audio content can be [indexed as vector fields](vector-search-how-to-create-index.md) in your index. Vectorization of audio content often requires intermediate processing that converts audio to text, and then text to vectors. [Azure AI Speech](/azure/ai-services/speech-service/overview) and [OpenAI Whisper](https://platform.openai.com/docs/guides/speech-to-text) are two examples for this scenario. |
+| video | vectors <sup>1</sup> | Vectorized video content can be [indexed as vector fields](vector-search-how-to-create-index.md) in your index. Similar to audio, vectorization of video content also requires extra processing, such as breaking up the video into frames or smaller chunks for vectorization. | -->
<sup>1</sup> The generally available functionality of [vector support](vector-search-overview.md) requires that you call other libraries or models for data chunking and vectorization. However, [integrated vectorization (preview)](vector-search-integrated-vectorization.md) embeds these steps. For code samples showing both approaches, see [azure-search-vectors repo](https://github.com/Azure/azure-search-vector-samples).
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
- ignite-2023 Previously updated : 12/19/2023 Last updated : 01/16/2024 # Index data from SharePoint document libraries
Last updated 12/19/2023
> > Be sure to visit the [known limitations](#limitations-and-considerations) section before you start. >
->To use this preview, [request access](https://aka.ms/azure-cognitive-search/indexer-preview). Any access request submitted after December 15, 2023 will be reviewed after January 15, 2024 for approval, with no exceptions. After access is enabled, use a [preview REST API (2023-10-01-Preview or later)](search-api-preview.md) to index your content.
+>To use this preview, [request access](https://aka.ms/azure-cognitive-search/indexer-preview). Any access request is automatically accepted after submission. After access is enabled, use a [preview REST API (2023-10-01-Preview or later)](search-api-preview.md) to index your content.
This article explains how to configure a [search indexer](search-indexer-overview.md) to index documents stored in SharePoint document libraries for full text search in Azure AI Search. Configuration steps are first, followed by behaviors and scenarios
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
- ignite-2023 Previously updated : 07/24/2023 Last updated : 01/16/2024 # Make outbound connections through a shared private link
You can create a shared private link for the following resources.
| Microsoft.DBforMySQL/servers (preview) | `mysqlServer`| | Microsoft.Web/sites (preview) <sup>3</sup> | `sites` | | Microsoft.Sql/managedInstances (preview) <sup>4</sup>| `managedInstance` |
+| Microsoft.CognitiveServices/accounts (preview) <sup>5</sup>| `openai_account` |
<sup>1</sup> If Azure Storage and Azure AI Search are in the same region, the connection to storage is made over the Microsoft backbone network, which means a shared private link is redundant for this configuration. However, if you already set up a private endpoint for Azure Storage, you should also set up a shared private link or the connection is refused on the storage side. Also, if you're using multiple storage formats for various scenarios in search, make sure to create a separate shared private link for each sub-resource.
You can create a shared private link for the following resources.
<sup>4</sup> See [Create a shared private link for a SQL Managed Instance](search-indexer-how-to-access-private-sql.md) for instructions.
+<sup>5</sup> The `Microsoft.CognitiveServices/accounts` resource type is used for indexer connections to Azure OpenAI when implementing [integrated Vectorization](vector-search-integrated-vectorization.md).
+ ## 1 - Create a shared private link Use the Azure portal, Management REST API, the Azure CLI, or Azure PowerShell to create a shared private link.
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Customers often ask how Azure AI Search compares with other search-related solut
Key strengths include:
-+ Store, index, and search vector embeddings for sentences, images, audio, graphs, and more.
++ Store, index, and search vector embeddings for sentences, images, graphs, and more.
+ Find information that's semantically similar to search queries, even if the search terms aren't exact matches.
+ Use hybrid search for the best of keyword and vector search.
+ Relevance tuning through semantic ranking and scoring profiles.
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
api-key: {{admin-api-key}}
## Multiple vector queries
-Multi-query vector search sends multiple queries across multiple vector fields in your search index. A common example of this query request is when using models such as [CLIP](https://openai.com/research/clip) for a multi-modal vector search where the same model can vectorize image and non-image content.
+Multi-query vector search sends multiple queries across multiple vector fields in your search index. A common example of this query request is when using models such as [CLIP](https://openai.com/research/clip) for a multimodal vector search where the same model can vectorize image and text content.
-The following query example looks for similarity in both `myImageVector` and `myTextVector`, but sends in two different query embeddings respectively. This scenario is ideal for multi-modal use cases where you want to search over different embedding spaces. This query produces a result that's scored using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md).
+The following query example looks for similarity in both `myImageVector` and `myTextVector`, but sends in two different query embeddings respectively, each executing in parallel. This query produces a result that's scored using [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md).
+ `vectorQueries` provides an array of vector queries.
+ `vector` contains the image vectors and text vectors in the search index. Each instance is a separate query.
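A sketch of the request shape for the two-field case above; the service and index names are hypothetical, while `myImageVector` and `myTextVector` come from the description, and both queries run in parallel:

```bash
# Sketch: two parallel vector queries over different fields; vectors trimmed.
curl -X POST "https://my-service.search.windows.net/indexes/my-index/docs/search?api-version=2023-11-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $ADMIN_API_KEY" \
  -d '{
    "vectorQueries": [
      { "kind": "vector", "vector": [0.001, 0.034], "fields": "myImageVector", "k": 5 },
      { "kind": "vector", "vector": [0.051, -0.027], "fields": "myTextVector", "k": 5 }
    ]
  }'
```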
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
The following diagram shows the indexing and query workflows for vector search.
On the indexing side, Azure AI Search takes vector embeddings and uses a [nearest neighbors algorithm](vector-search-ranking.md) to co-locate similar vectors together in the search index (vectors about popular movies are closer than vectors about popular dog breeds).
-How you get embeddings from your source content depends on your approach and whether you can use preview features. You can vectorize or generate embeddings using models from OpenAI, Azure OpenAI, and any number of providers, over a wide range of source content including text, images, videos, and audio. You can then push pre-vectorized content to [vector fields](vector-search-how-to-create-index.md) in a search index. That's the generally available approach. If you can use preview features, Azure AI Search provides [integrated data chunking and vectorization](vector-search-integrated-vectorization.md) in an indexer pipeline. You still provide the resources (endpoints and connection information), but Azure AI Search makes all of the calls and handles the transitions.
+How you get embeddings from your source content depends on your approach and whether you can use preview features. You can vectorize or generate embeddings using models from OpenAI, Azure OpenAI, and any number of providers, over a wide range of source content including text, images, and other content types supported by the models. You can then push pre-vectorized content to [vector fields](vector-search-how-to-create-index.md) in a search index. That's the generally available approach. If you can use preview features, Azure AI Search provides [integrated data chunking and vectorization](vector-search-integrated-vectorization.md) in an indexer pipeline. You still provide the resources (endpoints and connection information), but Azure AI Search makes all of the calls and handles the transitions.
On the query side, in your client application, collect the query input from a user. You can then add an encoding step that converts the input into a vector, and then send the vector query to your index on Azure AI Search for a similarity search. As with indexing, you can deploy the [integrated vectorization (preview)](vector-search-integrated-vectorization.md) to convert text inputs to a vector. For either approach, Azure AI Search returns documents with the requested `k` nearest neighbors (kNN) in the results.
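The query-side flow can be sketched as two REST calls: one to an embedding model to vectorize the user's input, and one to Azure AI Search to run the vector query. The resource names, the deployment name, and the `contentVector` field are placeholders, and the embedding values are truncated for readability.

```http
### Sketch, step 1: vectorize the query text with an Azure OpenAI embeddings deployment
POST https://{{aoai-resource}}.openai.azure.com/openai/deployments/{{embedding-deployment}}/embeddings?api-version=2023-05-15
Content-Type: application/json
api-key: {{aoai-api-key}}

{
    "input": "historic hotel with views of the water"
}

### Sketch, step 2: send the resulting embedding as a vector query for the k nearest neighbors
POST https://{{search-service}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-11-01
Content-Type: application/json
api-key: {{query-api-key}}

{
    "vectorQueries": [
        {
            "kind": "vector",
            "vector": [0.018, -0.007, 0.093],
            "fields": "contentVector",
            "k": 5
        }
    ]
}
```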
Scenarios for vector search include:
+ **Vector search for text**. Encode text using embedding models such as OpenAI embeddings or open source models such as SBERT, and retrieve documents with queries that are also encoded as vectors.
-+ **Vector search across different data types (multi-modal)**. Encode images, text, audio, and video, or even a mix of them (for example, with models like CLIP) and do a similarity search across them.
++ **Vector search across different content types (multimodal)**. Encode images and text using multimodal embeddings (for example, with [OpenAI CLIP](https://github.com/openai/CLIP) or [GPT-4 Turbo with Vision](/azure/ai-services/openai/whats-new#gpt-4-turbo-with-vision-now-available) in Azure OpenAI) and query an embedding space composed of vectors from both content types.
-+ **Multi-lingual search**. Use a multi-lingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in.
++ **Multilingual search**. Use a multilingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in. + [**Hybrid search**](hybrid-search-overview.md). Vector search is implemented at the field level, which means you can build queries that include both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic ranking](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing.
If you're new to vectors, this section explains some core concepts.
### About vector search
-Vector search is a method of information retrieval where documents and queries are represented as vectors instead of plain text. In vector search, machine learning models generate the vector representations of source inputs, which can be text, images, audio, or video content. Having a mathematic representation of content provides a common basis for search scenarios. If everything is a vector, a query can find a match in vector space, even if the associated original content is in different media or in a different language than the query.
+Vector search is a method of information retrieval where documents and queries are represented as vectors instead of plain text. In vector search, machine learning models generate the vector representations of source inputs, which can be text, images, or other content. Having a mathematical representation of content provides a common basis for search scenarios. If everything is a vector, a query can find a match in vector space, even if the associated original content is in a different medium or language than the query.
### Why use vector search
-Vectors can overcome the limitations of traditional keyword-based search by using machine learning models to capture the meaning of words and phrases in context, rather than relying solely on lexical analysis and matching of individual query terms. By capturing the intent of the query, vector search can return more relevant results that match the user's needs, even if the exact terms aren't present in the document.
-
-Additionally, vector search can be applied to different types of content, such as images and videos, not just text. This enables new search experiences such as multi-modal search or cross-language search in multi-lingual applications.
+When searchable content is represented as vectors, a query can find close matches in similar content. The embedding model used for vector generation knows which words and concepts are similar, and it places the resulting vectors close together in the embedding space. For example, vectorized source documents about "clouds" and "fog" are more likely to show up in a query about "mist" because they're semantically similar, even if they aren't a lexical match.
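That closeness is typically measured with cosine similarity. The following Python sketch uses invented three-dimensional vectors (real embedding models emit hundreds or thousands of dimensions) to show how semantically related terms score higher than unrelated ones.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means same direction, values near 0 mean unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented toy embeddings: "clouds" and "fog" point in nearly the same direction,
# while "payroll" points elsewhere in the vector space.
embeddings = {
    "clouds":  np.array([0.90, 0.80, 0.10]),
    "fog":     np.array([0.80, 0.90, 0.20]),
    "payroll": np.array([0.10, 0.00, 0.90]),
}

query = np.array([0.85, 0.85, 0.15])  # hypothetical embedding for "mist"

for term, vector in embeddings.items():
    print(f"{term}: {cosine_similarity(query, vector):.3f}")
# "clouds" and "fog" score near 1.0 for the "mist" query; "payroll" scores much lower.
```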
### Embeddings and vectorization
security Best Practices And Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/best-practices-and-patterns.md
Title: Security best practices and patterns - Microsoft Azure | Microsoft Docs description: This article links you to security best practices and patterns for different Azure resources.- - ms.assetid: 1cbbf8dc-ea94-4a7e-8fa0-c2cb198956c5 - Last updated 11/13/2023
service-bus-messaging Service Bus Management Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-management-libraries.md
Title: Programmatically manage Azure Service Bus namespaces and entities
description: This article explains how to dynamically or programmatically provision Service Bus namespaces and entities. Last updated 08/06/2021
+ms.devlang: csharp
+# ms.devlang: csharp,java,javascript,python
service-bus-messaging Service Bus Prefetch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-prefetch.md
Title: Prefetch messages from Azure Service Bus
description: Improve performance by prefetching messages from Azure Service Bus queues or subscriptions. Messages are readily available for local retrieval before the application requests them. Last updated 08/29/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
# Prefetch Azure Service Bus messages
spring-apps Expose Apps Gateway End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-end-to-end-tls.md
Last updated 02/28/2022
+ms.devlang: java
+# ms.devlang: java, azurecli
# Expose applications with end-to-end TLS in a virtual network
spring-apps How To Connect To App Instance For Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-connect-to-app-instance-for-troubleshooting.md
The following list shows the tools available, which depend on your service plan
- Custom image deployment: Depends on the installed tool set in your image. > [!NOTE]
-> JDK tools aren't included in the path for the *source code* deployment type. Run `export PATH="$PATH:/layers/paketo-buildpacks_microsoft-openjdk/jdk/bin"` before running any JDK commands.
+> JDK tools aren't included in the path for the *source code* deployment type. Run `export PATH="$PATH:/layers/tanzu-buildpacks_microsoft-openjdk/jdk/bin"` before running any JDK commands.
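As a sketch of the workflow (the resource, app, and instance names are placeholders, and the `az spring app connect` command is assumed to be available in your Azure CLI spring extension):

```shell
# Connect to the app instance shell (names are placeholders).
az spring app connect \
    --resource-group my-resource-group \
    --service my-spring-service \
    --name my-app \
    --instance my-app-instance

# Inside the instance shell, for the source code deployment type:
export PATH="$PATH:/layers/tanzu-buildpacks_microsoft-openjdk/jdk/bin"
jcmd    # lists running JVM processes to confirm the JDK tools resolve
```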
## Limitations
spring-apps How To Enterprise Deploy Polyglot Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md
The following table lists the features supported in Azure Spring Apps:
| Feature description | Comment | Environment variable | Usage | |--|--|--|-|
-| Provides the Microsoft OpenJDK. | Configures the JVM version. The default JDK version is 11. Currently supported: JDK 8, 11, and 17. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=11.*` |
+| Provides the Microsoft OpenJDK. | Configures the JVM version. The default JDK version is 11. Currently supported: JDK 8, 11, 17, and 21. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=11.*` |
| | Runtime env. Configures whether Java Native Memory Tracking (NMT) is enabled. The default value is *true*. Not supported in JDK 8. | `BPL_JAVA_NMT_ENABLED` | `--env BPL_JAVA_NMT_ENABLED=true` | | | Configures the level of detail for Java Native Memory Tracking (NMT) output. The default value is *summary*. Set to *detail* for detailed NMT output. | `BPL_JAVA_NMT_LEVEL` | `--env BPL_JAVA_NMT_ENABLED=summary` | | Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
The following table lists the features supported in Azure Spring Apps:
| Feature description | Comment | Environment variable | Usage | ||--|--|--|
-| Configure the .NET Core runtime version. | Supports *Net6.0* and *Net7.0*. <br> You can configure through a *runtimeconfig.json* or MSBuild Project file. <br> The default runtime is *6.0.\**. | N/A | N/A |
+| Configure the .NET Core runtime version. | Supports *Net6.0*, *Net7.0*, and *Net8.0*. <br> You can configure through a *runtimeconfig.json* (see the sketch after this table) or MSBuild Project file. <br> The default runtime is *6.0.\**. | N/A | N/A |
| Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Integrate with the Dynatrace and New Relic APM agents. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
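For the .NET runtime version setting in the preceding table, a minimal *runtimeconfig.json* sketch that pins the runtime to .NET 8 might look like this:

```json
{
  "runtimeOptions": {
    "tfm": "net8.0",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "8.0.0"
    }
  }
}
```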
The following table lists the features supported in Azure Spring Apps:
| Feature description | Comment | Environment variable | Usage | ||--|--|-|
-| Specify a Python version. | Supports *3.7.\**, *3.8.\**, *3.9.\**, *3.10.\**, *3.11.\**. The default value is *3.10.\**<br> You can specify the version via the `BP_CPYTHON_VERSION` environment variable during build. | `BP_CPYTHON_VERSION` | `--build-env BP_CPYTHON_VERSION=3.8.*` |
+| Specify a Python version. | Supports *3.8.\**, *3.9.\**, *3.10.\**, *3.11.\**, *3.12.\**. The default value is *3.10.\**.<br> You can specify the version via the `BP_CPYTHON_VERSION` environment variable during build. | `BP_CPYTHON_VERSION` | `--build-env BP_CPYTHON_VERSION=3.8.*` |
| Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
The following table lists the features supported in Azure Spring Apps:
| Feature description | Comment | Environment variable | Usage | ||--|--|-|
-| Specify a Go version. | Supports *1.19.\**, *1.20.\**. The default value is *1.19.\**.<br> The Go version is automatically detected from the app's *go.mod* file. You can override this version by setting the `BP_GO_VERSION` environment variable at build time. | `BP_GO_VERSION` | `--build-env BP_GO_VERSION=1.20.*` |
+| Specify a Go version. | Supports *1.20.\**, *1.21.\**. The default value is *1.20.\**.<br> The Go version is automatically detected from the app's *go.mod* file. You can override this version by setting the `BP_GO_VERSION` environment variable at build time. | `BP_GO_VERSION` | `--build-env BP_GO_VERSION=1.20.*` |
| Configure multiple targets. | Specifies multiple targets for a Go build. | `BP_GO_TARGETS` | `--build-env BP_GO_TARGETS=./some-target:./other-target` | | Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
The following table lists the features supported in Azure Spring Apps:
| Feature description | Comment | Environment variable | Usage | |-|--|--|--|
-| Specify a Node version. | Supports *14.\**, *16.\**, *18.\**, *19.\**. The default value is *18.\**. <br>You can specify the Node version via an *.nvmrc* or *.node-version* file at the application directory root. `BP_NODE_VERSION` overrides the settings. | `BP_NODE_VERSION` | `--build-env BP_NODE_VERSION=19.*` |
+| Specify a Node version. | Supports *16.\**, *18.\**, *19.\**, *20.\**. The default value is *20.\**. <br>You can specify the Node version via an *.nvmrc* or *.node-version* file at the application directory root. `BP_NODE_VERSION` overrides the settings. | `BP_NODE_VERSION` | `--build-env BP_NODE_VERSION=19.*` |
| Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Integrate with Dynatrace, Elastic, New Relic, App Dynamic APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
The following table lists the features supported in Azure Spring Apps:
| Feature description | Comment | Environment variable | Usage | ||--|--||
-| Integrate with Bellsoft OpenJDK. | Configures the JDK version. Currently supported: JDK 8, 11, and 17. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=17` |
+| Integrate with Bellsoft OpenJDK. | Configures the JDK version. Currently supported: JDK 8, 11, 17, and 20. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=17` |
| Configure arguments for the `native-image` command. | Arguments to pass directly to the native-image command. These arguments must be valid and correctly formed or the native-image command fails. | `BP_NATIVE_IMAGE_BUILD_ARGUMENTS` | `--build-env BP_NATIVE_IMAGE_BUILD_ARGUMENTS="--no-fallback"` | | Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md#use-ca-certificates) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | Not applicable. | Not applicable. | | Enable configuration of labels on the created image | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
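Across these tables, build-time variables are supplied with `--build-env` and runtime variables with `--env` when you deploy. A deployment sketch (the resource names and artifact path are placeholders):

```shell
# Deploy a Java artifact, pinning the JDK at build time and enabling NMT at runtime.
# Resource names and the artifact path are placeholders.
az spring app deploy \
    --resource-group my-resource-group \
    --service my-spring-service \
    --name my-app \
    --artifact-path target/my-app.jar \
    --build-env BP_JVM_VERSION=17 \
    --env BPL_JAVA_NMT_ENABLED=true
```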
spring-apps Quickstart Deploy Restful Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-restful-api-app.md
The following diagram shows the architecture of the system:
::: zone pivot="sc-enterprise" -- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+- An Azure subscription. If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/) before you begin.
+- One of the following roles:
+ - Global Administrator or Privileged Role Administrator, for granting consent for apps requesting any permission, for any API.
+ - Cloud Application Administrator or Application Administrator, for granting consent for apps requesting any permission for any API, except Microsoft Graph app roles (application permissions).
+ - A custom directory role that includes the [permission to grant permissions to applications](/entra/identity/role-based-access-control/custom-consent-permissions), for the permissions required by the application.
+
+ For more information, see [Grant tenant-wide admin consent to an application](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal).
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+### [Azure CLI](#tab/Azure-CLI)
+
+- An Azure subscription. If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/) before you begin.
+- One of the following roles:
+ - Global Administrator or Privileged Role Administrator, for granting consent for apps requesting any permission, for any API.
+ - Cloud Application Administrator or Application Administrator, for granting consent for apps requesting any permission for any API, except Microsoft Graph app roles (application permissions).
+ - A custom directory role that includes the [permission to grant permissions to applications](/entra/identity/role-based-access-control/custom-consent-permissions), for the permissions required by the application.
+
+ For more information, see [Grant tenant-wide admin consent to an application](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal).
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.53.1 or higher.
+++ ::: zone-end ::: zone pivot="sc-consumption-plan" ### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin) -- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure subscription. If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/) before you begin.
+- One of the following roles:
+ - Global Administrator or Privileged Role Administrator, for granting consent for apps requesting any permission, for any API.
+ - Cloud Application Administrator or Application Administrator, for granting consent for apps requesting any permission for any API, except Microsoft Graph app roles (application permissions).
+ - A custom directory role that includes the [permission to grant permissions to applications](/entra/identity/role-based-access-control/custom-consent-permissions), for the permissions required by the application.
+
+ For more information, see [Grant tenant-wide admin consent to an application](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal).
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md). ### [Azure Developer CLI](#tab/Azure-Developer-CLI) -- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure subscription. If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/) before you begin.
+- One of the following roles:
+ - Global Administrator or Privileged Role Administrator, for granting consent for apps requesting any permission, for any API.
+ - Cloud Application Administrator or Application Administrator, for granting consent for apps requesting any permission for any API, except Microsoft Graph app roles (application permissions).
+ - A custom directory role that includes the [permission to grant permissions to applications](/entra/identity/role-based-access-control/custom-consent-permissions), for the permissions required by the application.
+
+ For more information, see [grant admin consent](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal#prerequisites).
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
The following diagram shows the architecture of the system:
[!INCLUDE [deploy-restful-api-app-with-consumption-plan](includes/quickstart-deploy-restful-api-app/deploy-restful-api-app-with-consumption-plan.md)] - ## 5. Validate the app
-You can now access the RESTful API to see if it works.
-
-### 5.1. Request an access token
-
-The RESTful APIs act as a resource server, which is protected by Microsoft Entra ID. Before acquiring an access token, you're required to register another application in Microsoft Entra ID and grant permissions to the client application, which is named `ToDoWeb`.
-
-#### Register the client application
-
-Use the following steps to register an application in Microsoft Entra ID, which is used to add the permissions for the `ToDo` app:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. If you have access to multiple tenants, use the **Directory + subscription** filter (:::image type="icon" source="media/quickstart-deploy-restful-api-app/portal-directory-subscription-filter.png" border="false":::) to select the tenant in which you want to register an application.
-
-1. Search for and select **Microsoft Entra ID**.
-
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. Enter a name for your application in the **Name** field - for example, *ToDoWeb*. Users of your app might see this name, and you can change it later.
-
-1. For **Supported account types**, use the default value **Accounts in this organizational directory only**.
-
-1. Select **Register** to create the application.
-
-1. On the app **Overview** page, look for the **Application (client) ID** value, and then record it for later use. You need it to acquire an access token.
-
-1. Select **API permissions** > **Add a permission** > **My APIs**. Select the `ToDo` application that you registered earlier, and then select the **ToDo.Read**, **ToDo.Write**, and **ToDo.Delete** permissions. Select **Add permissions**.
-
-1. Select **Grant admin consent for \<your-tenant-name>** to grant admin consent for the permissions you added.
-
- :::image type="content" source="media/quickstart-deploy-restful-api-app/api-permissions.png" alt-text="Screenshot of the Azure portal that shows the API permissions of a web application." lightbox="media/quickstart-deploy-restful-api-app/api-permissions.png":::
-
-#### Add user to access the RESTful APIs
-
-Use the following steps to create a member user in your Microsoft Entra tenant. Then, the user can manage the data of the `ToDo` application through RESTful APIs.
-
-1. Under **Manage**, select **Users** > **New user** > **Create new user**.
-
-1. On the **Create new user** page, enter the following information:
-
- - **User principal name**: Enter a name for the user.
- - **Display name**: Enter a display name for the user.
- - **Password**: Copy the autogenerated password provided in the **Password** box.
-
- > [!NOTE]
- > New users must complete the first sign-in authentication and update their passwords, otherwise, you receive an `AADSTS50055: The password is expired` error when you get the access token.
- >
- > When a new user logs in, they receive an **Action Required** prompt. They can choose **Ask later** to skip the validation.
-
-1. Select **Review + create** to review your selections. Select **Create** to create the user.
-
-#### Update the OAuth2 configuration for Swagger UI authorization
-
-Use the following steps to update the OAuth2 configuration for Swagger UI authorization. Then, you can authorize users to acquire access tokens through the `ToDoWeb` app.
-
-1. Open the Azure Spring Apps instance in the Azure portal.
-
-1. Open your **Microsoft Entra ID** tenant in the Azure portal, and go to the registered `ToDoWeb` app.
-
-1. Under **Manage**, select **Authentication**, select **Add a platform**, and then select **Single-page application**.
-
-1. Use the format `<your-app-exposed-application-url-or-endpoint>/swagger-ui/oauth2-redirect.html` as the OAuth2 redirect URL in the **Redirect URIs** field, and then select **Configure**.
-
- :::image type="content" source="media/quickstart-deploy-restful-api-app/single-page-app-authentication.png" alt-text="Screenshot of the Azure portal that shows the Authentication page for Microsoft Entra ID." lightbox="media/quickstart-deploy-restful-api-app/single-page-app-authentication.png":::
#### Obtain the access token
storage Access Tiers Online Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md
Last updated 08/10/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Last updated 09/12/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Anonymous Read Access Prevent Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent-classic.md
Last updated 09/12/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
Last updated 09/12/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Archive Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-blob.md
Last updated 11/28/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
Last updated 01/17/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
Last updated 04/19/2022
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Blob Inventory How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory-how-to.md
Last updated 02/24/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Convert Append And Page Blobs To Block Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/convert-append-and-page-blobs-to-block-blobs.md
Last updated 01/20/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Data Lake Storage Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-events.md
Last updated 03/07/2023
+ms.devlang: csharp
+# ms.devlang: csharp, python
storage Data Lake Storage Query Acceleration How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-query-acceleration-how-to.md
Last updated 06/09/2022
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-manage.md
Last updated 11/07/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Immutable Policy Configure Container Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-container-scope.md
Last updated 09/14/2022
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-version-scope.md
Last updated 11/21/2023
+ms.devlang: powershell
+# ms.devlang: powershell, azurecli
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Previously updated : 01/11/2024 Last updated : 01/17/2024 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
This section includes examples involving blob index tags.
### Example: Read blobs with a blob index tag
-This condition allows users to read blobs with a [blob index tag](storage-blob-index-how-to.md) key of Project and a value of Cascade. Attempts to access blobs without this key-value tag won't be allowed.
+This condition allows users to read blobs with a [blob index tag](storage-blob-index-how-to.md) key of Project and a value of Cascade. Attempts to access blobs without this key-value tag aren't allowed.
For this condition to be effective for a security principal, you must add it to all role assignments for them that include the following actions:
For this condition to be effective for a security principal, you must add it to
![Diagram of condition showing read access to blobs with a blob index tag.](./media/storage-auth-abac-examples/blob-index-tags-read.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal visual editor
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
To add the condition using the code editor, copy the condition code sample below
After entering your code, switch back to the visual editor to validate it.
+> [!NOTE]
+> If you try to perform an action in the assigned role that is *not* the action restricted by the condition, `!(ActionMatches)` evaluates to true and the overall condition evaluates to true. This result allows the action to be performed.
+>
+> If you try to perform the action restricted by the condition, `!(ActionMatches)` evaluates to false, so the expression is evaluated. If the expression evaluates to true, the overall condition evaluates to true and the action is allowed. Otherwise, the action isn't allowed.
+>
+> In this example, the condition restricts the read action except when the suboperation is `Blob.List`. This means that a List Blobs operation is allowed, but all other read actions are further evaluated against the expression that checks for the blob index tag.
+>
> To learn more about how conditions are formatted and evaluated, see [Conditions format](../../role-based-access-control/conditions-format.md).
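As a sketch, a condition with the shape this note describes (the Project/Cascade tag values come from the example above, and the attribute key syntax follows the condition format the article documents) looks like:

```
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
  AND NOT
  SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
 )
)
```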
+ # [PowerShell](#tab/azure-powershell) Here's how to add this condition using Azure PowerShell.
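A minimal PowerShell sketch (the principal, role, and scope values are placeholders; it assumes the `Az.Resources` module, which provides `New-AzRoleAssignment` with condition support):

```powershell
# Placeholders: set these for your environment.
$objectId = "<security-principal-object-id>"
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"

# The condition string follows the ABAC condition format described in this article.
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade'))"

New-AzRoleAssignment `
    -ObjectId $objectId `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope $scope `
    -Condition $condition `
    -ConditionVersion "2.0"
```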
There are two actions that allow you to create new blobs, so you must target bot
![Diagram of condition showing new blobs must include a blob index tag.](./media/storage-auth-abac-examples/blob-index-tags-new-blobs.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
There are two actions that allow you to update tags on existing blobs, so you mu
![Diagram of condition showing existing blobs must have blob index tag keys.](./media/storage-auth-abac-examples/blob-index-tags-keys.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
There are two actions that allow you to update tags on existing blobs, so you mu
![Diagram of condition showing existing blobs must have a blob index tag key and values.](./media/storage-auth-abac-examples/blob-index-tags-key-values.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
Suboperations aren't used in this condition because the suboperation is needed o
![Diagram of condition showing read, write, or delete blobs in named containers.](./media/storage-auth-abac-examples/containers-read-write-delete.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
**Storage Blob Data Owner**
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-read.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
**Storage Blob Data Owner**
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read and list access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-read.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
**Storage Blob Data Owner**
You must add this condition to any role assignments that include the following a
![Diagram of condition showing write access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-write.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
**Storage Blob Data Owner**
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to blobs with a blob index tag and a path.](./media/storage-auth-abac-examples/blob-index-tags-path-read.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to current blob version only.](./media/storage-auth-abac-examples/current-version-read-only.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
**Storage Blob Data Owner**
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to a specific blob version.](./media/storage-auth-abac-examples/version-id-specific-blob-read.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
You must add this condition to any role assignments that include the following a
![Diagram of condition showing delete access to old blob versions.](./media/storage-auth-abac-examples/version-id-blob-delete.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to current blob versions and any blob snapshots.](./media/storage-auth-abac-examples/version-id-snapshot-blob-read.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
**Storage Blob Data Owner**
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to storage accounts with hierarchical namespace enabled.](./media/storage-auth-abac-examples/hierarchical-namespace-accounts-read.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
**Storage Blob Data Owner**
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to blobs with encryption scope validScope1 or validScope2.](./media/storage-auth-abac-examples/encryption-scope-read-blobs.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read or write access to blobs in sampleaccount storage account with encryption scope ScopeCustomKey1.](./media/storage-auth-abac-examples/encryption-scope-account-name-read-wite-blobs.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
For more information, see [Allow read access to blobs based on tags and custom s
![Diagram of condition showing read or write access to blobs based on blob index tags and custom security attributes.](./media/storage-auth-abac-examples/principal-blob-index-tags-read-write.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
For more information, see [Allow read access to blobs based on tags and custom s
![Diagram of condition showing read access to blobs based on blob index tags and multi-value custom security attributes.](./media/storage-auth-abac-examples/principal-blob-index-tags-multi-value-read.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
# [Portal: Code editor](#tab/portal-code-editor)
-To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+To add the condition using the code editor, copy the condition code sample and paste it into the code editor.
``` (
There are two potential actions for reading existing blobs. To make this conditi
> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
There are five potential actions for read, write, add and delete access to exist
| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | |
| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
There are two potential actions for reading existing blobs. To make this conditi
> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
There are five potential actions for read, write and delete of existing blobs. T
| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | |
| `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner.<br/>Add if the storage accounts included in this condition have hierarchical namespace enabled or might be enabled in the future. |
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
There are two potential actions for reading existing blobs. To make this conditi
> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` | |
> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action` | Add if role definition includes this action, such as Storage Blob Data Owner. |
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
Last updated 09/01/2023
+ms.devlang: csharp
+# ms.devlang: csharp, python
storage Storage Blob Client Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md
Last updated 02/08/2023
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
Title: Delete and restore a blob container with TypeScript description: Learn how to delete and restore a blob container in your Azure Storage account using the JavaScript client library using TypeScript.- - Last updated 03/21/2023
+ms.devlang: typescript
storage Storage Create Geo Redundant Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-create-geo-redundant-storage.md
Last updated 09/02/2022
+ms.devlang: csharp
+# ms.devlang: csharp, javascript, python
#Customer intent: As a developer, I want to have my data be highly available, so that in the event of a disaster I may retrieve it.
storage Storage Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-analytics.md
If you have configured a data retention policy, you can reduce the spending by d
Every request made to an account's storage service is either billable or non-billable. Storage Analytics logs each individual request made to a service, including a status message that indicates how the request was handled. See [Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity](/archive/blogs/windowsazurestorage/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity).
-When looking at Storage Analytics data, you can use the tables in the [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) topic to determine what requests are billable. Then you can compare your log data to the status messages to see if you were charged for a particular request. You can also use the tables in the previous topic to investigate availability for a storage service or individual AP
+When looking at Storage Analytics data, you can use the tables in the [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) topic to determine what requests are billable. Then you can compare your log data to the status messages to see if you were charged for a particular request. You can also use the tables in the previous topic to investigate availability for a storage service or individual API operation.
## Next steps
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
description: An overview of Azure Elastic SAN Preview networking options, includ
Previously updated : 11/29/2023 Last updated : 01/16/2024
To learn how to define network rules, see [Managing virtual network rules](elast
## Client connections
-After you have enabled the desired endpoints and granted access in your network rules, you can connect to the appropriate Elastic SAN volumes using the iSCSI protocol. For more information on how to configure client connections, see [Configure access to Elastic SAN volumes from clients](elastic-san-networking.md#configure-client-connections)
+After you have enabled the desired endpoints and granted access in your network rules, you can connect to the appropriate Elastic SAN volumes using the iSCSI protocol. To learn how to configure client connections, see the articles on how to connect to [Linux](elastic-san-connect-linux.md), [Windows](elastic-san-connect-windows.md), or [Azure Kubernetes Service cluster](elastic-san-connect-aks.md).
+
+iSCSI sessions can periodically disconnect and reconnect over the course of the day. These disconnects and reconnects are part of regular maintenance or the result of network fluctuations. You shouldn't experience any performance degradation as a result of these disconnects and reconnects, and the connections should re-establish by themselves. If a connection doesn't re-establish itself, or you're experiencing performance degradation, raise a support ticket.
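To give a rough sense of what a client connection involves, here's a hedged PowerShell sketch for a Windows client; the portal address and volume IQN are hypothetical placeholders for the values shown on the volume's **Connect** pane, and the linked articles remain the authoritative steps.

```azurepowershell
# Hypothetical values; copy the real ones from the volume's Connect pane.
$portalAddress = "<target-portal-ip>"
$targetIqn     = "<volume-iqn>"

# Ensure the Windows iSCSI initiator service is running.
Start-Service -Name MSiSCSI

# Register the Elastic SAN portal, then connect to the volume, persisting
# the session across reboots with multipath enabled.
New-IscsiTargetPortal -TargetPortalAddress $portalAddress
Connect-IscsiTarget -NodeAddress $targetIqn -IsPersistent $true -IsMultipathEnabled $true
```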
> [!NOTE]
> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
storage Elastic San Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-snapshots.md
Previously updated : 11/15/2023 Last updated : 01/17/2024
You can use snapshots of managed disks to create new elastic SAN volumes using t
# [PowerShell](#tab/azure-powershell)
-The following command will create a 1 GiB
- ```azurepowershell New-AzElasticSanVolume -ElasticSanName $esname -ResourceGroupName $rgname -VolumeGroupName $vgname -Name $volname2 -CreationDataSourceId $snapshot.Id -SizeGiB 1 ```
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
Last updated 08/08/2023
+ms.devlang: csharp
+# ms.devlang: csharp, powershell, azurecli
synapse-analytics Data Explorer Ingest Data Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-data-streaming.md
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
synapse-analytics Migrate To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/migrate-to-synapse-analytics-guide.md
Title: "Azure Synapse Analytics: Migration guide"
description: Follow this guide to migrate your databases to an Azure Synapse Analytics dedicated SQL pool. - - Last updated 04/12/2023 # Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics
update-manager Periodic Assessment At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/periodic-assessment-at-scale.md
description: This article shows how to manage update settings for your Windows a
Previously updated : 09/18/2023 Last updated : 01/17/2024
This article describes how to enable Periodic Assessment for your machines at sc
## Enable Periodic Assessment for your Azure machines by using Azure Policy

1. Go to **Policy** in the Azure portal and select **Authoring** > **Definitions**.
-1. From the **Category** dropdown, select **Update Manager**. Select **[Preview]: Configure periodic checking for missing system updates on Azure virtual machines** for Azure machines.
+1. From the **Category** dropdown, select **Update Manager**. Select **Configure periodic checking for missing system updates on Azure virtual machines** for Azure machines.
1. When **Policy definition** opens, select **Assign**.
1. On the **Basics** tab, select your subscription as your scope. You can also specify a resource group within your subscription as the scope. Select **Next**.
1. On the **Parameters** tab, clear **Only show parameters that need input or review** so that you can see the values of parameters. In **Assessment** mode, select **AutomaticByPlatform** > **Operating system** > **Next**. You need to create separate policies for Windows and Linux.
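If you prefer to script the assignment, the following Azure PowerShell sketch shows the rough equivalent of these portal steps. The policy parameter names (`assessmentMode`, `osType`) are assumptions based on the portal experience, and the scope is hypothetical.

```azurepowershell
$scope = "/subscriptions/<subscription-id>"

# Look up the built-in definition by display name (the property layout can
# vary slightly across Az.Resources versions).
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq 'Configure periodic checking for missing system updates on Azure virtual machines' }

# Assign the policy; a system-assigned identity lets it remediate machines.
New-AzPolicyAssignment -Name 'periodic-assessment-windows' `
    -PolicyDefinition $definition `
    -Scope $scope `
    -PolicyParameterObject @{ assessmentMode = 'AutomaticByPlatform'; osType = 'Windows' } `
    -IdentityType 'SystemAssigned' `
    -Location 'eastus'
```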
You can monitor the compliance of resources under **Compliance** and remediation
## Enable Periodic Assessment for your Azure Arc-enabled machines by using Azure Policy

1. Go to **Policy** in the Azure portal and select **Authoring** > **Definitions**.
-1. From the **Category** dropdown, select **Update Manager**. Select **[Preview]: Configure periodic checking for missing system updates on Azure Arc-enabled servers** for Azure Arc-enabled machines.
+1. From the **Category** dropdown, select **Update Manager**. Select **Configure periodic checking for missing system updates on Azure Arc-enabled servers** for Azure Arc-enabled machines.
1. When **Policy definition** opens, select **Assign**.
1. On the **Basics** tab, select your subscription as your scope. You can also specify a resource group within your subscription as the scope. Select **Next**.
1. On the **Parameters** tab, clear **Only show parameters that need input or review** so that you can see the values of parameters. In **Assessment** mode, select **AutomaticByPlatform** > **Operating system** > **Next**. You need to create separate policies for Windows and Linux.
You can monitor compliance of resources under **Compliance** and remediation sta
This procedure applies to both Azure and Azure Arc-enabled machines.

1. Go to **Policy** in the Azure portal and select **Authoring** > **Definitions**.
-1. From the **Category** dropdown, select **Update Manager**. Select **[Preview]: Machines should be configured to periodically check for missing system updates**.
+1. From the **Category** dropdown, select **Update Manager**. Select **Machines should be configured to periodically check for missing system updates**.
1. When **Policy definition** opens, select **Assign**.
1. On the **Basics** tab, select your subscription as your scope. You can also specify a resource group within your subscription as the scope. Select **Next**.
1. On the **Parameters** and **Remediation** tabs, select **Next**.
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
-# Create an autoscale scaling plan for Azure Virtual Desktop
+# Create and assign an autoscale scaling plan for Azure Virtual Desktop
Autoscale lets you scale your session host virtual machines (VMs) in a host pool up or down according to a schedule to optimize deployment costs.
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 managed disk
description: Learn how to deploy a Premium SSD v2 and about its regional availability. Previously updated : 11/15/2023 Last updated : 01/16/2024 -+ # Deploy a Premium SSD v2
virtual-machines Disks Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-whats-new.md
+
+ Title: What's new in Azure Disk Storage
+description: Learn about new features and enhancements in Azure Disk Storage.
++ Last updated : 01/17/2024+++++
+# What's new for Azure Disk Storage
+
+Azure Disk Storage regularly receives updates for new features and enhancements. This article provides information about what's new in Azure Disk Storage.
+
+## Update summary
+
+- [What's new in 2024](#whats-new-in-2024)
+ - [Quarter 1 (January, February, March)](#quarter-1-january-february-march)
+ - [Generally available: Trusted launch support for Ultra Disks and Premium SSD v2](#generally-available-trusted-launch-support-for-ultra-disks-and-premium-ssd-v2)
+ - [Expanded regional availability for Ultra Disks](#expanded-regional-availability-for-ultra-disks)
+ - [Expanded regional availability for zone-redundant storage disks](#expanded-regional-availability-for-zone-redundant-storage-disks)
+- [What's new in 2023](#whats-new-in-2023)
+ - [Quarter 4 (October, November, December)](#quarter-4-october-november-december)
+ - [Encryption at host GA for Premium SSD v2 and Ultra Disks](#encryption-at-host-ga-for-premium-ssd-v2-and-ultra-disks)
+ - [New latency metrics (preview)](#new-latency-metrics-preview)
+ - [Expanded regional availability for Premium SSD v2](#expanded-regional-availability-for-premium-ssd-v2)
+ - [Expanded regional availability for ZRS disks](#expanded-regional-availability-for-zrs-disks)
+ - [Quarter 3 (July, August, September)](#quarter-3-july-august-september)
+ - [Expanded regional availability for ZRS disks (1)](#expanded-regional-availability-for-zrs-disks-1)
+ - [Expanded regional availability for Premium SSD v2](#expanded-regional-availability-for-premium-ssd-v2-1)
+ - [General Availability - Incremental Snapshots for Premium SSD v2 and Ultra Disks](#general-availabilityincremental-snapshots-for-premium-ssd-v2-and-ultra-disks)
+ - [Quarter 2 (April, May, June)](#quarter-2-april-may-june)
+ - [Expanded regional availability for Premium SSD v2 (2)](#expanded-regional-availability-for-premium-ssd-v2-2)
+ - [Expanded regional availability for ZRS disks (2)](#expanded-regional-availability-for-zrs-disks-2)
+ - [Azure Backup support (preview) for Premium SSD v2](#azure-backup-support-preview-for-premium-ssd-v2)
+ - [Quarter 1 (January, February, March)](#quarter-1-january-february-march-1)
+ - [Expanded regional availability for Premium SSD v2 (3)](#expanded-regional-availability-for-premium-ssd-v2-3)
+ - [Preview - Performance plus](#previewperformance-plus)
+ - [Expanded regional availability for Ultra Disks](#expanded-regional-availability-for-ultra-disks-1)
+ - [More transactions at no extra cost - Standard SSDs](#more-transactions-at-no-extra-coststandard-ssds)
+
+## What's new in 2024
+
+### Quarter 1 (January, February, March)
+
+#### Generally available: Trusted launch support for Ultra Disks and Premium SSD v2
+
+Trusted launch VMs added support for Ultra Disks and Premium SSD v2, allowing you to combine the foundational compute security of Trusted Launch with the high throughput, high IOPS, and low latency of Ultra Disks and Premium SSD v2. For more information, see [Trusted launch for Azure virtual machines](trusted-launch.md) or the [Azure Update](https://azure.microsoft.com/updates/premium-ssd-v2-and-ultra-disks-support-with-trusted-launch-vm/).
+
+#### Expanded regional availability for Ultra Disks
+
+Ultra Disks were made available in the UK West and Poland Central regions.
+
+#### Expanded regional availability for zone-redundant storage disks
+
+Zone-redundant storage (ZRS) disks were made available in West US 3 and Germany Central regions.
+
+## What's new in 2023
+
+### Quarter 4 (October, November, December)
+
+#### Encryption at host GA for Premium SSD v2 and Ultra Disks
+
+Encryption at host was previously only available for Standard HDDs, Standard SSDs, and Premium SSDs. Encryption at host is now also available as a GA offering for Premium SSD v2 and Ultra Disks. For more information on encryption at host, see [Encryption at host - End-to-end encryption for your VM data](disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
+
+There are some additional restrictions for Premium SSD v2 and Ultra Disks when encryption at host is enabled. For more information, see [Restrictions](disk-encryption.md#restrictions-1).
+
+#### New latency metrics (preview)
+
+Metrics dedicated to monitoring latency are now available as a preview feature. To learn more, see either the [metrics article](disks-metrics.md#disk-io-throughput-queue-depth-and-latency-metrics) or the [Azure Update](https://azure.microsoft.com/updates/latency-metrics-for-azure-disks-and-performance-metrics-for-temporary-disks-on-azure-virtual-machines/).
+
+#### Expanded regional availability for Premium SSD v2
+
+Premium SSD v2 disks were made available in Poland Central, China North 3, and US Gov Virginia. For more information, see the [Azure Update](https://azure.microsoft.com/updates/generally-available-azure-premium-ssd-v2-disk-storage-is-now-available-in-more-regions-pcu/).
++
+#### Expanded regional availability for ZRS disks
+
+ZRS disks were made available in the Norway East and UAE North regions. For more information, see the [Azure Update](https://azure.microsoft.com/updates/generally-available-zone-redundant-storage-for-azure-disks-is-now-available-in-norway-east-uae-north-regions/).
+
+### Quarter 3 (July, August, September)
+
+#### Expanded regional availability for ZRS disks
+
+In quarter 3, ZRS disks were made available in the China North 3, East Asia, India Central, Switzerland North, South Africa North, and Sweden Central regions.
+
+#### Expanded regional availability for Premium SSD v2
+
+In quarter 3, Premium SSD v2 disks were made available in the Australia East, Brazil South, Canada Central, Central India, Central US, East Asia, France Central, Japan East, Korea Central, Norway East, South Africa North, Sweden Central, Switzerland North, and UAE North regions.
+
+#### General Availability - Incremental Snapshots for Premium SSD v2 and Ultra Disks
+
+Incremental snapshots for Premium SSD v2 and Ultra Disks were made available as a general availability (GA) feature. For more information, see either the [documentation](disks-incremental-snapshots.md#incremental-snapshots-of-premium-ssd-v2-and-ultra-disks) or the [Azure Update](https://azure.microsoft.com/updates/general-availability-incremental-snapshots-for-premium-ssd-v2-disk-and-ultra-disk-storage-3/).
+
+### Quarter 2 (April, May, June)
+
+#### Expanded regional availability for Premium SSD v2
+
+In quarter 2, Premium SSD v2 disks were made available in the Southeast Asia, UK South, South Central US, and West US 3 regions.
+
+#### Expanded regional availability for ZRS disks
+
+In quarter 2, ZRS disks were made available in the Australia East, Brazil South, Japan East, Korea Central, Qatar Central, UK South, East US, East US 2, South Central US, and Southeast Asia regions.
+
+#### Azure Backup support (preview) for Premium SSD v2
+
+Azure Backup added preview support for Azure virtual machines using Premium SSD v2 disks in the East US and West Europe regions. For more information, see the [Azure Update](https://azure.microsoft.com/updates/premium-ssd-v2-backup-support/).
+
+### Quarter 1 (January, February, March)
+
+#### Expanded regional availability for Premium SSD v2
+
+In quarter 1, Premium SSD v2 disks were made available in the East US 2, North Europe, and West US 2 regions.
+
+#### Preview - Performance plus
+
+Azure Disk Storage added a new preview feature, performance plus. Performance plus enhances the IOPS and throughput performance for Premium SSDs, Standard SSDs, and Standard HDDs that are 513 GiB and larger. For details, see [Increase IOPS and throughput limits for Azure Premium SSDs and Standard SSD/HDDs](disks-enable-performance.md).
+
+#### Expanded regional availability for Ultra Disks
+
+In quarter 1, Ultra Disks were made available in the Brazil Southeast, China North 3, Korea South, South Africa North, Switzerland North, and UAE North regions.
+
+#### More transactions at no extra cost - Standard SSDs
+
+In quarter 1, we added an hourly limit to the number of transactions that incur a billable cost. Any transactions beyond that limit don't incur a cost. For information, see the [blog post](https://aka.ms/billedcapsblog) or [Standard SSD transactions](disks-types.md#standard-ssd-transactions).
+
+## Next steps
+
+- [Azure managed disk types](disks-types.md)
+- [Introduction to Azure managed disks](managed-disks-overview.md)
virtual-network Virtual Network Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-encryption-overview.md
Title: What is Azure Virtual Network encryption? (Preview)
-description: Overview of Azure Virtual Network encryption
+ Title: What is Azure Virtual Network encryption?
+
+description: Learn about Azure Virtual Network encryption. Virtual network encryption allows you to seamlessly encrypt and decrypt traffic between Azure Virtual Machines.
Previously updated : 07/07/2023 Last updated : 01/17/2024
+# customer intent: As a network administrator, I want to learn about encryption in Azure Virtual Network so that I can secure my network traffic.
-# What is Azure Virtual Network encryption? (Preview)
+# What is Azure Virtual Network encryption?
Azure Virtual Network encryption is a feature of Azure Virtual Networks. Virtual network encryption allows you to seamlessly encrypt and decrypt traffic between Azure Virtual Machines.
Whenever Azure customer traffic moves between datacenters, Microsoft applies a d
For more information about encryption in Azure, see [Azure encryption overview](/azure/security/fundamentals/encryption-overview).

> [!IMPORTANT]
-> Azure Virtual Network encryption is currently in preview.
+> Azure Virtual Network encryption is currently GA in the following regions: **UK South**, **Switzerland North**, and **West Central US**. Azure Virtual Network encryption is in public preview in the remaining regions listed later in the article.
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Virtual network encryption has the following requirements:
| VM Series | VM SKU |
| -- | -- |
| D-series | **[Dv4 and Dsv4-series](/azure/virtual-machines/dv4-dsv4-series)**, **[Ddv4 and Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series)**, **[Dav4 and Dasv4-series](/azure/virtual-machines/dav4-dasv4-series)** |
+ | D-series V5 | **[Dv5 and Dsv5-series](/azure/virtual-machines/dv5-dsv5-series)**, **[Ddv5 and Ddsv5-series](/azure/virtual-machines/ddv5-ddsv5-series)** |
| E-series | **[Ev4 and Esv4-series](/azure/virtual-machines/ev4-esv4-series)**, **[Edv4 and Edsv4-series](/azure/virtual-machines/edv4-edsv4-series)**, **[Eav4 and Easv4-series](/azure/virtual-machines/eav4-easv4-series)** |
- | M-series | **[Mv2-series](/azure/virtual-machines/mv2-series)** |
+ | E-series V5 | **[Ev5 and Esv5-series](/azure/virtual-machines/ev5-esv5-series)**, **[Edv5 and Edsv5-series](/azure/virtual-machines/edv5-edsv5-series)** |
+ | LSv3 | **[LSv3-series](/azure/virtual-machines/lsv3-series)** |
+ | M-series | **[Mv2-series](/azure/virtual-machines/mv2-series)**, **[Msv3 and Mdsv3 Medium Memory Series](/azure/virtual-machines/msv3-mdsv3-medium-series)** |
+
- Accelerated Networking must be enabled on the network interface of the virtual machine. For more information about Accelerated Networking, see [What is Accelerated Networking?](/azure/virtual-network/accelerated-networking-overview).
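As a quick check, the following PowerShell sketch (resource names are hypothetical) verifies whether accelerated networking is enabled on a network interface and turns it on if not.

```azurepowershell
$nic = Get-AzNetworkInterface -Name "<nic-name>" -ResourceGroupName "<resource-group>"

if (-not $nic.EnableAcceleratedNetworking) {
    # The VM size must support accelerated networking; apply the change
    # during a maintenance window for a running VM.
    $nic.EnableAcceleratedNetworking = $true
    $nic | Set-AzNetworkInterface
}
```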
Virtual network encryption has the following requirements:
- Global Peering is supported in regions where virtual network encryption is supported. -- Traffic to unsupported Virtual Machines is unencrypted. Use Virtual Network Flow Logs to confirm flow encryption between virtual machines. For more information, see [VNet flow logs](../network-watcher/vnet-flow-logs-overview.md).
+- Traffic to unsupported Virtual Machines is unencrypted. Use Virtual Network Flow Logs to confirm flow encryption between virtual machines. For more information, see [Virtual network flow logs](../network-watcher/vnet-flow-logs-overview.md).
+
+- The start/stop of existing virtual machines is required after enabling encryption in a virtual network.
-- The start/stop of existing virtual machines may be required after enabling encryption in a virtual network.

## Availability
-Azure Virtual Network encryption is available in the following regions during the preview:
+Azure Virtual Network encryption is generally available (GA) in the following regions:
-- East US 2 EUAP
+- UK South
-- Central US EUAP
+- Switzerland North
- West Central US
+Azure Virtual Network encryption is available in the following regions during the public preview:
+
+- East US 2 EUAP
+
+- Central US EUAP
+ - East US - East US 2
Azure Virtual Network encryption is available in the following regions during th
- West US 2
-To sign up to obtain access to the public preview, see [Virtual Network Encryption - Public Preview Sign Up](https://aka.ms/vnet-encryption-sign-up).
+To sign up to obtain access to the public preview, see [Virtual Network Encryption - Public Preview Sign Up](https://aka.ms/vnet-encryption-sign-up).
## Limitations
vpn-gateway About Gateway Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-gateway-skus.md
When you configure a virtual network gateway SKU, select the SKU that satisfies
(*) You can configure "PolicyBasedTrafficSelectors" to connect a route-based VPN gateway to multiple on-premises policy-based firewall devices. Refer to [Connect VPN gateways to multiple on-premises policy-based VPN devices using PowerShell](vpn-gateway-connect-multiple-policybased-rm-ps.md) for details.
-(\*\*) The Basic SKU is considered a legacy SKU. The Basic SKU has certain feature limitations. Verify that the feature that you need is supported before you use the Basic SKU. The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI. Additionally, the Basic SKU doesn't support RADIUS authentication.
+(\*\*) The Basic SKU is considered a legacy SKU. The Basic SKU has certain feature and performance limitations and should not be used for production purposes. Verify that the feature that you need is supported before you use the Basic SKU. The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI. Additionally, the Basic SKU doesn't support RADIUS authentication.
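Because the Basic SKU can't be selected in the portal, here's a minimal Azure PowerShell sketch of deploying one; all names are hypothetical, and the sketch assumes the Basic gateway pairs with a Basic, dynamically allocated public IP address.

```azurepowershell
$vnet   = Get-AzVirtualNetwork -Name "<vnet-name>" -ResourceGroupName "<resource-group>"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

# Assumption: Basic gateways use a Basic, dynamically assigned public IP.
$pip = New-AzPublicIpAddress -Name "<pip-name>" -ResourceGroupName "<resource-group>" `
    -Location "eastus" -Sku Basic -AllocationMethod Dynamic

$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" `
    -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

New-AzVirtualNetworkGateway -Name "<gateway-name>" -ResourceGroupName "<resource-group>" `
    -Location "eastus" -IpConfigurations $ipconf `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku Basic
```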
## <a name="workloads"></a>Gateway SKUs - Production vs. Dev-Test workloads
Due to the differences in SLAs and feature sets, we recommend the following SKUs
| **Dev-test or proof of concept** | Basic (**) | | | |
-(\*\*) The Basic SKU is considered a legacy SKU. The Basic SKU has certain feature limitations. Verify that the feature that you need is supported before you use the Basic SKU. The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI. Additionally, the Basic SKU doesn't support RADIUS authentication.
+(\*\*) The Basic SKU is considered a legacy SKU. The Basic SKU has certain feature and performance limitations and should not be used for production purposes. Verify that the feature that you need is supported before you use the Basic SKU. The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI. Additionally, the Basic SKU doesn't support RADIUS authentication.
If you're using the old SKUs (legacy), the production SKU recommendations are Standard and HighPerformance. For information and instructions for old SKUs, see [Gateway SKUs (legacy)](vpn-gateway-about-skus-legacy.md).
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
Previously updated : 05/04/2023 Last updated : 01/17/2024 # Configure point-to-site VPN clients: certificate authentication - Windows
-This article helps you connect to your Azure virtual network (VNet) using VPN Gateway point-to-site (P2S) and **Certificate authentication**. There are multiple sets of steps in this article, depending on the tunnel type you selected for your P2S configuration, the operating system, and the VPN client that is used to connect.
+This article walks you through the necessary steps to configure VPN clients for point-to-site (P2S) virtual network connections that use certificate authentication. These steps continue on from previous articles where the [VPN Gateway point-to-site](vpn-gateway-howto-point-to-site-resource-manager-portal.md) server settings are configured.
-When you connect to an Azure VNet using a P2S IKEv2/SSTP tunnel and certificate authentication, you can use the VPN client that is natively installed on the Windows operating system from which you're connecting. If you use the tunnel type OpenVPN, you also have the option of using the Azure VPN Client or the OpenVPN client software. This article walks you through configuring the VPN clients.
+There are multiple sets of steps in this article, depending on the tunnel type you selected for your P2S configuration, and the VPN client that is used to connect.
## Before you begin
-Before beginning, verify that you are on the correct article. The following table shows the configuration articles available for Azure VPN Gateway P2S VPN clients. Steps differ, depending on the authentication type, tunnel type, and the client OS.
+This article assumes that you have already created and configured your VPN gateway for P2S certificate authentication. See [Configure server settings for P2S VPN Gateway connections - certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md) for steps.
+
+Before beginning the workflow, verify that you're on the correct article. The following table shows the configuration articles available for Azure VPN Gateway P2S VPN clients. Steps differ, depending on the authentication type, tunnel type, and the client OS.
[!INCLUDE [All client articles](../../includes/vpn-gateway-vpn-client-install-articles.md)]

>[!IMPORTANT]
>[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
+## Workflow
+
+In this article, we start with generating VPN client configuration files and client certificates:
+
+1. [Generate files to configure the VPN client](#1-generate-vpn-client-configuration-files)
+1. [Generate certificates for the VPN client](#2-generate-client-certificates)
+
+After the steps in these sections are completed, continue on to [3. Configure the VPN client](#3-configure-the-vpn-client). The steps you use to configure your VPN client depend on the tunnel type for your P2S VPN gateway, and the VPN client on the client computer.
+
+* **IKEv2 and SSTP - native VPN client steps** - If your P2S VPN gateway is configured to use IKEv2/SSTP and certificate authentication, you can connect to your VNet using the native VPN client that's part of your Windows operating system. This configuration doesn't require additional client software. See [IKEv2 and SSTP - native VPN client](#ike).
+* **OpenVPN** - If your P2S VPN gateway is configured to use an OpenVPN tunnel and certificate authentication, you have the option of using either the [Azure VPN Client](#azurevpn) or the [OpenVPN client](#openvpn).
## 1. Generate VPN client configuration files

All of the necessary configuration settings for the VPN clients are contained in a VPN client profile configuration zip file. You can generate client profile configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file.
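If you generate the files with PowerShell, a minimal sketch looks like the following; the gateway and resource group names are hypothetical.

```azurepowershell
# Generate the VPN client profile package and download the zip it points to.
$clientProfile = New-AzVpnClientConfiguration -ResourceGroupName "<resource-group>" `
    -Name "<gateway-name>" -AuthenticationMethod "EapTls"

# The cmdlet returns a short-lived SAS URL for the generated package.
Invoke-WebRequest -Uri $clientProfile.VpnProfileSASUrl -OutFile "vpnclientconfiguration.zip"
```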
The VPN client profile configuration files that you generate are specific to the
For certificate authentication, a client certificate must be installed on each client computer. The client certificate you want to use must be exported with the private key, and must contain all certificates in the certification path. Additionally, for some configurations, you'll also need to install root certificate information.
-In many cases, you can install the client certificate directly on the client computer by double-clicking. However, for certain OpenVPN client configurations, you may need to extract information from the client certificate in order to complete the configuration.
+In many cases, you can install the client certificate directly on the client computer by double-clicking. However, for certain OpenVPN client configurations, you might need to extract information from the client certificate in order to complete the configuration.
* For information about working with certificates, see [Point-to site: Generate certificates](vpn-gateway-certificates-point-to-site.md).
* To view an installed client certificate, open **Manage User Certificates**. The client certificate is installed in **Current User\Personal\Certificates**.
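For a test environment, certificates can be generated with `New-SelfSignedCertificate`, as in this sketch (subject names are hypothetical); see the linked certificates article for the full procedure.

```azurepowershell
# Create a self-signed root certificate in the current user's store.
$rootCert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
    -Subject "CN=P2SRootCert" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeyUsageProperty Sign -KeyUsage CertSign

# Issue a client certificate from that root (client authentication EKU).
New-SelfSignedCertificate -Type Custom -DnsName "P2SChildCert" -KeySpec Signature `
    -Subject "CN=P2SChildCert" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -Signer $rootCert `
    -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
```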
Next, configure the VPN client. Select from the following instructions:
* [OpenVPN - OpenVPN client steps](#openvpn)
* [OpenVPN - Azure VPN Client steps](#azurevpn)
-## <a name="ike"></a>IKEv2 and SSTP: native VPN client steps
+## <a name="ike"></a>Native VPN client steps - IKEv2/SSTP
-This section helps you configure the native VPN client that's part of your Windows operating system to connect to your VNet. This configuration doesn't require additional client software.
+If your P2S VPN gateway is configured to use IKEv2/SSTP and certificate authentication, you can connect to your VNet using the native VPN client that's part of your Windows operating system. This configuration doesn't require additional client software.
### <a name="view-ike"></a>View configuration files
You can use the same VPN client configuration package on each Windows client com
1. Install the client certificate. Typically, you can do this by double-clicking the certificate file and providing a password if required. For more information, see [Install client certificates](point-to-site-how-to-vpn-client-install-azure-cert.md).
-1. Connect to your VPN. Go to the **VPN** settings and locate the VPN connection that you created. It's the same name as your virtual network. Select **Connect**. A pop-up message may appear. Select **Continue** to use elevated privileges.
+1. Connect to your VPN. Go to the **VPN** settings and locate the VPN connection that you created. It's the same name as your virtual network. Select **Connect**. A pop-up message might appear. Select **Continue** to use elevated privileges.
1. On the **Connection status** page, select **Connect** to start the connection. If you see a **Select Certificate** screen, verify that the client certificate showing is the one that you want to use to connect. If it isn't, use the drop-down arrow to select the correct certificate, and then select **OK**.
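If you script client connections, the native entry created by the configuration package can usually be inspected and dialed from PowerShell as well; the connection name is hypothetical here (it matches your virtual network name).

```azurepowershell
# Show the VPN connection that the configuration package created.
Get-VpnConnection -Name "<connection-name>"

# Dial the connection; append /disconnect to hang up.
rasdial "<connection-name>"
```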
-## <a name="azurevpn"></a>OpenVPN: Azure VPN Client steps
+## <a name="azurevpn"></a>Azure VPN Client steps - OpenVPN
+
+If your P2S VPN gateway is configured to use an OpenVPN tunnel type and certificate authentication, you can connect using the Azure VPN Client.
-This section applies to certificate authentication configurations that use the OpenVPN tunnel type. The following steps help you download, install, and configure the Azure VPN Client to connect to your VNet. Note that these steps apply to certificate authentication. If you're using OpenVPN with Microsoft Entra authentication, see the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
+The following steps help you download, install, and configure the Azure VPN Client to connect to your VNet. Note that these steps apply to certificate authentication. If you're using OpenVPN with Microsoft Entra authentication, see the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
To connect, each client computer requires the following items:
When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azu
If you don't see the file, verify the following items:

* Verify that your VPN gateway is configured to use the OpenVPN tunnel type.
-* If you're using Microsoft Entra authentication, you may not have an AzureVPN folder. See the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
+* If you're using Microsoft Entra authentication, you might not have an AzureVPN folder. See the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
### Download the Azure VPN Client
The following sections discuss additional optional configuration settings that a
You can configure the Azure VPN Client with optional configuration settings such as additional DNS servers, custom DNS, forced tunneling, custom routes, and other additional settings. For a description of the available settings and configuration steps, see [Azure VPN Client optional settings](azure-vpn-client-optional-configurations.md).
-## <a name="openvpn"></a>OpenVPN: OpenVPN Client steps
+## <a name="openvpn"></a>OpenVPN Client steps - OpenVPN
-This section applies to certificate authentication configurations that are configured to use the OpenVPN tunnel type. The following steps help you configure the **OpenVPN &reg; Protocol** client and connect to your VNet.
+If your P2S VPN gateway is configured to use an OpenVPN tunnel type and certificate authentication, you can connect using an OpenVPN client. The following steps help you configure the **OpenVPN &reg; Protocol** client and connect to your VNet.
### <a name="view-openvpn"></a>View configuration files When you open the VPN client configuration package zip file, you should see an OpenVPN folder. If you don't see the folder, verify the following items: * Verify that your VPN gateway is configured to use the OpenVPN tunnel type.
-* If you're using Microsoft Entra authentication, you may not have an OpenVPN folder. See the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
+* If you're using Microsoft Entra authentication, you might not have an OpenVPN folder. See the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
[!INCLUDE [Configuration steps](../../includes/vpn-gateway-vwan-config-openvpn-windows.md)]
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Previously updated : 11/20/2023 Last updated : 01/17/2024
Create a VNet using the following values:
After you create your VNet, you can optionally configure Azure DDoS Protection. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. For more information about Azure DDoS Protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md)
+## Create a gateway subnet
+
+The virtual network gateway requires a specific subnet named **GatewaySubnet**. The gateway subnet is part of the IP address range for your virtual network and contains the IP addresses that the virtual network gateway resources and services use. Specify a gateway subnet that is /27 or larger.
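A minimal PowerShell sketch of adding the subnet to an existing VNet follows; the VNet name and address prefix are hypothetical.

```azurepowershell
$vnet = Get-AzVirtualNetwork -Name "<vnet-name>" -ResourceGroupName "<resource-group>"

# GatewaySubnet is a reserved name; specify a prefix that is /27 or larger.
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" `
    -AddressPrefix "10.1.255.0/27" -VirtualNetwork $vnet

$vnet | Set-AzVirtualNetwork
```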
++ ## <a name="VNetGateway"></a>Create a VPN gateway

In this step, you create the virtual network gateway (VPN gateway) for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 11/21/2023 Last updated : 01/17/2024
In this section, you'll create a virtual network (VNet) using the following valu
After you create your VNet, you can optionally configure Azure DDoS Protection. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. For more information about Azure DDoS Protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md)
-## <a name="VNetGateway"></a>Create a VPN gateway
+## Create a gateway subnet
-In this step, you create the virtual network gateway for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
-### About the gateway subnet
+## <a name="VNetGateway"></a>Create a VPN gateway
+
+In this step, you create the virtual network gateway for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
### Create the gateway